Archetype C

$\square$ Summary: System with three equations, four variables. Consistent. Null space of coefficient matrix has dimension 1.

$\square$ A system of linear equations (Definition SLE).
\begin{align*} 2x_1 - 3x_2 + x_3 - 6x_4 &= -7 \\ 4x_1 + x_2 + 2x_3 + 9x_4 &= -7 \\ 3x_1 + x_2 + x_3 + 8x_4 &= -8 \end{align*}

$\square$ Some solutions to the system of linear equations, not necessarily exhaustive (Definition SSLE):
$x_1 = -7,\quad x_2 = -2,\quad x_3 = 7,\quad x_4 = 1$
$x_1 = -1,\quad x_2 = 7,\quad x_3 = 4,\quad x_4 = -2$

$\square$ Augmented matrix of the linear system of equations (Definition AM):
\begin{bmatrix} 2 & -3 & 1 & -6 & -7\\ 4 & 1 & 2 & 9 & -7\\ 3 & 1 & 1 & 8 & -8 \end{bmatrix}

$\square$ Matrix in reduced row-echelon form, row-equivalent to the augmented matrix (Definition RREF).
\begin{bmatrix} \leading{1} & 0 & 0 & 2 & -5\\ 0 & \leading{1} & 0 & 3 & 1 \\ 0 & 0 & \leading{1} & -1 & 6 \end{bmatrix}

$\square$ Analysis of the augmented matrix (Definition RREF).
\begin{align*}r&=3&D&=\set{1,\,2,\,3}&F&=\set{4,\,5}\end{align*}

$\square$ Vector form of the solution set to the system of equations (Theorem VFSLS). Notice the relationship between the free variables and the set $F$ above. Also, notice the pattern of 0's and 1's in the entries of the vectors corresponding to elements of the set $F$ in the larger examples.
$\colvector{x_1\\x_2\\x_3\\x_4}= \colvector{-5\\1\\6\\0} + x_4\colvector{-2\\-3\\1\\1}$

$\square$ Given a system of equations we can always build a new, related, homogeneous system (Definition HS) by converting the constant terms to zeros and retaining the coefficients of the variables. Properties of this new system will have precise relationships with various properties of the original system.
\begin{align*} 2x_1 - 3x_2 + x_3 - 6x_4 &= 0 \\ 4x_1 + x_2 + 2x_3 + 9x_4 &= 0 \\ 3x_1 + x_2 + x_3 + 8x_4 &= 0 \end{align*}

$\square$ Some solutions to the associated homogeneous system of linear equations, not necessarily exhaustive (Definition SSLE). Review Theorem HSC as you consider these solutions.
$x_1 = 0,\quad x_2 = 0,\quad x_3 = 0,\quad x_4=0$
$x_1 = -2,\quad x_2 = -3,\quad x_3 = 1,\quad x_4=1$

$\square$ Form the augmented matrix of the homogeneous linear system, and use row operations to convert it to reduced row-echelon form. Notice how the entries of the final column remain zeros.
\begin{bmatrix} \leading{1} & 0 & 0 & 2 & 0\\ 0 & \leading{1} & 0 & 3 & 0 \\ 0 & 0 & \leading{1} & -1 & 0 \end{bmatrix}

$\square$ Analysis of the augmented matrix for the homogeneous system (Definition RREF). Compare this with the same analysis of the original system, especially in the case where the original system is inconsistent (Theorem RCLS).
\begin{align*}r&=3&D&=\set{1,\,2,\,3}&F&=\set{4,\,5}\end{align*}

$\square$ For any system of equations we can isolate the coefficient matrix, which will be identical to the coefficient matrix of the associated homogeneous system. For the remainder of the discussion of this system of equations, we will analyze just the coefficient matrix.
\begin{bmatrix} 2 & -3 & 1 & -6 \\ 4 & 1 & 2 & 9 \\ 3 & 1 & 1 & 8 \end{bmatrix}

$\square$ Row-equivalent matrix in reduced row-echelon form (Definition RREF).
\begin{bmatrix} \leading{1} & 0 & 0 & 2 \\ 0 & \leading{1} & 0 & 3 \\ 0 & 0 & \leading{1} & -1 \end{bmatrix}

$\square$ Analysis of the reduced row-echelon form of the matrix (Definition RREF).
For archetypes that begin as systems of equations, compare this analysis with the analysis for the coefficient matrices of the original system and of the associated homogeneous system.
\begin{align*}r&=3&D&=\set{1,\,2,\,3}&F&=\set{4}\end{align*}

$\square$ Is the matrix nonsingular or singular? (Consider Theorem NMRRI. At the same time, examine the sizes of the sets $D$ and $F$ for the analysis of the reduced row-echelon version of the matrix.) Since the matrix is not square, the question does not apply.

$\square$ The null space of the matrix. The set of vectors used in the span construction is a linearly independent set of column vectors that spans the null space of the matrix (Theorem SSNS, Theorem BNS). Solve a homogeneous system with this matrix as the coefficient matrix and write the solutions in vector form (Theorem VFSLS) to see these vectors arise. Compare the entries of these vectors for indices in $D$ versus entries for indices in $F$.
\begin{align*}\spn{\set{\colvector{-2\\-3\\1\\1}} }\end{align*}

$\square$ The column space of the matrix, expressed as the span of a set of linearly independent vectors that are also columns of the matrix. These columns have indices that form the set $D$ above (Theorem BCS).
\begin{align*}\spn{\set{\colvector{2\\4\\3},\,\colvector{-3\\1\\1},\,\colvector{1\\2\\1}} }\end{align*}

$\square$ The column space of the matrix, as it arises from the extended echelon form of the matrix. The matrix $L$ is computed as described in Definition EEF. This is followed by the column space described as the span of a set of linearly independent vectors that equals the null space of $L$, computed according to Theorem FS and Theorem BNS. When $r=m$, the matrix $L$ has no rows and the column space is all of $\complex{m}$.
\begin{align*}L&=\begin{bmatrix}\end{bmatrix}\end{align*}
\begin{align*}\spn{\set{\colvector{1\\0\\0},\,\colvector{0\\1\\0},\,\colvector{0\\0\\1}} }\end{align*}

$\square$ The column space of the matrix, expressed as the span of a set of linearly independent vectors. These vectors are computed by bringing the transpose of the matrix into reduced row-echelon form, discarding the zero rows, and writing the remaining nonzero rows as column vectors. By Theorem CSRST and Theorem BRS, and in the style of Example CSROI, this yields a linearly independent set of vectors that spans the column space.
\begin{align*}\spn{\set{\colvector{1\\0\\0},\,\colvector{0\\1\\0},\,\colvector{0\\0\\1}} }\end{align*}

$\square$ Row space of the matrix, expressed as the span of a set of linearly independent vectors, obtained from the nonzero rows of the row-equivalent matrix in reduced row-echelon form (Theorem BRS).
\begin{align*}\spn{\set{\colvector{1\\0\\0\\2},\,\colvector{0\\1\\0\\3 },\,\colvector{0\\0\\1\\ -1}} }\end{align*}

$\square$ Subspace dimensions associated with the matrix (Definition ROM, Definition NOM). Verify Theorem RPNC.
\begin{align*}\text{Rank: }3&&\text{Nullity: }1&&\text{Matrix columns: }4&\end{align*}

$\square$ Determinant of the matrix. The matrix is nonsingular if and only if the determinant is nonzero (Theorem SMZD). The determinant is not defined for matrices that are not square.
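The computations above can be reproduced with a computer algebra system. The following sketch (not part of the original archetype) uses Python's sympy to recover the reduced row-echelon form, the pivot set $D$, the null space, and the rank/nullity check of Theorem RPNC:

```python
from sympy import Matrix

# Coefficient matrix and constant vector of Archetype C
A = Matrix([[2, -3, 1, -6],
            [4,  1, 2,  9],
            [3,  1, 1,  8]])
b = Matrix([-7, -7, -8])

# RREF of the augmented matrix; pivots are 0-based column indices
R, pivots = A.row_join(b).rref()
print(R)       # rows (1,0,0,2,-5), (0,1,0,3,1), (0,0,1,-1,6)
print(pivots)  # (0, 1, 2), i.e., D = {1, 2, 3} in 1-based indexing

# Basis of the null space of the coefficient matrix
print(A.nullspace())  # [Matrix([[-2], [-3], [1], [1]])]

# Rank + nullity = number of columns (Theorem RPNC)
print(A.rank(), len(A.nullspace()), A.cols)  # 3 1 4
```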
6.1.2 Ratios & Rates

Understand the concept of ratio and its relationship to fractions and to the multiplication and division of whole numbers. Use ratios to solve real-world and mathematical problems.

Strand: Number & Operation

Benchmark: 6.1.2.1 Ratios
Identify and use ratios to compare quantities; understand that comparing quantities using ratios is not the same as comparing quantities using subtraction. For example: In a classroom with 15 boys and 10 girls, compare the numbers by subtracting (there are 5 more boys than girls) or by dividing (there are 1.5 times as many boys as girls). The comparison using division may be expressed as a ratio of boys to girls (3 to 2 or 3:2 or 1.5 to 1).

Benchmark: 6.1.2.2 Ratios, Fractions & Percents
Apply the relationship between ratios, equivalent fractions and percents to solve problems in various contexts, including those involving mixtures and concentrations. For example: If 5 cups of trail mix contains 2 cups of raisins, the ratio of raisins to trail mix is 2 to 5. This ratio corresponds to the fact that the raisins are $\frac{2}{5}$ of the total, or 40% of the total. And if one trail mix consists of 2 parts peanuts to 3 parts raisins, and another consists of 4 parts peanuts to 8 parts raisins, then the first mixture has a higher concentration of peanuts.

Benchmark: 6.1.2.3 Rates
Determine the rate for ratios of quantities with different units. For example: 60 miles for every 3 hours is equivalent to 20 miles for every one hour (20 mph).

Benchmark: 6.1.2.4 Ratio & Rate Problems
Use reasoning about multiplication and division to solve ratio and rate problems. For example: If 5 items cost \$3.75, and all items are the same price, then 1 item costs 75 cents, so 12 items cost \$9.00.

Standard 6.1.2 Essential Understandings

The concept of ratio is a critical foundation in the learning progression of algebra concepts, connecting rational numbers to proportion to function in future years. Students at this level use simple reasoning about multiplication and division to solve ratio and rate problems. For example, if 5 items cost \$3.75 and all items are the same price, then the cost of 12 items can be found by first dividing \$3.75 by 5 to find the cost of one item and then multiplying the cost of a single item by 12. By analyzing simple drawings that indicate relative size of quantities and viewing equivalent ratios and rates as deriving from, and extending, pairs of rows (or columns) in the multiplication table, students extend whole number multiplication and division to ratios and rates. Thus, they expand the repertoire of problems that they can solve by multiplication and division, and build on their understanding of fractions to understand ratios. Students apply their knowledge of ratios, equivalent fractions, and percents to solve a wide variety of problems, including those involving mixtures and concentrations.

6.1.2.1 Identify and use ratios to compare quantities; understand that comparing quantities using ratios is not the same as comparing quantities using subtraction.
6.1.2.2 Apply the relationship between ratios, equivalent fractions and percents to solve problems in various contexts, including those involving mixtures and concentrations.
6.1.2.3 Determine the rate for ratios of quantities with different units.
6.1.2.4 Use reasoning about multiplication and division to solve ratio and rate problems.
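To make the unit-rate reasoning of Benchmark 6.1.2.4 concrete, here is a minimal sketch (not part of the standard itself) that divides to find the unit price and then multiplies to scale it:

```python
# Unit-rate reasoning: find the price of one item, then scale to 12 items.
total_cost = 3.75                 # dollars for 5 identical items
items = 5
unit_price = total_cost / items   # 0.75 dollars per item
cost_of_12 = unit_price * 12      # 9.00 dollars
print(f"1 item costs ${unit_price:.2f}; 12 items cost ${cost_of_12:.2f}")
```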
Understand that ratios can express part-to-part, part-to-whole, or whole-to-part relationships; Identify ratios in various contexts and represent them in multiple ways; Use reasoning about multiplication and division to determine equivalent ratios; Determine unit rates; Use the relationships between ratios, equivalent fractions, and percents to solve a variety of problems, including those involving mixtures and concentrations.

Understand that a ratio is a comparison of two quantities; Understand that ratios can be represented in more than one way; Identify, order, write and compare fractions, mixed numbers, improper fractions, decimals, and percents; Locate fractions, mixed numbers, improper fractions, and decimals on a number line; Add and subtract decimals and fractions using multiple strategies including standard algorithms and a variety of representations; Use <, =, and > symbols to express relationships; Use equivalent fractions as a strategy to add and subtract unlike fractions; Multiply and divide whole numbers; Solve real-world and mathematical problems requiring addition and subtraction of decimals, fractions and mixed numbers, including those involving measurement, geometry and data.

Understand numbers, ways of representing numbers, relationships among numbers, and number systems: work flexibly with fractions, decimals and percents to solve problems; understand and use ratios and proportions to represent quantitative relationships. Compute fluently and make reasonable estimates: develop, analyze, and explain methods for solving problems involving proportions, such as scaling and finding equivalent ratios. Understand patterns, relations, and functions: represent, analyze, and generalize a variety of patterns with tables, graphs, words, and, when possible, symbolic rules; relate and compare different forms of representation for a relationship.

6RP (Ratio and Proportional Relationships): Understand ratio concepts and use ratio reasoning to solve problems.
6RP.1 Understand the concept of a ratio and use ratio language to describe a ratio relationship between two quantities.
6RP.2 Understand the concept of a unit rate a/b associated with a ratio a:b with b ≠ 0.
6RP.3 Use ratio and rate reasoning to solve real-world and mathematical problems, e.g., by reasoning about tables of equivalent ratios.
6RP.3.d Use ratio reasoning to manipulate and transform units appropriately when multiplying or dividing.

Students may believe that 8:4 and 2:1 represent different ratios; Students may not understand that order matters in a ratio. For example, students may believe that 3:1 and 1:3 are the same ratio; Given the ratio 3 boys to every 7 girls, students may think there are exactly 3 boys and 7 girls; Students see little difference between fractions and ratios, believing that all ratios express part-to-whole relationships; Students may misinterpret or misrepresent ratios expressed in words.
For example, students may believe that the ratio of problems wrong to problems correct is 1:6 when a student gets one out of every six problems wrong, rather than 1:5; Students believe that adding or subtracting the same number to the numerator and denominator produces an equivalent ratio (e.g., that $\frac{4}{8}$ is equivalent to $\frac{4+2}{8+2}$, or $\frac{6}{10}$); When scaling up by non-integer values, students revert to additive structures (e.g., when asked, "If it takes 6 pizzas to feed 24 people, how many pizzas will it take to feed 36 people," students add 6 + 12 rather than multiply 6 x 1.5); Students do not understand unit rates as fractions (e.g., 25 students per bus means $\frac{25\ \text{students}}{1\ \text{bus}}$).

In the following vignette, students begin their investigation of ratio by solving a problem with familiar context using their prior knowledge of multiplication, division, or fractions. Later, the same problem is solved using the ideas of ratios and equivalent ratios.

Class Party Problem: You are planning a menu for a class party. Each student will receive one drink at the party. Your teacher tells you that for a class this size, you can expect 3 out of 5 students to prefer cola, whereas 2 out of 5 will prefer lemonade. If, in fact, this guideline proves true for your class of 30 students, how many students will want cola? How many will want lemonade?

Teacher: How did you find the number of colas and the number of lemonades you should order?
Student: I drew a picture to represent the problem. I drew a column of 5 circles to represent the 5 students. I labeled 3 with C for cola and 2 with L for lemonade. Then I drew more of these columns of 5 circles until I had 30 circles.
Teacher: How does your model show the answer?
Student: My circles show the number of Cs (colas) and Ls (lemonades) for 30 students. There are 18 Cs and 12 Ls, so I would order 18 colas and 12 lemonades.
Teacher: That makes sense. I can see that if you have 18 colas for 30 students and 12 lemonades for 30 students, then you have 3 colas for every 5 students and 2 lemonades for every 5 students. Did anyone else use a drawing to solve the problem?
Student: I started to, but I realized that I didn't need to draw all the circles.
Teacher: Tell us why.
Student: I knew I needed 30 circles in all - that's 6 groups because there are 5 in each group. There are 3 Cs in each group, so there are 6 groups of 3 Cs, or 6 x 3 = 18 Cs in all. There are 2 Ls in each group, so there are 6 groups of 2 Ls, or 6 x 2 = 12 Ls in all. I used multiplication.
Teacher: I understand. You used division and then multiplication to make sure that there were 3 colas for every 5 students and 2 lemonades for every 5 students. You ended up with 18 colas for 30 students and 12 lemonades for 30 students.
Student: I used division and multiplication, but I didn't draw a model. I know that 30 ÷ 5 = 6, so there are six 5s in 30. That means that I would need 6 x 3 = 18, so I need 18 colas, and 6 x 2 = 12, so I need 12 lemonades. I added 18 + 12 = 30, so I know my answer is right.
Teacher: So you also ended up with 18 colas for 30 students and 12 lemonades for 30 students, which is also 3 colas for every 5 students and 2 lemonades for every 5 students.
Student: I drew the same picture, but I used fractions to solve the problem. The fraction of 5 students that want cola is $\frac{3}{5}$. The fraction of 5 students that want lemonade is $\frac{2}{5}$. Then I used equivalent fractions to find the fraction of 30 students that want cola and lemonade.
$\frac{3}{5}=\frac{?}{30}$, $\frac{3 \times 6}{5 \times 6}=\frac{18}{30}$, so I need to order 18 colas. $\frac{2}{5}=\frac{?}{30}$, $\frac{2 \times 6}{5 \times 6}=\frac{12}{30}$, so I need to order 12 lemonades. Add 18 + 12 = 30, so my answer is right.

Teacher: Again, there are 18 colas for 30 students (or 3 colas for every 5 students) and 12 lemonades for 30 students (or 2 lemonades for every 5 students). So what we've done today is use the idea called ratio. When we describe this situation by saying things like 3 for every 5 and 18 for every 30, we are using ratio language. We were able to use what we know about fractions and multiplication and division in this ratio situation because those ideas are also related to the idea of comparing some number to another number.

Later that week...

Teacher: Remember how we used drawings and multiplication and division to solve the Class Party Problem? (Teacher reviews drawings and procedures.) Now we know that we can use the ideas of ratios and equivalent ratios to solve the problem. Let's revisit the Class Party Problem, this time using ratios to compare the students who want cola and lemonade to the class, and then finding equivalent ratios for thirty students.

Sample student work:

Note: Students may understand that rather than use equivalent ratios to find both the number of colas and the number of lemonades needed for 30 students, they could find the equivalent ratio for one drink, and subtract that from the total, 30, to find the number of the other drink. For example, if students found that they needed 18 colas for 30 students, then they can surmise that they need 30 - 18, or 12, lemonades. Completing the problem both ways is valuable so that students realize that the sum of the colas and the lemonades equals 30, which is 18 + 12 = 30.

Students who struggle to identify equivalent ratios written in the form a:b will benefit from representing the ratios with fractions before determining equivalency. For example, by writing 8:4 as $\frac{8}{4}$ and 2:1 as $\frac{2}{1}$, it quickly becomes apparent that the two ratios are equivalent. Similarly, by writing 3:1 as $\frac{3}{1}$ and 1:3 as $\frac{1}{3}$, it becomes apparent that the ratios are not equivalent and order matters;

Visual models can help students understand that ratios represent a comparison of two quantities by division, rather than two unique quantities. The following pictures show that if the ratio of number of pieces of pizza eaten to the total number of pieces of pizza is $\frac{1}{2}$, that does not mean that the pizza was necessarily cut into 2 pieces;

As students begin to use what they know about fractions to make comparisons, they will learn to make other comparisons. For example, if the ratio of children to adults in a family is 4 to 2, they can compare the number of adults to the total number of people in the family with the ratio 2 to 6, and so on. In general, students will learn that they can write part-to-part, part-to-whole, and whole-to-part ratios. Students must pay close attention to language to determine whether they are being asked to represent a part-to-part, part-to-whole, or whole-to-part ratio;

Students also need to build a connection from division to ratio and rate. This can be accomplished when they learn that a ratio is a comparison of two numbers by division; for example, the ratio 4:5 can be written as $\frac{4}{5}$ and is the quotient of 4 ÷ 5. Students will use this relationship when they write ratios as decimals.
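The fraction-based equivalence check described above can also be demonstrated computationally. A minimal sketch (illustrative only, not part of the source) using Python's Fraction, which reduces to lowest terms automatically:

```python
from fractions import Fraction

# Rewrite a:b as the fraction a/b; equal fractions mean equivalent ratios.
print(Fraction(8, 4) == Fraction(2, 1))  # True  -> 8:4 and 2:1 are equivalent
print(Fraction(3, 1) == Fraction(1, 3))  # False -> order matters in a ratio

# The Class Party Problem with equivalent ratios: 3 colas for every 5 students
students = 30
colas = Fraction(3, 5) * students        # 18
lemonades = Fraction(2, 5) * students    # 12
print(colas, lemonades, colas + lemonades == students)  # 18 12 True
```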
Before connecting multiplication and division to the concepts of ratio and rate, students first need to build a connection from additive reasoning to multiplicative reasoning. That is, they need to understand that 2 times has a different meaning than 2 more than, and so on.

Teachers must help students make the transition from only thinking about a fraction as representing the comparison of a part to a whole to thinking about a fraction as a representation of many types of ratios. Expanding this thinking is particularly important in dealing with the difference between the representations of equivalent fractions and equivalent ratios. In common representations of equivalent fractions, the whole stays the same. To create the equivalence, the whole is divided into smaller and smaller parts. Teachers need to demonstrate that any fraction can be expressed in an infinite number of ways. By contrast, in common representations of equivalent ratios, the whole is replicated over and over, or multiplied. In the equivalent ratios 1:2 = 2:4 = 3:6 = 5:10, the 1-for-every-2 relationship is replicated over and over, or multiplied, and the relationship of the new parts to the new whole does not change. This concept of scaling lays the foundation of proportionality in later grades.

Visuals can be used to confront the misconception by showing that adding or subtracting the same number in the numerator and denominator does not produce an equivalent ratio, whereas multiplying or dividing by the same number does. Example: $\frac{4}{8}$ is not equivalent to $\frac{4-2}{8-2}$, or $\frac{2}{6}$. However, $\frac{4}{8}$ is equivalent to $\frac{4÷2}{8÷2}$, or $\frac{2}{4}$.

The multiplication table is a powerful model to show equivalent fractions, and therefore, equivalent ratios. For example, following the rows for 4 and 5 to the right on the table shown below shows the equivalent ratios $\frac{4}{5}, \frac{8}{10}, \frac{12}{15},$ and so on.

A useful activity to help students build a connection from multiplication to ratio and rate is to explore multiplication as scaling. When a quantity is multiplied by a number greater than 1, it is scaled up. When a number is multiplied by a positive number less than 1, it is scaled down. Opportunities to examine such problems as those shown below can help students understand the concept of scaling.

● Ask students to recall when they have heard or used expressions with the word per, and make a list of the expressions. Explain that these are all rates and have a denominator of 1. Ask students how they would find a unit rate from a rate such as \$32.50 for 10 gallons of gas. Conversely, ask how they would use a unit rate of 25 miles per gallon to determine the number of miles traveled using 10 gallons.

● The words used to describe unit rates (e.g., 25 students per bus), and the fact that students often see unit rates described with only one number visible, make it difficult for students to think about unit rates in fraction form. It is beneficial for students to understand that every rate situation can be written in two ways with two different unit rates, with either unit as 1. For example, the situation in which 6 pounds of bananas cost \$3 can be written as the rate $\frac{6\ \text{pounds}}{3\ \text{dollars}}$ and as the unit rate in terms of pounds per dollar: $\frac{2\ \text{pounds}}{1\ \text{dollar}}$. The same situation can be described by the rate $\frac{3\ \text{dollars}}{6\ \text{pounds}}$ and as the unit rate in terms of dollars per pound: $\frac{0.5\ \text{dollars}}{1\ \text{pound}}$ or $\frac{\$0.50}{1\ \text{pound}}$.
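The two-unit-rates idea in the banana example can be sketched in a few lines (illustrative only, not part of the source):

```python
from fractions import Fraction

# 6 pounds of bananas cost 3 dollars; every rate yields two unit rates.
pounds, dollars = 6, 3
pounds_per_dollar = Fraction(pounds, dollars)  # 2 pounds per 1 dollar
dollars_per_pound = Fraction(dollars, pounds)  # 1/2 dollar per 1 pound
print(pounds_per_dollar, float(dollars_per_pound))  # 2 0.5
```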
When the unit rate is written in terms of an amount of money for 1 unit of a quantity, it is called the unit cost or unit price. In either instance, to find the unit rate, one must find the equivalent ratio whose denominator is 1 unit. Two ways to find unit rates are to use equivalent ratios or to divide the numerator by the denominator.

● Students will benefit from constructing various representations of additive relationships and multiplicative relationships, as shown below.

● Bean Counting and Ratios
Description: By using sampling from a large collection of beans, students get a sense of equivalent fractions, which leads to a better understanding of proportions. Equivalent fractions are used to develop an understanding of proportions. This lesson can be adapted for lower-skilled students by using a more common fraction, such as $\frac{2}{3}$. It can be adapted for upper grades or higher-skilled students by using ratios that are less instinctive, such as $\frac{12}{42}$ (which reduces to $\frac{2}{7}$).

● Ratio. Fraction. What's the Difference?
This lesson explains why ratios and fractions are actually two different ideas and when they can actually overlap.

● ThinkingBlocks: Ratios
This website provides an excellent opportunity for teachers to differentiate with regard to ratios. It has a variety of levels and can easily challenge the gifted students as well. There are video segments to support the teaching of skills.

per: for each, or for every; often used with units to express a rate. Example: If apples cost \$1.99 per pound, then each pound of apples costs \$1.99.
percent: a ratio comparing a number to 100. Example: 45% = $\frac{45}{100}$
proportion: an equation that states that two ratios are equivalent. Example: $\frac{2}{3}=\frac{4}{6}$
rate: a ratio that compares two quantities measured in different units; may be expressed using the word per. Examples: $\frac{100\ \text{students}}{4\ \text{buses}}$; 100 students per 4 buses.
ratio: a comparison of two quantities by division. Examples: 12 to 25, 12:25, $\frac{12}{25}$
unit rate: a rate with a denominator of 1 unit. Example: $\frac{\$3.89}{1\ \text{gallon}}$ or \$3.89 per gallon

Reflection - Critical Questions regarding teaching and learning of these benchmarks
How have equivalent fractions been used to build an understanding of equivalent ratios?
Do students have enough time and experiences to fully understand ratio concepts before moving on to related proportional ideas?
How can problem solving be used to strengthen connections among multiplication, division, fractions, and ratio?
What experiences help students see why additive strategies don't make sense when working with ratios?
How were real-life problems used to engage students?
What evidence exists to show that students are transitioning from concrete and numerical representations to algebraic reasoning, generalization, and abstract representations?

Materials - suggested articles and books
● NCTM. A Research Companion to Principles and Standards for School Mathematics, Chapter 14, The Ratio Concept, p. 217.
● Minnesota's K-12 Mathematics Frameworks. (1998). St. Paul, MN: SciMathMN.
● National Council of Teachers of Mathematics. (2010). Developing essential understanding of ratios, proportions & proportional reasoning grades 6-8. Reston, VA: National Council of Teachers of Mathematics, Inc.

1. Ashley bought a twelve-pack of juice boxes for \$3.84. How much did one juice box cost?
a. \$0.32 b. \$0.40 c. \$3.20 d. \$4.00

2. A map uses 8 cm to represent 28 miles. How many cm would be used to represent 70 miles?
Answer: 20 cm

3. Sam read 60 pages of his novel in 100 minutes. How many pages of his novel can Sam expect to read in 45 minutes if he reads at the same rate?
Answer: 27 pages

4. Ashley biked 32 miles in 2 hours. Mike biked 12 miles in 1 hour. How much farther can Ashley bike than Mike in 5 hours if they both continue at the same rate?
a. 16 miles b. 20 miles c. 24 miles d. 28 miles

(DOK Level 3)
5. A drink recipe calls for 1 part lemonade, 3 parts orange juice, and 4 parts water. How much lemonade, orange juice, and water are needed to make 64 fluid ounces of the drink using the recipe? Explain how you found your answer.
Answer: Sixty-four fluid ounces of the drink will require 8 fluid ounces lemonade, 24 fluid ounces orange juice, and 32 fluid ounces water. My first step was to find the total number of parts: 1 part + 3 parts + 4 parts = 8 parts. Then I divided the total number of fluid ounces by the total number of parts: $\frac{64\ \text{fluid ounces}}{8\ \text{parts}}=\frac{8\ \text{fluid ounces}}{1\ \text{part}}$. Next I multiplied the number of parts by $\frac{8\ \text{fluid ounces}}{1\ \text{part}}$ to find the number of fluid ounces of each ingredient: lemonade - $1\ \text{part} \times \frac{8\ \text{fluid ounces}}{1\ \text{part}}=8$ fluid ounces; orange juice - $3\ \text{parts} \times \frac{8\ \text{fluid ounces}}{1\ \text{part}}=24$ fluid ounces; water - $4\ \text{parts} \times \frac{8\ \text{fluid ounces}}{1\ \text{part}}=32$ fluid ounces. I know my answer is reasonable because when I add the number of fluid ounces of each ingredient, I get 64 fluid ounces: (8 fluid ounces lemonade + 24 fluid ounces orange juice + 32 fluid ounces water = 64 fluid ounces drink).

6. A photo measuring 4 inches wide by 6 inches long needs to be enlarged to be 8 inches wide, using the same ratio for the dimensions. Carlos says that the new dimensions will be 8 inches wide by 10 inches long. Devon says the new dimensions will be 8 inches wide by 12 inches long. Which student is correct? Explain how you know.
Answer: Devon is correct because $\frac{4}{6}$ and $\frac{8}{12}$ are equivalent ratios. $\frac{4}{6} \times \frac{2}{2}=\frac{4 \times 2}{6 \times 2}= \frac{8}{12}$.

7. Your school plans to install a new flagpole that is a minimum of 20 feet high. When flown on a flagpole, it is suggested that the width of the American flag is $\frac{1}{4}$ the height of the flagpole. The standard ratio for width:length of the American flag is 10:19. Your task is to recommend a flagpole height and flag dimensions. Justify that your recommendations meet the requirements.
Sample Answer: I recommend a 24-foot flagpole with an American flag that measures 6 feet wide and 11.4 feet long: $\frac{6\ \text{feet (flag width)}}{24\ \text{feet (flagpole height)}}=\frac{1}{4}$, and $\frac{6}{11.4}$ is equivalent to the standard ratio $\frac{10}{19}$.

When describing a ratio such as 2:3 to students, it may be helpful to use language such as "for every..." E.g., for every 2 parts concentrate, we need 3 parts water. Use pictorial representations to help students visualize ratios; Encourage students to look for patterning when working with a rate table. Continue working with a number line to show the magnitude of a fraction or ratio; e.g., lay out more than one number line containing differing pre-segmented parts to compare two fractions. Using a multiplication table can assist with showing equivalent ratios. To help students see equivalent ratios when using non-consecutive rows such as 2 and 7 to make $\frac{2}{7},\frac{4}{14},$ and $\frac{6}{21}$, have the students cut apart the rows of the table and reposition the rows so they are directly above each other.
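The multiplication-table strategy just described can be mimicked in a few lines. This sketch (illustrative only, not from the source) prints the equivalent ratios generated by rows 2 and 7 and confirms they all reduce to the same fraction:

```python
from fractions import Fraction

# Pairs of rows in the multiplication table generate equivalent ratios.
row_a, row_b = 2, 7
for n in range(1, 7):
    print(f"{row_a * n}:{row_b * n}")  # 2:7, 4:14, 6:21, 8:28, 10:35, 12:42

# Each pair reduces to the same fraction, confirming equivalence.
assert all(Fraction(row_a * n, row_b * n) == Fraction(row_a, row_b)
           for n in range(1, 7))
```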
Teach and assist students in translating statements in word problems into mathematical expressions.

● Ratio, Proportions, Algebra, and Functions
This site provides a multitude of tutorials and practice opportunities for reviewing basic ratio, proportion and algebraic concepts involving sequences, expressions, equations and graphs. The vocabulary tutorial is strong.

Create a visual glossary that includes examples of ratios represented in different forms, rates, unit rates, and proportions; Use Venn diagrams to show relationships among ratios, rates, unit rates, and proportions; Use pictorial representations of word problems to assist in problem solving.

● Constant Dimensions
In this lesson, students measure the length and width of a rectangle with standard and nonstandard units to discover that the ratio of length to width is constant.

● In this activity, an applet allows students to vary the gear ratio of a bike. Students choose a route and attempt to capture five flags placed on the route.

● Capture-Recapture
In this lesson, students learn how to estimate the size of a total population by taking samples and using proportions.

identifying part-to-part, part-to-whole, and whole-to-part ratios and representing them in different forms;
transitioning students from thinking about fractions only as part-to-whole relationships to thinking about fractions as representations for many types of ratios;
using multiplication tables to explore equivalent ratios;
making explicit connections between pairs of rows (or columns) and equivalent ratios;
finding equivalent ratios;
building on student understanding of equivalent fractions;
using reasoning about multiplication and division to solve problems involving proportional relationships;
posing problems involving proportional relationships that arise from students' real-world experiences;
using numbers, tables, graphs, and equations to think about quantities and their relationships;
pointing out the difference between additive and multiplicative relationships;
representing proportional relationships in words, tables, graphs, and equations;
asking students to use multiple representations of proportional relationships;
discussing, writing, and reflecting about their reasoning;
paying attention to student reasoning to help them transition from concrete and numerical representations to algebraic reasoning, generalization, and abstract representations.

Ratios, a website that explains how ratios are used to build proportions.
Ratio and Proportion, a website that includes sample worked problems.
Ratios as Fractions in Simplest Form, a video that explains how ratios written in various forms can be rewritten as fractions in simplest form.
Simplifying Rates and Ratios, a video that uses factor trees to find common factors and simplify rates and ratios.
Finding Unit Rates, a video that uses division to find unit rates.

6.1.2.1 Ratios
6.1.2.2 Ratios, Fractions & Percents
6.1.2.3 Rates
6.1.2.4 Ratio & Rate Problems
Prime Magic Square/Examples/Order 3/Smallest

Example of Order $3$ Prime Magic Square

This order $3$ prime magic square has the smallest elements:

$\begin{array}{|c|c|c|} \hline 67 & 1 & 43 \\ \hline 13 & 37 & 61 \\ \hline 31 & 73 & 7 \\ \hline \end{array}$

For the purpose of this magic square only, we consider $1$ as a prime. A simple parity argument shows that $2$ cannot be included in a prime magic square: if it were, the row containing $2$ would sum to an even number, while a row not containing $2$ would sum to an odd number.

Proof

We aim to show that all elements of an order $3$ prime magic square have the same remainder when divided by $3$. There are two parts to this:

$3$ cannot be used

For simplicity, we denote the numbers in the cells by their remainders when divided by $3$. Note that $3$ is the only prime divisible by $3$.

We define the off-diagonals as:

$\begin{array}{|c|c|c|} \hline * & & \\ \hline & * & \\ \hline & & * \\ \hline \end{array} \begin{array}{|c|c|c|} \hline & * & \\ \hline & & * \\ \hline * & & \\ \hline \end{array} \begin{array}{|c|c|c|} \hline & & * \\ \hline * & & \\ \hline & * & \\ \hline \end{array} \begin{array}{|c|c|c|} \hline & & * \\ \hline & * & \\ \hline * & & \\ \hline \end{array} \begin{array}{|c|c|c|} \hline & * & \\ \hline * & & \\ \hline & & * \\ \hline \end{array} \begin{array}{|c|c|c|} \hline * & & \\ \hline & & * \\ \hline & * & \\ \hline \end{array}$

We also observe that, by switching rows and columns, the numbers in each row and column remain unchanged, while the two diagonals become two off-diagonals sharing one cell. Therefore the positions of the numbers do not matter for the most part.

Suppose $3$ is used in the square. Without loss of generality there are only two cases:

Case $1$: The row containing $3$ has numbers with all remainders

$\begin{array}{|c|c|c|} \hline 0 & 1 & 2 \\ \hline & & \\ \hline & & \\ \hline \end{array}$

Hence the row sum is divisible by $3$, and so every row and column sum must be divisible by $3$. Since $3$ is the only number with remainder $0$, the remaining cells have remainder $1$ or $2$; as $1 + 1 \equiv 2 \pmod 3$ and $2 + 2 \equiv 1 \pmod 3$, there is a unique way to fill in the columns:

$\begin{array}{|c|c|c|} \hline 0 & 1 & 2 \\ \hline 1 & 1 & 2 \\ \hline 2 & 1 & 2 \\ \hline \end{array}$

Note that the order of $1$ and $2$ in the leftmost column does not matter due to symmetry.

The sums of rows $2$ and $3$ are not divisible by $3$. Hence this case cannot occur. $\Box$

Case $2$: The row containing $3$ leaves out numbers with some remainder

Without loss of generality suppose $2$ is not used in that row, so the row is $0, 1, 1$. Filling in the columns (each must have the same sum, $\equiv 2 \pmod 3$, as that row):

$\begin{array}{|c|c|c|} \hline 0 & 1 & 1 \\ \hline 1 & 2 & 2 \\ \hline 1 & 2 & 2 \\ \hline \end{array}$

All off-diagonals sum to $1 \pmod 3$, which is not the required sum $0 + 1 + 1 \equiv 2 \pmod 3$. Hence this case cannot occur either. $\Box$

Primes of remainder $1, 2$ cannot be mixed

Without loss of generality suppose a row contains two $1$'s and one $2$. Then the row sum is not divisible by $3$.
Filling in the first and third columns:

$\begin{array}{|c|c|c|} \hline 1 & 1 & 2 \\ \hline 1 & & 1 \\ \hline 2 & & 1 \\ \hline \end{array}$

Finally, filling up the rows:

$\begin{array}{|c|c|c|} \hline 1 & 1 & 2 \\ \hline 1 & 2 & 1 \\ \hline 2 & 1 & 1 \\ \hline \end{array}$

There must be an off-diagonal with sum divisible by $3$, which cannot equal the common line sum ($\equiv 1 \pmod 3$). Hence primes of remainder $1$ and $2$ cannot be mixed. $\Box$

Using this result, we divide the primes $\le 73$ into two groups:
Remainder of $1$: $\set {1, 7, 13, 19, 31, 37, 43, 61, 67, 73}$
Remainder of $2$: $\set {5, 11, 17, 23, 29, 41, 47, 53, 59, 71}$

We only need to show that these primes cannot form a smaller magic square.

$\begin{array}{|c|c|c|} \hline a & b & c \\ \hline d & e & f \\ \hline g & h & i \\ \hline \end{array}$

Let $C$ be the magic constant. Then:

\begin{align*} 4C &= (a + e + i) + (b + e + h) + (c + e + g) + (d + e + f) && \text{these are all lines passing through the center} \\ &= (a + b + c + d + e + f + g + h + i) + 3e && \text{center counted $4$ times} \\ &= 3C + 3e \end{align*}

Hence $e = \dfrac{C}{3}$, which is $\dfrac{1}{9}$ of the sum of all numbers in the square.

$1 + 7 + 13 + 19 + 31 + 37 + 43 + 61 + 67 + 73 = 352$
$5 + 11 + 17 + 23 + 29 + 41 + 47 + 53 + 59 + 71 = 356$

$352$ and $356$ have remainders $1$ and $5$ when divided by $9$.

In the lists, $1, 19, 37, 73$ have a remainder of $1$ when divided by $9$, and $5, 23, 41, 59$ have a remainder of $5$. Omitting each number gives the corresponding center square values:
$39, 37, 35, 31$ for the first list
$39, 37, 35, 33$ for the second list

The center must itself be one of the nine primes used, so only $31$ and $37$ of the first list are possible candidates.

For $e = 31$ (omitting $73$), the magic constant is $C = 93$, but the square must still contain $67$, which lies on a line through the center; since $67 + 31 > 93$, that line cannot attain the magic constant. Hence $31$ fails to produce a magic square.

This leaves $37$, and the square displayed above demonstrates that this possibility is realized. $\blacksquare$
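The arithmetic in this proof is easy to check by machine. The following Python sketch (not part of the ProofWiki page) verifies the displayed square and recovers the two center candidates $31$ and $37$:

```python
# Verify the order-3 prime magic square with smallest elements
# (1 is treated as prime, as in the statement above).
square = [[67, 1, 43],
          [13, 37, 61],
          [31, 73, 7]]

rows = [sum(r) for r in square]
cols = [sum(c) for c in zip(*square)]
diags = [square[0][0] + square[1][1] + square[2][2],
         square[0][2] + square[1][1] + square[2][0]]
assert len(set(rows + cols + diags)) == 1  # all lines sum to C = 111
assert square[1][1] == 111 // 3            # center e = C / 3

# Center candidates: omit one element so the remaining nine sum to 9e,
# with e itself among the nine remaining primes.
primes_rem1 = [1, 7, 13, 19, 31, 37, 43, 61, 67, 73]
for omit in primes_rem1:
    s = sum(primes_rem1) - omit
    if s % 9 == 0 and s // 9 in primes_rem1 and s // 9 != omit:
        print("omit", omit, "-> center", s // 9)
# prints: omit 19 -> center 37, and omit 73 -> center 31
```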
Article | Open | Published: 09 January 2018

Deep level transient spectroscopic investigation of phosphorus-doped silicon by self-assembled molecular monolayers

Xuejiao Gao, Bin Guan, Abdelmadjid Mesli, Kaixiang Chen & Yaping Dan

Nature Communications, volume 9, Article number: 118 (2018)

Subjects: Electronic properties and materials; Molecular self-assembly

It is known that the self-assembled molecular monolayer doping technique has the advantages of forming ultra-shallow junctions and introducing minimal defects in semiconductors. In this paper, we report however the formation of carbon-related defects in the molecular-monolayer-doped silicon, as detected by deep-level transient spectroscopy and low-temperature Hall measurements. The molecular monolayer doping process is performed by modifying a silicon substrate with phosphorus-containing molecules and annealing at high temperature. The subsequent rapid thermal annealing drives phosphorus dopants along with carbon contaminants into the silicon substrate, resulting in a dramatic decrease of sheet resistance for the intrinsic silicon substrate. Low-temperature Hall measurements and secondary ion mass spectrometry indicate that phosphorus is the only electrically active dopant after the molecular monolayer doping. However, during this process, at least 20% of the phosphorus dopants are electrically deactivated. The deep-level transient spectroscopy shows that carbon-related defects are responsible for such deactivation.

Self-assembled molecular monolayer (SAMM) doping is a potential doping technique to tackle the challenges in the formation of sub-10-nm ultra-shallow junctions1 and has the advantages of facilitating mass production and being applicable to semiconductors like Si, Ge, InAs, GaAs, etc.2,3,4,5. In this technique, dopant-carrying molecules are first covalently immobilized on the semiconductor surface via surface reactions. Due to the surface self-limiting property, the areal dose of dopant molecules can be modulated by varying reaction temperature6, reaction time6, molecule size7, and the composition of the molecules8,9. Subsequently, the dopants are driven into the semiconductor bulk and activated by thermal annealing. Unlike the technique of ion implantation, no lattice damage is found during the dopant-incorporation process2,8,10. In addition, this technique is suitable for doping in complex geometry structures, such as nanopillar arrays4 or fins in fin-FETs11.

During the thermal annealing process, other atoms in the molecular monolayer such as oxygen, hydrogen, and especially carbon12 can be driven into silicon together with the desired doping element. These impurities are difficult to detect due to their atomic nature and low concentrations. It remains an open question whether these unintentional impurities form complex defects and how these defects affect the electrical properties of the substrate. Longo et al.13 have suspected the possible influence of unintentional carbon contamination during the doping process and hence reported a SAMM doping method to minimize carbon incorporation by breaking chemical bonds and releasing carbon at a lower temperature than that of annealing. However, no detailed information was given in their study on why the carbon ligands were removed before thermal annealing and how they affect the electrical properties of the substrate.
Shimizu et al.12 investigated the diffusion behavior of carbon and oxygen contaminants in phosphorus-doped Si substrates by time-of-flight secondary ion mass spectrometry (ToF-SIMS) and atom probe tomography (APT), finding that the contaminants were limited to the first atomic layer and could be easily removed. Puglisi and coworkers14 believed that a surface layer where silicon intermixed with carbon from dopant-carrying molecules was present after SAMM doping. However, with a significant solubility in silicon and a diffusion coefficient larger than that of phosphorus15, it is likely that carbon forms active defects, which would have a significant influence on the electrical properties of the substrate. For example, interstitial carbon can bond with group V elements like substitutional phosphorus, arsenic, and antimony, forming the pairs C$_i$–P$_s$, C$_i$–As$_s$, and C$_i$–Sb$_s$ with multiple deep energy levels16 corresponding to several atomic configurations.

Deep-level transient spectroscopy (DLTS) is a very sensitive technique to study defects in bulk semiconductors, providing information on energy levels and concentrations of related defects17. Tremendous efforts have been made to acquire energy levels of impurities like carbon, oxygen, hydrogen, and their complexes in silicon by using DLTS18,19,20. In this paper, we employ DLTS to investigate defects formed by impurities in SAMM-doped silicon. The molecular monolayer grafting and doping are characterized by X-ray photoelectron spectroscopy (XPS) and van der Pauw measurements, respectively. The total phosphorus concentration and the active fraction are determined by secondary ion mass spectrometry (SIMS) and low-temperature Hall measurements, respectively. The DLTS study shows that carbon-related defects are present in the SAMM-doped silicon, resulting in the electrical annihilation of phosphorus dopants due to bonding with interstitial carbon.

We fabricated phosphorus-functionalized silicon as outlined in Fig. 1. Briefly, a freshly prepared hydrogen-terminated silicon (surface 1) was passivated with 5-hexenyl acetate (molecule 1) in Ar atmosphere at 95 °C for 16 h, yielding a surface with acetate terminus (surface 2). Subsequently the acetate surface was reduced into a hydroxyl-terminated surface (surface 3) by lithium aluminum hydride (LiAlH$_4$) in tetrahydrofuran (THF) at 70 °C for 2 h. The hydroxyl groups on the surface were reacted with alkylphosphate (molecule 2) in the presence of the activation agent dicyclohexylcarbodiimide (DCC), forming a phosphate ester, thus rendering phosphorus-functionalized silicon (surface 4).

Stepwise surface modification on Si (100) surfaces. Molecule 1 is chemically grafted onto surface 1 under thermal treatment at 95 °C for 16 h, forming a molecular monolayer on surface 2. Molecule 2 reacts with the hydroxyl group on surface 3, leading to a phosphorus-functionalized surface 4.

Each step of modification was characterized by XPS, as shown in Fig. 2 and Supplementary Figure 1. The high-resolution narrow scan of C 1s for surface 2 (Fig. 2a) reveals a broad peak at 285.0 eV (FWHM 1.4 eV) related to aliphatic carbon-bonded carbon (C–C) from 5-hexenyl acetate. The broad peak has a side shoulder at 286.6 eV (FWHM 1.6 eV) attributed to oxygen-bonded carbon (C–O) and carbon adjacent to the carbonyl (C(C=O)). The small bump at 289.0 eV (FWHM 1.6 eV) is assigned to the carbon of the carbonyl (C=O). The integral peak area ratio of C–C, C–O/C(C=O), and C=O is 6:2:1, consistent with the stoichiometric ratio of 5-hexenyl acetate (5:2:1) immobilized on the surface.
To provide a reference for surface 4 later, we also examined the P 2s XPS spectrum of surface 2, ranging from 175 to 210 eV, for phosphorus signal. As expected, no phosphorus was detected, except two broad peaks (Fig. 2b) due to silicon plasmon loss21. For surface 3, the C 1s scan (Fig. 2c) shows the same three carbon components as on surface 2, namely C–C, C–O/C(C=O), and C=O, with a peak area ratio of 10:3:1. This indicates that about half of the acetate groups on the surface have been reduced to hydroxyl. For surface 4, the C 1s scan shows that the peak area ratio further increases to 30:5:1 (Fig. 2d), suggesting that the alkylphosphate is successfully coupled onto the Si surface. This successful coupling is also supported by an additional peak at 192.0 eV (FWHM 2.8 eV) in the P 2s spectrum (Fig. 2e), which is assigned to phosphorus from phosphate22.

XPS spectra of modified silicon samples. a High-resolution narrow scans of C 1s and b P 2s obtained from 5-hexenyl acetate monolayers on silicon (surface 2 in Fig. 1). c C 1s spectrum of hydroxyl-terminated surface 3. d High-resolution scans of C 1s and e P 2s from the phosphorus-modified silicon sample (surface 4).

To drive the molecular-monolayer-carried phosphorus into the intrinsic silicon substrate (>10 kΩ cm), the chemically modified Si samples were first coated with SiO$_2$ made from spin-on-glass (SOG) and then annealed at 1050 °C for 2 min. The SiO$_2$ layer was later removed by buffered oxide etchant (BOE, HF:NH$_4$F = 6:1) before electrical characterizations. Van der Pauw four-point measurements23 (Supplementary Note 1 and Supplementary Figure 2) were performed in darkness on the unmodified Si (surface 1), annealed surface 3 (as a control for the phosphorus-doped sample), and surface 4 (phosphorus-doped sample). As shown in Table 1, the sheet resistance ($R_s$) for the control sample decreases slightly from 317 kΩ/sq (for the undoped silicon) to 226 kΩ/sq, indicating no significant contamination introduced in the process. For the phosphorus-doped sample, the resistance drops dramatically to 1.06 kΩ/sq after doping. This suggests that the phosphorus dopants have diffused into and electrically doped the silicon substrate.

Table 1 Sheet resistances of silicon samples via the SAMM doping technique by van der Pauw measurement

To examine the total amount of phosphorus incorporated into Si, the phosphorus-doped sample was analyzed by SIMS. As shown in Fig. 3a, the distribution of phosphorus dopants is highly non-uniform (see more discussions in Supplementary Note 2). The phosphorus concentration drops from around $3 \times 10^{18}$ cm$^{-3}$ by nearly three orders of magnitude within 200 nm below the surface. In terms of surface concentration, the phosphorus concentration per unit area is calculated to be $1.34 \times 10^{13}$ cm$^{-2}$ by integrating all phosphorus from the surface to the bulk. To find out the free electron concentration of the phosphorus-doped samples, we performed Hall measurements. In Fig. 3b, the Hall resistance changes linearly with the applied magnetic field. The slope of the linear dependence is inversely proportional to the free electron concentration as shown in Eq. (1), from which the free electron concentration is found to be $8.92 \times 10^{12}$ cm$^{-2}$. Note that Eq. (1) is based on the assumption of uniform doping. The non-uniform distribution of dopants in our sample may lead to a few percent error in the obtained electron concentration (see Supplementary Note 3 for more discussions).
$$N_{\mathrm{e}} = -\frac{\Delta B}{e \times (\Delta V_{\mathrm{H}}/I)} = -\frac{1}{e \times \text{(slope)}} = \frac{1}{1.6 \times 10^{-19}\ \mathrm{C} \times 70.1\ \mathrm{m^2\ C^{-1}}} = 8.92 \times 10^{12}\ \mathrm{cm^{-2}} \quad (1)$$

in which $e$ is the unit charge, $V_{\mathrm{H}}$ is the Hall voltage, $I$ is the source current, $B$ is the magnetic field, and $N_{\mathrm{e}}$ is the free electron concentration per unit area.

Dopant ionization rate. a Doping profile of phosphorus-doped Si measured by SIMS. b Hall resistance versus magnetic field measured by Hall measurement at room temperature. c Free electron concentration versus temperature. Inset: Hall measurements of phosphorus-doped Si at several temperatures.

Previously, it was reported that nitrogen carried by tert-butyl-N-allylcarbamate can electrically dope silicon24. To check whether other impurities besides phosphorus dopants are also electrically active in our doped sample, low-temperature Hall measurements were performed as shown in Fig. 3c. The temperature was set from 80 K gradually up to 300 K. The electron concentration per unit area was obtained from Hall measurements at each temperature (Supplementary Figure 3 and Supplementary Table 1). As the electron concentration as a function of temperature follows Eq. (2)24, the activation energy of phosphorus dopants was found to be 42 meV by fitting Eq. (2) to the experimental data, which is close to the known value (45 meV) of the phosphorus ionization energy in silicon25. This finding indicates that there is no significant amount of electrically active impurities other than phosphorus donors in the SAMM-doped sample. From the fitting, we also attained the concentration of electrically active phosphorus dopants, which is $1.07 \times 10^{13}$ cm$^{-2}$. The free electrons in the doped sample are believed to originate from this part of the phosphorus dopants. Thus, the ionization rate at room temperature is estimated to be 83.4% if we divide the electron concentration ($8.92 \times 10^{12}$ cm$^{-2}$) by the electrically active phosphorus dopants ($1.07 \times 10^{13}$ cm$^{-2}$). This ionization rate is reasonable, considering that the ionization rate of phosphorus dopants in high concentration (about $10^{18}$ cm$^{-3}$ in particular) is as low as 80%26,27. Quantitatively, a theoretical ionization rate for electrically active phosphorus with the same distribution and concentration ($1.07 \times 10^{13}$ cm$^{-2}$) was calculated considering the effects of incomplete ionization26,27 and the internal electric field. The resultant ionization rate is 81.3% (Supplementary Note 4 and Supplementary Figure 4), in good agreement with the experimental value. It means that this part ($1.07 \times 10^{13}$ cm$^{-2}$) of electrically active phosphorus fits the classical case for phosphorus donors in silicon. Note that the total phosphorus dopant concentration detected by SIMS is $1.34 \times 10^{13}$ cm$^{-2}$. The interesting question is what happened to the remaining 20% (=(1.34 − 1.07)/1.34) of the phosphorus dopants ($0.27 \times 10^{13}$ cm$^{-2}$). We speculate that the remaining phosphorus dopants are electrically annihilated by carbon-related defects.
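As a numerical check (a minimal sketch, not the authors' analysis code), the sheet electron density of Eq. (1) and the resulting ionization rate can be computed directly from the quoted Hall slope:

```python
# Sheet electron density from the Hall slope, Eq. (1), and ionization rate.
e = 1.6e-19                  # C, elementary charge
slope = 70.1                 # m^2/C, magnitude of the Hall slope from Fig. 3b
N_e_m2 = 1.0 / (e * slope)   # electrons per m^2
N_e_cm2 = N_e_m2 * 1e-4      # convert m^-2 to cm^-2
print(f"N_e = {N_e_cm2:.3g} cm^-2")                   # ~8.92e12 cm^-2

N_active = 1.07e13           # cm^-2, electrically active dose from Eq. (2) fit
print(f"ionization rate = {N_e_cm2 / N_active:.1%}")  # ~83%
```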
$$n_{\mathrm{c}} = \frac{-N_{\mathrm{c}} + \sqrt{N_{\mathrm{c}}^2 + 8 N_{\mathrm{c}} N_{\mathrm{D}}\,\mathrm{exp}\left(\frac{\Delta E}{kT}\right)}}{4\,\mathrm{exp}\left(\frac{\Delta E}{kT}\right)} \quad (2)$$

where $N_{\mathrm{c}}$ is the effective density of states function, which is defined as $N_{\mathrm{c}} \approx 2\left(\frac{2\pi m_{\mathrm{n}}^{\ast} kT}{h^2}\right)^{\frac{3}{2}} = w(kT)^{\frac{3}{2}}$ with $w$ being a constant related to the band structure of the semiconductor, $N_{\mathrm{D}}$ is the concentration of donors, and $\Delta E$ is the activation energy, which is equal to $(E_{\mathrm{c}} - E_{\mathrm{d}})$ with $E_{\mathrm{c}}$ and $E_{\mathrm{d}}$ being the conduction band edge and the donor energy level, respectively.

To verify this hypothesis, DLTS measurements were performed on SAMM-doped samples. DLTS requires a Schottky contact to be formed on top of the SAMM-doped surface (Fig. 4a). The depletion region of the Schottky junction will be readily extended into the substrate bulk if an intrinsic substrate is used. As a result, the information extracted from DLTS will mostly originate from the bulk. However, the impurities and defects introduced by the SAMM doping are dominantly located near the surface. To detect possible defects in this region, we prepared a set of new samples on a phosphorus-doped n-type Si (100) substrate with a resistivity of 1–3 Ω cm (phosphorus concentration $1-5 \times 10^{15}$ cm$^{-3}$; carbon concentration $<5 \times 10^{16}$ cm$^{-3}$) to confine the Schottky depletion region near the surface. The same SAMM doping process as described previously (on SAMM-doped surface 4) was conducted on the n-type substrate. The successful doping of phosphorus into the substrate was confirmed by SIMS (Fig. 4c). To form the Schottky contact, a 150-nm-thick Au electrode was evaporated directly on the SAMM-doped surface, which had been cleaned with Piranha solution and hydrofluoric acid. An Al film was evaporated on the back side of the substrate, which had been extensively scratched by a diamond scribe. The scratches create defects, which reduce the minority carrier lifetime and therefore facilitate the formation of an Ohmic contact between the Al film and the n-type silicon substrate (Supplementary Figure 5). No post annealing was conducted to avoid Au/Al diffusion into silicon. The device schematic is shown in the inset of Fig. 4a. A typical I–V curve of the device is depicted in Fig. 4a, evidencing that a Schottky diode is formed. A similar process was also applied to the blank and control samples (both are n-type) to form Schottky contacts (Supplementary Figure 6) for DLTS measurements. The blank sample went through the SiO$_2$ capping and annealing process without any functionalization. The control sample went through all the processes except that the alkylphosphate was not added during the esterification reaction, like surface 3 in Fig. 1.

I–V, C–V, and DLTS data on SAMM-doped phosphorus-doped silicon. a I–V curve of the Schottky diode made on the SAMM-doped sample, with the inset schematically showing the diode structure. b Capacitance as a function of bias voltage in the form of 1/C$^2$ versus V. c Charge carrier concentration at different depths derived from b. As a reference, the phosphorus depth profile by SIMS is also presented in the blue curve. d Comparison of DLTS spectra of the blank sample, control sample, and SAMM-doped sample with reverse-bias pulse from −2 to 0 V, at the rate window of 200 s$^{-1}$.
The inset shows the spectra in the range of 65–85 K.

The voltage-dependent capacitance of the Schottky junctions was first measured at 1 MHz with the dc bias sweeping from −2 to 0 V for the control sample and from −2 to 0.3 V for the SAMM-doped sample. Figure 4b shows the C–V dependence in the form of 1/C$^2$ versus dc voltage bias. For the control sample, the dependence is linear and the built-in potential is extracted as 0.57 V from the intercept with the x coordinate. As expected, this built-in potential increases to 0.76 V as the temperature is lowered to 50 K (Supplementary Figure 7). For the SAMM-doped sample, the dependence of 1/C$^2$ on dc voltage bias is nonlinear due to the highly non-uniform distribution of phosphorus dopants introduced by the SAMM doping process. This nonlinearity makes it unreliable to extract the built-in potential. But the ionized charge profile can be extracted, shown in red dots in Fig. 4c. The concentration of ionized charges in the control sample is around $3 \times 10^{15}$ cm$^{-3}$ (black dots in Fig. 4c), consistent with the nominal resistivity (1–3 Ω cm) of the n-type Si substrate. In contrast, the ionized charge concentration in the SAMM-doped sample drops from about $2 \times 10^{16}$ cm$^{-3}$ at a depth of 140 nm to about $3 \times 10^{15}$ cm$^{-3}$ at about 330 nm, indicating that SAMM-introduced phosphorus diffuses beyond 300 nm. Note that the phosphorus concentration from SIMS (blue lines) is constant at about $10^{16}$ cm$^{-3}$ starting from a depth of 200 nm below the surface due to the relatively high detection limit of the SIMS technique.

DLTS measurements were performed on the samples at a bias of −2 V with an applied pulse of 0 V (hereafter written in the form "bias voltage"–"pulse voltage", i.e., −2 to 0 V) as shown in Fig. 4d. No peaks are detected for the blank sample (black curve), demonstrating that there are nearly no defects in the bare silicon wafer and that the capping layer and the annealing process introduce no defects into silicon. For the carbon-chains-functionalized control sample (red curve), a tiny kink at 75 K (Fig. 4d inset) and a visible peak at 155 K next to a broad bump from 200 to 300 K are observed, indicating that carbon from the dopant-carrying molecules can diffuse into the substrate and produce some defects in phosphorus-doped Si. These defects could be related to C, H, O, and N. Oxygen plays a significant role only in the presence of lattice defects such as vacancies28, which do not exist in the doping process considered in this work. Defects involving hydrogen are very unlikely, as they do not exist after the high-temperature treatments, during which hydrogen out-diffuses29. Finally, nitrogen, if electrically active, has very shallow energy levels, and thus none of the observed levels can be associated with this impurity, unless nitrogen binds to other unknown defects30. Therefore, we would attribute most of the observed defects to complexes where carbon is the main ingredient. For the sample doped by the molecular-monolayer-carried phosphorus (SAMM-doped sample, blue curve), the kink at 75 K is absent (Fig. 4d inset), whereas the peak at 155 K and the broad bump both grow much bigger than the corresponding peaks in the control sample, probably due to the increase in defect concentration brought by the extra amount of carbon and phosphorus. What is more, the shape of the broad bump is skewed in comparison with the control sample, clearly because the closely spaced peaks in the bump increase differently in amplitude. By comparing the three curves in Fig. 4d, we conclude that the SAMM doping process produces defects in phosphorus-doped silicon.
A better explanation of these phenomena requires quantitative identification of the energy levels associated with the peaks. To find the defect energy levels, DLTS measurements at different rate windows were carried out. It is known that DLTS signals peak when the charge emission rate from defects matches the experimental rate window set by the sampling times t1 and t2 (Supplementary Note 5). A higher rate window corresponds to a larger emission rate e_n, shifting DLTS peaks to higher temperatures (Fig. 5a, c), since the emission rate e_n is related to the temperature T and the defect energy level E_a by the following equation (ref. 17):

$$e_{\mathrm{n}} = \frac{\sigma_{\mathrm{n}}\langle v_{\mathrm{n}}\rangle N_{\mathrm{c}}}{g}\exp\left(-\frac{E_{\mathrm{a}}}{kT}\right)$$

where σn is the capture cross-section, 〈vn〉 is the mean thermal velocity of electrons, g is the degeneracy factor (chosen as 2 here), Nc is the effective density of states related to the semiconductor band structure, and k is the Boltzmann constant. Note that the factor 〈vn〉Nc is proportional to T². Hence, the logarithmic term ln(e_n/T²) is linearly related to 1/(kT), as shown in the Arrhenius plots of Fig. 5b, d. The slope of the lines gives the defect energy level Ea, and the intercept with the y axis provides information on the capture cross-section σn (Supplementary Note 6 and Supplementary Table 2). In Fig. 5a (DLTS spectra of the control sample), two isolated peaks are detected in the low-temperature range. As the rate window increases, the peaks shift to the right, in the ranges from 60 to 80 K and from 130 to 160 K. The associated defect energy levels are determined to be 102 meV and 254 meV (Fig. 5b), respectively. Considering that the only species plausibly involved in the control sample are the carbon-related defects mentioned above, the defect energy level at 102 meV can best be ascribed to carbon interstitials (ref. 31), whose configuration is shown in Supplementary Figure 8. The defect energy level at 254 meV reappears in the SAMM-doped sample (252 meV) with a higher amplitude. A defect energy level at 319 meV is extracted for the SAMM-doped sample from the isolated peak shifting from 155 to 190 K in Fig. 5c. In the temperature region above 200 K, two main peaks with associated energy levels at 378 meV and 467 meV can be identified from the bump for the control sample (Fig. 5a, b). Similarly, two energy levels at 405 meV and 469 meV are identified for the SAMM-doped sample in Fig. 5c, d. However, the broad bumps in the DLTS spectra (Fig. 5a, c) clearly consist of multiple closely spaced peaks and may contain more than the main peaks identified so far. To identify the peaks in the broad bumps more accurately, DLTS simulations were conducted according to the basic principle illustrated below, and the results are displayed in Fig. 5e, f.

Fig. 5 | Defect energy level analysis. DLTS spectra (a) and Arrhenius plot (b) of the n-type silicon control sample prepared by annealing the chemically modified silicon surface shown in the inset. DLTS spectra (c) and Arrhenius plot (d) of the SAMM-doped Si (the SAMM structure is displayed in the inset). DLTS simulations on the spectra (rate window of 5 s⁻¹) of the control sample (e) and the SAMM-doped Si (f). Note that the DLTS signals in (e) are much smaller in amplitude than those in (f).
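The slope-and-intercept extraction behind the Arrhenius plots above can be reproduced with a simple least-squares fit of ln(e_n/T²) against 1/(kT). The sketch below uses hypothetical peak positions generated for a 254 meV level; lumping σn〈vn〉Nc/(g T²) into a single temperature-independent prefactor is the standard assumption of this analysis, and the numerical value used here is purely illustrative.

```python
import numpy as np

K_B = 8.617333262e-5   # Boltzmann constant (eV/K)

def arrhenius_fit(T_peak, e_n):
    """Fit ln(e_n / T^2) = ln(prefactor) - Ea / (k*T).

    T_peak: peak temperatures (K) at each rate window; e_n: emission rates (s^-1).
    Returns the activation energy Ea (eV) and the intercept ln(prefactor).
    """
    x = 1.0 / (K_B * T_peak)                 # 1/(kT) in eV^-1
    y = np.log(e_n / T_peak**2)
    slope, intercept = np.polyfit(x, y, 1)
    return -slope, intercept

# Hypothetical data: peak temperatures of a 254 meV level at four rate windows
PREFACTOR = 1e6                               # assumed sigma*<v>*Nc/(g*T^2) in s^-1 K^-2
T = np.array([130.0, 140.0, 150.0, 160.0])
e_n = PREFACTOR * T**2 * np.exp(-0.254 / (K_B * T))
Ea, b = arrhenius_fit(T, e_n)
print(f"Ea = {Ea * 1e3:.0f} meV, intercept = {b:.2f}")   # recovers Ea = 254 meV
```

The intercept is where the capture cross-section information resides: with a known material prefactor, exponentiating it yields σn, which is how Supplementary Table 2 values would be obtained.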
In DLTS, the capacitance transient starts at the end of the excitation pulse and, in its simplest form, decays exponentially. The amplitude of a single peak detected at a given rate window can be expressed as Eq. (4):

$$\Delta C = \Delta C_0\left(\exp(-e_{\mathrm{n}}t_1) - \exp(-e_{\mathrm{n}}t_2)\right)$$

where ΔC0 is the initial capacitance transient (the capacitance transient at the end of the excitation pulse), and t1 and t2 define the rate window. The emission rate e_n is given by Eq. (3). For multiple defect levels, the DLTS signal can be written as Eq. (5) (refer to Supplementary Equation (11)):

$$\Delta C = \sum_i \Delta C_{0i}\left(\exp(-e_{\mathrm{n}i}t_1) - \exp(-e_{\mathrm{n}i}t_2)\right)$$

in which i indexes the ith defect. Table 2 summarizes the defect energy levels of the control and SAMM-doped samples. The energy levels in bold are derived for the bias pulse from −2 to 0 V, from both the Arrhenius plots (numbers without underline) and the simulation results (numbers with underline). The rest are for the other bias pulses, meaning that DLTS is probing other regions, which will be discussed later. All the energy levels are compared with those of interstitial-carbon–substitutional-phosphorus (Ci–Ps) pairs from the literature (last row of Table 2). Five out of six energy levels for the SAMM-doped sample are consistent with the energy levels of Ci–Ps pairs reported previously. Only peak 5, at the energy level near 467 meV or 469 meV, is found independently in both the control and SAMM-doped samples, suggesting that this defect energy level does exist despite not appearing in ref. 32. It is probably due to N-related defects (ref. 16) rather than Ci–Ps multi-configurational defects, since the activation agent (DCC) and the solvent (dimethylformamide) in the SAMM grafting process contain nitrogen. Peak 4 in the SAMM-doped sample is determined by simulation to be at 390 meV instead of the 405 meV shown in the Arrhenius plot in Fig. 5d; the peak at 405 meV indicated by the arrow in Fig. 5f results from the overlap of peak 4 and peak 5. Note that the control sample is n-type silicon with a phosphorus doping concentration of 3 × 10¹⁵ cm⁻³ as purchased. Therefore, all the Ci–Ps-related energy levels that show up in the SAMM-doped sample also appear in the control sample (but with much smaller magnitude), because the carbon defects can bind with both the SAMM-introduced phosphorus dopants and the background phosphorus dopants in the n-type Si substrate (Fig. 4d). It is worth pointing out that the simulated DLTS envelope does not match the experimental results perfectly. Some other peaks clearly exist, which may originate from surface states, nitrogen contaminants (ref. 33), or atomic disorder (ref. 34). The full deconvolution of the DLTS spectra can be found in Supplementary Figures 9–11 and Supplementary Table 3.

Table 2 | Comparison of the energy levels derived from DLTS spectra and simulations with the energy levels of Ci–Ps pairs from ref. 32.

To show clearly that carbon defects bind with the phosphorus dopants introduced by the SAMM doping process, we tuned the bias voltages from −2 to 0 V and the injection pulses from −1 to 0.2 V, pushing the DLTS probing region from the bulk to near the surface (Fig. 6; refs 35,36,37). For comparison, SIMS profiling was also performed for phosphorus and carbon in both the SAMM-doped sample and the blank sample, as shown in Fig. 6a.
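The boxcar expressions in Eqs. (4) and (5) above are straightforward to implement numerically. Below is a minimal sketch of a multi-level DLTS spectrum built from them; the level parameters and the emission-rate prefactor are hypothetical stand-ins, not the paper's fitted values.

```python
import numpy as np

K_B = 8.617333262e-5   # Boltzmann constant (eV/K)
PREFACTOR = 1e6        # assumed prefactor in e_n = PREFACTOR * T^2 * exp(-Ea/kT)

def dlts_signal(T, levels, t1, t2):
    """Multi-level DLTS spectrum per Eq. (5).

    T: temperatures (K); levels: list of (dC0, Ea_eV) pairs, one per defect;
    t1, t2: boxcar sampling times (s) that define the rate window.
    """
    dC = np.zeros_like(T)
    for dC0, Ea in levels:
        e_n = PREFACTOR * T**2 * np.exp(-Ea / (K_B * T))      # Eq. (3)
        dC += dC0 * (np.exp(-e_n * t1) - np.exp(-e_n * t2))   # Eq. (4) per level
    return dC

# Hypothetical three-level spectrum; t1 chosen so the reference rate
# ln(t2/t1)/(t2 - t1) equals 200 s^-1 with t2 = 2*t1
T = np.linspace(50.0, 300.0, 500)
levels = [(1.0, 0.102), (2.0, 0.254), (1.5, 0.390)]
t1 = np.log(2.0) / 200.0
signal = dlts_signal(T, levels, t1, 2.0 * t1)
print(f"dominant peak near T = {T[np.argmax(signal)]:.0f} K")
```

Sweeping t1 (and with it the rate window) shifts each simulated peak in temperature, which is precisely the behavior exploited in the Arrhenius analysis; summing closely spaced levels reproduces the broad, skewed bumps seen in Fig. 5.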
The background phosphorus doping (3 × 10¹⁵ cm⁻³) is detected by the C–V technique (pink curve) but not by SIMS (blue curve and green triangles), because of the relatively high detection limit of SIMS. The combination of SIMS and C–V measurements indicates that the SAMM-introduced phosphorus dopants have a concentration of around 2 × 10¹⁸ cm⁻³ near the surface, which rapidly declines to the background doping concentration of 3 × 10¹⁵ cm⁻³ at a depth of 300 nm below the surface. The certified carbon concentration in our n-type Si substrate is < 5 × 10¹⁶ cm⁻³. The carbon SIMS data reach a floor at 2 × 10¹⁶ cm⁻³ in both the SAMM-doped sample and the blank sample, meaning that the actual background carbon concentration in the substrate is at this level or lower. The concentration of carbon impurities introduced by the SAMM doping process is ~2 × 10¹⁸ cm⁻³ at the surface and decays slowly to 2 × 10¹⁶ cm⁻³ at a distance of about 300 nm from the surface. When the bias is pulsed from 0 to 0.2 V at 300 K, the depletion region edge sweeps approximately from 100 to 64 nm below the surface, a region in which carbon and phosphorus impurities mainly come from the SAMM, as shown in Fig. 6b. Note that the depletion region moves slightly deeper into the substrate at low temperature (Supplementary Figures 12 and 13 and Supplementary Table 4). Figure 6c depicts the corresponding DLTS data within the above sweep range of the depletion region. All the peaks shown here are included in Table 2. DLTS was repeated at other bias pulses; the probing ranges and the corresponding DLTS data are shown in Fig. 6d–i (see also the defect energy levels in Table 2). A detailed analysis of the peak positions and amplitudes would not be conclusive, because of the well-known metastability of carbon-related complex defects (ref. 32). Overall, however, the experimental observations are consistent with the expectation that a larger quantity of carbon and phosphorus impurities in a probing region leads to stronger DLTS signals. For example, although the probing region in Fig. 6h is wider than that in Fig. 6d, the corresponding DLTS signals in Fig. 6i are much weaker, because DLTS is there probing a region deep in the bulk (Fig. 6h) where the phosphorus and carbon concentrations are much lower.

Fig. 6 | DLTS probing region analysis. (a) Phosphorus and carbon depth profiles by SIMS compared with the ionized charge profile derived from C–V. Silvaco simulations of the band structure at 300 K with bias voltages of 0 V (b), −0.2 V (d), −2 V (f), and −2 V (h); the probing regions are shaded in gray for pulses from 0 to 0.2 V (b), −0.2 to 0.2 V (d), −2 to 0 V (f), and −2 to −1 V (h). DLTS simulations on the spectra of the SAMM-doped silicon with pulses from 0 to 0.2 V (c), −0.2 to 0.2 V (e), −2 to 0 V (g), and −2 to −1 V (i). The rate windows of the DLTS spectra are 200 s⁻¹. Note that (c), (e), (g), and (i) share the same y-axis scale for better comparison. A close-up of (i) showing the fitting envelope can be found in Supplementary Figure 11.

It is known that ultra-shallow junctions as the source and drain of modern complementary metal–oxide–semiconductor (CMOS) transistors help suppress the short-channel effect (ref. 8). The SAMM doping technique has the unique advantage of forming ultra-shallow junctions (ref. 8). However, the ultra-scaled thickness of the junctions will inevitably increase the series resistance in the source and drain, resulting in inferior performance for CMOS transistors.
A possible solution is to increase the dopant concentration by increasing the molar ratio of dopant elements in the carrier molecule, as demonstrated previously (ref. 38). However, according to the carbon defect formation mechanism, a higher phosphorus concentration may lead to a larger fraction of inactive phosphorus, offsetting the effect of the higher dopant molar ratio on reducing the series resistance. Logically, new processes should be developed to remove carbon from the dopant-carrying molecules prior to thermal annealing, so that the Ci–Ps defects are minimized and a high ionization rate for the phosphorus dopants is achieved.

In conclusion, we have successfully doped silicon with phosphorus by the SAMM doping technique via a two-step molecular monolayer grafting process. Phosphorus is incorporated into silicon with an areal dose of 1.34 × 10¹³ cm⁻². However, only 80% (1.07 × 10¹³ cm⁻²) of the phosphorus is electrically active; the remaining 20% is deactivated. Carbon diffuses into silicon together with phosphorus but to a much greater depth. This carbon can bond with the group V element, forming complex defects. The corresponding deep energy levels are detected by DLTS for the first time for the SAMM doping technique. With the assistance of DLTS simulations, the multi-configurational Ci–Ps defects are confirmed, indicating that the phosphorus dopants are partially deactivated by interstitial carbon. Therefore, for the SAMM doping technique, carbon in the dopant-carrying molecules should be removed, or controlled at low concentration, before thermal annealing.

Materials
FZ single-side-polished silicon wafers, (100)-oriented (〈100〉 ± 0.05°), 500 ± 25 μm thick, > 10 kΩ cm in resistivity, and CZ single-side-polished silicon wafers, (100)-oriented (〈100〉 ± 0.05°), n-type (phosphorus), 500 ± 10 μm thick, 1–3 Ω cm in resistivity, were purchased from Suzhou Resemi Semiconductor Co. Ltd., China. All chemicals, unless noted otherwise, were of analytical grade and used as received. Isopropanol, acetone, and ethanol for surface cleaning were of CMOS grade. 5-Hexenyl acetate (98%) was purchased from TCI, Shanghai. Mono-n-dodecyl phosphate (97%) was from Alfa Aesar. Dicyclohexylcarbodiimide (DCC, 99%), lithium aluminum hydride (LiAlH4 powder, reagent grade, 95%), and hydrofluoric acid (HF, 48%, CMOS grade) were from Sigma Aldrich.

Wafer cleaning
Si wafers were cleaved into 1.5 cm × 1.5 cm pieces and cleaned with acetone and then ethanol of CMOS grade in a sonication bath for 5 min each. After rinsing with deionized (DI) water, the Si samples were immersed in piranha solution (98% H2SO4 : 30% H2O2, 3:1 (v/v)) for 30 min at 90 °C, followed by rinsing with DI water again. The wafers were then etched in 2.5% HF solution for 90 s to remove the oxide layer and render a hydrogen-terminated surface. The hydrogen-terminated samples were quickly rinsed in DI water, blown dry with nitrogen, and immediately taken on to further modification.

Thermal hydrosilylation and surface functionalization
First, 5-hexenyl acetate was grafted onto Si by a hydrosilylation reaction. The freshly etched Si (100) samples were immediately transferred to a deoxygenated sample of neat 5-hexenyl acetate in a dry Schlenk tube under an Ar atmosphere. The reaction was then conducted at 95 °C under Ar for 16–19 h. The resulting samples were copiously rinsed with ethanol, dichloromethane, and acetone, and then blown dry under a stream of N2. Subsequently, the acetate-terminated surface was immersed in dry tetrahydrofuran (THF) containing 5% (w/v) LiAlH4 and refluxed at 70 °C for 2 h.
After rinsing with DI water and ethanol, the hydroxyl-terminated samples were immersed in 0.5 M hydrochloric acid for 20 min to remove any Al residues. In the presence of the bifunctional crosslinker DCC (40 mM), the hydroxyl-terminated samples were reacted with mono-n-dodecyl phosphate (5 mM) in dimethylformamide (DMF) at room temperature for 48 h, affording the phosphorus-containing functionalization. The samples were washed carefully with ethanol, dichloromethane, and acetone to remove any remaining coupling reagents and dried under an N2 stream for further treatments.

Silicon dioxide deposition and thermal annealing
SiO2 capping layers on silicon were produced by the spin-on-glass (SOG) method with IC1-200 polysiloxane-based coating material (Futurrex Inc., USA). Briefly, the silicon wafer was spin-coated with IC1-200 at 3000 rpm for 40 s, followed by baking at 100 °C on a hot plate for 60 s, at 200 °C for 60 s, and at 400 °C in Ar for 30 min. After the formation of the capping layers, the functionalized silicon samples were thermally annealed at 1050 °C for 120 s, with a temperature ramp of 100 °C min⁻¹ starting from 800 °C, in an Ar environment in a tube furnace (Thermo Scientific Lindberg/Blue, USA). After annealing, the doped Si samples were immersed in BOE (buffered oxide etchant) solution (HF:NH4F = 6:1, CMOS grade, J.T. Baker Co., USA) to remove the SiO2 layer.

Surface characterization
XPS was carried out on a Kratos AXIS UltraDLD spectrometer with a monochromated Al Kα source (1486.6 eV), a hybrid-magnification-mode analyzer, and a multichannel detector at a takeoff angle of 90° from the plane of the sample surface. The analysis chamber pressure was < 5 × 10⁻⁹ Torr. All energies are reported as binding energies in eV and referenced to the C 1s signal (corrected to 285.0 eV) for aliphatic carbon on the analyzed sample surface. Survey scans were carried out with a 250 ms dwell time and an analyzer pass energy of 160 eV. High-resolution scans were run with a 0.1 eV step size, a dwell time of 100 ms, and the analyzer pass energy set to 40 eV. After background subtraction using the Shirley routine, the XPS spectra were fitted with a convolution of Lorentzian and Gaussian profiles using the software CasaXPS. Secondary-ion mass spectrometry (SIMS) was conducted by Evans Analytical Group, NJ, USA, to obtain the dopant profile in the top 500 nm of the substrate.

Van der Pauw and Hall measurements
The metal contacts on silicon for electrical measurements were realized by evaporating 200-nm aluminum or aluminum/gold films in a thermal evaporation system (Angstrom Engineering, Canada). Van der Pauw measurements were performed on square-shaped samples on which the metal contacts are located exactly at the four corners. The custom-made probe station, shrouded in a completely dark metal box, is equipped with four solid tungsten probe tips (tip size < 1 μm). Keithley 2400 source meter units and a custom-written LabVIEW script were employed to generate and collect the current/voltage data. Hall measurements were performed on the same square-shaped samples, pre-mounted onto a dc resistivity sample holder via wire bonding, in a Physical Property Measurement System (PPMS, Quantum Design, USA).

DLTS
Schottky diodes for the DLTS measurements were fabricated by depositing a circular Au electrode, 1 mm in diameter and 150 nm thick, on top of the silicon and a 150-nm-thick Al film on the backside via thermal evaporation (Angstrom Engineering, Canada). The circular Au electrode was deposited with the assistance of lithography. The diodes were then mounted onto a TO5 sample holder.
Conventional DLTS in boxcar mode was applied to obtain better resolution. Data were collected using the Laplace DLTS software and plotted as standard DLTS plots. The carbon and phosphorus SIMS profiling was conducted at EAG Laboratories, USA, under a high vacuum of about 3 × 10⁻¹¹ Torr. A focused Cs+ primary ion beam was applied for sputtering, which facilitates high yields of secondary ions of phosphorus and carbon. Before sputtering, the samples were cleaned with oxygen plasma to remove possible carbon contamination from air.

Data availability
The data that support the findings of this study are available from the authors on reasonable request; see author contributions for specific data sets.

References
1. Ho, J. C. et al. Wafer-scale, sub-5 nm junction formation by monolayer doping and conventional spike annealing. Nano Lett. 9, 725–730 (2009).
2. Ho, J. C. et al. Nanoscale doping of InAs via sulfur monolayers. Appl. Phys. Lett. 95, 072108 (2009).
3. Zhang, S., Sugioka, K., Fan, J., Toyoda, K. & Zou, S. Studies on excimer laser doping of GaAs using sulphur adsorbate as dopant source. Appl. Phys. A 58, 191–195 (1994).
4. Cho, K. et al. Molecular monolayers for conformal, nanoscale doping of InP nanopillar photovoltaics. Appl. Phys. Lett. 98, 203101 (2011).
5. Collins, G. & Holmes, J. D. Chemical functionalisation of silicon and germanium nanowires. J. Mater. Chem. 21, 11052–11069 (2011).
6. Arduca, E. et al. Synthesis and characterization of P δ-layer in SiO2 by monolayer doping. Nanotechnology 27, 075606 (2016).
7. Long, B. et al. In Proc. 20th International Conference on Ion Implantation Technology (IIT) (IEEE, Portland, OR, USA, 2014).
8. Ho, J. C. et al. Controlled nanoscale doping of semiconductors via molecular monolayers. Nat. Mater. 7, 62–67 (2008).
9. Ye, L. et al. Controlling the dopant dose in silicon by mixed-monolayer doping. ACS Appl. Mater. Interfaces 7, 3231–3236 (2015).
10. Kong, E. Y.-J., Guo, P., Gong, X., Liu, B. & Yeo, Y.-C. Toward conformal damage-free doping with abrupt ultrashallow junction: formation of Si monolayers and laser anneal as a novel doping technique for InGaAs nMOSFETs. IEEE Trans. Electron Devices 61, 1039–1046 (2014).
11. Ang, K.-W. et al. In Proc. 2011 IEEE International Electron Devices Meeting (IEDM) (IEEE, Washington, DC, USA, 2011).
12. Shimizu, Y. et al. Behavior of phosphorous and contaminants from molecular doping combined with a conventional spike annealing method. Nanoscale 6, 706–710 (2014).
13. Longo, R. C., Cho, K., Schmidt, W. G., Chabal, Y. J. & Thissen, P. Monolayer doping via phosphonic acid grafting on silicon: microscopic insight from infrared spectroscopy and density functional theory calculations. Adv. Funct. Mater. 23, 3471–3477 (2013).
14. Caccamo, S. et al. Silicon doped by molecular doping technique: role of the surface layers of doped Si on the electrical characteristics. Mater. Sci. Semicond. Process. 42, 200–203 (2016).
15. Goesele, U., Laveant, P., Scholz, R., Engler, N. & Werner, P. Diffusion engineering by carbon in silicon. In MRS Proceedings (Cambridge University Press, 2011).
16. Gürer, E. & Benson, B. Multiconfigurational carbon-group V pair defects in silicon. In MRS Proceedings (Cambridge University Press, 1989).
17. Lang, D. Deep-level transient spectroscopy: a new method to characterize traps in semiconductors. J. Appl. Phys. 45, 3023–3032 (1974).
18. Pensl, G., Schulz, M., Hölzlein, K., Bergholz, W. & Hutchison, J. New oxygen donors in silicon. Appl. Phys. A 48, 49–57 (1989).
19. Endrös, A., Krühler, W. & Koch, F. Electronic properties of the hydrogen-carbon complex in crystalline silicon. J. Appl. Phys. 72, 2264–2271 (1992).
20. Asom, M., Benton, J., Sauer, R. & Kimerling, L. Interstitial defect reactions in silicon. Appl. Phys. Lett. 51, 256–258 (1987).
21. Gouzman, I., Dubey, M., Carolus, M. D., Schwartz, J. & Bernasek, S. L. Monolayer vs. multilayer self-assembled alkylphosphonate films: X-ray photoelectron spectroscopy studies. Surf. Sci. 600, 773–781 (2006).
22. Dubey, M., Gouzman, I., Bernasek, S. L. & Schwartz, J. Characterization of self-assembled organic films using differential charging in X-ray photoelectron spectroscopy. Langmuir 22, 4649–4653 (2006).
23. van der Pauw, L. J. A method of measuring the resistivity and Hall coefficient of lamellae of arbitrary shape. Philips Tech. Rev. 20, 220–224 (1958).
24. Guan, B. et al. Nanoscale nitrogen doping in silicon by self-assembled monolayers. Sci. Rep. 5, 12641 (2015).
25. Pearson, G. L. & Bardeen, J. Electrical properties of pure silicon and silicon alloys containing boron and phosphorus. Phys. Rev. 75, 865–883 (1949).
26. Altermatt, P., Schenk, A. & Heiser, G. A simulation model for the density of states and for incomplete ionization in crystalline silicon. I. Establishing the model in Si:P. J. Appl. Phys. 100, 113714 (2006).
27. Schenk, A., Altermatt, P. P. & Schmithusen, B. In Proc. International Conference on Simulation of Semiconductor Processes and Devices (IEEE, Monterey, CA, USA, 2006).
28. Nylandsted Larsen, A. & Mesli, A. Chapter two: electron and proton irradiation of silicon. In Semiconductors and Semimetals (Elsevier, London, 2015).
29. Fukata, N. et al. Hydrogen molecules and hydrogen-related defects in crystalline silicon. Phys. Rev. B 56, 6642–6647 (1997).
30. Suezawa, M., Sumino, K., Harada, H. & Abe, T. The nature of nitrogen-oxygen complexes in silicon. Jpn. J. Appl. Phys. 27, 62 (1988).
31. Song, L. W. & Watkins, G. D. EPR identification of the single-acceptor state of interstitial carbon in silicon. Phys. Rev. B 42, 5759–5764 (1990).
32. Zhan, X. & Watkins, G. Electron paramagnetic resonance of multistable interstitial-carbon–substitutional-group-V-atom pairs in silicon. Phys. Rev. B 47, 6363 (1993).
33. Tokumaru, Y., Okushi, H., Masui, T. & Abe, T. Deep levels associated with nitrogen in silicon. Jpn. J. Appl. Phys. 21, L443 (1982).
34. Mesli, A., Kringhøj, P. & Nylandsted Larsen, A. Pinning behavior of gold-related levels in Si using Si1−xGex alloy layers. Phys. Rev. B 56, 13202–13217 (1997).
35. Rangel-Kuoppa, V.-T., Tonkikh, A., Werner, P. & Jantsch, W. Electron and hole deep levels related to Sb-mediated Ge quantum dots embedded in n-type Si, studied by deep level transient spectroscopy. Appl. Phys. Lett. 102, 232106 (2013).
36. Rangel-Kuoppa, V.-T., Alexander, T., Nikolay, Z., Christian, E. & Peter, W. Valence band offset at the Si/SiSn interface by applying deep level transient spectroscopy. Nanotechnology 27, 075705 (2016).
37. Rangel-Kuoppa, V.-T. Determination of conduction band offset between strained CdSe and ZnSe layers using deep level transient spectroscopy. Appl. Phys. Lett. 100, 252110 (2012).
38. Ye, L. et al. Boosting the boron dopant level in monolayer doping by carboranes. ACS Appl. Mater. Interfaces 7, 27357–27361 (2015).

Acknowledgements
The work is supported by the national "1000 Young Scholars" program of the Chinese central government, the National Science Foundation of China (grant number 21503135), the SJTU-UM Collaborative Research Program, and the "Innovative Research Plan" of the Shanghai Bureau of Education.
XPS and Hall effect measurements were performed at the Instrumental Analysis Center (IAC), and some microfabrication processes were carried out at the Center for Advanced Electronic and Material Devices (AEMD), Shanghai Jiao Tong University (SJTU). The authors thank Dr. Limin Sun and Ligang Zhou at the IAC of SJTU for valuable discussions about the XPS and Hall effect measurements.

Author information
Xuejiao Gao and Bin Guan contributed equally to this work.
University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China: Xuejiao Gao, Bin Guan, Kaixiang Chen & Yaping Dan.
Institut Matériaux Microélectronique Nanosciences de Provence, UMR 6242 CNRS, Université Aix-Marseille, 13397 Marseille Cedex 20, France: Abdelmadjid Mesli.

Author contributions
Y.D. conceived the idea and directed the research. X.G., B.G. and Y.D. wrote the manuscript. X.G. and B.G. prepared the samples and carried out the electrical characterizations. A.M. performed the DLTS measurements. X.G. and A.M. analyzed the DLTS data. K.C. simulated the energy band bending and the redistribution of electrons ionized from phosphorus dopants. All authors reviewed the manuscript.

Correspondence to Yaping Dan. The authors declare no competing financial interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

https://doi.org/10.1038/s41467-017-02564-3
Solve the differential equation $\frac{dy}{dx} - xy = 1$ given $y(0) = 1$

The given differential equation is a first-order linear differential equation of the form $\frac{dy}{dx} + Py = Q$. The integrating factor is
$$e^{\int P\,dx} = e^{-\int x\,dx} = e^{-\frac{x^2}{2}}$$
so the solution is
$$y\,e^{-\frac{x^2}{2}} = \int e^{-\frac{x^2}{2}}\,dx + c$$
But how do I integrate the second term?

Tags: calculus, ordinary-differential-equations (asked by lioness99a)

Comments:
– Moo: Just call it the error function – en.wikipedia.org/wiki/Error_function
– haqnatural: It is $\int e^{-\frac{x^2}{2}}\,dx = \sqrt{\frac{\pi}{2}}\,\operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right) + C$.
– Dr. Sonnhard Graubner: The solution is given by $y(x) = c_1 e^{\frac{x^2}{2}} + \sqrt{\frac{\pi}{2}}\,e^{\frac{x^2}{2}}\,\operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)$.

Answer (projectilemotion): So far, you've solved the differential equation correctly. The integral you are considering does not have an elementary antiderivative. Regardless, we can still obtain an exact solution in terms of the error function. The considered integral is:
$$\int e^{-\frac{x^2}{2}}\,dx$$
Now, substitute
$$u = \frac{x}{\sqrt{2}} \implies du = \frac{1}{\sqrt{2}}\,dx \implies dx = \sqrt{2}\,du$$
This gives an integral which is contained in the definition of the error function:
$$\int e^{-\frac{x^2}{2}}\,dx = \sqrt{2}\int e^{-u^2}\,du$$
The definition of the error function is:
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt$$
It follows from the definition that:
$$\int e^{-u^2}\,du = \frac{\sqrt{\pi}}{2}\operatorname{erf}(u) + C$$
Can you continue? From solving the integral, you can find the general solution to your ODE, and then the particular solution with $y(0) = 1$.
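As a quick cross-check of the approach in the answer, the whole problem can be handed to a computer algebra system. This is a minimal sketch using SymPy (not part of the original thread); the erf form of the output agrees with the comments above up to algebraic rewriting.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' - x*y = 1 with the initial condition y(0) = 1
ode = sp.Eq(y(x).diff(x) - x * y(x), 1)
solution = sp.dsolve(ode, y(x), ics={y(0): 1})
print(solution)
# Expected, up to equivalent rewriting:
#   y(x) = exp(x**2/2) * (1 + sqrt(pi/2) * erf(x/sqrt(2)))
```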
Results for 'Ernst Pöppel'

Socialist Socrates: Ernst Bloch in the GDR. Anna-Sabine Ernst & Gerwin Klinger - 1997 - Radical Philosophy 84:6-21.

Ist die Moral objektiv? Eine Auseinandersetzung mit Thesen von Gerhard Ernst. Gerhard Ernst - 2010 - Zeitschrift für Philosophische Forschung 64 (1):84-90.

Ocean of Eloquence: Tsong kha pa's Commentary on the Yogācāra Doctrine of Mind. Paul J. Griffiths, Gareth Spahram & Tsong kha pa - 1995 - Journal of the American Oriental Society 115 (1):158.

In the Jungle of Time: The Concept of Identity as a Way Out. Bin Zhou, Ernst Pöppel & Yan Bao - 2014 - Frontiers in Psychology 5.

Sadness is Unique: Neural Processing of Emotions in Speech Prosody in Musicians and Non-Musicians. Mona Park, Evgeny Gutyrchik, Lorenz Welker, Petra Carl, Ernst Pöppel, Yuliya Zaytseva, Thomas Meindl, Janusch Blautzik, Maximilian Reiser & Yan Bao - 2014 - Frontiers in Human Neuroscience 8.

Nachweis der Wiedergeburt: Prajñāsenas 'Jig rten pha rol sgrub pa, ein tibetischer Traktat aus Dunhuang. Mark Tatz & Ernst Steinkellner - 1991 - Journal of the American Oriental Society 111 (1):204.

Nachweis der Wiedergeburt: Prajñāsenas 'Jig rten pha rol sgrub pa: Ein früher tibetischer Traktat aus Dunhuang, Volume 2. Ernst Steinkellner - 1993 - Philosophy East and West 43 (1):159-159.

Ästhetik versus Kunstgeschichte?: Ernst Cassirer als Vermittler in einer bis heute offenen Kontroverse zur Relevanz der Kunst für das Leben. Martina Sauer - 2018 - In Stefan Niklas & Thiemo Breyer (eds.), Ernst Cassirer in Systematischen Beziehungen: Zur Kritisch-Kommunikativen Bedeutung Seiner Kulturphilosophie. De Gruyter, pp. 239-260.
Aesthetics versus Art History? Ernst Cassirer as Mediator in an Ongoing Controversy on the Relevance of Art for Life. Against the background of Ernst Cassirer's cultural philosophy, art studies are to be classified as cultural studies. Central to this is Cassirer's philosophy as the basis for answering a question that has been posed by the methods of formal aesthetics and iconology since the 19th century but is still unanswered today, namely the question of the relevance of the arts for life. In this way, aesthetics/Kunstwissenschaft and art history gain a new meaning as cultural studies beyond their achievements in the humanities and in epistemology.

What Does It Mean to Orient Oneself in Science? On Ernst Mach's Pragmatic Epistemology. Pietro Gori - forthcoming - In Friedrich Stadler (ed.), Ernst Mach - Life, Work, Influence. Dordrecht, Netherlands: pp. 525-536.
The paper aims to investigate some aspects of Ernst Mach's epistemology in the light of the problem of human orientation in relation to the world (Weltorientierung), which is a main topic of Western philosophy since Kant. As will be argued, Mach was concerned with that problem, insofar as he developed an original pragmatist epistemology. In order to support my argument, I firstly investigate whether Mach defended a nominalist or a realist account of knowledge and compare his view to those elaborated by other pragmatist thinkers, such as W. James, H. Vaihinger and H. Poincaré.
Secondly, the question of what it means, for Mach, to orient ourselves in science is addressed. Finally, it will be argued that, although Mach tried to keep his epistemology restricted to a merely operational and economical account of science, that question involves the wider plane of practical philosophy.

Ernst Mach dal punto di vista storico-critico. Pietro Gori - 2018 - In Ernst Mach tra scienza e filosofia. Pisa: pp. 11-31.
The article sets out to approach the figure of Ernst Mach by following the same historical-critical methodology that he himself used. That methodology makes it possible to contextualize his figure and work within a significant moment in the history of Western philosophy, but also to redefine some fundamental concepts of his thought. A further aim of the research is to look from a different perspective at the question of the philosophical value of Mach's epistemological work, showing how that value can be affirmed without any need to step outside the boundaries he himself traced.

Was sind die Objekte der Wahrnehmung?: Ernst Cassirers Antwort auf die analytische Wahrnehmungstheorie. Tobias Endres - 2018 - In Stefan Niklas & Thiemo Breyer (eds.), Ernst Cassirer in Systematischen Beziehungen: Zur Kritisch-Kommunikativen Bedeutung Seiner Kulturphilosophie. Berlin: De Gruyter, pp. 25-46.
What are the Objects of Perception? Ernst Cassirer's Response to Analytic Theories of Perception. On the basis of its third volume, the Phenomenology of Knowledge (1929), Cassirer's principal work, the Philosophy of Symbolic Forms (1923-29), can be read as a phenomenology of perception. That is to say, Cassirer not only starts from the fact of multiple forms of cultural expression to reconstruct their transcendental conditions of objectification, but at once traces their underlying forms of perceptive subjectivity. Hence, a holistic theory of subjective and objective spirit, to which Cassirer's philosophy boils down, moves between exactly those two poles of perception and cultural expression. Starting from this interpretation, the article asks whether this theory can contribute to criticism of recent theories of perception within the tradition of analytic philosophy. At the heart of things is the question: what should we actually conceive as the objects of perception? In most of its debates, analytic philosophy finds itself in the stranglehold of an internalism-externalism dichotomy that rests upon an unsettled understanding of objectivity. By contrast, Cassirer's understanding of objectivity as objectification allows us to reformulate the question of the objects of perception, and hence to undermine the above dichotomy. The main points of reference of the critical examination are Peter Strawson's Perception and its Objects (1979) and Tim Crane's What is the Problem of Perception (2005).
It will be shown that the foundation of Cassirer's theory of perception, the distinction between perception of things and perception of expression, provides exactly the critical capability to move beyond the unsatisfactory alternative between disjunctivism and intentionalism that Crane diagnoses in the contemporary debate, an alternative which amounts to a new version of the twentieth-century controversy between direct realism and sense-data theories. Cassirer's theory enables one to reconcile the directedness of perception with the representational capacities of the human mind.

The Reception of Ernst Mach in the School of Brentano. Denis Fisette - 2018 - Hungarian Philosophical Review 69 (4):34-49.
This paper is about the reception of Ernst Mach by Brentano and his students in Austria. I shall outline the main elements of this reception, starting with Brentano's evaluation, in his lectures on positivism, of Mach's theory of sensations. Secondly, I shall comment on the early reception of Mach by Brentano's pupils in Prague. The third part bears on the close relationship that Husserl established between his phenomenology and Mach's descriptivism. I will then briefly examine Mach's contribution to the controversy on gestalt qualities. The fifth part bears on Stumpf's debate with Mach on psychophysical relations, and I shall conclude with Husserl's criticism of Mach's alleged logical psychologism.

The Proximate/Ultimate Distinction in the Multiple Careers of Ernst Mayr. John Beatty - 1994 - Biology and Philosophy 9 (3):333-356.
Ernst Mayr's distinction between ultimate and proximate causes is justly considered a major contribution to philosophy of biology. But how did Mayr come to this philosophical distinction, and what role did it play in his earlier scientific work? I address these issues by dividing Mayr's work into three careers or phases: 1) Mayr the naturalist/researcher, 2) Mayr the representative of and spokesman for evolutionary biology and systematics, and more recently 3) Mayr the historian and philosopher of biology. If we want to understand the role of the proximate/ultimate distinction in Mayr's more recent career as a philosopher and historian, then it helps to consider his earlier use of the distinction, in the course of his research and in his promotion of the professions of evolutionary biology and systematics. I believe that this approach would also shed light on some other important philosophical positions that Mayr has defended, including the distinction between essentialism and population thinking.

The Polemic Between Leonard Nelson and Ernst Cassirer on the Critical Method in the Philosophy. Tomasz Kubalica - 2016 - Folia Philosophica 35:53-69.
The subject of the paper is a polemic between Leonard Nelson and Ernst Cassirer mainly concerning the understanding of the critical method in philosophy. Nelson refutes the accusation of psychologism and attacks the core of the philosophy of the Marburg School of Neo-Kantianism. In response to those allegations, Cassirer feels obliged to defend the position of his masters and performs this task brilliantly. The present paper considers similarities and differences in the positions of both sides in this debate. I
try to evaluate the arguments of both sides and argue that they took basically the same positions, while the existing discrepancies did not justify such an intense polemic. If the disputing sides had approached the discussion in a less emotional way, it could have led to substantive and interesting conclusions.

Ernst Mayr, Naturalist: His Contributions to Systematics and Evolution [REVIEW]. Walter J. Bock - 1994 - Biology and Philosophy 9 (3):267-327.
Ernst Mayr's scientific career continues strongly 70 years after he published his first scientific paper in 1923. He is primarily a naturalist and ornithologist, which has influenced his basic approach in science and later in the philosophy and history of science. Mayr studied at the Natural History Museum in Berlin with Professor E. Stresemann, a leader in the most progressive school of avian systematics of the time. The contacts gained through Stresemann were central to Mayr's participation in a three-year expedition to New Guinea and the Solomons, and to the offer of a position in the Department of Ornithology, American Museum of Natural History, beginning in 1931. At the AMNH, Mayr was able to blend the best of the academic traditions of Europe with those of North America in developing a unified research program in biodiversity embracing systematics, biogeography and nomenclature. His tasks at the AMNH were to curate and study the huge collections amassed by the Whitney South Sea Expedition plus the just-purchased Rothschild collection of birds. These studies provided Mayr with the empirical foundation essential for his 1942 Systematics and the Origin of Species and his subsequent theoretical work in evolutionary biology, as well as all his later work in the philosophy and history of science. Without a detailed understanding of Mayr's empirical systematic and biogeographic work, one cannot possibly comprehend fully his immense contributions to evolutionary biology and his later analyses in the philosophy and history of science.

Arguments by Parallels in the Epistemological Works of Phya Pa Chos Kyi Seng Ge. Pascale Hugon - 2008 - Argumentation 22 (1):93-114.
The works of the Tibetan logician Phya pa Chos kyi seng ge (1109–1169) make abundant use of a particular type of argument that I term 'argument by parallels'. Their main characteristic is that the instigator of the argument, addressing a thesis in a domain A, introduces a parallel thesis in an unrelated domain B. And in the ensuing dialogue, each of the instigator's statements consists in replicating his interlocutor's previous assertion, mutatis mutandis, in the other domain (A or B). I show that such a dialogue involves two parallel arguments that develop in an intersecting zigzag pattern, and discuss the principles involved in the establishment of the conclusion from the perspective of parity of reasoning and analogical argument. I examine the overall rhetorical strategy directing the use of arguments by parallels and the pedagogical and explanatory functions they can serve. I also evaluate the plausibility of their use in Phya pa Chos kyi seng ge's works mirroring a contemporary practice of oral debate, and reflect on the status of such arguments in the framework of Indo-Tibetan logic.
Systematics and the Origin of Species From the Viewpoint of a Botanist: Edgar Anderson Prepares the 1941 Jesup Lectures with Ernst Mayr [REVIEW]. Kim Kleinman - 2013 - Journal of the History of Biology 46 (1):73-101.
The correspondence between Edgar Anderson and Ernst Mayr leading into their 1941 Jesup Lectures on "Systematics and the Origin of Species" addressed population thinking, the nature of species, the relationship of microevolution to macroevolution, and the evolutionary dynamics of plants and animals, all central issues in what came to be known as the Evolutionary Synthesis. On some points they found ready agreement; for others they forged only a short-term consensus. They brought two different working styles to this project, reflecting their different appreciations of what was possible at this point in evolutionary studies. For Mayr, it was a focused project with definitive short-term conclusions imminent, while Anderson viewed it as an episode in an ongoing historical process that, while exciting and suggestive, remained open-ended. Thus, Mayr and Anderson represent two distinct perspectives on the Evolutionary Synthesis in formation; by understanding both of their points of view, we can grasp more fully the state of evolutionary theory at this key moment.

An Early Bka'-Gdams-Pa Madhyamaka Work Attributed to Atiśa Dīpaṃkaraśrījñāna. James B. Apple - 2016 - Journal of Indian Philosophy 44 (4):619-725.
Although Atiśa is famous for his journey to Tibet and his teaching there, his teachings of Madhyamaka are not extensively commented upon in the works of known and extant indigenous Tibetan scholars. Atiśa's Madhyamaka thought, if even discussed, is minimally acknowledged in recent modern scholarly overviews or sourcebooks on Indian Buddhist thought. The following annotated translation provides a late eleventh-century Indo-Tibetan Madhyamaka teaching on the two realities attributed to Atiśa Dīpaṃkaraśrījñāna entitled A General Explanation of, and Framework for Understanding, the Two Realities. The text furnishes an exposition of the Middle Way thought of Nāgārjuna based on an exegesis of conventional reality and ultimate reality within the framework of Mahāyāna path structures found in texts attributed to Maitreyanātha. The General Explanation fills an important gap in the historical knowledge of Madhyamaka teachings in eleventh-century India and Tibet. The text presents a Madhyamaka teaching brought to Tibet by Atiśa and provides previously unknown evidence for the type of pure Madhyamaka teachings that circulated among the communities of early followers of Atiśa. These teachings were disseminated before the rise of the early Bka'-gdams-pa monastery of Gsang-phu ne'u-thog and its debating traditions that, particularly beginning in the twelfth century, placed emphasis on the merger of Madhyamaka and Epistemology.

Phya Pa Chos Kyi Seng Ge on Argumentation by Consequence (Thal 'gyur): The Nature, Function, and Form of Consequence Statements. Pascale Hugon - 2013 - Journal of Indian Philosophy 41 (6):671-702.
This paper presents the main aspects of the views of the Tibetan logician Phya pa Chos kyi seng ge (1109–1169) on argumentation "by consequence" (thal 'gyur, Skt.
prasaṅga), based on his exposition of the topic in the fifth chapter of his Tshad ma yid kyi mun sel and on a parallel excursus in his commentary on Dharmakīrti's Pramāṇaviniścaya. It aims primarily at circumscribing the nature and function of consequences (thal 'gyur/thal ba) for this author, in particular the distinction between "proving consequences" and "refuting consequences", and the form prescribed for their enunciation in the context of debate. In addition to pointing out differences with the systems adopted by his predecessors, contemporaries and successors, the paper also discusses some of the similarities and differences between Phya pa's understanding of argumentation by consequence and the notion of reductio ad absurdum in Western logic.

Phya Pa Chos Kyi Seng Ge and His Successors on the Classification of Arguments by Consequence Based on the Type of the Logical Reason. Pascale Hugon - 2016 - Journal of Indian Philosophy 44 (5):883-938.
The Tibetan Buddhist logician Phya pa Chos kyi seng ge devoted a large part of his discussion on argumentation to arguments by consequence. Phya pa distinguishes in his analysis arguments by consequence that merely refute the opponent and arguments by consequence that qualify as probative. The latter induce a correct direct proof which corresponds to the reverse form of the argument by consequence. This paper deals with Phya pa's classification of probative consequences based on the type of the logical reason involved. I first establish the basis of Phya pa's classification, the typology of logical reasons in inference-for-oneself, with special attention to logical reasons consisting in the 'apprehension of something incompatible [with the negandum]' and, among them, the specific case of the 'apprehension of the cause of something incompatible [with the negandum]'. The treatment of the latter is shown to be instrumental in Phya pa's classification, as well as in explaining the divergences that occur in the models adopted by his successors, such as gTsang nag pa brTson ʾgrus seng ge and mTshur ston gZhon nu seng ge. Turning to Phya pa's effective application of this typology when he himself resorts to arguments by consequence, I examine Phya pa's rephrasing, in the form of four arguments by consequence, of the discussion on the relation between the two realities found in the Saṃdhinirmocanasūtra, and relate it to a parallel discussion in an earlier Madhyamaka work by rGya dmar ba Byang chub grags. I compare the variant versions of these four arguments in three Madhyamaka works of Phya pa and show that the differences pertaining to the identification of the type of the logical reason result from apparently insignificant variations in the formulation of each of the arguments. In the conclusion, I discuss the potential philosophical or practical interest of such a classification.

"As It is Said in a Sutra": Freedom and Variation in Quotations From the Buddhist Scriptures in Early Bka'-Gdams-Pa Literature. Ulrike Roesler - 2015 - Journal of Indian Philosophy 43 (4-5):493-510.
The phyi dar or 'later dissemination' of Buddhism in Tibet is known to be a crucial formative period of Tibetan Buddhism; yet many questions still wait to be answered: How did Tibetan Buddhist teachers of this time approach the Buddhist scriptures? Did they quote from books or from memory? Did they study Buddhism through original Sūtras or exegetical literature?
To what degree was the text of the scriptures fixed and standardised before the Bka' 'gyur and the Bstan 'gyur were compiled? In search of some answers to questions such as these, the present article focuses on the gzhung pa or 'scriptural tradition' of the Bka'-gdams-pa school of Tibetan Buddhism. Their works contain quotations from the Indian Buddhist scriptures that sometimes differ markedly from the mainstream editions of the Bka' 'gyur and Bstan 'gyur. There are several possible explanations for such discrepancies: the Tibetan authors might be quoting a different Tibetan translation that was later discarded by the redactors of the Tibetan canon; they might be quoting from a secondary source such as a commentary or a Buddhist anthology; or they might be quoting from memory, changing the text either deliberately or by accident. Giving examples from works of the early Bka'-gdams-pa masters, this article discusses how textual deviations from the canonical versions can be explained. It will thereby provide insights into the way the Indian Buddhist scriptures were studied and transmitted in the Tibetan Buddhist tradition around the 11th–13th centuries.

The Limits of Experience and Explanation: F. A. Lange and Ernst Mach on Things in Themselves. Scott Edgar - 2013 - British Journal for the History of Philosophy 21 (1):100-121.
In the middle of the nineteenth century, advances in experimental psychology and the physiology of the sense organs inspired so-called "Back to Kant" Neo-Kantians to articulate robustly psychologistic visions of Kantian epistemology. But their accounts of the thing in itself were fraught with deep tension: they wanted to conceive of things in themselves as the causes of our sensations, while their own accounts of causal inference ruled that claim out. This paper diagnoses the source of that problem in the views of one Neo-Kantian, F. A. Lange, and argues that it is solved only by Ernst Mach. No less than Lange and other Neo-Kantians, Mach was inspired to develop a psychologistic account of the foundations of knowledge, but his account also includes a coherent denial of the existence of things in themselves. Finally, this paper uses this account of Lange and Mach on things in themselves to illuminate Mach's relation to a certain strain of the Neo-Kantian philosophy of his own time: his views constitute a more fully coherent version of the psychologistic theory of knowledge that the Back to Kant figures tried to articulate.

Ernst Cassirer: Ein Philosoph der Europäischen Moderne. Oswald Schwemmer (ed.) - 1997 - Oldenbourg Akademieverlag.
In this book Ernst Cassirer is presented as a thinker who is intellectually rooted in the philosophical tradition while at the same time facing the challenges of European modernity: on the one hand, the entrenchment of a universal claim of reason validated above all by the sciences; on the other, the recognition of a plurality of cultural worlds. Through the analysis of some basic concepts of Cassirer's project of a "Philosophy of Symbolic Forms" (into which the works from Cassirer's Nachlass are also drawn), the author attempts to make visible the tensions that give this project its modernity and keep it open to further development. "Die Vielfalt der symbolischen Welten und die Einheit des Geistes", "Der Werkbegriff im Denken Cassirers", "Ausdruck und symbolische Prägnanz", "Die ethische Dimension des symbolischen Handelns", and "Das Denken der Renaissance und die Wurzeln der Moderne" are the titles under which, each from a particular angle, a view of the whole of Cassirer's thought is to be gained. That this whole is not meant to be a closed "system" but rather to open up ways of understanding our intellectual and cultural situation is shown by the author in a concluding analysis, which also attempts to set out the significance of Cassirer's project for contemporary philosophical discussion.

Ernst H. Gombrich on Abstract Painting. Elisa Caldarola - 2015 - Aisthesis: Pratiche, Linguaggi E Saperi Dell'Estetico 8 (2):77-86.
Ernst H. Gombrich criticized abstract painting in several remarks scattered throughout his wide oeuvre. I argue that his view of abstract paintings is coherent with the account of pictorial representation he put forward in Art and Illusion, show some limits of that view, and maintain that, although several of Gombrich's criticisms of abstract painting should be rejected, some of his remarks are insightful and worthy of consideration.

Per una critica dell'irragionevolezza. Sul concetto di funzione simbolica in Ernst Cassirer e Aby Warburg. Daniela Sacco - 2018 - Aisthesis: Pratiche, Linguaggi E Saperi Dell'Estetico 11 (1):181-192.
The fruitful intellectual exchange between Aby Warburg and Ernst Cassirer revolves around the concept of "symbolic function". In particular, the concept of function that emerges from Cassirer's early volume, Substance and Function, can be applied to the analysis of the compositional principle that gives shape to the architecture of Warburg's Mnemosyne Atlas.

Ernst Mayr: Biologist-Historian [REVIEW]. Richard W. Burkhardt - 1994 - Biology and Philosophy 9 (3):359-371.
Ernst Mayr's historical writings began in 1935 with his essay "Bernard Altum and the territory theory" and have continued up through his monumental Growth of Biological Thought (1982) and his One Long Argument: Charles Darwin and the Genesis of Modern Evolutionary Thought (1991). Sweeping in their scope, forceful in their interpretation, enlisted on behalf of the clarification of modern concepts and of a broad view of biology, these writings provide both insights and challenges for the historian of biology. Mayr's general intellectual formation was guided by the German Bildung ideal, with its emphasis on synthetic and comprehensive knowledge. His understanding of how to write history was inspired further by the example of the historian of ideas Arthur Lovejoy. Some strengths and limitations of this approach are explored here through attention to Mayr's treatment of the French biologist J.-B. Lamarck. It is contended that Mayr's contributions to the history of biology are not restricted to his own very substantial historical writings but also include his encouragement of other scholars, his development of an invaluable archive of scientific correspondence, and his insistence that historians who write about evolution and related subjects acquire an adequate understanding of the principles of Darwinian biology.

Ernst Mach on the Self: The Deconstruction of the Ego as an Attempt to Avoid Solipsism. Markus Schrenk - 2011 - Deutscher Kongress für Philosophie, 11.–15.
In his Contributions to the Analysis of the Sensations (Mach 1885) the phenomenalist philosopher Ernst Mach confronts us with a difficulty: "If we regard the Ego as a real unity, we become involved in the following dilemma: either we must set over against the Ego a world of unknowable entities […] or we must regard the whole world, the Egos of other people included, as comprised in our own Ego." (Mach 1885: 21) In other words, if we start from a phenomenalist viewpoint, i.e., if we believe that the manifold of sensations we are confronted with is ontologically fundamental—as Mach clearly does: "For us, colors, sounds, spaces, times,… are the ultimate elements" (Mach 1885: 23)—then we are in danger of ending up in solipsism. Unless, that is, we assume some underlying thing-in-itself substratum from which matter, we ourselves, and all the others emanate. The only other alternative seems to be—and Mach advertises it vehemently, for he denies any "monstrous notion of a thing-in-itself" (Mach 1885: 6)—that we get rid of the Ego. For, if there is no Self in the first place, then the question whether there are others dissolves. To put it the other way round, it is OK that the others do not exist because, really, I do not exist either. If the Ego is a myth, solipsism is not just wrong but nonsense. There are two questions this paper wishes to address: first, do we need independent additional support for the denial of the Self, or is the avoidance of solipsism reason enough to assume the Ego's non-existence? I will argue that we do need additional reasons, and I will evaluate those that Mach indeed gives to prove that "the primary fact is not the I, the Ego, but the elements (sensations)" (Mach 1885: 19). Second, is the deconstruction of the I, even if further sufficient support can be found, really adequate to stop us from worrying about solipsism? The doubt I will put forward is that the illusion of a Self might conjure up enough of an Ego—just like feeling a pain is having a pain, even if it is located in a phantom limb—to start us wondering whether it also occurs elsewhere.
Entfernte Einheit. Geschichte und Natur in Ernst Troeltschs Geschichtsphilosophie. Gregor Schiemann - 2001 - In F. W. Graf (ed.), Ernst Troeltschs "Historismus" (Troeltsch-Studien Bd. 11). Gütersloher Verlagshaus.
In his critique of natural-scientific knowledge, Troeltsch's achievement consists in questioning a traditional conceptual framework, under already changed conditions, with regard to the range of its applicability. In the first part, I sketch in thesis form the opposition between nature and history that he constructed, insofar as it is developed in the first chapter of »Der Historismus und seine Probleme« (1.). In a second part, I then highlight some elements that are suited to mediating between the opposed terms without leading to their dissolution. They point to philosophical contents that would have to be taken up into a system of knowledge encompassing both nature and history (2.).
A Goy Who Studies Torah. Two Unpublished Sources by Ernst Simon and Gershom Scholem on the Spiritual Legacy of Franz Rosenzweig. Ynon Wygoda & Enrico Lucca - 2018 - Naharaim 12 (1-2):197-224.
In the early 1930s, Franz Rosenzweig's work was celebrated, criticized, and questioned for its relevance within the specific cultural, religious, and philosophical preoccupations of the inhabitants of pre-state Israel. This could be seen in nuce at the opening of the Schocken Library in Jerusalem in December 1936, which was marked by a celebratory conference dedicated to the memory of Franz Rosenzweig. The evening featured a collection of four lectures held in Hebrew by eminent German-Jewish scholars: Ernst Simon, Julius Guttmann, Hugo Bergmann, and Gershom Scholem. Simon's and Scholem's lectures in particular put forward two strikingly different views on Rosenzweig's possible Nachleben in the yishuv. The article is followed by Scholem's hitherto unpublished lecture and Simon's German summary of his own contribution that evening.
Automorphisms of Countable Short Recursively Saturated Models of PA. Erez Shochat - 2008 - Notre Dame Journal of Formal Logic 49 (4):345-360.
A model of Peano Arithmetic is short recursively saturated if it realizes all its bounded finitely realized recursive types. Short recursively saturated models of PA are exactly the elementary initial segments of recursively saturated models of PA. In this paper, we survey and prove results on short recursively saturated models of PA and their automorphisms. In particular, we investigate a certain subgroup of the automorphism group of such models. This subgroup, denoted \(G|_{M(a)}\), contains all the automorphisms of a countable short recursively saturated model of PA which can be extended to an automorphism of the countable recursively saturated elementary end extension of the model.
Aspects of Aristotle's Logic. [REVIEW] D. J. - 1978 - Review of Metaphysics 32 (2):350-351.
A revised version of the author's Göttingen doctoral dissertation, this book is as much an independent essay in modal logic as it is an interpretation of Aristotle's modal syllogistic. In chapter 1 the author develops what he calls a "rich framework" including speech-act operators as well as epistemic and alethic modal operators, all expressed in a notation of his own devising; for example, "Pc if ENj Pc RNc S, Rc ENj Pa Rp S" translates as "The speaker claims that if he is not justified in claiming that it is not certain that S, it is certain that he is not justified in allowing that it is possible that S." A dual system of quantification is then introduced, with and without existential import for both universal and particular propositions. In chapters 2 and 3 the author uses his framework as a model for discussing various difficulties in Aristotle's assertoric and modal logic, insisting that his results are relevant to Aristotle's even when they do not "correspond" to them. An appendix takes up the famous sea-battle of De interpretatione 9. The author is at his best when examining particular problems in Aristotle's logic; in this regard his criticism of the operation of conversion stands out as especially effective. He is at his worst when pursuing the "intrinsic interest" of his model above and beyond its usefulness in studying Aristotle and when making unwarranted claims of originality for his objectives.
One such objective is to stress what he calls the "teleological" character of Aristotle's logic, a character which he says "is lost sight of in modern accounts" of Aristotle's work. He appears to be unaware of Ernst Kapp's classic Greek Foundations of Traditional Logic, which repeatedly calls attention to just this character. The author acknowledges the existence of twentieth-century modal logic only once, in a one-page criticism of Prior's defense of the distinction between de dicto and de re modalities. This occurs within a curious argument, based on a premise taken from Ammonius and Abelard, that the concept of logical necessity is "bogus."—J.D.
Sensualistischer Phänomenalismus und Denkökonomie. Zur Wissenschaftskonzeption Ernst Machs. Ralf Goeres - 2004 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 35 (1):41-70.
Sensationalistic Phenomenalism and Economy of Thought. On Ernst Mach's Concept of Science. Ernst Mach, natural scientist and major precursor of the Vienna Circle, never wanted to be a philosopher. Nevertheless, his writings are full of valuable hints for a modern theory of human knowledge, with respect to economical, historical, and evolutionary aspects. His kind of phenomenalism is sensationalistic, monistic, and instrumentalistic. This article deals with some contributions of his approach to current debates in the general philosophy of science.
Every Rooted Narrow Tree Kripke Model of HA is Locally PA. Mohammad Ardeshir & Bardyaa Hesaam - 2002 - Mathematical Logic Quarterly 48 (3):391-395.
We prove that every infinite rooted narrow tree Kripke model of HA is locally PA.
Automorphisms of Countable Recursively Saturated Models of PA: Open Subgroups and Invariant Cuts. Henryk Kotlarski & Bozena Piekart - 1995 - Mathematical Logic Quarterly 41 (1):138-142.
Let M be a countable recursively saturated model of PA and H an open subgroup of G = Aut(M). We prove that I = sup{b ∈ M : ∀u < b, fu = u} and J = inf{b ∈ M : fb ≠ b} may be invariant, i.e., fixed by all automorphisms of M.
Ernst Bloch. Vincent Geoghegan - 1996 - Routledge.
Ernst Bloch is perhaps best known for his subtle and imaginative investigation of utopias and utopianism, but his work also provides a comprehensive and insightful analysis of Western culture, politics, and society. Yet, because he has not been one of the easiest writers to read, his full contribution has not been widely acknowledged. In this critical and accessible introduction to one of the most fascinating thinkers of the twentieth century, Vincent Geoghegan unravels much of the mystery of the man and his ideas.
Monstrosity and the Not-Yet: Edward Scissorhands Via Ernst Bloch and Georg Simmel. Craig Hammond - 2015 - Film-Philosophy 19 (1):221-248.
This article explores and discusses Tim Burton's film Edward Scissorhands by applying a Georg Simmel/Ernst Bloch analysis.
Aside from each of the philosophical approaches serving as an insightful analysis of the symbolism and narrative of the film, it is also theoretically useful to compare and unpack the similarities and differences in aspects of Simmel's and Bloch's philosophical ideas and metaphors, influenced by their collaborative experiences; Bloch became associated with Georg Simmel in 1908. The association and friendship with Simmel lasted until 1911; at this time Bloch became increasingly disillusioned with Simmel's apparent inability to commit to any particular philosophical position. Correspondence finally drew to a close when Simmel openly supported the war policy of Imperial Germany in 1914. The influence of many of Simmel's ideas on the development of Bloch's philosophy is implicitly noticeable in the cross-referencing of similar ideas, metaphors, and themes. This article suggests and tentatively works through some of these similarities and differences. The comparison and contrast of Simmel with Bloch via Edward Scissorhands also serves to highlight Bloch's philosophical departure from Simmel's fragments. By exploring and discussing Simmel's essays 'The Aesthetic Significance of the Face', 'The Ruin', and 'The Stranger' in the context of Edward Scissorhands, I suggest that the film can be seen as a particularly poignant and effective cultural metaphor not only of the problematic nature of human ideals, but also of urban ennui and disconnectedness. By comparison, the Blochian treatment of Edward Scissorhands emphasises the Gothic, the radical stridency of youth, and potential utopian possibilities that are, so far, 'Not-Yet'. These frameworks suggest that Edward Scissorhands be understood as a beautiful-monster, a cultural refraction of the utopian incognito of Not-Yet articulated future possibilities.
Ernst Glasersfeld's First Scientific Paper. P. Braffort - 2007 - Constructivist Foundations 2 (2-3):12-17.
Purpose: At Silvio Ceccato's suggestion, I invited Ernst von Glasersfeld to the "Séminaire Leibniz", which took place in Brussels in February 1961. The paper he delivered then, Operational Semantics: Analysis of Meaning in Terms of Operations, was included in a Euratom internal report and is published here for the first time. Conclusion: These early works clearly show von Glasersfeld's methodological and philosophical coherence as well as his faithfulness to Ceccato's endeavour.
Die Produktivität der Kunst – Der poietische Charakter der Kunst nach Ernst Cassirer. Christian Krüger - 2013 - Zeitschrift für Ästhetik und Allgemeine Kunstwissenschaft 58 (2):225-246.
This paper aims to reconstruct Ernst Cassirer's theory of art against the backdrop of the systematic question of what overall contribution art can make to man's relation to the world. It is shown that for Cassirer, the productive benefit of art consists essentially in developing new sensuous skills of perception when dealing with art. In An Essay on Man, Cassirer gives three central determinations to sketch out this idea. This paper argues that, in order to render Cassirer's concept of productive art intelligible, one has to show these determinations to be conceptually interrelated. Moreover, based on these three determinations, this paper outlines three criteria that a satisfactory concept of productive art has to meet.
Ernst Mach als Aussenseiter: Machs Briefwechsel über Philosophie und Relativitätstheorie mit Persönlichkeiten seiner Zeit. Ernst Mach, John T. Blackmore & Klaus Hentschel - 1985.
Ernst von Glasersfeld and the Italian Operative School. F. Accame - 2007 - Constructivist Foundations 2 (2-3):18-24.
Purpose: Appreciating the relationship between Silvio Ceccato and Ernst von Glasersfeld, both as people and in their work. Approach: historical and personal accounts, archeological approach to written evidence. Findings: Ceccato's work is introduced to an English-speaking audience, and the roots of Glasersfeld's work in Ceccato's are explored. Flaws in Ceccato's approach are indicated, together with how Glasersfeld's work overcomes them, especially in language and automatic translation, and in what became Radical Constructivism. Conclusion: Glasersfeld willingly acknowledges Ceccato, whom he still refers to as the Master. But Ceccato's work is little known, especially in the English-speaking world. The introduction, critique, and delineation of the extension and resolution of Ceccato's ideas in Glasersfeld's work is the intended value of the paper.
The Importance of Being Ernst. R. Glanville - 2007 - Constructivist Foundations 2 (2-3):5-6.
I shall write about my first meeting with Ernst von Glasersfeld, and how his comments then on my doctoral study continue to help me clarify what it is I am trying to talk about; how he challenged me to pursue what has turned out to be my life's work so far; and about how these seem to me now to fit in with that constellation of ideas.
A Galois Correspondence for Countable Short Recursively Saturated Models of PA. Erez Shochat - 2010 - Mathematical Logic Quarterly 56 (3):228-238.
In this paper we investigate the properties of automorphism groups of countable short recursively saturated models of arithmetic. In particular, we show that Kaye's theorem concerning the closed normal subgroups of automorphism groups of countable recursively saturated models of arithmetic applies to automorphism groups of countable short recursively saturated models as well. That is, the closed normal subgroups of the automorphism group of a countable short recursively saturated model of PA are exactly the stabilizers of the invariant cuts of the model which are closed under exponentiation. This Galois correspondence is used to show that there are countable short recursively saturated models of arithmetic whose automorphism groups are not isomorphic as topological groups. Moreover, we show that the automorphism groups of countable short arithmetically saturated models of PA are not topologically isomorphic to the automorphism groups of countable short recursively saturated models of PA which are not short arithmetically saturated.
The Sum of Irreducible Fractions with Consecutive Denominators Is Never an Integer in $\mathrm{PA}^{-}$. Victor Pambuccian - 2008 - Notre Dame Journal of Formal Logic 49 (4):425-429.
Two results of elementary number theory, going back to Kürschák and Nagell, stating that the sums $\sum_{i=1}^k \frac{m_i}{n+i}$ (with $k\geq 1$, $(m_i, n+i)=1$, $m_i < n+i$) and $\sum_{i=0}^k \frac{1}{m+in}$ (with $n, m, k$ positive integers) are never integers, are shown to hold in $\mathrm{PA}^{-}$, a very weak arithmetic whose axiom system has no induction axiom.
Ernst Blochs Wirkung: Ein Arbeitsbuch zum 90. Geburtstag. Ernst Bloch - 1975.
Über Ernst Cassirers Philosophie der Symbolischen Formen. Hans-Jürg Braun, Helmut Holzhey & Ernst Wolfgang Orth (eds.) - 1988 - Suhrkamp.
Guṅ-Thaṅ Bstan-Pa'i-Sgron-Me'i Gsuṅ 'bum. Guṅ-Thaṅ Dkon-Mchog-Bstan-Pa'i-Sgron-Me - 2003 - Mi Rigs Dpe Skrun Khaṅ.
Philosophie der Kultur- und Wissensformen: Ernst Cassirer neu lesen. Tobias Endres, Pellegrino Favuzzi & Timo Klattenhoff - 2016 - Frankfurt am Main, Deutschland: Peter Lang.
The potential of Ernst Cassirer's philosophy is by no means exhausted; rather, it lends itself to systematic, transdisciplinary, and socially relevant perspectives for addressing questions of contemporary philosophy and the sciences. In this respect, the reception of Cassirer stands at the threshold of a new phase, which can be viewed in the light of a 're-reading' as well as of an increasingly global scholarly network. From research on knowledge and the theory of perception, through new domains of symbolic formation such as film, money, and virtuality, to the tension-filled relationship between democracy and myth: the contributions to this volume understand themselves as an actualization of Cassirer's philosophy of the forms of culture and knowledge in the 21st century.
Ernst Mach tra scienza e filosofia. Pietro Gori (ed.) - 2018 - Pisa: ETS.
Ernst Mach (1838-1916) was a figure of reference for scientific and philosophical culture in the late nineteenth century and the first decades of the twentieth. His research in physics and psychology, as well as the epistemological work that emerges from the pages of works such as La meccanica nel suo sviluppo storico-critico and Conoscenza ed errore, considerably influenced many of his contemporaries. In these texts, Mach outlines an antimetaphysical conception of scientific thought and a biological-evolutionary conception of human knowledge, which are found elaborated in various ways in A. Einstein's theory of relativity, in the evolutionary epistemology of K. Popper and D. Campbell, in the pragmatism of W. James, and, more generally, in the ideas that animated the early Vienna Circle. The essays collected in the present volume aim to commemorate his figure and work, looking at Mach as a figure on the border between perspectives of inquiry that the history of philosophy of the last century has often seen as opposed.
Art Forms in Nature: The Prints of Ernst Haeckel: One Hundred Color Plates. Ernst Heinrich Philipp August Haeckel, Olaf Breidbach & Irenäus Eibl-Eibesfeldt - 1998.
CommonCrawl
The crown-root morphology of central incisors in different skeletal malocclusions assessed with cone-beam computed tomography
Xiao-ming Wang1, Ling-zhi Ma2, Jing Wang3 & Hui Xue4
To determine the discrepancy in crown-root morphology of central incisors among different types of skeletal malocclusion using cone-beam computed tomography (CBCT), and to provide guidance for proper torque expression of the anterior teeth and for the prevention of alveolar fenestration and dehiscence.
In this retrospective study, a total of 108 CBCT images were obtained (patients aged 18.0 to 30.0 years, mean age 25.8 years). Patients were grouped according to routine sagittal and vertical skeletal malocclusion classification criteria. The patients in the sagittal groups all had average vertical patterns: Class I comprised 24 patients (14 females and 10 males), Class II comprised 20 patients (13 females and 7 males), and Class III comprised 22 subjects (13 females and 9 males). The patients in the vertical groups all had skeletal Class I malocclusions: the low-angle group comprised 21 patients (12 females and 9 males), the average-angle group comprised 24 patients, and the high-angle group comprised 21 patients (11 females and 10 males). All the CBCT data were imported into Invivo 5.4 software to obtain a middle labio-lingual section of the right central incisors. AutoCAD 2007 software was used to measure the crown-root angulation (Collum angle) and the angle formed by a tangent to the center of the labial surface of the crown and the long axis of the crown (labial surface angle). One-way analysis of variance (ANOVA) and Scheffe's test were used for statistical comparisons at the P < 0.05 level, and Pearson correlation analysis was applied to investigate the association between the two measurements.
The values of the Collum angle and labial surface angle in the maxillary incisor of Class II and the mandibular incisor of Class III were significantly greater than in the other types of sagittal skeletal malocclusion (P < 0.05); no significant difference was detected among the vertical skeletal malocclusions. Notably, there was also a significant positive correlation between the two measurements.
The maxillary incisor in patients with sagittal skeletal Class II malocclusion and the mandibular incisor in patients with Class III malocclusion present remarkable crown-root angulation and correspondingly considerable labial surface curvature. An equivalent deviation during bracket bonding may cause greater torque expression error and increase the risk of alveolar fenestration and dehiscence.
Adequate labial or lingual inclination of the anterior teeth is important for establishing an ideal anterior occlusal relationship and a satisfying esthetic result in orthodontics. However, orthodontists cannot always achieve the expected extent of tooth movement within the alveolar bone. Over the past two decades, researchers have paid considerable attention to alveolar height and thickness, while variation in tooth morphology has frequently been ignored (Fig. 1). In 1984, Bryant first analyzed the variability in permanent incisor morphology by establishing three anatomic features and investigated the discrepancy among different malocclusions [1]; two of these features were adopted by subsequent studies [2].
a, b The inclinations of the root and crown in maxillary and mandibular incisors are inconsistent with each other in the surface view, which indicates the crown-root angulation phenomenon
One feature was the crown-root angulation (Collum angle, CA) in the labiolingual direction, formed by the long axes of the crown and root, which may limit the degree to which the root of an incisor can be torqued lingually before relating to the lingual cortical plate of bone. Several recent studies have suggested that the CA causes abnormal stress distribution in the periodontal ligament during tooth movement [3, 4]. Moreover, researchers found that the mean value of the CA for Angle Class II division 2 malocclusion was significantly larger than for Class II division 1 and Class III malocclusions [4,5,6,7,8,9,10]. These findings prompted us to investigate the diversity among different skeletal malocclusions further.
The other feature was the labial surface angle (LSA), formed by a tangent to the bracket site on the labial surface and the long axis of the crown from a proximal view; the significant variation in the LSA potentially affects the precision of torque expression and axial inclination [2]. Kong drew a tangent to the labial surface of the crown 3.5-5.0 mm gingivally from the incisal edge and measured the LSA of 77 incisors [2]. He demonstrated that the variation in the LSA was greater than the variation between different types of preadjusted appliances, so brackets still need to be custom-made when using the straight-wire approach [2]. Thus, preoperative judgment of the individual LSA is essential for achieving optimal torque expression. Moreover, tooth development has proved to be closely affected by environmental and genetic factors, which appear to coincide with the determinants of the facial growth pattern, while little is known about the correlation with the different skeletal malocclusions [11, 12]. Previous research was based primarily on cephalometric radiographs, whose magnification distortion and unclear manual tracing of the tooth boundary might compromise accuracy. Currently, CBCT is widely used in the clinic, offering abundant sample sources, clear three-dimensional imaging of tooth and bone structure, and precise measurement via digital software, but its application in tooth morphometry remains rare [13,14,15].
The main purposes of the present study were to investigate the variation in the morphology of maxillary and mandibular central incisors, including the CA and LSA, using CBCT images and Invivo 5.4 software to capture images, analyzed via AutoCAD. Finally, we discuss the effect of this variable anatomy on torque expression among the different types of skeletal malocclusion.
First, a power analysis performed with G*Power (version 3.1.9.4, Franz Faul, Universität Kiel, Kiel, Germany) software showed that, with a 1:1 ratio between groups, a sample size of 108 cases would give more than 70% power to detect significant differences with an effect size of 0.40 at the α = 0.05 significance level.
Sample selection and classification
The study was carried out on the CBCT scans of three classifications of sagittal skeletal malocclusion selected from the archives of the Department of Stomatology, the Affiliated Suzhou Hospital of Nanjing Medical University. By August 2018, 2855 sets of images were stored in the database of the department.
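The a-priori power estimate described above can also be reproduced outside G*Power. The following is a minimal sketch using the FTestAnovaPower class from statsmodels; it assumes (as G*Power does for one-way ANOVA) that the 0.40 effect size is Cohen's f and that the 108 cases are compared across three groups, which is not stated explicitly in the text.

# Sketch: reproduce the a-priori power estimate described above.
# Assumptions (not from the paper): effect size 0.40 is Cohen's f,
# and power is evaluated for a one-way ANOVA with three groups.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
power = analysis.power(
    effect_size=0.40,  # Cohen's f, as used by G*Power for ANOVA
    nobs=108,          # total sample size across all groups
    alpha=0.05,        # significance level
    k_groups=3,        # e.g., Class I / II / III
)
print(f"Estimated power: {power:.2f}")  # should exceed 0.70 per the text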
Because our study was a retrospective case-control study using the archive, no separate ethical approval was required, and all the patients had undergone CBCT for clinical orthodontic needs. The CBCT images of 108 patients (mean age 25.8 years, range 18 to 30 years) were selected according to the criteria presented in Table 1. CBCT images were obtained using the GALILEOS system (SIRONA, Germany), with a field of view of 150 × 150 mm², tube voltage of 90 kV, tube current of 7.0 mA, slice thickness of 0.20 mm, exposure time of 20 s, and radiation dose of 0.029 mSv. During scanning, patients were positioned with the interpupillary line and the Frankfurt plane parallel to the ground and the facial midline coinciding with the median reference line of the machine, in central occlusion and without swallowing.
Table 1 Criteria for sample selection
Lateral cephalometric radiographs were captured using Invivo 5.4 software and then classified into three groups on the basis of sagittal skeletal malocclusion using Dolphin 11.0 for cephalometric analysis (Fig. 2). The grouping criteria and sample distribution are presented in Table 2 [2, 16, 17].
Measurements used to classify sagittal and vertical skeletal malocclusion. A, A-point, deepest bony point on the contour of the premaxilla below ANS; B, B-point, deepest bony point on the contour of the mandible above pogonion; ANB, angle between lines NA and NB; 1. Wits: perpendicular lines are dropped from points A and B onto the occlusal plane; Wits is measured from Ao to Bo; 2. S, sella, center of sella turcica; N, nasion, the most anterior limit of the frontonasal suture on the frontal bone in the facial midline; SN, line connecting S and N, representing the anterior cranial base plane; Go, gonion, the most posterior inferior point of the mandibular angle; Me, menton, most inferior point of the bony chin; MP, line connecting Me and Go, representing the mandibular plane; SN-MP, angle between SN and MP; 3. S-Go, the distance between lines parallel to the FH plane passing through S and Go, representing the posterior facial height; N-Me, the distance between lines parallel to the FH plane passing through N and Me, representing the anterior facial height; FHI (S-Go/N-Me), facial height index, the ratio of posterior to anterior facial height, representing the individual's vertical growth pattern
Table 2 The distribution of samples
Measuring image capture
The CBCT images underwent a three-dimensional adjustment with Invivo 5.4 software (Anatomage Dental) to orient the head in the natural head position in three planar views. First, in the horizontal view, the horizontal line was located right at the frontal edges of the bilateral rami, and the vertical line was perpendicular to it and passed through the center of the incisive canal (Fig. 3a). Then, in the coronal view, the vertical line was set parallel to the mid-sagittal reference line at the crista galli (Fig. 3b). Finally, in the sagittal view, the horizontal line connecting the anterior nasal spine to the posterior nasal spine was set parallel to the bottom of the monitor (Fig. 3c).
Measuring image capture. The natural position of the head is adjusted in three dimensions. a The horizontal view. b The coronal view. c The sagittal view. A bundle of cutting lines (green) is perpendicular to the incisor labial surface (d) and located at the central coronal view (e).
The median sagittal views were established with nine layers (f–n) at an interval of 0.10 mm, and the middle one was the measuring image (j)
The median sagittal tomographic images of the incisors in the labio-lingual direction were then captured using the Arch Section tab. In detail, the bundle of cutting lines (green) was set perpendicular to the labial surface, passing through its center in the horizontal view (Fig. 3d) and bisecting the incisor in the coronal view (Fig. 3e). The median one (Fig. 3j) of the nine sagittal images (Fig. 3f–n) was selected for angular measurement. The thickness of the sectional slices was 2.0 mm, with the interval set at 0.1 mm.
Marker and measurement
The measuring images were marked and measured via AutoCAD (Autodesk, San Rafael, CA) as follows (Fig. 4a). "CEJ" denoted the labial or lingual cementoenamel junction. Point A was the incisor superior, and point R was the root apex. Point B was the labial cementoenamel junction, point L the lingual cementoenamel junction, and point O the midpoint between points B and L.
a The Collum angle is formed by the extension of the long axis of the crown and the long axis of the root. b Tangent L passes through the upper and lower intersections of the labial surface of the crown with a circle centered at T with a radius of 0.5 mm. c A measuring example of the Collum angle and labial surface angle
The straight line "AO" represented the long axis of the crown, and "RO" the long axis of the root. Point T was the tangent point on the labial surface of the crown, defined as the intersection of the perpendicular line from "AO" (with foot point V) and the labial surface of the crown. The tangent line through T was approximated by the line passing through points T1 and T2, the intersections of the labial surface of the crown with a circle centered at T with a 0.5-mm radius (Fig. 4b). The "Collum angle (CA)" was the acute angle between the line RO and the reverse extension of line AO. When line RO lay lingual to the extension line, the CA was defined as positive; when labial, negative; when the two lines coincided, zero. The "labial surface angle (LSA)" was formed by the tangent line and the forward extension of line AO, with point P as the vertex. For example, in Fig. 4c the CA is −6.89° and the LSA is 18.59°.
All statistical analyses were performed with SPSS software (version 13.0, SPSS, Chicago). By the Kolmogorov-Smirnov normality test and Levene's variance homogeneity test, all the data were found to be normally distributed with homogeneity of variance among groups. Statistical comparisons of the CA and LSA among the different malocclusion groups were then undertaken by one-way analysis of variance (ANOVA) and Scheffe's test. Finally, Pearson correlation analysis was applied to investigate the association between the CA and LSA in the same incisor ("r" is the Pearson correlation coefficient). The level of statistical significance was set at P < 0.05 (*), P < 0.01 (**), and P < 0.001 (***).
Error in measurements
To assess intra-observer and inter-observer error, repeated measurements were performed on all the samples by two operators on two occasions at a 2-week interval and analyzed with Student's t test for paired samples, adopting an α-level of 0.05. The mean values obtained by combining the measurements of both operators were used for the inter-group difference analysis.
The technical error of measurement (TEM) was assessed with the formula [18], $$ \mathrm{TEM}=\sqrt{\sum {d}_i^2/2n} $$ in which \(d_i\) was the difference between the first and second measurements on the ith sample and n was the total number of samples. As a result, none of the measurements showed a significant difference according to the t test (P > 0.05). The technical error of measurement was 0.35°.
Comparison of CA and LSA among different sagittal skeletal malocclusion groups (Table 3)
In the maxilla, according to ANOVA, the mean values of the CA in Class I, Class II, and Class III were −1.02 ± 6.30°, 5.18 ± 4.97°, and 0.43 ± 5.44°, respectively, and those of the LSA were 14.44 ± 4.06°, 17.78 ± 3.74°, and 14.18 ± 4.20°. There were significant differences in both measurements among the different types of sagittal skeletal malocclusion (P = 0.002 < 0.01 and P = 0.008 < 0.01). Scheffe's test was then conducted for multiple comparisons. As a result, Class II patients had greater mean values of the CA and LSA than patients in the other groups (I vs II: P = 0.003 < 0.01 and P = 0.028 < 0.05; II vs III: P = 0.030 < 0.05 and P = 0.019 < 0.05). No significant difference was noted between the Class I and Class III groups (P = 0.688 > 0.05 and P = 0.977 > 0.05) (Fig. 5a, b).
Table 3 Collum angle/labial surface angle of central incisors among different sagittal skeletal malocclusions (°)
The values of the CA and LSA in the maxillary incisor of Class II (a, b) and the mandibular incisor of Class III (c, d) are significantly greater than in the other groups. There is no statistical difference among the different vertical skeletal classifications
In the mandible, the mean values of the CA in Class I, Class II, and Class III were 0.40 ± 5.80°, 0.82 ± 5.78°, and 5.59 ± 5.64°, and those of the LSA were 11.32 ± 3.91°, 12.18 ± 4.39°, and 15.32 ± 3.05°, respectively. Both measurements were also found to differ significantly (P = 0.006 < 0.01 and P = 0.002 < 0.01). Furthermore, the Class III group had a greater CA and LSA than the other two groups (I vs III: P = 0.013 < 0.05 and P = 0.003 < 0.01; II vs III: P = 0.033 < 0.05 and P = 0.034 < 0.05), while no difference was detected between Class I and Class II (P = 0.970 > 0.05 and P = 0.759 > 0.05) (Fig. 5c, d).
The consistency of the distribution of significant differences implied that there might be some degree of correlation between the two measurements within the same jaw. Thus, we further analyzed the association between the CA and LSA within the same incisor using the data from all the samples. The Pearson correlation test indicated that the CA and LSA were strongly positively correlated in both the maxilla and the mandible (upper jaw: r = 0.723, P = 0.000; lower jaw: r = 0.752, P = 0.000) (Fig. 6).
Both in the maxilla (a) and mandible (b), the CA and LSA are significantly and positively correlated
Comparison of CA and LSA among different vertical skeletal malocclusion groups (Tables 4 and 5)
We detected no statistically significant differences in either the CA or the LSA among the different vertical skeletal malocclusion groups (upper jaw: P = 0.915 > 0.05 and P = 0.347 > 0.05; lower jaw: P = 0.609 > 0.05 and P = 0.217 > 0.05).
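The angle definitions from the Methods and the error and correlation statistics above translate directly into a short script. The following Python sketch is illustrative only: the point names (A, O, R, T1, T2) follow the definitions given earlier, but all coordinate and measurement values are made-up placeholders, scipy stands in for SPSS, and Scheffe's post hoc test (run in SPSS in the study) is omitted.

# Illustrative sketch (not the authors' SPSS/AutoCAD pipeline).
import numpy as np
from scipy.stats import f_oneway, pearsonr

def angle_deg(u, v):
    """Unsigned angle in degrees between 2D vectors u and v."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def collum_angle(A, O, R):
    """Angle between line RO and the reverse extension of AO.
    The lingual/labial sign convention of the paper is omitted here."""
    return angle_deg(np.subtract(R, O), np.subtract(O, A))

def labial_surface_angle(A, O, T1, T2):
    """Angle between the tangent through T1, T2 and the extension of AO."""
    return angle_deg(np.subtract(T2, T1), np.subtract(A, O))

def tem(first, second):
    """Technical error of measurement: sqrt(sum(d_i^2) / (2n))."""
    d = np.asarray(first) - np.asarray(second)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

# Hypothetical repeated measurements (degrees) by the two operators.
print(tem([5.1, -1.0, 0.4], [5.4, -0.8, 0.1]))

# Hypothetical per-group CA values (degrees) standing in for Table 3.
ca_I = [-1.5, 0.8, -2.3, 1.1]
ca_II = [5.0, 4.2, 6.1, 5.7]
ca_III = [0.2, 1.0, -0.4, 0.9]
F, p = f_oneway(ca_I, ca_II, ca_III)  # one-way ANOVA across the classes

# Hypothetical paired CA/LSA values for the correlation analysis.
r, p_r = pearsonr([-1.5, 0.8, 5.0, 4.2], [13.2, 14.8, 17.5, 17.0])
print(F, p, r, p_r)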
Table 4 Collum angle/labial surface angle of central incisors among different vertical skeletal malocclusions (°)
Table 5 Pearson correlation analysis indicating the significant positive correlation between the CA and LSA (maxillary: r = 0.723, P = 0.000 < 0.001; mandibular: r = 0.752, P = 0.000 < 0.001)
The precise expression of anterior torque is essential to obtain normal overjet and overbite and to achieve a satisfying esthetic effect and a stable occlusal relationship. The ideal preadjusted torque of straight-wire brackets is hard to accomplish fully because of the material properties of the wire, the slot width, ligature selection, operator experience, and individual tooth and alveolar morphology [19]. Many studies have found that the height and thickness of the local alveolar bone predominantly restrict the range of anterior tooth movement [20], while less attention has been paid to the limitation caused by tooth morphology. However, some orthodontists have demonstrated that variations in tooth morphology deserve careful consideration, having proved to be more important than the variations between the different types of preadjusted brackets [18]. Research on the influence of variability in incisor morphology on torque expression was first conducted by Bryant, who proposed three anatomic features of the maxillary central incisor [1]. The three features, viewed from a proximal aspect, were the crown-root angulation (the supplementary angle of the Collum angle) formed by the intersection of the longitudinal axis of the crown and the longitudinal axis of the root, the labial surface angle formed by a tangent to the bracket bonding point on the labial surface of the crown and the long axis of the crown, and the lingual curvature of the crown. Subsequent morphological studies of the anterior teeth mainly focused on the first two features [2, 19, 21]. Before the introduction of CBCT, visualization of the Collum angle and labial surface angle depended mainly on the lateral cephalogram, which provides a magnified image with distortion and can lead to controversial conclusions [5, 22,23,24]. The use of high-resolution CBCT enables us to measure the two anatomical features convincingly in three dimensions with quantitative and qualitative evaluation software [25]. Recently, researchers have used CBCT to examine the morphology of the anterior teeth, including the Collum angle and labial surface angle [2, 7]. Nevertheless, none of them investigated the differences among the various skeletal malocclusions, even though the values of the Collum angle of maxillary central incisors had been found to differ greatly among the various Angle malocclusions [1, 5, 26, 27].
For the Collum angle (CA), our observations further confirmed the widespread existence of the crown-root angulation phenomenon, consistent with previous lateral cephalography studies [1, 4, 5, 18, 21, 26, 28, 29] (Fig. 7a–c). Generally, tooth morphology is susceptible to genetic and environmental influences during development, and the physiological mineralization of the crown precedes that of the root [12]. Thus, during eruption, forces from the perioral muscles, mastication, and orthodontic appliances jointly change the developmental direction or position [30, 31]. Previous studies had indicated that the CA differs among groups with different types of Angle malocclusion, with notable lingual bending of the long axis of the crown relative to the long axis of the root in the upper incisors of Angle Class II division 2 patients [1, 8, 10].
Hence, we hypothesized that the formation of the CA might be associated with the facial growth pattern through common environmental and genetic determinants. In addition, we excluded samples of Angle Class II division 2 because of the previously demonstrated pronounced CA in the maxillary central incisor. In the current study, Class II samples (5.18 ± 4.97°) had a significantly greater CA than Class I (−1.02 ± 6.30°) and Class III (0.43 ± 5.44°) samples in the maxilla, while in the mandible, Class III samples (5.59 ± 5.64°) presented a significantly greater CA than Class I (0.40 ± 5.80°) and Class II (0.82 ± 5.78°) samples. Combining these results with previous viewpoints, we suggest that the remarkable CA in the maxillary incisor of skeletal Class II and the mandibular incisor of Class III brings the root closer to the lingual cortical alveolar bone than in the other types of skeletal malocclusion, increasing the risk of dehiscence and fenestration, root resorption, and torque limitation during labial inclination [1, 5, 10].
The various Collum angles of the central incisor: the long axis of the root can deviate to the labial side (a) or the lingual side (c) of the long axis of the crown, or coincide with it (b). The schematic diagram indicates that the root bends toward the lingual cortical alveolar bone because of the Collum angle (d). The schematic diagram illustrates that a more pronounced Collum angle is accompanied by a greater labial surface curvature of the crown (e)
The labial surface angle (LSA) was the other anatomical feature of the tooth, representing the labial surface curvature of the crown [2]. Fredericks observed a variation of 21° in the LSA measured at a point 4.2 mm from the incisal edge in the occlusal-gingival direction in 30 extracted incisors [1]. Thus, individual variation in labial surface curvature makes torque control with preadjusted appliances elusive. Miethke indicated that there is considerable variation in labial surface curvature among teeth in different positions: the curvature of the lower incisor is the smallest, while that of the lower first molar is the largest, which is consistent with our results for the LSA of the maxillary versus mandibular incisors (15.37 ± 4.27° vs 12.92 ± 4.14°). The significant discrepancy in the LSA caused a wide torque range of 12.3° to 24.9° when measured at 4.5 mm from the occlusal surface [26]. Kong also found that the value of the LSA differed significantly at different heights from the incisal edge; with the tangent point at a height of 3.5 to 5 mm, each 0.5-mm increase reduced the torque by 1.5° [2]. Our study indicated that the values of the LSA were greater in the maxillary incisor of sagittal skeletal Class II malocclusion and the mandibular incisor of Class III than in the other groups. Hence, when treating the same type of incisor with brackets of the same prefabricated torque at the same vertical height from the incisal edge, greater torque expression deviation may occur in these two groups of patients. Interestingly, our study also detected a significant positive correlation between the values of the CA and LSA, meaning that the labial surface curvature is correspondingly greater in cases with remarkable crown-root angulation. Hence, the root tip contacts the lingual cortical alveolar bone more easily, and dehiscence and fenestration become more challenging to avoid when the tooth is labially inclined. Consistent with the previous study, we detected no statistical difference in either the CA or the LSA among the vertical skeletal classifications.
Harris found no correlation between the CA and the PP-FH, OP-FH, FH-MP, and lower face height ratio measurements representing the vertical growth pattern [5]. However, the CA still affects the stress distribution of the periodontal ligament in the vertical direction: as the CA increases, the center of tooth rotation gradually approaches the dental cervix, which prevents the tooth from intruding into the alveolar bone [19, 32].
The cause of excessive lingual bending of the incisor is still controversial at present, but most scholars favor environmental factors. Harris reported that the mandibular incisor erupts earlier and provides restriction and guidance for the eruption of the maxillary incisor when occlusal contact is established. The remarkable CA of the maxillary incisor is usually accompanied by obvious anterior retroclination in Class III patients; in fact, these incisors present an excessively labially inclined feature for compensatory reasons. Moreover, other studies and the present study found no significant difference compared with Class I, so Harris's conclusion is debatable [5]. Srinivasan further discussed the relationship between the position of the lower lip line and the CA and demonstrated that the CA is positive and increases when the lower lip line lies between the incisal third and the middle third, while the CA is negative and decreases when the lower lip line is located at the crown cervix [8]. McIntyre also supported the oral environmental contributors, since the apical third of the root is still mineralizing after eruption and is sensitive to external forces [27]. Unlike the above views, Ruf and Pancherz reported no morphological difference in the upper incisors between twins, one of whom belonged to Angle Class II division 1 and the other to Angle Class II division 2, even with a higher located lower lip line [6], which illustrates the determinant role of genetic factors. Summing up the former viewpoints, we suggest that when the anterior occlusal relationship is initially established, the bite force conducted along the long axis of the incisor is neither enough to resist tooth over-eruption nor able to balance the perioral forces from the tongue and lip. As a result, the crown-root angulation forms because the eruption direction of the crown changes while the root continues to mineralize along the presumptive pattern. Only when the incisor continues to erupt and reaches balance with the perioral muscle forces can the crown-root morphology be stabilized. Thus, coordinating oral and maxillofacial muscle function is important for preventing abnormal tooth morphology at the occlusion-establishing stage.
The maxillary incisor in sagittal skeletal Class II and the mandibular incisor in Class III present greater crown-root angulation (Fig. 7d) and labial surface curvature (Fig. 7e) than in the other types of malocclusion. There is a significant positive correlation between the two anatomical features. The above findings indicate that the morphology of these teeth plays a vital role in torque variation, dehiscence, fenestration, and root resorption because the root bends toward the lingual cortical alveolar bone. Thus, when positioning a bracket, the variability of the crown-root morphology is essential to assess before the operation.
ANB: angle between lines NA and NB, where A (A-point) is the deepest bony point on the contour of the premaxilla below ANS, B (B-point) is the deepest bony point on the contour of the mandible above pogonion, and N is nasion
ANOVA: One-way analysis of variance
CA: Collum angle
CBCT: Cone-beam computed tomography
CEJ: Cementoenamel junction
FHI: facial height index (S-Go/N-Me), the ratio of posterior facial height (S-Go, the distance between lines parallel to the Frankfurt plane passing through S and Go) to anterior facial height (N-Me, the distance between lines parallel to the Frankfurt plane passing through N and Me), representing the individual's vertical growth pattern
FH-MP: The angle formed by the mandibular and Frankfurt planes, representing the extent of the vertical growth pattern
LSA: Labial surface angle
OP-FH: The angle formed by the occlusal and Frankfurt planes, representing the extent of the vertical growth pattern
PP-FH: The angle formed by the palatal and Frankfurt planes, representing the extent of the vertical growth pattern
SN-MP: angle between SN and MP, where S (sella) is the center of sella turcica, N (nasion) is the most anterior limit of the frontonasal suture on the frontal bone in the facial midline, SN (the line connecting S and N) represents the anterior cranial base plane, Go (gonion) is the most posterior inferior point of the mandibular angle, Me (menton) is the most inferior point of the bony chin, and MP (the line connecting Me and Go) represents the mandibular plane
SPSS software: Statistical Product and Service Solutions software
TEM: Technical error of measurement
Wits: perpendicular lines are dropped from points A and B onto the occlusal plane; Wits is measured from Ao to Bo
Bryant RM, Sadowsky PL, Hazelrig JB. Variability in three morphologic features of the permanent maxillary central incisor. Am J Orthod. 1984;86:25–32.
Kong WD, Ke JY, Hu XQ, Zhang W, Li SS, Feng Y. Applications of cone-beam computed tomography to assess the effects of labial crown morphologies and Collum angles on torque for maxillary anterior teeth. Am J Orthod Dentofacial Orthop. 2016;150:789–95.
Heravi F, Salari S, Tanbakuchi B, Loh S, Amiri M. Effects of crown-root angle on stress distribution in the maxillary central incisors' PDL during application of intrusive and retraction forces: a three-dimensional finite element analysis. Prog Orthod. 2013;14:26.
Shen YW, Hsu JT, Wang YH, Huang HL, Fuh LJ. The Collum angle of the maxillary central incisors in patients with different types of malocclusion. J Dent Sci. 2012;7:72–6.
Harris EF, Hassankiadeh S, Harris JT. Maxillary incisor crown-root relationships in different angle malocclusions. Am J Orthod Dentofac Orthop. 1993;103:48–53.
Ruf S, Pancherz H. Class II division 2 malocclusion: genetics or environment? A case report of monozygotic twins. Angle Orthod. 1999;69:321–4.
Ma ESW. Differential CBCT analysis of Collum angles in maxillary and mandibular anterior teeth in patients with different malocclusions. UNLV Theses, Dissertations, Professional Papers, and Capstones; 2016. p. 2880.
Srinivasan B, Kailasam V, Chitharanjan A, Ramalingam A. Relationship between crown-root angulation (Collum angle) of maxillary central incisors in Class II, division 2 malocclusion and lower lip line. Orthodontics (Chic). 2013;14:e66–74.
Williams A, Woodhouse C. The crown to root angle of maxillary central incisors in different incisal classes. Br J Orthod. 1983;10:159–61.
Delivanis H, Kuftinec M. Variation in morphology of the maxillary central incisors found in Class II, division 2 malocclusions. Am J Orthod. 1980;78:438–43.
Cobourne MT, Sharpe PT. Tooth and jaw: molecular mechanisms of patterning in the first branchial arch. Arch Oral Biol. 2003;48:1–14.
Li J, Parada C, Chai Y. Cellular and molecular mechanisms of tooth root development. Development. 2017;144:374–84.
Oz U, Orhan K, Abe N. Comparison of linear and angular measurements using two-dimensional conventional methods and three-dimensional cone beam CT images reconstructed from a volumetric rendering program in vivo. Dentomaxillofac Radiol. 2014;40:492–500.
Lione R, Franchi L, Fanucci E, Laganà G, Cozza P. Three-dimensional densitometric analysis of maxillary sutural changes induced by rapid maxillary expansion. Dentomaxillofac Radiol. 2013;42:79–82.
Ferreira JB, Christovam IO, Alencar DS, da Motta AFJ, Mattos CT, Cury-Saramago A. Accuracy and reproducibility of dental measurements on tomographic digital models: a systematic review and meta-analysis. Dentomaxillofac Radiol. 2017;46:20160455.
Celikoglu M, Kamak H. Patterns of third-molar agenesis in an orthodontic patient population with different skeletal malocclusions. Angle Orthod. 2012;82:165.
Tang N, Zhao ZH, Liao CH, Zhao MY. Morphological characteristics of mandibular symphysis in adult skeletal Class II and Class III malocclusions with abnormal vertical skeletal patterns. West China J Stomatol. 2010;28:395–8.
Knösel M, Jung K, Attin T, et al. On the interaction between incisor crown-root morphology and third-order angulation. Angle Orthod. 2009;79:454–61.
Papageorgiou SN, Sifakakis I, Keilig L, Patcas R, Affolter S, Eliades T, et al. Torque differences according to tooth morphology and bracket placement: a finite element study. Eur J Orthod. 2017;39:411–18.
Sun B, Tang J, Xiao P, Ding Y. Presurgical orthodontic decompensation alters alveolar bone condition around mandibular incisors in adults with skeletal Class III malocclusion. Int J Clin Exp Med. 2015;8:12866–73.
Israr J, Bhutta N, Rafique Chatha M. Comparison of Collum angle of maxillary central incisors in Class II div 1 & 2 malocclusions. Pakistan Oral Dent J. 2016;36:91–94.
Edwards JG. A study of the anterior portion of the palate as it relates to orthodontic therapy. Am J Orthod. 1976;69:249–73.
Ramirez-Sotelo LR, Almeida S, Ambrosano GM, Boscolo F. Validity and reproducibility of cephalometric measurements performed in full and hemifacial reconstructions derived from cone beam computed tomography. Angle Orthod. 2012;82:827–32.
Kanj AH, Bouserhal J, Osman E, El Sayed AAM. The inflection point: a torque reference for lingual bracket positioning on the palatal surface curvature of the maxillary central incisor. Prog Orthod. 2018;19:39.
Kapila S, Conley RS, Harrell WE Jr. Current status of cone beam computed tomography imaging in orthodontics. Dentomaxillofac Radiol. 2011;40:24–34.
Van Loenen M, Degrieck J, De Pauw G, Dermaut L. Anterior tooth morphology and its effect on torque. Eur J Orthod. 2005;27:258.
McIntyre GT, Millett DT. Crown-root shape of the permanent maxillary central incisor. Angle Orthod. 2003;73:710.
Bauer TJ. Maxillary central incisor crown-root relationships in Class I normal occlusions and Class II division 2 malocclusions. MS (Master of Science) thesis. University of Iowa; 2014. https://ir.uiowa.edu/etd/4572/.
Feres MFN, Rozolen BS, Alhadlaq A, Alkhadra TA, El-Bialy T. Comparative tomographic study of the maxillary central incisor collum angle between Class I, Class II, division 1 and 2 patients. J Orthod Sci. 2018;7:1–5.
Sarrafpour B, Swain M, Li Q, Zoellner H. Tooth eruption results from bone remodelling driven by bite forces sensed by soft tissue dental follicles: a finite element analysis. PLoS One. 2013;8:e58803.
Kong X, Cao M, Ye R, Ding Y. Orthodontic force accelerates dentine mineralization during tooth development in juvenile rats. Tohoku J Exp Med. 2010;221:265–70.
Pai S, Panda S, Pai V, Anandu M, Vishwanath E, Suhas AS. Effects of labial and lingual retraction and intrusion force on maxillary central incisor with varying Collum angles: a three-dimensional finite elemental analysis. J Indian Orthod Soc. 2017;51:28.
This study was supported by the Science and Technology Department of Guangdong Province, China (No. 2011A030300012), for the case selection, and by the Youth Science and Technology Foundation of Suzhou, Jiangsu Province, China (No. KJXW2016033), for the analysis and paper writing.
Please contact the author for data requests.
State Key Laboratory of Oral Diseases, Department of Cleft Lip and Palate Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
Xiao-ming Wang
Department of Orthodontics, Stomatological Hospital of Kunming Medical University, Kunming, 650032, China
Ling-zhi Ma
Department of Orthodontics, Xi'an JiaoTong University Hospital of Stomatology, Xi'an, 710004, Shaanxi Province, China
Jing Wang
Department of Stomatology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, 215000, Jiangsu Province, China
Hui Xue
XMW carried out the statistical analysis and writing. LZM carried out the case collection. JW participated in image measurement. HX conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
Correspondence to Hui Xue.
The experiment was independently reviewed and approved by our hospital ethics committee before the experiment, and the registration number of the clinical research is K2016051, issued by the Ethics Committee Reviewing Biomedical Research of the Suzhou Municipal Hospital of Nanjing Medical University. All the processes were conducted in full accordance with the World Medical Association Declaration of Helsinki. All the patients were informed of the study's purpose and procedures, and all of them consented voluntarily to sign the treatment consent form; that is, signed consents were obtained from the parents/guardians of all participants involved in our study. Finally, the consent form was approved by the hospital ethics committee.
The patients consented to the publication of their individual CBCT and intraoral images.
Wang, X., Ma, L., Wang, J. et al. The crown-root morphology of central incisors in different skeletal malocclusions assessed with cone-beam computed tomography. Prog Orthod. 20, 20 (2019). https://doi.org/10.1186/s40510-019-0272-2
Crown-root morphology
Skeletal malocclusion
Cone-beam CT
CommonCrawl
DFT: LCAO
QuantumATK can model the electronic properties of closed and open quantum systems within the framework of density functional theory (DFT) using numerical LCAO basis sets (linear combinations of atomic orbitals). For periodic systems, QuantumATK can also use the plane-wave basis set for DFT calculations, as discussed in DFT: Plane Wave.
The key quantity in the self-consistent calculation of the Kohn–Sham equations is the density matrix, which defines the electron density. For open systems, the density matrix is calculated using non-equilibrium Green's functions (NEGFs), see NEGF: Device Calculators, while for closed or periodic systems it is calculated by diagonalization of the Kohn–Sham Hamiltonian, as described in the present chapter. The electron density then sets up an effective potential, which is given by the Hartree, exchange-correlation, and external potentials. Knowing the effective potential allows us to obtain the Kohn–Sham Hamiltonian. The next section describes the mathematical formalism behind the DFT-LCAO model.
Background information
The LCAOCalculator provides a description of the electronic structure using DFT and norm-conserving pseudopotentials. The method is based on an expansion of the single-particle wave functions in a basis of numerical atomic orbitals with compact support. In this section, we describe the mathematical formalism of the DFT-LCAO calculator, which closely follows the work of Soler et al. [SAG+02].
Kohn–Sham Hamiltonian
Within density functional theory, the many-body electronic structure of the system is described in terms of the one-electron Kohn–Sham Hamiltonian: \[\hat{H}_\mathrm{1el} =-\frac{\hbar^{2}}{2m} \nabla^2 + V^{\mathrm{eff}}[n](\mathbf{r}).\] In this equation, the first term is the kinetic energy of the electron, while the second term (the effective potential) is the potential energy of the electron moving in the mean field created by the other electrons as well as in any external potential field, e.g. the electrostatic potential of ions or any other external field. The electrons are described in terms of the total electron density, \(n=n(\mathbf{r})\). The electron density is discussed in detail in the section Electron density, and the effective potential in the section Effective potential.
Solving the Kohn–Sham equations by means of a basis set expansion
We calculate the one-electron eigenfunctions of the Kohn–Sham Hamiltonian, \(\psi_{\alpha}\), by solving the one-electron Schrödinger equation, \[\hat{H}_{\mathrm{1el}} \psi_{\alpha}({\mathbf{r}}) = \varepsilon_{\alpha} \psi_{\alpha}({\mathbf{r}}).\] This differential equation is called the Kohn–Sham equation within DFT. To solve it, we expand the eigenfunctions \(\psi_{\alpha}({\mathbf{r}})\) in a set of basis functions, \(\phi_i\): \[\psi_{\alpha}(\mathbf{r}) = \sum_i c_{\alpha i} \phi_i(\mathbf{r}).\] This allows us to represent the differential equation as a matrix equation for determining the expansion coefficients, \(c_{\alpha i}\): \[\sum_{j } H_{ij} c_{\alpha j} = \varepsilon_\alpha \sum_{j } S_{ij} c_{\alpha j},\] where the Hamiltonian matrix, \(H_{ij} = \langle \phi_i | \hat{H}_{\mathrm{1el}} | \phi_j \rangle\), and the overlap matrix, \(S_{ij} = \langle \phi_i | \phi_j \rangle\), are given by multiple integrals with respect to the electron coordinates.
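As a concrete illustration, the matrix equation above is a generalized eigenvalue problem, \(H c = \varepsilon S c\), which standard dense linear algebra can solve. The following sketch uses SciPy with random Hermitian stand-ins for the Hamiltonian and overlap matrices; the matrix size and values are placeholders, and no QuantumATK API calls are involved.

# Sketch: solve the generalized eigenvalue problem H c = e S c that arises
# from the LCAO expansion. H and S here are random stand-ins, not matrices
# produced by QuantumATK.
import numpy as np
from scipy.linalg import eigh

n = 6  # number of basis orbitals (placeholder)
rng = np.random.default_rng(42)

# Build a symmetric "Hamiltonian" and a positive-definite "overlap" matrix.
A = rng.normal(size=(n, n))
H = (A + A.T) / 2
B = rng.normal(size=(n, n))
S = B @ B.T + n * np.eye(n)  # shifted to guarantee positive definiteness

# eigh solves H c = e S c directly when given the overlap matrix S.
eigenvalues, C = eigh(H, S)

# Columns of C are the expansion coefficients c_i of each eigenstate,
# normalized such that C^T S C = I.
print(eigenvalues)
assert np.allclose(C.T @ S @ C, np.eye(n))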
Electron density¶

The electron density of the many-electron system is given by the occupied eigenstates of the Kohn–Sham Hamiltonian: \[n(\mathbf{r}) = \sum_{\alpha} f_\alpha |\psi_\alpha(\mathbf{r})|^2,\] where \(f_\alpha\) is the occupation of the level denoted by \(\alpha\). For finite-temperature calculations the occupations are determined by the Fermi–Dirac distribution \(f_\alpha = \frac{1}{1 + e^{(\epsilon_\alpha - \epsilon_F)/kT}}\), but other smooth distributions may be introduced to help speed up convergence (see Occupation Methods). The electron density can then be expressed in terms of the density matrix: \[n(\mathbf{r}) = \sum_{ij} D_{ij} \phi_i(\mathbf{r}) \phi_j(\mathbf{r}),\] where the density matrix is given by the basis set expansion coefficients \(c_{\alpha i}\): \[D_{ij} = \sum_{\alpha} f_\alpha c_{\alpha i}^* c_{\alpha j}.\] For open systems (DeviceConfiguration), the density matrix is calculated using non-equilibrium Green's functions, see NEGF: Device Calculators.

Electron difference density¶

It is often convenient to compare the electron density of the many-body system to a superposition of individual atom-based electron densities, \(n^{\mathrm{atom}}(\mathbf{r}-\mathbf{R}_\mu)\), where \(\mathbf{R}_\mu\) is the position of atom \(\mu\) in the many-body system: \[\Delta n(\mathbf{r}) = n(\mathbf{r}) - \sum_{\mu} n^{\mathrm{atom}}(\mathbf{r} - \mathbf{R}_\mu).\] \(\Delta n\) is called the electron difference density, and it is calculated using the ElectronDifferenceDensity analysis object.

Effective potential¶

The effective potential, \(V^\mathrm{eff}[n]\), has three contributions: \[V^{\mathrm{eff}}[n] = V^{H}[n] + V^\mathrm{xc}[n] + V^\mathrm{ext}.\] The first two terms are due to electron–electron interactions, which depend on the electron density. The first term, \(V^{H}[n]\), is the Hartree potential due to the mean-field electrostatic interaction between the electrons, while the second term, \(V^\mathrm{xc}[n]\), is the exchange-correlation potential, which arises from the quantum mechanical nature of the electrons. The potential \(V^\mathrm{ext}\) represents any other electrostatic fields in the system. It can be separated into two contributions: the electrostatic potential of ions (given by norm-conserving pseudopotentials) and external electrostatic fields (given by one or more external sources).

External potential¶

The external potential is given by the pseudopotentials and an external electrostatic field. Such a field may arise from the inclusion of metallic gates: \[V^\mathrm{ext} = \sum_\mu V^\mathrm{pseudo}_\mu + V^\mathrm{gate}.\] The pseudopotential has two contributions: \[V^\mathrm{pseudo}=V^\mathrm{local}+\sum_{n,n'}|\chi_n \rangle B_{n,n'}\langle \chi_{n'} |,\] where the first and second terms correspond to the local and nonlocal parts of the pseudopotential, respectively. The term \(V^\mathrm{gate}\) is the electrostatic potential due to external gates, calculated with zero electron density. This term will be returned by the Analysis object ExternalPotential and is calculated using the MultigridSolver, DirectSolver, and ParallelConjugateGradientSolver Poisson solvers. The Analysis object ElectrostaticDifferencePotential also includes this term, among others.

Hartree potential¶

Using the electron density, we can calculate the classical electrostatic potential, the so-called Hartree potential. The actual calculation of the Hartree potential is described in detail in the section The Hartree Potential.
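Returning to the density matrix defined at the beginning of this section, the bookkeeping from eigenvalues to occupations to \(D_{ij}\) is compact. A minimal sketch (illustrative names only, not the QuantumATK API), assuming the coefficients are stored with \(C[i, \alpha] = c_{\alpha i}\) as returned column-wise by a generalized eigensolver:

    # Illustrative sketch: Fermi-Dirac occupations and the density matrix
    # D_ij = sum_alpha f_alpha c*_{alpha i} c_{alpha j}.
    import numpy as np

    def fermi_dirac(eps, mu, kT):
        """Occupations f_alpha for eigenvalues eps at chemical potential mu."""
        return 1.0 / (1.0 + np.exp((eps - mu) / kT))

    def density_matrix(C, f):
        """C[:, alpha] holds the coefficients c_{alpha i}; f the occupations.

        (C.conj() * f) scales each conjugated eigenvector column by its
        occupation, so the product below is D = conj(C) diag(f) C^T.
        """
        return (C.conj() * f) @ C.T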
Exchange-correlation potential¶

In the DFT method, the quantum mechanical part of the electron–electron interaction is approximated by the exchange-correlation term, and a large number of different approximate exchange-correlation density functionals exist. ATK supports many of these, see the section Exchange-correlation energy. The exchange-correlation potential is defined as the functional derivative of the exchange-correlation energy with respect to the electron density, which corresponds to a mean-field quantum mechanical interaction potential between the electrons: \[V^\mathrm{xc}[n](\mathbf{r}) = \frac{\delta E^\mathrm{XC}} {\delta n}(\mathbf{r}).\]

Total energy and forces¶

The DFT total energy of a many-electron system is a functional of the electron density, \(n\): \[E[n] = T[n] + E^\mathrm{xc}[n] + E^{H}[n] + E^\mathrm{ext}[n],\] where \(T[n]\) is the kinetic energy of a non-interacting electron gas with density \(n\), \(E^\mathrm{xc}[n]\) the exchange-correlation energy, \(E^{H}[n]\) the Hartree potential energy, and \(E^\mathrm{ext}[n]\) the interaction energy of the electrons in the electrostatic field created by ions and other external sources. The electron kinetic energy may be defined as \[T[n] = \sum_{\alpha} f_\alpha \langle \psi_{\alpha}| \frac{-\hbar^{2}}{2m} \nabla^{2} | \psi_{\alpha} \rangle.\] The total energy is calculated using TotalEnergy. First-principles forces are calculated by differentiating the total energy with respect to the ionic coordinates of atom \(i\) at position \(\mathbf{R}_i\): \[\mathbf{F}_i = -\frac{d E[n]}{d \mathbf{R}_i}.\]

Pseudopotentials¶

QuantumATK uses norm-conserving pseudopotentials and PAW potentials, and is shipped with a database for the entire periodic table. See the full list here: Pseudopotentials. This database is reviewed and updated on a regular basis to provide increasingly accurate and general-purpose pseudopotentials and PAW potentials. QuantumATK uses the unified pseudopotential format (upf) defined by the Quantum ESPRESSO consortium. From their website, one may download tools to convert several different pseudopotential formats into a upf file.

LCAO basis set¶

The eigenfunctions of the Kohn–Sham Hamiltonian can be expanded in a Linear Combination of Atomic Orbitals (LCAOs): \[\phi_{nlm}(\mathbf{r}) = R_{nl}(r) Y_{lm}(\hat{\mathbf{r}}),\] where \(Y_{lm}\) are spherical harmonics, and \(R_{nl}\) are radial functions with compact support, being exactly zero outside a confinement radius. The basis set functions have a finite range, but the interaction range of the Hamiltonian is larger than that of the basis set. Theoretically, the Hamiltonian interaction range may exceed twice the basis set range, but is usually smaller in practice. The basis orbitals have a number of parameters that determine the shape of the orbitals. It is possible to assemble the basis orbitals into your own basis set through the use of the BasisSet keyword. QuantumATK comes with a number of pre-built basis sets for each chemical element and for each type of pseudopotential. See the full list here: LCAO basis sets.

Exchange-correlation energy¶

The QuantumATK package uses the libxc library for providing a large suite of exchange-correlation functionals. For each functional it is possible to add a Hubbard correction, as described in the section XC+U mean-field Hubbard term. The main families of exchange-correlation functionals supported in QuantumATK are the Local-density approximation (LDA), the Generalized-gradient approximation (GGA), and the Meta-GGA.
Each functional comes in four spin variants: non-polarized, spin-polarized (collinear), noncollinear, and noncollinear with spin-orbit coupling. The choice of spin variant for the exchange-correlation functional determines the spin type used in the entire DFT-LCAO calculation. Furthermore, a Hubbard correction can be added to all exchange-correlation variants, regardless of spin type, see XC+U mean-field Hubbard term. Several standard GGA functionals can also be extended with the DFT-D2 and DFT-D3 dispersion corrections by Grimme and co-workers.

Initial spin¶

Both collinear and noncollinear calculations may have local minima of the total energy that correspond to different spin configurations. It is therefore important to prepare the system in the correct initial spin configuration, such that the DFT self-consistent calculation will end up in the lowest-energy spin configuration. This is done through the use of the methods InitialSpin and RandomSpin. The initial spin direction on each atom is specified in physical spherical coordinates \((r,\theta,\phi)\), where \(\theta\) is the angle with the z-axis, and \(\phi\) the polar angle in the x-y plane relative to the x-axis. The collinear case is \(\theta=0\) radians or \(\theta=\pi\) radians.

Local-density approximation (LDA)¶

In the LDA approximation, the exchange-correlation energy is taken as a functional of the local electron density, \[E^\mathrm{LDA}[n] = \int n(\mathbf{r}) \varepsilon^\mathrm{LDA}(n(\mathbf{r})) d\mathbf{r},\] where \(\varepsilon^\mathrm{LDA}(n(\mathbf{r}))\) is the exchange-correlation energy density of a homogeneous electron gas with density \(n(\mathbf{r})\). It is possible to derive an exact, analytical expression for the exchange energy of the homogeneous electron gas, the so-called Dirac–Bloch exchange energy, which is used for all the LDA functionals. The correlation energy of the electron gas cannot be calculated exactly, so a number of different approximations have been proposed over the years. Most of them give almost identical results; the most commonly used variant is the parametrization by Perdew and Zunger, which is also the default LDA functional in DFT: LCAO. For a list of the parametrizations for the LDA correlation functional implemented in QuantumATK, see the documentation of the ExchangeCorrelation class. In particular, the section Abbreviations explains how to select a particular exchange-correlation functional. See also Table 21, Table 24, and Table 25.

Generalized-gradient approximation (GGA)¶

The GGA functionals form a large family of semi-local approximations for the exchange-correlation energy, where the functional depends on both the local value and the local gradient of the electron density, \[E^\mathrm{GGA}[n] = \int n(\mathbf{r}) \varepsilon^\mathrm{GGA}(n(\mathbf{r}), \nabla n(\mathbf{r})) d\mathbf{r}.\] For a list of the GGA exchange-correlation functionals available in QuantumATK, see the documentation of the ExchangeCorrelation class. In particular, the section Abbreviations explains how to select a particular exchange-correlation functional. See also Table 22, Table 24, and Table 25.
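On a real-space grid, the LDA and GGA energy integrals above reduce to weighted sums over grid points. A minimal sketch of the LDA exchange part only, using the exact Dirac–Bloch expression \(\varepsilon_x(n) = -\tfrac{3}{4}(3n/\pi)^{1/3}\) in Hartree atomic units (a correlation parametrization such as Perdew–Zunger would add an analogous term, omitted here; this is illustrative code, not QuantumATK internals):

    # Illustrative sketch: LDA (Dirac-Bloch) exchange energy on a grid.
    import numpy as np

    def dirac_exchange_energy_density(n):
        """Exchange energy per electron of the homogeneous electron gas
        with density n, in Hartree atomic units."""
        return -0.75 * (3.0 * n / np.pi) ** (1.0 / 3.0)

    def lda_exchange_energy(n_grid, voxel_volume):
        """E_x = integral n(r) eps_x(n(r)) dr, approximated as a grid sum."""
        return np.sum(n_grid * dirac_exchange_energy_density(n_grid)) * voxel_volume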
Meta-GGA¶

The meta-GGA (MGGA) family of functionals extends the GGA approximation by additionally depending on one or both of the following quantities: the Laplacian of the density \[\nabla^2 n(\mathbf{r})\] and the so-called kinetic energy density \[\frac{1}{2} \sum_\alpha f_\alpha \left | \nabla \psi_\alpha (\mathbf{r}) \right |^2.\] For a list of the MGGA exchange-correlation functionals available in QuantumATK, see the documentation of the ExchangeCorrelation class. In particular, the section Abbreviations explains how to select a particular exchange-correlation functional. See also Table 23, Table 24, and Table 25. Different formulations of the MGGA can produce functionals with different uses. The SCAN functional has been found to give improved energetics over the LDA and GGA in several cases; however, band gaps are similar. Conversely, the TB09 functional can provide a much more accurate description of band gaps than ordinary LDA and GGA. In fact, the accuracy of semiconductor band gaps obtained with TB09 is often comparable to that of calculations using GW or hybrid functionals, which have a significantly higher computational cost. It is even possible to tune the TB09 method such that it works optimally on both sides of an interface (see ExchangeCorrelation). Alternatively, one can use the Hubbard U model within the LDA or GGA with the U parameters for the semiconductor elements fitted against TB09 results. However, TB09 is not meant for energetics, and, hence, geometry optimizations are disabled with this functional. For these, the GGA or other MGGA functionals should be used.

XC+U mean-field Hubbard term¶

The local approximations to the exchange-correlation energy have a number of shortcomings. Two of these are of particular interest:

Self-interaction: The electron is formally allowed to interact with itself. This can prevent electrons from localizing properly.

Excited states: The LDA and GGA description of conduction-band energy levels is often poor, so band gaps are often too low.

The mean-field Hubbard correction by Dudarev et al. [DBS+98] and Cococcioni et al. [CdG05], often denoted XC+U, DFT+U, LDA+U, or GGA+U, is a semi-empirical correction which attempts to improve on these deficiencies of the local exchange-correlation functionals by adding an extra term to the exchange-correlation functional: \[E_{U} = \frac{1}{2} \sum_\mu U_\mu (n_\mu - n_\mu^2) .\] In this equation, \(n_\mu\) is the projection onto an atomic shell and \(U_\mu\) is the Hubbard U for that shell. The \(E_{U}\) energy term is zero for a fully occupied or unoccupied shell, while positive for a fractionally occupied shell. The energy is thereby lowered if states become fully occupied or empty. This may happen if the energy levels move away from the Fermi level, increasing the band gap, or if the broadening of the states is decreased, in which case the electrons become more localized. Thus, the Hubbard U method improves on the deficiencies of the local exchange-correlation functionals listed above. The value of \(U\) is often used as an empirical parameter, which is varied in order to improve the comparison between DFT and experimental data. However, it has also been suggested that the value of U may be obtained by minimizing the total energy of the system [CdG05].
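The behavior of the \(E_U\) term is easy to verify numerically; a small illustrative sketch (not QuantumATK code) showing that the penalty vanishes for empty and full shells and peaks at half filling:

    # Illustrative sketch: E_U = 1/2 sum_mu U_mu (n_mu - n_mu^2).
    import numpy as np

    def hubbard_energy(U, n):
        """Per-shell Hubbard parameters U and occupations n (array-like)."""
        U, n = np.asarray(U, float), np.asarray(n, float)
        return 0.5 * np.sum(U * (n - n**2))

    print(hubbard_energy([4.6], [0.0]))   # 0.0   -- empty shell, no penalty
    print(hubbard_energy([4.6], [0.5]))   # 0.575 -- fractional occupation penalized
    print(hubbard_energy([4.6], [1.0]))   # 0.0   -- full shell, no penalty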
Some caution must be taken when using the XC+U correction, since for large U values the electron density may have several local minima, some of which are unphysical, and it is often necessary to select an anisotropic initial electron state to reach the correct local minimum.

XC+U implementations in QuantumATK¶

The XC+U implementation in QuantumATK comes in four main variants: the standard local-orbital formulation, the so-called Onsite representation, and the Dual representation introduced by Han et al. [HOY06], each available in an orbital-resolved and a shell-summed form. The difference between the Onsite and Dual implementations is in the definition of the local occupation matrix. The occupations may be summed over a single orbital or over an entire angular momentum shell; the latter corresponds to the OnsiteShell and DualShell representations.

Onsite representation¶

In the onsite representation, the local occupation matrix is given by \[n_{\mu, m m'}^\sigma = D_{\mu m, \mu m' }^\sigma,\] where \(D\) is the density matrix, \(\mu\) a basis orbital index, and \(\sigma\) a spin index.

OnsiteShell representation¶

In the onsite shell representation, the occupation is obtained by summing together all the basis functions in each angular momentum shell, \(l\): \[n_{l, m m'}^\sigma = \sum_{\mu \in l} n_{\mu, m m'}^\sigma.\]

Dual representation¶

In the dual representation, the local occupation matrix is given by \[n_{\mu, m m'}^\sigma = \sum_{i, n} \left[ S_{\mu m, i n} D_{i n, \mu m'}^\sigma + D_{\mu m, i n }^\sigma S_{ i n, \mu m'} \right],\] where \(S\) is the overlap matrix.

DualShell representation¶

Similar to the onsite shell representation, the dual shell occupation is obtained by summing up all dual occupations in each angular momentum shell, \(l\): \[n_{l, m m'}^\sigma = \sum_{\mu \in l} n_{\mu, m m'}^\sigma,\] where the occupations on the right-hand side are the dual occupations defined above.

Which model to use?¶

The onsite representation corresponds to projecting onto a local orbital, and since the local orbitals are non-orthogonal, there are no sum rules for the occupation matrix and it is not guaranteed that a fully occupied shell has occupation 1. In the dual representation, the occupation matrix has the form of a Mulliken population, and this form has the advantage that sum rules apply, i.e. the trace of the occupation matrix sums up to the total number of electrons. The dual representation corresponds to projecting onto orthogonalized orbitals. Such orbitals may have non-zero weights also on neighboring centers, and our experience shows that this kind of non-locality can result in non-physical results in some cases. The default in QuantumATK is therefore to use the onsite representation. The onsite shell and dual shell representations often give a better description with multiple-zeta basis sets compared to the onsite and dual representations, where the occupation is obtained by projecting onto single basis functions.

Input format for XC+U¶

The Hubbard U approximation in the onsite representation may be selected through keywords of the types LDAU.XCTYPE, LSDAU.XCTYPE, GGAU.XCTYPE, and SGGAU.XCTYPE. The keyword LSDAU.PZ gives the default LSDA functional with the default Hubbard U method. The specification exchange_correlation = LSDAU.PZ is therefore identical to

    exchange_correlation = ExchangeCorrelation(
        exchange=DiracBloch,
        correlation=PerdewZunger,
        hubbard_term=Onsite,
        number_of_spins=2,
    )

Use this form with the option hubbard_term=Dual to select the dual representation.
The value of the U term is specified through the basis set:

    basis_set = [
        LDABasis.Nickel_SingleZeta(
            hubbard_u=[4.6, 0.0]*eV,
            filling_method=Anisotropic,
        )
    ]

which will give a single-zeta basis set for nickel, consisting of a 3d and a 4s orbital, where the 3d orbital has U = 4.6 eV. The 3d orbitals are filled anisotropically, i.e., there is 1 electron in the orbitals with m = -2, -1, 0, 1, leaving m = 2 empty. The alternative is filling_method=SphericalSymmetric, which puts 4/5 of an electron in each orbital.

DFT-1/2 method¶

The DFT-1/2 method (often also denoted LDA-1/2 or GGA-1/2) is a semi-empirical approach to correct the self-interaction error in local and semi-local exchange-correlation functionals for extended systems. As such, it can be viewed as an alternative to the XC+U mean-field Hubbard term with broadly the same aims: to improve the description of conduction-band energy levels and band gaps. This method, introduced by Ferreira et al. [FMT08], is based on the much older Slater half-occupation scheme for molecules [SJ72] (also known as the transition state method). Slater's original method consists of carrying out a self-consistent calculation with half an electron removed from the system, and taking the eigenvalue of the half-filled state as an estimate for the ionization energy. This approach was later formalized in the theorem by Janak [Jan78]: \[\frac{\partial E}{\partial f_\alpha} = \varepsilon_\alpha \left ( f_\alpha \right ),\] where \(E\) is the total energy of the system, \(f_\alpha\) is the occupation of state \(\alpha\) (between 0 and 1), and \(\varepsilon_\alpha\) is the eigenvalue of the state. The success of the half-occupation scheme is based on the fact that \(\varepsilon_\alpha \left ( f_\alpha \right )\) is known to be almost precisely linear in many cases, which makes the relationship with the ionization energy exact. The DFT-1/2 method makes use of the theoretical insights from the half-occupation scheme to tackle the fundamental problem of the self-interaction error for the case of extended systems with a band gap. The method attempts to correct for this by defining an atomic self-energy potential which cancels the electron-hole self-interaction energy. This potential is calculated for atomic sites in the system, and is defined as the difference between the potential of the neutral atom and that of a charged ion resulting from the removal of a fraction of its charge, between 0 and 1 electrons. The total self-energy potential is the sum of these atomic potentials. The addition of the DFT-1/2 self-energy potential to the DFT Hamiltonian has been found to greatly improve band gaps for a wide range of semiconducting and insulating systems [FMT11]. It is important to note that the method is not entirely free of empirical parameters. Much like the value of \(U\) for the Hubbard U method, there are two values which must be fixed for every species in the system: the fractional charge \(f_\alpha\) removed from the neutral atom, and a cutoff radius \(r^\mathrm{cut}\) beyond which to trim the atomic self-energy potential. The latter is needed to avoid excessive overlap of the potentials from different sites in the crystal. The original authors of the method suggest fixing \(r^\mathrm{cut}\) variationally, by choosing the value which maximizes the band gap [FMT08] [FMT11]. The choice of \(f_\alpha\) is less well-defined: although a value of 0.5 is standard (hence the name of the method), this is known not to be the optimal choice for various materials (e.g., silicon).
It may therefore be appropriate to treat it as an empirical parameter, to be varied by comparison with experiment. It is also important to note that not all species in the system necessarily require the DFT-1/2 correction; it is generally advisable only to add this to the anionic species, and leave the cationic species as normal [FMT08] [FMT11].

Input format for DFT-1/2¶

The DFT-1/2 method may be selected by specifying one of the pre-defined exchange-correlation functionals: LDAHalf, LSDAHalf, NCLDAHalf, SOLDAHalf, GGAHalf, SGGAHalf, NCGGAHalf, SOGGAHalf. The keyword LDAHalf.PZ gives the default LDA functional with the DFT-1/2 correction, i.e. the specification

    exchange_correlation = LDAHalf.PZ

corresponds to a functional with dft_half_enabled=True set. In general, the keyword dft_half_enabled=True can be specified for any user-defined exchange-correlation functional. The values of the DFT-1/2 parameters are specified through the basis set:

    dft_half_parameters = DFTHalfParameters(
        element=Arsenic,
        fractional_charge=[0.3, 0.0],
        cutoff_radius=4.0*Bohr)

    basis_set = [
        LDABasis.Arsenic_DoubleZetaPolarized(
            dft_half_parameters=dft_half_parameters),
        LDABasis.Gallium_DoubleZetaPolarized(
            dft_half_parameters=Disabled)
    ]

The above example shows a typical case for gallium arsenide, in which the anion (As) is given a DFT-1/2 self-energy potential calculated with \(f_\alpha = 0.3\) and \(r^\mathrm{cut} = 4~a_0\), while the cation (Ga) has its DFT-1/2 correction turned off with the keyword Disabled. If dft_half_parameters is not specified for a species, the default Automatic is used. This will set the DFT-1/2 default parameters from the table of optimized values given below. When the DFT-1/2 method is enabled, the self-energy potential is included in the calculation of the total energy, forces, and stress. However, the method is only intended as a scheme for calculating band structures, not for structural relaxations. We recommend that structural properties be calculated before applying the DFT-1/2 correction.

DFT-1/2 default parameters¶

The default DFT-1/2 parameters used by the Automatic keyword have been optimized against a wide range of materials, and should improve upon the standard DFT band gap in most cases. The complete list is given in the table below. As explained above, we only include elements which tend to have anionic character in compounds of interest. All elements not present in the table default to Disabled, i.e., no DFT-1/2 correction. For meta-GGA exchange-correlation functionals, all elements default to Disabled.

Table 10 Optimized DFT-1/2 parameters.¶

    Element   LDA fractional_charge   LDA cutoff_radius   GGA fractional_charge   GGA cutoff_radius
    C         0.3                     2.5 Bohr            0.4                     2.5 Bohr
    N         0.4                     3.0 Bohr            0.4                     3.0 Bohr
    O         0.5                     2.5 Bohr            0.4                     3.0 Bohr
    F         0.7                     3.0 Bohr            0.6                     2.5 Bohr
    Si        0.2                     4.0 Bohr            0.2                     4.0 Bohr
    P         0.3                     4.0 Bohr            0.3                     3.5 Bohr
    S         0.6                     3.5 Bohr            0.4                     3.5 Bohr
    Cl        0.7                     3.5 Bohr            0.7                     3.5 Bohr
    Zn        0.5                     2.0 Bohr            0.5                     2.0 Bohr
    Ge        0.5                     4.0 Bohr            0.6                     4.0 Bohr
    As        0.3                     4.0 Bohr            0.3                     4.0 Bohr
    Se        0.5                     3.5 Bohr            0.5                     3.5 Bohr
    Br        0.7                     4.0 Bohr            0.7                     4.0 Bohr
    Sn        0.7                     5.0 Bohr            0.9                     4.5 Bohr
    Sb        0.3                     4.5 Bohr            0.3                     4.5 Bohr
    Te        0.4                     4.0 Bohr            0.4                     4.0 Bohr
    I         0.6                     4.5 Bohr            0.6                     4.0 Bohr

Spin-orbit coupling (SOC)¶

Background¶

The spin-orbit coupling can lead to energy-level splittings in band structures and molecular energy spectra. As a rule of thumb, the magnitude of SOC scales as \(Z^{4}/n^{3}l^{2}\), where \(Z\), \(n\), and \(l\) are the atomic charge, principal, and angular quantum number of the considered energy level, respectively.
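As a back-of-the-envelope illustration of this scaling (not a QuantumATK calculation; element and level choices are purely illustrative):

    # Illustrative sketch of the Z^4 / (n^3 l^2) rule of thumb.
    def soc_scale(Z, n, l):
        """Relative spin-orbit coupling magnitude for a (Z, n, l) level."""
        return Z**4 / (n**3 * l**2)

    # E.g. a gold 5d level versus a silicon 3p level: roughly a factor of ~55.
    print(soc_scale(79, 5, 2) / soc_scale(14, 3, 1))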
The SOC is fundamentally a relativistic effect, and can be described by a second-order expansion of the relativistic Hamiltonian in terms of the fine structure constant \(\alpha\) [FernandezSOSF06]. In DFT: LCAO, the SOC is taken into account via the NormConservingPseudoPotential. For example, the SG15 pseudopotentials, shipped with QuantumATK as of the 2016 release, come in two versions for each element: a standard scalar-relativistic pseudopotential (denoted "SG15"), and one that is generated by mapping the solution to the Dirac equation, which naturally includes the SOC, to a scalar-relativistic pseudopotential (denoted "SG15-SO"): \[V_{ps} = V_{L} + V_{NL}^{+1/2} + V_{NL}^{-1/2},\] with a local contribution \(V_{L}\) and non-local contributions from total angular momenta \(j=l+1/2\) and \(j=l-1/2\). Each non-local term is expanded in spin-orbit projector functions \(P_{\alpha\beta}^{l \pm 1/2, \xi}\), \[V_{NL}^{\pm 1/2} = \sum_{l,\xi,\alpha,\beta} \nu_{l\pm 1/2,\xi} P_{\alpha\beta}^{l \pm 1/2, \xi},\] where \(\nu_{l\pm 1/2,\xi}\) are normalization constants. The indices \(\alpha,\beta\) denote the possible spin orientations (up, down). Each nonlocal pseudopotential term requires four projector functions per expansion order \(\xi\).

Use SG15-SO or OpenMX pseudopotentials for DFT: LCAO calculations with spin-orbit.

The SOC terms are enabled by choosing a pseudopotential containing spin-orbit terms (SG15-SO or OpenMX), and appropriate settings for the ExchangeCorrelation keyword. For example, a QuantumATK Python script for silicon with SOC could contain the following specification for the calculator:

    #----------------------------------------
    # Basis Set
    #----------------------------------------
    basis_set = BasisGGASG15SO.Silicon_Medium

    #----------------------------------------
    # Exchange-Correlation
    #----------------------------------------
    exchange_correlation = SOGGA.PBE

    #----------------------------------------
    # Calculator
    #----------------------------------------
    k_point_sampling = MonkhorstPackGrid(na=9, nb=9, nc=9)
    numerical_accuracy_parameters = NumericalAccuracyParameters(
        k_point_sampling=k_point_sampling,
        density_mesh_cutoff=100.0*Hartree,
    )

    calculator = LCAOCalculator(
        basis_set=basis_set,
        exchange_correlation=exchange_correlation,
        numerical_accuracy_parameters=numerical_accuracy_parameters,
    )

Note the setting exchange_correlation=SOGGA.PBE, which switches on the SOC terms in the pseudopotential. Omitting this keyword or setting it to a non-SOC method, e.g. NCGGA.PBE, would still result in a calculation with the SG15-SO pseudopotential, but the SOC terms would be disregarded.

[CdG05] M. Cococcioni and S. de Gironcoli. Linear response approach to the calculation of the effective interaction parameters in the LDA+U method. Phys. Rev. B, 71:035105, Jan 2005. doi:10.1103/PhysRevB.71.035105.

[DBS+98] S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton. Electron-energy-loss spectra and the structural stability of nickel oxide: an LSDA+U study. Phys. Rev. B, 57:1505–1509, Jan 1998. doi:10.1103/PhysRevB.57.1505.

[FernandezSOSF06] L. Fernández-Seivane, M. A. Oliveira, S. Sanvito, and J. Ferrer. On-site approximation for spin–orbit coupling in linear combination of atomic orbitals density functional methods. J. Phys.: Condensed Matter, 18(34):7999, 2006. URL: http://stacks.iop.org/0953-8984/18/i=34/a=012.

[FMT08] Luiz G. Ferreira, Marcelo Marques, and Lara K. Teles. Approximation to density functional theory for the calculation of band gaps of semiconductors. Phys. Rev. B, 78:125116, Sep 2008. doi:10.1103/PhysRevB.78.125116.

[FMT11] Luiz G. Ferreira, Marcelo Marques, and Lara K. Teles.
Slater half-occupation technique revisited: the LDA-1/2 and GGA-1/2 approaches for atomic ionization energies and band gaps in semiconductors. AIP Adv., 1(3):032119, 2011. doi:10.1063/1.3624562.

[HOY06] M. J. Han, T. Ozaki, and J. Yu. O(N) LDA+U electronic structure calculation method based on the nonorthogonal pseudoatomic orbital basis. Phys. Rev. B, 73:045110, Jan 2006. doi:10.1103/PhysRevB.73.045110.

[Jan78] J. F. Janak. Proof that $\partial E / \partial n_i = \varepsilon_i$ in density-functional theory. Phys. Rev. B, 18:7165–7168, Dec 1978. doi:10.1103/PhysRevB.18.7165.

[SJ72] J. C. Slater and K. H. Johnson. Self-consistent-field $X\alpha$ cluster method for polyatomic molecules and solids. Phys. Rev. B, 5:844–853, Feb 1972. doi:10.1103/PhysRevB.5.844.

[SAG+02] J. M. Soler, E. Artacho, J. D. Gale, A. García, J. Junquera, P. Ordejón, and D. Sánchez-Portal. The SIESTA method for ab initio order-N materials simulation. J. Phys.: Condensed Matter, 14(11):2745, 2002. URL: http://stacks.iop.org/0953-8984/14/i=11/a=302.
Avoidable mortality due to long-term exposure to PM2.5 in Colombia 2014–2019

Laura A. Rodriguez-Villamizar1, Luis Carlos Belalcazar-Ceron2, María Paula Castillo2, Edwin Ricardo Sanchez2, Víctor Herrera1,3 & Dayana Milena Agudelo-Castañeda4

Environmental Health volume 21, Article number: 137 (2022)

To compare estimates of spatiotemporal variations of surface PM2.5 concentrations in Colombia from 2014 to 2019 derived from two global air quality models, as well as to quantify the avoidable deaths attributable to the long-term exposure to concentrations above the current and projected Colombian standard for PM2.5 annual mean at municipality level.

We retrieved PM2.5 concentrations at the surface level from the ACAG and CAMSRA global air quality models for all 1,122 municipalities, and compared 28 of them with available concentrations from monitoring stations. Annual mortality data 2014–2019 by municipality of residence and pooled effect measures for total, natural, and specific causes of mortality were used to calculate the number of annual avoidable deaths and years of potential life lost (YPLL) related to the excess of PM2.5 concentration over the current mean annual national standard of 25 µg/m3 and the projected standard of 15 µg/m3.

Compared to surface data from 28 municipalities with monitoring stations in 2019, the ACAG and CAMSRA models under- or overestimated annual mean PM2.5 concentrations. Estimations from the ACAG model had a mean bias of 1.7 µg/m3, compared to a mean bias of 4.7 µg/m3 from the CAMSRA model. Using the ACAG model, estimations of total nationally attributable deaths to PM2.5 exposure over 25 and 15 µg/m3 were 142 and 34,341, respectively. Cardiopulmonary diseases accounted for most of the attributable deaths due to excess PM2.5 exposure (38%). Estimates of YPLL due to all-cause mortality for exceeding the national standard of 25 µg/m3 were 2,381 years.

Comparison of two global air quality models for estimating surface PM2.5 concentrations during 2014–2019 at municipality scale in Colombia showed important differences. The avoidable-deaths estimations represent the total number of deaths that could be avoided if the current and projected national standards for PM2.5 annual mean had been met, and show the health benefits of implementing more restrictive air quality standards.

Exposure to air pollutants has adverse effects on human health, leading to increased mortality. Although various atmospheric pollutants are associated with increased risk of mortality, especially for respiratory and cardiovascular diseases, atmospheric particulate matter < 2.5 μm (PM2.5) is widely studied and is often used as a proxy indicator of air pollution exposure [1]. PM2.5 consists of inhalable particles whose adverse effects are due to their capacity to penetrate and deposit in the lower respiratory tract, allowing the submicron particles to evade the tissues' natural mechanisms of clearance and to form active oxides in the lungs [2, 3]. These are the most cytotoxic ambient particles; as a result, there is long-term retention of the particles, and their absorbed chemicals cause oxidative damage and an increase in the risk of toxicity [2, 4,5,6]. There is vast epidemiological evidence of the association between PM2.5 and mortality and morbidity outcomes [7,8,9,10].
The International Agency for Research on Cancer (IARC) has raised environmental concerns about atmospheric particles affecting air quality and human health and declared PM in outdoor pollution as carcinogenic to humans [11]. In 2017, it was estimated that 92% of the world's population lived in areas that exceeded the World Health Organization (WHO) Air Quality Guidelines (AQG) 2005 for PM2.5, thus contributing to 2.9 million deaths [12]. The burden of disease caused by ambient air pollution is large, particularly in low- and middle-income countries, being the leading environmental risk factor and one of the most important overall risk factors for global mortality [13, 14]. It is estimated that ambient air pollution is responsible for 4 to 9 million deaths each year worldwide, and therefore reducing air pollutant concentrations has become a global goal related to achieving the Sustainable Development Goals (SDGs) by making more restrictive air quality regulations [15].

In Colombia, according to the national burden of environmental disease study, there were 15,361 deaths attributable to air pollution in 2016 (rate 719.18 per 100,000 population) [16]. Increases in PM10 concentrations have been associated with increased risk of cardiovascular mortality in Bogotá [17], and increases in PM2.5 levels have been associated with increased risk of cardiopulmonary morbidity in different Colombian cities [18]. Particulate matter, both PM2.5 and PM10, are the air pollutants of most concern to environmental authorities in Colombia, as they are the pollutants with more exceedances per year based on national regulatory levels [19].

The evolution of air quality regulations in Colombia has been dynamic, with regulations created and updated over time, and the country has had dedicated regulations for the control of air pollution. In 2010, the Air Pollution Prevention and Control Policy was approved along with resolution 610, which established the national air quality standard. The standard was updated by resolution 2254 of 2017, in which maximum permissible PM2.5 levels of 50 and 25 µg·m−3 were established for the daily (24 h) and annual means, respectively. Starting in 2018, the maximum permissible levels were more restrictive for an average exposure time of 24 h, with a limit of 37 µg·m−3. These levels established in 2017 were still above the WHO 2005 AQG for the PM2.5 daily (24 h) and annual means of 25 and 15 µg·m−3, respectively. The annual 2005 AQG was proposed to be achieved in 2030. The updated 2021 WHO AQG levels for PM2.5 are 15 and 5 µg·m−3 for the daily and annual means, respectively, and therefore the current national standard for PM2.5 annual mean corresponds to the new WHO interim target 2 [15].

Many of the world's most developed countries measure PM2.5 concentrations through networks of monitoring stations, concentrated principally in urban areas. Although these data sources are valuable, in developing countries air quality monitoring stations are scarce in urban areas and largely absent from mid-sized cities and suburban and rural areas. Therefore, to obtain surface data of PM2.5 concentrations from these locations around the world, the results of air quality stations must be combined with satellite observations and information from global models [20]. Colombia has a national air quality network composed of 24 surveillance systems and 175 monitoring stations, of which 92 monitored PM2.5 in 2019 [19].
Consequently, spatial PM2.5 resolution in all Colombian cities is limited due to the lack of enough air quality stations. In addition, previous studies showed that the region contributes biomass burning aerosol associated with PM [21], which occurs mainly in locations far from existing monitoring stations. Hence the importance of using global models to estimate the exposure to PM2.5. Recently, some products with high temporal and spatial resolutions from geostationary-orbit satellites have become available. Several studies assessed the relationship between PM2.5 and mortality using satellite-derived estimations [22,23,24,25,26,27,28], mainly in the USA, Europe, and Southeast Asia. Limited studies are available in South America [29,30,31,32,33].

For countries, estimating the avoidable mortality related to air pollution is central for decision making and for quantifying the potential health co-benefits that can be obtained with more strict air quality regulations. Consequently, the aim of the study was to compare estimates of spatiotemporal variations of surface PM2.5 concentrations in Colombia from 2014 to 2019 derived from two global air quality models, as well as to quantify the avoidable deaths at municipality level attributable to long-term exposure relative to the current and the projected-for-2030 Colombian standards for PM2.5 annual mean of 25 and 15 µg/m3, respectively.

Colombia is a country located in South America with an estimated total population of 49,395,678 inhabitants in 2019, with 29,221,754 over 25 years old, distributed in 1,122 municipalities and 33 departments, including the capital district [34]. According to this census, 51.2% of the total population are women and 71.8% of the population live in urban areas. Based on the 2018 census, it is estimated that the total population in Colombia in 2030 will be 55,678,083 inhabitants. According to the Colombian Institute of Hydrology, Meteorology, and Environmental Studies (IDEAM), ninety-two out of 1,122 municipalities regularly measure air quality in Colombia [19]. Large cities such as Bogota, Medellin, Bucaramanga, Cali, and Barranquilla have automatic air quality monitoring networks. On the other hand, medium-sized and smaller cities perform periodic manual measurements that are not readily available [35].

Because of the scarcity of surface measurements in the country, we retrieved PM2.5 concentrations at the surface level between 2014 and 2019 (as a measure of long-term exposure) from the global estimations of the Atmospheric Composition Analysis Group (ACAG) model and from the Copernicus Atmospheric Monitoring Service (CAMS) Reanalysis (CAMSRA). The ACAG is a global three-dimensional model that estimates surface concentrations by combining Aerosol Optical Depth (AOD) retrievals with the GEOS-Chem chemical transport model; the estimations are calibrated against global ground-based observations using a geographically weighted regression. We used the ACAG global surface PM2.5 estimations at 0.1° resolution, which are freely available from the Washington University ACAG website as version V5.GL.01 [36]. CAMSRA uses four-dimensional variational data assimilation techniques that combine satellite observations with a global-scale atmospheric model to produce aerosol and particle concentrations and mixing ratios of several gases at the surface and as vertical gridded data [37, 38]. We obtained CAMSRA PM2.5 concentrations at the surface level over Colombia using the ECMWF Web API in Python provided by that platform [39].
We retrieved daily data at three-hourly temporal resolution, gridded at a 0.125° resolution (≈ 12 km), from January 1st, 2014, to December 31st, 2019. The PM2.5 concentrations were averaged per year and estimated at the centroid of each municipality by using the Inverse Distance Weighted (IDW) interpolation method from the nearest four retrieved CAMSRA concentrations [40]. Then, the results obtained from IDW interpolation were assessed statistically through the Spearman correlation coefficient (Rho) and Mean Bias (MB).

In order to evaluate the performance of the ACAG and CAMSRA PM2.5 data, we compared the retrieved annual average PM2.5 concentrations with available ground-based measurements provided by IDEAM. We only used surface PM2.5 monitoring stations with more than 75% of measurements available. We found that from 2014 to 2018, very few municipalities had enough information to compare with the models. Only in 2019 did 28 municipalities report PM2.5 measurements from 69 stations. Information from all the stations available in each municipality was averaged, and this average was compared with the downloaded ACAG and CAMSRA concentrations in each municipality using Pearson's correlation coefficient and plotting the data with a linear regression line. We used Bland & Altman limits of agreement for estimating the mean differences between model estimations and ground concentrations and their 95% confidence intervals [41].

Mortality data, avoidable mortality and years of potential life lost estimation

The annual mortality data for 2014–2019 by municipality of residence and the codes from the International Classification of Diseases 10th version (ICD-10) were obtained from the public information system for social protection in Colombia (SISPRO, for its initials in Spanish), which compiles vital statistics validated by the National Department of Statistics (DANE, for its initials in Spanish) [42]. Population estimations by municipality and life expectancy for adults 25 years and older were obtained from the DANE [34].

We calculated the number of annual avoidable deaths related to the reduction of PM2.5 concentration levels between 2014 and 2019 needed to meet the national standard of 25 µg/m3, implemented in Colombia since January 2011, and the 2030 projected national standard of 15 µg/m3, which corresponds to the current WHO interim target 3. For this purpose, we calculated annual avoidable premature deaths for adults 25 years and older using the pooled effect estimates for total, natural, and specific causes of mortality derived from meta-analyses of all international studies from Chen & Hoek [9] using Risk Ratios (RR) and selected international studies from Pope [43] using Hazard Ratios (HR), as shown in Table 1.

Table 1 Pooled effect estimates for PM2.5 exposure and mortality used for calculation of avoidable mortality

We used these RR/HRs as reference because there are no cohort studies available for Colombia to estimate the effect of long-term exposure on mortality, and these studies are the most up-to-date systematic reviews and meta-analyses of the effect of long-term exposure to PM2.5 on mortality, based on the results of 104 [9] and 75 [43] cohort studies around the world, respectively. The cohort studies included in these studies were conducted in North America, Europe, and Asia, with no studies from Africa or Central and South America; however, they covered a wide range of PM2.5 concentrations that includes the mean annual PM2.5 levels in Colombia.
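The two computational steps of the analysis — assigning a gridded concentration to each municipality centroid by IDW (described above) and converting concentration excesses into avoidable deaths (formalized in Eqs. (1) and (2) below) — can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions (a distance power of 2 for the IDW weights and purely illustrative input numbers), not the authors' code:

    # Illustrative sketch, not the authors' code.
    import numpy as np

    def idw(centroid, grid_points, values, power=2):
        """IDW interpolation from the nearest grid cells to a centroid.

        grid_points: (k, 2) array of lon/lat for the nearest CAMSRA cells
        values:      their annual-mean PM2.5 concentrations
        power:       assumed distance exponent (commonly 2)
        """
        grid_points, values = np.asarray(grid_points), np.asarray(values)
        d = np.linalg.norm(grid_points - centroid, axis=1)
        if np.any(d == 0):                 # centroid coincides with a grid node
            return values[np.argmin(d)]
        w = 1.0 / d**power
        return np.sum(w * values) / np.sum(w)

    def avoidable_deaths(y0, population, pm_annual, standard, rr, delta_q=10.0):
        """Eqs. (1)-(2) below: zero when the annual mean meets the standard."""
        beta = np.log(rr) / delta_q        # Eq. (2), RR reported per 10 ug/m3
        delta_pm = max(pm_annual - standard, 0.0)
        return y0 * population * (1.0 - np.exp(-beta * delta_pm))

    # Illustrative numbers only: baseline all-cause rate 0.008/yr, 100,000
    # adults, annual mean 28 ug/m3 versus the 25 ug/m3 standard, pooled RR 1.08.
    print(avoidable_deaths(0.008, 100_000, 28.0, 25.0, rr=1.08))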
Using the RR/HRs derived from those meta-analyses, the number of annual avoidable deaths was calculated using the log-linear function expressed by Eq. (1) [44]:

$$\mathrm{Avoidable\;deaths:}\quad \Delta Y = Y_{0} \times \mathrm{Population} \times \left(1 - e^{-\beta \Delta \mathrm{PM}}\right) \quad (1)$$

where ΔY is the change in mortality expressed as avoidable deaths, Y0 is the baseline mortality rate for all causes or specific causes (2014–2019), Population is the exposed population, ΔPM is the annual PM2.5 concentration change from baseline (annual concentration obtained from the models) to the Colombian standard of 25 µg/m3, which is the control scenario, and β is the coefficient of the RR/HRs for an increase in PM2.5 concentration. This coefficient was calculated based on the RR/HRs calculated as pooled effect estimates from the studies from Chen & Hoek (2020) and Pope (2020), expressed according to Eq. (2) [45]:

$$\beta = \ln(\mathrm{RR}) / \Delta Q \quad (2)$$

where ΔQ refers to the PM2.5 concentration change that the studies used for RR/HR estimation, which is usually 10 µg/m3.

The baseline scenarios using the annual PM2.5 concentration at municipality level were obtained from satellite data from the ACAG and CAMSRA models as described before. The control scenario was the Colombian standard of 25 µg/m3 implemented in Colombia since January 2011. Therefore, the annual number of avoidable deaths was zero for those municipalities with an annual PM2.5 concentration equal to or lower than 25 µg/m3. For those municipalities with an annual PM2.5 concentration over 25 µg/m3, the avoidable deaths calculated represent the number of deaths that could be avoided if the national standard had been reached in that year. The total number of avoidable deaths 2014–2019 was calculated as the sum of the annual avoidable deaths for each municipality. The sums of avoidable deaths for municipalities of the same department were used to calculate annual and total avoidable deaths by department. The same procedure for calculating avoidable deaths was used with the projected 2030 national standard of 15 µg/m3.

Furthermore, we calculated years of potential life lost (YPLL) due to nationwide total avoidable deaths by year for the period 2014–2019. First, we calculated age-group-specific avoidable deaths as the product of age-group-specific proportions of national deaths by the total number of avoidable deaths by year:

$$\Delta Y_{ij}=\left(\frac{D_{ij}}{D_{i}}\right)\times \Delta Y_{i}$$

where i indexed the year of interest (2014, 2015, …, 2019), j indexed sixteen five-year age groups (25–29, 30–34, …, 95–99, ≥ 100 years), and D corresponded to the nationwide total number of deaths among individuals 25 years and older. Then annual YPLL was calculated as the sum of the products of age-group avoidable deaths and their mean life expectancies (LE):

$$\mathrm{YPLL}_{i}=\sum_{j=1}^{16}\left(\Delta Y_{ij}\times \mathrm{LE}_{ij}\right)$$

PM2.5 exposure estimations and comparison against monitoring data

Figure 1 shows PM2.5 annual concentrations for 2019 at the municipality level retrieved from the ACAG and CAMSRA models. There are significant differences between the two models in the location and magnitude of the municipalities with the highest concentrations. The ACAG model shows that the municipalities with the highest concentrations (above 20 µg/m3) are in the north-central and southern parts of the country, mainly in the Amazon area and its surroundings.
On the contrary, CAMSRA shows that the highest concentrations are in the country's center, mainly Bogotá and the municipalities to the west of this city, overlapping with the most densely populated region of Colombia (World Population Review, 2021). CAMSRA also showed high PM2.5 concentrations (> 25 µg/m3) in the north of the country and along part of the Caribbean coast, overlapping with highly populated municipalities. However, in that region the ACAG model reproduces lower concentrations, between 15 and 20 µg/m3. The ACAG model reported PM2.5 concentrations over 20 µg/m3 over the eastern lowlands, in the Amazon and the Orinoco basins. There are also some differences in concentrations over time between 2014 and 2019, but in general the ACAG model is consistent in estimating higher concentrations in the north-central and southern parts of the country, and the CAMSRA model is consistent in estimating higher concentrations in the central part of the country (see Supplementary figures S1–S2). Figure 2 shows the comparison of the estimated surface PM2.5 concentrations derived from the ACAG and CAMSRA models for 2019: CAMSRA exceeded estimations from the ACAG model in 673 municipalities (60%), with a mean difference of 1.4 µg/m3.

PM2.5 mean annual concentrations for 2019 at municipality level (a) Estimations based on ACAG model (b) Estimations based on CAMSRA model

Comparison of surface PM2.5 mean annual concentrations for 2019 at municipality level based on estimations from ACAG and CAMSRA models

We also evaluated the ability of ACAG and CAMSRA to reproduce surface PM2.5 concentrations. This evaluation compared surface data from 28 Colombian cities to the data obtained from the models. Figure 3 shows results from this comparison using yearly means in 2019 for ACAG and CAMSRA. Figures 3a and c show that ACAG and CAMSRA do not correlate well with ground-based measurements (ρ = 0.25, p = 0.193 and ρ = -0.12, p = 0.558, respectively) and overestimate annual ground-based average PM2.5 concentrations for most cities (Fig. 3b and d): for the ACAG model in 22/28 (78.6%), mean bias = 1.7 µg/m3, 95%CI: 0.1–3.3; and for the CAMSRA model in 22/28 (78.6%), mean bias = 4.7 µg/m3, 95%CI: 2.4–6.9. Both ACAG and CAMSRA overestimate ground-based concentrations of PM2.5 for Bogota, the largest city in Colombia (by about 6% and 56%, respectively), and Medellin, the second largest city (by about 8% and 10%, respectively), whereas for Cali, the third-largest city, both models underestimated ground-based measurements (by 23% and 5%, respectively).

Comparison of ground- and model-based PM2.5 mean annual concentrations for 2019 in 28 cities. Scatter plot (a) and mean difference plot (b) compare ground-based concentrations with estimations based on the ACAG model. Scatter plot (c) and mean difference plot (d) compare ground-based concentrations with estimations based on the CAMSRA model. Note: In figures (a) and (c) the dotted lines represent perfect correlation among measurements and the solid black lines represent the fitted regression line for the data. In figures (b) and (d) the dotted blue lines represent the ranges of the difference between model and ground data, and the red lines represent the mean of the differences (solid red lines) and their 95% confidence intervals (dotted red lines).

Using the ACAG model, there were a total of 10 (1%) municipalities exceeding the level of the national standard of 25 µg/m3 in 2014, 27 (2.4%) in 2015, 40 (3.6%) in 2016, 15 (1.3%) in 2017, 10 (1%) in 2018, and 18 (1.6%) in 2019.
On the other hand, using the CAMSRA model, there were a total of 169 (15.1%) municipalities exceeding the level of the national standard in 2014, 238 (21.2%) in 2015, 233 (20.7%) in 2016, 81 (7.2%) in 2017, 81 (7.2%) in 2018, and 150 (13.4%) in 2019. For the same year, the PM2.5 ground-based concentrations for the 28 cities showed that only one municipality exceeded the annual standard. Therefore, the ACAG model showed a better estimation of ground-based concentrations compared to the CAMSRA model. For the scenario with an annual PM2.5 concentration standard of 15 µg/m3, using the ACAG model, there were 97 (8.6%) municipalities in 2014 that complied with this limit, 94 (8.4%) in 2015, 93 (8.3%) in 2016, 116 (10.3%) in 2017, 172 (15.3%) in 2018, and 127 (11.3%) in 2019.

Estimation of avoidable mortality

Based on the results from the comparison of both air quality models with surface data, we chose the ACAG model for the estimation of the avoidable mortality. Using PM2.5 estimates at municipality level from the ACAG model, the total number of avoidable deaths during 2014–2019 was 142 for the current PM2.5 annual standard of 25 µg/m3 and 34,341 for the projected 2030 annual standard of 15 µg/m3. Table 2 shows the number of avoidable deaths by year for the current and projected PM2.5 annual standards of 25 and 15 µg/m3 using the ACAG model. Figure 4 shows the avoidable mortality for all causes using the ACAG model by municipality for the 25 and 15 µg/m3 annual mean concentrations.

Table 2 Number of avoidable deaths using national standard and international interim target 3 as control scenarios, by mortality cause using the ACAG model for surface PM2.5 concentration estimations, Colombia 2014–2019

Avoidable mortality for all causes derived from estimations of annual surface PM2.5 concentrations based on ACAG model by municipality, Colombia, 2014–2019 (a) for national standard of 25 µg/m3 (b) for international interim target of 15 µg/m3

For comparison, the total avoidable deaths estimated for the 28 cities with data from monitoring stations in 2019 showed only 6 avoidable deaths, from the municipality of Yumbo near the city of Cali. The total avoidable deaths estimated with the CAMSRA model were on average 52 times higher than the estimation using the ACAG model (7,368 deaths; 268.9 deaths per million people over 25 years old) (see Supplementary table S1). The large differences are explained mainly by the fact that annual PM2.5 estimations using CAMSRA were higher and over the national standard for more municipalities, particularly for Bogotá and surrounding municipalities. Considering the current standard of 25 µg/m3 as reference and the ACAG model, the municipalities with the highest total avoidable deaths for all causes were Barrancabermeja (57) and San José de Cúcuta (19), both located in the northeast of the country, and Leticia (7), the capital of the department of Amazonas. Avoidable mortality by department using the ACAG model for PM2.5 exposure estimation for the annual standards of 25 and 15 µg/m3, and estimations by municipality for the annual standard of 15 µg/m3, are presented in the Supplementary material (Tables S2-S3). Avoidable deaths related to cardiopulmonary causes accounted for 45% and 38% of the total preventable deaths using the ACAG model for annual mean concentrations of 25 and 15 µg/m3, respectively.
For the estimations using the 15 µg/m3 level as control reference, ischemic heart disease represented 18% of total preventable deaths, while acute lower respiratory infections represented 4%, and lung cancer represented less than 1% of total avoidable deaths. Finally, the calculated total YPLL due to all-cause mortality for the current PM2.5 annual standard of 25 µg/m3 for the period 2014–2019 was 2,381 and 122,996 years based on the ACAG and CAMSRA PM2.5 estimations, respectively (Fig. 5). In accordance with the number of avoidable deaths, YPLL from ACAG estimations had a lower mean and annual variation than YPLL from CAMSRA: 397 years (range: 78 − 1,076) and 20,499 years (range: 13,759 − 26,851), respectively.

Years of potential life lost attributable to exposure to annual PM2.5 concentrations above 25 µg/m3 as estimated using ACAG (YPLLA) and CAMSRA (YPLLC) models

Our study estimated the avoidable mortality due to long-term exposure to PM2.5 in Colombia during 2014–2019, having as control scenarios the current national annual standard of 25 µg/m3 and the projected standard for 2030 of 15 µg/m3, which corresponds to the current WHO interim target 3. The estimated avoidable deaths were calculated at municipality level as the number of deaths that could be avoided if the national standard for PM2.5 annual mean of 25 µg/m3 had been met, and the number that could be avoided if a PM2.5 annual mean standard of 15 µg/m3 had been implemented. Estimations of deaths attributable to PM2.5 exposure differed depending on the global air quality model used for estimating the ground levels at municipality scale: more accurate surface PM2.5 concentration estimations for Colombia, and therefore more accurate estimated avoidable mortality, were obtained using the ACAG model.

Ground-based monitoring is ideal due to its high accuracy. Nevertheless, full geographic coverage with monitors is not feasible. The few PM2.5 air quality networks in developing countries may limit our ability to accurately assess human exposure to PM2.5, since measured concentrations may vary with increasing distance from the monitoring station. For this reason, advances are emerging in using ground-based data jointly with land use regression (LUR) and air quality models, satellite information, and low-cost sensors for the improvement of air quality estimates. This combination of techniques may provide better spatial coverage, although information on temporal coverage is still being studied [46]. These developments are especially important if used to estimate personal exposure and variability within a city.

There were differences in estimations of ground concentrations using the ACAG and CAMSRA models, particularly in the Amazon and the Orinoco basins. Although this area of the country holds less than 3% of the population, the region is affected by seasonal wildfires between January and April that produce large amounts of PM2.5 [21]. Differences between estimated data from models and ground monitoring information may be due to uncertainty of emission inventories or the incorrect representation of the meteorology in the region. Validation of air quality models with regional inventories is common for the United States, Canada, México, Europe, and East Asia [47, 48], although not for Colombia or other developing countries. For CAMSRA, modeling inconsistencies due to cloud interference in the Amazon and South America, or some smoke episodes, may lead to overestimated values in some cases [49].
For ACAG, the MODIS data used in the Global Fire Emission Database (GFED) inventories are too coarse to detect small and transient burning fires or other local emissions; thus, estimated values from both models may differ from ground-based air quality monitoring stations [50]. In recent years, events of Saharan dust intrusion into the Andes have been rather scarce but have affected air quality in Medellín, Bogotá, and the north of Colombia, compounded by regional biomass burning in the Orinoco basin and the Magdalena Valley [51, 52]. Despite the substantial improvement of the spatial resolution of atmospheric emission inventories in most developed countries, few emission datasets have addressed temporal profiles [53]. As explained above, model performance depends on the uncertainty of the emission inventories and meteorology. Differences between estimated data from models and ground monitoring information may be due to regional emission inventories, modeled meteorology uncertainty, and the accuracy of the satellite-retrieved total column aerosol optical depth (AOD). Van Donkelaar et al. [36] studied monthly global estimates of fine particulate matter and their uncertainty for the ACAG model, where the results showed the largest uncertainties over the relatively under-monitored regions of Africa, Latin America, and Asia. For example, uncertainties in the Andean and tropical regions of Latin America range from 3.4 to 4.4 µg/m3 and from 0.5 to 2.3 µg/m3, respectively. For the CAMSRA model, most profiles are based on western European data, which is therefore a source of uncertainty for other world regions [53]. Using the same spatial resolution for both models (around 10 km × 10 km), the ACAG model seems to capture emissions from the Amazon region better and to give more precise estimations in other regions compared to the CAMSRA model.

Using the ACAG model, for 2019 the exceedances over the current national standard were more frequently present in municipalities located in the departments of Antioquia, Córdoba, and Santander, and particularly in departments in the south of the country with lower population density, which therefore account for few total avoidable deaths. In contrast, municipalities in the department of Atlántico, located in the Caribbean region, showed no exceedances using the ACAG model for any of the years, with no contribution to the total avoidable deaths under the current annual standard. Estimated avoidable deaths in Colombia for the PM2.5 annual average scenario of 15 µg/m3 using the ACAG model were concentrated particularly in Bogotá, Medellín, Cali, and Bucaramanga, regions with high population density.

In Colombia, the national study of economic valuation of environmental degradation estimated that 8,030 deaths in the population over 44 years in 2015 were attributable to urban air pollution; 92% (7,362 deaths) were related to cardiopulmonary diseases and 8% (668 deaths) were related to lung cancer. Overall, the economic valuation of attributable deaths was estimated at 10.6 billion pesos. This study estimated PM2.5 concentrations based on PM10 reports of the national monitoring system and included only the population from 21 regions where air quality surveillance systems were in place, covering nearly half of the country's population (DNP, 2018).
Using the ACAG model, exceedances of the current national standard in 2019 occurred more frequently in municipalities located in the departments of Antioquia, Córdoba, and Santander, and particularly in departments in the south of the country with lower population density, which therefore account for few of the total avoidable deaths. In contrast, municipalities in the department of Atlántico, located in the Caribbean region, showed no exceedances using the ACAG model for any of the years, with no contribution to the total avoidable deaths under the current annual standard. Estimated avoidable deaths in Colombia for the PM2.5 annual mean scenario of 15 µg/m3 using the ACAG model were concentrated particularly in Bogotá, Medellín, Cali, and Bucaramanga, regions with high population density. In Colombia, the national study of economic valuation of environmental degradation estimated that 8,030 deaths in the population over 44 years of age in 2015 were attributable to urban air pollution; 92% (7,362 deaths) were related to cardiopulmonary diseases and 8% (668 deaths) to lung cancer. Overall, the economic valuation of attributable deaths was estimated at 10.6 billion pesos. This study estimated PM2.5 concentrations based on PM10 reports of the national monitoring system and included only the population from 21 regions where air quality surveillance systems were in place, which covered nearly half of the country's population (DNP, 2018). The national study of burden of environmental diseases estimated 15,681 deaths attributable to air pollution in 2016, following the methodology of the GBD study 2015 and using PM2.5 data from available monitoring stations; the deaths were mainly related to IHD (rate 290.15 per 100,000 population) and COPD (143.99 per 100,000 population). The attributable fractions of IHD and COPD due to PM2.5 were estimated at 15.8% and 17.5%, respectively, compared to 2.8% and 4% due to indoor pollution (INS, 2018). Differences in attributable deaths in these national studies are probably explained by the population covered, the sources and estimation of PM2.5 levels, the causes of death included, and the exposure–response curves used. Our study adds to previous studies on avoidable mortality related to ambient air pollution in Colombia, as we included all municipalities of the country by using PM2.5 estimates from two different global air quality models, and used up-to-date global estimates of pooled-effect measures (RR/HRs) derived from hundreds of cohort studies conducted around the world, including developing countries with a wide range of PM2.5 annual mean levels [9, 43]. However, it is important to note that the estimates of our study correspond to the deaths attributable to the excess of PM2.5 exposure over the current and projected 2030 national annual standards; they do not represent the total deaths attributable to PM2.5 exposure during the study period, and therefore our estimates cannot be compared directly with estimates of total deaths from the national studies mentioned above. It is also important to note that our analysis is concerned only with long-term exposure to PM2.5, represented by an average exposure over six years (2014–2019), and therefore our results are not comparable with those from analyses of short-term effects, which usually use days as the time unit. National standards differ by country, and the Colombian annual standard for PM2.5 is the highest in the region of the Americas. The WHO global AQG were updated in September 2021, one of their objectives being to "provide interim targets to guide reduction efforts towards the ultimate and timely achievement of the AQG levels for those countries that substantially exceed the AQG levels" (WHO, 2021). According to the updated WHO air quality guideline, the current Colombian PM2.5 annual standard of 25 µg/m3 corresponds to the new interim target 2. In Brazil and Chile, the current national PM2.5 annual mean standard is 20 µg/m3, and for Ecuador the standard is 15 µg/m3, which corresponds to the new interim target 3 [54]. According to the new WHO guidelines, all countries need to strengthen their efforts to reduce PM2.5 sources and emissions and to review their national standards on a path to decreasing the morbidity and mortality attributable to ambient air pollution. There are few studies assessing avoidable mortality at a nationwide scale in South America, and most of them have been conducted in Brazil. Andreão et al. [29] studied 102 cities in the southeast region of Brazil, estimating daily PM levels from satellite data and the mortality that would be avoided if the cities complied with WHO air quality guidelines. Results of this research showed that particle concentrations exceeded WHO guidelines by 8 to 12 times. Recently, Andreão et al.
[44] assessed avoidable mortality during 2014–2018 in 5,570 Brazilian cities using the ACAG model to estimate the PM2.5 annual mean; total avoidable deaths were estimated at 1,335 (10.7 per million population over 25 years old) for a target of 20 µg/m3 and 48,700 deaths (389.6 per million population over 25 years old) for a target of 10 µg/m3. Despite the wide coverage of Brazilian cities, the results of this study showed lower estimates of total avoidable deaths compared to our study in Colombia based on the ACAG model when comparing with the intermediate standard of 10 µg/m3. It is important to note that for 2014 and 2018 in Brazil there were no cities exceeding this standard, and in 2015 exceedances occurred in only 2.7% of the cities, which implies high compliance with the standard. Similarly, exceedances above the standard of 25 µg/m3 using the ACAG model ranged between 1% and 4% of the cities during the same years in Colombia, which might explain the low number of estimated attributable deaths. Results of Brazilian studies also showed that exposure to these fires induces acute health disorders [29, 44]. Colombia, like Brazil, suffers the effects of the high levels of particles emitted from wildfires during periods of significant fire activity [27]. Also, dry-season burning over Amazonia and the grassland plains of northern South America, including the Orinoco River basin, deteriorates air quality in highly populated urban centers hundreds of kilometers away from the sources [21]. Colombia updated its Nationally Determined Contribution (NDC) in 2020, with the central goals of reducing greenhouse gas emissions by 51% and black carbon emissions by 40% by 2030 (compared to 2014 levels) [55]. The Colombian NDC aims to achieve the national targets for the Sustainable Development Goals (SDGs), which were reinforced as political goals for all countries in the Glasgow Pact 2021 [56]. It is also now well recognized that countries need to make a joint effort on air quality and climate change strategies, as the two are strongly related. Countries need to assess the effects of air pollution on human health as a strategy to advocate for more restrictive air quality control and climate change mitigation goals. In this regard, quantification of mortality attributable to ambient air pollution is a key indicator of the burden of disease due to air pollution. Seen from a benefits point of view, the total avoidable deaths are the deaths that might be prevented if the PM2.5 standard had been met and a more restrictive standard were introduced. WHO recommends the quantification of the health co-benefits related to reductions in air pollutant concentrations as part of national NDCs [57]. Quantifying avoidable deaths lets the government and citizens assign a value to the magnitude of long-term air pollution effects, and define local, regional, and national goals and resource allocation. Moreover, the distribution of avoidable deaths might differ by geographical area, and therefore the spatial distribution of avoidable deaths at the municipality level, as presented in this study, informs local governments and communities for tailored local air quality planning. Strengths of this study include the use of two global air quality satellite-based models to estimate PM2.5 ground concentrations at the municipality level and the comparison of both model estimates with air quality monitoring data.
This is the first study in Colombia, and one of the few in South America, to assess the avoidable mortality related to PM2.5 long-term exposure using satellite-based global air quality models. Estimates differed widely between the CAMSRA and ACAG models but coincided in showing elevated levels of PM2.5 not only in large urban areas but also in some medium-size and small municipalities. This geographical distribution of aerosols has been reported in other studies for Colombia, which showed the influence of biomass burning in small and large municipalities [58, 59]. Biomass burning, along with wide variability in meteorological conditions across the country, might explain to some extent the differences between the two models' estimates and between model estimates and ground levels. The main difference between ACAG and CAMSRA is the input information used, especially the emissions datasets. ACAG uses the GFED4 biomass burning emission inventory as input information to quantify pollutants produced from this source, while CAMSRA uses the GFASv1.2 data set. A recent study compared these two and other data sets and found significant discrepancies among the reported biomass burning emissions (Pan et al. 2020). This is therefore one important reason for the PM2.5 concentration differences found in this study. Further research should improve emission estimations from biomass burning and identify other sources of uncertainty. In addition, our study estimated the number of avoidable deaths using updated pooled-effect estimates from the most recent global meta-analyses of cohorts assessing long-term effects on mortality. Although these meta-analyses did not include cohorts from Colombia or other countries in South America, they do include studies from different regions of the world, including countries in South Asia and China, which cover a wide range of long-term PM2.5 exposure. Therefore, estimates of RR/HRs for mortality from these global studies (Chen and Hoek, 2020; Pope et al., 2020) are the best available for assessing mortality risks related to long-term PM2.5 exposure. One important limitation of our study is that only 28 municipalities (2.5%) had available PM2.5 data from ground air quality monitoring systems, which limited the comparison with estimates from global satellite-based models. Despite the small percentage of municipalities represented, the municipalities with available data account for most of the large cities and the municipalities with coal extraction in the country [19]. However, the scarce data from the Orinoquia and Amazon regions do not represent concentrations in these large, mostly rural areas of the country where biomass burning occurs. This limitation, however, does not affect the results for estimations of avoidable deaths derived from global models. Comparison of two global air quality satellite-based models for estimating surface PM2.5 concentrations during 2014–2019 at the municipality scale in Colombia showed important differences. Compared to surface data from monitoring stations in 28 cities in 2019, ACAG model estimates of PM2.5 surface concentrations were more accurate than those of the CAMSRA model. We estimated a total of 142 and 34,341 avoidable deaths during 2014–2019 due to long-term exposure to PM2.5 exceeding the current and projected national annual mean standards of 25 and 15 µg/m3, respectively, using estimations based on the ACAG global air quality model.
For both scenarios, cardiopulmonary diseases, and particularly IHD, accounted for most of the deaths attributable to excess PM2.5 exposure. These numbers represent the range of total deaths that could have been avoided if the current national standard for the PM2.5 annual mean of 25 µg/m3 had been met and a more restrictive standard had been implemented. Estimates of avoidable deaths related to excess PM2.5 exposure at the municipality level might inform national and local decision makers about priority air quality interventions and support the health co-benefits of more restrictive air quality regulations. WHO. Ambient air pollution: a global assessment of exposure and burden of disease. Geneva: World Health Organization; 2016. https://apps.who.int/iris/handle/10665/250141. Agudelo-Castañeda D, Teixeira E, Schneider I, Lara SR, Silva LFO. Exposure to polycyclic aromatic hydrocarbons in atmospheric PM1.0 of urban environments: carcinogenic and mutagenic respiratory health risk by age groups. Environ Pollut. 2017;224:158–70. Espitia-Pérez L, da Silva J, Espitia-Pérez P, Brango H, Salcedo-Arteaga S, Hoyos-Giraldo LS, et al. Cytogenetic instability in populations with residential proximity to open-pit coal mine in Northern Colombia in relation to PM10 and PM2.5 levels. Ecotoxicol Environ Saf. 2018;148. Agudelo-Castañeda D, Teixeira EC. Assessing environmental carcinogenic risk for polycyclic aromatic hydrocarbons in PM1.0, PM2.5 and PM2.5-10 at an urban area in South Brazil. Int J Chem Environ Eng. 2016;7:2–6. Garcia KO, Teixeira EC, Agudelo-Castañeda DM, Braga M, Alabarse PG, Wiegand F, et al. Assessment of nitro-polycyclic aromatic hydrocarbons in PM1 near an area of heavy-duty traffic. Sci Total Environ. 2014;479–480:57–65. https://doi.org/10.1016/j.scitotenv.2014.01.126. Agudelo-Castañeda DM, Teixeira EC, Rolim SBa, Pereira FN, Wiegand F. Measurement of particle number and related pollutant concentrations in an urban area in South Brazil. Atmos Environ. 2013;70:254–62. Brook RD, Rajagopalan S, Pope CA, Brook JR, Bhatnagar A, Diez-Roux AV, et al. Particulate matter air pollution and cardiovascular disease: an update to the scientific statement from the American Heart Association. Circulation. 2010;121:2331–78. Burnett R, Chen H, Szyszkowicz M, Fann N, Hubbell B, Pope CA, et al. Global estimates of mortality associated with long-term exposure to outdoor fine particulate matter. Proc Natl Acad Sci U S A. 2018;115:9592–7. Chen J, Hoek G. Long-term exposure to PM and all-cause and cause-specific mortality: a systematic review and meta-analysis. Environ Int. 2020;143:105974. https://doi.org/10.1016/j.envint.2020.105974. Hystad P, Larkin A, Rangarajan S, AlHabib KF, Avezum Á, Calik KBT, et al. Associations of outdoor fine particulate air pollution and cardiovascular disease in 157 436 individuals from 21 high-income, middle-income, and low-income countries (PURE): a prospective cohort study. Lancet Planet Health. 2020;4:e235–45. Loomis D, Grosse Y, Lauby-Secretan B, El Ghissassi F, Bouvard V, Benbrahim-Tallaa L, et al. The carcinogenicity of outdoor air pollution. Lancet Oncol. 2013;14:1262–3. https://linkinghub.elsevier.com/retrieve/pii/S147020451370487X. HEI. State of Global Air 2019. Special report. Health Effects Institute; 2019. http://www.stateofglobalair.org/sites/default/files/soga_2019_report.pdf.
Cohen AJ, Brauer M, Burnett R, Anderson HR, Frostad J, Estep K, et al. Estimates and 25-year trends of the global burden of disease attributable to ambient air pollution: an analysis of data from the Global Burden of Diseases Study 2015. Lancet. 2017;389:1907–18. https://doi.org/10.1016/S0140-6736(17)30505-6. Murray CJL. Global burden of 87 risk factors in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet. 2020;396:1223–49. WHO. Global air quality guidelines. Particulate matter (PM2.5 and PM10), ozone, nitrogen dioxide, sulfur dioxide and carbon monoxide. Geneva, Switzerland; 2021. INS. Carga de enfermedad ambiental en Colombia. Décimo Informe Técnico Especial. 2018. Blanco-Becerra LC, Miranda-Soberanis V, Hernández-Cadena L, Barraza-Villarreal A, Junger W, Hurtado-Díaz M, et al. Effect of particulate matter less than 10 µm (PM10) on mortality in Bogota, Colombia: a time-series analysis, 1998–2006. Salud Publica Mex. 2014;56:363–70. Rodríguez-Villamizar LA, Rojas-Roa NY, Blanco-Becerra LC, Herrera-Galindo VM, Fernández-Niño JA. Short-term effects of air pollution on respiratory and circulatory morbidity in Colombia 2011–2014: a multi-city, time-series analysis. Int J Environ Res Public Health. 2018;15(8):1610. https://doi.org/10.3390/ijerph15081610. IDEAM. Informe del Estado de la Calidad del Aire en Colombia 2019. Bogotá D.C.; 2021. Li S, Chen L, Huang G, Lin J, Yan Y, Ni R, et al. Retrieval of surface PM2.5 mass concentrations over North China using visibility measurements and GEOS-Chem simulations. Atmos Environ. 2020;222:117121. https://doi.org/10.1016/j.atmosenv.2019.117121. Mendez-Espinosa JF, Belalcazar LC, Morales Betancourt R. Regional air quality impact of northern South America biomass burning emissions. Atmos Environ. 2019;203:131–40. https://doi.org/10.1016/j.atmosenv.2019.01.042. Akyuz E, Samavati M, Kaynak B. Spatial distribution of health risks associated with PM2.5 in Turkey and Iran using satellite and ground observations. Atmos Pollut Res. 2020;11:2350–60. https://doi.org/10.1016/j.apr.2020.08.011. Jiang Z, Jolleys MD, Fu TM, Palmer PI, Ma Y, Tian H, et al. Spatiotemporal and probability variations of surface PM2.5 over China between 2013 and 2019 and the associated changes in health risks: an integrative observation and model analysis. Sci Total Environ. 2020;723:137896. https://doi.org/10.1016/j.scitotenv.2020.137896. Wang Y, Hu J, Zhu J, Li J, Qin M, Liao H, et al. Health burden and economic impacts attributed to PM2.5 and O3 in China from 2010 to 2050 under different representative concentration pathway scenarios. Resour Conserv Recycl. 2021;173:105731. https://doi.org/10.1016/j.resconrec.2021.105731. Wang H, Li J, Gao M, Chan TC, Gao Z, Zhang M, et al. Spatiotemporal variability in long-term population exposure to PM2.5 and lung cancer mortality attributable to PM2.5 across the Yangtze River Delta (YRD) region over 2010–2016: a multistage approach. Chemosphere. 2020;257:127153. https://doi.org/10.1016/j.chemosphere.2020.127153. Ciarelli G, Colette A, Schucht S, Beekmann M, Andersson C, Manders-Groot A, et al.
Long-term health impact assessment of total PM2.5 in Europe during the 1990–2015 period. Atmos Environ X. 2019;3:100032. https://doi.org/10.1016/j.aeaoa.2019.100032. Roberts G, Wooster MJ. Global impact of landscape fire emissions on surface level PM2.5 concentrations, air quality exposure and population mortality. Atmos Environ. 2021;252:118210. https://doi.org/10.1016/j.atmosenv.2021.118210. Roux E, Ignotti E, Bègue N, Bencherif H, Catry T, Dessay N, et al. Toward an early warning system for health issues related to particulate matter exposure in Brazil: the feasibility of using global PM2.5 concentration forecast products. Remote Sens. 2020;12:1–45. Andreão WL, Pinto JA, Pedruzzi R, Kumar P, Albuquerque TT de A. Quantifying the impact of particle matter on mortality and hospitalizations in four Brazilian metropolitan areas. J Environ Manage. 2020;270. Ayubi E, Safiri S. Assessment of population exposure to PM2.5 for mortality in China and its public health benefit based on BenMAP: biases due to spatial autocorrelation and the modifiable areal unit problem (MAUP). Environ Pollut. 2017;223:635. Broome RA, Powell J, Cope ME, Morgan GG. The mortality effect of PM2.5 sources in the Greater Metropolitan Region of Sydney, Australia. Environ Int. 2020;137. Vodonos A, Schwartz J. Estimation of excess mortality due to long-term exposure to PM2.5 in continental United States using a high-spatiotemporal resolution model. Environ Res. 2021;196:110904. https://doi.org/10.1016/j.envres.2021.110904. Xie R, Sabel CE, Lu X, Zhu W, Kan H, Nielsen CP, et al. Long-term trend and spatial pattern of PM2.5 induced premature mortality in China. Environ Int. 2016;97:180–6. DANE. National Population and Housing Census. 2021. https://microdatos.dane.gov.co/index.php/catalog/643/study-description. Rodriguez-Villamizar LA, Belalcazar-Ceron LC, Fernandez-Nino JA, Marin-Pineda DM, Rojas-Sanchez OA, Acuna-Merchan LA, et al. Air pollution, sociodemographic and health conditions effects on COVID-19 mortality in Colombia: an ecological study. Sci Total Environ. 2020. https://doi.org/10.1016/j.scitotenv.2020.144020. Van Donkelaar A, Hammer MS, Bindle L, Brauer M, Brook JR, Garay MJ, et al. Monthly global estimates of fine particulate matter and their uncertainty. Environ Sci Technol. 2021;55:15287–300. ECMWF. European Centre for Medium-Range Weather Forecasts. 2021 [cited 2022 Jul 2]. www.ecmwf.int. Flemming J, Benedetti A, Inness A, Engelen JR, Jones L, Huijnen V, et al. The CAMS interim reanalysis of carbon monoxide, ozone and aerosol for 2003–2015. Atmos Chem Phys. 2017;17:1945–83. Inness A, Ades M, Agustí-Panareda A, Barr J, Benedictow A, Blechschmidt AM, et al. The CAMS reanalysis of atmospheric composition. Atmos Chem Phys. 2019;19:3515–56. Ogbozige FJ, Adie DB, Abubakar UA. Water quality assessment and mapping using inverse distance weighted interpolation: a case of River Kaduna, Nigeria. Niger J Technol. 2018;37:249. Bland JM, Altman DG. Measuring agreement in method comparison studies with heteroscedastic measurements. Stat Methods Med Res. 1999;8:135–60. SISPRO. Sistema Integrado de Información de la Protección Social. 2021. https://www.sispro.gov.co/Pages/Home.aspx. Pope CA, Coleman N, Pond ZA, Burnett RT.
Fine particulate air pollution and human mortality: 25+ years of cohort studies. Environ Res. 2020;183:108924. https://doi.org/10.1016/j.envres.2019.108924. Andreão WL, Albuquerque TT de A. Avoidable mortality by implementing more restrictive fine particles standards in Brazil: an estimation using satellite surface data. Environ Res. 2021;192. Sacks JD, Lloyd JM, Zhu Y, Anderton J, Jang CJ, Hubbell B, et al. The environmental benefits mapping and analysis program – Community Edition (BenMAP-CE): a tool to estimate the health and economic benefits of reducing air pollution. Environ Model Softw. 2018;104:118–29. Sorek-Hamer M, Chatfield R, Liu Y. Review: strategies for using satellite-based products in modeling PM2.5 and short-term pollution episodes. Environ Int. 2020;144:106057. https://doi.org/10.1016/j.envint.2020.106057. Hammer MS, Van Donkelaar A, Li C, Lyapustin A, Sayer AM, Hsu NC, et al. Global estimates and long-term trends of fine particulate matter concentrations (1998–2018). Environ Sci Technol. 2020;54:7879–90. Tian R, Ma X, Jia H, Yu F, Sha T, Zan Y. Aerosol radiative effects on tropospheric photochemistry with GEOS-Chem simulations. Atmos Environ. 2019;208:82–94. https://doi.org/10.1016/j.atmosenv.2019.03.032. Gueymard CA, Yang D. Worldwide validation of CAMS and MERRA-2 reanalysis aerosol optical depth products using 15 years of AERONET observations. Atmos Environ. 2020;225:117216. https://doi.org/10.1016/j.atmosenv.2019.117216. Li S, Zhang L, Cai K, Ge W, Zhang X. Comparisons of the vertical distributions of aerosols in the CALIPSO and GEOS-Chem datasets in China. Atmos Environ X. 2019;3:100036. https://doi.org/10.1016/j.aeaoa.2019.100036. Mendez-Espinosa JF, Rojas NY, Vargas J, Pachón JE, Belalcazar LC, Ramírez O. Air quality variations in northern South America during the COVID-19 lockdown. Sci Total Environ. 2020;749:141621. https://doi.org/10.1016/j.scitotenv.2020.141621. Trejos EM, Silva LFO, Hower JC, Flores EMM, González CM, Pachón JE, et al. Volcanic emissions and atmospheric pollution: a study of nanoparticles. Geosci Front. 2021;12:746–55. Guevara M, Jorba O, Tena C, Denier Van Der Gon H, Kuenen J, Elguindi N, et al. Copernicus Atmosphere Monitoring Service TEMPOral profiles (CAMS-TEMPO): global and European emission temporal profile maps for atmospheric chemistry modelling. Earth Syst Sci Data. 2021;13:367–404. Nazarenko Y, Pal D, Ariya PA. Air quality standards for the concentration of particulate matter 2.5, global descriptive analysis. Bull World Health Organ. 2021;99:125–37. Gobierno de Colombia. Actualización de la Contribución Prevista Determinada a Nivel Nacional de la República de Colombia. 2020. UNFCCC. Glasgow Climate Pact, advance unedited version. COP26. 2021;1–8. WHO. Health in Nationally Determined Contributions (NDCs): a WHO review. Geneva; 2020. https://www.who.int/publications/i/item/who-review-health-in-the-ndcs. Ballesteros-González K, Sullivan AP, Morales-Betancourt R. Estimating the air quality and health impacts of biomass burning in northern South America using a chemical transport model. Sci Total Environ. 2020;739:139755. https://doi.org/10.1016/j.scitotenv.2020.139755. Luna MAG, Luna FAG, Espinosa JFM, Cerón LCB.
Spatial and temporal assessment of particulate matter using AOD data from MODIS and surface measurements in the ambient air of Colombia. Asian J Atmos Environ. 2018;12:165–77. The authors would like to thank Yurley Rojas for her contribution to the generation of maps. The opinions, findings, ideas, and conclusions expressed in this publication are the personal responsibility of the authors and do not commit the Universidad del Norte or the supporting agencies. The authors have no conflicts of interest relevant to this article to declare. Reference to any companies or specific commercial products does not constitute endorsement, recommendation, or favoring by any university that participated in this research. This work was supported by the Colombian Ministry of Science and Technology (MINCIENCIAS), Grant No. 905-2019 and Grant No. 874 of 2020. The funder did not have any role in the design, analysis, or interpretation of the study. Department of Public Health, Universidad Industrial de Santander, Carrera 32 29-31 Of. 301 Facultad de Salud, 68002, Bucaramanga, Colombia: Laura A. Rodriguez-Villamizar & Víctor Herrera. School of Engineering, Universidad Nacional de Colombia, Bogotá, Colombia: Luis Carlos Belalcazar-Ceron, María Paula Castillo & Edwin Ricardo Sanchez. Faculty of Health Sciences, Universidad Autónoma de Bucaramanga, Bucaramanga, Colombia: Víctor Herrera. Department of Civil and Environmental Engineering, Universidad del Norte, Barranquilla, Colombia: Dayana Milena Agudelo-Castañeda. Laura A. Rodriguez-Villamizar: Conceptualization, Methodology, Formal analysis, Writing - original draft. Luis Carlos Belalcazar-Ceron: Methodology, Validation, Data curation, Formal analysis, Writing - original draft. María Paula Castillo: Validation, Data curation, Formal analysis, Writing - review & editing. Edwin Ricardo Sanchez: Validation, Data curation, Formal analysis, Writing - review & editing. Víctor Herrera: Methodology, Formal analysis, Writing - original draft. Dayana Milena Agudelo-Castañeda: Conceptualization, Methodology, Writing - original draft. The author(s) read and approved the final manuscript. Correspondence to Laura A. Rodriguez-Villamizar. Figure S1. Estimations of surface PM2.5 mean annual concentrations at municipality level based on the ACAG model, Colombia, 2014-2019. Figure S2. Estimations of surface PM2.5 mean annual concentrations at municipality level based on the CAMSRA model, Colombia, 2014-2019. Additional file 2: Table S1. Number of avoidable deaths using the national standard of 25 µg/m3 as control scenario, using exposure estimations from the ACAG and CAMSRA global air quality models for surface PM2.5, Colombia 2014-2019. Table S2. Number of avoidable deaths by department using the national standard and the international interim target 3 as control scenarios, by mortality cause, using the ACAG model for surface PM2.5 concentration estimations, Colombia 2014-2019. Table S3. Number of avoidable deaths by municipality using the WHO interim target 3 of 15 µg/m3 as control scenario, using exposure estimations from the ACAG global air quality model, Colombia 2014-2019. Rodriguez-Villamizar, L.A., Belalcazar-Ceron, L.C., Castillo, M.P. et al. Avoidable mortality due to long-term exposure to PM2.5 in Colombia 2014–2019. Environ Health 21, 137 (2022). https://doi.org/10.1186/s12940-022-00947-8
Global dynamics in a stochastic three species food-chain model with harvesting and distributed delays
Nafeisha Tuerxun, Zhidong Teng (corresponding author) and Ahmadjan Muhammadhaji
Advances in Difference Equations 2019, 2019:187. Received: 18 January 2019. Accepted: 29 April 2019.
This paper proposes a stochastic three species food-chain model with harvesting and distributed delays. Some criteria for the global dynamics of all positive solutions, including the existence of global positive solutions, stochastic boundedness, extinction, global asymptotic stability in the mean, and the probability distribution, are established by using stochastic integral inequalities, the Lyapunov function method, and the inequality estimation technique. Furthermore, the effects of harvesting are discussed, and the optimal harvesting strategy and the maximum expectation of sustainable yield (MESY for short) are obtained. Finally, numerical examples are carried out to illustrate our main results.
Keywords: Stochastic food-chain model; Distributed delay; Inequality estimation; Global stability
The notion of a food chain was first postulated by Elton in 1927 (see [1]). As he said, he proposed this idea inspired by the Chinese folk adage: big fish eat small fish, small fish eat shrimps, shrimps eat mud. Food-chain models have been extensively studied because of their academic and practical implications. The following deterministic three species food-chain model has been investigated by many scholars (see [2–5]): $$ \textstyle\begin{cases} \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}= x_{1}(t)[r_{1}-a_{11}x_{1}(t)-a_{12}x_{2}(t)],\\ \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}=x_{2}(t)[-r_{2}+a_{21}x_{1}(t)-a_{22}x_{2}(t)-a_{23}x_{3}(t)],\\ \frac{\mathrm{d}x_{3}(t)}{\mathrm{d}t}=x_{3}(t)[-r_{3}+a_{32}x_{2}(t)-a_{33}x_{3}(t)], \end{cases} $$ where \(x_{i}(t)\) (\(i=1,2,3\)) represent the population sizes of the prey, intermediate predator, and top predator at time t, respectively. Nevertheless, in the real world, it is hard to protect population systems from environmental noise (see [6–15]). Taking the influence of white noises into the above model, Liu and Bai in [16] proposed the following stochastic three species food-chain model: $$ \textstyle\begin{cases} \mathrm{d}x_{1}(t)=x_{1}(t)[r_{1}-h_{1}-a_{11}x_{1}(t)-a_{12}x_{2}(t)]\,\mathrm{d}t+\sigma _{1}x_{1}(t)\,\mathrm{d}B_{1}(t),\\ \mathrm{d}x_{2}(t)=x_{2}(t)[-r_{2}-h_{2}+a_{21}x_{1}(t)-a_{22}x_{2}(t)-a_{23}x_{3}(t)]\,\mathrm{d}t+\sigma _{2}x_{2}(t)\,\mathrm{d}B_{2}(t),\\ \mathrm{d}x_{3}(t)=x_{3}(t)[-r_{3}-h_{3}+a_{32}x_{2}(t)-a_{33}x_{3}(t)]\,\mathrm{d}t+\sigma _{3}x_{3}(t)\,\mathrm{d}B_{3}(t). \end{cases} $$
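Before delays are added, it may help to see how sample paths of the stochastic model above are generated numerically. The Euler–Maruyama sketch below is purely illustrative: the parameter values, harvesting efforts, noise intensities, step size, and seed are hypothetical choices, not taken from [16] or from this paper.

```python
# Euler-Maruyama sketch for the stochastic food-chain model (no delays).
# Drift and diffusion follow the displayed equations; parameters hypothetical.
import numpy as np

rng = np.random.default_rng(1)
r = np.array([0.8, 0.2, 0.1]); h = np.array([0.1, 0.05, 0.02])
a = np.array([[0.5, 0.3, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.3, 0.5]])
sigma = np.array([0.1, 0.1, 0.1])
dt, n_steps = 1e-3, 200_000
x = np.array([0.5, 0.3, 0.2])            # initial population sizes

for _ in range(n_steps):
    drift = np.array([
        x[0] * (r[0] - h[0] - a[0, 0] * x[0] - a[0, 1] * x[1]),
        x[1] * (-r[1] - h[1] + a[1, 0] * x[0] - a[1, 1] * x[1] - a[1, 2] * x[2]),
        x[2] * (-r[2] - h[2] + a[2, 1] * x[1] - a[2, 2] * x[2]),
    ])
    dB = rng.normal(0.0, np.sqrt(dt), size=3)
    x = np.maximum(x + drift * dt + sigma * x * dB, 1e-12)  # numerical safeguard;
    # true solutions stay positive, the clamp only guards the discretization

print(x)  # terminal state of one sample path
```

Time averages such as \(\frac{1}{t}\int _{0}^{t}x_{i}(s)\,\mathrm{d}s\), which appear throughout the results below, can then be estimated by accumulating along the path or averaging over many paths.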
Time delays are common and inevitable in nature, and they often degrade system behavior or even cause instability. However, any species in nature will not react at once to variation in its own population size or that of an interacting species, but rather after a time lag. In other words, it is essential to investigate the effect of delays on the food-chain model. Thus, Li and Wang in [17] proposed a delayed food-chain system with the Beddington–DeAngelis functional response, and they found that delays affect the stability of equilibrium points and the existence of Hopf bifurcation. From [18, 19], systems with distributed time delays include not only those with discrete time delays but also those with continuously distributed time delays. To the best of our knowledge, a stochastic food-chain model with harvesting and distributed delays has not been studied in previous research. Motivated by the above discussion, considering distributed time delays and white noises, in this paper we establish the following stochastic three species food-chain model: $$ \textstyle\begin{cases} \mathrm{d}x_{1}(t)= x_{1}(t)[r_{1}-h_{1}-a_{11}x_{1}(t)-a_{12}\int _{-\tau _{12}}^{0}x_{2}(t+\theta )\,\mathrm{d}\mu _{12}(\theta )]\,\mathrm{d}t \\ \hphantom{\mathrm{d}x_{1}(t)=}{}+\sigma _{1}x_{1}(t)\,\mathrm{d}B_{1}(t), \\ \mathrm{d}x_{2}(t)= x_{2}(t)[-r_{2}-h_{2}+a_{21}\int _{-\tau _{21}}^{0}x_{1}(t+\theta )\,\mathrm{d}\mu _{21}(\theta )-a_{22}x_{2}(t)\\ \hphantom{\mathrm{d}x_{2}(t)=}{} -a_{23}\int _{-\tau _{23}}^{0}x_{3}(t+\theta )\,\mathrm{d}\mu _{23}(\theta )]\,\mathrm{d}t+\sigma _{2}x_{2}(t)\,\mathrm{d}B_{2}(t),\\ \mathrm{d}x_{3}(t)= x_{3}(t)[-r_{3}-h_{3}+a_{32}\int _{-\tau _{32}}^{0}x_{2}(t+\theta )\,\mathrm{d}\mu _{32}(\theta )-a_{33}x_{3}(t)]\,\mathrm{d}t \\ \hphantom{\mathrm{d}x_{3}(t)=}{}+\sigma _{3}x_{3}(t)\,\mathrm{d}B_{3}(t), \end{cases} $$ where \(r_{1}>0\) is the intrinsic growth rate of species \(x_{1}\), \(r_{i}>0\) (\(i=2,3\)) stand for the death rates of species \(x_{i}\), \(a_{ii}>0\) (\(i=1,2,3\)) are the intraspecific competition coefficients of species \(x_{i}\), \(a_{12}\geq 0\) and \(a_{23}\geq 0\) are the capture rates, \(a_{21}\geq 0\) and \(a_{32}\geq 0\) measure the efficiency of food conversion, \(h_{i}\geq 0\) (\(i=1,2,3\)) stands for the harvesting effort on species \(x_{i}\), \(\mu _{ij}(\theta )\) (\(i,j=1,2,3\)) are nonnegative variation functions defined on \([-\tau _{ij},0]\) satisfying \(\int _{-\tau _{ij}}^{0}\mathrm{d}\mu _{ij}(\theta )=1\), \(B_{i}(t)\) (\(i=1,2,3\)) are standard independent Brownian motions defined on the complete probability space \((\varOmega ,\{\mathcal{F}_{t}\}_{t\geq 0},P)\) with a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions, and \(\sigma _{i}^{2}\) (\(i=1,2,3\)) is the intensity of \(B_{i}(t)\). In this paper we first investigate the global dynamics of model (1), including the existence of global positive solutions, stochastic boundedness, extinction, global asymptotic stability in the mean, and the probability distribution, by using stochastic integral inequalities, the Lyapunov function method, and the inequality estimation technique. Next, we discuss the effects of harvesting on the extinction and persistence of the species of model (1), and establish the optimal harvesting effort \(H^{*}=(h_{1}^{*},h_{2}^{*},h_{3}^{*})\) such that all the species remain non-extinct and the expectation of sustained yield \(Y(H^{*})=\lim_{t\to \infty }\sum_{i=1}^{3}E(h^{*}_{i}x_{i}(t))\) is maximal. The organization of this paper is as follows. In Sect. 2, we propose some useful lemmas which will be used in the proofs of the main results. We also obtain the existence and stochastic boundedness of a unique global positive solution for any positive initial value. In Sect. 3, the global dynamics of positive solutions are investigated. A complete criterion for the extinction and global asymptotic stability in the mean with probability one is established. Furthermore, a criterion for the global asymptotic stability in the probability distribution is also established. In Sect. 4, the effects of harvesting on the extinction and persistence of species are discussed, and the sufficient conditions for the existence and non-existence of optimal harvesting are obtained. We also offer numerical examples to illustrate our main results in Sect. 5. Lastly, in Sect. 6 we give a brief conclusion and propose some interesting open problems.
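Numerically, the distributed delay terms \(\int _{-\tau _{ij}}^{0}x(t+\theta )\,\mathrm{d}\mu _{ij}(\theta )\) in model (1) are typically approximated by a weighted sum over stored past states, with weights that sum to 1 to mirror \(\int _{-\tau _{ij}}^{0}\mathrm{d}\mu _{ij}(\theta )=1\). A minimal sketch of this idea (the kernel shape, grid, and history below are hypothetical choices, not prescribed by the paper):

```python
# Sketch: approximate the distributed-delay integral
#   int_{-tau}^{0} x(t + theta) d mu(theta)
# by a quadrature over stored history, with weights summing to 1.
import numpy as np

def delayed_term(history, weights):
    """history[k] ~ x(t - k*dt); weights[k] ~ mu-mass at theta = -k*dt."""
    return float(np.dot(weights, history))

K, dt = 50, 0.01                 # delay grid: tau = K*dt = 0.5
theta = -dt * np.arange(K + 1)   # grid points in [-tau, 0]
w = np.exp(theta / 0.2)          # e.g. an exponentially fading kernel
w /= w.sum()                     # normalize, mirroring int d mu = 1

hist = np.full(K + 1, 0.5)       # constant initial history x(theta) = 0.5
print(delayed_term(hist, w))     # -> 0.5 exactly, as normalization requires
```

With such a helper in place, the Euler–Maruyama loop shown earlier extends to model (1) by feeding each delayed drift term the weighted history instead of the instantaneous state.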
2 Preliminaries
Firstly, for convenience of the statements, we denote \(b_{1}=r_{1}-h_{1}-\frac{1}{2}\sigma _{1}^{2}\), \(b_{2}=r_{2}+h_{2}+\frac{1}{2}\sigma _{2}^{2}\), \(b_{3}=r_{3}+h_{3}+\frac{1}{2}\sigma _{3}^{2}\), \(\Delta _{11}=b_{1}\), \(\Delta _{21}=b_{1}a_{22}+b_{2}a_{12}\), \(\Delta _{22}=b_{1}a_{21}-b_{2}a_{11}\), \(\Delta _{31}=b_{1}(a_{22}a_{33}+a_{32}a_{23})+b_{2}a_{33}a_{12}-b_{3}a_{12}a_{23}\), \(\Delta _{32}=a_{33}(b_{1}a_{21}-b_{2}a_{11})+b_{3}a_{11}a_{23}\), \(\Delta _{33}=(b_{1}a_{21}-b_{2}a_{11})a_{32}-b_{3}(a_{11}a_{22}+a_{12}a_{21})\), \(H_{1}=a_{11}\), \(H_{2}=a_{11}a_{22}+a_{12}a_{21}\), and \(H_{3}=a_{11}a_{22}a_{33}+a_{33}a_{12}a_{21}+a_{11}a_{32}a_{23}\). Obviously, when \(b_{1}\geq 0\), we have \(\Delta _{21}\geq 0\). Furthermore, we have the following.
Lemma 1
If \(\Delta _{33}>0\), then \(\Delta _{31}>0\) and \(\Delta _{32}>0\).
Proof
Let \(x_{1}^{*}=\frac{\Delta _{31}}{H_{3}}\), \(x_{2}^{*}=\frac{\Delta _{32}}{H_{3}}\), and \(x_{3}^{*}=\frac{\Delta _{33}}{H_{3}}\). Then \(x_{3}^{*}>0\). By calculating, we can obtain $$ a_{32}x_{2}^{*}=b_{3}+a_{33}x_{3}^{*}>0, \qquad a_{21}x_{1}^{*}=b_{2}+a_{22}x_{2}^{*}+a_{23}x_{3}^{*}>0. $$ Therefore, we have \(\Delta _{31}>0\) and \(\Delta _{32}>0\). This completes the proof. □
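Since the classification results later in the paper hinge on the signs of \(\Delta _{11}\), \(\Delta _{22}\), and \(\Delta _{33}\), it can be useful to evaluate these quantities numerically. A small sketch under hypothetical parameter values (the same illustrative values as the simulation sketch earlier; they make all three quantities positive, so Lemma 1 predicts \(\Delta _{31}>0\) and \(\Delta _{32}>0\) as well):

```python
# Sketch: evaluate b_i, the Delta quantities, and H_3 for a hypothetical
# parameter set; the signs of d11, d22, d33 select the dynamical regime.
r = {1: 0.8, 2: 0.2, 3: 0.1}
h = {1: 0.1, 2: 0.05, 3: 0.02}
sigma = {1: 0.1, 2: 0.1, 3: 0.1}
a = {(1, 1): 0.5, (1, 2): 0.3, (2, 1): 0.4, (2, 2): 0.4,
     (2, 3): 0.2, (3, 2): 0.3, (3, 3): 0.5}

b1 = r[1] - h[1] - sigma[1] ** 2 / 2
b2 = r[2] + h[2] + sigma[2] ** 2 / 2
b3 = r[3] + h[3] + sigma[3] ** 2 / 2

d11 = b1
d22 = b1 * a[2, 1] - b2 * a[1, 1]
d31 = b1 * (a[2, 2] * a[3, 3] + a[3, 2] * a[2, 3]) \
    + b2 * a[3, 3] * a[1, 2] - b3 * a[1, 2] * a[2, 3]
d32 = a[3, 3] * (b1 * a[2, 1] - b2 * a[1, 1]) + b3 * a[1, 1] * a[2, 3]
d33 = (b1 * a[2, 1] - b2 * a[1, 1]) * a[3, 2] \
    - b3 * (a[1, 1] * a[2, 2] + a[1, 2] * a[2, 1])
h3 = a[1, 1] * a[2, 2] * a[3, 3] + a[3, 3] * a[1, 2] * a[2, 1] \
    + a[1, 1] * a[3, 2] * a[2, 3]

print(d11, d22, d33)                  # here all positive (persistent case)
print(d31 / h3, d32 / h3, d33 / h3)   # candidate time-average limits
```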
Lemma 2
For any real numbers \(A\geq 0\), \(B\geq 0\), \(A_{i}\geq 0\) (\(1\leq i\leq n\)), and \(p>0\), \(q>0\) with \(\frac{1}{p}+\frac{1}{q}=1\), one has $$ \Biggl(\sum_{i=1}^{n}A_{i} \Biggr)^{p}\leq n^{p}\sum_{i=1}^{n}A_{i}^{p}, \qquad AB \leq \frac{A^{p}}{p}+\frac{B^{q}}{q}. $$
Let \(\gamma =\max \{\tau _{12},\tau _{21},\tau _{23},\tau _{32}\}\). The initial condition for model (1) is given by $$ x_{1}(\theta )=\xi (\theta ),\qquad x_{2}(\theta )=\eta (\theta ),\qquad x_{3}(\theta )=\varsigma (\theta ),\quad -\gamma \leq \theta \leq 0. $$ On the existence and the ultimate boundedness of the global positive solution for model (1), we have the following results.
Lemma 3
For any \((\xi (\theta ),\eta (\theta ),\varsigma (\theta ))\in C([-\gamma ,0],R_{+}^{3})\), model (1) with condition (2) has a unique global solution \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t))\in R^{3}_{+}\) a.s. for all \(t\geq 0\). Moreover, for any \(p>0\), there exist constants \(K_{1}(p)>0\), \(K_{2}(p)>0\), and \(K_{3}(p)>0\) such that $$ \limsup_{t\to \infty }E\bigl[x_{1}^{p}(t)\bigr]\leq K_{1}(p),\qquad \limsup_{t\to \infty }E\bigl[x_{2}^{p}(t)\bigr]\leq K_{2}(p),\qquad \limsup_{t\to \infty }E\bigl[x_{3}^{p}(t)\bigr]\leq K_{3}(p). $$
Proof
Since the coefficients of model (1) are locally Lipschitz, from [14, 20] we obtain that, for any initial data \((\xi (\theta ),\eta (\theta ),\varsigma (\theta ))\in C([-\gamma ,0],R_{+}^{3})\), model (1) has a unique solution \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t))\in R^{3}_{+}\) for all \(t\in [-\gamma ,\tau _{e})\), where \(\tau _{e}\) is the explosion time. We need to prove \(\tau _{e}=\infty \) a.s. Let \(k_{0}>0\) be a large enough integer such that \(\xi (0),\eta (0),\varsigma (0)\in (\frac{1}{k_{0}},k_{0})\). For each integer \(k>k_{0}\), define stopping times as follows: $$ \tau _{k}=\inf \biggl\{ t\in [0,\tau _{e}): x_{1}(t)\notin \biggl(\frac{1}{k},k\biggr),x_{2}(t)\notin \biggl(\frac{1}{k},k\biggr),x_{3}(t)\notin \biggl(\frac{1}{k},k\biggr)\biggr\} . $$ It is clear that \(\tau _{k}\) is increasing with k. Set \(\tau _{\infty }=\lim_{k\to \infty }\tau _{k}\). We have \(\tau _{\infty }\leq \tau _{e}\) a.s. Thus, we only need to prove \(\tau _{\infty }=\infty\) a.s. If the conclusion is false, then there exist \(T>0\) and \(\varepsilon \in (0,1)\) such that \(P(\tau _{\infty }\leq T)>\varepsilon \). Hence, there exists an integer \(k_{1}>k_{0}\) such that, for any \(k>k_{1}\), $$ P(\tau _{k}\leq T)>\varepsilon . $$ Define \(V_{i}(x_{i})=x_{i}-1-\ln x_{i}\) (\(i=1,2,3\)). Using Itô's formula, we obtain $$\begin{aligned}& \mathrm{d}V_{1}(x_{1})=\mathcal{L}\bigl[V_{1}(x_{1})\bigr]\,\mathrm{d}t+\sigma _{1}(x_{1}-1)\,\mathrm{d}B_{1}(t), \\& \mathrm{d}V_{2}(x_{2})=\mathcal{L}\bigl[V_{2}(x_{2})\bigr]\,\mathrm{d}t+\sigma _{2}(x_{2}-1)\,\mathrm{d}B_{2}(t), \\& \mathrm{d}V_{3}(x_{3})=\mathcal{L}\bigl[V_{3}(x_{3})\bigr]\,\mathrm{d}t+\sigma _{3}(x_{3}-1)\,\mathrm{d}B_{3}(t), \end{aligned}$$ where $$\begin{aligned}& \begin{aligned} &\mathcal{L}\bigl[V_{1}(x_{1})\bigr]= (x_{1}-1) \biggl(r_{1}-h_{1}-a_{11}x_{1}(t)-a_{12} \int _{-\tau _{12}}^{0}x_{2}(t+\theta )\,\mathrm{d}\mu _{12}(\theta )\biggr) +\frac{1}{2}\sigma _{1}^{2}, \\ &\mathcal{L}\bigl[V_{2}(x_{2})\bigr]=(x_{2}-1) \biggl(-r_{2}-h_{2}+a_{21} \int _{-\tau _{21}}^{0}x_{1}(t+\theta )\,\mathrm{d}\mu _{21}(\theta )-a_{22}x_{2}(t) \\ &\hphantom{\mathcal{L}[V_{2}(x_{2})]=}{}-a_{23} \int _{-\tau _{23}}^{0}x_{3}(t+\theta )\,\mathrm{d}\mu _{23}(\theta )\biggr)+\frac{1}{2}\sigma _{2}^{2}, \\ &\mathcal{L}\bigl[V_{3}(x_{3})\bigr]=(x_{3}-1)\biggl(-r_{3}-h_{3}-a_{33}x_{3}(t)+a_{32} \int _{-\tau _{32}}^{0}x_{2}(t+\theta )\,\mathrm{d}\mu _{32}(\theta )\biggr) +\frac{1}{2}\sigma _{3}^{2}. \end{aligned} \end{aligned}$$ For any integer \(n>0\), using Lemma 2 we can obtain $$\begin{aligned}& \begin{aligned} &\mathcal{L}\bigl[V_{1}(x_{1})\bigr]\leq \frac{\sigma _{1}^{2}}{2}-(r_{1}-h_{1})+\frac{n^{2}}{2}a_{12}+(r_{1}-h_{1})x_{1}+a_{11}x_{1}-a_{11}x_{1}^{2} \\ &\hphantom{\mathcal{L}[V_{1}(x_{1})]\leq}{}+\frac{1}{2n^{2}}a_{12} \int _{-\tau _{12}}^{0}x_{2}^{2}(t+\theta )\,\mathrm{d}\mu _{12}(\theta ), \\ &\mathcal{L}\bigl[V_{2}(x_{2})\bigr]\leq \frac{\sigma _{2}^{2}}{2}+(r_{2}+h_{2}) +\frac{n}{2}a_{21} \int _{-\tau _{21}}^{0}x_{1}^{2}(t+\theta )\,\mathrm{d}\mu _{21}(\theta )-(r_{2}+h_{2})x_{2}+a_{22}x_{2} \\ &\hphantom{\mathcal{L}[V_{2}(x_{2})]\leq}{}-a_{22}x_{2}^{2}+\frac{x_{2}^{2}}{2n}a_{21}+\frac{n^{2}}{2}a_{23}+\frac{1}{2n^{2}}a_{23} \int _{-\tau _{23}}^{0}x_{3}^{2}(t+\theta )\,\mathrm{d}\mu _{23}(\theta ), \\ &\mathcal{L}\bigl[V_{3}(x_{3})\bigr]\leq \frac{\sigma _{3}^{2}}{2}+(r_{3}+h_{3})+\frac{x_{3}^{2}}{2n}a_{32}-(r_{3}+h_{3})x_{3}+a_{33}x_{3}-a_{33}x_{3}^{2} \\ &\hphantom{\mathcal{L}[V_{3}(x_{3})]\leq}{}+\frac{n}{2}a_{32} \int _{-\tau _{32}}^{0}x_{2}^{2}(t+\theta )\,\mathrm{d}\mu _{32}(\theta ). \end{aligned} \end{aligned}$$ Define \(V_{0}(x_{1},x_{2},x_{3})=\alpha V_{1}(x_{1})+V_{2}(x_{2})+\eta V_{3}(x_{3})+V_{4}(t)\), where $$\begin{aligned} V_{4}(t) = &\frac{\alpha }{2n^{2}}a_{12} \int _{-\tau _{12}}^{0} \int _{t+\theta }^{t}x_{2}^{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{12}(\theta ) +\biggl(\frac{n}{2}a_{21} \int _{-\tau _{21}}^{0} \int _{t+\theta }^{t}x_{1}^{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{21}(\theta ) \\ &{}+\frac{1}{2n^{2}}a_{23} \int _{-\tau _{23}}^{0} \int _{t+\theta }^{t}x_{3}^{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{23}(\theta )\biggr) +\eta \frac{n}{2}a_{32} \int _{-\tau _{32}}^{0} \int _{t+\theta }^{t}x_{2}^{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{32}(\theta ).
\end{aligned}$$ Choose the positive constants α, η and integer \(n>0\) such that $$ \begin{aligned} &\biggl(-a_{22}+\frac{1}{2n}a_{21}\biggr) +\frac{n\eta }{2}a_{32}+\frac{\alpha }{2n^{2}}a_{12}< 0, \\ &\biggl(-a_{33}+\frac{1}{2n}a_{32}\biggr)\eta +\frac{1}{2n^{2}}a_{23}< 0, \qquad -a_{11}\alpha +\frac{n}{2}a_{21}< 0. \end{aligned} $$ In fact, from \((-a_{33}+\frac{1}{2n}a_{32})\eta +\frac{1}{2n^{2}}a_{23}=0\) and \(-a_{11}\alpha +\frac{n}{2}a_{21}=0\), we have \(\eta =\frac{a_{23}}{n(2na_{33}-a_{32})}\) and \(\alpha =\frac{na_{21}}{2a_{11}}\). Substituting η and α into the left-hand side of the first inequality of (8), we can obtain that there is a large enough \(n>0\) such that \(2na_{33}-a_{32}>0\) and \(-a_{22}+\frac{a_{21}}{2n}+\frac{a_{32}a_{23}}{2(2na_{33}-a_{32})}+\frac{a_{12}a_{21}}{4na_{11}}<-\frac{1}{2}a_{22}\). From this, we further choose positive constants \(\eta >\frac{a_{23}}{n(2na_{33}-a_{32})}\) and \(\alpha >\frac{na_{21}}{2a_{11}}\) such that (8) holds. Using Itô's formula, from (5) we have $$\begin{aligned} d\bigl[V_{0}(x_{1},x_{2},x_{3})\bigr] = &\mathcal{L}V_{0}(x_{1},x_{2},x_{3})\,\mathrm{d}t+\alpha \sigma _{1}(x_{1}-1)\,\mathrm{d}B_{1}(t) \\ &{}+\sigma _{2}(x_{2}-1)\,\mathrm{d}B_{2}(t)+\eta \sigma _{3}(x_{3}-1)\,\mathrm{d}B_{3}(t). \end{aligned}$$ From (6) and (7), we obtain $$\begin{aligned} \mathcal{L}\bigl[V_{0}(x_{1},x_{2},x_{3})\bigr] = &\alpha \mathcal{L}V_{1}(x_{1})+\mathcal{L}V_{2}(x_{2})+\eta \mathcal{L}V_{3}(x_{3})+\frac{d}{dt}V_{4}(t) \\ \leq &\frac{\alpha \sigma _{1}^{2}}{2}-\alpha (r_{1}-h_{1})+\frac{\alpha n^{2}}{2}a_{12}+\alpha (r_{1}-h_{1})x_{1}+\alpha a_{11}x_{1}-\alpha a_{11}x_{1}^{2} \\ &{}+\frac{\sigma _{2}^{2}}{2}+(r_{2}+h_{2})-(r_{2}+h_{2})x_{2}+ a_{22}x_{2}-a_{22}x_{2}^{2}+\frac{x_{2}^{2}}{2n}a_{21} \\ &{}+\frac{n^{2}}{2}a_{23}+\frac{\eta \sigma _{3}^{2}}{2}+(r_{3}+h_{3})\eta +\frac{\eta x_{3}^{2}}{2n}a_{32}-\eta (r_{3}+h_{3})x_{3}+\eta a_{33}x_{3} \\ &{}-\eta a_{33}x_{3}^{2}+\frac{\alpha }{2n^{2}}x_{2}^{2}a_{12} +\frac{n}{2}x_{1}^{2}a_{21}+\frac{1}{2n^{2}}x_{3}^{2}a_{23} +\frac{n\eta }{2}x_{2}^{2}a_{32}. \end{aligned}$$ From (8) we can obtain that there exists a constant \(K>0\) such that $$\begin{aligned} d\bigl[V_{0}(x_{1},x_{2},x_{3})\bigr] \leq & K\,\mathrm{d}t+\alpha \sigma _{1}(x_{1}-1)\,\mathrm{d}B_{1}(t) \\ &{}+\sigma _{2}(x_{2}-1)\,\mathrm{d}B_{2}(t)+\eta \sigma _{3}(x_{3}-1)\,\mathrm{d}B_{3}(t). \end{aligned}$$ Then, from (4) and (9), by a similar argument as in [21] we can get the following contradiction: $$ \infty >V_{0}\bigl(x_{1}(0),x_{2}(0),x_{3}(0)\bigr)+KT\geq \infty . $$ Thus, we obtain \(\tau _{\infty }=\infty\) a.s., and hence, \(\tau _{e}=\infty\) a.s. For any \(p>0\), let \(Q_{1}(t)=e^{t}x_{1}^{p}(t)\). By Itô's formula, we have $$ dQ_{1}(t)=\mathcal{L}Q_{1}(t)\,\mathrm{d}t+pe^{t}x_{1}^{p}\sigma _{1}\,\mathrm{d}B_{1}(t), $$ where $$\begin{aligned} \mathcal{L}Q_{1}(t) =&e^{t}x_{1}^{p}\biggl\{ 1+\frac{p(p-1)\sigma _{1}^{2}}{2}+p\biggl[r_{1}-h_{1}-a_{11}x_{1} -a_{12} \int _{-\tau _{12}}^{0}x_{2}(t+\theta )\,\mathrm{d}\mu _{12}(\theta )\biggr]\biggr\} \\ \leq& K_{1}(p)e^{t} \end{aligned}$$ with $$ K_{1}(p)=\max_{x_{1}\geq 0}\biggl\{ \biggl[p(r_{1}-h_{1})+1+\frac{p(p-1)\sigma _{1}^{2}}{2}\biggr]x_{1}^{p}-pa_{11}x_{1}^{p+1}\biggr\} . $$ Integrating both sides of (10) and then taking expectations leads to $$ E\bigl[e^{t}x_{1}^{p}\bigr]-\xi ^{p}(0)\leq K_{1}(p) \bigl(e^{t}-1\bigr), $$ which implies \(\limsup_{t\to \infty }E[x_{1}^{p}(t)]\leq K_{1}(p)\).
For any constant \(p>0\) and integer \(n>0\) with \(a_{22}-a_{21}\frac{p}{p+1}n^{-\frac{p+1}{p}}>0\), we define \(Q_{2}(t)\) as follows: $$ Q_{2}(t)=C_{1}^{*}Q_{1}(t)+e^{t}x_{2}^{p}(t) +e^{\tau _{21}}\frac{pn^{p+1}}{p+1}a_{21} \int _{-\tau _{21}}^{0} \int _{t+\theta }^{t}e^{s}x_{1}^{p+1}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{21}(\theta ), $$ where \(C_{1}^{*}=a_{11}^{-1}e^{\tau _{21}}n^{p+1}a_{21}\). We have by Itô's formula $$ dQ_{2}(t)=\mathcal{L}Q_{2}(t)\,\mathrm{d}t+C_{1}^{*}pe^{t}x_{1}^{p}\sigma _{1}\,\mathrm{d}B_{1}(t)+pe^{t}x_{2}^{p}\sigma _{2}\,\mathrm{d}B_{2}(t). $$ From (11), we have $$\begin{aligned} \mathcal{L}Q_{2}(t) = &C^{*}_{1}\mathcal{L}Q_{1}(t)+\mathcal{L}\bigl(e^{t}x_{2}^{p}(t)\bigr)+\frac{d}{dt}\biggl(e^{\tau _{21}}\frac{pn^{p+1}}{p+1}a_{21} \int _{-\tau _{21}}^{0} \int _{t+\theta }^{t}e^{s}x_{1}^{p+1}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{21}(\theta )\biggr) \\ =&C^{*}_{1}e^{t}x_{1}^{p}\biggl\{ 1+\frac{p(p-1)\sigma _{1}^{2}}{2}+p\biggl[r_{1}-h_{1}-a_{11}x_{1} -a_{12} \int _{-\tau _{12}}^{0}x_{2}(t+\theta )\,\mathrm{d}\mu _{12}(\theta )\biggr]\biggr\} \\ &{}+e^{t}x_{2}^{p}\biggl\{ 1+\frac{p(p-1)\sigma _{2}^{2}}{2}+p\biggl[-r_{2}-h_{2}+a_{21} \int _{-\tau _{21}}^{0}x_{1}(t+\theta )\,\mathrm{d}\mu _{21}(\theta ) \\ &{}-a_{22}x_{2}(t)-a_{23} \int _{-\tau _{23}}^{0}x_{3}(t+\theta )\,\mathrm{d}\mu _{23}(\theta )\biggr]\biggr\} \\ &{}+e^{\tau _{21}}\frac{pn^{p+1}}{p+1}a_{21}\biggl(e^{t}x_{1}^{p+1}(t)- \int _{-\tau _{21}}^{0}e^{t+\theta }x_{1}^{p+1}(t+\theta )\,\mathrm{d}\mu _{21}(\theta )\biggr) \\ \leq &C_{1}^{*}e^{t}\biggl\{ \biggl[1+\frac{p(p-1)\sigma _{1}^{2}}{2}+p(r_{1}-h_{1})\biggr]x_{1}^{p}-pa_{11}x_{1}^{p+1}\biggr\} \\ &{}+e^{t}\biggl\{ \biggl[1+\frac{p(p-1)\sigma _{2}^{2}}{2}-p(r_{2}+h_{2})\biggr]x_{2}^{p}-p\biggl[a_{22}-a_{21}\frac{p}{p+1}n^{-\frac{{p+1}}{p}}\biggr]x_{2}^{p+1} \\ &{}+\frac{p}{p+1}n^{p+1}a_{21} \int _{-\tau _{21}}^{0}x_{1}^{p+1}(t+\theta )\,\mathrm{d}\mu _{21}(\theta )\biggr\} \\ &{}+e^{\tau _{21}}\frac{pn^{p+1}}{p+1}a_{21}\biggl(e^{t}x_{1}^{p+1}(t) -e^{-\tau _{21}} \int _{-\tau _{21}}^{0}e^{t}x_{1}^{p+1}(t+\theta )\,\mathrm{d}\mu _{21}(\theta )\biggr) \\ \leq &e^{t}\biggl\{ \biggl[1+\frac{p(p-1)\sigma _{2}^{2}}{2}-p(r_{2}+h_{2})\biggr]x_{2}^{p}-p\biggl[a_{22}-a_{21}\frac{p}{p+1}n^{-\frac{{p+1}}{p}}\biggr]x_{2}^{p+1} \\ &{}+C_{1}^{*}\biggl[1+\frac{p(p-1)\sigma _{1}^{2}}{2}+p(r_{1}-h_{1})\biggr]x_{1}^{p}-e^{\tau _{21}}\frac{p^{2}}{p+1}n^{p+1}a_{21}x_{1}^{p+1}\biggr\} . \end{aligned}$$ Obviously, there is a constant \(K_{2}(p)>0\) such that \(\mathcal{L}Q_{2}(t)\leq K_{2}(p)e^{t}\). According to (13) and (14), we obtain $$ E\bigl[e^{t}x_{2}^{p}\bigr]\leq EQ_{2}(t)\leq EQ_{2}(0)+K_{2}(p) \bigl(e^{t}-1\bigr), $$ which implies \(\limsup_{t\to \infty }E[x_{2}^{p}(t)]\leq K_{2}(p)\). Next, define $$ Q_{3}(t)=C_{2}^{*}Q_{2}(t)+e^{t}x_{3}^{p} +e^{\tau _{32}}\frac{pn^{p+1}}{p+1}a_{32} \int _{-\tau _{32}}^{0} \int _{t+\theta }^{t}e^{s}x_{2}^{p+1}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{32}(\theta ), $$ where \(C_{2}^{*}=a_{22}^{-1}e^{\tau _{32}}n^{p+1}a_{32}\).
Applying Itô's formula to \(Q_{3}(t)\), we obtain $$ dQ_{3}(t)=\mathcal{L}Q_{3}(t)\,\mathrm{d}t+C_{2}^{*}\bigl(C_{1}^{*}pe^{t}x_{1}^{p}\sigma _{1}\,\mathrm{d}B_{1}(t)+pe^{t}x_{2}^{p}\sigma _{2}\,\mathrm{d}B_{2}(t)\bigr)+pe^{t}x_{3}^{p}\sigma _{3}\,\mathrm{d}B_{3}(t), $$ where $$\begin{aligned} \mathcal{L}Q_{3}(t) = &C_{2}^{*}\mathcal{L}Q_{2}(t)+\mathcal{L}\bigl[e^{t}x_{3}^{p}\bigr]+a_{32}e^{\tau _{32}}e^{t}x_{2}^{p+1}\frac{pn^{p+1}}{p+1} \\ &{}-a_{32}e^{\tau _{32}}\frac{pn^{p+1}}{p+1} \int _{-\tau _{32}}^{0}e^{t}x_{2}^{p+1}(t+\theta )\,\mathrm{d}\mu _{32}(\theta ) \\ \leq & C_{2}^{*}\mathcal{L}Q_{2}(t)+\mathcal{L}\bigl[e^{t}x_{3}^{p}\bigr]+a_{32}e^{\tau _{32}}e^{t}x_{2}^{p+1}\frac{pn^{p+1}}{p+1} \\ &{}-a_{32}\frac{pn^{p+1}}{p+1} \int _{-\tau _{32}}^{0}x_{2}^{p+1}(t+\theta )\,\mathrm{d}\mu _{32}(\theta ). \end{aligned}$$ Since $$\begin{aligned} \mathcal{L}\bigl[e^{t}x_{3}^{p}\bigr] = &e^{t}\biggl\{ \biggl[1+\frac{p(p-1)\sigma _{3}^{2}}{2}+p(-r_{3}-h_{3})\biggr]x_{3}^{p} \\ &{}+a_{32}px_{3}^{p} \int _{-\tau _{32}}^{0}x_{2}(t+\theta )\,\mathrm{d}\mu _{32}(\theta )-a_{33}px_{3}^{p+1}(t)\biggr\} \\ \leq & e^{t}\biggl\{ \biggl[1-p(r_{3}+h_{3})+\frac{p(p-1)\sigma _{3}^{2}}{2}\biggr]x_{3}^{p}+a_{32}\frac{p}{p+1}n^{p+1} \int _{-\tau _{32}}^{0}x_{2}^{p+1}(t+\theta )\,\mathrm{d}\mu _{32}(\theta ) \\ &{}-p\biggl[a_{33}-a_{32}\frac{p}{p+1}n^{-\frac{p+1}{p}}\biggr]x_{3}^{p+1}\biggr\} , \end{aligned}$$ from (15) we further obtain $$\begin{aligned} \mathcal{L}Q_{3}(t) \leq & e^{t}\biggl\{ \biggl[1-p(r_{3}+h_{3})+\frac{p(p-1)\sigma _{3}^{2}}{2}\biggr]x_{3}^{p}-p\biggl[a_{33} -a_{32}\frac{p}{p+1}n^{-\frac{{p+1}}{p}}\biggr]x_{3}^{p+1} \\ &{}+\biggl(1-p(r_{2}+h_{2})+\frac{p(p-1)\sigma _{2}^{2}}{2}\biggr)x_{2}^{p}C_{2}^{*} \\ &{}-\frac{p^{2}}{p+1}\bigl(n^{p+1}a_{32}e^{\tau _{32}}+n^{-\frac{p+1}{p}}a_{21}C_{2}^{*}\bigr)x_{2}^{p+1} \\ &{}-C_{2}^{*}e^{\tau _{21}}\frac{p^{2}}{p+1}n^{p+1}a_{21}x_{1}^{p+1}+C_{1}^{*}C_{2}^{*}\biggl[1+p(r_{1}-h_{1})+\frac{p(p-1)\sigma _{1}^{2}}{2}\biggr]x_{1}^{p}\biggr\} . \end{aligned}$$ Obviously, there is a constant \(K_{3}(p)>0\) such that \(\mathcal{L}Q_{3}(t)\leq K_{3}(p)e^{t}\). Hence, from (16) and (17) we obtain $$ E\bigl[e^{t}x_{3}^{p}\bigr]\leq E\bigl[Q_{3}(t)\bigr]\leq E\bigl[Q_{3}(0)\bigr]+K_{3}(p) \bigl(e^{t}-1\bigr). $$ Consequently, \(\limsup_{t\to \infty }E[x_{3}^{p}(t)]\leq K_{3}(p)\). This completes the proof. □
Lemma 4
Assume that functions \(Y\in C(R_{+}\times \varOmega , R_{+})\) and \(Z\in C(R_{+}\times \varOmega , R)\) satisfy \(\lim_{t\rightarrow \infty }\frac{Z(t)}{t}=0\) a.s. Here \(\langle Y(t)\rangle =\frac{1}{t}\int _{0}^{t}Y(s)\,\mathrm{d}s\) denotes the time average.
(1) If there are three positive constants T, β, and \(\beta _{0}\) such that, for all \(t\geq T\), $$ \ln Y(t)=\beta t-\beta _{0} \int _{0}^{t}Y(s)\,\mathrm{d}s+Z(t)\quad \textit{a.s.}, $$ then \(\lim_{t\to \infty }\langle Y(t)\rangle = \frac{\beta }{\beta _{0}}\) a.s., and \(\lim_{t\to \infty }\frac{\ln Y(t)}{t}=0\) a.s.
(2) If there exist two positive constants \(\beta _{0}\) and T, and a constant \(\beta \in R\) such that, for \(t\geq T\), $$ \ln Y(t)\leq \beta t-\beta _{0} \int _{0}^{t}Y(s)\,\mathrm{d}s+Z(t)\quad \textit{a.s.}, $$ then \(\limsup_{t\to \infty }\langle Y(t)\rangle \leq \frac{\beta }{\beta _{0}}\) a.s. if \(\beta \geq 0\), and \(\lim_{t\to \infty }Y(t)=0\) a.s. if \(\beta < 0\).
(3) If there exist three positive constants T, β, and \(\beta _{0}\) such that, for all \(t\geq T\), $$ \ln Y(t)\geq \beta t-\beta _{0} \int _{0}^{t}Y(s)\,\mathrm{d}s+Z(t)\quad \textit{a.s.}, $$ then \(\liminf_{t\to \infty }\langle Y(t)\rangle \geq \frac{\beta }{\beta _{0}}\) a.s.
Lemma 4 can be found in [22]. We consider the following auxiliary system: $$ \textstyle\begin{cases} \mathrm{d}Y_{1}(t)= Y_{1}(t)[r_{1}-h_{1}-a_{11}Y_{1}(t)]\,\mathrm{d}t+\sigma _{1}Y_{1}(t)\,\mathrm{d}B_{1}(t),\\ \mathrm{d}Y_{2}(t)=Y_{2}(t)[-r_{2}-h_{2}+a_{21}\int _{-\tau _{21}}^{0}Y_{1}(t+\theta )\,\mathrm{d}\mu _{21}(\theta ) -a_{22}Y_{2}(t)]\,\mathrm{d}t \\ \hphantom{\mathrm{d}Y_{2}(t)=}{}+\sigma _{2}Y_{2}(t)\,\mathrm{d}B_{2}(t), \\ \mathrm{d}Y_{3}(t)=Y_{3}(t)[-r_{3}-h_{3}+a_{32}\int _{-\tau _{32}}^{0}Y_{2}(t+\theta )\,\mathrm{d}\mu _{32}(\theta ) -a_{33}Y_{3}(t)]\,\mathrm{d}t \\ \hphantom{\mathrm{d}Y_{3}(t)=}{}+\sigma _{3}Y_{3}(t)\,\mathrm{d}B_{3}(t) \end{cases} $$ with the initial condition $$ Y_{1}(\theta )=\xi (\theta ),\qquad Y_{2}(\theta )=\eta (\theta ), \qquad Y_{3}(\theta )=\zeta (\theta ),\quad -r \leq \theta \leq 0. $$ Firstly, by a similar argument as in the proof of Lemma 3, we can obtain that for any condition (19) system (18) has a unique global solution \((Y_{1}(t),Y_{2}(t),Y_{3}(t))\in R^{3}_{+}\) a.s. for all \(t\geq 0\). We have the following results.
Lemma 5
Assume that \((Y_{1}(t),Y_{2}(t),Y_{3}(t))\) is a global positive solution of system (18). Then we have:
(1) If \(\Delta _{11}<0\), then \(\lim_{t\to \infty }Y_{i}(t)=0\) a.s. for \(i=1,2,3\).
(2) If \(\Delta _{11}=0\), then \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =0\) and \(\lim_{t\to \infty }Y_{i}(t)=0\) a.s. for \(i=2,3\).
(3) If \(\Delta _{11}>0\) and \(\Delta _{22}<0\), then \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =\frac{\Delta _{11}}{a_{11}}\) and \(\lim_{t\to \infty }Y_{i}(t)=0\) a.s. for \(i=2,3\).
(4) If \(\Delta _{22}=0\), then \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =\frac{\Delta _{11}}{a_{11}}\), \(\lim_{t\to \infty }\langle Y_{2}(t)\rangle =0 \), and \(\lim_{t\to \infty }Y_{3}(t)=0\) a.s.
(5) If \(\Delta _{22}>0\) and \(\Delta _{33}<0\), then \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =\frac{\Delta _{11}}{a_{11}}\), \(\lim_{t\to \infty }\langle Y_{2}(t)\rangle =\frac{\Delta _{22}}{a_{11}a_{22}}\), and \(\lim_{t\to \infty }Y_{3}(t) =0\) a.s.
(6) If \(\Delta _{33}=0\), then \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =\frac{\Delta _{11}}{a_{11}}\), \(\lim_{t\to \infty }\langle Y_{2}(t)\rangle =\frac{\Delta _{22}}{a_{11}a_{22}}\), and \(\lim_{t\to \infty }\langle Y_{3}(t)\rangle =0\) a.s.
(7) If \(\Delta _{33}>0\), then $$ \lim_{t\to \infty }\bigl\langle Y_{1}(t)\bigr\rangle =\frac{\Delta _{11}}{a_{11}},\qquad \lim_{t\to \infty }\bigl\langle Y_{2}(t)\bigr\rangle =\frac{\Delta _{22}}{a_{11}a_{22}},\qquad \lim_{t\to \infty }\bigl\langle Y_{3}(t)\bigr\rangle =\frac{\Delta _{33}}{a_{11}a_{22}a_{33}} \quad \textit{a.s.} $$
(8) \(\limsup_{t\to \infty }\frac{\ln Y_{i}(t)}{t}\leq 0\) a.s. for \(i=1,2,3\).
Applying Itô's formula to system (18), we have $$\begin{aligned}& \ln Y_{1}(t)=b_{1}t-a_{11} \int _{0}^{t}Y_{1}(s)\,\mathrm{d}s+\sigma _{1}B _{1}(t)+\ln Y_{1}(0), \end{aligned}$$ $$\begin{aligned}& \ln Y_{2}(t)= -b_{2}t+a_{21} \int _{0}^{t} \int _{-\tau _{21}}^{0}Y_{1}(s+ \theta )\,\mathrm{d}\mu _{21}(\theta )\,\mathrm{d}s \\& \hphantom{\ln Y_{2}(t)=}{} -a_{22} \int _{0}^{t}Y_{2}(s) \,\mathrm{d}s+\sigma _{2}B_{2}(t)+\ln Y_{2}(0) \\& \hphantom{\ln Y_{2}(t)}=-b_{2}t+a_{21} \int _{0} ^{t}Y_{1}(s)\,\mathrm{d}s -a_{22} \int _{0}^{t}Y_{2}(s)\,\mathrm{d}s+\psi _{1}(t), \end{aligned}$$ $$\begin{aligned} \ln Y_{3}(t) = &-b_{3}t+a_{32} \int _{0}^{t} \int _{-\tau _{32}}^{0}Y_{2}(s+ \theta )\,\mathrm{d}\mu _{32}(\theta )\,\mathrm{d}s \\ &{} -a_{33} \int _{0}^{t}Y_{3}(s) \,\mathrm{d}s+\sigma _{3}B_{3}(t)+\ln Y_{3}(0) \\ =&-b_{3}t+a_{32} \int _{0} ^{t}Y_{2}(s)\,\mathrm{d}s -a_{33} \int _{0}^{t}Y_{3}(s)\,\mathrm{d}s+\psi _{2}(t), \end{aligned}$$ $$\begin{aligned}& \psi _{1}(t)= \sigma _{2}B_{2}(t)+\ln Y_{2}(0)+a_{21} \int _{-\tau _{21}} ^{0} \int _{\theta }^{0}Y_{1}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{21}(\theta ) \\& \hphantom{\psi _{1}(t)=}{} -a _{21} \int _{-\tau _{21}}^{0} \int _{t+\theta }^{t}Y_{1}(s)\,\mathrm{d}s \,\mathrm{d} \mu _{21}(\theta ), \\& \psi _{2}(t)=\sigma _{3}B_{3}(t)+\ln Y _{3}(0)+a_{32} \int _{-\tau _{32}}^{0} \int _{\theta }^{0}Y_{2}(s)\,\mathrm{d}s \,\mathrm{d}\mu _{32}(\theta ) \\& \hphantom{\psi _{2}(t)=}{} -a_{32} \int _{-\tau _{32}}^{0} \int _{t+\theta }^{t}Y_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu _{32}(\theta ). \end{aligned}$$ Assume \(\Delta _{11}\leq 0\). From Lemma 4 and (20) we have \(\lim_{t\to \infty }Y_{1}(t)=0\) a.s. or \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =0\) a.s. Thus, \(\lim_{t\to \infty }\frac{1}{t}\psi _{1}(t)=0\) a.s. From (21), we have \(\lim_{t\to \infty }Y_{2}(t)=0\), then \(\lim_{t\to \infty }\frac{1}{t} \psi _{2}(t)=0\) a.s. From (22), we further have \(\lim_{t\to \infty }Y_{3}(t) =0\) a.s. Assume \(\Delta _{11}>0\) and \(\Delta _{22}<0\). From Lemma 4 and (20) we obtain \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =\frac{ \Delta _{11}}{a_{11}}\) a.s. Thus, \(\int _{0}^{t}Y_{1}(s)\,\mathrm{d}s=\frac{ \Delta _{11}}{a_{11}}t+\alpha _{1}(t)\) for any \(t\geq 0\), where \(\lim_{t\to \infty }\frac{\alpha _{1}(t)}{t}=0\) a.s. From (21), we obtain $$ \ln Y_{2}(t) =\frac{\Delta _{22}}{a_{11}}t -a_{22} \int _{0}^{t}Y_{2}(s) \,\mathrm{d}s+\psi _{1}(t)+a_{21}\alpha _{1}(t). $$ Since \(\lim_{t\to \infty }\frac{1}{t}\psi _{1}(t)=0\) a.s., from Lemma 4 we obtain \(\lim_{t\to \infty }Y_{2}(t)=0\) a.s. Further, we also have \(\lim_{t\to \infty }Y_{3}(t)=0\) a.s. Assume \(\Delta _{22}=0\). Then we have \(\Delta _{11}>0\). By a similar argument we obtain \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =\frac{ \Delta _{11}}{a_{11}}\) a.s., \(\lim_{t\to \infty }\langle Y_{2}(t) \rangle =0\) a.s., and \(\lim_{t\to \infty }Y_{3}(t)=0\) a.s. Assume \(\Delta _{22}>0\) and \(\Delta _{33}<0\). Then we have \(\Delta _{11}>0\). From Lemma 4, (20), and (23) we directly obtain \(\lim_{t\to \infty }\langle Y_{1}(t)\rangle =\frac{ \Delta _{11}}{a_{11}}\) a.s. and \(\lim_{t\to \infty }\langle Y_{2}(t) \rangle =\frac{\Delta _{22}}{a_{11}a_{22}}\) a.s. Hence, \(\int _{0}^{t}Y _{2}(s)\,\mathrm{d}s=\frac{\Delta _{22}}{a_{11}a_{22}}t+\alpha _{2}(t)\) for any \(t\geq 0\), where \(\lim_{t\to \infty }\frac{\alpha _{2}(t)}{t}=0\) a.s. 
From (22), we obtain
$$ \ln Y_{3}(t)=\frac{\Delta_{33}}{a_{11}a_{22}}t-a_{33}\int_{0}^{t}Y_{3}(s)\,\mathrm{d}s+\psi_{2}(t)+a_{32}\alpha_{2}(t). \tag{24} $$
Since \(\lim_{t\to\infty}\frac{1}{t}\psi_{2}(t)=0\) a.s., from Lemma 4 we obtain \(\lim_{t\to\infty}Y_{3}(t)=0\) a.s.
Assume \(\Delta_{33}=0\) or \(\Delta_{33}>0\). Then we have \(\Delta_{11}>0\) and \(\Delta_{22}>0\). Hence, we obtain \(\lim_{t\to\infty}\langle Y_{1}(t)\rangle=\frac{\Delta_{11}}{a_{11}}\) and \(\lim_{t\to\infty}\langle Y_{2}(t)\rangle=\frac{\Delta_{22}}{a_{11}a_{22}}\) a.s. Then, from (24) and Lemma 4 we further obtain \(\lim_{t\to\infty}\langle Y_{3}(t)\rangle=0\) a.s. or \(\lim_{t\to\infty}\langle Y_{3}(t)\rangle=\frac{\Delta_{33}}{a_{11}a_{22}a_{33}}\) a.s.
For any \(i\in\{1,2,3\}\), from the above discussion one of the following three cases holds: (a) \(\lim_{t\to\infty}Y_{i}(t)=0\) a.s.; (b) \(\lim_{t\to\infty}\langle Y_{i}(t)\rangle=0\) a.s.; (c) \(\lim_{t\to\infty}\langle Y_{i}(t)\rangle=\alpha_{i}\) a.s., where \(\alpha_{1}=\frac{\Delta_{11}}{a_{11}}\), \(\alpha_{2}=\frac{\Delta_{22}}{a_{11}a_{22}}\), and \(\alpha_{3}=\frac{\Delta_{33}}{a_{11}a_{22}a_{33}}\). For cases (a) and (b), we directly have \(\limsup_{t\to\infty}\frac{\ln Y_{i}(t)}{t}\leq 0\) a.s. For case (c), from (20), (23), or (24) we can obtain \(\limsup_{t\to\infty}\frac{\ln Y_{i}(t)}{t}=0\) a.s. Therefore, conclusion (8) holds. This completes the proof. □
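For orientation, the thresholds of the auxiliary system (18) nest recursively. Comparing the coefficients of t in (20), (23), and (24) shows that (this is a reading aid, not a new definition; the \(\Delta_{ii}\) are introduced earlier in the paper)
$$ \Delta_{11}=b_{1},\qquad \Delta_{22}=a_{21}\Delta_{11}-a_{11}b_{2},\qquad \Delta_{33}=a_{32}\Delta_{22}-a_{11}a_{22}b_{3}, $$
so each species' persistence threshold is the preceding one filtered through the chain: \(Y_{2}\) can persist in the mean only if its asymptotic food supply \(a_{21}\Delta_{11}/a_{11}\) exceeds its net loss rate \(b_{2}\), and similarly for \(Y_{3}\).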
Lemma 6. Assume that \((x_{1}(t),x_{2}(t),x_{3}(t))\) and \((Y_{1}(t),Y_{2}(t),Y_{3}(t))\) are the solutions of model (1) and system (18), respectively, and that the initial values satisfy \(x_{i}(\theta)\leq Y_{i}(\theta)\) for all \(-r\leq\theta\leq 0\) and \(i=1,2,3\). Then:
(1) \(x_{i}(t)\leq Y_{i}(t)\) for all \(t\geq 0\), \(i=1,2,3\);
(2) \(\limsup_{t\to\infty}\frac{\ln x_{i}(t)}{t}\leq 0\) a.s., \(i=1,2,3\);
(3) for any constant \(\tau>0\), \(\lim_{t\to\infty}\frac{1}{t}\int_{t-\tau}^{t}x_{i}(s)\,\mathrm{d}s=0\) a.s., \(i=1,2,3\).

Proof. From model (1) we obtain
$$ \begin{aligned} &\mathrm{d}x_{1}(t)\leq x_{1}(t)\bigl[r_{1}-h_{1}-a_{11}x_{1}(t)\bigr]\,\mathrm{d}t+\sigma_{1}x_{1}(t)\,\mathrm{d}B_{1}(t),\\ &\mathrm{d}x_{2}(t)\leq x_{2}(t)\biggl[-r_{2}-h_{2}+a_{21}\int_{-\tau_{21}}^{0}x_{1}(t+\theta)\,\mathrm{d}\mu_{21}(\theta)-a_{22}x_{2}(t)\biggr]\,\mathrm{d}t+\sigma_{2}x_{2}(t)\,\mathrm{d}B_{2}(t),\\ &\mathrm{d}x_{3}(t)= x_{3}(t)\biggl[-r_{3}-h_{3}+a_{32}\int_{-\tau_{32}}^{0}x_{2}(t+\theta)\,\mathrm{d}\mu_{32}(\theta)-a_{33}x_{3}(t)\biggr]\,\mathrm{d}t+\sigma_{3}x_{3}(t)\,\mathrm{d}B_{3}(t). \end{aligned} $$
Using the comparison theorem and Theorem 2.1 given in Bao and Yuan [23], for any \(t\geq 0\), we obtain \(x_{i}(t)\leq Y_{i}(t)\) (\(i=1,2,3\)). Then from Lemma 5 we obtain that \(\limsup_{t\to\infty}\frac{\ln x_{i}(t)}{t}\leq 0\) a.s. (\(i=1,2,3\)) and \(\lim_{t\to\infty}\frac{1}{t}\int_{t-\tau}^{t}x_{i}(s)\,\mathrm{d}s=0\) a.s. (\(i=1,2,3\)) for any constant \(\tau>0\). This completes the proof. □

3 Global dynamics

Firstly, on the extinction, persistence, and global stability in the mean with probability one, we can establish the following integrated result.

Theorem 1. Assume that \((x_{1}(t),x_{2}(t),x_{3}(t))\) is a global positive solution of model (1). Then we have:
(1) If \(\Delta_{11}<0\), then \(\lim_{t\to\infty}x_{i}(t)=0\) a.s. for \(i=1,2,3\).
(2) If \(\Delta_{11}=0\), then \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=0\) and \(\lim_{t\to\infty}x_{i}(t)=0\) a.s. for \(i=2,3\).
(3) If \(\Delta_{11}>0\) and \(\Delta_{22}<0\), then \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{11}}{H_{1}}\) and \(\lim_{t\to\infty}x_{i}(t)=0\) a.s. for \(i=2,3\).
(4) If \(\Delta_{22}=0\), then \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{11}}{H_{1}}\), \(\lim_{t\to\infty}\langle x_{2}(t)\rangle=0\), and \(\lim_{t\to\infty}x_{3}(t)=0\) a.s.
(5) If \(\Delta_{22}>0\) and \(\Delta_{33}<0\), then \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{21}}{H_{2}}\), \(\lim_{t\to\infty}\langle x_{2}(t)\rangle=\frac{\Delta_{22}}{H_{2}}\), and \(\lim_{t\to\infty}x_{3}(t)=0\) a.s.
(6) If \(\Delta_{33}=0\) and \(a_{33}a_{22}(a_{11}a_{22}+a_{12}a_{21})-a_{12}a_{21}a_{23}a_{32}>0\), then \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{21}}{H_{2}}\), \(\lim_{t\to\infty}\langle x_{2}(t)\rangle=\frac{\Delta_{22}}{H_{2}}\), and \(\lim_{t\to\infty}\langle x_{3}(t)\rangle=0\) a.s.
(7) If \(\Delta_{33}>0\) and \(a_{33}a_{22}(a_{11}a_{22}+a_{12}a_{21})-a_{12}a_{21}a_{23}a_{32}>0\), then
$$ \lim_{t\to\infty}\bigl\langle x_{1}(t)\bigr\rangle=\frac{\Delta_{31}}{H_{3}},\qquad \lim_{t\to\infty}\bigl\langle x_{2}(t)\bigr\rangle=\frac{\Delta_{32}}{H_{3}},\qquad \lim_{t\to\infty}\bigl\langle x_{3}(t)\bigr\rangle=\frac{\Delta_{33}}{H_{3}}\quad \textit{a.s.} $$

Proof. Applying Itô's formula to model (1), we obtain
$$ \begin{aligned} \ln x_{1}(t)&= b_{1}t-a_{11}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s-a_{12}\int_{0}^{t}\int_{-\tau_{12}}^{0}x_{2}(s+\theta)\,\mathrm{d}\mu_{12}(\theta)\,\mathrm{d}s+\sigma_{1}B_{1}(t)+\ln x_{1}(0)\\ &= b_{1}t-a_{11}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s-a_{12}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s+\phi_{1}(t), \end{aligned} \tag{25} $$
$$ \begin{aligned} \ln x_{2}(t)&= -b_{2}t+a_{21}\int_{0}^{t}\int_{-\tau_{21}}^{0}x_{1}(s+\theta)\,\mathrm{d}\mu_{21}(\theta)\,\mathrm{d}s-a_{22}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s\\ &\quad{}-a_{23}\int_{0}^{t}\int_{-\tau_{23}}^{0}x_{3}(s+\theta)\,\mathrm{d}\mu_{23}(\theta)\,\mathrm{d}s+\sigma_{2}B_{2}(t)+\ln x_{2}(0)\\ &=-b_{2}t+a_{21}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s-a_{22}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s-a_{23}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{2}(t), \end{aligned} \tag{26} $$
$$ \begin{aligned} \ln x_{3}(t)&=-b_{3}t+a_{32}\int_{0}^{t}\int_{-\tau_{32}}^{0}x_{2}(s+\theta)\,\mathrm{d}\mu_{32}(\theta)\,\mathrm{d}s-a_{33}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\sigma_{3}B_{3}(t)+\ln x_{3}(0)\\ &=-b_{3}t+a_{32}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s-a_{33}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{3}(t), \end{aligned} \tag{27} $$
where
$$ \begin{aligned} \phi_{1}(t)&= \sigma_{1}B_{1}(t)+\ln x_{1}(0)+a_{12}\int_{-\tau_{12}}^{0}\int_{t+\theta}^{t}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{12}(\theta)-a_{12}\int_{-\tau_{12}}^{0}\int_{\theta}^{0}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{12}(\theta),\\ \phi_{2}(t)&=\sigma_{2}B_{2}(t)+\ln x_{2}(0)+a_{21}\int_{-\tau_{21}}^{0}\int_{\theta}^{0}x_{1}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{21}(\theta)-a_{21}\int_{-\tau_{21}}^{0}\int_{t+\theta}^{t}x_{1}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{21}(\theta)\\ &\quad{}+a_{23}\int_{-\tau_{23}}^{0}\int_{t+\theta}^{t}x_{3}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{23}(\theta)-a_{23}\int_{-\tau_{23}}^{0}\int_{\theta}^{0}x_{3}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{23}(\theta),\\ \phi_{3}(t)&=\sigma_{3}B_{3}(t)+\ln x_{3}(0)+a_{32}\int_{-\tau_{32}}^{0}\int_{\theta}^{0}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{32}(\theta)-a_{32}\int_{-\tau_{32}}^{0}\int_{t+\theta}^{t}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{32}(\theta). \end{aligned} $$
Further, we also obtain
$$ \ln x_{1}(t)\leq b_{1}t-a_{11}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s+\sigma_{1}B_{1}(t)+\ln x_{1}(0) \tag{28} $$
and
$$ \begin{aligned} \ln x_{2}(t)&\leq -b_{2}t+a_{21}\int_{0}^{t}\int_{-\tau_{21}}^{0}x_{1}(s+\theta)\,\mathrm{d}\mu_{21}(\theta)\,\mathrm{d}s-a_{22}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s+\sigma_{2}B_{2}(t)+\ln x_{2}(0)\\ &=-b_{2}t+a_{21}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s-a_{22}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s+\sigma_{2}B_{2}(t)+\ln x_{2}(0)\\ &\quad{}+a_{21}\int_{-\tau_{21}}^{0}\int_{\theta}^{0}x_{1}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{21}(\theta)-a_{21}\int_{-\tau_{21}}^{0}\int_{t+\theta}^{t}x_{1}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{21}(\theta). \end{aligned} \tag{29} $$
Assume \(\Delta_{11}\leq 0\). From (28) and Lemmas 5 and 6, we can immediately obtain that conclusions (1) and (2) hold.
Assume \(\Delta_{11}>0\) and \(\Delta_{22}\leq 0\). From (28) and Lemmas 5 and 6, we immediately obtain that \(\limsup_{t\to\infty}\langle x_{1}(t)\rangle\leq\frac{\Delta_{11}}{H_{1}}\), \(\lim_{t\to\infty}x_{2}(t)=0\) or \(\lim_{t\to\infty}\langle x_{2}(t)\rangle=0\), and \(\lim_{t\to\infty}x_{3}(t)=0\) a.s. For any \(\varepsilon>0\) with \(b_{1}-a_{12}\varepsilon>0\), we have \(\int_{0}^{t}x_{2}(s)\,\mathrm{d}s<\varepsilon t\) a.s. for sufficiently large t, and hence from (25)
$$ \ln x_{1}(t)\geq (b_{1}-a_{12}\varepsilon)t-a_{11}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s+\phi_{1}(t). $$
Since
$$ \begin{aligned} &\int_{-\tau_{12}}^{0}\int_{t+\theta}^{t}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{12}(\theta)\leq \int_{-\tau_{12}}^{0}\,\mathrm{d}\mu_{12}(\theta)\int_{t-\tau_{12}}^{t}x_{2}(s)\,\mathrm{d}s,\\ &\int_{-\tau_{12}}^{0}\int_{\theta}^{0}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{12}(\theta)\leq \int_{-\tau_{12}}^{0}\,\mathrm{d}\mu_{12}(\theta)\int_{-\tau_{12}}^{0}x_{2}(s)\,\mathrm{d}s, \end{aligned} $$
by Lemma 6 we obtain \(\lim_{t\to\infty}\frac{1}{t}\int_{-\tau_{12}}^{0}\int_{t+\theta}^{t}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{12}(\theta)=0\) and \(\lim_{t\to\infty}\frac{1}{t}\int_{-\tau_{12}}^{0}\int_{\theta}^{0}x_{2}(s)\,\mathrm{d}s\,\mathrm{d}\mu_{12}(\theta)=0\). Hence, \(\lim_{t\to\infty}\frac{\phi_{1}(t)}{t}=0\) a.s. Thus, from Lemma 4 and the arbitrariness of ε we have \(\liminf_{t\to\infty}\langle x_{1}(t)\rangle\geq\frac{\Delta_{11}}{H_{1}}\). This shows that \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{11}}{H_{1}}\).
Assume \(\Delta_{33}>0\). From (25)–(27), we obtain
$$ a_{32}\bigl[a_{21}\ln x_{1}(t)+a_{11}\ln x_{2}(t)\bigr]+H_{2}\ln x_{3}(t)=\Delta_{33}t-H_{3}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{4}(t), \tag{30} $$
where \(\phi_{4}(t)=a_{21}a_{32}\phi_{1}(t)+a_{11}a_{32}\phi_{2}(t)+H_{2}\phi_{3}(t)\). By a similar argument as above for \(\phi_{1}(t)\), we have \(\lim_{t\to\infty}\frac{\phi_{4}(t)}{t}=0\) a.s. For any \(\varepsilon>0\) with \(\Delta_{33}-2\varepsilon>0\), by Lemma 6, \(\ln x_{1}(t)<\frac{\varepsilon}{a_{32}a_{21}+1}t\) and \(\ln x_{2}(t)<\frac{\varepsilon}{a_{32}a_{11}+1}t\) for sufficiently large t. Then from (30) we further have
$$ H_{2}\ln x_{3}(t)>(\Delta_{33}-2\varepsilon)t-H_{3}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{4}(t). $$
Hence, by Lemma 4 and the arbitrariness of ε, we further have
$$ \liminf_{t\to\infty}\bigl\langle x_{3}(t)\bigr\rangle\geq\frac{\Delta_{33}}{H_{3}}. \tag{31} $$
From (25) and (26), we obtain
$$ a_{22}\ln x_{1}(t)-a_{12}\ln x_{2}(t)=\Delta_{21}t-H_{2}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s+a_{12}a_{23}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{5}(t), \tag{32} $$
where \(\phi_{5}(t)=a_{22}\phi_{1}(t)-a_{12}\phi_{2}(t)\). Similarly as above for \(\phi_{1}(t)\), we can obtain \(\lim_{t\to\infty}\frac{\phi_{5}(t)}{t}=0\) a.s. For any \(\varepsilon>0\), from Lemma 6 and the properties of the superior limit, we have \(\int_{0}^{t}x_{3}(s)\,\mathrm{d}s<(\limsup_{t\to\infty}\langle x_{3}(t)\rangle+\varepsilon)t\) and \(\ln x_{2}(t)<\frac{\varepsilon}{a_{12}+1}t\) for sufficiently large t. Then from (32) we further have
$$ a_{22}\ln x_{1}(t)\leq \Delta_{21}t+a_{12}a_{23}\Bigl(\limsup_{t\to\infty}\bigl\langle x_{3}(t)\bigr\rangle+\varepsilon\Bigr)t+\varepsilon t-H_{2}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s+\phi_{5}(t). $$
From Lemma 4 and the arbitrariness of ε it follows that
$$ \limsup_{t\to\infty}\bigl\langle x_{1}(t)\bigr\rangle\leq\frac{\Delta_{21}+a_{12}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle}{H_{2}}\quad \textit{a.s.} \tag{33} $$
Combining this with (31), for any sufficiently small \(\varepsilon>0\) and sufficiently large t, we have
$$ \int_{0}^{t}x_{3}(s)\,\mathrm{d}s>\biggl(\frac{\Delta_{33}}{H_{3}}-\varepsilon\biggr)t,\qquad \int_{0}^{t}x_{1}(s)\,\mathrm{d}s<\biggl(\frac{\Delta_{21}+a_{12}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle}{H_{2}}+\varepsilon\biggr)t. $$
Hence, from (29) we further have
$$ \begin{aligned} \ln x_{2}(t)&\leq -b_{2}t+\biggl(\frac{a_{21}(\Delta_{21}+a_{12}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle)}{H_{2}}+\varepsilon\biggr)t\\ &\quad{}-a_{23}\biggl(\frac{\Delta_{33}}{H_{3}}-\varepsilon\biggr)t-a_{22}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s+\phi_{2}(t). \end{aligned} \tag{34} $$
We have \(\lim_{t\to\infty}\frac{\phi_{2}(t)}{t}=0\) a.s. by Lemma 6. From (31), we obtain
$$ \begin{aligned} &-b_{2}+\frac{a_{21}(\Delta_{21}+a_{12}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle)}{H_{2}}-a_{23}\frac{\Delta_{33}}{H_{3}}\\ &\quad \geq -b_{2}+a_{21}\frac{\Delta_{21}}{H_{2}}-a_{23}\frac{\Delta_{33}}{H_{3}}+\frac{a_{21}a_{12}a_{23}\Delta_{33}}{H_{2}H_{3}}=\frac{a_{22}\Delta_{32}}{H_{3}}>0. \end{aligned} $$
Hence, from (34), Lemma 4, and the arbitrariness of ε, we have
$$ a_{22}\limsup_{t\to\infty}\bigl\langle x_{2}(t)\bigr\rangle\leq \biggl(-b_{2}+\frac{a_{21}(\Delta_{21}+a_{12}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle)}{H_{2}}-a_{23}\frac{\Delta_{33}}{H_{3}}\biggr)\triangleq M\quad \textit{a.s.} \tag{35} $$
For any \(\varepsilon>0\), when t is sufficiently large, we have \(\int_{0}^{t}x_{2}(s)\,\mathrm{d}s<(\frac{M}{a_{22}}+\varepsilon)t\). Then from (27) it follows that
$$ \begin{aligned} \ln x_{3}(t)&\leq -b_{3}t+a_{32}\biggl(\frac{M}{a_{22}}+\varepsilon\biggr)t-a_{33}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{3}(t)\\ &\leq -b_{3}t+a_{32}\varepsilon t+\frac{a_{32}}{a_{22}}\biggl(-b_{2}+\frac{a_{21}(\Delta_{21}+a_{12}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle)}{H_{2}}-a_{23}\frac{\Delta_{33}}{H_{3}}\biggr)t\\ &\quad{}-a_{33}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{3}(t). \end{aligned} \tag{36} $$
We have \(\lim_{t\to\infty}\frac{\phi_{3}(t)}{t}=0\) a.s. by Lemma 6.
From (31), we also have
$$ \begin{aligned} &-b_{3}+\frac{a_{32}}{a_{22}}\biggl(-b_{2}+\frac{a_{21}(\Delta_{21}+a_{12}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle)}{H_{2}}-a_{23}\frac{\Delta_{33}}{H_{3}}\biggr)\\ &\quad \geq -b_{3}+\frac{a_{32}}{a_{22}}\biggl(-b_{2}+a_{21}\frac{\Delta_{21}}{H_{2}}-a_{23}\frac{\Delta_{33}}{H_{3}}+\frac{a_{21}a_{12}a_{23}\Delta_{33}}{H_{2}H_{3}}\biggr)=a_{33}\frac{\Delta_{33}}{H_{3}}>0. \end{aligned} $$
Hence, from (36), Lemma 4, and the arbitrariness of ε, one can derive that
$$ a_{33}\limsup_{t\to\infty}\bigl\langle x_{3}(t)\bigr\rangle \leq -b_{3}+\frac{a_{32}}{a_{22}}\biggl(-b_{2}+a_{21}\frac{\Delta_{21}}{H_{2}}+\frac{a_{12}a_{21}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle}{H_{2}}-a_{23}\frac{\Delta_{33}}{H_{3}}\biggr). $$
This is equivalent to the following inequality:
$$ \begin{aligned} &\bigl[a_{33}a_{22}(a_{11}a_{22}+a_{12}a_{21})-a_{12}a_{21}a_{23}a_{32}\bigr]\limsup_{t\to\infty}\bigl\langle x_{3}(t)\bigr\rangle\\ &\quad \leq \bigl[a_{33}a_{22}(a_{11}a_{22}+a_{12}a_{21})-a_{12}a_{21}a_{23}a_{32}\bigr]\times\frac{\Delta_{33}}{H_{3}}. \end{aligned} $$
Hence, we obtain \(\limsup_{t\to\infty}\langle x_{3}(t)\rangle\leq\frac{\Delta_{33}}{H_{3}}\) a.s. Combining (31), we finally obtain \(\lim_{t\to\infty}\langle x_{3}(t)\rangle=\frac{\Delta_{33}}{H_{3}}\) a.s. From (33) and (35) we can obtain
$$ \limsup_{t\to\infty}\bigl\langle x_{1}(t)\bigr\rangle\leq\frac{b_{1}(a_{22}a_{33}+a_{32}a_{23})+b_{2}a_{33}a_{12}-b_{3}a_{12}a_{23}}{a_{11}a_{22}a_{33}+a_{12}a_{21}a_{33}+a_{11}a_{32}a_{23}}=\frac{\Delta_{31}}{H_{3}}\quad \textit{a.s.} \tag{37} $$
and
$$ \limsup_{t\to\infty}\bigl\langle x_{2}(t)\bigr\rangle\leq\frac{b_{1}a_{21}a_{33}-b_{2}a_{33}a_{11}+b_{3}a_{11}a_{23}}{a_{11}a_{22}a_{33}+a_{12}a_{21}a_{33}+a_{11}a_{32}a_{23}}=\frac{\Delta_{32}}{H_{3}}\quad \textit{a.s.} \tag{38} $$
For any \(\varepsilon>0\), from Lemma 6 there is \(T>0\) such that, for any \(t>T\),
$$ \int_{0}^{t}x_{3}(s)\,\mathrm{d}s<\biggl(\frac{\Delta_{33}}{H_{3}}+\varepsilon\biggr)t,\qquad \ln x_{1}(t)<\frac{\varepsilon}{a_{21}+1}t. \tag{39} $$
In addition, from (25) and (26) we obtain
$$ a_{21}\ln x_{1}(t)+a_{11}\ln x_{2}(t)=\Delta_{22}t-H_{2}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s-a_{11}a_{23}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s+\phi_{6}(t), \tag{40} $$
where \(\phi_{6}(t)=a_{21}\phi_{1}(t)+a_{11}\phi_{2}(t)\). We have \(\lim_{t\to\infty}\frac{\phi_{6}(t)}{t}=0\) a.s. by Lemma 6. Substituting (39) into (40), we have, for \(t>T\),
$$ a_{11}\ln x_{2}(t)\geq \Delta_{22}t-a_{11}a_{23}\biggl(\frac{\Delta_{33}}{H_{3}}+\varepsilon\biggr)t-\varepsilon t-H_{2}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s+\phi_{6}(t). $$
From Lemma 4 and the arbitrariness of ε, we have \(\liminf_{t\to\infty}\langle x_{2}(t)\rangle\geq\frac{\Delta_{32}}{H_{3}}\) a.s. Combining (38), we finally obtain \(\lim_{t\to\infty}\langle x_{2}(t)\rangle=\frac{\Delta_{32}}{H_{3}}\) a.s. For any \(\varepsilon>0\), from (38), when t is sufficiently large we have \(\int_{0}^{t}x_{2}(s)\,\mathrm{d}s<(\frac{\Delta_{32}}{H_{3}}+\varepsilon)t\). Then from (25) it follows that
$$ \ln x_{1}(t)\geq b_{1}t-a_{11}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s-a_{12}\biggl(\frac{\Delta_{32}}{H_{3}}+\varepsilon\biggr)t+\phi_{1}(t). $$
From Lemma 4 and the arbitrariness of ε, we have \(\liminf_{t\to\infty}\langle x_{1}(t)\rangle\geq\frac{\Delta_{31}}{H_{3}}\).
Combining (37), we finally obtain \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{31}}{H_{3}}\) a.s.
Assume \(\Delta_{33}=0\). Then we have \(\Delta_{22}>0\) and \(\Delta_{11}>0\). By a similar argument as above for the case \(\Delta_{33}>0\), we can obtain
$$ a_{33}\limsup_{t\to\infty}\bigl\langle x_{3}(t)\bigr\rangle\leq -b_{3}+\frac{a_{32}}{a_{22}}\biggl(-b_{2}+a_{21}\frac{\Delta_{21}}{H_{2}}+\frac{a_{12}a_{21}a_{23}\limsup_{t\to\infty}\langle x_{3}(t)\rangle}{H_{2}}\biggr). $$
It follows that
$$ \bigl[a_{33}a_{22}(a_{11}a_{22}+a_{12}a_{21})-a_{12}a_{21}a_{23}a_{32}\bigr]\limsup_{t\to\infty}\bigl\langle x_{3}(t)\bigr\rangle\leq 0. $$
Therefore, we finally have \(\lim_{t\to+\infty}\langle x_{3}(t)\rangle=0\). Thus, for any \(\varepsilon>0\), there is \(T>0\) such that \(\int_{0}^{t}x_{3}(s)\,\mathrm{d}s<\varepsilon t\) for all \(t>T\). Hence, from (39) and (40) we further obtain, for \(t>T\),
$$ a_{11}\ln x_{2}(t)\geq \Delta_{22}t-a_{11}a_{23}\varepsilon t-\varepsilon t-H_{2}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s+\phi_{6}(t). $$
From Lemma 4 and the arbitrariness of ε, we have
$$ \liminf_{t\to\infty}\bigl\langle x_{2}(t)\bigr\rangle\geq\frac{\Delta_{22}}{H_{2}}\quad \textit{a.s.} \tag{41} $$
Using the same method as in the proof of \(\liminf_{t\to\infty}\langle x_{1}(t)\rangle\geq\frac{\Delta_{31}}{H_{3}}\) above, we can successively prove \(\limsup_{t\to\infty}\langle x_{1}(t)\rangle\leq\frac{\Delta_{21}}{H_{2}}\) a.s., \(\limsup_{t\to\infty}\langle x_{2}(t)\rangle\leq\frac{\Delta_{22}}{H_{2}}\) a.s., and \(\liminf_{t\to\infty}\langle x_{1}(t)\rangle\geq\frac{\Delta_{21}}{H_{2}}\) a.s. Combining (41), we finally obtain \(\lim_{t\to\infty}\langle x_{2}(t)\rangle=\frac{\Delta_{22}}{H_{2}}\) a.s. and \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{21}}{H_{2}}\) a.s.
Assume \(\Delta_{22}>0\) and \(\Delta_{33}<0\). From (30) we directly obtain
$$ a_{32}\bigl[a_{21}\ln x_{1}(t)+a_{11}\ln x_{2}(t)\bigr]+H_{2}\ln x_{3}(t)\leq \Delta_{33}t+\phi_{4}(t). $$
Hence,
$$ \limsup_{t\to\infty}\frac{1}{t}\bigl(a_{21}a_{32}\ln x_{1}(t)+a_{11}a_{32}\ln x_{2}(t)+H_{2}\ln x_{3}(t)\bigr)\leq \Delta_{33}<0. $$
This shows \(\lim_{t\to\infty}(x_{1}(t))^{a_{21}a_{32}}(x_{2}(t))^{a_{11}a_{32}}(x_{3}(t))^{H_{2}}=0\), which implies that there is \(i\in\{1,2,3\}\) such that
$$ \lim_{t\to\infty}x_{i}(t)=0. \tag{42} $$
For \(1\leq i\leq j\leq 3\), similarly to the above arguments for the cases \(\Delta_{11}\leq 0\), and \(\Delta_{11}>0\) and \(\Delta_{22}\leq 0\), we can easily prove that if \(\lim_{t\to\infty}x_{i}(t)=0\) a.s., then \(\lim_{t\to+\infty}x_{j}(t)=0\) a.s. Therefore, from (42) we finally obtain \(\lim_{t\to\infty}x_{3}(t)=0\) a.s. Consequently, \(\lim_{t\to\infty}\langle x_{3}(t)\rangle=0\) a.s. By a similar argument as above for the case \(\Delta_{33}=0\), we also know \(\lim_{t\to\infty}\langle x_{1}(t)\rangle=\frac{\Delta_{21}}{H_{2}}\) and \(\lim_{t\to\infty}\langle x_{2}(t)\rangle=\frac{\Delta_{22}}{H_{2}}\). This completes the proof. □
Next, we can establish the following result on the global attractivity in expectation for global positive solutions of model (1).

Theorem 2. Let \((x_{1}(t;\phi),x_{2}(t;\phi),x_{3}(t;\phi))\) and \((y_{1}(t;\phi^{*}),y_{2}(t;\phi^{*}),y_{3}(t;\phi^{*}))\) be two solutions of model (1) with initial values \(\phi,\phi^{*}\in C([-\gamma,0],R_{+}^{3})\).
Assume that there are positive constants \(w_{1}\), \(w_{2}\), and \(w_{3}\) such that
$$ w_{1}a_{11}-w_{2}a_{21}>0,\qquad w_{2}a_{22}-w_{1}a_{12}-w_{3}a_{32}>0,\qquad w_{3}a_{33}-w_{2}a_{23}>0. $$
Then
$$ \lim_{t\to\infty}E\sqrt{\bigl\vert x_{1}(t;\phi)-y_{1}\bigl(t;\phi^{*}\bigr)\bigr\vert^{2}+\bigl\vert x_{2}(t;\phi)-y_{2}\bigl(t;\phi^{*}\bigr)\bigr\vert^{2}+\bigl\vert x_{3}(t;\phi)-y_{3}\bigl(t;\phi^{*}\bigr)\bigr\vert^{2}}=0. \tag{43} $$
Proof. We only need to show
$$ \lim_{t\to\infty}E\bigl\vert x_{i}(t;\phi)-y_{i}\bigl(t;\phi^{*}\bigr)\bigr\vert=0,\quad i=1,2,3. $$
Define functions as follows:
$$ V_{i}(x_{i})=\bigl\vert \ln x_{i}(t;\phi)-\ln y_{i}\bigl(t;\phi^{*}\bigr)\bigr\vert,\quad i=1,2,3. $$
Applying Itô's formula, we obtain
$$ \begin{aligned} \mathcal{L}V_{1}(x_{1})&\leq -a_{11}\bigl\vert x_{1}(t;\phi)-y_{1}\bigl(t;\phi^{*}\bigr)\bigr\vert+a_{12}\int_{-\tau_{12}}^{0}\bigl\vert x_{2}(t+\theta;\phi)-y_{2}\bigl(t+\theta;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}\mu_{12}(\theta), \end{aligned} \tag{44} $$
$$ \begin{aligned} \mathcal{L}V_{2}(x_{2})&\leq -a_{22}\bigl\vert x_{2}(t;\phi)-y_{2}\bigl(t;\phi^{*}\bigr)\bigr\vert+a_{21}\int_{-\tau_{21}}^{0}\bigl\vert x_{1}(t+\theta;\phi)-y_{1}\bigl(t+\theta;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}\mu_{21}(\theta)\\ &\quad{}+a_{23}\int_{-\tau_{23}}^{0}\bigl\vert x_{3}(t+\theta;\phi)-y_{3}\bigl(t+\theta;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}\mu_{23}(\theta), \end{aligned} \tag{45} $$
$$ \begin{aligned} \mathcal{L}V_{3}(x_{3})&\leq -a_{33}\bigl\vert x_{3}(t;\phi)-y_{3}\bigl(t;\phi^{*}\bigr)\bigr\vert+a_{32}\int_{-\tau_{32}}^{0}\bigl\vert x_{2}(t+\theta;\phi)-y_{2}\bigl(t+\theta;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}\mu_{32}(\theta). \end{aligned} \tag{46} $$
Define the function
$$ V(t)=w_{1}V_{1}(x_{1})+w_{2}V_{2}(x_{2})+w_{3}V_{3}(x_{3})+V_{4}(t), \tag{47} $$
where
$$ \begin{aligned} V_{4}(t)&=w_{1}a_{12}\int_{-\tau_{12}}^{0}\int_{t+\theta}^{t}\bigl\vert x_{2}(s;\phi)-y_{2}\bigl(s;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}s\,\mathrm{d}\mu_{12}(\theta)\\ &\quad{}+w_{2}a_{21}\int_{-\tau_{21}}^{0}\int_{t+\theta}^{t}\bigl\vert x_{1}(s;\phi)-y_{1}\bigl(s;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}s\,\mathrm{d}\mu_{21}(\theta)\\ &\quad{}+w_{2}a_{23}\int_{-\tau_{23}}^{0}\int_{t+\theta}^{t}\bigl\vert x_{3}(s;\phi)-y_{3}\bigl(s;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}s\,\mathrm{d}\mu_{23}(\theta)\\ &\quad{}+w_{3}a_{32}\int_{-\tau_{32}}^{0}\int_{t+\theta}^{t}\bigl\vert x_{2}(s;\phi)-y_{2}\bigl(s;\phi^{*}\bigr)\bigr\vert\,\mathrm{d}s\,\mathrm{d}\mu_{32}(\theta). \end{aligned} \tag{48} $$
From (44)–(48) we obtain
$$ \begin{aligned} \mathcal{L}V(t)&=w_{1}\mathcal{L}V_{1}(x_{1})+w_{2}\mathcal{L}V_{2}(x_{2})+w_{3}\mathcal{L}V_{3}(x_{3})+\frac{\mathrm{d}V_{4}(t)}{\mathrm{d}t}\\ &\leq -(w_{1}a_{11}-w_{2}a_{21})\bigl\vert x_{1}(t;\phi)-y_{1}\bigl(t;\phi^{*}\bigr)\bigr\vert\\ &\quad{}-(w_{2}a_{22}-w_{1}a_{12}-w_{3}a_{32})\bigl\vert x_{2}(t;\phi)-y_{2}\bigl(t;\phi^{*}\bigr)\bigr\vert\\ &\quad{}-(w_{3}a_{33}-w_{2}a_{23})\bigl\vert x_{3}(t;\phi)-y_{3}\bigl(t;\phi^{*}\bigr)\bigr\vert. \end{aligned} $$
Hence, we have
$$ \begin{aligned} E\bigl[V(t)\bigr]&\leq E\bigl[V(0)\bigr]-(w_{1}a_{11}-w_{2}a_{21})\int_{0}^{t}E\bigl[\bigl\vert x_{1}(s;\phi)-y_{1}\bigl(s;\phi^{*}\bigr)\bigr\vert\bigr]\,\mathrm{d}s\\ &\quad{}-(w_{2}a_{22}-w_{1}a_{12}-w_{3}a_{32})\int_{0}^{t}E\bigl[\bigl\vert x_{2}(s;\phi)-y_{2}\bigl(s;\phi^{*}\bigr)\bigr\vert\bigr]\,\mathrm{d}s\\ &\quad{}-(w_{3}a_{33}-w_{2}a_{23})\int_{0}^{t}E\bigl[\bigl\vert x_{3}(s;\phi)-y_{3}\bigl(s;\phi^{*}\bigr)\bigr\vert\bigr]\,\mathrm{d}s, \end{aligned} $$
which implies
$$ \int_{0}^{t}E\bigl[\bigl\vert x_{i}(s;\phi)-y_{i}\bigl(s;\phi^{*}\bigr)\bigr\vert\bigr]\,\mathrm{d}s<+\infty,\quad i=1,2,3. \tag{49} $$
Define the functions
$$ F_{i}(t)=E\bigl[\bigl\vert x_{i}(t;\phi)-y_{i}\bigl(t;\phi^{*}\bigr)\bigr\vert\bigr],\quad i=1,2,3. $$
Then, for any \(t_{1},t_{2}\in[0,+\infty)\), we obtain, for each \(i=1,2,3\),
$$ \begin{aligned} \bigl\vert F_{i}(t_{2})-F_{i}(t_{1})\bigr\vert&=\bigl\vert E\bigl[\bigl\vert x_{i}(t_{2};\phi)-y_{i}\bigl(t_{2};\phi^{*}\bigr)\bigr\vert-\bigl\vert x_{i}(t_{1};\phi)-y_{i}\bigl(t_{1};\phi^{*}\bigr)\bigr\vert\bigr]\bigr\vert\\ &\leq E\bigl[\bigl\vert\bigl(x_{i}(t_{2};\phi)-y_{i}\bigl(t_{2};\phi^{*}\bigr)\bigr)-\bigl(x_{i}(t_{1};\phi)-y_{i}\bigl(t_{1};\phi^{*}\bigr)\bigr)\bigr\vert\bigr]\\ &\leq E\bigl[\bigl\vert x_{i}(t_{2};\phi)-x_{i}(t_{1};\phi)\bigr\vert\bigr]+E\bigl[\bigl\vert y_{i}\bigl(t_{2};\phi^{*}\bigr)-y_{i}\bigl(t_{1};\phi^{*}\bigr)\bigr\vert\bigr]. \end{aligned} \tag{50} $$
From model (1), written in integral form, we have
$$ \begin{aligned} &x_{1}(t_{2};\phi)-x_{1}(t_{1};\phi)\\ &\quad=\int_{t_{1}}^{t_{2}}x_{1}(s;\phi)\biggl[r_{1}-h_{1}-a_{11}x_{1}(s;\phi)-a_{12}\int_{-\tau_{12}}^{0}x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr]\,\mathrm{d}s+\int_{t_{1}}^{t_{2}}\sigma_{1}x_{1}(s;\phi)\,\mathrm{d}B_{1}(s),\\ &x_{2}(t_{2};\phi)-x_{2}(t_{1};\phi)\\ &\quad=\int_{t_{1}}^{t_{2}}x_{2}(s;\phi)\biggl[-r_{2}-h_{2}+a_{21}\int_{-\tau_{21}}^{0}x_{1}(s+\theta;\phi)\,\mathrm{d}\mu_{21}(\theta)-a_{22}x_{2}(s;\phi)\\ &\qquad{}-a_{23}\int_{-\tau_{23}}^{0}x_{3}(s+\theta;\phi)\,\mathrm{d}\mu_{23}(\theta)\biggr]\,\mathrm{d}s+\int_{t_{1}}^{t_{2}}\sigma_{2}x_{2}(s;\phi)\,\mathrm{d}B_{2}(s),\\ &x_{3}(t_{2};\phi)-x_{3}(t_{1};\phi)\\ &\quad=\int_{t_{1}}^{t_{2}}x_{3}(s;\phi)\biggl[-r_{3}-h_{3}+a_{32}\int_{-\tau_{32}}^{0}x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{32}(\theta)-a_{33}x_{3}(s;\phi)\biggr]\,\mathrm{d}s+\int_{t_{1}}^{t_{2}}\sigma_{3}x_{3}(s;\phi)\,\mathrm{d}B_{3}(s). \end{aligned} \tag{51} $$
For any \(t_{2}>t_{1}\) and \(p>1\), using Hölder's inequality, from the first equation of (51), we have
$$ \begin{aligned} &\bigl(E\bigl[\bigl\vert x_{1}(t_{2};\phi)-x_{1}(t_{1};\phi)\bigr\vert\bigr]\bigr)^{p}\leq E\bigl[\bigl\vert x_{1}(t_{2};\phi)-x_{1}(t_{1};\phi)\bigr\vert^{p}\bigr]\\ &\quad\leq E\biggl[\biggl(\int_{t_{1}}^{t_{2}}x_{1}(s;\phi)\biggl\vert r_{1}-h_{1}-a_{11}x_{1}(s;\phi)-a_{12}\int_{-\tau_{12}}^{0}x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr\vert\,\mathrm{d}s+\biggl\vert\int_{t_{1}}^{t_{2}}\sigma_{1}x_{1}(s;\phi)\,\mathrm{d}B_{1}(s)\biggr\vert\biggr)^{p}\biggr]\\ &\quad\leq 2^{p}E\biggl[\biggl(\int_{t_{1}}^{t_{2}}x_{1}(s;\phi)\biggl\vert r_{1}-h_{1}-a_{11}x_{1}(s;\phi)-a_{12}\int_{-\tau_{12}}^{0}x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr\vert\,\mathrm{d}s\biggr)^{p}\biggr]\\ &\qquad{}+2^{p}E\biggl[\biggl\vert\int_{t_{1}}^{t_{2}}\sigma_{1}x_{1}(s;\phi)\,\mathrm{d}B_{1}(s)\biggr\vert^{p}\biggr]. \end{aligned} \tag{52} $$
Using Hölder's inequality again, we also have
$$ \begin{aligned} &E\biggl[\biggl(\int_{t_{1}}^{t_{2}}x_{1}(s;\phi)\biggl\vert r_{1}-h_{1}-a_{11}x_{1}(s;\phi)-a_{12}\int_{-\tau_{12}}^{0}x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr\vert\,\mathrm{d}s\biggr)^{p}\biggr]\\ &\quad\leq E\biggl[\biggl(\int_{t_{1}}^{t_{2}}\biggl(\vert r_{1}-h_{1}\vert x_{1}(s;\phi)+a_{11}x_{1}^{2}(s;\phi)+a_{12}\int_{-\tau_{12}}^{0}x_{1}(s;\phi)x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)\,\mathrm{d}s\biggr)^{p}\biggr]\\ &\quad\leq (t_{2}-t_{1})^{p-1}E\biggl[\int_{t_{1}}^{t_{2}}\biggl(\vert r_{1}-h_{1}\vert x_{1}(s;\phi)+a_{11}x_{1}^{2}(s;\phi)+a_{12}\int_{-\tau_{12}}^{0}x_{1}(s;\phi)x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)^{p}\,\mathrm{d}s\biggr]\\ &\quad\leq (t_{2}-t_{1})^{p-1}E\biggl[\int_{t_{1}}^{t_{2}}3^{p}\biggl(\vert r_{1}-h_{1}\vert^{p}x_{1}^{p}(s;\phi)+a_{11}^{p}x_{1}^{2p}(s;\phi)+\biggl(a_{12}\int_{-\tau_{12}}^{0}x_{1}(s;\phi)x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)^{p}\biggr)\,\mathrm{d}s\biggr]\\ &\quad=3^{p}(t_{2}-t_{1})^{p-1}\vert r_{1}-h_{1}\vert^{p}\int_{t_{1}}^{t_{2}}E\bigl[x_{1}^{p}(s;\phi)\bigr]\,\mathrm{d}s+3^{p}a_{11}^{p}(t_{2}-t_{1})^{p-1}\int_{t_{1}}^{t_{2}}E\bigl[x_{1}^{2p}(s;\phi)\bigr]\,\mathrm{d}s\\ &\qquad{}+3^{p}(t_{2}-t_{1})^{p-1}E\biggl[\int_{t_{1}}^{t_{2}}\biggl(a_{12}\int_{-\tau_{12}}^{0}x_{1}(s;\phi)x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)^{p}\,\mathrm{d}s\biggr]. \end{aligned} \tag{53} $$
For the last term, using the elementary inequality \(x_{1}x_{2}\leq\frac{1}{2}(x_{1}^{2}+x_{2}^{2})\) and Hölder's inequality, we have
$$ \begin{aligned} &E\biggl[\int_{t_{1}}^{t_{2}}\biggl(a_{12}\int_{-\tau_{12}}^{0}x_{1}(s;\phi)x_{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)^{p}\,\mathrm{d}s\biggr]\\ &\quad\leq E\biggl[\int_{t_{1}}^{t_{2}}\biggl(\frac{1}{2}a_{12}x_{1}^{2}(s;\phi)+\frac{1}{2}a_{12}\int_{-\tau_{12}}^{0}x_{2}^{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)^{p}\,\mathrm{d}s\biggr]\\ &\quad\leq E\biggl[\int_{t_{1}}^{t_{2}}\biggl(a_{12}^{p}x_{1}^{2p}(s;\phi)+\biggl(a_{12}\int_{-\tau_{12}}^{0}x_{2}^{2}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)^{p}\biggr)\,\mathrm{d}s\biggr]\\ &\quad\leq E\biggl[\int_{t_{1}}^{t_{2}}\biggl(a_{12}^{p}x_{1}^{2p}(s;\phi)+a_{12}^{p-1}\int_{-\tau_{12}}^{0}x_{2}^{2p}(s+\theta;\phi)\,\mathrm{d}\mu_{12}(\theta)\biggr)\,\mathrm{d}s\biggr]\\ &\quad=a_{12}^{p}\int_{t_{1}}^{t_{2}}E\bigl[x_{1}^{2p}(s;\phi)\bigr]\,\mathrm{d}s+a_{12}^{p-1}\int_{t_{1}}^{t_{2}}\int_{-\tau_{12}}^{0}E\bigl[x_{2}^{2p}(s+\theta;\phi)\bigr]\,\mathrm{d}\mu_{12}(\theta)\,\mathrm{d}s. \end{aligned} \tag{54} $$
In view of Theorem 7.1 in [24], for any \(t_{2}>t_{1}\) and \(1<p\leq 2\), we obtain
$$ E\biggl[\biggl\vert\int_{t_{1}}^{t_{2}}\sigma_{1}x_{1}(s;\phi)\,\mathrm{d}B_{1}(s)\biggr\vert^{p}\biggr]\leq \bigl\vert\sigma_{1}^{p}\bigr\vert\biggl(\frac{p(p-1)}{2}\biggr)^{\frac{p}{2}}(t_{2}-t_{1})^{\frac{p-2}{2}}\int_{t_{1}}^{t_{2}}E\bigl[x_{1}^{p}(s;\phi)\bigr]\,\mathrm{d}s. \tag{55} $$
From Lemma 3, there exist \(K_{1}^{**}(p)>0\), \(K_{2}^{**}(p)>0\), and \(K_{3}^{**}(p)>0\) such that \(\sup_{t\geq-\gamma}E[x_{1}^{p}(t)]\leq K_{1}^{**}(p)\), \(\sup_{t\geq-\gamma}E[x_{2}^{p}(t)]\leq K_{2}^{**}(p)\), and \(\sup_{t\geq-\gamma}E[x_{3}^{p}(t)]\leq K_{3}^{**}(p)\). Therefore, from (52)–(55) there exists \(\delta>0\) such that, for any \(t_{1}\geq 0\), \(t_{2}\geq 0\), and \(1<p\leq 2\) with \(\vert t_{2}-t_{1}\vert\leq\delta\),
$$ \begin{aligned} \bigl(E\bigl[\bigl\vert x_{1}(t_{2};\phi)-x_{1}(t_{1};\phi)\bigr\vert\bigr]\bigr)^{p}&\leq 2^{p}\biggl[\bigl\vert\sigma_{1}^{p}\bigr\vert\biggl(\frac{p(p-1)}{2}\biggr)^{\frac{p}{2}}(t_{2}-t_{1})^{\frac{p}{2}}K_{1}^{**}(p)\biggr]+2^{p}\bigl[3^{p}(t_{2}-t_{1})^{p}\vert r_{1}-h_{1}\vert^{p}K_{1}^{**}(p)\\ &\quad{}+3^{p}a_{11}^{p}(t_{2}-t_{1})^{p}K_{1}^{**}(2p)\bigr]+2^{p}3^{p}(t_{2}-t_{1})^{p}a_{12}^{p}\bigl[K_{1}^{**}(2p)+K_{2}^{**}(2p)\bigr]\\ &\leq M_{1}^{**}\vert t_{2}-t_{1}\vert^{\frac{p}{2}}, \end{aligned} \tag{56} $$
where
$$ \begin{aligned} M_{1}^{**}&=\bigl\vert\sigma_{1}^{p}\bigr\vert\bigl(2p(p-1)\bigr)^{\frac{p}{2}}K_{1}^{**}(p)+[36\delta]^{\frac{p}{2}}\bigl[\vert r_{1}-h_{1}\vert^{p}K_{1}^{**}(p)+a_{11}^{p}K_{1}^{**}(2p)\bigr]\\ &\quad{}+[36\delta]^{\frac{p}{2}}a_{12}^{p}\bigl[K_{1}^{**}(2p)+K_{2}^{**}(2p)\bigr]. \end{aligned} $$
Similarly, we also obtain
$$ \bigl(E\bigl[\bigl\vert y_{1}\bigl(t_{2};\phi^{*}\bigr)-y_{1}\bigl(t_{1};\phi^{*}\bigr)\bigr\vert\bigr]\bigr)^{p}\leq M_{1}^{**}\vert t_{2}-t_{1}\vert^{\frac{p}{2}} \tag{57} $$
for any \(t_{1}\geq 0\), \(t_{2}\geq 0\) with \(\vert t_{2}-t_{1}\vert\leq\delta\) and \(1<p\leq 2\). Thus, from (50), we obtain
$$ \begin{aligned} \bigl\vert F_{1}(t_{2})-F_{1}(t_{1})\bigr\vert&\leq E\bigl[\bigl\vert x_{1}(t_{2};\phi)-x_{1}(t_{1};\phi)\bigr\vert\bigr]+E\bigl[\bigl\vert y_{1}\bigl(t_{2};\phi^{*}\bigr)-y_{1}\bigl(t_{1};\phi^{*}\bigr)\bigr\vert\bigr]\\ &\leq 2\bigl(M_{1}^{**}\bigr)^{\frac{1}{p}}\sqrt{\vert t_{2}-t_{1}\vert}. \end{aligned} $$
Using a similar argument, for \(F_{2}(t)\) and \(F_{3}(t)\) we can also obtain that there is \(\delta>0\) such that, for any \(t_{1}\geq 0\), \(t_{2}\geq 0\) with \(\vert t_{2}-t_{1}\vert\leq\delta\) and \(1<p\leq 2\),
$$ \bigl\vert F_{2}(t_{2})-F_{2}(t_{1})\bigr\vert\leq 2\bigl(M_{2}^{**}\bigr)^{\frac{1}{p}}\sqrt{\vert t_{2}-t_{1}\vert} \tag{58} $$
and
$$ \bigl\vert F_{3}(t_{2})-F_{3}(t_{1})\bigr\vert\leq 2\bigl(M_{3}^{**}\bigr)^{\frac{1}{p}}\sqrt{\vert t_{2}-t_{1}\vert}, $$
where
$$ \begin{aligned} M_{2}^{**}&=\bigl\vert\sigma_{2}^{p}\bigr\vert\bigl(2p(p-1)\bigr)^{\frac{p}{2}}K_{2}^{**}(p)+[64\delta]^{\frac{p}{2}}\bigl[\vert r_{2}+h_{2}\vert^{p}K_{2}^{**}(p)+a_{22}^{p}K_{2}^{**}(2p)\bigr]\\ &\quad{}+[64\delta]^{\frac{p}{2}}a_{23}^{p}\bigl[K_{2}^{**}(2p)+K_{3}^{**}(2p)\bigr]+[64\delta]^{\frac{p}{2}}a_{21}^{p}\bigl[K_{1}^{**}(2p)+K_{3}^{**}(2p)\bigr] \end{aligned} $$
and
$$ \begin{aligned} M_{3}^{**}&=\bigl\vert\sigma_{3}^{p}\bigr\vert\bigl(2p(p-1)\bigr)^{\frac{p}{2}}K_{3}^{**}(p)+[36\delta]^{\frac{p}{2}}\bigl[\vert r_{3}+h_{3}\vert^{p}K_{3}^{**}(p)+a_{33}^{p}K_{3}^{**}(2p)\bigr]\\ &\quad{}+[36\delta]^{\frac{p}{2}}a_{32}^{p}\bigl[K_{3}^{**}(2p)+K_{2}^{**}(2p)\bigr]. \end{aligned} $$
From (56)–(58), we obtain that \(F_{1}(t)\), \(F_{2}(t)\), and \(F_{3}(t)\) are uniformly continuous on \((0,\infty)\). Therefore, from (49) and the Barbalat lemma in [25] we can finally obtain (43). This completes the proof. □
Denote by \(\mathcal{P}([-\gamma,0],R_{+}^{3})\) the space of all probability measures on \(C([-\gamma,0],R_{+}^{3})\). For \(P_{1},P_{2}\in\mathcal{P}([-\gamma,0],R_{+}^{3})\), define
$$ d_{\mathrm{BL}}(P_{1},P_{2})=\sup_{f\in\mathrm{BL}}\biggl\vert\int_{R_{+}^{3}}f(z)P_{1}(\mathrm{d}z)-\int_{R_{+}^{3}}f(z)P_{2}(\mathrm{d}z)\biggr\vert, $$
where the set BL is defined as follows:
$$ \mathrm{BL}=\bigl\{f:\mathcal{C}\bigl([-\gamma,0],R_{+}^{3}\bigr)\rightarrow R:\bigl\vert f(z_{1})-f(z_{2})\bigr\vert\leq\Vert z_{1}-z_{2}\Vert,\bigl\vert f(\cdot)\bigr\vert\leq 1\bigr\}. $$
Denote by \(p(t,\phi,\mathrm{d}x)\) the transition probability of the process \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t))\). We have the following result.

Theorem 3. Assume that there are positive constants \(q_{1}\), \(q_{2}\), and \(q_{3}\) such that
$$ q_{1}a_{11}-q_{2}a_{21}>0,\qquad q_{2}a_{22}-q_{1}a_{12}-q_{3}a_{32}>0,\qquad q_{3}a_{33}-q_{2}a_{23}>0. $$
Then model (1) is asymptotically stable in distribution, i.e., there exists a unique probability measure \(v(\cdot)\) such that, for any initial function \(\phi\in C([-\gamma,0],R_{+}^{3})\), the transition probability \(p(t,\phi,\cdot)\) of \(x(t,\phi)=(x_{1}(t,\phi),x_{2}(t,\phi),x_{3}(t,\phi))\) satisfies
$$ \lim_{t\to\infty}d_{\mathrm{BL}}\bigl(p(t,\phi,\cdot),v(\cdot)\bigr)=0. $$
This theorem can be proved by a standard argument as in [15, 16], using Lemma 1 and Theorem 2; hence, we omit it here.
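The same triple of strict linear inequalities appears with weights \(w_{i}\) in Theorem 2, \(q_{i}\) here, and \(m_{i}\) in Theorem 4 below, so it is worth noting that its feasibility can be checked mechanically. The following is a minimal sketch (not part of the paper's argument): it poses the check as a small linear program that maximizes the smallest slack; the coefficient values used are those of the last numerical example in Sect. 5.

```python
# Feasibility check for positive weights (q1, q2, q3) with
#   q1*a11 - q2*a21 > 0,  q2*a22 - q1*a12 - q3*a32 > 0,  q3*a33 - q2*a23 > 0.
import numpy as np
from scipy.optimize import linprog

def find_weights(a):
    """Maximize the minimal slack s over (q1, q2, q3, s); the inequalities are
    homogeneous of degree 1 in q, so restricting q to [eps, 1] is harmless."""
    a11, a12, a21 = a["11"], a["12"], a["21"]
    a22, a23, a32, a33 = a["22"], a["23"], a["32"], a["33"]
    c = [0.0, 0.0, 0.0, -1.0]           # minimize -s, i.e., maximize s
    # Each row encodes  -(slack expression) + s <= 0.
    A_ub = [[-a11,  a21,  0.0, 1.0],
            [ a12, -a22,  a32, 1.0],
            [ 0.0,  a23, -a33, 1.0]]
    b_ub = [0.0, 0.0, 0.0]
    bounds = [(1e-6, 1.0)] * 3 + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if res.success and res.x[3] > 1e-9:
        return res.x[:3]                 # weights with strictly positive slacks
    return None                          # condition fails for this matrix

# Coefficients of the optimal-harvesting example in Sect. 5:
a = {"11": 0.4, "12": 0.1, "21": 0.75, "22": 0.5,
     "23": 0.1, "32": 0.45, "33": 0.6}
print(find_weights(a))
```

If `find_weights` returns a vector, Theorems 2–4 apply with those weights; if it returns `None`, the condition fails for every choice of positive weights, since the linear program is exhaustive over the (scaled) feasible set.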
4 Effect of harvesting

In model (1), \(h_{i}\geq 0\) (\(i=1,2,3\)) denote the harvesting rates of species \(x_{i}\), respectively. Firstly, based on Theorem 1, we discuss the effects of harvesting on the persistence and extinction of the species in model (1).
From \(\Delta_{11}=0\), the critical value of the harvesting rate \(h_{1}\) for the prey \(x_{1}\) is determined by \(h_{1}'=r_{1}-\frac{\sigma_{1}^{2}}{2}\). When \(h_{1}\geq h_{1}'\), all species \(x_{i}\) (\(i=1,2,3\)) will die out by conclusions (1) and (2) of Theorem 1. This shows that excessive harvesting of the prey will lead to the extinction of all species in a food-chain system.
When \(h_{1}<h_{1}'\), from \(\Delta_{22}=0\), the critical value of the harvesting rate \(h_{2}\) for the middle predator \(x_{2}\) is determined by \(h_{2}'=\frac{a_{21}}{a_{11}}(r_{1}-\frac{\sigma_{1}^{2}}{2}-h_{1})-(r_{2}+\frac{\sigma_{2}^{2}}{2})\). When \(h_{2}\geq h_{2}'\), from conclusions (3) and (4) of Theorem 1 we see that the prey \(x_{1}\) will be permanent in the mean, but the two predators \(x_{2}\) and \(x_{3}\) will die out. This shows that excessive harvesting of the middle predator will lead to the extinction of both predators. Furthermore, we see that \(h_{2}'\) is decreasing in the harvesting rate \(h_{1}\) for the prey \(x_{1}\). This shows that if we increase the harvest of the prey, then the harvest of the middle predator must decrease in order to guarantee the non-extinction of the whole food-chain system.
When \(h_{2}<h_{2}'\), from \(\Delta_{33}=0\), we further obtain that the critical value of the harvesting rate \(h_{3}\) for the top predator \(x_{3}\) is
$$ h_{3}'=\frac{[(r_{1}-\frac{\sigma_{1}^{2}}{2}-h_{1})a_{21}-(r_{2}+\frac{\sigma_{2}^{2}}{2}+h_{2})a_{11}]a_{32}}{H_{2}}-\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr). $$
When \(h_{3}\geq h_{3}'\), from conclusions (5) and (6) of Theorem 1 we see that the prey \(x_{1}\) and the middle predator \(x_{2}\) will be permanent in the mean, but the top predator \(x_{3}\) will die out; whereas when \(h_{3}<h_{3}'\), from conclusion (7) of Theorem 1 we see that all species \(x_{i}\) (\(i=1,2,3\)) will be permanent in the mean. This shows that only moderate harvesting of all species can ensure the persistence of all species and a continuous income. Furthermore, we also see that \(h_{3}'\) is decreasing in the harvesting rates \(h_{1}\) and \(h_{2}\) for the prey \(x_{1}\) and the middle predator \(x_{2}\). This shows that when the prey and the middle predator are harvested, the harvest of the top predator must decrease; otherwise, the top predator will die out.
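Since the three critical rates are explicit, they are easy to tabulate. The sketch below simply evaluates \(h_{1}'\), \(h_{2}'\), and \(h_{3}'\) from the displayed formulas (with \(H_{2}=a_{11}a_{22}+a_{12}a_{21}\), as in the paper); the parameter values passed in at the bottom are illustrative placeholders, not values from the paper.

```python
# Critical harvesting rates from Sect. 4:
#   h1' = r1 - sigma1^2/2
#   h2' = (a21/a11)*(r1 - sigma1^2/2 - h1) - (r2 + sigma2^2/2)
#   h3' = ((r1 - sigma1^2/2 - h1)*a21 - (r2 + sigma2^2/2 + h2)*a11)*a32/H2
#         - (r3 + sigma3^2/2),  where  H2 = a11*a22 + a12*a21.

def critical_rates(r, sigma, a, h1=0.0, h2=0.0):
    """r, sigma: dicts keyed by species 1,2,3; a: dict of a_ij coefficients."""
    H2 = a["11"] * a["22"] + a["12"] * a["21"]
    s1 = r[1] - sigma[1] ** 2 / 2         # noise-corrected prey growth rate
    h1c = s1
    h2c = (a["21"] / a["11"]) * (s1 - h1) - (r[2] + sigma[2] ** 2 / 2)
    h3c = ((s1 - h1) * a["21"] - (r[2] + sigma[2] ** 2 / 2 + h2) * a["11"]) \
          * a["32"] / H2 - (r[3] + sigma[3] ** 2 / 2)
    return h1c, h2c, h3c

# Hypothetical placeholder parameters (illustration only):
r = {1: 2.0, 2: 1.0, 3: 0.5}
sigma = {1: 0.5, 2: 0.3, 3: 0.3}
a = {"11": 1.0, "12": 1.0, "21": 2.0, "22": 0.5,
     "23": 1.0, "32": 1.0, "33": 2.5}
print(critical_rates(r, sigma, a))
```

Note how the nesting of the formulas mirrors the discussion above: raising \(h_{1}\) lowers \(h_{2}'\), and raising either \(h_{1}\) or \(h_{2}\) lowers \(h_{3}'\).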
Next, we discuss the optimal harvesting problem under the harvesting rates \(h_{1}\), \(h_{2}\), and \(h_{3}\) for species \(x_{1}\), \(x_{2}\), and \(x_{3}\), respectively. We can establish the following integrated result.

Theorem 4. Assume that there are positive constants \(m_{1}\), \(m_{2}\), and \(m_{3}\) such that
$$ m_{1}a_{11}-m_{2}a_{21}>0,\qquad m_{2}a_{22}-m_{1}a_{12}-m_{3}a_{32}>0,\qquad m_{3}a_{33}-m_{2}a_{23}>0. $$
Define
$$ \begin{aligned} h_{1}^{*}&=\frac{-a_{11}(a_{32}-a_{23})^{2}+2a_{33}a_{21}(a_{12}-a_{21})+4a_{11}a_{22}a_{33}}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)\\ &\quad{}+\frac{a_{11}a_{33}(a_{12}+a_{21})}{4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}}\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)\\ &\quad{}+\frac{a_{11}(a_{12}+a_{21})(a_{32}-a_{23})}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr),\\ h_{2}^{*}&=\biggl\{\frac{2a_{22}a_{33}(a_{12}+a_{21})}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\\ &\qquad{}+\frac{(a_{32}-a_{23})(a_{23}a_{12}-a_{32}a_{21})}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\biggr\}\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)\\ &\quad{}+\frac{a_{33}a_{12}(a_{12}-a_{21})-a_{11}a_{32}(a_{23}-a_{32})-4a_{11}a_{22}a_{33}}{4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}}\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)\\ &\quad{}+\biggl\{\frac{(a_{12}-a_{21})(a_{21}a_{32}-a_{12}a_{23})}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\\ &\qquad{}+\frac{2a_{11}a_{22}(a_{23}+a_{32})}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\biggr\}\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr),\\ h_{3}^{*}&=\frac{a_{33}(a_{21}-a_{12})(a_{23}+a_{32})}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)\\ &\quad{}-\frac{2a_{11}a_{33}(a_{23}+a_{32})}{4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}}\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)\\ &\quad{}+\frac{a_{33}(a_{12}-a_{21})^{2}+2a_{11}a_{23}(a_{23}-a_{32})-4a_{11}a_{22}a_{33}}{2[4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}]}\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr) \end{aligned} \tag{59} $$
and
$$ \begin{aligned} Y^{*}(H)&=-(a_{22}a_{33}+a_{23}a_{32})h_{1}^{2}+(a_{33}a_{12}-a_{33}a_{21})h_{1}h_{2}-a_{11}a_{33}h_{2}^{2}+(a_{11}a_{23}-a_{11}a_{32})h_{2}h_{3}\\ &\quad{}-(a_{11}a_{22}+a_{12}a_{21})h_{3}^{2}-(a_{12}a_{23}+a_{21}a_{32})h_{1}h_{3}\\ &\quad{}+\biggl[\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)(a_{22}a_{33}+a_{23}a_{32})+\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)a_{33}a_{12}-\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr)a_{12}a_{23}\biggr]h_{1}\\ &\quad{}+\biggl[\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)a_{33}a_{21}-\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)a_{11}a_{33}+\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr)a_{11}a_{23}\biggr]h_{2}\\ &\quad{}+\biggl[\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)a_{21}a_{32}-\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)a_{11}a_{32}-\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr)(a_{11}a_{22}+a_{12}a_{21})\biggr]h_{3}. \end{aligned} \tag{60} $$
We have the following conclusions.
(\(\mathcal{A}_{1}\)) If \(h_{1}^{*}\geq 0\), \(h_{2}^{*}\geq 0\), and \(h_{3}^{*}\geq 0\), and
$$ \begin{aligned} &\Delta_{33}|_{h_{1}=h_{1}^{*},h_{2}=h_{2}^{*},h_{3}=h_{3}^{*}}>0,\\ &4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}>0, \end{aligned} \tag{61} $$
then there is an optimal harvesting strategy \(H^{*}=(h_{1}^{*},h_{2}^{*},h_{3}^{*})\) for model (1), and
$$ \mathrm{MESY}=\frac{Y^{*}(H^{*})}{H_{3}}. \tag{62} $$
(\(\mathcal{A}_{2}\)) If one of the following conditions holds, then there is no optimal harvesting strategy for model (1):
(\(\mathcal{B}_{1}\)) \(b_{1}|_{h_{1}=h_{1}^{*}}\leq 0\);
(\(\mathcal{B}_{2}\)) \(\Delta_{33}|_{h_{1}=h_{1}^{*},h_{2}=h_{2}^{*},h_{3}=h_{3}^{*}}\leq 0\);
(\(\mathcal{B}_{3}\)) \(h_{1}^{*}<0\) or \(h_{2}^{*}<0\) or \(h_{3}^{*}<0\);
(\(\mathcal{B}_{4}\)) \(4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}<0\).

Define a set as follows:
$$ \mathcal{U}=\bigl\{H=(h_{1},h_{2},h_{3})^{T}\in R^{3}:\Delta_{33}>0,h_{i}\geq 0,i=1,2,3\bigr\}. $$
It is clear that for any \(H\in\mathcal{U}\) conclusion (7) of Theorem 1 holds. From the condition of conclusion \((\mathcal{A}_{1})\), we see that if the optimal harvesting strategy \(H^{*}\) exists, then \(H^{*}\in\mathcal{U}\).
Proof of conclusion \((\mathcal{A}_{1})\). Based on condition (61) we obtain that \(\mathcal{U}\) is not empty. From Theorem 3, we obtain that there exists a unique invariant measure \(v(\cdot)\) for model (1). From Corollary 3.4.3 in Da Prato and Zabczyk [26], we obtain that \(v(\cdot)\) is strong mixing. By Theorem 3.2.6 in [26], we further obtain that the measure \(v(\cdot)\) is also ergodic. Let \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t))\) be any global positive solution of model (1) with initial value \((\xi(\theta),\eta(\theta),\varsigma(\theta))\in C([-\gamma,0],R_{+}^{3})\). Based on Theorem 3.3.1 in [26], for \(H=(h_{1},h_{2},h_{3})^{T}\in\mathcal{U}\), we have
$$ \lim_{t\rightarrow\infty}\frac{1}{t}\int_{0}^{t}H^{T}x(s)\,\mathrm{d}s=\int_{R_{+}^{3}}H^{T}xv(\mathrm{d}x). \tag{63} $$
Let \(\varrho(z)\) be the stationary probability density of model (1); then we get
$$ Y(H)=\lim_{t\rightarrow\infty}E\Biggl[\sum_{i=1}^{3}h_{i}x_{i}(t)\Biggr]=\lim_{t\rightarrow\infty}E\bigl[H^{T}x(t)\bigr]=\int_{R_{+}^{3}}H^{T}x\varrho(x)\,\mathrm{d}x. \tag{64} $$
Note that the invariant measure of model (1) is unique and there exists a one-to-one correspondence between \(\varrho(z)\) and its corresponding invariant measure. We deduce
$$ \int_{R_{+}^{3}}H^{T}x\varrho(x)\,\mathrm{d}x=\int_{R_{+}^{3}}H^{T}xv(\mathrm{d}x). \tag{65} $$
Therefore, from conclusion (7) of Theorem 1, (59), and (63)–(65), we have
$$ \begin{aligned} Y(H)&=\lim_{t\to+\infty}\frac{1}{t}\int_{0}^{t}H^{T}x(s)\,\mathrm{d}s\\ &=h_{1}\lim_{t\to+\infty}\frac{1}{t}\int_{0}^{t}x_{1}(s)\,\mathrm{d}s+h_{2}\lim_{t\to+\infty}\frac{1}{t}\int_{0}^{t}x_{2}(s)\,\mathrm{d}s+h_{3}\lim_{t\to+\infty}\frac{1}{t}\int_{0}^{t}x_{3}(s)\,\mathrm{d}s\\ &=\frac{Y^{*}(H)}{H_{3}}. \end{aligned} $$
By calculating, we obtain
$$ \begin{aligned} \frac{\partial Y^{*}(H)}{\partial h_{1}}&=-2(a_{22}a_{33}+a_{23}a_{32})h_{1}+(a_{33}a_{12}-a_{33}a_{21})h_{2}-(a_{12}a_{23}+a_{21}a_{32})h_{3}\\ &\quad{}+\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)(a_{22}a_{33}+a_{23}a_{32})+\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)a_{33}a_{12}-\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr)a_{12}a_{23},\\ \frac{\partial Y^{*}(H)}{\partial h_{2}}&=-2a_{11}a_{33}h_{2}+(a_{33}a_{12}-a_{33}a_{21})h_{1}+(a_{11}a_{23}-a_{11}a_{32})h_{3}\\ &\quad{}+\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)a_{33}a_{21}-\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)a_{11}a_{33}+\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr)a_{11}a_{23},\\ \frac{\partial Y^{*}(H)}{\partial h_{3}}&=-2(a_{11}a_{22}+a_{12}a_{21})h_{3}+(a_{11}a_{23}-a_{11}a_{32})h_{2}-(a_{12}a_{23}+a_{21}a_{32})h_{1}\\ &\quad{}+\biggl(r_{1}-\frac{\sigma_{1}^{2}}{2}\biggr)a_{21}a_{32}-\biggl(r_{2}+\frac{\sigma_{2}^{2}}{2}\biggr)a_{11}a_{32}-\biggl(r_{3}+\frac{\sigma_{3}^{2}}{2}\biggr)(a_{11}a_{22}+a_{12}a_{21}). \end{aligned} $$
Solving the equations \(\frac{\partial Y^{*}(H)}{\partial h_{1}}=0\), \(\frac{\partial Y^{*}(H)}{\partial h_{2}}=0\), and \(\frac{\partial Y^{*}(H)}{\partial h_{3}}=0\), we obtain \(h_{1}=h_{1}^{*}\), \(h_{2}=h_{2}^{*}\), and \(h_{3}=h_{3}^{*}\), which are given in (59). Let \(H^{*}=(h_{1}^{*},h_{2}^{*},h_{3}^{*})\); by calculating we further obtain
$$ \begin{aligned} &\frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{1}^{2}}=-2(a_{22}a_{33}+a_{23}a_{32}),\qquad \frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{1}\partial h_{2}}=a_{33}(a_{12}-a_{21}),\\ &\frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{1}\partial h_{3}}=-(a_{12}a_{23}+a_{21}a_{32}),\qquad \frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{2}^{2}}=-2a_{11}a_{33},\\ &\frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{2}\partial h_{1}}=a_{33}(a_{12}-a_{21}),\qquad \frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{2}\partial h_{3}}=a_{11}(a_{23}-a_{32}),\\ &\frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{3}^{2}}=-2(a_{11}a_{22}+a_{12}a_{21}),\qquad \frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{3}\partial h_{1}}=-(a_{12}a_{23}+a_{21}a_{32}),\\ &\frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{3}\partial h_{2}}=a_{11}(a_{23}-a_{32}). \end{aligned} $$
Define the matrix \(M=(\frac{\partial^{2}Y^{*}(H^{*})}{\partial h_{i}\partial h_{j}})_{1\leq i,j\leq 3}\). Then condition (61) implies that the matrix M is negative definite. We hence obtain that \(Y^{*}(H)\) has a unique maximum value \(Y^{*}(H^{*})\). This shows that \(H^{*}\) is an optimal harvesting strategy, and the MESY is given in (62).
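Since \(Y^{*}(H)\) is quadratic in H, the stationary point solves the linear system \(MH=-c\), where M is the constant Hessian displayed above and c collects the constant terms of the three partial derivatives. The following is a minimal numerical sketch of this step (it only re-implements the displayed algebra; any parameter values supplied to it are the user's assumptions):

```python
# Solve grad Y*(H) = 0, i.e. M @ H = -c, using the displayed Hessian M
# and the constant terms c of the partial derivatives of Y*.
import numpy as np

def optimal_harvest(r, sig, a):
    s1 = r[1] - sig[1] ** 2 / 2
    s2 = r[2] + sig[2] ** 2 / 2
    s3 = r[3] + sig[3] ** 2 / 2
    a11, a12, a21 = a["11"], a["12"], a["21"]
    a22, a23, a32, a33 = a["22"], a["23"], a["32"], a["33"]
    M = np.array([
        [-2*(a22*a33 + a23*a32), a33*(a12 - a21),      -(a12*a23 + a21*a32)],
        [a33*(a12 - a21),        -2*a11*a33,            a11*(a23 - a32)],
        [-(a12*a23 + a21*a32),   a11*(a23 - a32),      -2*(a11*a22 + a12*a21)],
    ])
    c = np.array([
        s1*(a22*a33 + a23*a32) + s2*a33*a12 - s3*a12*a23,
        s1*a33*a21 - s2*a11*a33 + s3*a11*a23,
        s1*a21*a32 - s2*a11*a32 - s3*(a11*a22 + a12*a21),
    ])
    H = np.linalg.solve(M, -c)                    # candidate (h1*, h2*, h3*)
    negdef = bool(np.all(np.linalg.eigvalsh(M) < 0))   # M negative definite?
    return H, negdef
```

The returned flag corresponds to condition (61): when M fails to be negative definite, case \((\mathcal{B}_{4})\) of the theorem applies and no maximizer exists.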
Proof of conclusion \((\mathcal{A}_{2})\). From conclusions (1) and (2) of Theorem 1, we can obtain \(\lim_{t\to\infty}x_{i}(t)=0\) (\(i=1,2,3\)) if condition \((\mathcal{B}_{1})\) holds. Hence, the optimal harvesting strategy does not exist.
Assume that condition \((\mathcal{B}_{2})\) or \((\mathcal{B}_{3})\) holds. If there is an optimal harvesting strategy \(\widetilde{H}^{*}=(\widetilde{h}_{1}^{*},\widetilde{h}_{2}^{*},\widetilde{h}_{3}^{*})\), then \(\widetilde{H}^{*}\in\mathcal{U}\). That is,
$$ \Delta_{33}|_{h_{1}=\widetilde{h}_{1}^{*},h_{2}=\widetilde{h}_{2}^{*},h_{3}=\widetilde{h}_{3}^{*}}>0,\quad \widetilde{h}_{1}^{*}\geq 0,\widetilde{h}_{2}^{*}\geq 0,\widetilde{h}_{3}^{*}\geq 0. \tag{66} $$
On the other hand, if \(\widetilde{H}^{*}=(\widetilde{h}_{1}^{*},\widetilde{h}_{2}^{*},\widetilde{h}_{3}^{*})\in\mathcal{U}\) is the optimal harvesting strategy, then \((\widetilde{h}_{1}^{*},\widetilde{h}_{2}^{*},\widetilde{h}_{3}^{*})\) must be the unique solution of the following system:
$$ \frac{\partial Y^{*}(H)}{\partial h_{1}}=0,\qquad \frac{\partial Y^{*}(H)}{\partial h_{2}}=0,\qquad \frac{\partial Y^{*}(H)}{\partial h_{3}}=0. $$
Therefore, we have \((h_{1}^{*},h_{2}^{*},h_{3}^{*})=(\widetilde{h}_{1}^{*},\widetilde{h}_{2}^{*},\widetilde{h}_{3}^{*})\). Thus, condition (66) becomes
$$ \Delta_{33}|_{h_{1}=h_{1}^{*},h_{2}=h_{2}^{*},h_{3}=h_{3}^{*}}>0,\quad h_{1}^{*}\geq 0,h_{2}^{*}\geq 0,h_{3}^{*}\geq 0, $$
which contradicts both \((\mathcal{B}_{2})\) and \((\mathcal{B}_{3})\).
Lastly, we consider condition \((\mathcal{B}_{4})\). We can assume that conditions \((\mathcal{B}_{2})\) and \((\mathcal{B}_{3})\) do not hold. Hence, \(h_{1}^{*}\geq 0\), \(h_{2}^{*}\geq 0\), \(h_{3}^{*}\geq 0\), and \(\Delta_{33}|_{h_{1}=h_{1}^{*},h_{2}=h_{2}^{*},h_{3}=h_{3}^{*}}>0\). Thus, \(\mathcal{U}\) is not empty. Condition \((\mathcal{B}_{4})\) implies that the matrix M is not negative semidefinite. Therefore, \(Y^{*}(H)\) has no maximum point. This completes the proof. □

5 Numerical examples

In this section, we provide numerical examples to illustrate our main results. The numerical method is the one proposed in [13]; see also [16]. In all of the following numerical examples the initial values are fixed as \(x_{1}(\theta)=0.3e^{\theta}\), \(x_{2}(\theta)=0.2e^{\theta}\), and \(x_{3}(\theta)=0.3e^{\theta}\) for all \(\theta\in[-\ln 2,0]\), and \(\tau_{12}=\tau_{21}=\tau_{23}=\tau_{32}=\ln 2\). In model (1), the parameters \(r_{1}=2.0\), \(r_{2}=1.0\), \(r_{3}=0.5\), and \(h_{1}=h_{2}=h_{3}=0\) are fixed in Cases 1–7 below. We consider the following cases.
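For reproducibility, trajectories of this kind can be generated with a standard Euler–Maruyama discretization. The sketch below is illustrative rather than the exact code of [13]: it assumes each delay kernel \(\mu_{ij}\) is the uniform probability measure on \([-\tau_{ij},0]\) (one admissible normalized choice), clips states at a tiny floor as a numerical safeguard, and takes its default parameters from Case 7 below.

```python
# Euler-Maruyama sketch for model (1); uniform delay kernels assumed.
import numpy as np

def simulate(T=100.0, dt=0.005, tau=np.log(2.0), seed=0,
             r=(2.0, 1.0, 0.5), h=(0.0, 0.0, 0.0),
             sig=(0.1, 0.2, 0.9),
             a=dict(a11=1.0, a12=1.0, a21=2.0, a22=0.5,
                    a23=1.0, a32=2.0, a33=1.0)):
    rng = np.random.default_rng(seed)
    lag = int(round(tau / dt))                # delay window in steps
    n = int(round(T / dt))
    x = np.empty((n + 1, 3))
    # history on [-tau, 0]: x1 = 0.3e^theta, x2 = 0.2e^theta, x3 = 0.3e^theta
    theta = np.linspace(-tau, 0.0, lag + 1)
    buf = np.column_stack([0.3 * np.exp(theta),
                           0.2 * np.exp(theta),
                           0.3 * np.exp(theta)])
    x[0] = buf[-1]
    for k in range(n):
        m = buf.mean(axis=0)                  # uniform-kernel delayed averages
        x1, x2, x3 = x[k]
        drift = np.array([
            x1 * (r[0] - h[0] - a["a11"] * x1 - a["a12"] * m[1]),
            x2 * (-r[1] - h[1] + a["a21"] * m[0] - a["a22"] * x2
                  - a["a23"] * m[2]),
            x3 * (-r[2] - h[2] + a["a32"] * m[1] - a["a33"] * x3),
        ])
        dB = rng.normal(0.0, np.sqrt(dt), 3)
        x[k + 1] = np.maximum(x[k] + drift * dt + np.array(sig) * x[k] * dB,
                              1e-12)          # positivity safeguard
        buf = np.vstack([buf[1:], x[k + 1]])  # slide the delay window
    return x
```

Plotting the running averages \(\langle x_{i}(t)\rangle\) of the output against the limits predicted by Theorem 1 is a quick sanity check of the cases that follow.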
Case 1. Taking parameters \(a_{11}=1\), \(a_{22}=0.5\), \(a_{33}=0.25\), \(a_{12}=1\), \(a_{21}=1\), \(a_{23}=1\), \(a_{32}=1\), \(\sigma_{1}=2.5\), \(\sigma_{2}=0.1\), and \(\sigma_{3}=0.05\), we have \(\Delta_{11}=-1.125<0\). Hence, the conditions of conclusion (1) in Theorem 1 are satisfied. The numerical simulations given in Fig. 1 illustrate that all species \(x_{i}\) (\(i=1,2,3\)) are extinct with probability one.
Figure 1: Species \(x_{i}\) (\(i=1,2,3\)) are extinct with probability one
Case 2. Taking parameters \(a_{11}=1\), \(a_{22}=0.5\), \(a_{33}=0.25\), \(a_{12}=1\), \(a_{21}=1\), \(a_{23}=1\), \(a_{32}=1\), \(\sigma_{1}=2.0\), \(\sigma_{2}=0.1\), and \(\sigma_{3}=0.05\), we have \(\Delta_{11}=0\). Hence, the conditions of conclusion (2) in Theorem 1 are satisfied. The numerical simulations given in Fig. 2 illustrate that all species \(x_{i}\) (\(i=1,2,3\)) are also extinct with probability one.
Figure 2: Species \(x_{i}\) (\(i=1,2,3\)) are also extinct with probability one
Case 3. Taking parameters \(a_{11}=1\), \(a_{22}=0.5\), \(a_{33}=0.25\), \(a_{12}=1\), \(a_{21}=0.7\), \(a_{23}=1\), \(a_{32}=1\), \(\sigma_{1}=1.0\), \(\sigma_{2}=0.6\), and \(\sigma_{3}=0.05\), we have \(\Delta_{11}=1.5>0\) and \(\Delta_{22}=-0.13<0\). Hence, the conditions of conclusion (3) in Theorem 1 are satisfied. The numerical simulations given in Fig. 3 illustrate that species \(x_{1}(t)\) is persistent in the mean, while species \(x_{i}(t)\) (\(i=2,3\)) go to extinction.
Figure 3: Species \(x_{1}(t)\) is persistent in the mean, while species \(x_{i}(t)\) (\(i=2,3\)) go to extinction
Case 4. Taking parameters \(a_{11}=1\), \(a_{22}=0.5\), \(a_{33}=0.25\), \(a_{12}=1\), \(a_{21}=0.78667\), \(a_{23}=1\), \(a_{32}=1\), \(\sigma_{1}=1.0\), \(\sigma_{2}=0.5\), and \(\sigma_{3}=0.3\), we have \(\Delta_{22}=0\). Hence, the conditions of conclusion (4) in Theorem 1 are satisfied. The numerical simulations given in Fig. 4 illustrate that species \(x_{1}(t)\) is persistent in the mean, species \(x_{2}(t)\) is extinct in the mean, and species \(x_{3}(t)\) is extinct.
Figure 4: Species \(x_{1}(t)\) is persistent in the mean, species \(x_{2}(t)\) is extinct in the mean, and \(x_{3}(t)\) is extinct
Case 5. Taking parameters \(a_{11}=1\), \(a_{22}=0.5\), \(a_{33}=2.5\), \(a_{12}=1\), \(a_{21}=2\), \(a_{23}=1\), \(a_{32}=1\), \(\sigma_{1}=0.5\), \(\sigma_{2}=0.3\), and \(\sigma_{3}=1.5\), we have \(\Delta_{22}=2.505>0\) and \(\Delta_{33}=-1.5575<0\). Hence, the conditions of conclusion (5) in Theorem 1 are satisfied. The numerical simulations given in Fig. 5 illustrate that species \(x_{1}(t)\) and \(x_{2}(t)\) are persistent in the mean, while species \(x_{3}(t)\) goes to extinction.
Figure 5: Species \(x_{1}(t)\) and \(x_{2}(t)\) are persistent in the mean, while species \(x_{3}(t)\) goes to extinction
Case 6. Taking parameters \(a_{11}=1\), \(a_{22}=0.5\), \(a_{33}=2.5\), \(a_{12}=1\), \(a_{21}=2\), \(a_{23}=1\), \(a_{32}=1\), \(\sigma_{1}=0.5\), \(\sigma_{2}=0.3\), and \(\sigma_{3}=\sqrt{1.004}\), we have \(\Delta_{33}=0\). Hence, the conditions of conclusion (6) in Theorem 1 are satisfied. The numerical simulations given in Fig. 6 illustrate that species \(x_{1}(t)\) and \(x_{2}(t)\) are persistent in the mean, while species \(x_{3}(t)\) is extinct in the mean.
Figure 6: Species \(x_{1}(t)\) and \(x_{2}(t)\) are persistent in the mean, while species \(x_{3}(t)\) is extinct in the mean
Case 7. Taking parameters \(a_{11}=1\), \(a_{22}=0.5\), \(a_{33}=1\), \(a_{12}=1\), \(a_{21}=2\), \(a_{23}=1\), \(a_{32}=2\), \(\sigma_{1}=0.1\), \(\sigma_{2}=0.2\), and \(\sigma_{3}=0.9\), we have \(\Delta_{33}=0.2425>0\). Hence, the conditions of conclusion (7) in Theorem 1 are satisfied. The numerical simulations given in Fig. 7 illustrate that all species \(x_{i}(t)\) (\(i=1,2,3\)) are persistent in the mean.
Figure 7: Species \(x_{i}(t)\) (\(i=1,2,3\)) are persistent in the mean
Finally, in model (1) we take parameters \(r_{1}=1\), \(r_{2}=0.3\), \(r_{3}=0.1\), \(m_{1}=1\), \(m_{2}=0.3\), \(m_{3}=0.1\), \(a_{11}=0.4\), \(a_{12}=0.1\), \(a_{22}=0.5\), \(a_{21}=0.75\), \(a_{23}=0.1\), \(a_{32}=0.45\), \(a_{33}=0.6\), \(\sigma_{1}=0.2\), \(\sigma_{2}=0.1\), and \(\sigma_{3}=\sqrt{0.012}\). We have \(m_{1}a_{11}-m_{2}a_{21}=0.175>0\), \(m_{2}a_{22}-m_{1}a_{12}-m_{3}a_{32}=0.005>0\), and \(m_{3}a_{33}-m_{2}a_{23}=0.03>0\). Calculating \(h_{i}^{*}\) (\(i=1,2,3\)) in Theorem 4, we have \(h_{1}^{*}=0.0023>0\), \(h_{2}^{*}=0.1414>0\), and \(h_{3}^{*}=0.0958>0\). Furthermore, we also have \(4a_{11}a_{22}a_{33}-a_{33}(a_{12}-a_{21})^{2}-a_{11}(a_{23}-a_{32})^{2}=0.1775>0\), \(\Delta_{33}|_{h_{1}=h_{1}^{*},h_{2}=h_{2}^{*},h_{3}=h_{3}^{*}}=0.4769>0\), and \(H_{3}=0.183\). Hence, all conditions of conclusion \((\mathcal{A}_{1})\) in Theorem 4 are satisfied. Therefore, there is an optimal harvesting strategy \(H^{*}=(0.0023,0.1414,0.0958)^{T}\), and the maximum of the expectation of sustainable yield (MESY) is \(\frac{Y^{*}(H^{*})}{H_{3}}=0.3464\). The numerical simulations are given in Fig. 8; species \(x_{i}(t)\) (\(i=1,2,3\)) are also persistent in the mean.
Figure 8: There is an optimal harvesting strategy \(H^{*}=(h_{1}^{*},h_{2}^{*},h_{3}^{*})^{T}=(0.0023,0.1414,0.0958)^{T}\), and the maximum of the expectation of sustainable yield (MESY) is \(\frac{Y^{*}(H^{*})}{H_{3}}=0.3464\)
6 Conclusions

Ecological and mathematical studies have shown that three-species food-chain models are more advantageous than two-species models (Pimm [27], Hastings and Powell [28]). Considering the influence of distributed delays and environmental noise, in this paper we analyzed a stochastic three-species food-chain model with harvesting. By using stochastic integral inequalities, the Lyapunov function method, and inequality estimation techniques, some criteria on the existence of global positive solutions, stochastic boundedness, extinction, global asymptotic stability in the mean and in probability distribution, and the effect of harvesting were established. Our results show some meaningful facts. Theorem 1 gives sufficient and necessary conditions for the extinction and global asymptotic stability in the mean with probability one; it also reveals the effects of harvesting on the extinction and permanence in the mean of the prey, the middle predator, and the top predator. Theorem 2 and Theorem 3 guarantee the global attractivity in expectation and the global asymptotic stability in distribution, respectively. Theorem 4 reveals that the existence of the optimal harvesting strategy and the MESY are affected by environmental fluctuations.
There are still some problems waiting for further investigation. Firstly, it is meaningful to study more complex systems, for example, stochastic systems with Lévy jumps (see, e.g., [22, 29]), Markovian switching (see, e.g., [30]), nonlinear functional responses (see, e.g., [31]), and general stochastic multi-species food-chain systems. Furthermore, the optimal harvesting problem for other stochastic population systems with distributed delays, for instance, competitive systems and cooperative systems, has rarely been investigated so far. We leave these problems for future work.

Acknowledgements
We would like to thank the anonymous referees for their helpful comments and the editor for his constructive suggestions, which greatly improved the presentation of this paper.

Availability of data and materials
Data sharing is not applicable to this article as no data sets were generated or analysed during the current study.

Funding
This research is supported by the Natural Science Foundation of China (Grant No. 11771373) and the Natural Science Foundation of Xinjiang Province of China (Grant No. 2016D03022).

Authors' contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.

Author details
College of Mathematics and System Sciences, Xinjiang University, Urumqi, People's Republic of China

References
[1] Elton, C.S.: Animal Ecology. Macmillan Co., New York (1927)
[2] Chiu, C.H., Hsu, S.B.: Extinction of top predator in a three level food-chain model. J. Math. Biol. 37, 372–380 (1998)
[3] Freedman, H., Waltman, P.: Mathematical analysis of some three-species food-chain models. Math. Biosci. 33, 257–276 (1977)
[4] Freedman, H., Waltman, P.: Persistence in models of three interacting predator-prey populations. Math. Biosci. 68, 213–231 (1984)
[5] Hutson, V., Law, R.: Permanent coexistence in general models of three interacting species. J. Math. Biol.
21, 285–298 (1985)
6. Bao, J., Yuan, C.: Stochastic population dynamics driven by Lévy noise. J. Math. Anal. Appl. 391, 363–375 (2012)
7. Braumann, C.A.: Itô versus Stratonovich calculus in random population growth. Math. Biosci. 206, 81–107 (2007)
8. Gard, T.C.: Persistence in stochastic food web models. Bull. Math. Biol. 46, 357–370 (1984)
9. Liu, M.: Optimal harvesting policy of a stochastic predator-prey model with delay. Appl. Math. Lett. 48, 102–108 (2015)
10. Jiang, D., Shi, N.: A note on non-autonomous logistic equation with random perturbation. J. Math. Anal. Appl. 303, 164–172 (2005)
11. Li, Z., Mao, X.: Population dynamical behavior of non-autonomous Lotka–Volterra competitive system with random perturbation. Discrete Contin. Dyn. Syst., Ser. B 24, 523–545 (2009)
12. Zhu, C., Yin, G.: On competitive Lotka–Volterra model in random environments. J. Math. Anal. Appl. 357, 154–170 (2009)
13. Liu, M., Bai, C.: Optimal harvesting policy of a stochastic food-chain model with harvesting. Appl. Math. Comput. 245, 265–270 (2014)
14. Wei, F., Wang, K.: The existence and uniqueness of the solution for stochastic functional differential equations with infinite delay. J. Math. Anal. Appl. 331, 516–531 (2007)
15. Wang, S., Wang, L., Wei, T.: Optimal harvesting for a stochastic predator-prey model with S-type distributed time delays. Methodol. Comput. Appl. Probab. 20, 27–68 (2018)
16. Liu, M., Bai, C.: Analysis of a stochastic tri-trophic food-chain model with harvesting. J. Math. Biol. 73, 597–625 (2016)
17. Li, W., Wang, L.: Stability and bifurcation of a delayed three-level food chain model with Beddington–DeAngelis functional response. Nonlinear Anal., Real World Appl. 10, 2471–2477 (2009)
18. Xu, C., Zhang, Q.: Bifurcation analysis in a predator-prey model with discrete and distributed time delay. Int. J. Appl. Math. Mech. 8(1), 50–65 (2012)
19. Ma, Z., Huo, H., Liu, C.: Stability and Hopf bifurcation on a predator-prey model with discrete and distributed delays. Nonlinear Anal., Real World Appl. 10, 1160–1172 (2009)
20. Mao, X.: Exponential Stability of Stochastic Differential Equations. Dekker, New York (1994)
21. Muhammadhaji, A., Teng, Z., Rehim, M.: On a two species stochastic Lotka–Volterra competition system. J. Dyn. Control Syst. 21, 495–511 (2015)
22. Liu, M., Wang, K.: Stochastic Lotka–Volterra systems with Lévy noise. J. Math. Anal. Appl. 410, 750–763 (2014)
23. Bao, J., Yuan, C.: Comparison theorem for stochastic differential delay equations with jumps. Acta Appl. Math. 116, 119–132 (2011)
24. Mao, X.: Stochastic Differential Equations and Applications. Horwood, Chichester (2007)
25. Barbalat, I.: Systèmes d'équations différentielles d'oscillations non linéaires. Rev. Roum. Math. Pures Appl. 4, 267–270 (1959)
26. Da Prato, G., Zabczyk, J.: Ergodicity for Infinite Dimensional Systems. Cambridge University Press, Cambridge (1996)
27. Pimm, S.L.: Food Webs.
Chapman & Hall, New York (1982)
28. Hastings, A., Powell, T.: Chaos in a three-species food chain. Ecology 72, 896–903 (1991)
29. Zeng, T., Teng, Z.: Stability in the mean of a stochastic three species food chain model with general Lévy jumps. Chaos Solitons Fractals 108, 258–265 (2018)
30. Ge, Y., Xu, Y.: Optimal harvesting policies for a stochastic food-chain system with Markovian switching. Math. Probl. Eng. 2015, 875159 (2015)
31. Liu, M.: Dynamics of a stochastic regime-switching predator-prey model with modified Leslie–Gower Holling-type II schemes and prey harvesting. Nonlinear Dyn. (2019). https://doi.org/10.1007/s11071-019-04797-x
Dividend Payout Ratio
By Adam Hayes; reviewed by Margaret James

What Is a Dividend Payout Ratio?
The dividend payout ratio is the ratio of the total amount of dividends paid out to shareholders relative to the net income of the company. It is the percentage of earnings paid to shareholders in dividends. The amount that is not paid to shareholders is retained by the company to pay off debt or to reinvest in core operations. It is sometimes simply referred to as the 'payout ratio.'

The dividend payout ratio provides an indication of how much money a company is returning to shareholders versus how much it is keeping on hand to reinvest in growth, pay off debt, or add to cash reserves (retained earnings). The dividend payout ratio is the proportion of earnings paid out as dividends to shareholders, typically expressed as a percentage. Some companies pay out all their earnings to shareholders, while some only pay out a portion of their earnings. If a company pays out some of its earnings as dividends, the remaining portion is retained by the business. To measure the level of earnings retained, the retention ratio is calculated.

Formula and Calculation of Dividend Payout Ratio
The dividend payout ratio can be calculated as the yearly dividend per share divided by the earnings per share, or equivalently, as the dividends divided by net income:

$$\text{Dividend Payout Ratio} = \frac{\text{Dividends Paid}}{\text{Net Income}}$$

Alternatively, the dividend payout ratio can also be calculated as:

$$\text{Dividend Payout Ratio} = 1 - \text{Retention Ratio}$$
On a per-share basis, the retention ratio can be expressed as:

$$\text{Retention Ratio} = \frac{\text{EPS} - \text{DPS}}{\text{EPS}}$$

where EPS is earnings per share and DPS is dividends per share.

You can also calculate a payout ratio using Microsoft Excel. First, if you are given the sum of the dividends over a certain period and the outstanding shares, you can calculate the dividends per share (DPS). Suppose you are invested in a company that paid a total of $5 million last year and it has 5 million shares outstanding. In Microsoft Excel, enter "Dividends per Share" into cell A1. Next, enter "=5000000/5000000" in cell B1; the dividend per share in this company is $1 per share. Then, you need to calculate the earnings per share (EPS) if it is not given. Enter "Earnings per Share" into cell A2. Suppose the company had a net income of $50 million last year. The formula for earnings per share is (net income - dividends on preferred stock) ÷ (shares outstanding). Enter "=(50000000 - 5000000)/5000000" into cell B2. The EPS for this company is $9. Finally, calculate the payout ratio: enter "Payout Ratio" into cell A3, then enter "=B1/B2" into cell B3; the payout ratio is 11.11%. (A Python equivalent of these steps appears below.)

Investors use the ratio to gauge whether dividends are appropriate and sustainable. The payout ratio depends on the sector; for example, startup companies may have a low payout ratio because they are more focused on reinvesting their income to grow the business.

What the Dividend Payout Ratio Tells You
Several considerations go into interpreting the dividend payout ratio, most importantly the company's level of maturity. A new, growth-oriented company that aims to expand, develop new products, and move into new markets would be expected to reinvest most or all of its earnings and could be forgiven for having a low or even zero payout ratio. The payout ratio is 0% for companies that do not pay dividends and is 100% for companies that pay out their entire net income as dividends.

On the other hand, an older, established company that returns a pittance to shareholders would test investors' patience and could tempt activists to intervene. In 2012, nearly twenty years after its last paid dividend, Apple (AAPL) began to pay a dividend when the new CEO felt the company's enormous cash flow made a 0% payout ratio difficult to justify. Since it implies that a company has moved past its initial growth stage, a high payout ratio means share prices are unlikely to appreciate rapidly.

The payout ratio is also useful for assessing a dividend's sustainability. Companies are extremely reluctant to cut dividends, since doing so can drive the stock price down and reflect poorly on management's abilities. If a company's payout ratio is over 100%, it is returning more money to shareholders than it is earning and will probably be forced to lower the dividend or stop paying it altogether. That result is not inevitable, however. A company can endure a bad year without suspending payouts, and it is often in its interest to do so. It is therefore important to consider future earnings expectations and calculate a forward-looking payout ratio to contextualize the backward-looking one. Long-term trends in the payout ratio also matter.
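As referenced above, here is a minimal Python sketch of the same spreadsheet arithmetic. The figures are the hypothetical ones from the Excel walkthrough, not data from any real company.

# Mirrors the Excel walkthrough (all figures hypothetical).
total_dividends = 5_000_000      # dividends paid last year
shares_outstanding = 5_000_000
net_income = 50_000_000

dps = total_dividends / shares_outstanding                 # cell B1: $1.00 per share
# The walkthrough subtracts the dividend figure as preferred dividends in the EPS step.
eps = (net_income - total_dividends) / shares_outstanding  # cell B2: $9.00 per share
payout_ratio = dps / eps                                   # cell B3
print(f"DPS=${dps:.2f}  EPS=${eps:.2f}  payout ratio={payout_ratio:.2%}")  # 11.11%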
A steadily rising ratio could indicate a healthy, maturing business, but a spiking one could mean the dividend is heading into unsustainable territory. The retention ratio is the converse of the dividend payout ratio: the dividend payout ratio evaluates the percentage of profits earned that a company pays out to its shareholders, while the retention ratio represents the percentage of profits earned that are retained by or reinvested in the company.

Dividends Are Industry Specific
Dividend payouts vary widely by industry, and like most ratios, they are most useful to compare within a given industry. Real estate investment trusts (REITs), for example, are legally obligated to distribute at least 90% of earnings to shareholders, as they enjoy special tax exemptions. Master limited partnerships (MLPs) tend to have high payout ratios as well.

Dividends are not the only way companies can return value to shareholders; therefore, the payout ratio does not always provide a complete picture. The augmented payout ratio incorporates share buybacks into the metric; it is calculated by dividing the sum of dividends and buybacks by net income for the same period. If the result is too high, it can indicate an emphasis on short-term boosts to share prices at the expense of reinvestment and long-term growth. Another adjustment that can be made to provide a more accurate picture is to subtract preferred stock dividends for companies that issue preferred shares.

Dividend Payout Ratio Example
Companies that make a profit at the end of a fiscal period can do a number of things with the profit they earned. They can pay it to shareholders as dividends, they can retain it to reinvest in the business for growth, or they can do both. The portion of the profit that a company chooses to pay out to its shareholders can be measured with the payout ratio. For example, on November 29, 2017, The Walt Disney Company declared a $0.84 semi-annual cash dividend per share to shareholders of record December 11, 2017, to be paid January 11, 2018. As of the fiscal year ended September 30, 2017, the company's EPS was $5.73. Its payout ratio is, therefore, ($0.84 / $5.73) = 0.1466, or 14.66%. Disney will pay out 14.66% and retain 85.34%.

Dividend Payout Ratio vs. Dividend Yield
When comparing the two measures of dividends, it's important to know that the dividend yield tells you what the simple rate of return is in the form of cash dividends to shareholders, while the dividend payout ratio represents how much of a company's net earnings are paid out as dividends. While the dividend yield is the more commonly known and scrutinized term, many believe the dividend payout ratio is a better indicator of a company's ability to distribute dividends consistently in the future. The dividend payout ratio is highly connected to a company's cash flow.

The dividend yield shows how much a company has paid out in dividends over the course of a year in relation to the stock price. The yield is presented as a percentage, not as an actual dollar amount. This makes it easier to see how much return per dollar invested the shareholder receives through dividends. The yield is calculated as:

$$\text{Dividend Yield} = \frac{\text{Annual Dividends per Share}}{\text{Price per Share}}$$

For example, a company that paid out $10 in annual dividends per share on a stock trading at $100 per share has a dividend yield of 10%.
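The yield calculation, along with the price sensitivity discussed next, can be illustrated in a few lines of Python (hypothetical numbers from the example above):

def dividend_yield(annual_dividends_per_share, price_per_share):
    """Dividend yield = annual dividends per share / share price."""
    return annual_dividends_per_share / price_per_share

print(f"{dividend_yield(10.0, 100.0):.0%}")  # the $10-on-$100 example: 10%
print(f"{dividend_yield(10.0, 125.0):.0%}")  # same dividend, higher price: 8%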
You can also see that an increase in share price reduces the dividend yield percentage, and vice versa for a decline in price.

Article sources:
1. Apple. "Dividend History." Accessed August 13, 2020.
2. Apple. "Apple Announces Plans to Initiate Dividend and Share Repurchase Program." Accessed August 13, 2020.
3. U.S. Securities and Exchange Commission. "Investor Bulletin: Real Estate Investment Trusts (REITs)," Page 1. Accessed August 13, 2020.
4. The Walt Disney Company. "The Walt Disney Company Announces Semi-Annual Cash Dividend of $0.84 Per Share." Accessed August 13, 2020.
5. Macrotrends. "Disney EPS - Earnings per Share 2006-2020 | DIS." Accessed August 13, 2020.
Derivatives of piecewise functions of functions

Derivatives of piecewise functions in Mathematica are computed according to special rules. According to the Piecewise documentation (see Possible Issues), "Derivatives are computed piece-by-piece, unless the function is univariate in a real variable." This distinction is important for piecewise functions that have a pointwise definition (such as a discontinuity). In this case, if Mathematica can determine that the variable is real, it finds the derivative at the discontinuity by looking at the derivatives on each side, rather than at the function definition at the discontinuity. This is somewhat discussed here.

In general, this behavior is intelligent and useful. Consider the function

$$ f(b, p) = \sqrt{\frac{b - 1}{e^{(b - 1)p} - 1}} $$

which is, in fact, continuous, but evaluates to Indeterminate at $b = 1$. Its derivatives with respect to the first argument will have the same problem. Now, if we define $f$ via Piecewise, we can put the appropriate limit in by hand.

f[b_, p_] := Piecewise[
  {{Sqrt[(b - 1)/(Exp[(b - 1) p] - 1)], b < 1 || b > 1},
   {Sqrt[1/p], b == 1}}]

To get derivatives, we can differentiate as normal:

D[f[b, p], b] // Simplify

$$ \begin{cases} -\frac{1}{4 \sqrt{\frac{1}{p}}} & b = 1 \\[6pt] \frac{e^{(b-1) p} (1 + p - b p) - 1}{2 \sqrt{\frac{b-1}{e^{(b-1) p} - 1}} \left(e^{(b-1) p}- 1 \right)^2} & \text{True} \end{cases} $$

Even though the function value at $b = 1$ does not depend on $b$, the derivative is correctly found because Mathematica looks at the surrounding region. Multiple derivatives are fine, too. However, if the variable defining the piecewise regions is a function of the variable of differentiation, the differentiation always proceeds piece-by-piece.

D[f[g[b], p], b] // Simplify

$$ \begin{cases} 0 & g(b) = 1 \\[6pt] \frac{e^{(g(b)-1) p} (1 + p - g(b) p) - 1}{2 \sqrt{\frac{g(b)-1}{e^{(g(b)-1) p} - 1}} \left(e^{(g(b)-1) p}- 1 \right)^2} g^\prime (b) & \text{True} \end{cases} $$

The derivative at the discontinuity vanishes. Note that D[f[g, p], b, NonConstants -> {g}] and Dt[f[g, p], b, Constants -> {b}] also exhibit this behavior. Neither does it matter that g is unspecified (g = Sin will do the same thing). It appears that this occurs because, with a compound variable, Piecewise no longer knows that the variable is real (I'm not sure why), so the derivative is computed piece-by-piece, and the derivative of the constant (in $b$) piece at $b = 1$ goes to zero. One can check this by adding some $b$-dependence and verifying that it behaves as expected for piece-by-piece differentiation:

h[b_, p_] := Piecewise[
  {{Sqrt[(b - 1)/(Exp[(b - 1) p] - 1)], b < 1 || b > 1},
   {b Sqrt[1/p], b == 1}}]

D[h[g[b], p], b] // Simplify

$$ \begin{cases} \sqrt{\frac{1}{p}} \, g^\prime (b) & g(b) = 1 \\[6pt] \frac{e^{(g(b)-1) p} (1 + p - g(b) p) - 1}{2 \sqrt{\frac{g(b)-1}{e^{(g(b)-1) p} - 1}} \left(e^{(g(b)-1) p}- 1 \right)^2} g^\prime (b) & \text{True} \end{cases} $$

Of course, this is only an issue for piecewise functions that have a pointwise definition that doesn't carry the full variable dependence necessary to compute derivatives. Is this behavior intended? How do I work around it? Ideally, one should be able to take as many derivatives of $f(g(b), p)$ with respect to $b$ (providing $g$ is well-behaved) as are possible of $f(b, p)$. For the purposes of this question, treat $g$ as an unspecified but well-behaved function -- our lack of knowledge about its behavior should be parametrized by the appearance of $g^\prime(b)$ and higher derivatives in the output.
Essentially, Mathematica is unable to work out the chain rule for piecewise functions with a pointwise definition, since it differentiates them piece-by-piece, and I want to know the best way to deal with this. I ran into this problem because I need to be able to take derivatives of a piecewise function of a dependent variable with respect to an independent variable, so that I can use equations containing that function in NDSolve. Practically, it is easy enough to get one or two correct derivatives by defining the function value at the tricky point as some number of terms in its series expansion to give it explicit variable-dependence, but I am interested in a more general solution. Any suggestions are welcome.

Tags: calculus-and-analysis, piecewise — asked by Virgil

Comments:
– "How do you know if a general function of g[b] is differentiable at all? I mean, when we do not know the definition we can't guarantee the derivative exists at that point for the point-wise function. If it exists, and is equal from left and right, I guess we can define a function to use the definition of a derivative using limits, or do a numerical differentiation." – MathX, Apr 2 '16 at 3:27
– "@MathX While you wrote this, I answered with essentially the same general thought..." – Jens, Apr 2 '16 at 3:28
– "@Jens yeah I was editing when I saw an answer was posted haha you beat me to it. This question actually made me curious to see if Mathematica has a built-in Frechet derivative function which apparently it doesn't." – MathX, Apr 2 '16 at 3:33
– "@MathX In this case, I know that the function g[b] is differentiable." – Virgil, Apr 2 '16 at 4:01
– "@MathX also, take a look at DifferenceQuotient." – Virgil, Apr 2 '16 at 4:03

Answer (Jens):
Obviously you can get derivatives only if the function actually has a derivative, and this is true for the example in the question. Your function is even continuous at the point in question. This means the singularity is removable, and therefore we should not have to use Piecewise for that single point at all. Here is how you can avoid its use when you suspect that the derivative exists everywhere:

f[x_, p_] := Module[{b}, Limit[Sqrt[(b - 1)/(Exp[(b - 1) p] - 1)], b -> x]]

D[f[b, p], b]
==> (-1 + E^((-1 + b) p) (1 + p - b p))/(2 Sqrt[(-1 + b)/(-1 + E^((-1 + b) p))] (-1 + E^((-1 + b) p))^2)

D[f[g[b], p], b]
==> -(((1 - E^(p (-1 + g[b])) (1 + p) + E^(p (-1 + g[b])) p g[b]) Derivative[1][g][b])/(2 (-1 + E^(p (-1 + g[b])))^2 Sqrt[(-1 + g[b])/(-1 + E^(p (-1 + g[b])))]))

The trick is simply to evaluate the function as a Limit. With that, the result contains no case distinctions. Having the derivatives as above, you may then of course encounter the same removable singularity and therefore would need to evaluate these results using Limit as well. I.e., to do the substitution mentioned in the comment, you would have to do

Limit[D[f[g[b], p], b], g[b] -> 1]
(* ==> -(Derivative[1][g][b]/(4 Sqrt[1/p])) *)

The comments made it clear that a formulation in terms of Piecewise is indeed preferred. So here is an alternative approach that relies on the idea (mentioned in the comment) that f itself may only be needed for numerical arguments:

ClearAll[f];
fOrig[x_, p_] := Piecewise[
  {{Sqrt[(x - 1)/(Exp[(x - 1) p] - 1)], x < 1 || x > 1},
   {Sqrt[1/p], x == 1}}]
Derivative[1, 0][f][b_, p_] := Module[{x}, Simplify[D[fOrig[x, p], x]] /. x -> b]
f[b_?NumericQ, p_] := fOrig[b, p]

D[f[b, p], b]

$$\begin{cases} -\frac{1}{4 \sqrt{\frac{1}{p}}} & b=1 \\ \frac{e^{(b-1) p} (-b p+p+1)-1}{2 \sqrt{\frac{b-1}{e^{(b-1) p}-1}} \left(e^{(b-1) p}-1\right)^2} & \text{True} \end{cases}$$

D[f[g[b], p], b]

$$g'(b) \begin{cases} -\frac{1}{4 \sqrt{\frac{1}{p}}} & g(b)=1 \\ \frac{e^{p (g(b)-1)} (-g(b) p+p+1)-1}{2 \left(-1+e^{p (g(b)-1)}\right)^2 \sqrt{\frac{g(b)-1}{-1+e^{p (g(b)-1)}}}} & \text{True} \\ \end{cases} $$

This gives the correct results because I defined the derivative directly to use the first differentiation that worked properly (without the inner function), and then manually added the chain rule. – Jens

Comments:
– "The difficulty is that I want the derivatives, which are also continuous, to give the appropriate limit at the point in question. However with this Limit-based definition, (D[f[g[b], p], b]) /. g[b] -> 1 returns Indeterminate." – Virgil, Apr 2 '16 at 3:59
– "@Virgil The substitution g[b] -> 1 isn't going to work anyway because it leaves g' untouched. But I think I understand what you mean. You want to keep g unspecified until the end, not give it a definition at the beginning, right? Then I guess the limit approach wouldn't help. I think it only works if the inner function is already defined so that the singular point in the outer function can actually be identified." – Jens, Apr 2 '16 at 4:45
– "@Virgil I added the "workaround" for the case you mentioned, showing the extent to which Limit can do what you ask. However, it's probably not as "automated" as you want. However, not knowing anything about g, I don't see a better alternative right now." – Jens, Apr 2 '16 at 4:57
– "Correct. In the end, g is going to be a dependent variable in NDSolve, so it needs to be unspecified. I also need the equations it occurs in (which include the piecewise function and derivatives thereof) to be well-defined (as in not Indeterminate) everywhere. I played around with Limit myself for a while, but abandoned it for Piecewise, since it cannot pass along the limit to derivatives." – Virgil, Apr 2 '16 at 5:06
– "I would like to automate it. Perhaps a new myLimit-type function that has HoldFirst, passes D inside, and evaluates only for numerical arguments would work." – Virgil, Apr 2 '16 at 5:10

Answer (Virgil):
I have accepted Jens' method, since it works with a minimum of inconvenience. However, it is more useful when generalized to multiple derivatives, so I will show what I did to accomplish that below. I also want to present two other work-arounds that I came up with. I do not like either as well as Jens', but they may be useful in cases where one does not want to use a numeric function.

Jens' method for general derivatives
To correctly obtain more complicated derivatives, we simply need to make a general Derivative definition for $f$ that is numeric in the appropriate argument:

fOrig[b_, p_] := Piecewise[
  {{Sqrt[(b - 1)/(Exp[(b - 1) p] - 1)], b < 1 || b > 1},
   {Sqrt[1/p], b == 1}}]
Derivative[n__][f][b_?NumericQ, p_] := Derivative[n][fOrig][b, p]

This is the approach I have decided to take in my own work.

A special derivative
If one wants to avoid numeric definitions as in the above, one option is to define a special derivative that uses Inactivate and Activate to hold particular functions inactive during the course of differentiation, restoring them after any chain and/or product rules have been worked out. The effect is the same as that achieved by making the argument numeric.
SetAttributes[myD, HoldFirst];
Options@myD = Options@D;
myD[expr_, patt_, x__, opts : OptionsPattern[]] :=
  Activate[D[Inactivate[expr, patt], x, opts], patt]

Here, the argument patt takes a pattern, and any parts of expr matching that pattern are inactivated before D is applied and reactivated after. It can be applied like so:

myD[fOrig[g[b], p], fOrig, b]

$$ g^\prime (b) \, \begin{cases} - \frac{1}{4\sqrt{\frac{1}{p}}} & g(b) = 1 \\[6pt] \frac{e^{(g(b)-1) p} (1 + p - g(b) p) - 1}{2 \sqrt{\frac{g(b)-1}{e^{(g(b)-1) p} - 1}} \left(e^{(g(b)-1) p}- 1 \right)^2} & \text{True} \end{cases} $$

Higher-order derivatives work as well. I am not as fond of this method, as it requires the use of a special new derivative, which I find cumbersome.

A special limit
One other thing one can do is to work with the piece-by-piece differentiation of Piecewise and modify the limiting value itself, so that when differentiated it gives the proper result. I came up with this:

seriesLimit[f_, {x_, x0_?NumericQ}] := SeriesCoefficient[f, {x, x0, 0}]
Derivative[0, {0, n_}][seriesLimit][f_, {x_, x0_}] := seriesLimit[D[f, {x, n}], {x, x0}]
Derivative[n_, {0, 0}][seriesLimit][f_, {x_, x0_}] := seriesLimit[1, {x, x0}]
Times[a___, seriesLimit[f_, {x_, x0_}], b___] ^:= seriesLimit[a f b, {x, x0}]

seriesLimit is a wrapper that holds its arguments until the argument x0 is numeric, at which point it takes the first term in the series expansion of $f$ around $x_0$, that is, $f(x_0)$. Under differentiation, it passes derivatives with respect to x0 inside to f as derivatives with respect to x, and collects terms generated by derivatives with respect to other arguments back inside. This behavior is probably not robust, but it is sufficient here.

D[seriesLimit[g[x, p], {x, b}], b]
(* seriesLimit[Derivative[1, 0][g][x, p], {x, b}] *)

D[seriesLimit[g[x, p], {x, h[b]}], b, b]
(* seriesLimit[h''[b] Derivative[1, 0][g][x, p], {x, h[b]}] + seriesLimit[h'[b]^2 Derivative[2, 0][g][x, p], {x, h[b]}] *)

seriesLimit[Sqrt[(x - 1)/(Exp[(x - 1) p] - 1)], {x, 1}]
(* Sqrt[1/p] *)

It can be used in Piecewise to alleviate the difficulty at hand:

fNew[b_, p_] := Piecewise[
  {{Sqrt[(b - 1)/(Exp[(b - 1) p] - 1)], b < 1 || b > 1},
   {seriesLimit[Sqrt[(x - 1)/(Exp[(x - 1) p] - 1)], {x, b}], b == 1}}]

D[fNew[g[b], p], b]

$$ \begin{cases} g^\prime (b) \, \frac{e^{(g(b)-1) p} (1 + p - g(b) p) - 1}{2 \sqrt{\frac{g(b)-1}{e^{(g(b)-1) p} - 1}} \left(e^{(g(b)-1) p}- 1 \right)^2} & g(b) > 1 \| \, g(b) < 1 \\[6pt] \text{seriesLimit} \Big[ g^\prime (b) \, \frac{e^{(x-1) p} (1 + p - x p) - 1}{2 \sqrt{\frac{x-1}{e^{(x-1) p} - 1}} \left(e^{(x-1) p}- 1 \right)^2}, \{x, g(b)\} \Big] & \text{True} \end{cases} $$

In the case that the piecewise variable is not a function (i.e. $g(b) \rightarrow b$), under differentiation the derivative at the discontinuity is calculated by the limiting procedure outlined in the question, and the seriesLimit drops out.
353 articles found

Preprint REVIEW | doi:10.20944/preprints201807.0606.v1
Mechanisms of Protein Search for Targets on DNA: Theoretical Insights
Alexey A. Shvets, Maria P. Kochugaeva, Anatoly B. Kolomeisky
Subject: Chemistry, General & Theoretical Chemistry
Keywords: protein-DNA interactions; facilitated diffusion; protein target search; discrete-state stochastic models
Protein-DNA interactions are critical for the successful functioning of all natural systems. The key role in these interactions is played by processes of protein search for specific sites on DNA. Although these processes have been studied for many years, only recently have their microscopic aspects become clearer. In this work, we present a review of the current theoretical understanding of the molecular mechanisms of the protein target search. A comprehensive discrete-state stochastic method to explain the dynamics of the protein search phenomena is introduced and explained. Our theoretical approach utilizes first-passage analysis and takes into account the most relevant physical-chemical processes. It is able to describe many fascinating features of the protein search, including unusually high effective association rates, high selectivity and specificity, and robustness in the presence of crowders and sequence heterogeneity.

Theoretical Analysis of Saline Diffusion during Sodium Chloride Aqueous Solutions Freezing for Desalination Purposes
Beatriz Castillo Téllez, Karim Allaf, Isaac Pilatowsky Figueroa, Rosenberg Javier Romero Domínguez, Roberto Best y Brown, Wilfrido Rivera Gomez Franco
Subject: Earth Sciences, Environmental Sciences
Keywords: freezing/melting desalination process; aqueous solutions of sodium chloride; theoretical diffusive models
Online: 4 January 2018 (07:08:51 CET)
Considering the high demand for fresh water and its scarce availability, water desalination is an interesting technology, producing about 44 Mm3/year worldwide; however, the most common desalination techniques are, in general, highly energy demanding. Freezing-melting (F/M) desalination uses up to 70% less thermal energy, yet it is the least used process, mainly due to the difficulty of the salt separation. This study proposes a model able to analyze the thermodynamic potential that drives salt diffusion during the F/M process, using an aqueous solution of sodium chloride. This should allow a sensitivity analysis of the process to promote the separation between the high-concentration brine and the ice, with liquid separation by a physical process. The one-dimensional model is based on the evolution of both thermal and mass diffusion, depending on temperature and saline gradients, predicting whether the salt will remain inside the ice or not. Thus, the thermal potential is adjusted so that freezing occurs only when the salt has been "pushed" towards the brine. Most models base their results on the assumption of a "certain value of saline concentration of the liquid fraction", a value on which there is great disagreement. In this paper, the calculations are based on the concentration at the solid-liquid interface, which has been extensively studied and on which results coincide; this is the main advantage of the proposed model.

Application of the Nucleation Theorem to Crystallization of Liquids: Some General Theoretical Results
Jürn W.P.
Schmelzer
Subject: Physical Sciences, Condensed Matter Physics
Keywords: general theory of phase transitions; nucleation; thermodynamics of nucleation in physical chemistry and chemical physics; theory and models of crystal growth; glasses
Online: 4 November 2019 (11:07:48 CET)
Different aspects of applying the nucleation theorem to the description of crystallization of liquids are analyzed. It is shown that, by employing the classical Gibbs approach in the thermodynamic description of heterogeneous systems, and assuming that the basic assumptions of classical nucleation theory commonly employed in application to crystallization hold, a general form of the nucleation theorem can be formulated that is valid not only for one-component but generally for multi-component systems. This result is then taken as the starting point for the derivation of particular forms of this theorem for the cases in which the deviation from equilibrium is caused by variations of either the composition of the liquid phase, temperature, or pressure. In this procedure, expressions recently developed by us for the curvature dependence of the surface tension, respectively for the dependence of the surface tension on pressure and/or temperature, are employed. It is shown that the formulation of the nucleation theorem as proposed by Kashchiev [J. Chem. Phys. 76, 5098-5102 (1982)] holds also for multi-component systems as far as the assumptions mentioned above are fulfilled. In the application of classical nucleation theory to crystallization processes, it is assumed as one of its basic ingredients that the bulk properties of the critical clusters are widely identical to the properties of the newly evolving crystal phase. In general, however, this assumption is not true. This limitation of the theoretical description can be overcome by applying the generalized Gibbs approach to the specification of the dependence of the properties of critical crystal clusters on the degree of metastability of the liquid phase. Applying this method, it is demonstrated that a similar formulation of the nucleation theorem as derived based on classical nucleation theory holds true also in cases when a dependence of the state parameters of the critical clusters on the degree of deviation from equilibrium is appropriately accounted for.

Effects of Climate Change on Grassland Biodiversity and Productivity: The Need for a Diversity of Models
Marcel van Oijen, Gianni Bellocchi, Mats Höglind
Subject: Biology, Agricultural Sciences & Agronomy
Keywords: data needs; empirical models; integrated models; process-based models; review
There is increasing evidence that the impact of climate change on the productivity of grasslands will at least partly depend on their biodiversity. A high level of biodiversity may confer stability to grassland ecosystems against environmental change, but there are also direct effects of biodiversity on the quantity and quality of grassland productivity. To explain the manifold interactions, and to predict future climatic responses, models may be used. However, models designed for studying the interaction between biodiversity and productivity tend to be structurally different from models for studying the effects of climatic impacts. Here we review the literature on the impacts of climate change on biodiversity and productivity of grasslands. We first discuss the availability of data for model development.
Then we analyse the strengths and weaknesses of three types of model: ecological, process-based, and integrated. We discuss the merits of this model diversity and the scope for merging different model types.

Strut-and-Tie Models of Masonry Shear Walls
Radosław Jasiński
Subject: Engineering, Civil Engineering
Keywords: masonry shear walls; ST models; equilibrium models
Online: 12 June 2018 (10:26:48 CEST)
This paper presents the theoretical fundamentals of strut-and-tie models used in unreinforced horizontal shear walls. Depending on the support conditions and wall loading, we can distinguish models with discrete bars, when a point load is applied to the wall (type I model), or with continuous bars (type II model), when the load is uniformly distributed at the wall boundary. The main part of this paper compares calculated results with the author's own tests on horizontal shear walls made of solid brick, silicate elements, and autoclaved aerated concrete. The tests were performed in Poland. The model required some modifications due to the specific load and static diagram.

Network Models for Cognitive Development and Intelligence
Han Van Der Maas, Kees-Jan Kan, Maarten Marsman, Claire E. Stevenson
Subject: Behavioral Sciences, Developmental Psychology
Keywords: intelligence; development of intelligence; cognitive development; network models; factor models; psychometrics; latent variable models
Online: 25 January 2017 (03:14:34 CET)
Cronbach's (1957) famous division of scientific psychology into two disciplines is still relevant for the fields of cognition (general mechanisms) and intelligence (dimensionality of individual differences). The welcome integration of the two fields requires the construction of mechanistic models of cognition and cognitive development that explain key phenomena in individual differences research. In this paper we argue that network modeling is a promising approach for integrating the processes of cognitive development and (developing) intelligence into one unified theory. Network models are defined mathematically, describe mechanisms at the level of the individual, and are able to explain positive correlations among intelligence subtest scores - the empirical basis for the well-known g-factor - as well as more complex factorial structures. Links between network modeling, factor modeling, and item response theory allow for a common metric, encompassing both discrete and continuous characteristics, for cognitive development and intelligence.

Working Paper REVIEW
About Model Validation in Bioprocessing
Vignesh Rajamanickam, Heiko Babel, Liliana Montano-Herrera, Alireza Ehsani, Fabian Stiefel, Stefan Haider, Beate Presser, Bettina Knapp
Keywords: bioprocess models; model validation; model calibration; Quality by Design; mechanistic and statistical models; hybrid models; chemometric models; biopharmaceutical engineering; regulatory guidance
In bioprocess engineering, the Quality by Design (QbD) initiative encourages the use of models to define design spaces. However, clear guides on how models for QbD are validated are still missing. In this review, we provide a comprehensive overview of validation methods, mathematical approaches, and metrics currently applied in bioprocess modeling. The methods cover analytics for data used for modeling, model training and selection, measures for predictiveness, and model uncertainties. We point out general issues in model validation and calibration for different types of models and put this into the context of existing health authority recommendations.
This review provides a starting point for developing guidance on model validation approaches. There is no one-size-fits-all approach, but this review should help identify the best-fitting validation method, or combination of methods, for the specific task and the type of bioprocess model being developed.

Current and Future Habitat Suitability Models for Four Ticks of Medical Concern in Illinois, USA
Heather Lynn Kopsco, Peg Gronemeyer, Nohra Mateus-Pinilla, Rebecca Lee Smith
Subject: Biology, Ecology
Keywords: ticks; species distribution models; habitat suitability models; Illinois; climate
The greater U.S. Midwest is on the leading edge of tick and tick-borne disease (TBD) expansion, and tick and TBD encroachment into Illinois is occurring from both the northern and the southern regions. To assess the historical and future habitat suitability of four ticks of medical concern within the state, we fit individual and mean-weighted ensemble species distribution models for Ixodes scapularis, Amblyomma americanum, Dermacentor variabilis, and a newly invading species, Amblyomma maculatum, using a variety of landscape and mean climate variables for the periods 1970-2000, 2041-2060, and 2061-2080. Ensemble models for the historical climate were consistent with known distributions of each species but predicted the habitat suitability of A. maculatum to be much greater throughout Illinois than what known distributions demonstrate. Proximity to wetlands and water bodies was important in predicting both I. scapularis and A. americanum presence. A. americanum occurrence was highly dependent on increasing forest cover, while A. maculatum habitat was more strongly predicted by open habitats. As the climate warmed, the expected distribution of all species became more strongly impacted by precipitation and temperature variables, particularly the mean temperature of the wettest quarter and the mean temperature of the driest quarter. By 2070, I. scapularis was expected to retract by as much as 60% from southern and central regions of the state compared to the historical climate distribution, but remained concentrated in the Chicago metropolitan area. A. americanum was predicted to initially expand across parts of east- and west-central Illinois by 2050, but then largely retract in distribution to along rivers and water bodies by 2070. The ranges of D. variabilis and A. maculatum, however, were predicted to contract in the 2050 climate scenario but then expand in the 2070 scenario. Predicting where ticks may invade and concentrate as the climate changes will be important for anticipating, preventing, and treating TBD in Illinois.

Working Paper ARTICLE
Mathematical Modelling of the Dynamics of Tumor Growth and its Optimal Control
Jannatun Irana Ira, Md. Shahidul Islam, Jagadis Chandra Misra, Md. Kamrujjaman
Keywords: mathematical models; tumor growth; chemotherapy; diffusion; optimal control
Online: 5 July 2020 (16:39:32 CEST)
In the last few decades, the dynamics of tumor cells and their growth have been studied via clinical, experimental, and theoretical approaches, leading to the development of new ideas for multiple cancer therapies aimed at controlling the disease and reducing the death rate through earlier detection. In this paper, we discuss the dynamics of tumor cell growth and its treatment process. We analyze some simple mathematical models and generalize the study to understand the growth of tumor cells.
The main proposed model is a system of ordinary differential equations which combines the interactions among natural killer cells, dendritic cells, and cytotoxic CD8+ T cells. The model is solved numerically to explain how the tumor cells spread and become more dangerous, as well as the treatment process of cancer. We also study how the cells behave in the presence of different therapies and drugs. The optimal control of chemotherapy is discussed, and we explain how effective the model is in reducing tumor cells over time. Finally, a couple of spatially distributed models are discussed for tumor cell growth.

Computational Models in Neurosciences Between Mechanistic and Phenomenological Characterizations
Christophe Gauld, Cédric Brun, Thomas Boraud, Mallory Carlu, Damien Depannemaecker
Subject: Life Sciences, Other
Keywords: Computing Methodologies; Computer Simulation; Models, Biological; Models, Theoretical; Theoretical neurosciences
Computational neuroscience combines mathematics, computer science models, and neurosciences for theorizing, investigating, and simulating neural systems involved in the development, structure, physiology, and cognitive abilities of the brain. Computational models constitute a major stake in translational neuroscience: an analytical understanding of these models seems fundamental for considering a translation towards clinical applications. Method: We propose a minimal typology of computational models, which allows distinguishing between more realistic models (e.g., mechanistic models) and pragmatic models (e.g., phenomenological models). Result: Understanding the translational aspects of computational models goes far beyond the intrinsic characteristics of the models. First, we assume that a computational model is rarely uniquely mechanistic or phenomenological. Idealization seems necessary because of (i) the researcher's perspectives on the phenomena and the purposes of the study (i.e., the relativity of the model), and (ii) the complexity of reality across different levels, and therefore the nature and number of dimensions required to consider a phenomenon. In particular, the use of models goes far beyond their function and requires considering external characteristics rooted in path dependence, interdisciplinarity, and pluralism in the neurosciences. Conclusion: The unreasonable use of computational models, which are highly complex and subject to a shift in their initial function, could be limited by bringing such factors to light.

Potential Distribution of Aedes (Ochlerotatus) scapularis (Diptera: Culicidae): A Vector Mosquito New to the Florida Peninsula
Lindsay P. Campbell, Nathan Burkett-Cadena, Evaristo Miqueli, Isik Unlu, Kristin Sloyer, Johana Medina, Chalmers Vasquez, William Petrie, Lawrence Reeves
Subject: Biology, Anatomy & Morphology
Keywords: invasive species; ecological niche models; species distribution models; vector surveillance
Aedes scapularis is a neotropical mosquito known to transmit pathogens of medical and veterinary importance. Its recent establishment in southeastern Florida has potential public health implications. We used an ecological niche modeling approach to predict the abiotic environmental suitability for Ae. scapularis across much of the Americas and Caribbean Islands. Georeferenced occurrence data obtained from the Global Biodiversity Information Facility and recent collection records of Ae. scapularis from southern Florida served as input for model calibration.
Environmental layers included bioclimatic variables provided in 2000 to 2010 average Modern Era Retrospective-analysis for Research and Applications climatic (MERRAclim) data. Models were run in the software program Maxent. Isothermality values, often found in coastal environments, contributed most strongly to model performance. Model projections suggested areas predicted suitable for Ae. scapularis across portions of the Amazon Basin, the Yucatán Peninsula, the Florida Peninsula, and multiple Caribbean Islands. Additionally, model predictions suggested connectivity of highly suitable or relatively suitable environments spanning the United States Gulf Coast, which may facilitate geographic expansion of this species. At least sixteen Florida counties were predicted highly suitable for Ae. scapularis, suggesting vigilance is needed by vector control and public health agencies to recognize further spread of this vector.

Data Integration in Logic-based Models of Biological Mechanisms
Benjamin Hall, Anna Niarakis
Subject: Life Sciences, Biochemistry
Keywords: logic-based models; Boolean models; executable models; qualitative dynamical modelling; omic data integration; in silico simulations; formal verification
Discrete, logic-based models are increasingly used to describe biological mechanisms. Initially introduced to study gene regulation, these models have evolved to cover various molecular mechanisms, such as signalling, transcription factor cooperativity, and even metabolic processes. The abstract nature of discrete models and their amenability to robust mathematical analyses make them appropriate for addressing a wide range of complex biological problems. Recent technological breakthroughs have generated a wealth of high-throughput data. Novel, literature-based representations of biological processes and emerging algorithms offer new opportunities for model construction. Here, we review up-to-date efforts to address challenging biological questions by incorporating omic data into logic-based models, and we discuss critical difficulties in constructing and analysing integrative, large-scale, logic-based models of biological mechanisms.

Knowledge Graph Embedding for Link Prediction Models
Ehtisham Ur Rehman, Aamir Saeed, Nasru Minallah, Abdul Hafeez
Subject: Mathematics & Computer Science, Analysis
Keywords: knowledge graphs; link prediction; semantic-based models; translation-based embedding models
For disciplines like the biological sciences, security, and the medical field, link prediction is a popular research area, and many methods have been proposed for it. Among those examined in this review paper are the TransE, ComplEx, DistMult, and DensE models, each of which approaches link prediction from a different perspective. We assess the practical performance potential of these methods under similar parameter values, using a fine-tuning technique to evaluate the reliability and reproducibility of their results. We describe the methods and experiments, and provide theoretical arguments and experimental examples demonstrating how current link prediction methods work in such settings. We use standard evaluation metrics for testing each model's ability.

Data Science in Economics: Comprehensive Review of Advanced Machine Learning and Deep Learning Methods
Saeed Nosratabadi, Amir Mosavi, Puhong Duan, Pedram Ghamisi, Filip Ferdinand, Shahab S. Band, Uwe Reuter, Joao Gama, Amir H.
Gandomi
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: data science; deep learning; ensemble machine learning models; economics; hybrid models
This paper provides the state of the art of data science in economics. Through a novel taxonomy of applications and methods, advances in data science are investigated in three individual classes of models: deep learning models, ensemble models, and hybrid models. Application domains include the stock market, marketing, e-commerce, corporate banking, and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal that the trend is toward the advancement of hybrid models, as more than 51% of the reviewed articles applied hybrid models. It is also found that, based on the RMSE accuracy metric, hybrid models had higher prediction accuracy than other algorithms, although the trend is expected to move toward further advancements in deep learning models.

Machine Learned Coarse Grain Water Models for Evaporation Studies
Sumith Yesudasan, Rodney Averett, Sibi Chacko
Subject: Engineering, Mechanical Engineering
Keywords: coarse grain models; water models; nanoscale evaporation; nano channels; molecular dynamics
Evaporation studies of water using classical molecular dynamics simulations are largely limited by their high computational expense. We aim to address the computational issues by developing a coarse grain model for evaporation of water on solid surfaces, combining four water molecules into a single bead. The most commonly used monoatomic pair potentials, such as Lennard-Jones, Morse, and Mie, and the three-body Stillinger-Weber potential are optimized using a combination of a genetic algorithm and the Nelder-Mead algorithm. Among them, the Stillinger-Weber-based model shows excellent agreement of density and enthalpy of vaporization with experimental results over a wide range of temperatures. Further, the new water model is used to simulate the contact angle of water and thin-film evaporation from surfaces with different wettabilities.

Modelling Residential Building Costs in New Zealand: Time Series Transfer Function Approach
Linlin Zhao, Jasper Mbachu, Zhansheng Liu, Huirong Zhang
Subject: Engineering, Civil Engineering
Keywords: transfer function models; ARIMA models; model selection; building cost index; New Zealand
Online: 12 March 2019 (03:14:14 CET)
Cost estimating based on a building cost index plays an important role in project planning and management by providing accurate cost information. Recently, tremendous advances in cost estimating have been made, but serious inaccuracies are still witnessed too frequently. This study aims to improve the estimating accuracy for residential building costs in New Zealand. In this study, the New Zealand house price index is incorporated into transfer function models to produce forecasts of building costs for one-storey houses, two-storey houses, and town houses in New Zealand. To demonstrate the effectiveness of the proposed models, this study compares the estimation results of the transfer function models with univariate ARIMA models. The results indicate that the proposed transfer function models can achieve better outcomes than ARIMA models by considering the causality between building costs and New Zealand house prices. During the modelling process, the better cost estimation approach can be identified, and the movements of building costs are shown.
Curating and Comparing 114 Strain-Specific Genome-Scale Metabolic Models of Staphylococcus aureus
Alina Renz, Andreas Dräger
Subject: Life Sciences, Biochemistry
Keywords: Staphylococcus aureus; MRSA; genome-scale metabolic models; model-driven discovery; strain-specific models
Online: 8 April 2021 (14:25:31 CEST)
Staphylococcus aureus is a high-priority pathogen causing severe infections with high morbidity and mortality worldwide. Many S. aureus strains are methicillin-resistant (MRSA) or even multi-drug resistant. It is one of the most successful and prominent modern pathogens. An effective fight against S. aureus infections requires novel targets for antimicrobial and antistaphylococcal therapies. Recent advances in whole-genome sequencing and high-throughput techniques facilitate the generation of genome-scale metabolic models (GEMs). Among the multiple applications of GEMs is drug targeting in pathogens. Hence, comprehensive and predictive metabolic reconstructions of S. aureus could facilitate the identification of novel targets for antimicrobial therapies. This review aims to give an overview of all available GEMs of multiple S. aureus strains. We downloaded all 114 available GEMs of S. aureus for further analysis. The scope of each model was evaluated, including the number of reactions, metabolites, and genes. Furthermore, all models were quality-controlled using Memote, an open-source application with standardized metabolic tests. Growth capabilities and model similarities were examined. This review should serve as a guide for choosing the appropriate GEM for a given research question. With the information about the availability, the format, and the strengths and potentials of each model, one can either choose an existing model or combine several models to create models with even higher predictive value. This facilitates model-driven discovery of novel antimicrobial targets to fight multi-drug resistant S. aureus strains.

Variational Bayesian Learning and Semiparametric Models on the Double Exponential Family
Hector Zarate-Solano, Edilberto Cepeda-Cuervo
Subject: Mathematics & Computer Science, Probability And Statistics
Keywords: variational Bayesian learning; semiparametric heteroscedastic models; calculus of variations; optimization; biparametric exponential models
In this paper, we focus on deterministic variational Bayesian learning optimization methods for inference in biparametric exponential models whose parameters follow semiparametric regression structures. This combination of data models and algorithms contributes to solving real-world problems and reduces computation time. It allows both the rapid exploration of many data models and the accurate estimation of the mean and variance functions through the connection between generalized linear models and graph theory. A simulation study was carried out to assess the performance of the deterministic approximation. Finally, we present an application using macroeconomic data to emphasize the benefits of the proposed approach.

The Generation of Particles by Quantum Loops
Hans Diel
Subject: Physical Sciences, Particle & Field Physics
Keywords: spacetime models; causal models; nonlinear dynamics; relativity theory; quantum field theory; quantum loops
Quantum loops are processes that constitute quantum objects. In the causal model of quantum loops and quantum objects presented here, the nonlinear processes involve the elementary units of spacetime and the associated elementary units of quantum fields. As such, quantum loop processes are the sources of gravitational fields (i.e., spacetime curvature) and of the quantum object's wave function. The model may be viewed as a derivative of loop quantum gravity, spin networks, and causal dynamical triangulation, although significant deviations from these theories exist. The causal model of quantum loops is based on a causal model of spacetime dynamics in which space(-time) consists of interconnected space points, each of which is connected to a small number of neighboring space points.
Variational Bayesian Learning and Semiparametric Models on the Double Exponential Family
Hector Zarate-Solano, Edilberto Cepeda-Cuervo
Subject: Mathematics & Computer Science, Probability And Statistics
Keywords: Variational learning Bayes; semiparametric heteroscedastic models; calculus of variations; optimization; biparametric exponential models
In this paper, we focus on deterministic variational Bayesian learning optimization methods for inference in biparametric exponential models in which the parameters follow semiparametric regression structures. This combination of data models and algorithms contributes to solving real-world problems and reduces computation time. It allows both the rapid exploration of many data models and the accurate estimation of the mean and variance functions through the connection between generalized linear models and graph theory. A simulation study was carried out to assess the performance of the deterministic approximation. Finally, we present an application using macroeconomic data to emphasize the benefits of the proposed approach.

The Generation of Particles by Quantum Loops
Hans Diel
Subject: Physical Sciences, Particle & Field Physics
Keywords: spacetime models; causal models; nonlinear dynamics; relativity theory; quantum field theory; quantum loops
Quantum loops are processes that constitute quantum objects. In the causal model of quantum loops and quantum objects presented here, the nonlinear processes involve the elementary units of spacetime and the associated elementary units of quantum fields. As such, quantum loop processes are the sources of gravitational fields (i.e., spacetime curvature) and of the quantum object's wave function. The model may be viewed as a derivative of loop quantum gravity, spin networks, and causal dynamical triangulation, although significant deviations from these theories exist. The causal model of quantum loops is based on a causal model of spacetime dynamics in which space(-time) consists of interconnected space points, each of which is connected to a small number of neighboring space points. The curvature of spacetime is expressed by the density of these space points and by the arrangement of the connections between them. The quantum loop emerges in a nonlinear collective behavioral process from a collection of space points that carry energy and quantum field attributes.

An Overview of Malaria Transmission Mechanisms, Control, and Modeling
Merveille Koissi Savi
Subject: Life Sciences, Other
Keywords: Complexity; policy-recommendation; mathematical models
Online: 5 December 2022 (11:33:35 CET)
In sub-Saharan Africa, malaria is a leading cause of mortality and morbidity. As a result of the interplay between many factors, the control of this disease can be challenging. However, few studies have addressed malaria's complexity, control, and modeling, although this perspective could lead to effective policy recommendations. This paper aims to be a didactic resource providing the reader with an overview of malaria. More importantly, using a system-approach lens, we intend to highlight the debated topics and the multifaceted thematic aspects of malaria transmission mechanisms, while showing the control approaches used as well as the models supporting the dynamics of malaria. As there is a large amount of information on each subject, we have attempted to provide a basic understanding of malaria that needs to be further developed. Nevertheless, this study illustrates the importance of using a multidisciplinary approach to designing next-generation malaria control policies.

A Systematic Review of the Effect of Stress in Animals During Adolescence, and Its Long-Term Consequences during Adulthood: Focus on Hippocampal Neurogenesis, Cognitive Function and Behavioural Outcomes
Alessandra Borsini, Juliette Giacobbe, Gargi Mandal, Maura Boldrini
Subject: Life Sciences, Other
Keywords: animal models; adolescence; hippocampal neurogenesis
Adolescence represents a critical period for the programming of future adult behaviours. Neurogenesis is particularly active during adolescence, with an increased number of granule cells and increased hippocampal volume in both animals and humans. Among the factors which can affect neurogenesis during adolescence, stress is considered a major one. Indeed, adolescence is known to be a particularly stressful period in life, with some adolescents suffering from mood disorders and anxiety. While there is increasing interest in the neurogenic changes occurring during the adolescent period, evidence is sparse. We conducted a systematic review summarising changes in hippocampal neurogenesis, neuroplasticity and hippocampal-dependent cognitive functions and behavioural outcomes in stress-induced adolescent animal models of depression, and investigating long-term stress effects on the same outcomes by assessing the same animals in adulthood.
Overall, the results show a significant reduction in hippocampal cell proliferation and a concomitant increase in depressive-like behaviours in adolescent animals exposed to stress challenges; however, a reduction in the number of surviving neurons was accompanied by no changes in either cognition or behaviour. Studies also observed altered neuroplasticity, including a stress-induced decrease in markers of pre- and post-synaptic plasticity, dendritic spine length and density, and long-term potentiation. These changes in neuroplasticity were accompanied by cognitive impairments and depressive-like behaviours. Overall, some of the negative effects observed during adolescence, especially on cell proliferation, neuroplasticity, cognition and behaviour, either persisted or worsened during adulthood. Interestingly, treatment during adolescence with antidepressants, glutamate receptor inhibitors, glucocorticoid antagonists, or a healthy diet consisting of omega-3 fatty acids and vitamin A was able to reverse or prevent these detrimental effects. Future research should aim to investigate the translational impact of these preclinical findings, developing novel tools for the measurement of hippocampal neurogenesis directly in depressed adolescents, and subsequently assessing neurogenic changes in response to stress as well as pharmacological and non-pharmacological interventions.

Climatic Extrapolations in Hydrology: The Expanded Bluecat Methodology
Demetris Koutsoyiannis, Alberto Montanari
Subject: Earth Sciences, Other
Keywords: Bluecat; climate models; stochastics; uncertainty
Online: 27 April 2022 (10:46:45 CEST)
Bluecat is a recently proposed methodology to upgrade a deterministic model (D-model) into a stochastic one (S-model), based on the hypothesis that the information contained in a time series of observations and the concurrent predictions by the D-model is sufficient to support this upgrade. Prominent characteristics of the methodology are its simplicity and transparency, which allow easy use in practical applications without sophisticated computational means. Here we utilize the Bluecat methodology and expand it so that it can be combined with climatic model outputs, which often require extrapolation beyond the range of values covered by observations. We apply the expanded methodology to the precipitation and temperature processes in a large area, namely the entire territory of Italy. The results showcase the appropriateness of the method for hydroclimatic studies, as regards the assessment of the performance of the climatic projections, as well as their stochastic conversion with simultaneous bias correction and uncertainty quantification.

Effect of Savings on a Gas-Like Model Economy with Credit and Debt
Guillermo Chacón-Acosta, Vanessa Ángeles-Sánchez
Subject: Physical Sciences, Acoustics
Keywords: econophysics; savings propensity; geometric models
In this work, we apply ensemble formalism to a geometric agent model to study the effect of saving propensity in a system with money, credit, and debt. We calculate the partition function to obtain the total money of the system, with which we give an interpretation of the economic temperature in terms of the different payment methods available to the agents. We observe an interplay between the fraction of money that agents can save and the debt that can be financed. We also observe that the system's entropy increases as the saved proportion increases, and increases even more when debt is present.
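For readers unfamiliar with gas-like exchange economies, here is a toy numerical illustration of saving propensity in the kinetic-exchange spirit; it is not the authors' analytical ensemble formalism and omits their credit and debt mechanism. The saving fraction `lam` and all other values are hypothetical.

```python
# A toy kinetic-exchange simulation with saving propensity; a numerical
# illustration only, omitting the paper's credit/debt mechanism.
import numpy as np

rng = np.random.default_rng(1)
n, steps, lam = 1000, 200_000, 0.5   # lam: fraction of money each agent saves
money = np.ones(n)                   # equal initial endowment

for _ in range(steps):
    i, j = rng.integers(n, size=2)
    if i == j:
        continue
    eps = rng.random()
    pool = (1 - lam) * (money[i] + money[j])  # only unsaved money is exchanged
    money[i], money[j] = (lam * money[i] + eps * pool,
                          lam * money[j] + (1 - eps) * pool)

print(money.mean(), money.var())  # total money is conserved; variance shrinks as lam grows
```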
The Purpose of Project Economic Models
Uyiosa Omoregie
Subject: Social Sciences, Economics
Keywords: Models; Projects; Modelling; Quality Assurance
Models come in different forms: visual, arithmetic, mental, physical. The most common type of model is arguably the mental model, which people use to view and interpret the world. A model can be described as a representation of a problem or a situation – a simplified representation. The process of building or developing a model is called modeling. A model, once developed by the modeller, can be 'owned' by a manager or decision maker. The ideal is to make the model an extension of the user's ability to think about and analyse problems or situations. When used properly – taking its limitations into consideration – an economic model for a project can provide insight for decision makers when they make the crucial decision to approve a project. An economic model for a liquefied natural gas (LNG) project is shown as an example.

Tools for BIM-GIS Integration (IFC Georeferencing and Conversions): Results from the GeoBIM Benchmark 2019
Francesca Noardo, Lars Harrie, Ken Arroyo Ohori, Filip Biljecki, Claire Ellul, Helen Eriksson, Dogus Guler, Dean Hintz, Mojgan A. Jadidi, Maria Pla, Santi Sanchez, Rudi Stouffs, Jernej Tekavec, Jantien Stoter
Subject: Engineering, Other
Keywords: georeferencing; conversions; interoperability; CityGML; Industry Foundation Classes; Building Information Models; 3D city models; standards
The integration of 3D city models with Building Information Models (BIM), abbreviated as GeoBIM, facilitates improved data support for several applications, e.g., 3D map updates, building permit issuing, detailed city analysis, infrastructure design, and context-based building design, to name a few. To achieve the integration, several issues need to be tackled and solved, i.e., harmonization of features, interoperability, format conversions, and integration of procedures. The GeoBIM benchmark 2019, funded by ISPRS and EuroSDR, evaluated the state of implementation of tools addressing some of those issues. In particular, in the part of the benchmark described in this paper, the application of georeferencing to Industry Foundation Classes (IFC) models and the making of consistent conversions between 3D city models and BIM are investigated, considering the OGC CityGML and buildingSMART IFC as reference standards. In the benchmark, sample datasets in the two reference standards were provided. External volunteers were asked to describe and test georeferencing procedures for IFC models and conversion tools between CityGML and IFC. From the analysis of the delivered answers and processed datasets, it was possible to observe that, while tools and procedures to support georeferencing and data conversion are available, a comprehensive definition of the requirements, clear rules for performing these two tasks, and solid technological solutions implementing them are still lacking. Those specific issues can be a sensible starting point for planning the next GeoBIM integration agendas.
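The georeferencing task tested in the benchmark boils down, at its core, to mapping local project coordinates into a national grid. A minimal sketch of that arithmetic follows; the base-point coordinates and rotation are hypothetical, and production workflows would instead read entities such as IfcSite or IfcMapConversion from the IFC file.

```python
# A minimal sketch of the core arithmetic behind georeferencing a BIM model:
# rotate local (project) coordinates into the map grid, then translate them by
# the surveyed base point. All values are hypothetical.
import numpy as np

def local_to_map(xy_local, base_easting, base_northing, rotation_deg):
    """Apply a 2D rotation followed by a translation to n x 2 local coordinates."""
    a = np.radians(rotation_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return xy_local @ R.T + np.array([base_easting, base_northing])

corners = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 6.0], [0.0, 6.0]])
print(local_to_map(corners, base_easting=85000.0, base_northing=446000.0,
                   rotation_deg=31.0))
```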
Not Just Numbers: Mathematical Modelling and its Contribution to Anaerobic Digestion Processes
Subject: Engineering, Other
Keywords: Mathematical modelling; Anaerobic digestion; Mechanistic models; Data-driven models; Mathematical analysis; Hybrid modelling; Thermodynamics
Mathematical modelling of bioprocesses has a long and notable history, with eminent contributions from fields including microbiology, ecology, biophysics, chemistry, statistics, control theory and mathematical theory. This richness of ideas and breadth of concepts provide great motivation for inquisitive engineers and intrepid scientists to try their hand at modelling, and this collaboration of disciplines has also delivered significant milestones in the quality and application of models for both theoretical and practical interrogation of engineered biological systems. The focus of this review is the anaerobic digestion process, which, as a technology that has come in and out of fashion, remains a fundamental process for addressing the global climate emergency. Whether with conventional anaerobic digestion systems, biorefineries, or other anaerobic technologies, mathematical models are important tools that are used to design, monitor, control and optimise the process. Both highly structured, mechanistic models and data-driven approaches have been used extensively over half a century, but recent advances in computational capacity, scientific understanding, and the diversity and quality of process data present an opportunity for the development of new modelling paradigms, the augmentation of existing methods, or even the incorporation of tools from other disciplines, to ensure that anaerobic digestion research can remain resilient and relevant in the face of emerging and future challenges.

Geoinformation Modeling of Flooded Areas in Settlements (in the Example of Lutsk)
Anna Shostak, Volodymyr Voloshyn, Oleksandr Melnyk, Pavlo Manko
Subject: Earth Sciences, Geoinformatics
Keywords: geoinformation modeling; settlement territory; approximation; digital terrain models; TIN-models; water level; flood process
Objective. Flooding in Ukraine is a common natural phenomenon that recurs periodically and in some cases becomes disastrous. In an average year, floods that extend beyond the limits of the floodplain occur one to three times on the rivers of the Volyn region. The floodplain of the Styr River lies in the historical center of Lutsk, which is why research into and forecasting of floods are very important for the city. Methodology. The use of modern technologies of geodesy and remote sensing allows the flooded area of settlements to be quickly determined and predicted. Based on statistical data on Styr River water levels from the Volyn Regional Center for Hydrometeorology for the seven-year period 2011-2017, we conducted mathematical modeling of water-level fluctuations within the territory of Lutsk by constructing a partial Fourier series for the discrete mean ten-day water-level values. The hydrological gauging post for Styr River water levels in Lutsk, located on Shevchenko Street, corresponds to an altitude of 172.87 meters. Based on short-term flood forecasts for February and March, and relief data from the Department of Architecture and Urban Development of the Volyn State Administration, we visualized the results using the QGIS geographic information system. Results. The results of the mathematical processing were the basis for geoinformation simulation of flooded areas using publicly available remote sensing data. The use of statistical and geospatial data in this article has great potential for further application in modeling processes of natural and technogenic origin. Scientific novelty. A mathematical model for short-term forecasting of water levels on the Styr River during the flood period, with implementation of geoinformation modeling of flooded areas using remote sensing data, is proposed. Practical significance. Research results on water-level changes on the Styr River and flood zones within the limits of Lutsk are presented. The spring flood of February-March 2018, with a maximum water level of 5.33 m, corresponds to an absolute elevation of 178.20 m, as forecast in this article.
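A minimal sketch of the Fourier-series step described above follows, using synthetic data in place of the 2011-2017 gauge records; the number of harmonics and the assumed 36 ten-day intervals per year are hypothetical choices.

```python
# A hedged sketch: least-squares fit of a partial (truncated) Fourier series to
# discrete mean ten-day water levels. The series below is synthetic.
import numpy as np

rng = np.random.default_rng(2)
levels = 173.0 + np.sin(np.linspace(0, 14 * np.pi, 252)) + rng.normal(0, 0.1, 252)
t = np.arange(levels.size)
period = 36           # assumed: 36 ten-day intervals per year
n_harmonics = 3       # assumed truncation order

# Design matrix for a0 + sum_k [a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T)].
cols = [np.ones_like(t, dtype=float)]
for k in range(1, n_harmonics + 1):
    cols += [np.cos(2 * np.pi * k * t / period), np.sin(2 * np.pi * k * t / period)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, levels, rcond=None)
fitted = A @ coef  # smooth seasonal curve usable for short-term forecasting
print(coef[:3])
```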
Emerging Quantum Fields Embedded in the Emergence of Spacetime
Subject: Physical Sciences, Particle & Field Physics
Keywords: spacetime models; discrete spacetime; relativity theory; causal models; quantum field theory; spin networks; quantum loops
Based on a local causal model of the dynamics of curved discrete spacetime, a causal model of quantum field theory in curved discrete spacetime is described. On the elementary level, space(-time) is assumed to consist of interconnected space points. Each space point is connected to a small discrete set of neighboring space points. The density distribution of the space points and the lengths of the space-point connections depend on the distance from the gravitational sources. This leads to curved spacetime in accordance with general relativity. The dynamics of spacetime (i.e., the emergence of space and the propagation of space changes) dynamically assigns "in-connections" and "out-connections" to the affected space points. The emergence and propagation of quantum fields (including particles) are mapped to the emergence and propagation of space changes by utilizing identical paths of in/out-connections. Compatibility with standard quantum field theory (QFT) requires the adjustment of QFT techniques (e.g., Feynman diagrams, Feynman rules, creation/annihilation operators), which typically apply to three in/out-connections, to n > 3 in/out-connections. In addition, QFT computation in position space has to be adapted to a curved discrete spacetime.
Generative Adversarial Networks: A Brief History and Overview
Akhil Gunasekaran
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics
Keywords: machine learning; deep learning; generative models
Over the past decade, research in the field of deep learning has brought about novel improvements in image generation and feature learning; one such example is the Generative Adversarial Network. However, these improvements have been coupled with increasing demands on mathematical literacy and prior knowledge of the field. Therefore, in this literature review, I seek to introduce Generative Adversarial Networks (GANs) to a broader audience by explaining their background and intuition at a more foundational level. I begin by discussing the mathematical background of this architecture, specifically topics in linear algebra and probability theory. I then proceed to introduce GANs in a more theoretical framework, along with some of the literature on GANs, including their architectural improvements and image-generation capabilities. Finally, I cover state-of-the-art image generation through style-based methods, as well as their implications for society.

A Look to Model of Society and Teams Development based on Initial Formation, Primary, Adaptable, Information, and Creative Society Patterns
Dmitriy Gakh
Subject: Social Sciences, Organizational Economics & Management
Keywords: team development; society development; maturity models
There are different Maturity, Motivation, and Development models. The models can be applied to the development of organizations, businesses, information technology infrastructure, human resources, and so on. This paper discusses society patterns that can be used in modeling society and team development. The model discussed has many advantages over existing ones. It assumes the Age of Creativity and the Creative Society Pattern as the uppermost level of development.
The patterns are juxtaposed with the 16-level Simple Learning Motivation Hierarchy Model, which allows modeling of dynamic processes with Expansion and Totality as the uppermost levels. This approach eliminates the limitations of existing models and allows detailed modeling and planning. Explanation of the future development of humanity (up to the Age of Creativity) is one of the advantages of the model. The paper describes the main peculiarities of society patterns and creates a basis for practical implementation of the model for society and team development. Organizations and teams can benefit from this model through its implementation in consulting and coaching processes. The model can be used in regional/organizational development and investment planning.

Preprint CONCEPT PAPER | doi:10.20944/preprints202110.0295.v3
Unravelling Transmission in Epidemiological Models and its Role in the Disease-Diversity Relationship
Marjolein E.M. Toorians, Ailene MacPherson, T. Jonathan Davies
Subject: Biology, Ecology
Keywords: epidemiological models; transmission; biodiversity; dilution effect
With the decrease of biodiversity worldwide coinciding with an increase in disease outbreaks, investigating this link is more important than ever before. This review outlines the different modelling methods commonly used for pathogen transmission in animal host systems. There are a multitude of ways a pathogen can invade and spread through a host population. The assumptions of the transmission model used to capture disease propagation determine the outbreak potential, i.e., the net reproductive success (R0). This review offers an insight into the assumptions and motivation behind common transmission mechanisms and introduces a general framework in which the contact rate, the most important parameter in disease dynamics, determines the transmission method. By using a general function introduced here and this general transmission-model framework, we provide a guide for future disease ecologists on how to pick the contact function that best suits their system. Additionally, this manuscript attempts to bridge the gap between mathematical disease modelling and the controversial and heavily debated disease-diversity relationship by expanding the summarized models to multi-host systems and explaining the role of host diversity in disease transmission. By outlining the mechanisms of transmission as a stepwise process, this review will serve as a guide for modelling pathogens in multi-host systems. We further describe these models in the greater context of host diversity and its effect on disease outbreaks, introducing a novel method to include host species' evolutionary history in the framework.
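As a numerical aside, the sketch below contrasts the two classic contact assumptions reviewed above, density-dependent (incidence beta*S*I) versus frequency-dependent (beta*S*I/N) transmission, in a toy SIR model; all parameters are hypothetical.

```python
# A hedged sketch: density-dependent vs. frequency-dependent incidence in a
# simple SIR model; parameters are hypothetical and rescaled so R0 matches.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, freq_dependent):
    S, I, R = y
    N = S + I + R
    incidence = beta * S * I / N if freq_dependent else beta * S * I
    return [-incidence, incidence - gamma * I, gamma * I]

y0, gamma = [999.0, 1.0, 0.0], 0.1
for freq in (False, True):
    beta = 0.3 if freq else 0.3 / 1000  # both give R0 = 3 at N = 1000
    sol = solve_ivp(sir, (0, 200), y0, args=(beta, gamma, freq))
    print("frequency-dependent" if freq else "density-dependent", sol.y[1].max())
```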
Spectral Analysis of Flow Around Single and Two Crossing Circular Cylinders Arranged at 60 and 90 Degrees
Tianyuan Wang, Qingqing Yang, Yeting Tang, Hongda Shi, Qin Zhang, Mengfei Wang, Andrey Epikhin, Andrey Britov
Subject: Engineering, Marine Engineering
Keywords: low-dimensional models; vortex dynamics; wakes
Two modal decomposition techniques, proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD), are used to identify the wake patterns past single and two crossing cylinders in 60° and 90° arrangements with gap ratio G = 4. The flow is simulated using direct numerical simulation (DNS) at Reynolds number Re = 100. The modal analysis shows that the spatial scale of the flow decreases with increasing modal frequency. Two main modes are identified in the wake of the cylinders, namely spatially antisymmetric and symmetric modes. The antisymmetric and symmetric modes are related to the cylinders' vortex shedding and the shift motion of the shed vortices, respectively, and their frequencies are odd and even multiples of the cylinders' lift-force frequency. In addition, a low-frequency mode associated with the shadowing effect of the downstream cylinder (DC) in the 90° arrangement is found in the wake at the DC centre.

A Bayesian Nonparametric Learning Approach to Ensemble Models Using the Proper Bayesian Bootstrap
Marta Galvani, Chiara Bardelli, Silvia Figini, Pietro Muliere
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: Bootstrap; Bayesian nonparametric learning; Ensemble Models
Bootstrap resampling techniques, introduced by Efron and Rubin, can be presented in a general Bayesian framework, approximating the statistical distribution of a statistical functional φ(F), where F is a random distribution function. Efron's and Rubin's bootstrap procedures can be extended by introducing an informative prior through the Proper Bayesian bootstrap. In this paper, different bootstrap techniques are used and compared in predictive classification and regression models based on ensemble approaches, i.e., bagging models involving decision trees. The Proper Bayesian bootstrap, proposed by Muliere and Secchi, is used to sample the posterior distribution over trees, introducing prior distributions on the covariates and the target variable. The results obtained are compared with those of other competitive procedures employing different bootstrap techniques. The empirical analysis reports the results obtained on simulated and real data.
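A minimal sketch of the Rubin-style Bayesian bootstrap underlying this family of methods follows: each tree is fit with Dirichlet(1, ..., 1) observation weights instead of resampled rows. The Proper Bayesian bootstrap described above additionally mixes in prior pseudo-data, which this toy example omits; the data are synthetic.

```python
# A hedged sketch of Bayesian bootstrap bagging with decision trees; not the
# authors' Proper Bayesian bootstrap (no informative prior is included here).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

trees = []
for _ in range(50):
    w = rng.dirichlet(np.ones(len(y)))  # Bayesian bootstrap observation weights
    trees.append(DecisionTreeRegressor(max_depth=5).fit(X, y, sample_weight=w))

pred = np.mean([t.predict(X) for t in trees], axis=0)  # ensemble average
print(np.mean((pred - y) ** 2))
```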
Some Green is Better than Nothing: Associations between Different Measurements of Land Patterns and Depression among Nursing Students in El Paso, Texas
Jose Nazif-Munoz, Jose Cedeno Laurent, Matthew Browning, John D Spengler, Héctor Olvera Álvarez
Subject: Earth Sciences, Atmospheric Science
Keywords: greenness; brownness; depression; structural equation models
Background: While greenness has been associated with lower depression, the generalizability of this association in arid landscapes remains undetermined. We assessed the association between depression and greenness among nursing students living in El Paso, Texas (the Chihuahuan desert). Methods: Depression was measured with the Patient Health Questionnaire-9 scale, and greenness with the normalized difference vegetation index (at buffer sizes of 250 m, 500 m, and 1000 m). Using data from the National Land Cover Database, two additional measures of land patterns were analyzed: grayness and brownness. Structural equation models were used to assess the relationships of these land patterns to depression and to quantify the indirect effects of peer alienation. Results: After adjusting for individual characteristics, at the 250 m buffer greenness was associated with a 49% decrease in the incidence rate ratio (IRR) of depression (IRR, 0.51; 95% CI, 0.12-2.10), grayness with a 64% increase (IRR, 1.64; 95% CI, 1.07-2.52), and brownness with a 35% decrease (IRR, 0.65; 95% CI, 0.42-0.99). At the 250 m buffer, peer alienation explained 17.43% (95% CI, -1.79 to 36.66) of the association between depression and brownness, suggesting a pathway to depression. Conclusions: We did not observe an association between depression and residential greenness in El Paso, Texas. However, we did observe a protective association between brownness and depression, as well as an adverse association with grayness. These results have theoretical implications, since, based on commonly used frameworks in this literature, an adverse association between brownness (and the lack of greenness) and depression was expected.

Beyond Modeling: A Roadmap to Community Cyberinfrastructure for Ecological Data-Model Integration
Istem Fer, Anthony K. Gardella, Alexey N. Shiklomanov, Shawn P. Serbin, Martin G. De Kauwe, Ann Raiho, Miriam R. Johnston, Ankur Desai, Toni Viskari, Tristan Quaife, David S. LeBauer, Elizabeth M. Cowdery, Rob Kooper, Joshua B. Fisher, Benjamin Poulter, Matthew J. Duveneck, Forrest M. Hoffman, William Parton, Joshua Mantooth, Eleanor E. Campbell, Katherine D. Haynes, Kevin Schaefer, Kevin R. Wilcox, Michael C. Dietze
Keywords: community cyberinfrastructure; accessibility; reproducibility; interoperability; models
In an era of rapid global change, our ability to understand and predict Earth's natural systems is lagging behind our ability to monitor and measure changes in the biosphere. Bottlenecks in our ability to process information have reduced our capacity to fully exploit the growing volume and variety of data. Here, we take a critical look at the information infrastructure that connects modeling and measurement efforts, and propose a roadmap that accelerates the production of new knowledge. We propose that community cyberinfrastructure tools can help mend the divisions between empirical research and modeling and accelerate the pace of discovery. A new era of data-model integration requires investment in accessible, scalable, transparent tools that integrate the expertise of the whole community, not just a clique of 'modelers'. This roadmap focuses on five key opportunities for community tools: the underlying backbone of community cyberinfrastructure; data ingest; calibration of models to data; model-data benchmarking; and data assimilation and ecological forecasting. This community-driven approach is key to meeting the pressing needs of science and society in the 21st century.

Modelling Market Volatility with Univariate GARCH Models: Evidence from Nasdaq-100
Fuzuli Aliyev, Richard Ajayi, Nijat Gasim
Subject: Social Sciences, Finance
Keywords: volatility; risk; GARCH models; Nasdaq-100
Online: 25 September 2019 (09:09:06 CEST)
This paper models and estimates the volatility of a nonfinancial, innovation- and hi-tech-focused stock index, the Nasdaq-100, using univariate symmetric and asymmetric GARCH models. We employ GARCH, EGARCH and GJR-GARCH using daily data over the period January 4, 2000 through March 19, 2019. We find that volatility shocks on the index returns are quite persistent. Furthermore, our findings show that the index exhibits a leverage effect and that the impact of shocks is asymmetric, whereby the impacts of negative shocks on volatility are higher than those of positive shocks of the same magnitude.
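For concreteness, the sketch below fits the three univariate volatility models named above with the `arch` package, on simulated heavy-tailed returns standing in for the daily Nasdaq-100 return series.

```python
# A hedged sketch: GARCH(1,1), GJR-GARCH (o=1), and EGARCH fits with the `arch`
# package on simulated returns; real use would pass daily percentage returns.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = rng.standard_t(df=6, size=2000) * 0.8  # heavy-tailed stand-in series

for name, kwargs in [("GARCH", dict(vol="GARCH", p=1, q=1)),
                     ("GJR-GARCH", dict(vol="GARCH", p=1, o=1, q=1)),
                     ("EGARCH", dict(vol="EGARCH", p=1, o=1, q=1))]:
    res = arch_model(returns, **kwargs).fit(disp="off")
    print(name, res.aic)  # persistence and asymmetry show up in the fitted parameters
```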
Advances in Machine Learning Modeling Reviewing Hybrid and Ensemble Methods
Sina Ardabili, Amir Mosavi, Annamária R. Várkonyi-Kóczy
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics
Keywords: machine learning; deep learning; ensemble models
Online: 20 August 2019 (08:41:28 CEST)
Conventional machine learning (ML) algorithms are continuously advancing and evolving at a fast pace through the introduction of novel learning algorithms. ML models are continually improved using hybridization and ensemble techniques that strengthen the computation, functionality, robustness, and accuracy aspects of modeling. Numerous hybrid and ensemble ML models have now been introduced; however, they have not been surveyed in a comprehensive manner. This paper presents the state of the art of novel ML models and their performance and application domains through a novel taxonomy.

Characterization of Human Chondrocytes from Less- vs. Severely-Affected Osteoarthritic Cartilage and Evaluation of their Ability to Develop into In Vitro 3D Models
Nazatul Nurzazlin Binti Zakariah, Shamsul Bin Sulaiman, Nor Hamdan Bin Yahaya, Rizal Abdul Rani, Ruszymah Binti Haji Idrus, Shiplu Roy Chowdhury
Subject: Life Sciences, Cell & Developmental Biology
Keywords: 3D models; cartilage; chondrocytes; osteoarthritis (OA)
Osteoarthritis (OA) is a joint disease involving cartilage degeneration. This study aimed to compare the properties of chondrocytes from less-affected (LA-Cartilage) and severely-affected (SA-Cartilage) regions of human OA articular cartilage. Based on the Dougados classification, OA cartilage was classified into two groups: less-affected (Grade 0–1) and severely-affected (Grade 2–3). Chondrocytes from each group were cultured until passage (P) 4. Growth, migration, stem cell properties, chondrogenic properties under normal and inflammatory conditions, and the formation of in vitro 3D cartilage tissues were compared between groups. The growth and migratory properties of LA-chondrocytes and SA-chondrocytes were similar, except that the migration rate of SA-chondrocytes was significantly higher at P0 compared to LA-chondrocytes. Both LA-chondrocytes and SA-chondrocytes expressed mesenchymal stem cell markers and exhibited tri-lineage differentiation, but the expression of stem cell markers decreased significantly with increasing passage number. Exposure to inflammatory conditions induced distinct morphological changes and significant increases in the expression of SOX9 at P4 and MMP3 at P1 for LA-chondrocytes. LA-chondrocytes and SA-chondrocytes were both able to develop into in vitro 3D constructs, but SA-chondrocytes exhibited superior cartilage-like properties. Chondrocytes from both less- and severely-affected regions are suitable for use in clinical applications; however, chondrocytes from severely-affected regions could be a more favorable cell source.

The Role of SMEs' Green Business Models in the Transition to a Low-Carbon Economy: Differences in Their Design and Degree of Adoption Stemming from Business Size
María A. Quintás, Ana I. Martínez-Senra, Antonio Sartal
Subject: Social Sciences, Organizational Economics & Management
Keywords: green business models; decarbonization; SMEs; Size
The purpose of this paper is to analyze how green business models (BMs) established by small and medium enterprises (SMEs) can incorporate product and process decarbonization in their components (value proposition, creation and capture) and to what extent this incorporation is affected by SME size. We use a database comprising 1,161 observations of SMEs, 466 in 2014 and 695 in 2016. The results show that SMEs' value propositions give an intermediate valuation to both legally required and voluntary reduction of environmental impact, irrespective of SME size and the year analyzed. Regarding value creation, SMEs adopt practically no environmental practices, and there are significant differences according to size, with more difficulties than advantages stemming from small size.
The study also shows that such environmental practices are not effective in reducing carbon. This diagnosis indicates that SMEs need help from the administration if they are to play a key role in the process of transformation toward a low-carbon economy. Legislative actions involving harsher environmental protection measures might help shape value propositions that place greater importance on reducing environmental impact, whereas training actions on available environmental techniques, promotion of research on how to adapt such techniques to SMEs, and the development of specific practices for SMEs might enhance environmental value creation and capture in their BMs.

Zebrafish as Toxicological Model for Screening and Recapitulate Human Diseases
Maria Virginia Caballero, Manila Candiracci
Subject: Life Sciences, Molecular Biology
Keywords: zebrafish; models; evaluation; drugs; cardiotoxicity; genotoxicity
Embryonic and larval Danio rerio are increasingly used as a toxicological model for rapid in vivo tests and developmental toxicity assays; the zebrafish's features, such as high genetic homology to mammals, robust phenotypes, and its value in high-throughput genetic and chemical screening, have made it a powerful tool to evaluate in vivo toxicity. New genome-editing methodologies such as CRISPR/Cas9, ZFN, or TALEN also make it a suitable model for studies that recapitulate human genetic diseases. This review surveys recent studies employing zebrafish as an experimental model, compares it with other in vivo and in vitro models, and presents zebrafish as a potent vertebrate tool for evaluating drug toxicity, facilitating more extensive, easy, and comprehensive knowledge of new-generation drugs.

In Vivo Models for Prostate Cancer Research
Robert Adamiecki, Anita Hryniewicz-Jankowska, Maria A. Ortiz, Xiang Li, Baylee A Porter-Hansen, Imad Nsouli, Gennady Bratslavsky, Leszek Kotula
Subject: Medicine & Pharmacology, Oncology & Oncogenics
Keywords: prostate cancer; knockout mouse models; genetically-engineered mouse models; xenografts; patient derived xenografts; organoids; signaling pathways
In 2022, prostate cancer (PCa) is estimated to be the most commonly diagnosed cancer in men in the United States – almost 270,000 American men are estimated to be diagnosed with PCa in 2022 [1]. This review compares and contrasts in vivo models of PCa with regard to the altered genes, signaling pathways, and stages of tumor progression associated with each model. The main type of model included in this review is the genetically engineered mouse model, which includes conditional and constitutive knockout models. 2D cell lines, 3D organoids and spheroids, xenografts and allografts, and patient-derived models are also included. The major applications, advantages and disadvantages, and ease of use and cost are unique to each type of model, but they all make it easier to translate the tumor progression seen in the mouse prostate to the human prostate. Although both human and mouse prostates are androgen-dependent, the fact that the native, genetically unaltered prostate in mice cannot give rise to carcinoma is an especially critical component of PCa models. Thanks to the similarities between the mouse and human genomes, our knowledge of PCa has been expanded, and will continue to expand, through models of PCa.

Ensemble Models for Tick Vectors: Standard Surveys Compared with Convenience Samples
William H. Kessler, Carrie De Jesus, Samantha M. Wisely, Gregory E. E. Glass
Subject: Biology, Ecology
Keywords: ensemble models; species distribution models (SDMs); ticks; Amblyomma americanum; Ixodes scapularis; Florida; biased sampling; study design
Ensembles of Species Distribution Models (SDMs) represent the geographic ranges of pathogen vectors by combining alternative analytical approaches and merging information on vector occurrences with more extensive environmental data. Biased collection data impact SDMs regardless of the target species, but no studies have compared the differences in the distributions predicted by ensemble models when different sampling frameworks are used for the same species. We compared ensemble SDMs for two important ixodid tick vectors, Amblyomma americanum and Ixodes scapularis, in mainland Florida, USA, when the inputs were either convenience samples of ticks or collections obtained using the standard protocols promulgated by the U.S. Centers for Disease Control and Prevention. The ensemble SDMs for the convenience samples and standard surveys showed only slight agreement (Kappa = 0.060, A. americanum; 0.053, I. scapularis). Convenience-sample SDMs indicated A. americanum and I. scapularis should be absent from 34.5% and 30.9%, respectively, of the state where standard surveys predicted the highest likelihood of occurrence of the respective vectors. Ensemble models from standard surveys predicted 81.4% and 72.5% (A. americanum and I. scapularis) of the convenience-sample sites. Omission errors by the standard-survey SDMs of the convenience collections frequently were associated with adjacency to at least one SDM or with errors in geocoding algorithms that failed to correctly locate convenience samples. These geocoding errors emphasize the commonly overlooked need to explicitly evaluate and improve data quality for vector survey data used in spatial models.
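The agreement statistic reported above can be reproduced on any pair of binary prediction maps; a minimal sketch with synthetic presence/absence grids follows (data hypothetical).

```python
# A hedged sketch: Cohen's kappa between two binary SDM prediction maps.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
map_a = rng.integers(0, 2, size=10_000)        # hypothetical convenience-sample map
map_b = np.where(rng.random(10_000) < 0.9,     # mostly independent standard-survey map
                 rng.integers(0, 2, size=10_000), map_a)
print(cohen_kappa_score(map_a, map_b))         # values near 0 indicate slight agreement
```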
Animal Models of SARS-CoV-2: A Systematic Review
Prabudh Goel, Vishesh Jain, Anjan Kumar Dhua, Devendra Kumar Yadav, Ajay Verma, Aparajita Mitra, Tripti Khanna, Sandeep Agarwala, Minu Bajpai
Subject: Medicine & Pharmacology, Other
Keywords: animal models; experimental models; SARS-CoV-2; COVID-19; rhesus macaque; monkey; hamster; ferrets; transgenic mice
Background: The use of animal models for biomedical research provides a convenient and feasible route to establish causal relationships by recapitulating the temporal sequence of events in a controlled environment, with the potential to manipulate variables at multiple levels, including genetic, protein, physiological, and environmental. Objectives: The current review was conducted to gain insights into various animal models for the SARS-CoV-2 virus. Material and Methods: A literature review (PUBMED, PUBMED Central, PMC, Google Scholar, Google search engine) following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, performed in early April 2020, revealed 9 articles of interest. Search terms included covid 19, covid-19, novel corona virus, SARS-CoV-2, animal models, experimental models, laboratory models & covid 19 animal models. Two independent reviewers extracted the data; a third reviewer was involved in case of discrepancy. Results: SARS-CoV-2 shares an identical receptor-binding domain with the SARS-CoV virus and has a superior binding affinity to the host ACE2. Based on this, the roles of rhesus macaques, golden Syrian hamsters, transgenic hACE2 mice, and ferrets as animal models have been studied. All four animals are susceptible to infection with SARS-CoV-2, with variable clinical presentation but universal recovery. The respiratory tract is primarily involved in all four models. Involvement of the intestines was also seen in at least one study in each animal. Transfer to naïve animals in close contact has been documented in the case of hamsters and ferrets. Seroconversion was documented in all, although the role of convalescent sera was tested only in hamsters, with positive results. Airborne transmission was documented in ferrets, and the possibility of feco-oral transmission was suggested for hamsters. The possibilities of recurrence and re-infection were ruled out by experiments on the rhesus macaques. The fulfilment of Koch's postulates has been highlighted. Discussion: The various studies available on animal models have been able to establish models of infection and transmission that recapitulate different aspects of the disease in humans. However, the responses of different animals, and of the same animal in different experiments, are not completely consistent. Some of them do not manifest the disease clinically, while others behave differently at the molecular and immunological levels. Moreover, the physiology of these animals is not identical to that of human beings, and the findings may not be extrapolated to human beings in an 'as-is' manner. Conclusions: The review acknowledges the achievements made by these experiments in a short span of time and highlights the urgent need for a deeper dive in search of a quintessential animal model that can be studied for the efficacy and safety of newer drugs and vaccines before a shift from the petri dish to the human body can be contemplated.

Sustainable Business Models in Biosphere Reserves: Case of Hungary
Amir Mosavi
Subject: Earth Sciences, Environmental Sciences
Keywords: sustainable development; sustainability; biosphere reserves; business models; sustainable business models; climate protection; climate change adaptation; resilience
The goal of the Man and the Biosphere (MAB) Programme is to support sustainable development through effective management, innovative technologies, policy suggestions, and governance. Today, the concept of Biosphere Reserves plays an important role in scientific investigations, generating knowledge and experience to link socio-economic development and biodiversity conservation for human well-being. This research, through an independent study that takes place in the Hungarian Biosphere Reserves of Pilis and Kiskunság, aims at identifying practical sustainable business models suitable for supporting the livelihoods of locals. In this research, the two Biosphere Reserves serve as learning sites in light of global principles and state-of-the-art knowledge on sustainable development and sustainable business models. To do so, the state of the art of sustainable business models was investigated through comprehensive academic research. The lessons learned from this investigation were used to support the data-gathering method and to plan the field trips to identify the sustainable business models currently in use at the Biosphere Reserves. This research was particularly interested in small-scale sustainable business models practiced by small communities or families in various zones of the Biosphere Reserves. A first set of interviews and questionnaires was designed to identify the business models in practice.
The results identify the foraging of wild plants in the buffer zone and transition areas as a potential sustainable business model in practice. Further interviews and surveys conducted with foragers show the benefits of their practice for the local ecosystem and for increasing awareness of the deep connection with the ecosystems. The sustainable business model of foraging, in addition to providing a sustainable livelihood for the locals, maintains a spiritual connection between people and land. The identified sustainable business model can further be educational and practical for the other 685 biosphere reserves.

Working Paper SHORT NOTE
Comments on "On a Continuum Model for Avalanche Flow and Its Simplified Variants" by S. S. Grigorian and A. V. Ostroumov
Dieter Issler
Keywords: Snow avalanches; mathematical models; snow entrainment; Voellmy and Grigorian friction laws; hydraulic models; runout distance; analytic solutions
This note first summarizes the history of the manuscript "On a Continuum Model for Avalanche Flow and Its Simplified Variants" by Grigorian and Ostroumov, published in the Special Issue of Geosciences on snow avalanche dynamics, since the early 1990s, and explains the guiding principles in editing it for publication. The changes are then detailed and some explanatory notes given for the benefit of readers who are not familiar with the early Russian work on snow avalanche dynamics. Finally, the editor's personal views as to why he still considers this paper relevant for avalanche dynamics research today are presented in brief essays on key aspects of the paper, namely the role of simple and complex models in avalanche research and mitigation work, the status and possible applications of Grigorian's stress-limited friction law, and the non-monotonicity of the dynamics of the Grigorian–Ostroumov model in the friction coefficient. A comparison of the erosion model proposed by those authors with two other models suggests enhancing it with an additional equation for the balance of tangential momentum across the shock front. A preliminary analysis indicates that continuous scouring entrainment is possible only in a restricted parameter range and that there is a second erosion regime with delayed entrainment.

Real-time Loosely Coupled 3DMA GNSS/Doppler Measurements Integration Using a Graph Optimization and Its Performance Assessments in Urban Canyons of New York
Hoi-Fung Ng, Li-Ta Hsu, Max Jwo Lem Lee, Junchi Feng, Tahereh Naeimi, Mahya Beheshti, John-Ross Rizzo
Subject: Engineering, Electrical & Electronic Engineering
Keywords: Localization; Navigation; Smartphone; GNSS; 3D Building Models
Online: 4 August 2022 (08:56:12 CEST)
Smart health applications have received significant attention in recent years. Novel applications hold significant promise to overcome many of the inconveniences faced by persons with disabilities throughout daily living. For people with blindness and low vision (BLV), environmental perception is compromised, creating myriad difficulties. Precise localization is still a gap in the field and is critical to safe navigation. Conventional GNSS positioning cannot provide satisfactory performance in urban canyons. 3D mapping-aided (3DMA) GNSS may serve as an urban GNSS solution, since the availability of 3D city models has widely increased. As a result, this study developed a real-time 3DMA GNSS positioning system based on state-of-the-art 3DMA GNSS algorithms.
Shadow matching was integrated with likelihood-based ranging 3DMA GNSS, generating positioning hypothesis candidates. To increase robustness, the 3DMA GNSS solution was then optimized with Doppler measurements using factor graph optimization (FGO) in a loosely coupled fashion. This study also evaluated positioning performance using data recorded by an advanced wearable system in New York City. The real-time forward-processed FGO can provide a root-mean-square error (RMSE) of about 21 m. The RMSE drops to 16 m when the data are post-processed with FGO in a combined direction. Overall, the results show that the proposed loosely coupled 3DMA FGO algorithm can provide better and more robust positioning performance for the multi-sensor integration approach used by this wearable for persons with BLV.

Identify the Mathematical Models Generated by the Table Curve 3D Program
Emilian Mosnegutu, Mirela Panainte-Lehadus, Valentin Nedeff, Claudia Tomozei, Narcis Barsan, Dana Chitimus, Marcin Jasinski
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization
Keywords: mathematical models; Table Curve 3D; correlation coefficient
This article describes the methodology used to identify the mathematical model that describes the correlations between the input parameters of an experiment and the parameters being monitored. Aerodynamic separation, specifically the behavior of a solid particle within an ascending vertical airflow, was chosen as the technological process. The experimental data obtained were used to identify two parameters, the average linear velocity and the angular velocity, and a mathematical model describing the dependence between the input parameters (the shape and size of the solid particle and the velocity of the airflow) and the monitored parameters was developed with the Table Curve 3D program. In order to determine a single mathematical equation that describes as accurately as possible the correlation between the input variables and those obtained, a pyramid-type analysis was designed. The determination of the mathematical equation started from the set of equations generated by the Table Curve 3D program; the equations with a correlation coefficient greater than 0.85 were then chosen, and finally the common equations were identified. Following this methodology, a single equation was identified, with a correlation coefficient r² between 0.88 and 0.99 for the average linear velocity and between 0.86 and 0.99 for the angular velocity.

Predicting Key Grassland Characteristics from Hyperspectral Data
Patrick Jackman, Thomas Lee, Michael French, Jayadeep Sasikumar, Patricia O'Byrne, Damon Berry, Adrian Lacey, Robert Ross
Subject: Engineering, Automotive Engineering
Keywords: Ensilement; Grass Quality; Hyperspectral Reflectance; Predictive Models
A series of experiments was conducted to measure and quantify the yield, dry matter content, sugar content and nitrate content of grass intended for ensilement. These experiments took place in the East Midlands of Ireland during the spring, summer and autumn of 2019. A bespoke sensor rig was constructed; included in this rig was a hyperspectral radiometer that measured a broad spectrum of reflected natural light from a circular spot approximately 1.2 metres in area. Grass inside a 50 cm square quadrat was manually collected from the centre of the circular spot for ground-truth estimation of the grass qualities. Up to 25 spots were recorded and sampled each day.
The radiometer readings for each spot were automatically recorded onto a laptop that controlled the sensor rig, and ground-truth measurements were made either on site or within 24 hours in a wet chemistry laboratory. The collected data were used to build partial least squares regression (PLSR) predictive models of grass qualities from the hyperspectral dataset, and it was found that substantial relationships exist between the spectral reflectance from the grass and yield (r² = 0.62), dry matter % (r² = 0.54), sugar content (r² = 0.54) and nitrates (r² = 0.50). This shows that hyperspectral reflectance data contain substantial information about key grass qualities and can form part of a broader holistic data-driven approach to provide accurate and rapid predictions to farmers, agronomists and agricultural contractors.
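A minimal sketch of the PLSR modelling described above follows, with synthetic spectra in place of the field data; the band count, component count, and target relationship are hypothetical.

```python
# A hedged sketch: PLSR prediction of a grass quality (e.g., dry matter %) from
# hyperspectral reflectance, with cross-validated r2; data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
spectra = rng.random((150, 200))                 # 150 spots x 200 wavelength bands
dry_matter = 20 + 5 * spectra[:, 50] - 3 * spectra[:, 120] + rng.normal(0, 0.5, 150)

pls = PLSRegression(n_components=5)
print(cross_val_score(pls, spectra, dry_matter, scoring="r2", cv=5).mean())
```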
A Survey on Bias in Deep NLP
Ismael Garrido-Muñoz, Arturo Montejo-Ráez, Fernando Martínez-Santiago, L. Alfonso Ureña-López
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics
Keywords: natural language processing; deep learning; biased models
Online: 2 March 2021 (09:17:15 CET)
Deep neural networks are hegemonic approaches to many machine learning areas, including natural language processing (NLP). Thanks to the availability of large corpora collections and the capability of deep architectures to shape internal language mechanisms in self-supervised learning processes (also known as "pre-training"), versatile and high-performing models are released continuously for every new network design. But these networks, somehow, learn a probability distribution of words and relations across the training collection used, inheriting the potential flaws, inconsistencies and biases contained in such a collection. As pre-trained models have been found to be very useful approaches to transfer learning, dealing with bias has become a relevant issue in this new scenario. We introduce bias in a formal way and explore how it has been treated in several networks, in terms of detection and correction. In addition, available resources are identified and a strategy to deal with bias in deep NLP is proposed.

LMM-22: An Enhanced Linear Mixed Model (LMM) Approach for Genome-Wide Association Studies (GWAS) for the Prediction of Diseases and Traits among Humans from Genomics Data
Siddharth Sharma
Subject: Mathematics & Computer Science, Analysis
Keywords: GWAS Studies; Linear Mixed Models; GPU Acceleration
Increasingly, genomics is being used for the prediction of specific traits and diseases (phenotypes) among humans. The wider availability of genomics data through multiple research projects (such as the International HapMap Project and 1000 Genomes) has been a catalyst in that direction. With the recent advances in machine learning and big data analysis, the computation resources and data models needed for genomics data analysis are readily available. However, the prediction of traits and diseases has its own challenges in terms of computational requirements, statistical analysis (e.g., confounding variables), and the limited quality of data collection. Linear mixed models (LMMs, a type of linear regression) are a common approach in genome-wide association studies (GWAS) for the prediction of common traits among humans using genomics. This paper reviews the existing LMM-based approaches for GWAS, describes an experiment performed with the FaST-LMM approach from Microsoft Research, and then proposes an enhanced approach (called LMM-22) to address computational and statistical issues. LMM-22 focuses on the parallelization of LMM computations and their execution on general-purpose graphics processing units (GPUs), as opposed to CPUs, to accelerate the LMM approach for GWAS.

Preprint TECHNICAL NOTE | doi:10.20944/preprints201912.0256.v1
PAPR Impact over the PER of Vehicle-to-Vehicle Communications with Fading Channels
Italo Alexander Carreño, Frank Andrés Eras, Thomás Borja, Diego Javier Reinoso, Luis Urquiza-Aguiar, Martha Cecilia Paredes Paredes
Subject: Engineering, Electrical & Electronic Engineering
Keywords: PAPR; HPA; OPS-SAP; PER; fading models
The Peak-to-Average Power Ratio (PAPR) is one of the main problems in wireless communications using Orthogonal Frequency Division Multiplexing (OFDM). Its behavior is random and can produce problems for the hardware implementation, directly influencing the Packet Error Rate (PER). In this article, the PER is obtained for channels with Rayleigh and Rician fading. In the simulation, a High Power Amplifier (HPA) is added to the transmitter, and the Simple Amplitude Predistortion-Orthogonal Pilot Sequences (OPS-SAP) technique is used for PAPR reduction.

A Hidden Markov Model for the Linguistic Analysis of the Voynich's Manuscript
Luis Acedo
Subject: Mathematics & Computer Science, Computational Mathematics
Keywords: Hidden Markov Models; Mathematical Linguistics; Voynich Manuscript
Hidden Markov models are a very useful tool for modelling time series and any sequence of data. In particular, they have been successfully applied to the field of mathematical linguistics. In this paper, we apply a hidden Markov model to analyze the underlying structure of an ancient and complex manuscript, known as the Voynich manuscript, which still remains undeciphered. By assuming a certain number of internal-state representations for the symbols of the manuscript, we train the model by means of the $\alpha$- and $\beta$-pass algorithms. By this procedure, we are able to obtain the so-called transition and observation matrices, which can be compared with those of known languages with respect to the frequency of consonant and vowel sounds. From this analysis, we conclude that transitions between the two states occur with frequencies similar to those of other languages. Moreover, the identification of the vowel and consonant sounds matches some previous tentative bottom-up approaches to decoding the manuscript.
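A minimal sketch of this kind of two-state discrete HMM analysis follows, assuming the hmmlearn package (version 0.3 or later, where CategoricalHMM provides discrete emissions) and a toy symbol stream standing in for the transcribed glyphs; Baum-Welch, i.e., the alpha/beta passes, runs inside fit().

```python
# A hedged sketch: a two-state categorical HMM trained on a synthetic glyph
# stream; assumes hmmlearn >= 0.3 (CategoricalHMM). Not the author's exact setup.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(7)
symbols = rng.integers(0, 20, size=(5000, 1))   # hypothetical 20-glyph alphabet

model = hmm.CategoricalHMM(n_components=2, n_iter=100, random_state=0)
model.fit(symbols)                # Baum-Welch (forward/backward passes)
print(model.transmat_)            # 2x2 state-transition matrix
print(model.emissionprob_)        # per-state glyph distributions (vowel-like vs consonant-like)
```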
A Hidden Markov Model for the Linguistic Analysis of the Voynich's Manuscript
Luis Acedo
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Hidden Markov Models; Mathematical Linguistics; Voynich Manuscript

Hidden Markov models are a very useful tool for modelling time series and any sequence of data. In particular, they have been successfully applied to the field of mathematical linguistics. In this paper, we apply a hidden Markov model to analyze the underlying structure of an ancient and complex manuscript, known as the Voynich manuscript, which still remains undeciphered. By assuming a certain number of internal-state representations for the symbols of the manuscript, we train the model by means of the $\alpha$- and $\beta$-pass algorithms to optimize it. By this procedure, we are able to obtain the so-called transition and observation matrices and compare them with those of known languages concerning the frequency of consonant and vowel sounds. From this analysis, we conclude that transitions occur between the two states with frequencies similar to other languages. Moreover, the identification of the vowel and consonant sounds matches some previous tentative bottom-up approaches to decoding the manuscript.
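The α-pass mentioned in the abstract is the forward recursion used to evaluate HMM likelihoods and, together with the β-pass, drives Baum-Welch training. A compact, scaled version on a toy two-state model (all matrix values are made up for illustration):

```python
# Scaled forward (alpha-pass) recursion for a discrete-symbol HMM.
import numpy as np

def forward_log_likelihood(obs, A, B, pi):
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()                    # scaling avoids numerical underflow
        log_lik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return log_lik + np.log(alpha.sum())

A = np.array([[0.7, 0.3], [0.4, 0.6]])     # toy 2-state transition matrix
B = np.array([[0.8, 0.2], [0.3, 0.7]])     # toy observation matrix
pi = np.array([0.5, 0.5])
print(forward_log_likelihood([0, 1, 0, 0, 1], A, B, pi))
```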
Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions
Sergio Manzetti
Subject: Physical Sciences, Fluids & Plasmas Keywords: rogue; wave; models; KdV; NLSE; non-local; ocean; optics Online: 9 June 2018 (15:28:58 CEST)

Anomalous waves and rogue events are closely associated with irregularities and unexpected events occurring at various levels of physics, such as in optics, in oceans and in the atmosphere. The mathematical modeling of rogue waves is a highly active field of research, which has evolved over the last decades into a specialized part of mathematical physics. The applications of the mathematical models for rogue events are directly relevant to technology development for the prediction of rogue ocean waves and for signal processing in quantum units. In this survey, a comprehensive perspective of the most recent developments in methods for representing rogue waves is given, along with a discussion of the devised forms and solutions. The standard nonlinear Schrödinger equation, the Hirota equation, the MMT equation and further models are discussed, and their properties highlighted. This survey shows that the most recent advancements in modeling rogue waves yield models which can be used to establish methods for the prediction of rogue waves at open seas, which is important for the safety and activity of marine vessels and installations. The study further puts emphasis on the differences between the methods, and how the resulting models form a basis for representing rogue waves in various forms, solitary or with a wave background. This review also has a pedagogic component directed towards students and interested non-experts, and forms a complete survey of the most conventional and emerging methods published until recently.
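As a reference point for the models surveyed, the focusing nonlinear Schrödinger equation and its simplest rational rogue-wave solution, the Peregrine breather, can be written as follows (one standard normalization; conventions vary across the literature):

```latex
% Focusing NLSE and the Peregrine rogue-wave solution (standard forms).
\begin{align*}
i\,\frac{\partial \psi}{\partial t} + \frac{1}{2}\frac{\partial^2 \psi}{\partial x^2} + |\psi|^2 \psi &= 0, \\
\psi_{\text{Peregrine}}(x,t) &= e^{it}\left[1 - \frac{4(1+2it)}{1+4x^2+4t^2}\right],
\end{align*}
```

where the Peregrine solution rises to three times the background amplitude at the origin before vanishing back into it, the defining "appears from nowhere" behavior of rogue waves.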
Models as a Medium in Architecture
Atli Magnus Seelow
Subject: Arts & Humanities, Art History & Restoration Keywords: history of architecture; architectural models; architectural media

Architecture is more than just buildings. Its associated production and reception processes take place through a variety of different media. Among those media, the model is of special significance: because architecture, like almost every science or art, works with models as representationally or theoretically simplified images mediating between the abstract and reality. The properties that characterise models give them a special significance in architecture, both in the abstract and in the concrete. The following article sketches out the history of the architectural model as a medium in a short tour d'horizon. A special focus is placed on showing the versatility of the model for design and presentation and as an artefact, teaching resource and research medium. It transmits a specific form of knowledge which can be replaced by no other medium.

Forecasting Algorithms for Recurrent Patterns in Consumer Demand
Tetiana Boiko, Oleg Karpenkov, Bulat Rakhimberdiev
Subject: Mathematics & Computer Science, Other Keywords: seasonality; forecasting; pull and push models; denoising

In this paper we develop a forecasting algorithm for recurrent patterns in consumer demand. We study this problem in two different settings: pull and push models. We discuss several features of the algorithm concerning sampling, periodic approximation, denoising and forecasting.

The Potential of Active Contour Models in Extracting Roads from Mobile Laser Scanning Data
Pankaj Kumar, Paul Lewis, Tim McCarthy
Subject: Engineering, Civil Engineering Keywords: active contour models; LiDAR; segmentation; road edges

Active contour models present a robust segmentation approach which makes efficient use of specific information about objects in the input data rather than processing all the data. They have been widely used in many applications including image segmentation, object boundary localisation, motion tracking, shape modelling, stereo matching and object reconstruction. In this paper, we investigate the potential of active contour models for extracting roads from Mobile Laser Scanning (MLS) data. The categorisation of active contours based on their mathematical representation and implementation is discussed in detail. We discuss an integrated version in which active contour models are combined to overcome their limitations. We review various active contour-based methodologies which have been developed to extract roads and other features from LiDAR and digital imaging datasets. We present a small case study in which an integrated version of active contour models is applied to automatically extract road edges from an MLS dataset. An accurate extraction of left and right edges from the tested road section validates the use of active contour models. The present study provides valuable insight into the potential of active contours for extracting roads and other infrastructure from 3D LiDAR point cloud data.
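A minimal classical snake (parametric active contour) in scikit-image, attracted from a circular initialization onto the boundary of a bright region; rasterizing an MLS point cloud into such an intensity image is an assumption of this sketch, not the authors' pipeline.

```python
# Toy active contour: fit a snake to the boundary of a synthetic bright region.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = np.zeros((200, 200))
img[60:140, 60:140] = 1.0                      # synthetic bright "road" region

s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 70 * np.sin(s),  # (row, col) circle around the region
                        100 + 70 * np.cos(s)])

snake = active_contour(gaussian(img, 3), init,
                       alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)                             # contour vertices near the region boundary
```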
The Ecology of Plant Interactions: A Giant with Feet of Clay
Ciro Cabal, Fernando Valladares, Ricardo Martinez-Garcia
Subject: Biology, Ecology Keywords: plant-plant interactions; stress gradient hypothesis; functional trait ecology; inter-plant distance; individual-based models; consumer-resource models

Ecologists use the net biotic interactions among plants to predict fundamental ecosystem features. Following this approach, ecologists have built a giant body of theory founded on observational evidence. However, due to the limitations that a phenomenological approach raises in both empirical and theoretical studies, an increasing number of scientists claim the need for a mechanistic understanding of plant interaction outcomes, and a few studies have taken such a mechanistic approach. In this synthesis, we propose a modeling framework to study plant interactions mechanistically. We first establish a conceptual ground to frame plant-plant interactions, and then we propose to formalize this research line theoretically by developing a family of individual-based, spatially-explicit models in which biotic interactions are an emergent property mediated by the interaction between plants' functional traits and the environment. These models allow researchers to evaluate the strength and sign of biotic interactions under different environmental scenarios and thus constitute a powerful tool to investigate the mechanisms underlying facilitation, species coexistence, or the formation of vegetation spatial patterns.

Protein Structure, Models of Sequence Evolution, and Data Type Effects in Phylogenetic Analyses of Mitochondrial Data: A Case Study in Birds
Emily L. Gordon, Rebecca T. Kimball, Edward L. Braun
Subject: Biology, Animal Sciences & Zoology Keywords: mitogenome; transmembrane proteins; substitution matrix; JTT matrix; molecular evolution; partitioned models; mixture models; RY coding; cyto-nuclear discordance

Phylogenomic analyses have revolutionized the study of biodiversity, but they have revealed that estimated tree topologies can depend, at least in part, on the subset of the genome that is analyzed. For example, estimates of trees for avian orders differ if protein-coding or non-coding data are analyzed. The bird tree is a good study system because the historical signal for relationships among orders is very weak, which should permit subtle non-historical signals to be identified, while monophyly of orders is strongly corroborated, allowing identification of strong non-historical signals. Hydrophobic amino acids in mitochondrially-encoded proteins, which are expected to be found in transmembrane helices, have been hypothesized to be associated with non-historical signals. We tested this hypothesis by comparing the evolution of transmembrane helices and extramembrane segments of mitochondrial proteins from 420 bird species, sampled from most avian orders. We estimated amino acid exchangeabilities for both structural environments and assessed the performance of phylogenetic analysis using each data type. We compared those relative exchangeabilities with values calculated using a substitution dataset for transmembrane helices from a varied sample of nuclear- and mitochondrially-encoded proteins, allowing us to compare the bird-specific mitochondrial models with a general model of transmembrane protein evolution. To complement our amino acid analyses, we examined the impact of protein structure on patterns of nucleotide evolution. Models of transmembrane and extramembrane sequence evolution for amino acids and nucleotides exhibited striking differences, but there was no evidence for strong topological data type effects. However, incorporating protein structure into analyses of mitochondrially-encoded proteins improved model fit. Thus, we believe that considering protein structure will improve analyses of mitogenomic data, both in birds and in other taxa.

Quantile Mapping Bias Correction on Rossby Centre Regional Climate Models for Precipitation Analysis over Kenya, East Africa
Brian Ayugi, Guirong Tan, Rouyun Niu, Hassen Babaousmail, Moses Ojara, Hanggoro Wido, Lucia Mumo, Isaac Kwesi Nooni, Victor Ongoma
Subject: Earth Sciences, Atmospheric Science Keywords: Quantile Mapping Bias Correction (QMBC); Regional Climate Models (RCMs); Rossby Centre Regional Climate Models (RCA4); Drought; Flood; Kenya

Accurate assessment and projection of extreme climate events require the use of climate datasets with no or minimal error. This study uses the quantile mapping bias correction (QMBC) method to correct the bias of five Regional Climate Models (RCMs) from the latest output of the Rossby Centre Regional Climate Model (RCA4) over Kenya, East Africa. The outputs were validated using various scalar metrics such as the Root Mean Square Difference (RMSD), Mean Absolute Error (MAE) and mean bias. The study found that the QMBC algorithm demonstrates varying performance among the models in the study domain. The results show that most of the models exhibit significant improvement after correction at seasonal and annual timescales. Specifically, the European community Earth-System (EC-EARTH) and Commonwealth Scientific and Industrial Research Organization (CSIRO) models depict exemplary improvement compared to the other models. On the contrary, the Institut Pierre Simon Laplace Model CM5A-MR (IPSL-CM5A-MR) shows little improvement across the various timescales (i.e., March-April-May (MAM) and October-November-December (OND)). Projections forced with bias-corrected historical simulations tallied with observed values, demonstrating satisfactory performance compared with the uncorrected RCM outputs. This study has demonstrated that applying QMBC to outputs from RCA4 is an important intermediate step to improve climate data prior to performing any regional impact analysis. The corrected models can be used for projections of drought and flood extreme events over the study area. This analysis is crucial for sustainable planning for climate change adaptation and mitigation, and for disaster risk reduction.
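The core of empirical quantile mapping is a quantile-to-quantile transfer function estimated over a calibration period; a compact stand-in is sketched below, with synthetic gamma-distributed "precipitation" and a fixed quantile grid as illustrative choices.

```python
# Empirical quantile mapping: map model values through the model CDF into the
# observed quantile space (a minimal stand-in for the paper's QMBC method).
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    q = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, q)          # model quantiles (calibration period)
    oq = np.quantile(obs_hist, q)            # observed quantiles (calibration period)
    return np.interp(model_new, mq, oq)      # transfer function applied to new data

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, 5000)              # synthetic "observed" precipitation
mod = rng.gamma(2.0, 4.5, 5000)              # biased "model" precipitation
corrected = quantile_map(mod, obs, mod)
print(round(mod.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))
```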
Spatiotemporal Heterogeneity in the Distribution of Chikungunya and Zika Virus Case Incidences and Risk Factors during their Epidemics in Barranquilla, Colombia, between 2014 and 2016: An Ecological Study
Thomas C McHale, Claudia M Romero-Vivas, Claudio Fronterre, Pedro Arango-Padilla, Andrew K Falconar, Naomi R Waterlow, Chad Nix, Jorge Cano
Subject: Life Sciences, Other Keywords: Chikungunya virus; Zika virus; spatial clustering; Bayesian Poisson models; conditional autoregressive models; socioeconomic risk factors; environmental risk factors

Chikungunya virus (CHIKV) and Zika virus (ZIKV) have recently emerged as global infections with consequential disability-adjusted life years (DALYs) and economic burden. This study aimed to explore the spatiotemporal heterogeneity in the occurrence of CHIKV and ZIKV outbreaks throughout Barranquilla, Colombia during 2014 and 2016 and explored the potential for clustering. Incidence data were fitted using multiple Bayesian Poisson models based on a suite of explanatory variables as potential risk factors and multiple options for random effects. A best-fit model was used to analyse the case incidence risk for both epidemics and to identify risk factors. Neighbourhoods in the northern region of Barranquilla were hotspots for the outbreaks of both CHIKV and ZIKV. Additional hotspots occurred in the south-western and central regions during the CHIKV and ZIKV outbreaks, respectively. Multivariate conditional autoregressive models strongly identified higher socioeconomic strata (SES) and residing in detached houses as risk factors for ZIKV case incidence. These novel findings challenge the belief that these infections are driven by social vulnerability and merit further study, both in Barranquilla and throughout the tropical and subtropical regions of the world.

Factors Influencing Genomic Prediction Accuracies of Tropical Maize Resistance to Fall Armyworm and Weevils
Arfang Badji, Lewis Machida, Daniel Bomet Kwemoi, Frank Kumi, Dennis Okii, Natasha Mwila, Symphorien Agbahoungba, Angele Ibanda, Astere Bararyenya, Selma Ndapewa Nghituwamhata, Thomas Odong, Peter Wasswa, Mildred Ochwo-Ssemakula, Herbert Talwana, Godfrey Asea, Michael Otim, Samuel Kyamanywa, Patrick Rubaihayo
Subject: Biology, Agricultural Sciences & Agronomy Keywords: Prediction accuracy; Mixed linear and Bayesian models; Machine Learning algorithms; Training set size and composition; Parametric and nonparametric models

Genomic selection (GS) can accelerate variety improvement when the training set (TS) size and its relationship with the breeding set (BS) are optimized for the prediction accuracy (PA) of genomic prediction (GP) models. Sixteen GP algorithms were run on phenotypic best linear unbiased predictors (BLUPs) and estimators (BLUEs) of resistance to both fall armyworm (FAW) and maize weevil (MW) in a tropical maize panel. For MW resistance, 37% of the panel formed the TS and the remainder the BS, whilst for FAW, random-based training sets (RBTS) and pedigree-based training sets (PBTS) were designed. PAs achieved with BLUPs varied from 0.66 to 0.82 for MW resistance traits and, for FAW resistance, from 0.694 to 0.714 for the RBTS of 37% and from 0.843 to 0.844 for the RBTS of 85%; these were at least two-fold those from BLUEs. For PBTS, FAW resistance PAs were generally higher than those for RBTS, except for one dataset. GP models generally showed similar PAs across individual traits, whilst the TS designation was determinant, since a positive correlation (R = 0.92***) between TS size and PAs was observed for RBTS, whereas for PBTS it was negative (R = -0.44**). This study pioneers the use of GS for maize resistance to insect pests in sub-Saharan Africa.

Effect of the Trip-Length Distribution on Network-Level Traffic Dynamics: Exact and Statistical Results
Jorge Laval
Subject: Civil Engineering, Engineering Keywords: Traffic flow; Macroscopic Fundamental Diagram; Aggregated network models

This paper presents additional results for the generalized bathtub model of urban networks, including a simpler derivation and exact solutions for uniformly distributed trip lengths. It is shown that in steady state this trip-based model is equivalent to the more parsimonious accumulation-based model, and that the trip-length distribution has merely a transient effect on traffic dynamics, which converge to the same point in the macroscopic fundamental diagram (MFD). To understand the statistical properties of the system, a queueing approximation method is proposed to compute the network accumulation variance. It is found that (i) the accumulation variance is much larger than predicted by traditional queueing models, due to the nonlinear dynamics imposed by the MFD, (ii) the trip-length distribution has no effect on the accumulation variance, indicating that the proposed formula for the variance might be universal, and (iii) the system exhibits critical behavior near the capacity state, where the variance diverges to infinity. This indicates that tools from critical phenomena and phase transitions might be useful for understanding congestion in cities.
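The accumulation-based model mentioned above reduces, in its simplest form, to a single ODE driven by an MFD; a toy forward-Euler simulation with a Greenshields-type speed MFD and made-up parameters is sketched below.

```python
# Accumulation-based (bathtub) network model:
#   dn/dt = inflow(t) - n * V(n) / Lbar
# with V(n) a speed MFD and Lbar the mean trip length (illustrative values).
import numpy as np

vf, n_jam, Lbar = 14.0, 10000.0, 2500.0       # free-flow speed (m/s), jam accumulation, trip length (m)
V = lambda n: vf * max(0.0, 1.0 - n / n_jam)  # Greenshields-type speed MFD
inflow = lambda t: 3.0 if t < 1800 else 0.5   # veh/s: demand pulse, then low demand

n, dt, hist = 100.0, 1.0, []
for step in range(7200):
    t = step * dt
    outflow = n * V(n) / Lbar                 # trip completion rate (veh/s)
    n = max(0.0, n + dt * (inflow(t) - outflow))
    hist.append(n)
print("peak accumulation:", round(max(hist)))
```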
Use of Human Lung Tissue Models for Screening of Drugs Against SARS-CoV-2 Infection
Alexander J McAuley, Petrus Jansen van Vuren, Muzaffar-Ur-Rehman Mohammed, Faheem Faheem, Sarah Goldie, Shane Riddell, Nathan J Gödde, Ian K Styles, Matthew P Bruce, Simran Chahal, Stephanie Keating, Kim R Blasdell, Mary Tachedjian, Carmel M O'Brien, Nagendrakumar Balasubramanian Singanallur, John Noel Viana, Aditya V Vashi, Carl M Kirkpatrick, Christopher A MacRaild, Rohan M Shah, Elizabeth Vincan, Eugene Athan, Darren J Creek, Natalie L Trevaskis, Sankaranarayanan Murugesan, Anupama Kumar, Seshadri S Vasan
Subject: Life Sciences, Virology Keywords: COVID-19; Therapeutics; Drug Repurposing; 3D Tissue Models

The repurposing of licenced drugs for use against COVID-19 is one of the most rapid ways to develop new and alternative therapeutic options to manage the ongoing pandemic. Given the approximately 8,000 licenced compounds available from Compounds Australia that can be screened, this paper demonstrates the utility of commercially available ex vivo/3D airway and alveolar tissue models. These models are a closer representation of in vivo studies than in vitro models, but retain the benefits of rapid in vitro screening for drug efficacy. We demonstrate that several existing drugs appear to show anti-SARS-CoV-2 activity against both the Delta and Omicron Variants of Concern in the airway model. In particular, fluvoxamine, as well as aprepitant, everolimus, and sirolimus, have virus-reduction efficacy comparable to the current standard of care (remdesivir, molnupiravir, nirmatrelvir). Whilst these results are encouraging, further testing and efficacy studies are required before clinical use can be considered.

A Review of Methods used to Monitor and Predict Droughts
Hushiar Raheem Hamarash, Azad Rasul, Rahel Othman Hamad
Subject: Earth Sciences, Environmental Sciences Keywords: drought monitoring; drought predictions; drought indices; drought models

Drought is considered one of the severest natural disasters, and it is difficult to predict. This review article aims to present the state of the art in methods used to predict and monitor the various types of drought. We examine more than 30 indices and models to identify the strengths and weaknesses of these methods and to identify gaps remaining in this field. Examples of examined indices are the Palmer Drought Severity Index (PDSI), the Standardized Precipitation Index (SPI), and the Standardized Precipitation Evapotranspiration Index (SPEI). The research found improvement in drought modeling; however, more focus and improvement are required to monitor and predict drought types. It also found that some methods outperform others, such as PDSI, SPI, SPEI, EVI, NDVI, NDWI, VCI and TCI.
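As an illustration of the index family reviewed, the SPI reduces to fitting a distribution to aggregated precipitation and mapping cumulative probabilities to standard-normal quantiles. A bare-bones sketch follows (gamma fit, synthetic data; real implementations also handle zero-precipitation totals separately).

```python
# Standardized Precipitation Index (SPI) sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
precip = rng.gamma(2.0, 30.0, 360)                 # 30 years of monthly totals (synthetic)

a, loc, scale = stats.gamma.fit(precip, floc=0)    # fit gamma with location fixed at 0
cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf)                          # same cumulative probability in N(0,1)
print("driest month SPI: %.2f" % spi.min())
```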
A Conceptual Framework for Constructing Decision Policies by Processing the Possibilities in Mental Models of Dynamic Systems with the Cognitive Theory of Mental Models
Martin FG. Schaffernicht, Miguel López-Astorga, Cristian A. Rojas-Barahona, Ramón D. Castillo
Subject: Social Sciences, Organizational Economics & Management Keywords: Mental Models; Dynamic Decision Making; Systems Thinking; Learning

This article is a theoretical contribution to mental model research, which currently has different threads. Whereas some researchers focus on the perceived causal structure, others also include decision policies and decisions. We focus on the link between recognized causal structure ("mental models of dynamic systems") and policies, proposing Johnson-Laird's theory of mental models as the link. The resulting framework hypothesizes two types of systematic mental model errors: (1) misrepresentation of the system's structure and (2) failure to deploy relevant mental models of possibilities. Examination of three experiments through this lens reveals errors of both types. Therefore, we propose that the cognitive theory of mental models opens a path to better understanding how people construct their decision policies and to developing interventions that reduce such mental model errors. The article closes by raising several questions for empirical studies of the reasoning process leading from mental models of dynamic systems to decision policies.

Preprint COMMUNICATION | doi:10.20944/preprints202206.0137.v1
FAIR Sharing of Reproducible Models of Epidemic and Pandemic Forecast
Kausthubh Ramachandran, Matthias König, Martin Scharm, Tung V. N. Nguyen, Henning Hermjakob, Dagmar Waltemath, Rahuman S. Malik Sheriff
Subject: Medicine & Pharmacology, Other Keywords: FAIR; epidemiology; models; pandemic forecast; SIR modelling; standards

A major challenge for the dissemination, replication, and reuse of epidemiological forecasting studies during the COVID-19 pandemic was the lack of clear guidelines and platforms to exchange models in a Findable, Accessible, Interoperable, and Reusable (FAIR) manner, facilitating the reproducibility of research outcomes. At the beginning of the pandemic, models were developed in diverse tools that were not interoperable, were opaque without traceability and semantics, and were scattered across various platforms, making them hard to locate, infer and reuse. In this work, we demonstrate that implementing the standards developed by the systems biology community to encode and share COVID-19 epidemiological models can serve as a roadmap for implementing models as a tool in medical informatics in general. As a proof of concept, we encoded and shared 24 epidemiological models using the standard format for model exchange in systems biology, annotated them with cross-references to data resources, packed all associated files into COMBINE archives for easy sharing, and finally disseminated the models through the BioModels repository to significantly enhance their reproducibility and repurposing potential. We recommend the use of systems biology standards to encode and share models of epidemic and pandemic forecasts to improve their findability, accessibility, interoperability, reusability, and reproducibility.
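Many of the shared forecast models are SIR-type compartmental models; a self-contained sketch of the basic SIR dynamics is given below, with illustrative parameters not taken from any of the 24 encoded models.

```python
# Minimal SIR compartmental model of the kind exchanged through BioModels.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

t = np.linspace(0, 160, 161)
S0, I0, R0 = 999_000.0, 1_000.0, 0.0
beta, gamma = 0.3, 0.1                      # basic reproduction number R0 = beta/gamma = 3
S, I, R = odeint(sir, [S0, I0, R0], t, args=(beta, gamma)).T
print("peak infected fraction: %.2f" % (I.max() / (S0 + I0 + R0)))
```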
Do Written Responses to Open-Ended Questions on Fourth-Grade Formative Assessments in Mathematics Help Predict Scores on End-of-Year Standardized Tests?
Felipe Urrutia, Roberto Araya
Subject: Social Sciences, Education Studies Keywords: Computational linguistics; elementary mathematics; formative assessments; student models

Predicting long-term student learning is a critical task for teachers and for educational data mining. However, most models do not consider two typical situations in real-life classrooms. The first is that teachers develop their own questions for formative assessment. Therefore, there are a huge number of possible questions, each of which is answered by only a few students. Second, formative assessment often involves open-ended questions that students answer in writing. These types of questions in formative assessment are highly valuable. However, analyzing the responses automatically can be a complex process. In this paper, we address these two challenges. We analyzed 621,575 answers to closed-ended questions and 16,618 answers to open-ended questions by 464 fourth-graders from 24 low-SES schools. We constructed a classifier to detect incoherent responses to open-ended mathematics questions. We then used it in a model to predict scores on an end-of-year national standardized test. We found that despite students answering 36.4 times fewer open-ended questions than closed questions, including features of the students' open responses in our model improved our prediction of their end-of-year test scores. To the best of our knowledge, this is the first time that a predictor of end-of-year test scores has been improved by using automatically detected features of answers to open-ended questions on formative assessments.

What If? Electricity As Energy
David Gautschi, Heidi Gautschi, Christopher Tucci
Subject: Social Sciences, Business And Administrative Sciences Keywords: electricity markets; financial intermediation; money; institutions; business models

Responding to the influences of climate change, on the one hand, and selected benefits of digital technology, on the other, an energy transition of global scale appears to be underway. Many observers project that a significant element of the energy transition will be a growing dependence on electricity, a dependence possibly doubling by 2050. Such a transformation, however, would likely require re-configuring the architecture of complex, centralized electricity grids, an artifact of a context of more than a century ago. In concert with the energy transition, we argue for modifying the objective of the electricity grid to enable efficient, pervasive optimization in local service areas that provides incentives for users to be efficient in their energy use. At the core of our argument is the presentation of economic incentives denominated in an electricity-backed commodity currency, such that incumbent electricity generators could augment their economic purpose of electricity production and distribution to include financial intermediation. A direct consequence of this institutional transformation is the opportunity for all users to generate wealth. Others have been inspired to conjure ways that energy could be a candidate currency. Our argument is distinctive, though, in exploiting how an institution (the power grid system) could be repositioned and how all agents in the system could benefit from the institutionalization of electricity as money.

A Wavelet Multiscale Mathematical Model for Quality of Life Index Measuring
Majed S. Balalaa, Anouar Ben Mabrouk
Subject: Social Sciences, Other Keywords: multiscale; quality of life; wavelets; mathematical models

The present paper is concerned with the study of the quality of life index. Such an index has become an important measure of the well-being of individuals. However, the quality of life index is always subjective, intangible, and often hard to quantify with precision, due to the lack of quantitative models dealing with it. The main goal of the present paper is thus to propose a mathematical, quantitative model for the measurement of a quality of life index. The main novelty is the construction of a wavelet dynamic multiscale model to quantify and investigate the effect of time scale on the measurement of the quality of life index. The proposed procedure is applied empirically to a sample corresponding to Saudi Arabia as a case study, covering the period from 2003 to 2020 as part of the 2030 vision plan. Saudi Arabia has implemented the so-called 2030 vision plan, in which quality of life improvement is one of the main goals. The findings show that wavelets are capable of localizing the time-wise behavior of the index, in contrast to classical studies, which estimate a global view of the index. Moreover, the study shows the link between the quality of life behavior and many other indices.
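A small sketch of the kind of multiscale decomposition involved, using PyWavelets on a synthetic index series; the wavelet family, decomposition level and series are illustrative choices only.

```python
# Discrete wavelet multiscale decomposition of an index-like series.
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.arange(216) / 12.0                        # 18 years, monthly (synthetic)
index = 0.05 * t + 0.2 * np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)

coeffs = pywt.wavedec(index, "db4", level=4)     # [cA4, cD4, cD3, cD2, cD1]
for name, c in zip(["trend cA4", "cD4", "cD3", "cD2", "cD1"], coeffs):
    print(name, "energy %.3f" % float(np.sum(c ** 2)))
```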
Cancer Diagnosis of Microscopic Biopsy Images Using Social Spider Optimization Tuned Neural Network
Prasanalakshmi Balaji, Kumarappan Chidambaram
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: biopsy; cancer diagnosis; predictive models; neural network; optimization

Cancer is one of the most dangerous diseases threatening people. If diagnosed in its earlier stages, cancer can be eradicated before its life-threatening consequences arise. In addition, accuracy in prediction plays a major role. Hence, developing a reliable model that contributes to the medical community through the early diagnosis of biopsy images with high accuracy is essential. This article aims at the development of better predictive models using multivariate data and high-resolution diagnostic tools in clinical cancer research. It proposes a social spider optimization (SSO) algorithm-tuned neural network to classify microscopic biopsy images of cancer. The significance of the proposed model relies on the effective tuning of the weights of the NN classifier by the SSO algorithm. The performance of the proposed strategy is analysed with performance metrics such as accuracy, sensitivity, specificity, and MCC, which attain 95.9181%, 94.2515%, 97.125%, and 97.68%, respectively, showing the effectiveness of the proposed method for cancer diagnosis.
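The tune-classifier-weights-by-metaheuristic pattern can be illustrated with a generic population-based search; note that this is a stand-in for the paper's SSO, not the SSO update rules themselves, and the data are synthetic.

```python
# Generic population-based search over the weights of a tiny linear classifier.
# This is NOT the social spider optimization algorithm, only the same pattern
# of tuning classifier weights with a derivative-free, population-based search.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(int)                       # synthetic labels

def accuracy(w):
    return np.mean((X @ w > 0).astype(int) == y)

pop = rng.normal(size=(30, 8))                         # candidate weight vectors
for _ in range(100):
    fitness = np.array([accuracy(w) for w in pop])
    best = pop[fitness.argmax()]
    pop = best + rng.normal(scale=0.3, size=pop.shape) # perturb around the best
    pop[0] = best                                      # keep the elite unchanged
print("best training accuracy:", accuracy(best))
```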
Study on Significant Drift in the Domain of Explainable Artificial Intelligence
Tavishee Chauhan, Hemant Palivela
Keywords: XAI; bibliometric analysis; black box models; artificial intelligence

Artificial Intelligence (AI) is in demand because many tasks must be completed on a daily basis; as a result, automating routine tasks is an excellent idea. This reduces an organization's workload while also improving efficiency. Furthermore, businesses can support their strategy with talented personnel through Artificial Intelligence. Explainability in XAI derives from a combination of strategies that improve the flexibility and interpretability of machine learning models. When Artificial Intelligence is trained with a large number of variables to which alterations are applied, the entire process turns into a black-box model, which is in turn difficult to understand. The data for this research's quantitative analysis were gathered from the IEEE, Web of Science, and Scopus databases. This study looked at the variety of fields engaged in the Explainable Artificial Intelligence (XAI) trend, the techniques most commonly employed in the XAI domain, the locations from which these studies were conducted, the year-by-year publishing trend, and the most frequently occurring keywords in the abstracts. Ultimately, the quantitative review reveals that there is plenty of opportunity for further research employing Explainable Artificial Intelligence methodologies.

Geometry and Control of Thermodynamic Systems Described by Generalized Exponential Families
Marco Favretti
Subject: Physical Sciences, Mathematical Physics Keywords: statistical models; Fisher metric; Ehresmann connection; exponential families

In this paper we investigate the geometric structure and control of exponential families depending on additional parameters, called external parameters. These generalized exponential families emerge naturally when one applies the maximum entropy formalism to derive the equilibrium statistical mechanics framework. We study the associated statistical model, compute the Fisher metric and introduce a natural fibration of the parameter space over the external parameter space. The Fisher Riemannian metric allows us to endow this fibration with an Ehresmann connection and to study the geometry and control of these statistical models. As an example, we show that the horizontal lift of paths in the external parameter space corresponds to an isentropic evolution of the system. We apply the theory to the example of an ideal gas in a rotating rigid container. Most of the results are expressed in local coordinates; in the appendices we hint at possible global extensions of the theory.
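For reference, the Fisher metric computed in such a setting is the standard Fisher information metric on a parametric family $p(x;\theta)$:

```latex
% Standard definition of the Fisher information metric (added for orientation).
\begin{align*}
g_{ij}(\theta) = \mathbb{E}_{p(\cdot;\theta)}\!\left[
\frac{\partial \log p(x;\theta)}{\partial \theta^{i}}\,
\frac{\partial \log p(x;\theta)}{\partial \theta^{j}}\right]
\end{align*}
```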
Who is Responsible for Embodied CO2?
Hans Sanderson
Subject: Earth Sciences, Atmospheric Science Keywords: Sustainability; Climate; Trade; Models; Emissions; Value Chain; Justice

With the Paris Agreement, countries are obliged to report greenhouse gas (GHG) emission reductions, which will ensure that the global temperature increase is maintained well below 2°C. The Parties will report their Nationally Determined Contributions in terms of plans and progress towards these targets during the postponed COP26 in Glasgow in November 2021. These commitments, however, do not take into account significant portions of the consumption-related emissions associated with countries' imports. Similarly, the majority of companies that report their emissions to CDP do not account for their embodied value-chain emissions. Municipalities on the path towards carbon neutrality, in accordance with the methods outlined by C40, also do not include imported and embodied CO2e in their total emission tallies. So, who is responsible for these emissions: the producer or the consumer? How can we ensure that NDCs and municipal and corporate reduction targets share the responsibility for emissions along the value chain, thus ensuring that targets and plans become sustainable, climate-fair, and just in global value chains? Today the responsibility lies with the producer, which is not sustainable. We have the outline of the tools needed to quantify and transparently share the responsibility between producers and consumers at the corporate, municipal and national levels, based on an improved understanding of the attendant sources, causes, flows and risks of GHG emissions globally. Hybrid LCA/EEIO models, for example, can be further developed. This will, in the end, enable everyday consumption to support a more sustainable, green and low-carbon transition of our economy.

Sustainable Development Goals and Physical Education. A Proposal for Practice-Based Models
Salvador Baena-Morales, Daniel Jerez-Mayorga, Pedro Delgado Floody, Jesús Martínez-Martinez
Subject: Social Sciences, Accounting Keywords: physical education; physical activity; pedagogical models; sustainability development

The Sustainable Development Goals (SDGs) are a global strategy aiming for a more equitable and just world. These objectives are organized into 17 SDGs, which are detailed in 169 targets. Different international institutions have emphasized education's relevance for developing citizens who contribute to the achievement of the SDGs by 2030. However, a review focused exclusively on Physical Education has not yet been performed. Therefore, the objective of this work is twofold: first, to analyze and select the specific targets of the SDGs that can be implemented in the subject of Physical Education; and second, to relate these specific targets to the different practice-based models in Physical Education. This review shows how three institutional documents have previously related sport, physical exercise and physical education to the specific targets of the SDGs. Based on this search, the document selects those targets that could be integrated into the educational context through Physical Education. The bibliographic and narrative analysis carried out in this research shows that, of the 169 specific targets proposed in the SDGs, only 24 could be worked on in Physical Education. In addition, after this analysis, a proposal relating the practice-based models to these 24 targets is presented. The contributions made in this paper will allow teachers to establish links between PE sessions and the SDGs while raising awareness, in order to develop students who contribute to a more sustainable world.

Small Ruminants and Its Use in Regenerative Medicine: Recent Works and Future Perspectives
Rui D. Alvites, Mariana Vieira Branquinho, Ana Catarina Sousa, Bruna Lopes, Patrícia Sousa, Carla Mendonça, Luís Miguel Atayde, Ana Colette Maurício
Subject: Life Sciences, Biochemistry Keywords: Goat; Sheep; Small Ruminants; Animal Models; Regenerative Medicine

Medical and translational scientific research requires the use of animal models as an initial approach to the study of new therapies and treatments, but when the objective is the exploration of translational potential, classical models fail to adequately mimic problems in humans. Among the larger animal models that have been explored more intensely in recent decades, small ruminants, namely sheep and goats, have emerged as excellent options. The main advantages associated with the use of these animals in research are their anatomy and dimensions, very similar to those of humans in most physiological systems, in addition to their low maintenance and feeding costs, their tendency to be docile, their long life expectancy and the few ethical complications they raise in society. The most obvious disadvantages are the significant differences in some systems, such as the gastrointestinal system, and the reduced amount of data, which limits comparison between works and the validation of characterization assays. Nevertheless, these species have recently been increasingly used as animal models for diseases of different systems, and the results obtained open doors for their more frequent and advantageous use in the future. The purpose of this review is to summarize the general principles related to the use of small ruminants as animal models, with a focus on regenerative medicine, to group the most relevant works and results published recently, and to highlight their potential for medical research in the near future.
Planning the Future Oral Health Workforce: A Rapid Review of Supply, Demand and Need Models, Data Sources and Skill Mix Considerations
Madhan Balasubramanian, Aliya Hasan, Suruchi Ganbavale, Anfal Alolayah, Jennifer Gallagher
Subject: Medicine & Pharmacology, Dentistry Keywords: health workforce; operational models; planning; skill mix; integration

Over the last decade, there has been renewed interest in oral health workforce planning. The purpose of this review is to examine oral health workforce planning models of supply, demand and need, mainly with respect to their data sources, modelling techniques and use of skill mix. A search was carried out on the PubMed, Web of Science, and Google Scholar databases for scientific articles on oral health workforce planning models published between 2010 and 2020. No restrictions were placed on the type of modelling philosophy, and all studies including supply, demand or needs-based models were included. Rapid review methods guided the review process. Twenty-three studies from 15 different countries were included in the review. A majority were from high-income countries (n=17). Dentists were the sole oral health workforce group modelled in 13 studies; only five studies included skill mix (allied dental personnel) considerations. The most common application of modelling was a workforce-to-population ratio or a needs-based demand-weighted variant. Nearly all studies presented weaknesses in the modelling process due to limitations in data sources and/or the non-availability of data necessary to inform oral health workforce planning. Skill mix considerations in planning models were also limited to horizontal integration within the oral health professions. Planning for the future oral health workforce is heavily reliant on quality data being available for supply, demand and needs models. Integrated methodologies that expand skill mix considerations and account for uncertainty are essential for future planning exercises.

Dietary Polyphenols in Metabolic Diseases and Neurodegeneration: Molecular Targets on Autophagy and Biological Effects
Ana García-Aguilar, Olga Palomino Ruiz-Poveda, Manuel Benito de las Heras, Carlos Guillén Viejo
Subject: Medicine & Pharmacology, Allergology Keywords: polyphenols; autophagy; mechanisms; oxidative stress; inflammation; disease models

Polyphenols represent a group of secondary metabolites of plants which have been analyzed as potent regulators of multiple biological processes, including cell proliferation, apoptosis and autophagy, among others. These natural compounds exhibit beneficial effects and protection against inflammation, oxidative stress and related injuries, including metabolic diseases such as cardiovascular damage, obesity and diabetes, and neurodegeneration. In the present review, we report the main biological effects of different dietary polyphenols in relation to autophagy regulation and their impact on metabolic and neurodegenerative diseases.

Money Often Costs Too Much: A Study to Investigate The Effect Of Twitter Sentiment On Bitcoin Price Fluctuation
Mitul Verma, Pritish Sharma
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Bitcoin; SVM; linear mixed models; word embedding; ELMo

Introduced in 2009, Bitcoin has demonstrated huge potential as the world's first digital currency and has been widely used as a financial investment. Our research aims to uncover the relationship between Bitcoin prices and people's sentiments about Bitcoin on social media. Among the various social media platforms, micro-blogging is one of the most popular. Millions of people use micro-blogging platforms to exchange ideas, broadcast views, and provide opinions on different topics related to politics, culture, science, and technology. This makes them a potentially rich source of data for sentiment analysis. We therefore chose one of the busiest micro-blogging platforms, Twitter, to perform sentiment analysis on Bitcoin. We used the ELMo embedding model to convert Bitcoin-related tweets into vector form and an SVM classifier to divide the tweets into three sentiment categories: positive, negative, and neutral. We then used the sentiment data to find its relation to Bitcoin price fluctuations using a linear mixed model.
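The classification step of such a pipeline is compact; in the sketch below, random vectors stand in for the ELMo embeddings (loading ELMo itself is outside the scope of this sketch) and the labels are synthetic, so the reported accuracy is near chance by construction.

```python
# Three-class SVM over sentence-embedding vectors (placeholder features).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
emb = rng.normal(size=(600, 256))                   # placeholder "tweet embeddings"
labels = rng.integers(0, 3, 600)                    # 0=negative, 1=neutral, 2=positive
Xtr, Xte, ytr, yte = train_test_split(emb, labels, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```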
Effects of Tissue Preservation on Carbon and Nitrogen Stable Isotope Signatures in Syngnathid Fishes and Prey
Miquel Planas, Alex Paltrinieri, Mario Davi Dias Carneiro, Jorge Hernández-Urcera
Subject: Biology, Animal Sciences & Zoology Keywords: stable isotopes; preservation; syngnathids; seahorses; pipefishes; conversion models

Stable isotope analysis (SIA) is a powerful tool in many types of ecological and physiological studies, and different preservation methods are commonly applied to samples prior to isotopic analysis. The effects of various preservation methods (drying, freezing, ethanol and formaldehyde) were analyzed for C:N ratio, δ13C and δ15N in a variety of tissues, including dorsal fins (three seahorse and two pipefish species), seahorse newborns (three seahorse species), and prey (copepods and different stages of Artemia) commonly used to feed the fishes under rearing conditions. The aims of the study were to: (i) evaluate the isotopic effects of preservation methods across tissues; and (ii) construct the first conversion models available for syngnathid fishes. Preservation in ethanol, and to a lesser extent in formaldehyde, significantly affected δ13C values, whereas δ15N signatures were not significantly affected. Due to their low lipid content, the isotopic signals in fish fins were almost unaffected, supporting the suitability of dorsal fins as a convenient tool in isotopic studies of vulnerable species such as syngnathids. The regression equations provided permit the successful conversion of δ13C and δ15N values between preservative treatments. The conversion models can be applied to isotopic studies in the field and in the laboratory.

Dynamics of Periodic Waves in a Neural Field Model
Nikolai Bessonov, Anne Beuter, Sergei Trofimchuk, Vitaly Volpert
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: neural field models; integrodifferential equations; wave; brain stimulation

Periodic travelling waves are observed in various brain activities, including visual, motor, language and sleep activity. There are several neural field models describing periodic waves, assuming nonlocal interaction and, possibly, inhibition, time delay, or other properties. In this work we study the influence of asymmetric connectivity functions and of time delay on the emergence of periodic waves and on their properties. Nonlinear wave dynamics are studied, including modulated and aperiodic waves. Multiplicity of waves for the same values of parameters is observed. External stimulation in order to restore wave propagation in damaged tissue is discussed.
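The models in question are integro-differential equations of the following generic form, standard in the neural field literature (the paper's specific kernels and delays may differ):

```latex
% Generic delayed neural field equation: u is the mean membrane potential,
% w the (possibly asymmetric) connectivity kernel, S a sigmoidal firing rate,
% and \tau a response delay.
\begin{align*}
\frac{\partial u}{\partial t}(x,t) = -u(x,t)
  + \int_{-\infty}^{\infty} w(x-y)\, S\bigl(u(y,\,t-\tau)\bigr)\,dy
\end{align*}
```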
Fractional Riccati Equation and Its Applications to Rough Heston Model
Siow W. Jeng, Adem Kilicman
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Fractional Riccati equation; Rough volatility models; Heston model

Rough volatility models were popularized by Gatheral et al. (2018), who showed that empirical volatility in the financial market is extremely consistent with rough volatility. The fractional Riccati equation, which arises in the computation of the characteristic function of the rough Heston model, is not known in explicit form as of now, and we must therefore rely on numerical methods to obtain a solution. In this paper, we give a short introduction to option pricing theory and an overview of the current advancements on the rough Heston model.
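The fractional Riccati equation referred to is, in the notation of El Euch and Rosenbaum (conventions vary slightly across the literature, so treat this as an assumed form):

```latex
% Fractional Riccati equation for the rough Heston characteristic function,
% with a the Fourier variable, \alpha = H + 1/2 the fractional order,
% \rho the correlation, \nu the vol-of-vol and \lambda the mean reversion.
\begin{align*}
D^{\alpha} h(a,t) &= \tfrac{1}{2}\left(-a^{2} - ia\right)
  + (ia\rho\nu - \lambda)\,h(a,t)
  + \tfrac{\nu^{2}}{2}\,h^{2}(a,t),
& I^{1-\alpha}h(a,0) &= 0.
\end{align*}
```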
Treatment with DHA Improves Epidermal Keratinocyte Differentiation and Ameliorates Inflammation in Human Keratinocytes and Reconstructed Human Epidermis Models
Tinghan Jia, Wu Qiao, Qifeng Yao, Wenhui Wu, Ken Kaku
Subject: Life Sciences, Molecular Biology Keywords: DHA; reconstructed human models; filaggrin; skin barrier; inflammation

Atopic dermatitis (AD) is a chronic inflammatory skin disease which can damage skin barrier function. Although co-incubation with docosahexaenoic acid (DHA) exerts a positive effect in deficient skin models, no study has investigated the effects of topical treatment with DHA in an inflammatory reconstructed human epidermis (RHE) model. The effects of DHA on monolayer normal human epidermal keratinocyte (NHEK) cells were evaluated via CCK-8, qPCR and ELISA. Skin-related barrier function was assessed by hematoxylin-eosin (HE) staining, western blot (WB), immunohistofluorescence (IF) and ELISA in normal and inflammatory RHE models. DHA upregulated filaggrin and loricrin expression at the mRNA level and suppressed the overexpression of TNF-α, IL-1α and IL-6 stimulated by poly I:C plus LPS (stimulation cocktail) in cultured NHEK cells. After topical treatment with DHA, cocktail-induced inflammatory characteristics of skin disease, including barrier morphology, differentiation protein expression and TSLP secretion, were alleviated in RHE models. Supplementation with DHA improved barrier-related function and exerted anti-inflammatory effects in monolayer keratinocytes and RHE models, indicating that DHA may have potential value for the treatment of inflammation-associated skin diseases.

Monin-Obukhov Similarity Theory for Modeling of Wind Turbine Wakes under Atmospheric Stable Conditions: Breakdown and Modifications
Xing Xing Han, De You Liu, Chang Xu, Wen Zhong Shen, Lin Min Li, Fei Fei Xue
Subject: Engineering, Energy & Fuel Technology Keywords: wind turbine; wake; atmospheric stability; MOST; turbulence models

Monin-Obukhov similarity theory (MOST) overestimates wind shear in some atmospheric stable conditions, i.e., for Richardson number $R_f<0.25$. The overestimated wind shear, which leads to an under-predicted friction velocity and a lower ambient turbulence intensity for a given hub-height reference wind speed and a given roughness length, could influence wake modeling of a wind turbine. This work investigates the side effects of the breakdown of MOST on wake modeling under stable conditions and makes some modifications to the flow similarity functions to eliminate these side effects. Based on a field measurement campaign in a wind farm, we first show that MOST predicts a larger wind shear for atmospheric stability parameter $\zeta>0.1$, and we propose new flow similarity functions without constraining $R_f$ to limit the wind shear overestimated by MOST. Next, different turbulence models based on MOST and a modified one based on the new similarity functions are investigated through numerical simulations. These turbulence models are combined with the actuator disk model (AD) and the Reynolds-averaged Navier-Stokes equations (RANS) to model wind turbine wakes under stable conditions. Compared to measurements, numerical results show that turbulence models based on MOST result in larger wake deficits and a slower wake recovery rate, with a root-mean-squared error (RMSE) of the wake deficit in the range of 0.07-0.18. This overestimated wake effect is improved by applying the new similarity functions, and the RMSE of the wake deficit is reduced by 0.05 on average. Finally, we check the role that the under-predicted turbulence intensity plays in the larger wake deficit predicted by MOST-based models. Additional numerical simulations using the modified turbulence model are carried out, in which the roughness length is reduced to impose a hub-height ambient turbulence intensity equivalent to the MOST case. Simulation results show that reducing the turbulence intensity enhances wake effects; however, it cannot reproduce the large wake deficit predicted by models based on MOST, which suggests that the overestimated wake effect under MOST could also be related to the overestimated wind shear.
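For context, the stable-side Businger-Dyer similarity function and the resulting MOST wind profile take the following standard forms (the paper proposes modified functions; these are the textbook baselines):

```latex
% Stable-side Businger-Dyer similarity function and MOST log-linear wind
% profile, with \kappa \approx 0.4 the von Karman constant, u_* the friction
% velocity, z_0 the roughness length, L the Obukhov length and \beta \approx 5.
\begin{align*}
\phi_m(\zeta) &= 1 + \beta\,\zeta,
& u(z) &= \frac{u_*}{\kappa}\left[\ln\frac{z}{z_0} + \beta\,\frac{z}{L}\right],
& \zeta &= \frac{z}{L}.
\end{align*}
```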
Sigma Phase: Nucleation and Growth
Gláucio Fonseca, Priscila Mendes, Ana Silva
Subject: Materials Science, Metallurgy Keywords: Sigma Phase; Contiguity; Kinetics; Cahn models; 3D Reconstruction

Duplex Stainless Steels (DSS) and Superduplex Stainless Steels (SDSS) are an important class of stainless steels because they combine the benefits of the austenite and ferrite phases, resulting in steels with better mechanical properties and higher corrosion resistance. Due to these characteristics, they are widely employed in various industries. However, the appearance of deleterious phases in their microstructure impairs the properties of DSS and SDSS. Among the deleterious phases, the main one is the sigma phase (σ), which can nucleate when the steel is exposed to the temperature range between 650 °C and 900 °C, reducing its toughness and resistance to corrosion. In a previous work, Fonseca and collaborators used two descriptors of the microstructural path to analyze the formation of the sigma phase: SV, the interfacial area per unit volume between sigma and austenite, and <λ>, the mean chord length of sigma, both as functions of VV, the volume fraction of sigma; this is known in the literature as the partial microstructural path (MP). In this work, the contiguity ratio is applied for the first time to describe the microstructural path in the study of sigma phase precipitation in SDSS. The contiguity ratio showed that the distribution of the ferrite/sigma boundaries is homogeneous. Thus, it is reasonable to infer a uniform distribution of sigma phase nuclei within the ferrite. Regarding the kinetics of sigma phase formation, the DSS can be described by the classical JMAK equation, whereas for the SDSS the kinetics tends to follow the Cahn model for grain-edge nucleation. Finally, we present the 3D reconstruction of the sigma phase in SDSS. The results demonstrate that the sigma phase nucleates at the edges of the ferrite/austenite interfaces. Moreover, the sigma phase grows by consuming the ferrite, but it is not fully interconnected.
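The classical JMAK kinetics mentioned above take the following standard form, with VV the transformed (sigma) volume fraction:

```latex
% Johnson-Mehl-Avrami-Kolmogorov (JMAK) transformation kinetics.
% k groups nucleation and growth rates; the Avrami exponent n reflects the
% dimensionality of nucleation sites, which Cahn's models tie to grain
% boundaries, edges, or corners.
\begin{align*}
V_V(t) = 1 - \exp\left(-k\,t^{\,n}\right)
\end{align*}
```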
The Effect of High Positive Autocorrelation on the Performance of Garch Family Models
Ngozi G. Emenogu, Monday Osagie Adenomon
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: financial time series; autocorrelation; models; GARCH; RMSE; MAE

This study compared the performance of five family Generalized Auto-Regressive Conditional Heteroscedastic (fGARCH) models (sGARCH, gjrGARCH, iGARCH, TGARCH and NGARCH) in the presence of high positive autocorrelation. To achieve this, financial time series were simulated with autocorrelation coefficients ρ = (0.8, 0.85, 0.9, 0.95, 0.99) at different time series lengths (250, 500, 750, 1000, 1250, 1500), and each trial was repeated 1000 times in the R environment using the rugarch package. The performance of the preferred model was judged using the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). Results from the simulation revealed that the performance of these GARCH models varies with the autocorrelation values and the time series lengths. Overall, however, the NGARCH model dominates, with 62.5% and 59.3% using RMSE and MAE, respectively. We therefore recommend that investors, financial analysts and researchers interested in stock prices and asset returns adopt the NGARCH model when there is high positive autocorrelation in the financial time series data.
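A Python analogue of one simulation trial from this design, using the `arch` package in place of R's rugarch; the autocorrelation value, series length and GARCH order shown are one illustrative cell of the study's grid.

```python
# Simulate an AR(1) series with high positive autocorrelation and fit a
# GARCH(1,1) with an AR(1) mean, mirroring one trial of the study's setup.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
rho, n = 0.95, 1000
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.normal()

res = arch_model(y, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
print(res.params)
```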
In existing models applied to full-scale RO desalination plants, neither the spacer geometry of membranes nor the efficiency and frequency of chemical cleanings - which play an important role in the performance of this process - are considered. Influence of Parameter Sensitivity and Uncertainty on Projected Runoff in the Upper Niger Basin under a Changing Climate Ganiyu Titilope Oyerinde, Bernd Diekkrüger Subject: Earth Sciences, Environmental Sciences Keywords: climate change; hydrology; rainfall-runoff models; model uncertainty Hydro-climatic projections in West Africa are subject to high uncertainties that are difficult to quantify. This study assesses the influence of the parameter sensitivities and uncertainties of three rainfall-runoff models on simulated discharge in current and future times, using meteorological data from 8 Global Climate Models. The IHACRES Catchment Moisture Deficit (IHACRES-CMD) model, the GR4J and the Sacramento model were chosen for this study. During model evaluation, 10,000 parameter sets were generated for each model and used in a sensitivity and uncertainty analysis based on the Generalized Likelihood Uncertainty Estimation (GLUE) method. Out of the three models, IHACRES-CMD recorded the highest Nash-Sutcliffe Efficiency (NSE) of 0.92 and 0.86 for the calibration (1997-2003) and validation (2004-2010) periods, respectively. The Sacramento model was able to adequately predict low-flow patterns in the catchment, while GR4J and IHACRES-CMD overestimate and underestimate low flow, respectively. The use of multiple hydrological models to reduce uncertainties caused by model approaches is recommended, along with other methods of sustainable river basin management. A Brief History of Long Memory: Hurst, Mandelbrot and the Road to ARFIMA Timothy Graves, Robert B. Gramacy, Nicholas W. Watkins, Christian L. E. Franzke Subject: Mathematics & Computer Science, General Mathematics Keywords: long-range dependence; Hurst effect; fractionally differenced models; Mandelbrot Long memory plays an important role in many fields by determining the behaviour and predictability of systems; for instance, climate, hydrology, finance, networks and DNA sequencing. In particular, it is important to test if a process is exhibiting long memory since that impacts the accuracy and confidence with which one may predict future events on the basis of a small amount of historical data. A major force in the development and study of long memory was the late Benoit B. Mandelbrot. Here we discuss the original motivation of the development of long memory and Mandelbrot's influence on this fascinating field. We will also elucidate the sometimes contrasting approaches to long memory in different scientific communities. Backtesting the Lee-Carter and the Cairns-Blake-Dowd Stochastic Mortality Models on Italian Death Rates Carlo Maccheroni, Samuel Nocito Subject: Social Sciences, Econometrics & Statistics Keywords: lee-carter; cairns-blake-dowd; mortality models; backtesting The work proposes a backtesting analysis comparing the Lee-Carter and the Cairns-Blake-Dowd mortality models, employing Italian data. The mortality data come from the Italian National Statistics Institute (ISTAT) database and span the period 1975-2014, over which we computed back-projections evaluating the performance of the models in comparison with real data. We propose three different backtest approaches, evaluating the goodness of short-run forecasts versus medium-length ones.
We find that neither model was able to capture the mortality improvement shock observed for the male population over the analyzed period. Moreover, the results suggest that CBD forecasts are reliable mainly for ages above 75, and that LC forecasts are generally more accurate for these data. Goodness-of-Fit Tests for Copulas of Multivariate Time Series Bruno Rémillard Subject: Social Sciences, Econometrics & Statistics Keywords: goodness-of-fit; time series; copulas; GARCH models In this paper, we study the asymptotic behavior of the sequential empirical process and the sequential empirical copula process, both constructed from residuals of multivariate stochastic volatility models. Applications for the detection of structural changes and specification tests of the distribution of innovations are discussed. It is also shown that if the stochastic volatility matrices are diagonal, which is the case if the univariate time series are estimated separately instead of being jointly estimated, then the empirical copula process behaves as if the innovations were observed; a remarkable property. As a by-product, one also obtains the asymptotic behavior of rank-based measures of dependence applied to residuals of these time series models. Preprint CONFERENCE PAPER | doi:10.20944/preprints201612.0045.v2 Constraints on Dark Energy Models from Selected Galaxy Clusters (S-Z + X-Ray Data) and Gravitational Lensing Data Alexander Bonilla Rivera, Jairo Ernesto Castillo Hernandez Subject: Physical Sciences, General & Theoretical Physics Keywords: Sunyaev-Zeldovich effect, galaxy clusters, dark energy models The Sunyaev-Zeldovich effect (SZe) is a global distortion of the Cosmic Microwave Background (CMB) spectrum as a result of its interaction with the hot electron plasma in the intracluster medium of large gravitationally virialized structures such as galaxy clusters. Furthermore, this hot gas of electrons emits X-rays due to its fall into the gravitational potential well of the cluster. The analysis of SZe and X-ray data provides a method for calculating distances to galaxy clusters at any redshift (the angular diameter distance (dA) and gas mass fraction (fgas)). On the other hand, many of these galaxy clusters produce a strong gravitational lensing effect (SGL), which has become a useful astrophysical tool for cosmology. We use these cosmological tests, in addition to the more traditional ones (SNIa, CMB, BAO), to constrain alternative models of dark energy (ωCDM, CPL, IDE, EDE) and study the history of expansion through the cosmographic parameters (H(z), q(z), j(z)). Using the Akaike and Bayesian Information Criteria (AIC, BIC) we find that the ωCDM and ΛCDM models are the most favored by the observational data. In addition, we found that at low redshift a peculiar slowdown of the acceleration appears, which occurs only in dynamical dark energy models when using only galaxy cluster data (dA,clusters + fgas). Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities Frank Nielsen, Ke Sun Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: information geometry; mixture models; log-sum-exp bounds Information-theoretic measures such as the entropy, the cross-entropy and the Kullback-Leibler divergence between two mixture models are core primitives in many signal processing tasks.
Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte-Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on the entropy, the cross-entropy and the Kullback-Leibler divergence of mixtures. We illustrate the versatility of the method by reporting on our experiments for approximating the Kullback-Leibler divergence between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures, and Gamma mixtures. Energy Consumption Prediction Using Machine Learning; A Review Amir Mosavi, Abdullah Bahmani Subject: Engineering, Energy & Fuel Technology Keywords: energy consumption; prediction; machine learning models; deep learning models; artificial intelligence (AI); computational intelligence (CI); forecasting; soft computing (SC) Machine learning (ML) methods have recently contributed substantially to the advancement of the prediction models used for energy consumption. Such models greatly improve the accuracy, robustness, precision and generalization ability of conventional time series forecasting tools. This article reviews the state of the art of machine learning models used in the general application of energy consumption. Through a novel search and taxonomy, the most relevant literature in the field is classified according to the ML modeling technique, energy type, prediction type, and application area. A comprehensive review of the literature identifies the major ML methods and their applications, and discusses the evaluation of their effectiveness in energy consumption prediction. This paper further draws conclusions on the trends and effectiveness of the ML models. As a result, this research reports an outstanding rise in accuracy and an ever-increasing performance of prediction technologies using novel hybrid and ensemble prediction models. Statistical Inference for Finite Mixture of Matrix Variate t-distribution Xinyu Yang, Leigang Dong, Zhichuan Zhu, Weisan Wu Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Matrix variate distribution; Mixture models; EM-algorithm; Penalized likelihood In the era of big data, with increasingly complex data structures and ever-larger data scales, matrix-type data are becoming highly valued, and their applications in the fields of medicine, industry, education, geography, and astronomy are growing in extent. In recent years, significant progress has been made in the practical use of matrix variate t-distribution finite mixture models for handling data, in order to address the issues of multi-subgroup structures and long data tails. In this paper, the expectation-maximization (EM) algorithm with penalized maximum likelihood is proposed to resolve the unboundedness of the likelihood function of the model, which arises from the degeneracy of its variance-covariance matrix. The approach was evaluated on simulated and real data, and the results demonstrate that our model is effective both in preventing the likelihood function from being unbounded and in ensuring the accuracy of the parameters estimated by the EM algorithm.
Plasmonic coupling in closed-packed ordered gallium nanoparticles S. Catalán-Gómez1, C. Bran2, M. Vázquez2, L. Vázquez2, J. L. Pau1 & A. Redondo-Cubero1 Scientific Reports volume 10, Article number: 4187 (2020) Subjects: Nanophotonics and plasmonics; Surface patterning; Synthesis and processing Plasmonic gallium (Ga) nanoparticles (NPs) are well known to exhibit good performance in numerous applications such as surface enhanced fluorescence and Raman spectroscopy or biosensing. However, to reach the optimal optical performance, the strength of the localized surface plasmon resonances (LSPRs) must be enhanced, particularly by suitably narrowing the NP size distribution, among other factors. With this purpose, our previous work demonstrated the production of hexagonal ordered arrays of Ga NPs by using templates of aluminium (Al) shallow pit arrays, whose LSPRs were observed in the VIS region. The quantitative analysis of the optical properties by spectroscopic ellipsometry confirmed an outstanding improvement of the LSPR intensity and full width at half maximum (FWHM) due to the imposed ordering. Here, by engineering the template dimensions, and therefore by tuning the Ga NP size, we expand the LSPRs of the Ga NPs to cover a wider range of the electromagnetic spectrum, from the UV to the IR regions. More interestingly, the factors that cause this improvement in optical performance are studied with the universal plasmon ruler equation, supported by discrete dipole approximation simulations. The results allow us to conclude that the plasmonic coupling between NPs originating in the ordered systems is the main cause of the optimized optical response. The interaction between metallic nanoparticles (NPs) and electromagnetic radiation has been the driving force behind studies of the light-matter interaction during the last decades1,2,3. Due to their free electrons, metallic NPs are capable of concentrating and amplifying the electric near-field in the vicinity of their surfaces. The electron oscillations (plasmons) resonate with light at a certain frequency that is commonly known as the localized surface plasmon resonance (LSPR). This frequency strongly depends on the NP size, shape, contact angle, environment and, especially, on the metal type4. For instance, in the most studied elements, silver (Ag) and gold (Au), the LSPRs are quite restricted to the visible (VIS) region5 due to their low losses and interband transitions in the ultraviolet (UV) region. Thus, during the last years, there has been an effort in the scientific community to search for alternative metals6,7,8. Among others, liquid gallium (Ga) has emerged as an ideal plasmonic candidate9,10 since its LSPRs can be tuned from the UV to the infrared (IR) due to the lack of strong interband transitions in this wide region11, in contrast to other candidates such as Al12, Cu13 or Ni14. This spectral tunability can be achieved by means of different methods: changing the NP size15, contact angle or substrate16,17, varying the gallium oxide shell thickness18, by hybridization with other plasmonic NPs19 or by alloying9,20.
In addition to this, Ga NPs can be grown by a facile, fast and up-scalable method such as Joule-effect thermal evaporation21,22. This synthesis technique produces self-assembled hemispherical NPs formed by a liquid Ga core and a self-limiting gallium oxide (Ga2O3) shell, formed when exposed to air, that protects them from the environment without significantly affecting the LSPRs23. Interestingly, the liquid nature of the core can be changed by using pulsed light24,25. The mechanism controlling the growth of Ga NPs is coalescence, and it takes place on a wide range of substrates26. Consequently, the typical size distribution obtained is non-uniform, with the biggest NPs surrounded by smaller ones18. The main disadvantage of these broad size distributions is that the optical performance is reduced, since NPs of different sizes resonate at different frequencies, resulting in relatively broad LSPRs of moderate intensity. Furthermore, the interparticle spacing is not well controlled and plasmonic coupling can hardly take place. Despite this inconvenience, different applications based on Ga NPs have been demonstrated, such as surface enhanced Raman spectroscopy (SERS)27,28,29, surface enhanced fluorescence30,31, Li-ion batteries32, waveguiding33,34, optical switching35,36, phase-change memories37 and the development of biosensors for the detection of different diseases38,39. With the aim of advancing these applications, several recent works have reported different approaches that have improved the size distributions of Ga NPs using IR light40,41, corrugated Cu films29, polymer nanostructured templates42 and even 2D materials31. Although in all these cases the NPs were aligned and the size distributions were more homogeneous, NPs of different sizes still coexist. In the literature, one of the most successful approaches for the ordering of NPs has been the use of Al nanostructured templates composed of shallow pit arrays43. These substrates are produced by anodized aluminium oxide, commonly known as alumina, which has been widely used for many applications44 and for nanostructure manufacturing45,46. Indeed, ordered distributions of CdTe47, In48, Ag49, Au50,51 and Al52 NPs have been obtained from these templates. Based on this idea, we previously communicated the production of uniform size distributions of highly ordered Ga NPs53. However, that work was restricted to experimental data for a single NP size, whose LSPR was placed in the VIS range. Furthermore, the improved optical response caused by the ordering was not investigated in depth. In this work, we report the fabrication of ordered arrays of Ga NPs of different sizes by changing the template pattern, in order to spread the LSPRs over a wider range of the electromagnetic spectrum, from the UV to the IR. Moreover, we investigate the origin of the improvement in optical performance seen in the experimental results using discrete dipole approximation (DDA) simulations designed to represent the different observed scenarios. Lastly, we apply the plasmon ruler equation to the experimental and calculated data in order to evaluate whether plasmonic coupling takes place. Results and Discussion Hexagonally ordered Al shallow pit arrays have been produced by the anodization of Al foils, as described in the experimental section. Three different pit diameters have been obtained by using different anodization conditions: specifically, pit diameters of 40, 80 and 300 nm, as shown in the scanning electron microscopy (SEM) images of Fig. 1(a1,b1,c1).
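The hexagonal order of such templates is conveniently checked in reciprocal space. The following is a minimal sketch of that kind of check, assuming a grayscale SEM micrograph stored as "sem_image.png" (a hypothetical file name); the analysis reported below was carried out with the Gwyddion software, not with this script.

```python
# Minimal 2D-FFT ordering check of a SEM micrograph (illustrative sketch only).
import numpy as np
import matplotlib.pyplot as plt

img = plt.imread("sem_image.png")      # hypothetical file name
if img.ndim == 3:                      # collapse RGB(A) to grayscale if needed
    img = img[..., :3].mean(axis=-1)

# apodize to suppress edge artefacts, remove the mean, then transform
win = np.hanning(img.shape[0])[:, None] * np.hanning(img.shape[1])[None, :]
spec = np.fft.fftshift(np.fft.fft2((img - img.mean()) * win))
power = np.log1p(np.abs(spec))         # log scale makes the satellite spots visible

plt.imshow(power, cmap="gray")
plt.title("2D FFT: six bright satellite spots indicate hexagonal order")
plt.show()
```

A hexagonally ordered array produces six first-order satellite spots around the centre of such a map, which is the signature discussed for the insets of Fig. 1.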
A high band-pass filter was applied to the SEM images of Fig. 1(a1,b1) in order to better observe the Al pattern. The SEM images have been analysed in terms of Fourier decomposition with the Gwyddion software54. 2D spatial maps of the Fast Fourier Transform (FFT) are represented in the insets. In all the templates (a1, b1 and c1) the FFT spots exhibit the hexagonal pattern. SEM images of the Al nanostructured templates of different pit diameters (first column), Ga NPs on Al templates (second column) and on flat Si (third column). The first row corresponds to 26 mg of Ga mass, the second row to 89 mg and the third row to 256 mg. The Al templates of (a1), (b1) and (c1) have pit diameters of 40, 80 and 300 nm. Each SEM image has its respective FFT image in the inset. These Al nanopatterned substrates were used as templates for the ordering of the Ga NPs53, depositing a different Ga mass on each template in order to match the NP size to the Al pits. The pattern of the template forces the Ga NPs to coalesce into the pits, resulting in NPs with size distributions more homogeneous than those obtained on flat surfaces such as Si, sapphire or glass18,26. In the second column of Fig. 1, the SEM images of the optimized cases, when the NPs match the Al pits, are shown for the three different template pit diameters (a2, b2 and c2), corresponding to Ga masses of 26, 89 and 256 mg, respectively. On the nanopatterned Al, a uniform size distribution is formed. Indeed, the FFT maps in the insets show hexagonally ordered bright dots with long-range order, quite similar to the FFT maps of the Al templates. This confirms the uniform size distributions of the Ga NPs on the three different Al templates. In the third column of Fig. 1, the SEM images of the same Ga masses (26, 89 and 256 mg) deposited on flat Si substrates are shown for comparison. Broad size distributions are obtained in each case (Fig. 1(a3,b3,c3)), reflecting random arrangements. This is confirmed by the FFT images in the insets, which do not show any remarkable feature compared to their counterparts on the Al templates. Overall, Fig. 1 shows that a wide size range of hexagonally ordered arrays of Ga NPs can be produced by designing an appropriate nanostructured Al template45 and adapting the deposited Ga mass. Once the size distribution had been improved (i.e. narrowed), we studied the optical response of the different evaporated Ga masses by spectroscopic ellipsometry (SE). The analysis focuses on the imaginary part of the pseudodielectric constant (<ε2>), since it reflects the extinction response of the whole substrate-NP system, which is a good indicator of the far-field of the LSPRs55, as analysed later in the simulations. The typical SE spectrum of Ga NPs on flat Si consists of two resonant bands ascribed to the two different axes of their hemispherical geometry18,22,31,39: the out-of-plane resonant mode, due to the vertical and shortest axis of the NPs, typically observed in the UV region, and the in-plane resonant mode, due to the longest and horizontal axis, typically observed in the VIS-IR region. Figure 2 shows the SE measurements of the Ga NPs for the three different templates (b, d and f) and their counterpart masses on the Si substrate (a, c and e). For the lowest pit diameter (DAl = 40 nm), Ga masses from 15 to 45 mg have been deposited. On Si (Fig. 2(a)), the LSPR wavelength is placed around 400 nm for the lowest mass. This band corresponds to the in-plane or longitudinal LSPR mode of the Ga NPs and redshifts as the Ga mass is increased.
This redshift with increasing Ga mass is well established in the literature, since the NP diameter is expected to be proportional to the evaporated Ga mass, whose calibration can be found elsewhere31. The out-of-plane mode is placed in the UV (>6 eV), out of our SE spectral range in this case. In addition to these features, the LSPR intensity increases with the Ga mass, likely due to the higher scattering of bigger NPs. The SE measurements of the same Ga masses deposited on the Al template are shown in Fig. 2(b). All of them show a band at 825 nm (1.5 eV) ascribed to the interband transition of the Al template12, which is therefore not due to any plasmonic absorption. Interestingly, the LSPR intensity reaches a maximum and then decreases as the Ga mass increases. The highest intensity band (26 mg of Ga mass) corresponds to the most ordered Ga NPs shown previously in Fig. 1(a2). For Ga masses higher than 26 mg, the obtained NPs exceed the pit diameter and form dimers and trimers due to the coarsening of adjacent NPs53. As a consequence, the unimodal distribution vanishes and the LSPR intensity falls (Fig. 2(b)). In addition, other LSPR bands appear at longer wavelengths, likely due to the dimers and trimers formed, which are beyond the scope of this work. SE measurements of the set of Ga masses evaporated on the three Al templates (b,d,f) of 40, 80 and 300 nm Al pit diameter (DAl), respectively. The counterpart Ga masses on Si are presented in (a,c,e). For the intermediate pit diameter (DAl = 80 nm), Ga masses from 23 to 112 mg have been deposited. The same arguments discussed before apply to this case. However, it is worth noting that the LSPR intensities of the Ga NPs on the Al template are much higher than on the Si substrate (the y-axis range in (d) is double that in (c)), suggesting a more efficient coupling. The highest LSPR intensity on the nanopatterned template is obtained for a Ga mass of 89 mg and corresponds to the SEM image of Fig. 1(b2), where the Ga NP and Al pit dimensions coincide. The most ordered cases (72 and 89 mg) show a negative value of <ε2> around 900 nm that has no physical meaning and is likely due to interference with the substrate, as also occurs in other systems56 and in Ga NPs57. Furthermore, it is also important to mention that the out-of-plane bands appear in the UV region (200–300 nm) due to the NP size increase, and they redshift like the in-plane mode (indicated in Fig. 2(c)). This transversal mode is present only in the SE measurements of the Ga NPs on Si and not on the Al template. For the biggest pit diameter (DAl = 300 nm), Ga masses from 77 to 310 mg have been deposited on both substrates, Si and nanopatterned Al. On Si, the same LSPR trend is observed: a redshift of both the in-plane and the out-of-plane modes (Fig. 2(e)). On the Al template the behaviour is quite similar (Fig. 2(f)). For the lowest mass (77 mg) there is a strong band at 1200 nm that redshifts and increases in intensity as the Ga mass is increased. Furthermore, the main band for each Ga mass presents a shoulder at higher energies, likely due to higher-order resonance modes such as quadrupoles, given that in all cases the NP sizes are comparable to the light wavelength. In fact, similar Ga NP diameters have been demonstrated to support dipole and quadrupole modes simultaneously, as evidenced by electron energy loss spectroscopy (EELS) measurements58. However, apart from the main band, lower-intensity bands around 200–400 nm also exist.
This family of peaks belongs to the out-of-plane resonance mode, based on several pieces of evidence. Firstly, the position of these bands agrees with those on Si (Fig. 2(e)). Secondly, these bands redshift as the Ga diameter increases, similarly to the in-plane resonance mode15. Furthermore, these bands present a lower intensity than the in-plane mode and, lastly, they show a shoulder at lower wavelength, ascribed to the quadrupole, indicating their plasmonic character. Thus, the presence of this out-of-plane resonance mode entails a certain NP eccentricity. Atomic force microscopy (AFM) has been applied to characterize both the intermediate nanopatterned template and the NPs deposited on it, with the purpose of reconstructing the NP geometry. From the AFM images, profiles of the pits and the NPs have been extracted and plotted in Fig. 3(a,b), respectively, together with the topography images as insets. The pit depth and maximum NP height (apex) can be accurately interpreted, since the AFM tip can properly access those regions, as can be seen from the overlap of the 15 profiles. However, the width of the pits and NPs is better measured from the SEM images of Fig. 1(b2), due to tip convolution effects. Thus, combining the two morphological characterization techniques, SEM and AFM, we can estimate a NP diameter of 80 nm along the horizontal axis and of 60 nm along the vertical axis, corresponding to an aspect ratio of ∼0.75. The same AFM procedure has been applied to the other two templates in the most ordered cases. For the smallest template we have obtained NPs with a vertical diameter of 28 nm and a horizontal diameter of 40 nm, while for the biggest template the NPs have vertical and horizontal diameters of 213 and 300 nm, respectively. These two cases lead to aspect ratios of 0.7 and 0.71. Such values are compatible with having two separate resonances, as previously confirmed with EELS measurements59, and strengthen our premise that the LSPR band around 200–400 nm in Fig. 2(f) corresponds to an out-of-plane resonant mode. Several pit and NP profiles extracted from the AFM images, also shown as insets, of the Al template (a) and of the Ga NPs deposited on the template (b). The AFM topography image dimensions are 0.9 × 0.9 μm2. In order to evaluate the effect of the ordering imposed by the nanopatterning on the plasmonic properties of the Ga NPs, we have quantitatively studied the plasmonic characteristics of the SE measurements of Fig. 2. From an application point of view, such as biosensing, the best scenario is an intense and narrow LSPR. Thus, we have calculated as a figure of merit (η) the ratio of the maximum intensity to the full width at half maximum (FWHM). This ratio is represented in Fig. 4 as a function of the LSPR wavelength for the three different Al pit diameters and compared with the values obtained on Si, which was the substrate chosen for our previous biosensing platforms38,39. Figure of merit (η) as a function of the LSPR wavelength for the Ga NPs on the three different Al nanostructured templates and on Si. The figure of merit is calculated as the maximum intensity over the FWHM of the SE measurements of Fig. 2. The Ga NPs on Si (black squares) show an η that increases with the Ga mass, and thus with the LSPR wavelength. This increase is likely caused by the higher scattering cross section9 of the bigger NPs. The NPs on the smallest Al pit diameter (blue stars) display an η that reaches a maximum.
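As a reference for how η can be extracted from a spectrum, the following minimal sketch computes the peak intensity, the FWHM (by linear interpolation of the half-maximum crossings) and their ratio for a generic single-band spectrum; the Lorentzian test band below is a placeholder, not the measured SE data.

```python
# Figure of merit eta = peak intensity / FWHM for a single-band spectrum.
import numpy as np

def figure_of_merit(wavelength, intensity):
    """Return (eta, peak_wavelength, fwhm); assumes one dominant band."""
    i_max = int(np.argmax(intensity))
    peak = intensity[i_max]
    half = peak / 2.0
    # half-maximum crossings on each side of the peak, by linear interpolation
    left = np.interp(half, intensity[:i_max + 1], wavelength[:i_max + 1])
    right = np.interp(half, intensity[i_max:][::-1], wavelength[i_max:][::-1])
    fwhm = right - left
    return peak / fwhm, wavelength[i_max], fwhm

# toy Lorentzian band centred at 550 nm with a 100 nm FWHM (placeholder data)
wl = np.linspace(300.0, 900.0, 601)
spec = 1.0 / (1.0 + ((wl - 550.0) / 50.0) ** 2)
eta, peak_wl, fwhm = figure_of_merit(wl, spec)
print(f"peak at {peak_wl:.0f} nm, FWHM = {fwhm:.0f} nm, eta = {eta:.4f} nm^-1")
```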
In Fig. 4, the point with the highest ratio corresponds to the most ordered Ga NPs, shown previously in Fig. 1(a2). The LSPR spectra of the Ga NPs on the 80 nm (red circles) and 300 nm (green triangles) Al pit diameters also have a higher η than the Ga NPs on Si, the optimum samples being those of the SEM images of Fig. 1(b2,c2). Interestingly, the three optimized cases on the Al templates are placed in different regions of the electromagnetic spectrum. The smallest pit diameter induces the LSPR wavelength of the Ga NPs to be in the UV, while the medium and biggest pit diameters show their LSPRs in the VIS and IR regions, respectively, as indicated in Fig. 4. It is important to point out that the spectral regions missing in the graph between templates can easily be covered with a suitable pit diameter. Thus, as seen in Fig. 4, it is clear that the improvement of the plasmonic features, such as intensity and FWHM, is directly related to the optimization of the NP size distribution. In fact, in the SE spectra of Fig. 2(b,d,f) on the Al templates the same behaviour is observed: an LSPR intensity increase and an LSPR wavelength redshift as the Ga mass increases. That mass increase means that the NPs within the pits increase in size. Thus, the most logical explanation for the peak wavelength redshift is the size increase of the NPs, as happens on Si in Fig. 2(a,c,e) and is well known in the literature4. However, there is an additional factor that must be taken into account: the NP size increase implies that the NPs are closer to their neighbours, which means that the near-fields of the NPs can interact, causing a redshift in the far-field response. In order to study this effect numerically, we have performed DDA simulations of the far and near field for different scenarios, as described in the experimental section. The first scenario corresponds to a single spherical NP of 80 nm diameter with a native oxide shell of 2 nm, similar to the ordered NPs deposited on the medium template pit diameter (DAl = 80 nm) of Fig. 1(b2). Figure 5(a) presents the extinction efficiency (Qext) as a function of wavelength and energy, where the LSPR band is observed centred at 250 nm. The near-field has been evaluated at this wavelength and is illustrated in the inset in order to observe the local electric field distribution. It shows two hot-spots in the direction of the electric field, whose intensity decays rapidly with distance, in agreement with the literature2. Qext analysis as a function of the wavelength and energy for two different scenarios. (a) A single spherical core-shell Ga NP and (b) a pair of Ga NPs with a variable interparticle distance. The Qext spectrum of the single NP is also plotted in (b) for comparison. The distribution of the near electric field is illustrated in the inset of each scenario, indicating the evaluated wavelength and the electric field direction. In the (b) inset, only the near-fields for interparticle distances of 2, 4, 8 and 20 nm are shown. In the second scenario, we have simulated a pair of NPs of the same diameter (80 nm) with a variable interparticle distance from 2 to 80 nm. The Qext is plotted in Fig. 5(b). For interparticle distances lower than the NP diameter, a single band is observed in each spectrum that does not coincide with that of the single NP. This new band is due to the dipolar plasmonic coupling between the NPs.
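To illustrate why such a common red-shifted band emerges, the following toy coupled-dipole sketch (independent of the DDA calculations reported here) treats each NP as a point dipole with a quasi-static sphere polarizability and a Drude dielectric function; the Drude parameters are illustrative placeholders, not fitted Ga values. The neighbour's near field, which scales as 1/d³, lowers the resonance energy as the gap shrinks.

```python
# Toy coupled-dipole model of a longitudinal dimer mode (illustrative only).
import numpy as np

eps0 = 8.854e-12                    # vacuum permittivity (F/m)
r = 40e-9                           # NP radius (m), cf. the 80 nm diameter above
hbar_wp, hbar_g = 9.0, 0.5          # toy plasma energy and damping (eV), assumed

def drude(hw):                      # toy Drude dielectric function, hw in eV
    return 1.0 - hbar_wp**2 / (hw**2 + 1j * hbar_g * hw)

def alpha(hw):                      # quasi-static polarizability of a sphere
    e = drude(hw)
    return 4.0 * np.pi * eps0 * r**3 * (e - 1.0) / (e + 2.0)

hw = np.linspace(2.0, 8.0, 2000)    # photon energy axis (eV)
for gap in (2e-9, 8e-9, 20e-9, 80e-9):
    d = 2.0 * r + gap               # centre-to-centre distance
    a = alpha(hw)
    # each dipole is driven by E0 plus the partner near field 2p/(4 pi eps0 d^3)
    a_pair = a / (1.0 - 2.0 * a / (4.0 * np.pi * eps0 * d**3))
    peak = hw[np.argmax(np.imag(a_pair))]
    print(f"gap {gap*1e9:4.0f} nm -> coupled resonance at {peak:.2f} eV")
```

Running this prints a resonance that moves to lower energy (longer wavelength) as the gap decreases, which is the qualitative trend of Fig. 5(b).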
This coupling phenomenon was well described in the model by Kreibig and Vollmer1 and has been corroborated with experiments and simulations in several works on other metals60,61,62,63. Thus, the spectra of Fig. 5(b) reveal that when NPs are close to each other, the plasmonic coupling governs the optical signal rather than the single-NP LSPR. Interestingly, the plasmonic coupling band shows the same behaviour as in the SE spectra of Fig. 2: an increase in intensity and a peak wavelength redshift. This means that the plasmonic coupling becomes stronger as the NPs get closer. The near-field is plotted in the inset of Fig. 5(b) for NP distances of 2, 4, 8 and 20 nm. For a distance of 2 nm, the near-field image shows the most intense hot-spot in the region between the NPs, evidencing that the plasmonic coupling dominates not only the far-field response but also the near-field. In fact, a 5-fold increase of the near-field intensity is observed in the legend compared to the case of the isolated NP in the inset of Fig. 5(a). Furthermore, it can be seen in the inset images for larger interparticle distances (4, 8 and 20 nm) that the electric field intensity decreases due to the weaker interaction. Indeed, the dipolar near-field of a plasmonic particle has been demonstrated to decay as the cube of the inverse distance64, and consequently the plasmon coupling strength becomes a function of d⁻³, d being the interparticle distance65. Lastly, periodicity effects have been studied with DDA simulations of three NPs, which better represent the experimental size distribution obtained in Fig. 1(b2): on the one hand, an ordered distribution of three NPs at a distance of 2 nm and, on the other hand, an arbitrary disordered system (Fig. 6). For the ordered case, both analyses, far- and near-field, show the same results as the two-NP simulations (Fig. 5(b)). The Qext peak wavelength is placed at the same position (394 nm); see the red solid line in Fig. 6(a). Furthermore, the near-field distribution of Fig. 6(c) shows the same hot-spots between two NPs in the direction of the electric field as the NP pair. In the case of the disordered system, the LSPR, placed at shorter wavelengths (black dashed line of Fig. 6(a)), shows lower intensity than the ordered one, and its near-field distribution shows lower electric field enhancement (see the legends in (b) and (c)). In addition, the FWHM of both Qext spectra have been calculated, being 240 nm for the ordered system and 259 nm for the disordered one. The LSPR of the disordered system is likely a convolution of LSPR bands of different intensities due to the coupling of NPs at different distances. Note that in the experiments the scenario is much more complex, with NPs surrounded by other NPs not only at different distances but also with different diameters, as illustrated in Fig. 1(a3,b3,c3), which would lead to LSPRs with even larger FWHM than the simulated one. (a) Qext analysis as a function of the wavelength and energy for two different scenarios: a closed-packed ordered distribution of three NPs at a distance of 2 nm and a disordered distribution of three NPs. The distribution of the near electric field is also shown for the disordered (b) and ordered (c) cases. Taking all the DDA simulations together, the main conclusion is that plasmonic coupling dictates the optical extinction spectra when two or more NPs of the same size are positioned at distances smaller than their diameters.
In our experiments, the ordered NPs obtained are also spherical-like, have similar sizes, and their separations are smaller than their diameters. Thus, it is reasonable to consider that the LSPR bands experimentally observed in the SE measurements could be due to plasmonic coupling. Plasmonic coupling in ordered arrays of gold and silver NPs has been studied in depth65,66,67,68. Indeed, the LSPR band behaviour in those works was equivalent to that in our experiments: an intensity increase and a peak wavelength redshift. However, these two trends can be caused not only by the size increase but also by the decreasing interparticle distance. Thus, in order to differentiate between both factors, an empirical equation was found65 that describes the plasmonic coupling through the wavelength shift (∆λ) as a function of the interparticle distance (d), normalized by the LSPR wavelength of the isolated NP (λ0) and the diameter (D). This law, named the plasmon ruler equation, has been demonstrated to be universal, since it works for several diameters, geometries, dielectric constants and materials65. The normalized LSPR shift (∆λ/λ0) decays exponentially as follows: $$\frac{\Delta\lambda}{\lambda_{0}}=A\cdot\exp\left(\frac{-d/D}{\tau}\right)\qquad(1)$$ where τ is the decay rate and A is an amplitude factor related to the analysed material. In order to evaluate whether plasmonic coupling governs our SE experimental results, we have applied this formula to the set of Ga masses evaporated on the three different Al templates. We have taken the LSPR wavelength shift (Δλ) from the SE measurements of Fig. 2. The shift is calculated from λ0, which corresponds to the LSPR wavelength of the isolated-NP case, assumed for each template to be the SE measurement of the lowest evaporated Ga mass. The dimensions d and D have been taken from each SEM image, taking the average value over 25 different NPs. Figure 7(a) illustrates the data acquisition procedure for an arbitrary Ga mass of 46 mg on the Al template with 80 nm pit diameter. The Δλ obtained with respect to the 23 mg Ga mass (the lowest mass for this template) can be observed. The inset shows the SEM image where D and d are marked. From these data, values of Δλ/λ0 = 0.16 and d/D = 0.27 are obtained. (a) SE measurement, with the SEM image in the inset, from which the parameters of Eq. 1 were taken for a representative case of 46 mg of Ga on the medium Al pit diameter template; ∆λ, d and D are indicated. (b) LSPR normalized shift (Δλ/λ0) as a function of the normalized interparticle distance (d/D) for the set of Ga masses evaporated on the three different Al templates. The fit of the data from the medium Al template (red circles) according to Eq. 1 is also plotted, with the obtained decay rate (τ). The inset of (b) shows the same plot for the DDA calculations of Fig. 5(b), together with the fit. These ratios have been calculated and represented in Fig. 7(b) for each mass on the templates with pit diameters of 40 nm (blue stars), 80 nm (red circles) and 300 nm (green triangles). For the three different templates, the Ga NP LSPR normalized shift (Δλ/λ0) decays exponentially with the normalized interparticle distance (d/D). We have fitted the points obtained on the 80 nm diameter template (red circles) and obtained a decay rate (τ) of 0.22 ± 0.04. This value matches those reported by other authors for systems of nanodiscs, nanospheres and nanoparticles of Ag and Au65,67,69.
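A fit of Eq. 1 of this kind takes only a few lines. In the following minimal sketch the (d/D, Δλ/λ0) pairs are illustrative placeholders consistent with the trend of Fig. 7(b), not the tabulated experimental values.

```python
# Least-squares fit of the plasmon ruler equation (Eq. 1) with scipy.
import numpy as np
from scipy.optimize import curve_fit

def plasmon_ruler(x, A, tau):
    """Eq. 1: normalized LSPR shift vs normalized interparticle distance."""
    return A * np.exp(-x / tau)

# placeholder data roughly following an exponential decay with tau ~ 0.22
x = np.array([0.05, 0.10, 0.20, 0.27, 0.40, 0.55])   # d/D (assumed)
y = np.array([0.44, 0.35, 0.22, 0.16, 0.09, 0.05])   # dlambda/lambda0 (assumed)

(A, tau), cov = curve_fit(plasmon_ruler, x, y, p0=(0.3, 0.2))
A_err, tau_err = np.sqrt(np.diag(cov))
print(f"A = {A:.2f} +/- {A_err:.2f}, tau = {tau:.2f} +/- {tau_err:.2f}")
```

With data of this shape the fit returns a decay rate close to the 0.22 quoted above, which is the kind of consistency check performed in Fig. 7(b).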
The fits for the smallest and biggest template diameters give rise to a τ of 0.16 ± 0.02 and 0.16 ± 0.03, respectively. Although these values fall within the range reported in the literature, they could be underestimated, since the maximum values of d/D are 0.55–0.65. According to the DDA simulations of Fig. 5(b), it is necessary to have two NPs with a d/D ratio of approximately 1 to assume that there is no plasmonic coupling between them. In other words, in these cases the NPs evaporated from the lowest Ga masses are not sufficiently separated to consider them isolated in terms of plasmonic coupling, and consequently the τ value obtained is likely lower than expected. In the inset of Fig. 7(b), the LSPR normalized shift is represented as a function of the normalized interparticle distance for the DDA simulations of Fig. 5(b). The LSPR wavelength shift has been calculated from the Qext analysis, as also done in the literature65. In this case, the points also fulfil the universal plasmon ruler equation, and the τ value obtained from the fit is 0.23 ± 0.01. This value coincides with the value acquired from the experimental data analysis in the same Fig. 7(b). Thus, based on the agreement between the DDA simulations and the experimental results, it is demonstrated that the LSPRs of the ordered Ga NPs of the three different sizes are caused by plasmonic coupling, since the universal plasmon ruler equation is fulfilled. The understanding of the optical properties of these platforms will be crucial for an appropriate design of future plasmonic-based devices. Furthermore, it is worth noting that the NP size and interparticle distance are sub-wavelength in most of the cases. This fact, together with the excellent arrangement and the LSPR tunability over a wide spectral range, makes this system a promising metamaterial candidate. In summary, we have successfully produced hexagonally ordered arrays of Ga NPs of 40, 80 and 300 nm diameter by using Al nanostructured templates. As a consequence of the ordering and the narrow size distribution, the optical response of these NPs is considerably better than that of NPs deposited on flat Si in terms of both the intensity and the FWHM of the LSPRs, in the UV, VIS and IR regions. The origin of the LSPRs of the ordered NPs has been investigated experimentally, with the universal plasmon ruler equation, and numerically, with DDA simulations. In all the studied scenarios, the LSPR wavelength shift decays exponentially with the interparticle distance, with a decay rate around 0.2, in agreement with the literature. These results confirm plasmonic coupling as the cause of the LSPRs of the ordered NPs. The enhanced optical performance due to the improvement of the NP size distributions, and its comprehension, will contribute to the development of applications based on these ordered NPs, such as biosensors. Experimental and Simulation Methods Growth of the nanopatterned template Al nanopatterned substrates of different pit diameters were used as templates for the ordering of the Ga NPs. The patterned templates were prepared from high-purity Al foils (99.999%) of 0.5 mm thickness and 25 mm diameter by an anodization process. The experimental conditions of the anodization depend on the desired pit diameter52. In this work, three different pit diameters have been used, whose growth procedures are described as follows.
For a 40 nm pit diameter: the anodization was done for 16 h in a sulphuric acid electrolyte (2.15 M) under a constant voltage of 20 V and a constant temperature between 0 and 1 °C70. Through this process, AAO pores are formed, also known as an alumina membrane. Since our aim is the Al nanostructure below the alumina, the grown pores are chemically etched by immersion in a mixture of 0.18 M chromic oxide (H2CrO4) and 0.72 M phosphoric acid (H3PO4) for 24 h. The result is a hexagonal Al array of ordered shallow pits, as shown elsewhere53. For an 80 nm pit diameter: the Al is anodized for 24 h in an oxalic acid electrolyte (0.3 M) under a constant voltage of 40 V and a constant temperature of 3 °C71. Then the AAO is removed with the same selective chemical etching described before. For a 300 nm pit diameter: hard anodization72,73 is done in an oxalic acid electrolyte (0.3 M) with 5% ethanol at a constant temperature of 0 °C. Firstly, a voltage of 80 V is applied for 900 s in order to create a protective Al oxide layer that prevents subsequent rupture. Then, the voltage is increased up to 130 V at a rate of 0.08 V/s and kept there for 3600 s. After the anodization, the AAO is removed. Ga deposition The Ga NPs were grown by Joule-effect thermal evaporation. The process takes place at a base pressure of 2 × 10⁻⁷ mbar in a vertical Edwards E306 vacuum chamber. Ga (99.9999% purity) is evaporated from a tungsten filament (99.90% purity) at a power of 50 W. The Al template samples, as well as Si (100) reference samples, were placed 200 mm away from the Ga source. The size of the Ga NPs is controlled by the Ga mass added to the crucible, as shown elsewhere31. In this work, the Ga mass has been varied from 15 mg to 310 mg, which produces NPs with diameters from 20 to 400 nm. Optical and morphological characterization After the Ga NP growth, the optical properties of the samples on the Al templates and on the reference Si were measured by SE using a Woollam M-2000 ellipsometer (J.A. Woollam Inc). The measurements were taken at an incident angle of 75° within the spectral range from 200 to 1700 nm. The analysis was focused on the imaginary part of the pseudodielectric constant (<ε2>), with the aim of comparing it with the extinction efficiency of the later simulations. <ε2> was obtained from the ellipsometric parameters psi (ψ) and delta (Δ)74. The morphology of the Ga NPs and the Al templates was analysed by SEM and AFM. The SEM is a FEI XL30-SFEG system, operating with a 10 keV electron beam and a nominal lateral resolution of 4 nm, with the secondary electrons collected and analysed with an Everhart-Thornley detector. The AFM is an Agilent PicoPlus 5500 system. The topographical images were obtained in dynamic mode with Si cantilevers whose tips have a nominal radius of 8 nm and a force constant of 40 N/m (Bruker). Images were analysed and post-processed with the Gwyddion software54. Discrete dipole approximation simulations In order to study the origin of the LSPR bands observed by SE with numerical calculations, we have carried out simulations with the DDA method using the DDSCAT 7.2 code75. The three scenarios used in these simulations were created by a target generation tool executed in Matlab76: a single core-shell spherical Ga NP, a pair of NPs with a variable interparticle distance, and an array of three NPs at a distance of 2 nm. Liquid Ga surrounded by its protective oxide shell (Ga2O3) constituted the core-shell structure.
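As a rough illustration of how such a core-shell target is discretized (the actual targets were generated with the Matlab tool cited above, not with this script), the following sketch places dipoles on a cubic lattice and counts core and shell dipoles, assuming the dimensions given in this section; the exact count depends on how the lattice is offset, so it only comes out close to the number quoted below.

```python
# Discretizing a core-shell sphere on a cubic dipole lattice (sketch only).
import numpy as np

spacing = 2.0                                # dipole lattice spacing (nm)
r_out, r_core = 40.0, 38.0                   # outer radius and Ga core radius (nm)

n = int(np.ceil(r_out / spacing))
axis = (np.arange(-n, n) + 0.5) * spacing    # symmetric, half-offset lattice
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
r2 = x**2 + y**2 + z**2

core = r2 <= r_core**2                       # liquid Ga dipoles
shell = (r2 > r_core**2) & (r2 <= r_out**2)  # Ga2O3 oxide dipoles
print(f"core dipoles:  {core.sum()}")
print(f"shell dipoles: {shell.sum()}")
print(f"total:         {core.sum() + shell.sum()}")  # close to the 33401 quoted below
```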
The diameter of the NPs was chosen to be 80 nm, with an oxide thickness of 2 nm, in order to represent the NPs with the best experimental results. A dipole lattice spacing of 2 nm was used in all cases, giving 33401 dipoles for the isolated NP. The error tolerance for convergence of the calculations was set to 10⁻⁵ at each wavelength77. The dielectric constants of the Ga and Ga2O3 materials were obtained from the literature78,79. The analysis was focused on Qext, calculated as the ratio of the extinction cross-section to the geometrical cross-section. Kreibig, U. & Vollmer, M. Optical properties of metal clusters. Vol. 25 (Springer, 1995). Maier, S. A. Plasmonics: Fundamentals and Applications. (Springer Science & Business Media, 2007). Stockman, M. I. Nanoplasmonics: past, present, and glimpse into future. Opt. Express 19, 22029–22106, https://doi.org/10.1364/OE.19.022029 (2011). García, M. A. Surface plasmons in metallic nanoparticles: fundamentals and applications. J. Appl. Phys. 45, 389501, https://doi.org/10.1088/0022-3727/45/38/389501 (2012). Hubenthal, F., Ziegler, T., Hendrich, C., Alschinger, M. & Träger, F. Tuning the surface plasmon resonance by preparation of gold-core/silver-shell and alloy nanoparticles. Eur. Phys. J. D 34, 165–168, https://doi.org/10.1140/epjd/e2005-00138-1 (2005). Naik, G. V., Shalaev, V. M. & Boltasseva, A. Alternative Plasmonic Materials: Beyond Gold and Silver. Adv. Mat. 25, 3264–3294, https://doi.org/10.1002/adma.201205076 (2013). West, P. R. et al. Searching for better plasmonic materials. Laser & Photonics Reviews 4, 795–808, https://doi.org/10.1002/lpor.200900055 (2010). Hsieh, W. T. et al. Comparative Analysis of Metals and Alternative Infrared Plasmonic Materials. ACS Photonics 5, 2541–2548, https://doi.org/10.1021/acsphotonics.7b01166 (2018). Sanz, J. M. et al. UV Plasmonic Behavior of Various Metal Nanoparticles in the Near- and Far-Field Regimes: Geometry and Substrate Effects. J. Phys. Chem. C 117, 19606–19615, https://doi.org/10.1021/jp405773p (2013). McMahon, J. M., Schatz, G. C. & Gray, S. K. Plasmonics in the ultraviolet with the poor metals Al, Ga, In, Sn, Tl, Pb, and Bi. Phys. Chem. Chem. Phys. 15, 5415–5423, https://doi.org/10.1039/C3CP43856B (2013). Hunderi, O. & Ryberg, R. Band structure and optical properties of gallium. J. Phys. F: Metal Phys. 4, 2084 (1974). Diest, K., Liberman, V., Lennon, D. M., Welander, P. B. & Rothschild, M. Aluminum plasmonics: optimization of plasmonic properties using liquid-prism-coupled ellipsometry. Opt. Express 21, 28638–28650, https://doi.org/10.1364/OE.21.028638 (2013). Wang, H., Tam, F., Grady, N. K. & Halas, N. J. Cu Nanoshells: Effects of Interband Transitions on the Nanoparticle Plasmon Resonance. The Journal of Physical Chemistry B 109, 18218–18222, https://doi.org/10.1021/jp053863t (2005). Schuermans, S., Maurer, T., Martin, J., Moussy, J.-B. & Plain, J. Plasmon/interband transitions coupling in the UV from large scale nanostructured Ni films. Opt. Mater. Express 7, 1787–1793, https://doi.org/10.1364/OME.7.001787 (2017). Wu, P. C. et al. Real-time plasmon resonance tuning of liquid Ga nanoparticles by in situ spectroscopic ellipsometry. Applied Physics Letters 90, 103119, https://doi.org/10.1063/1.2712508 (2007). Albella, P. et al. Shape Matters: Plasmonic Nanoparticle Shape Enhances Interaction with Dielectric Substrate. Nano Lett.
11, 3531–3537, https://doi.org/10.1021/nl201783v (2011). Wu, P. C. et al. Plasmonic Gallium Nanoparticles on Polar Semiconductors: Interplay between Nanoparticle Wetting, Localized Surface Plasmon Dynamics, and Interface Charge. Langmuir 25, 924–930, https://doi.org/10.1021/la802678y (2009). Catalán-Gómez, S., Redondo-Cubero, A., Palomares, F. J., Nucciarelli, F. & Pau, J. L. Tunable plasmonic resonance of gallium nanoparticles by thermal oxidation at low temperatures. Nanotechnology 28, 405705 (2017). Gordillo, N., Catalán-Gómez, S., Pau, J. L. & Redondo-Cubero, A. Spectrally broad plasmonic absorption in Ga and In nanoparticle hybrids. Nanotechnology 30, 475705, https://doi.org/10.1088/1361-6528/ab3c73 (2019). Zhang, T. et al. Gallium platinum alloys - a new material system for UV plasmonics. Opt. Mater. Express 7, 2880–2887, https://doi.org/10.1364/OME.7.002880 (2017). Hernández, M. J. et al. Gallium-assisted growth of silicon nanowires by electron cyclotron resonance plasmas. Nanotechnology 21, 455602 (2010). Catalán-Gómez, S. et al. Size-selective breaking of the core–shell structure of gallium nanoparticles. Nanotechnology 29, 355707 (2018). Gutierrez, Y. et al. How an oxide shell affects the ultraviolet plasmonic behavior of Ga, Mg, and Al nanostructures. Opt. Express 24, 20621–20631, https://doi.org/10.1364/OE.24.020621 (2016). Soares, B. F., MacDonald, K. F., Fedotov, V. A. & Zheludev, N. I. Light-Induced Switching between Structural Forms with Different Optical Properties in a Single Gallium Nanoparticulate. Nano Letters 5, 2104–2107, https://doi.org/10.1021/nl0515652 (2005). MacDonald, K. F., Fedotov, V. A. & Zheludev, N. I. Optical nonlinearity resulting from a light-induced structural transition in gallium nanoparticles. Applied Physics Letters 82, 1087–1089, https://doi.org/10.1063/1.1543644 (2003). Losurdo, M., Suvorova, A., Rubanov, S., Hingerl, K. & Brown, A. S. Thermally stable coexistence of liquid and solid phases in gallium nanoparticles. Nat Mater 15, 995–1002, https://doi.org/10.1038/nmat4705 (2016). Wu, P. C. et al. Demonstration of Surface-Enhanced Raman Scattering by Tunable, Plasmonic Gallium Nanoparticles. J. Am. Chem. Soc. 131, 12032–12033, https://doi.org/10.1021/ja903321z (2009). Yi, C. et al. Evidence of Plasmonic Coupling in Gallium Nanoparticles/Graphene/SiC. Small 8, 2721–2730, https://doi.org/10.1002/smll.201200694 (2012). Pau, J. L., García-Marín, A., Hernández, M. J., Lorenzo, E. & Piqueras, J. Optical biosensing platforms based on Ga–graphene plasmonic structures on Cu, quartz and SiO2/Si substrates. Physica Status Solidi (b) 253, 664–670, https://doi.org/10.1002/pssb.201552493 (2016). Yang, Y., Callahan, J. M., Kim, T.-H., Brown, A. S. & Everitt, H. O. Ultraviolet Nanoplasmonics: A Demonstration of Surface-Enhanced Raman Spectroscopy, Fluorescence, and Photodegradation Using Gallium Nanoparticles. Nano Lett. 13, 2837–2841, https://doi.org/10.1021/nl401145j (2013). Catalán-Gómez, S. et al. Photoluminescence enhancement of monolayer MoS2 using plasmonic gallium nanoparticles. Nanoscale Adv. 1, 884–893, https://doi.org/10.1039/C8NA00094H (2019). Yarema, M. et al. Monodisperse Colloidal Gallium Nanoparticles: Synthesis, Low Temperature Crystallization, Surface Plasmon Resonance and Li-Ion Storage. J. Am. Chem. Soc. 136, 12422–12430, https://doi.org/10.1021/ja506712d (2014). Krasavin, A. V. & Zheludev, N. I. Active plasmonics: Controlling signals in Au/Ga waveguide using nanoscale structural transformations. Appl. Phys. Lett.
84, 1416–1418, https://doi.org/10.1063/1.1650904 (2004). Krasavin, A. V., MacDonald, K. F., Zheludev, N. I. & Zayats, A. V. High-contrast modulation of light with light by control of surface plasmon polariton wave coupling. Applied Physics Letters 85, 3369–3371, https://doi.org/10.1063/1.1808240 (2004). Bennett, P. J. et al. A photonic switch based on a gigantic, reversible optical nonlinearity of liquefying gallium. Applied Physics Letters 73, 1787–1789, https://doi.org/10.1063/1.122282 (1998). Petropoulos, P., Offerhaus, H. L., Richardson, D. J., Dhanjal, S. & Zheludev, N. I. Passive Q-switching of fiber lasers using a broadband liquefying gallium mirror. Applied Physics Letters 74, 3619–3621, https://doi.org/10.1063/1.123200 (1999). Denisyuk, A. I., MacDonald, K. F., García de Abajo, F. J. & Zheludev, N. I. Towards Femtojoule Nanoparticle Phase-Change Memory. Japanese Journal of Applied Physics 48, 03A065, https://doi.org/10.1143/jjap.48.03a065 (2009). Marín, A. G. et al. Gallium plasmonic nanoparticles for label-free DNA and single nucleotide polymorphism sensing. Nanoscale 8, 9842–9851, https://doi.org/10.1039/c6nr00926c (2016). Marín, A. G. et al. Immunosensing platform based on gallium nanoparticle arrays on silicon substrates. Biosens. Bioelectron. 74, 1069–1075, https://doi.org/10.1016/j.bios.2015.08.002 (2015). MacDonald, K. F. et al. Optical control of gallium nanoparticle growth. Applied Physics Letters 80, 1643–1645, https://doi.org/10.1063/1.1456260 (2002). Fedotov, V. A., MacDonald, K. F., Zheludev, N. I. & Emel'yanov, V. I. Light-controlled growth of gallium nanoparticles. Journal of Applied Physics 93, 3540–3544, https://doi.org/10.1063/1.1555677 (2003). Waters, R. F. et al. Templated assembly of metal nanoparticle films on polymer substrates. Applied Physics Letters 109, 263105, https://doi.org/10.1063/1.4973202 (2016). Cheng, L. et al. UV Plasmonic Resonance of Aluminum Shallow Pit Arrays. The Journal of Physical Chemistry C 119, 14304–14311, https://doi.org/10.1021/acs.jpcc.5b02674 (2015). Wen, L., Xu, R., Mi, Y. & Lei, Y. Multiple nanostructures based on anodized aluminium oxide templates. Nat. Nanotechnol. 12, 244, https://doi.org/10.1038/nnano.2016.257 (2016). Losic, D. & Santos, A. (eds) Nanoporous Alumina: Fabrication, Structure, Properties and Applications (Springer, 2015). Sousa, C. T. et al. Nanoporous alumina as templates for multifunctional applications. Appl. Phys. Rev. 1, 031102, https://doi.org/10.1063/1.4893546 (2014). Jung, M., Lee, H. S., Park, H. L. & Mho, S.-i. Fabrication of high density CdTe/GaAs nanodot arrays using nanoporous alumina masks. Current Applied Physics 6, e187–e191, https://doi.org/10.1016/j.cap.2006.01.036 (2006). Lei, Y. & Chim, W.-K. Highly Ordered Arrays of Metal/Semiconductor Core−Shell Nanoparticles with Tunable Nanostructures and Photoluminescence. Journal of the American Chemical Society 127, 1487–1492, https://doi.org/10.1021/ja043969m (2005). Zeng, Z. et al. Highly reproducible surface-enhanced Raman scattering substrate for detection of phenolic pollutants. Nanotechnology 27, 455301, https://doi.org/10.1088/0957-4484/27/45/455301 (2016). Ko, W. R. et al. Interfacial Mode Interactions of Surface Plasmon Polaritons on Gold Nanodome Films. ACS Applied Materials & Interfaces 8, 20516–20521, https://doi.org/10.1021/acsami.6b02243 (2016). Fan, X. et al. Assembly of gold nanoparticles into aluminum nanobowl array. Scientific Reports 7, 2322, https://doi.org/10.1038/s41598-017-02552-z (2017).
González-Campuzano, R., Saniger, J. M. & Mendoza, D. Plasmonic resonances in hybrid systems of aluminum nanostructured arrays and few layer graphene within the UV–IR spectral range. Nanotechnology 28, 465704, https://doi.org/10.1088/1361-6528/aa8ce4 (2017). Catalán-Gómez, S. et al. Self-assembly of highly ordered plasmonic gallium nanoparticles driven by nanopatterning. Nano Futures 2, 041001 (2018). Nečas, D. & Klapetek, P. Gwyddion: an open-source software for SPM data analysis. Open Phys. 10, 181–188, https://doi.org/10.2478/s11534-011-0096-2 (2012). Losurdo, M. Applications of ellipsometry in nanoscale science: Needs, status, achievements and future challenges. Thin Solid Films 519, 2575–2583, https://doi.org/10.1016/j.tsf.2010.11.066 (2011). Gasiorowski, J. et al. Dielectric Function of Undoped and Doped Poly[2-methoxy-5-(3′,7′-dimethyloctyloxy)-1,4-phenylene-vinylene] by Ellipsometry in a Wide Spectral Range. The Journal of Physical Chemistry C 117, 22010–22016, https://doi.org/10.1021/jp4061957 (2013). Losurdo, M., Brown, A. S. & Bruno, G. In Ellipsometry at the Nanoscale (eds Maria Losurdo & Kurt Hingerl) 453–491 (Springer Berlin Heidelberg, 2013). de la Mata, M., Catalán-Gómez, S., Nucciarelli, F., Pau, J. L. & Molina, S. I. High Spatial Resolution Mapping of Localized Surface Plasmon Resonances in Single Gallium Nanoparticles. Small 15, 1902920, https://doi.org/10.1002/smll.201902920 (2019). de la Mata, M., Catalán-Gómez, S., Nucciarelli, F., Pau, J. L. & Molina, S. I. High Spatial Resolution Mapping of Localized Surface Plasmon Resonances in Single Gallium Nanoparticles. Small 15, 1902920, https://doi.org/10.1002/smll.201902920 (2019). Cataldi, U. et al. Growing gold nanoparticles on a flexible substrate to enable simple mechanical control of their plasmonic coupling. Journal of Materials Chemistry C 2, 7927–7933, https://doi.org/10.1039/C4TC01607F (2014). Lange, H. et al. Tunable Plasmon Coupling in Distance-Controlled Gold Nanoparticles. Langmuir 28, 8862–8866, https://doi.org/10.1021/la3001575 (2012). Cha, H., Lee, D., Yoon, J. H. & Yoon, S. Plasmon coupling between silver nanoparticles: Transition from the classical to the quantum regime. Journal of Colloid and Interface Science 464, 18–24, https://doi.org/10.1016/j.jcis.2015.11.009 (2016). Seo, S., Chang, T.-W. & Liu, G. L. 3D Plasmon Coupling Assisted SERS on Nanoparticle-Nanocup Array Hybrids. Scientific Reports 8, 3002, https://doi.org/10.1038/s41598-018-19256-7 (2018). Maier, S. A., Brongersma, M. L., Kik, P. G. & Atwater, H. A. Observation of near-field coupling in metal nanoparticle chains using far-field polarization spectroscopy. Physical Review B 65, 193408, https://doi.org/10.1103/PhysRevB.65.193408 (2002). Jain, P. K., Huang, W. & El-Sayed, M. A. On the Universal Scaling Behavior of the Distance Decay of Plasmon Coupling in Metal Nanoparticle Pairs: A Plasmon Ruler Equation. Nano Letters 7, 2080–2088, https://doi.org/10.1021/nl071008a (2007). Su, K. H. et al. Interparticle Coupling Effects on Plasmon Resonances of Nanogold Particles. Nano Letters 3, 1087–1090, https://doi.org/10.1021/nl034197f (2003). Gunnarsson, L. et al. Confined Plasmons in Nanofabricated Single Silver Particle Pairs: Experimental Observations of Strong Interparticle Interactions. The Journal of Physical Chemistry B 109, 1079–1087, https://doi.org/10.1021/jp049084e (2005). Rechberger, W. et al. Optical properties of two interacting gold nanoparticles.
Optics Communications 220, 137–141, https://doi.org/10.1016/S0030-4018(03)01357-9 (2003). Reinhard, B. M., Siu, M., Agarwal, H., Alivisatos, A. P. & Liphardt, J. Calibration of Dynamic Molecular Rulers Based on Plasmon Coupling between Gold Nanoparticles. Nano Letters 5, 2246–2252, https://doi.org/10.1021/nl051592s (2005). Viñas, S. L. et al. Magnetic hardening of Fe30Co70 nanowires. Nanotechnology 26, 415704, https://doi.org/10.1088/0957-4484/26/41/415704 (2015). Palmero, E. M., Bran, C., Real, R. Pd, Magén, C. & Vázquez, M. Magnetic behavior of NiCu nanowire arrays: Compositional, geometry and temperature dependence. Journal of Applied Physics 116, 033908, https://doi.org/10.1063/1.4890358 (2014). Bran, C. et al. Direct observation of transverse and vortex metastable magnetic domains in cylindrical nanowires. Physical Review B 96, 125415, https://doi.org/10.1103/PhysRevB.96.125415 (2017). Berganza, E., Bran, C., Jaafar, M., Vázquez, M. & Asenjo, A. Domain wall pinning in FeCoCu bamboo-like nanowires. Scientific Reports 6, 29702, https://doi.org/10.1038/srep29702 (2016). ADS Article PubMed PubMed Central Google Scholar Tompkins, H. G. in A User's Guide to Ellipsometry 1–18 (Academic Press (1993). Draine, B. T. & Flatau, P. J. Discrete-Dipole Approximation For Scattering Calculations. J. Opt. Soc. Am. A 11, 1491–1499, https://doi.org/10.1364/JOSAA.11.001491 (1994). Feser, J. & Sobh, A. N. DDSCAT Convert: A Target Generation Tool, https://nanohub.org/resources/ddaconvert, (2016). Draine, B. T. & Flatau, P. J. User Guide for the Discrete Dipole Approximation Code DDSCAT 7.2, (2012). Knight, M. W. et al. Gallium Plasmonics: Deep Subwavelength Spectroscopic Imaging of Single and Interacting Gallium Nanoparticles. ACS Nano 9, 2049–2060, https://doi.org/10.1021/nn5072254 (2015). Al-Kuhaili, M. F., Durrani, S. M. A. & Khawaja, E. E. Optical properties of gallium oxide films deposited by electron-beam evaporation. Appl. Phys. Lett. 83, 4533–4535, https://doi.org/10.1063/1.1630845 (2003). The research is supported by the MINECO (CTQ2014-53334-C2-2-R, CTQ2017-84309-C2-2-R and MAT2016-76824-C3-1-R) and Comunidad de Madrid (P2018/NMT4349 and S2018/NMT-4321 NANOMAGCOST) projects. ARC acknowledges Ramón y Cajal program (under contract number RYC-2015-18047). We also would like to thank Dra. Nuria Gordillo for her technical assistance and fruitful discussions. Grupo de Electrónica y Semiconductores, Departamento de Física Aplicada, Universidad Autónoma de Madrid, Cantoblanco, E-28049, Madrid, Spain S. Catalán-Gómez, J. L. Pau & A. Redondo-Cubero Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Científicas (ICMM-CSIC), Cantoblanco, E-28049, Madrid, Spain C. Bran, M. Vázquez & L. Vázquez S. Catalán-Gómez C. Bran M. Vázquez L. Vázquez J. L. Pau A. Redondo-Cubero C.B. and M.V. made the different substrates used in the experiments. L.V. characterized the samples by AFM. J.L.P. and A.R.C. as the group leaders contributed with their funding, organization, coordination and data interpretation and S.C.G. carried out the optical characterization and simulations, analysed the results and wrote the manuscript. All authors reviewed the manuscript. Correspondence to S. Catalán-Gómez. The authors declare no competing interests. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Catalán-Gómez, S., Bran, C., Vázquez, M. et al. Plasmonic coupling in closed-packed ordered gallium nanoparticles. Sci Rep 10, 4187 (2020). https://doi.org/10.1038/s41598-020-61090-3 MOCVD growth of gallium and indium microparticles for SERS applications Ewa Dumiszewska Piotr Caban Jacek M. Baranowski Journal of Materials Science: Materials in Electronics (2021) By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate. About Scientific Reports Guide to referees Guest Edited Collections Scientific Reports Top 100 2019 Scientific Reports Top 10 2018 Editorial Board Highlights 10th Anniversary Editorial Board Interviews Search articles by subject, keyword or author Show results from All journals This journal Explore articles by subject Scientific Reports (Sci Rep) ISSN 2045-2322 (online) nature.com sitemap Protocol Exchange Nature portfolio policies Author & Researcher services Scientific editing Nature Research Academies Libraries & institutions Librarian service & tools Partnerships & Services Nature Conferences Nature Africa Nature China Nature India Nature Italy Nature Korea Nature Middle East Close banner Close Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily. I agree my information will be processed in accordance with the Nature and Springer Nature Limited Privacy Policy. Get the most important science stories of the day, free in your inbox. Sign up for Nature Briefing
CommonCrawl
Morphological Inflection for Oto-Manguean Languages

Sahil Jayaram, Diana Abagyan, Tiansheng Sun

Project code repository, on GitHub

We present a novel approach to the task of multilingual morphological inflection that utilizes lexicostatistical information, focusing on the Oto-Manguean language family. Unlike the state-of-the-art approach on which ours is based, we forgo transfer learning, instead relying solely on data for the Oto-Manguean languages of interest. Consequently, our model falls notably short of the state of the art in overall accuracy. However, our results demonstrate that initializing language embeddings according to interlingual Levenshtein distances rather than at random produces results that are more balanced across languages, as well as higher overall.

Morphological inflection is a common linguistic phenomenon in synthetic languages around the world. It is a word formation process in which a word is modified to reflect grammatical information. Morphological inflection systems have multiple important applications; for example, in machine translation, it is desirable for the translated text to contain correct inflected forms. Therefore, in recent years, there have been efforts to build automatic morphological inflection systems. Although results for the SIGMORPHON 2020 shared task were higher than anticipated, its participants acknowledge that their models drastically underperformed for low-resource languages with complex inflection systems, such as those of the Oto-Manguean language family [1]. Due largely to their large inventories of tones, which are also used extensively for inflection, the Oto-Manguean languages have some of the most complex inflection systems in the world. However, there are morphological and grammatical similarities between the Oto-Manguean languages, and information about these similarities may be useful to predictive models.

The Oto-Manguean Language Family

Oto-Manguean is a large and diverse language family indigenous to Mesoamerica but now found only in Mexico. For some languages in the family, decline continues, as their status is moribund or highly endangered. However, some branches, such as Zapotecan and Mixtecan, enjoy lively speaker communities. According to a 2005 census, there are 1,769,971 speakers of Oto-Manguean languages, of whom 218,679 are monolingual. A large percentage of Oto-Manguean speakers, those of Zapotecan and Mixtecan languages, now live in Oaxaca, a state in southern Mexico. Other speakers speak the Otomi and Mazahua languages in the central Mexican states of México and Hidalgo. It is critical to incorporate these languages into the digital sphere, both to ensure equitable access and to promote language vitality.

Oto-Manguean languages have many interesting features, which complicate morphological analysis but may also be leveraged by an automatic inflection system. All languages in the family are tonal, with inventories ranging from two to more than ten tones, and tone bears morphosyntactic significance. The family's inflectional morphology is considered to be among the richest in the world, consisting of multiple layered inflectional classes. Inflection occurs in the stem, in affixes, or through tonal changes, and all of these may be present in one word [2]. Many Oto-Manguean languages, such as Mazatec, also have systems of whistled speech, in which the tone patterns of words are whistled. Moreover, the internal diversity of Oto-Manguean poses difficulties in developing a versatile tool.
At present, there is a stark lack of language technology for Oto-Manguean languages, even compared to other groups of indigenous American languages. Despite the number of active speakers, none of the Oto-Manguean languages has its own Wikipedia edition or any input method on Windows or Mac. Compared to 16 online paper resources for the Uto-Aztecan languages, 15 for Quechuan, and 8 for Algic, only three online papers had been published on Oto-Manguean resources as of 2018 [3]. The main computational resource for Oto-Manguean languages is the Oto-Manguean Inflectional Class Database [2], which includes over 13,000 entries in 20 Oto-Manguean languages, nine of which are used as inflection data in SIGMORPHON 2020 Task 0. There are also some efforts in machine translation; for example, the SimplesoftMX project translates Spanish into Zapotec, a major Oto-Manguean language spoken in Oaxaca. However, digital resources for Oto-Manguean languages are still very scarce, and unfortunately, many Oto-Manguean languages are considered "dead" in the context of digital presence and resources [4].

Morphological inflection systems help to alleviate data sparsity issues for low-resource languages with rich morphology. For example, in machine translation, a morphological inflection system can supply inflected forms of a word that occur infrequently or not at all in the corpus: the appropriate stem is inflected according to the morphological paradigm in the final translation. It has been shown that a morphological grammar, whether handcrafted or learned, significantly improves translation from English to a morphologically rich target language [5]. Morphological inflection models also have the potential to greatly assist predictive text keyboards and speech recognition, as well as educational tools [6]. The Inuktitut Morphological Analyzer, for instance, aids students in parsing highly agglutinative words. Educational tools are important for language vitality, as people are encouraged to learn and maintain use of the language.

Multiple organizations and projects focus on building multilingual morphological tools for under-resourced languages. One such organization is SIGMORPHON (Special Interest Group on Computational Morphology and Phonology). SIGMORPHON provides a forum for news of recent research developments in computational morphology and phonology. It also organizes shared tasks in which participants can provide solutions to important morphological problems. In the SIGMORPHON 2020 Shared Task 0, participants were asked to design a model that generates morphological inflections for all languages. Development languages from 5 language families, including ten Oto-Manguean languages, were used for training and development. The participants then fine-tuned their models on a list of "surprise languages," half of which were genetically related to the development languages, during the generalization phase. Finally, the participants' models were evaluated on all previous languages. For Oto-Manguean languages, the performance was relatively poor, with a mean accuracy across systems slightly lower than average at 78.5%, and a variance that was high in relation to other language groups, with language-level accuracy ranging from 18.7% to 99.1% [1].

The system that achieved the highest accuracy across all languages was deepspin-02-1 [1]. deepspin-02-1 consists of a feedforward language encoder, two bidirectional LSTM encoders, and a unidirectional LSTM decoder (which incorporates a gated attention mechanism).
For each input, one encoder encodes the lemma character sequence while the other encodes the set of inflectional tags. Then, the decoder generates the output character sequence [7]. The system reaches 83.45% accuracy overall for the Oto-Manguean language family.

Lexicostatistics

Lexicostatistics is a subfield of corpus-based linguistic methods concerned with determining the relationship between languages. Classical methods depend on cognate lists or a Swadesh list. The Swadesh list, named for linguist Morris Swadesh, is a list of concepts essential to language. It has been used in lexicostatistics, phylogenetics, and glottochronology since its first iteration in 1950 [8]. We do not have access to either a Swadesh list or a cognate list for our languages, which complicates our analysis.

In our project, we rely on the concept of "linguistic distance" to initialize our embeddings. Linguistic distance is a measure of how similar two languages are to each other. It is regarded warily in the linguistics literature, as there are many dimensions along which two languages may differ, such as syntax, phonetics, morphology, and so on; reducing all of these complex, interplaying characteristics to a single distance score is difficult, if not impossible. Sophisticated methods use standard NLP techniques, such as computing a score based on the perplexity of an n-gram language model trained on one language and evaluated on text of the other [9], or comparing word embedding distances [10]. However, we have access neither to a document-based corpus on which to train an n-gram language model nor to pre-trained word embeddings. Instead, we compute a measure of linguistic distance based on normalized Levenshtein distance, described below.

With technologies such as translation systems, spell checkers, language-specific keyboards, and dictionary apps, it is possible for digitally disadvantaged languages to "digitally ascend," empowering speakers to participate in global online society using their mother tongues [4] [11]. This is particularly important for the 1.8 million Oto-Manguean speakers, 12% of whom are monolingual; the development of language technologies for Oto-Manguean could counteract systemic forces that push toward linguistic assimilation [2] [12]. Such forced linguistic assimilation has been known to cause irreparable damage to the groups affected [13]. Bird criticizes technologists who frame computational tasks for low-resource languages as "zero-resource scenarios," ignoring the relevant knowledge of linguists and native speakers [12]. We believe that, in the interest of language justice, solutions to the problem of automatic morphological inflection should be tailored to the languages of interest, sharing cross-lingual information when necessary. For this reason, we adapt the deepspin-02-1 system introduced by Peters and Martins [7] to a much narrower domain, the Oto-Manguean language family, initializing language embeddings using prior knowledge of language similarity rather than doing so at random.

We obtain our data directly from the SIGMORPHON 2020 Shared Task 0 repository [1]. It contains 10 Oto-Manguean languages from a wide range of branches (Figure 1). Thus, our data is representative of most of the language family. The train set size for each language ranges from 805 (Chichicapan Zapotec) to 22,962 (Mezquital Otomi), with an average size of 7,799.3 (Figure 2). The task dataset comes from two resources.
The data source for 9 of the languages is the Oto-Manguean Inflectional Class Database [14]. The data for the remaining language, Eastern Highland Chatino, comes from a Chatino corpus released in 2020 [15]. Each line of the data files is organized as follows:

lemma	target_form	tags

where lemma is the original word, target_form is the inflected form, and tags is a list of (typically 4 to 5) rules that define the inflection, separated by semicolons. The task is to convert the lemma to the target form based on the tags.

For the purposes of our model, we represent words as sequences of characters, and tag sets as sequences of tags. Typically, each letter of a word is one character; however, a tone superscript is grouped with the preceding letter as a single character. Each tag and character is assigned an index. In total, there are 30 tags and 414 different characters. The maximum character length of a word is 27. The words and tags are passed to the model as sequences of one-hot vectors.

The general structure of our model is similar to that of Peters and Martins [7] (Figure 3). Our model differs from theirs in two ways. First, because the cardinality of our language set is 10 rather than 90, we use a language embedding size of 5 rather than 20. Second, in order to compensate for the scarceness and lower linguistic variance of our training data, we reduce the parametric complexity of the decoder's attention mechanism by replacing gated two-headed attention with vanilla single-headed attention.

Linguistic Distance

We experiment with "smart" initialization methods for the language embeddings, informed by lexicostatistics. We use normalized Levenshtein distance as our linguistic distance measure, as defined by Petroni and Serva [16]. The Levenshtein distance between two words is the minimum number of insertions, substitutions, or deletions of a single character needed to transform one word into the other. This count is normalized by the length of the longer word. Thus, for two words $w_a$ and $w_b$, the normalized Levenshtein distance $D_n$ is defined as:

$$D_n(w_a, w_b) = \frac{D(w_a, w_b)}{\max(|w_a|, |w_b|)}$$

The linguistic distance between two languages, $a$ and $b$, is defined as the average of the normalized Levenshtein distance over all pairs of words from language $a$ and language $b$ that share the same definition. However, for one of our languages, Eastern Highland Chatino, the corpus does not contain definitions. Instead, for a given $w_a$, we select the $w_b$ with the lowest Levenshtein distance to contribute to the average. Thus, we calculate the linguistic distance $D_l$ between languages $a$ and $b$ according to the following formula:

$$D_l(a, b) = \frac{1}{|W_a|} \sum_{w_a \in W_a} \min_{w_b \in W_b} D_n(w_a, w_b)$$

For each language, a 5-dimensional embedding is randomly initialized. The language embeddings are then found through optimization against the linguistic distances discussed above, via stochastic gradient descent. Let $P$ be the set of all unique, unordered language pairs, $v_a$ the current language embedding for language $a$, and $d_{a,b}$ the calculated linguistic distance between languages $a$ and $b$. The loss function is defined as:

$$\sum_{(a,b) \in P} \Big| \left\lVert v_a - v_b \right\rVert_{2} - d_{a,b} \Big|$$

Optimization is halted when the difference between successive loss values is lower than 0.005, which takes 17 to 20 optimization rounds on average. This threshold was chosen manually by observing the plot of the loss function.
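To make the procedure concrete, the following Python sketch implements $D_n$, $D_l$, and the embedding fit. It is a minimal illustration under our own naming; the learning rate and random seed are illustrative assumptions, while the 5-dimensional embeddings and the 0.005 stopping threshold are taken from the description above.

```python
# Minimal sketch (our own code and names) of the lexicostatistical
# initialization; learning rate and seed are illustrative assumptions.
import numpy as np


def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to transform word a into word b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def d_n(wa: str, wb: str) -> float:
    """Normalized Levenshtein distance D_n."""
    return levenshtein(wa, wb) / max(len(wa), len(wb))


def d_l(words_a, words_b) -> float:
    """Linguistic distance D_l: closest-word average, as used when no
    shared definitions are available."""
    return sum(min(d_n(wa, wb) for wb in words_b) for wa in words_a) / len(words_a)


def fit_embeddings(distances, dim=5, lr=0.05, tol=0.005, seed=0):
    """SGD on the loss sum over pairs of | ||v_a - v_b||_2 - d_{a,b} |,
    where `distances` maps unordered language pairs to D_l values."""
    rng = np.random.default_rng(seed)
    langs = sorted({lang for pair in distances for lang in pair})
    emb = {lang: rng.normal(size=dim) for lang in langs}
    prev_loss = float("inf")
    while True:
        loss = 0.0
        for (a, b), d in distances.items():
            diff = emb[a] - emb[b]
            norm = float(np.linalg.norm(diff)) + 1e-12
            resid = norm - d
            loss += abs(resid)
            grad = np.sign(resid) * diff / norm  # gradient w.r.t. v_a
            emb[a] -= lr * grad
            emb[b] += lr * grad
        if abs(prev_loss - loss) < tol:  # halting criterion described above
            return emb
        prev_loss = loss
```

For example, `fit_embeddings({("lang_x", "lang_y"): 0.7})` (with hypothetical language names) returns one 5-dimensional vector per language, which would then replace the random initial language embeddings of the inflection model.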
Through this process, we obtain language embeddings that incorporate the linguistic distances between languages.

We trained a total of 6 models (3 random seeds per initialization scheme). Our models were evaluated on the test set using a decoder beam size of 1, which we found to be optimal. We present the averages of our results in Figure 4 and Figure 5. Our model achieves an overall accuracy of 58.4% using randomly initialized language embeddings, and 59.6% using Levenshtein-distance-based initialization, both of which are lower than the score achieved by Peters and Martins' model (83.45%). We believe that the main reason for this discrepancy is that our system, unlike deepspin-02-1, did not utilize transfer learning, instead relying solely on data from the 10 languages on which it was evaluated. Figure 6 demonstrates our highest-performing model "in action."

Our results provide strong evidence that using Levenshtein distance information improves performance in a multilingual, low-resource scenario. With randomly initialized language embeddings, several languages, such as Tlatepuzco Chinantec, Eastern Highland Otomi, Yoloxóchitl Mixtec, and Chichicapan Zapotec, received 0% accuracy. With Levenshtein-distance initialization, model performance on these languages improved significantly. Although we observed a decrease in accuracy for other languages, such as San Pedro Amuzgos Amuzgo, Yaitepec Chatino and Zenzontepec Chatino, the models that accounted for Levenshtein distance achieved slightly higher overall accuracy and more balanced results across different languages.

We conducted an analysis to identify possible explanations for the aforementioned "balancing" effect. We found no strong correlation between the change in language-level accuracy from random initialization (ran) to Levenshtein-distance initialization (ld), denoted $\Delta_a$, and either train set size or mean squared Levenshtein distance. We present our analysis in Figure 7.

We have yet to determine why using linguistically motivated initial language embeddings, in lieu of randomly initialized ones, produces the trade-off described above. However, we believe that the matter deserves further investigation. Another key question worth exploring is whether our use of linguistic distance remains advantageous when applied in a transfer learning environment; if the benefits of lexicostatistical initialization persist in a higher-resource environment with more languages, e.g. the SIGMORPHON 2020 Shared Task 0 in its entirety, then our method may be of value for multilingual tasks in general.

Different measures of linguistic distance may also yield better results. Cognate or Swadesh lists, or any of the methods mentioned in the Lexicostatistics section, would be useful to experiment with. Additionally, our Levenshtein distance measure may be improved by considering phonetics or historical linguistics. Differences in manner and place of articulation, as well as tonal differences, could be factored into the Levenshtein distance calculation. There may also be work from historical linguistics on Oto-Manguean that could aid in our linguistic distance calculation. None came up in our literature review, but it would be a worthy avenue of consideration in the future. It is also worth experimenting with alternative ways to utilize linguistic distance information. For instance, how does performance change when language embeddings are fixed during training, or when we introduce a loss term that encourages the model to learn lexicostatistical information by itself?
Last but not least, we believe it essential that future research in language technologies for Oto-Manguean languages utilize richer linguistic priors. To invoke Bird, it is irresponsible to ignore the wealth of information on these languages (grammatical rules, dictionaries, etc.) that linguists and native speakers have collected and documented [12]. Thus, a continuation of this study should incorporate deterministic machinery that implements known properties of the languages of interest.

[1] Ekaterina Vylomova et al. 2020. SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 1–39, Online. Association for Computational Linguistics.
[2] Enrique L. Palancar and Jean Léo Léonard. 2016. Tone and Inflection: New Facts and New Perspectives. De Gruyter Mouton.
[3] Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza. 2018. Challenges of Language Technologies for the Indigenous Languages of the Americas. In Proceedings of the 27th International Conference on Computational Linguistics.
[4] Andras Kornai. 2013. Digital language death. PLoS ONE, 8(10).
[5] Victor Chahuneau. 2013. Translating into Morphologically Rich Languages with Synthetic Phrases. Association for Computational Linguistics.
[6] Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the Limits of Low-Resource Morphological Inflection. EMNLP.
[7] Ben Peters and Andre F.T. Martins. 2020. DeepSPIN at SIGMORPHON 2020: One-Size-Fits-All Multilingual Models. Association for Computational Linguistics.
[8] Pablo Gamallo, Jose Ramom Pichel, and Inaki Alegria. 2020. Measuring Language Distance of Isolated European Languages. Information, 11(4).
[9] Pablo Gamallo, Jose Ramom Pichel, and Inaki Alegria. 2017. From language identification to language distance. Physica A: Statistical Mechanics and its Applications, 484:152–162.
[10] Ehsaneddin Asgari and Mohammad R.K. Mofrad. 2016. Comparing fifty natural languages and twelve genetic languages using word embedding language divergence (WELD) as a quantitative measure of language distance. In Proceedings of the Workshop on Multilingual and Cross-lingual Methods in NLP, pages 65–74, San Diego, California. Association for Computational Linguistics.
[11] M. Benjamin. 2016. Digital language diversity: Seeking the Value Proposition. In Collaboration and Computing for Under-Resourced Languages: Towards an Alliance for Digital Language Diversity, pages 52–58.
[12] S. Bird. 2020. Decolonising speech and language technology. In Proceedings of the 28th International Conference on Computational Linguistics (COLING), pages 3504–3519.
[13] S. Romaine. The global extinction of languages and its consequences for cultural diversity. In Cultural and Linguistic Minorities in the Russian Federation and the European Union, pages 31–46. Springer International Publishing, Switzerland.
[14] Timothy Feist and Enrique L. Palancar. Oto-Manguean Inflectional Class Database. University of Surrey.
[15] Hilaria Cruz, Antonios Anastasopoulos, and Gregory Stump. 2020. A Resource for Studying Chatino Verbal Morphology. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2827–2831, Marseille, France. European Language Resources Association.
[16] Filippo Petroni and Maurizio Serva. 2010. Measures of lexical distance between languages. Physica A: Statistical Mechanics and its Applications, 389(11):2280–2283.
Numerical simulation of solid deformation driven by creeping flow using an immersed finite element method

Thomas Rüberg & José Manuel García Aznar

Advanced Modeling and Simulation in Engineering Sciences, volume 3, article number 9 (2016)

An immersed finite element method for solid–fluid interaction is presented with application focus on highly deformable elastic bodies in a Stokes flow environment. The method is based on a global balance equation which combines the solid and fluid momentum balances, the fluid mass balance and, in weak form, the interface conditions. By means of an Updated Lagrangian description for finite elasticity, only one analysis mesh is used, where the solid particles are backtracked in order to preserve the deformation history. The method results in a full coupling of the solid–fluid system which is solved by an exact Newton method. The location of the material interface is captured by a signed distance function and updated according to the computed displacement increments with the help of an explicit surface parameterisation; no body-fitted volume meshes are needed. Special emphasis is placed on the accurate integration of finite elements traversed by the interface and the related numerical stability of the shape function basis. A number of applications for compressible Neo-Hookean solids subject to creeping flow are presented, motivated by microfluidic experimentation in mechanobiology.

In a large class of biological and biomedical problems, the interaction of fluid flow and highly deformable solids plays an important role. Consider, as examples, the biological response of cells to a mechanical stimulus [1], the flow of capsules in a narrowed tube [2], the deformation of thin-walled blood vessels [3], or the motion of red blood cells [4]. Not only is the mechanical response of cells and tissues in a fluid environment of great interest, but so are the biological implications of the mechanical environment (that is, mechanobiology). It is, for instance, suggested that fluid shear stresses can control the phenotype of a living cell: regulating the cell's differentiation or proliferation, or simply damaging it; see, for example, [5] for endothelial cells. To this end, microfluidic experimentation [6] has proven a powerful tool for the in vitro analysis of such phenomena because it allows one to precisely control the mechanical and chemical environment of individual cells or aggregates thereof. Nevertheless, this methodology is expensive and time-consuming. Therefore, computational mechanobiology [7] often proves to be a promising alternative to assist the experimentation.

For the considered field of applications, inertia forces are negligible (flow with very low Reynolds numbers, \( Re \ll 1\)) and the ratio of elastic to viscous stresses is by orders of magnitude smaller than in systems akin to aeroelasticity (for example, heaving wings [8] and insect flight [9]). For the modelling of living cells in a continuum mechanical framework, the most common approaches are liquid-filled elastic membranes or (visco-)elastic solids. Both descriptions have their merits and both can be supported or rejected by experiments [10]. In this work, we consider a solid domain occupied by a homogeneous compressible Neo-Hookean material that is subject to Stokes flow. This class of problems yields an initial, but generic, approach to the above range of applications with the potential of future extension.
Over the last decades, a large number of methods has been developed for the numerical analysis of fluid–structure interaction problems (see, for instance, the journal issues [11–13] dedicated entirely to this topic, or the monograph [14]). Broadly speaking, these can be classified into monolithic methods, e.g. [15], and iterative coupling approaches, e.g. [16, 17]. Whereas in the former class of methods a fully coupled system of equations is formulated, in the latter class the equations of the fluid and solid domains are solved independently, each providing the boundary conditions (velocities or forces) for the other domain. Although this iterative approach bears the potential of higher numerical efficiency and software modularity, the stability of the iteration process often requires severe restrictions of the size of the time step or sub-iterations [18]. In order to avoid this intricacy, the approach presented here is monolithic.

Another important classification is to divide the methods into those using body-fitted meshes and their counterpart, immersed methods [19]. A body-fitted mesh conforms to the geometry of the problem; this means that the interface between solid and fluid is resolved and tracked by the mesh [20], see also [21] for a space-time formulation. The advantage of this approach is a simplified implementation of interface conditions, and it has been successfully applied to a wide range of applications: biomedical (for instance, airways and arteries [3, 22]), parachutes [23, 24], or the isogeometric analysis of wind turbines [25]. Nevertheless, the use of body-fitted meshes has the downside that large solid deformations can lead to a virtual destruction of the analysis mesh. Even though this problem is addressed by the ALE formalism [26], there are situations where a body-fitted mesh cannot be maintained. Tedious re-meshing and solution mapping techniques are often the consequence of the severe mesh distortions and are avoided by the immersed techniques.

The most prominent approach with non-body-fitted meshes is the immersed boundary method [27], which is tailored towards elastic surfaces subject to a flow environment. It has been successfully applied to simulations of blood flow and the deformation of red blood cells [28]. Moreover, a finite element counterpart has been developed in [29]. Despite its great success, the numerical stability of the immersed boundary method is subject to severe restrictions of the time step [27] due to its mainly explicit character.

Apart from the mentioned techniques, which rely on domain discretisations, the use of boundary integral equations is a popular alternative [30]. Due to the surface-only description of the flow field, the numerical cost can be reduced significantly in this class of methods. See, for instance, the simulation of vesicle flow [31, 32], where an unbounded fluid domain and liquid-filled membranes are used as the model problem. Nevertheless, there are restrictions on the use of boundary integral equations: they rely on a homogeneous Stokes flow and any extension to non-linear flow behaviour is not straightforward; moreover, the robust implementation of singular integral equations is a demanding task.

The application target of the aforementioned methods is mostly the interaction between a fluid and a reduced solid, such as a membrane or a shell. Here we focus on voluminous solids instead as, for instance, in [33, 34], where an explicit finite difference method is presented for this problem class.
The approach presented here falls into the class of finite element methods with immersed boundaries, in which the spatial discretisation does not conform to the location of the solid–fluid interface. Henceforth, we refer to our method as an immersed finite element method, even though the term has already been employed in [29] (see also [35]) for a finite element analogy to the immersed boundary method of [27], which basically consists of an explicit coupling scheme where structural forces are imposed on the fluid mesh and the computed velocities modify the structural configuration. The method presented in [33] and, typically, the immersed boundary method do not sharply resolve the interface but work with a transition zone between the two media that depends on the grid resolution. The method proposed here works with a so-called sharp interface representation [19] and is based on the immersed b-spline finite element method proposed in [36]; see also [37, 38] for finite element methods with embedded interfaces. Therein, the application to viscous fluid flow with moving boundaries has been targeted, and this was later carried over to the partitioned analysis of fluid–structure interaction in [8]. In this latter reference, the structure is resolved by a Lagrangian mesh and the fluid by a Eulerian mesh in which an implicit geometry representation is superimposed in order to capture the interface.

Here, this idea is pushed further by using only one analysis mesh for both solid and fluid, in the spirit of the methods developed in [39, 40], which are based on a fully Eulerian description, see also [41–43]. The drawback of fully Eulerian approaches for the solid is that displacement and velocity become independent fields that are coupled by an additional advection equation which needs to be solved [42]. In order to avoid both this additional equation and the solid velocity as an extra field, we choose an Updated Lagrangian method [44] combined with particle tracking, as advocated in [45] for plasticity problems with large deformations. Moreover, this choice of expressing the solid equilibrium in the latest known configuration avoids the need for shape derivatives [46] in the linearisation process for the used Newton method. Under the assumption of small velocities, the time derivatives are completely omitted from the field equations and only appear through the interface condition between solid and fluid. This condition is weakly incorporated using Nitsche's method [47], see also [8, 46] for this technique in the context of an iterative coupling method.

A signed distance function is used to capture the location of the interface as a level set [48] for the field computation. At the end of every time step, an explicit surface parameterisation is generated and moved with the computed displacement increment. Note that we combine explicit and implicit geometry descriptions; the implicit version is used for a quick interface detection at the beginning of each simulation step. Falling back to a temporary explicit description allows us to accurately trace the interface location without the need for the additional advection equations that are common in level set methods [48]. We thereby avoid possible distortions of the surface mesh, as may occur when using an entirely explicit surface description.

Several test applications demonstrate the potential of the method. First, the convergence behaviours of the updated Lagrangian method for a solid-only problem and a fully coupled fluid–solid interaction problem are studied.
Next, the cases of a solid subject to shear flow, the flow through a constricted pipe section and driven cavity flow are analysed numerically. Based on the computed field data, several quantities of interest are studied, such as the shape variation of the immersed solid, particle motion, Cauchy stresses in fluid and solid, and fluid velocity patterns.

The outline of this article is as follows. In "Fluid–solid interaction" section, the basic balance equations of solid and fluid are presented and a global solid–fluid balance equation that incorporates the interface conditions weakly is derived. Other than customary, we first discretise this expression in time and linearise it before introducing the spatial discretisation in "Immersed finite element method" section. More specifically, in "Updated Lagrangian method and linearisation" section the linearisation is based on an Updated Lagrangian formalism that avoids the need for shape derivatives. Regarding the immersed finite element discretisation, special emphasis is placed on the integration and numerical stability related to the elements which are traversed by the interface in "Cut elements" section. Moreover, the displacement history is maintained with a particle tracking algorithm presented in "Interface update and particle tracking" section. Finally, in "Example applications" section several example applications are presented which demonstrate the potential of the new method.

Fluid–solid interaction

We begin by deriving our coupling formulation from the balances of momentum and mass of the solid and the fluid parts, respectively. Here, we restrict ourselves to the case of very low Reynolds numbers such that inertia terms and fluid advection can be safely discarded. This simplification is justified by the fact that viscous forces are significantly larger than inertial forces [49].

Static balance laws

The balance laws for solid and fluid can be found, among others, in [50–52]. The main equations important for this work are presented in this section and help to introduce the chosen notation. Let \(\Omega ^s(t) \subset \mathbb {R}^D\) (\(D=2\) or 3 denotes the spatial dimension) be the domain occupied by a hyperelastic solid, immersed in a control volume \(\Omega \) at time t. The remainder \(\Omega ^f(t) = \Omega {\setminus }\Omega ^s(t)\) of this volume is occupied by a viscous incompressible fluid. The boundary of the solid domain is denoted by \(\Gamma (t)\) and the solid is assumed to be strictly inside \(\Omega \) at all times, such that \(\partial \Omega \cap \Gamma (t) = \emptyset \). The outward unit normal vector to the solid domain is denoted by \(\varvec{n}\). This vector refers to the current configuration and is therefore time-dependent, but we omit writing \(\varvec{n}(t)\) for simplicity. This choice of notation is illustrated in Fig. 1, and it has to be emphasised that the configuration of the domains is time-dependent: the interaction between solid and fluid changes the shape and location of \(\Omega ^s(t)\) and thus of \(\Omega ^f(t)\). The outer boundary \(\partial \Omega \), however, is assumed to remain fixed.
In view of the considered applications, we neglect inertia and advection terms and formulate the following local balance laws

$$\begin{aligned} - \nabla \cdot \varvec{\sigma }^s(\varvec{d})&= \varvec{0} \quad \varvec{x}\in \Omega ^s(t)\end{aligned}$$ $$\begin{aligned} - \nabla \cdot \varvec{\sigma }^{f}(\varvec{u},p)&= \varvec{0}\quad \varvec{x}\in \Omega ^{f}(t)\end{aligned}$$ $$\begin{aligned} \nabla \cdot \varvec{u}&= 0 \quad \varvec{x}\in \Omega ^{f}(t). \end{aligned}$$

Fluid–solid interaction. Snapshot at time t of the time-dependent configuration of fluid domain \(\Omega ^f(t)\) with immersed solid \(\Omega ^s(t)\)

Here, \(\varvec{\sigma }^s\) and \(\varvec{\sigma }^f\) are the Cauchy stresses in the solid and the fluid, respectively, and we have the primal field variables solid displacement \(\varvec{d}\), fluid velocity \(\varvec{u}\) and fluid pressure p, all functions of the spatial coordinate \(\varvec{x}\) and time t. The spatial divergence operation is symbolised by \(\nabla \cdot ()\). Equations (1) and (2) are the quasi-static momentum balances in solid and fluid, and (3) is the balance of mass for an incompressible fluid. The fluid is Newtonian and the solid is assumed to be hyperelastic. Therefore, the above introduced stresses fulfil the material laws

$$\begin{aligned} \varvec{\sigma }^f(\varvec{u},p)&= -p \varvec{I} + 2 \mu ^f \varvec{\varepsilon }(\varvec{u}) \end{aligned}$$ $$\begin{aligned} \varvec{\sigma }^s(\varvec{d})&= \frac{1}{\det \varvec{F}} \frac{\partial W(\varvec{F})}{\partial \varvec{F}} \varvec{F}^\top \quad \text {with} \quad \varvec{F}=\varvec{F}(\varvec{d}), \end{aligned}$$

where \(\varvec{I}\) is the D-dimensional identity tensor, \(\mu ^f\) the fluid's dynamic viscosity, \(\varvec{\varepsilon }\) denotes the symmetric gradient, \(\varvec{F}\) the deformation gradient and W the strain energy density function [50, 53]. On the current location of the interface \(\Gamma (t)\) between solid and fluid, we have the conditions

$$\begin{aligned} \dot{\varvec{d}} := \frac{\partial \varvec{d}}{\partial t} = \varvec{u}\quad \text {and} \quad \varvec{\sigma }^s \varvec{n}= \varvec{\sigma }^f \varvec{n}, \end{aligned}$$

which are commonly referred to as continuity and equilibrium conditions. It remains to specify boundary conditions for the fluid boundary \(\partial \Omega \). Here, either of the two possibilities

$$\begin{aligned} \varvec{u}= \bar{\varvec{u}} \quad \text {or} \quad \varvec{\sigma }^f \varvec{n}= \varvec{0} \end{aligned}$$

is used, corresponding to a prescribed flow velocity or an outflow condition. Equations (1), (2), (3) together with the material laws, (4) and (5), and the interface and boundary conditions, (6) and (7), completely describe the problem. Note that even though we work with the static balance laws (neglected inertia), the problem is time-dependent due to the change of the configurations: \(\Omega ^s(t)\) and \(\Omega ^f(t)\) are functions of \(\varvec{d}\) and thus of time. Due to this dependency, the problem is non-linear in addition to the nonlinearity given by the solid stresses (5). Moreover, the first condition in (6) couples solid with fluid velocities and thus renders the solid sub-problem time-dependent.
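To illustrate the constitutive evaluations (4) and (5), the following sketch computes both stresses pointwise. Since the strain energy W is not specified at this point of the text, the sketch assumes a common compressible Neo-Hookean form, \(W(\varvec{F}) = \tfrac{\mu ^s}{2}({{\mathrm{tr}}}(\varvec{F}^\top \varvec{F}) - D) - \mu ^s \ln J + \tfrac{\lambda }{2}(\ln J)^2\) with \(J = \det \varvec{F}\), for which (5) evaluates in closed form; all code and names are our own and not taken from the article.

```python
# Pointwise evaluation of the material laws (4) and (5); the Neo-Hookean
# energy below is an assumed standard form, not taken from the article.
import numpy as np


def fluid_stress(grad_u: np.ndarray, p: float, mu_f: float) -> np.ndarray:
    """Newtonian fluid stress (4): sigma^f = -p I + 2 mu_f eps(u)."""
    D = grad_u.shape[0]
    eps = 0.5 * (grad_u + grad_u.T)  # symmetric velocity gradient eps(u)
    return -p * np.eye(D) + 2.0 * mu_f * eps


def solid_stress(F: np.ndarray, mu_s: float, lam: float) -> np.ndarray:
    """Cauchy stress (5) for the assumed compressible Neo-Hookean energy
    W(F) = mu_s/2 (tr(F^T F) - D) - mu_s ln J + lam/2 (ln J)^2, which gives
    sigma^s = (1/J) [ mu_s (F F^T - I) + lam ln(J) I ]."""
    D = F.shape[0]
    J = np.linalg.det(F)
    b = F @ F.T  # left Cauchy-Green tensor
    return (mu_s * (b - np.eye(D)) + lam * np.log(J) * np.eye(D)) / J
```

As a quick sanity check, `solid_stress(np.eye(2), mu_s=1.0, lam=1.0)` returns the zero tensor, i.e. an undeformed solid is stress-free.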
Coupling formulation

Here, a global balance law for fluid and solid is derived which does not rely on essential boundary conditions. This means that the interface conditions (6) are not directly fulfilled by the choice of the finite element basis, which will be introduced later in "Immersed finite element method" section. Its main advantage is to allow for a configuration of solid and fluid that changes independently of the used finite element mesh. Nevertheless, the conditions imposed on \(\partial \Omega \) for the outer fluid boundary are treated in the classic way by imposing \(\varvec{u}= \bar{\varvec{u}}\) essentially and \(\varvec{\sigma }^{f} \varvec{n}= \varvec{0}\) naturally. Since this boundary does not move, there is no disadvantage to this standard approach.

We begin with a weighted residual approach using test functions \(\delta \varvec{d}\), \(\delta \varvec{u}\) and \(\delta p\) with the only condition that \(\delta \varvec{u}= \varvec{0}\) on the parts of \(\partial \Omega \) where the condition \(\varvec{u}= \bar{\varvec{u}}\) is applied. Other than that, these test functions are not restricted by any condition. Weighting the solid momentum balance (1) by \(\delta \varvec{d}\), the fluid momentum balance (2) by \(\delta \varvec{u}\), and the fluid mass balance (3) by \(\delta p\) and application of the divergence theorem gives

$$\begin{aligned} \int _{\Omega ^s(t)} \varvec{\sigma }^s(\varvec{d}) : \nabla \delta \varvec{d}\,\mathrm {d}\varvec{x}+ \int _{\Omega ^f(t)} \varvec{\sigma }^f(\varvec{u},p) : \nabla \delta \varvec{u}\,\mathrm {d}\varvec{x}- \int _{\Omega ^f(t)} \left( \nabla \cdot \varvec{u}\right) \delta p \,\mathrm {d}\varvec{x}- \int _{\Gamma (t)} \left( \varvec{\sigma }^s \varvec{n}\cdot \delta \varvec{d}- \varvec{\sigma }^f \varvec{n}\cdot \delta \varvec{u}\right) \mathrm {d}s = 0. \end{aligned}$$

The boundary terms on \(\partial \Omega \) have been dropped due to the treatment of boundary conditions as discussed above. For sake of legibility, we introduce the following abbreviations

$$\begin{aligned} a^s(\varvec{d}; \delta \varvec{d})&= \int _{\Omega ^s(t)} \varvec{\sigma }^s(\varvec{d}) : \nabla \delta \varvec{d}\,\mathrm {d}\varvec{x}\\ a^f(\varvec{u},p; \delta \varvec{u}, \delta p)&= \int _{\Omega ^f(t)} \varvec{\sigma }^f(\varvec{u},p) : \nabla \delta \varvec{u}\,\mathrm {d}\varvec{x}- \int _{\Omega ^f(t)} \left( \nabla \cdot \varvec{u}\right) \delta p \,\mathrm {d}\varvec{x}\\ a^\Gamma (\varvec{d},\varvec{u},p; \delta \varvec{d}, \delta \varvec{u}, \delta p)&= \int _{\Gamma (t)} \left( \varvec{\sigma }^s \varvec{n}\cdot \delta \varvec{d}- \varvec{\sigma }^f \varvec{n}\cdot \delta \varvec{u}\right) \mathrm {d}s \end{aligned}$$

such that Equation (8) can be recast in the form

$$\begin{aligned} a^{fsi}(\varvec{d},\varvec{u},p; \delta \varvec{d}, \delta \varvec{u}, \delta p) = a^s(\varvec{d}; \delta \varvec{d}) + a^f(\varvec{u},p; \delta \varvec{u}, \delta p) - a^\Gamma (\varvec{d},\varvec{u},p; \delta \varvec{d}, \delta \varvec{u}, \delta p) = 0. \end{aligned}$$

The next step is to incorporate conditions (6) into this balance equation. Observe that the integrand in expression (11) is the difference of a product. One can show that

$$\begin{aligned}&\varvec{\sigma }^s \varvec{n}\cdot \delta \varvec{d}- \varvec{\sigma }^f \varvec{n}\cdot \delta \varvec{u}\nonumber \\&\quad =\left( \beta \varvec{\sigma }^s \varvec{n}+ (1-\beta ) \varvec{\sigma }^f \varvec{n}\right) \cdot (\delta \varvec{d}- \delta \varvec{u}) + \left( (1-\beta ) \delta \varvec{d}+ \beta \delta \varvec{u}\right) \cdot (\varvec{\sigma }^s \varvec{n}- \varvec{\sigma }^f \varvec{n}) \end{aligned}$$

for any \(\beta \in \mathbb {R}\). Hence, we obtain for the interface term

$$\begin{aligned} a^\Gamma (\varvec{d},\varvec{u},p; \delta \varvec{d}, \delta \varvec{u}, \delta p) = \int _{\Gamma (t)} \left( \beta \varvec{\sigma }^s \varvec{n}+ (1-\beta ) \varvec{\sigma }^f \varvec{n}\right) \cdot (\delta \varvec{d}- \delta \varvec{u}) \,\mathrm {d}s + \int _{\Gamma (t)} \left( (1-\beta ) \delta \varvec{d}+ \beta \delta \varvec{u}\right) \cdot \left( \varvec{\sigma }^s \varvec{n}- \varvec{\sigma }^f \varvec{n}\right) \mathrm {d}s. \end{aligned}$$

Note that the second integral is zero due to interface equilibrium (6)\(_2\) and is now removed from this expression. With reference to Nitsche's method [37, 47, 54] for the incorporation of Dirichlet boundary conditions in a weak form, two new terms are added to this result,

$$\begin{aligned} a^\Gamma (\varvec{d},\varvec{u},p; \delta \varvec{d}, \delta \varvec{u}, \delta p) =&\int _{\Gamma (t)} \left( \beta \varvec{\sigma }^s \varvec{n}+ (1-\beta ) \varvec{\sigma }^f \varvec{n}\right) \cdot (\delta \varvec{d}- \delta \varvec{u}) \,\mathrm {d}s \\&+ \int _{\Gamma (t)} \left( \beta \tilde{\varvec{\sigma }}^s \varvec{n}+ (1-\beta ) \varvec{\sigma }^f(\delta \varvec{u},\delta p) \varvec{n}\right) \cdot (\dot{\varvec{d}} - \varvec{u}) \,\mathrm {d}s - \gamma \int _{\Gamma (t)} (\dot{\varvec{d}} - \varvec{u}) \cdot (\delta \varvec{d}- \delta \varvec{u}) \,\mathrm {d}s, \end{aligned}$$

both of which vanish when the continuity condition (6)\(_1\) holds, such that consistency is preserved. Note that the same expression has been derived in [25] via a Lagrange multiplier approach and a subsequent elimination of the multipliers. Here, \(\gamma >0\) is some parameter which will be determined in section "Cut elements" and \(\tilde{\varvec{\sigma }}^s\) denotes a stress-like function of the test displacements \(\delta \varvec{d}\). Note that \(\gamma \) turns out to be dependent on the choice of the finite element discretisation, as shown already in [37], and for this reason its specification is postponed. Inserting expression (15) into the balance equation (12) gives a family of formulations for fluid–solid coupling based on Nitsche's method [16, 25] which is parameterised by the number \(\beta \). From now on, we fix \(\beta = 0\) and consider only the interface term

$$\begin{aligned} a^\Gamma (\varvec{d},\varvec{u},p; \delta \varvec{d}, \delta \varvec{u}, \delta p) = \int _{\Gamma (t)} \varvec{\sigma }^f(\varvec{u},p) \varvec{n}\cdot (\delta \varvec{d}- \delta \varvec{u}) \,\mathrm {d}s + \int _{\Gamma (t)} \varvec{\sigma }^f(\delta \varvec{u},\delta p) \varvec{n}\cdot (\dot{\varvec{d}} - \varvec{u}) \,\mathrm {d}s - \gamma \int _{\Gamma (t)} (\dot{\varvec{d}} - \varvec{u}) \cdot (\delta \varvec{d}- \delta \varvec{u}) \,\mathrm {d}s. \end{aligned}$$

Hence, the expression comprises a fluid-sided "mortaring" as coined in [16], see also [25] for a motivation of this choice of parameter.
On the other hand, we refer to [38, 55] for an analysis of the choice of this parameter \(\beta \) for interface problems in which both domains are governed by the same mathematical model. Inserting this interface term (16) into expression (12) gives the desired global fluid–solid balance equation that incorporates the interface conditions (6). The time semi-discretisation and linearisation of this problem are given in the remainder of this section, whereas the finite element space discretisation is introduced in "Immersed finite element method" section.

Time semi-discretisation

The fluid–solid balance (12) as derived in the previous section is time-dependent and nonlinear. Typically, such expressions are discretised by following the concept of the method of lines: the discretisation in space leads to a system of (nonlinear) ODEs which is then tackled by a time discretisation. Here, we reverse this order and make use of what is referred to as Rothe's method [56]. The aim of this work is to use a fixed, stationary finite element mesh in which the fluid–solid domain configuration moves freely. Nevertheless, the finite element spaces vary as a function of this configuration and, for this reason, it is preferred to begin with the time discretisation and linearisation before finally applying a spatial discretisation with finite elements as outlined in "Immersed finite element method" section.

For simplicity, the Euler backward method [57] is used for time stepping, although the presented approach is not restricted to this choice. Therefore, let a specific time instant be denoted by \(t_n\) and the size of the current time step by \(\Delta t\). Moreover, the approximations of the principal unknown fields at a time instant are indicated by the same subscript; for instance, \(\varvec{d}_n(\varvec{x})\) approximates \(\varvec{d}(\varvec{x},t_n)\). Based on the Euler-backward scheme, the displacement velocity becomes

$$\begin{aligned} \dot{\varvec{d}}_{n+1} = \frac{\varvec{d}_{n+1} - \varvec{d}_n}{\Delta t}. \end{aligned}$$

Using this notation, the non-linear problem to find the system state \((\varvec{d}_{n+1}, \varvec{u}_{n+1}, p_{n+1})\) reads

$$\begin{aligned} a^s_{n+1}(\varvec{d}_{n+1}; \delta \varvec{d}) + a^f_{n+1}(\varvec{u}_{n+1}, p_{n+1}; \delta \varvec{u}, \delta p) - a^\Gamma _{n+1}(\varvec{d}_{n+1}, \varvec{u}_{n+1}, p_{n+1}; \delta \varvec{d}, \delta \varvec{u}, \delta p) = 0, \end{aligned}$$

where the current state \((\varvec{d}_n, \varvec{u}_n, p_n)\) is given and the interface velocity \(\dot{\varvec{d}}_{n+1}\) in the interface term is expressed by (17). Here, the subscripts \(n+1\) at the solid and fluid domain contributions, \(a^s\) and \(a^f\), and at the interface \(\Gamma \) emphasise that the position of solid and fluid domain at the new time instant \(t_{n+1}\) is considered. Note that this configuration is unknown since it depends on the displacement field \(\varvec{d}_{n+1}\). Problem (18) is therefore nonlinear due to the solid contribution \(a^s\) and the implicit dependence on the domain configuration.

Updated Lagrangian method and linearisation

A full linearisation of the nonlinear problem (18) evidently leads to shape derivatives [46]. We aim to avoid this complexity by expressing the balance law (18) not in the unknown spatial configuration but in the latest known configuration. This approach, also known as Updated Lagrangian, is well established for large-deformation analysis, see [44]. Moreover, the unknown interface location \(\Gamma _{n+1}\) in (18) is simply replaced by the known interface \(\Gamma _n\), see e.g. [40]. Based on this explicit treatment of the fluid–solid interface, its location is known in every time step and updated after the new system state \((\varvec{d}_{n+1}, \varvec{u}_{n+1}, p_{n+1})\) has been determined. In the following, only the solid state is considered.
For the treatment of the fluid part, we simply disregard the discrepancies between the unknown and the latest known configurations. A thorough error analysis of this approach is still pending, but the results given in "Example applications" section convey that this defect is not detrimental to the overall approach.

Configurations in updated Lagrangian method. Initial, latest known and unknown configurations of the solid domain, \(\Omega ^s_0\), \(\Omega ^s_n\) and \(\Omega ^s_{n+1}\); maps \(\varvec{\varphi }_a^b\) between configurations \(\Omega _a^s\) and \(\Omega ^s_b\) and the corresponding deformation gradients \(\varvec{F}_a^b\)

At first, consider the three solid domain configurations occurring in the Updated Lagrangian method: the initial configuration \(\Omega ^s_0\), the latest known configuration \(\Omega ^s_n\) and the unknown configuration \(\Omega ^s_{n+1}\), see Fig. 2. Coordinates in these configurations are denoted with the same subscript, for instance, \(\varvec{x}_n \in \Omega ^s_n\). The maps between two configurations \(\Omega ^s_a\) and \(\Omega ^s_b\) are denoted by \(\varvec{\varphi }_a^b: \Omega ^s_a \rightarrow \Omega ^s_b\). For instance, the coordinate \(\varvec{x}_{n+1}\) results from mapping either from the initial or from the latest known configuration, i.e., \(\varvec{x}_{n+1} = \varvec{\varphi }^{n+1}_0( \varvec{x}_0 ) = \varvec{\varphi }^{n+1}_n( \varvec{x}_n )\). At last, the deformation gradients are defined as \(\varvec{F}_a^b = \partial \varvec{x}_b / \partial \varvec{x}_a\) and are maps between the respective tangent spaces. We refer to [44, 50, 53] for more details on large elastic deformations and the notion of configurations. The deformations \(\varvec{\varphi }_0^{n}\) and \(\varvec{\varphi }_0^{n+1}\) in Fig. 2 are represented by the Lagrangian displacements \(\varvec{d}_n\) and \(\varvec{d}_{n+1}\).

Consider the solid contribution

$$\begin{aligned} a^s_{n+1}(\varvec{d}_{n+1}; \delta \varvec{d}) = \int _{\Omega ^s_{n+1}} \varvec{\sigma }^s(\varvec{d}_{n+1}) : \nabla _{n+1} \delta \varvec{d}\,\mathrm {d}\varvec{x}, \end{aligned}$$

where the subscript to the \(\nabla \)-operator indicates the coordinate with respect to which the differentiation is carried out. This expression is now mapped to the latest known configuration \(\Omega _n^s\),

$$\begin{aligned} a^s_{n+1}(\varvec{d}_{n+1}; \delta \varvec{d})&= \int _{\Omega ^s_n} \det \left( \varvec{F}_n^{n+1}\right) \varvec{\sigma }^s(\varvec{d}_{n+1}) : \nabla _{n+1} \delta \varvec{d}\,\mathrm {d}\varvec{x}\\&= \int _{\Omega ^s_n} \det \left( \varvec{F}_n^{n+1}\right) \varvec{\sigma }^s(\varvec{d}_{n+1}) \left( \varvec{F}_n^{n+1}\right) ^{-\top } : \nabla _n \delta \varvec{d}\,\mathrm {d}\varvec{x}\\&= \int _{\Omega ^s_n} \varvec{P}_n^{n+1}(\varvec{d}_{n+1}) : \nabla _n \delta \varvec{d}\,\mathrm {d}\varvec{x}. \end{aligned}$$

Here, the first line is obtained from (19) by mapping the integration domain, the second line results from the chain rule in order to change from \(\nabla _{n+1}\) to \(\nabla _n\), and the last line introduces a new stress tensor \(\varvec{P}_n^{n+1}\). Note that for \(n=0\), \(\varvec{P}_0^{n+1}\) coincides with the standard definition of the first Piola-Kirchhoff stress tensor [53]. Moreover, with an abuse of notation, the composition of the test displacements \(\delta \varvec{d}\) with the map \(\varvec{\varphi }_n^{n+1}\) is also denoted by \(\delta \varvec{d}\).

Now, the result of (20) is introduced in (18) and the explicit treatment of the interface is used. The nonlinear fluid–solid balance now reads

$$\begin{aligned} \int _{\Omega ^s_n} \varvec{P}_n^{n+1}(\varvec{d}_{n+1}) : \nabla _n \delta \varvec{d}\,\mathrm {d}\varvec{x}+ a^f_n(\varvec{u}_{n+1}, p_{n+1}; \delta \varvec{u}, \delta p) - a^\Gamma _n(\varvec{d}_{n+1}, \varvec{u}_{n+1}, p_{n+1}; \delta \varvec{d}, \delta \varvec{u}, \delta p) = 0, \end{aligned}$$

and only its first term, the solid domain contribution, remains nonlinear. A Newton method [53] is applied to this expression. Let k be the iteration counter placed as a left superscript and \(\Delta \varvec{d}\) the unknown displacement increment such that the latest iterate for the unknown \(\varvec{d}_{n+1}\) becomes

$$\begin{aligned} {}^{k+1}{}{\varvec{d}_{n+1}} = {}^{k}{}{\varvec{d}_{n+1}} + \Delta \varvec{d}. \end{aligned}$$

Note that (21) is linear in \(\varvec{u}_{n+1}\) and \(p_{n+1}\) such that we can directly work with \({}^{k+1}{}{\varvec{u}_{n+1}}\) and \({}^{k+1}{}{p_{n+1}}\) as unknowns without increments.
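The kinematic bookkeeping of this Updated Lagrangian step can be collected in a few lines of code; the following sketch (our own code, not the authors' implementation) evaluates \({}^{k}{}{\varvec{F}_n^{n+1}}\), \({}^{k}{}{\varvec{F}_0^{n+1}}\) and the stress \(\varvec{P}_n^{n+1}\) from displacement gradients on the latest known configuration, given any callable that evaluates the Cauchy stress (for instance, the Neo-Hookean sketch shown earlier). The composition of the deformation gradients it uses is spelled out in the equations of the next paragraph.

```python
# Sketch (our own) of the Updated Lagrangian kinematics: all quantities are
# computed from displacement gradients taken on the latest known
# configuration Omega^s_n; `cauchy_stress` maps a deformation gradient F to
# the Cauchy stress sigma^s(F).
import numpy as np


def updated_lagrangian_stress(grad_d_n, grad_d_k, cauchy_stress):
    """grad_d_n: nabla_n d_n (converged step),
       grad_d_k: nabla_n of the Newton iterate {}^k d_{n+1}.
       Returns {}^k F_n^{n+1} and {}^k P_n^{n+1} = det(F) sigma F^{-T}."""
    D = grad_d_n.shape[0]
    I = np.eye(D)
    F_n_np1 = I + (grad_d_k - grad_d_n)       # {}^k F_n^{n+1}
    F_n_0 = I - grad_d_n                      # F_n^0
    F_0_np1 = F_n_np1 @ np.linalg.inv(F_n_0)  # {}^k F_0^{n+1} by composition
    sigma = cauchy_stress(F_0_np1)            # material evaluation on total F
    P = np.linalg.det(F_n_np1) * sigma @ np.linalg.inv(F_n_np1).T
    return F_n_np1, P
```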
Linearisation of the integrand of the solid part in the direction of the increment \(\Delta \varvec{d}\) yields a tangent expression that consists of a geometric part, involving the current stress \({}^{k}{}{\varvec{P}_n^{n+1}}\), and a material part, involving the material elasticity tensor \(\mathbb {C}_{n+1}\) [50, 53]. Note that this tangent requires the deformation gradient \({}^{k}{}{\varvec{F}_n^{n+1}}\) and, moreover, the material evaluations for \({}^{k}{}{\varvec{P}_n^{n+1}}\) and \({}^{k}{}{\mathbb {C}_{n+1}}\) are based on the deformation gradient \({}^{k}{}{\varvec{F}_0^{n+1}}\). We note that

$$\begin{aligned} {}^{k}{}{\varvec{F}_0^{n+1}} = {}^{k}{}{\varvec{F}_n^{n+1}} \cdot \varvec{F}_0^n = {}^{k}{}{\varvec{F}_n^{n+1}} \cdot \left( \varvec{F}_n^0 \right) ^{-1} \end{aligned}$$

whose factors are furthermore computed by means of

$$\begin{aligned} {}^{k}{}{\varvec{F}_n^{n+1}}= & {} \frac{\partial }{ \partial \varvec{x}_n} \left[ \varvec{x}_n + \left( {}^{k}{}{\varvec{d}_{n+1}} - \varvec{d}_n \right) \right] = \varvec{I} + \nabla _n \left( {}^{k}{}{\varvec{d}_{n+1}} - \varvec{d}_n \right) \nonumber \\ \varvec{F}_n^0= & {} \frac{\partial }{\partial \varvec{x}_n} \left[ \varvec{x}_n - \varvec{d}_n \right] = \varvec{I} - \nabla _n \varvec{d}_n.\nonumber \end{aligned}$$

With these ingredients, the linearisation process can be summarised in a Newton step in which the linearised global balance is solved for the increment \(\Delta \varvec{d}\) and the fluid unknowns \({}^{k+1}{}{\varvec{u}_{n+1}}\) and \({}^{k+1}{}{p_{n+1}}\), followed by the update (22). This represents a time-discretised and linearised version of the global fluid–solid balance (12) which incorporates the interface conditions (6). The remaining step for a numerical solution is the spatial discretisation by finite elements as outlined in the following section.

Immersed finite element method

The global fluid–solid balance equation (12) and its time-discretised and linearised version are perfectly suited for an immersed finite element method [19]. Note that here, immersed solely refers to the fact that the interface location is independent of the finite element mesh; we do not refer to the method of [29]. Observe that Dirichlet boundary conditions appear only on the boundary \(\partial \Omega \), which is fixed in space and time, and that the interface conditions (6) have been incorporated in a weak form. This implies that the finite element spaces need only be equipped with essential boundary conditions on \(\partial \Omega \) and are not affected by the specific location of the interface \(\Gamma \).

In the implementation, quadrilateral elements in two and hexahedral elements in three dimensions are used. For the solid displacement, piece-wise linear Lagrange polynomials are used (bi- and tri-linear, to be precise); for the fluid velocity, quadratic functions; and for the pressure, linear functions. For the system state at time instant \(t_n\) the finite element approximation has the form

$$\begin{aligned} \varvec{d}_n(\varvec{x})\approx & {} \sum _i \varvec{d}_{n,i} N^{\varvec{d}}_i(\varvec{x}) , \quad \varvec{u}_n(\varvec{x}) \approx \sum _j \varvec{u}_{n,j} N^{\varvec{u}}_j(\varvec{x}) , \quad \text {and} \nonumber \\ p_n(\varvec{x})\approx & {} \sum _k p_{n,k} N^p_k(\varvec{x}). \end{aligned}$$

The specific choice for the discretisation of the fluid variables \(\varvec{u}\) and p corresponds to the Taylor-Hood element and thus guarantees the fulfilment of the \(\inf \)-\(\sup \) stability condition [57]. In principle, the entire domain \(\Omega \) holds the approximation for the fields \(\varvec{d}\), \(\varvec{u}\) and p. But for the solid, only those degrees of freedom \(\varvec{d}_{n,i}\) are active which correspond to points inside \(\Omega ^s\) or, as explained below, in the vicinity of the interface \(\Gamma \).
The same holds for the fluid degrees of freedom \(\varvec{u}_{n,j}\) and \(p_{n,k}\) with respect to the domain \(\Omega ^f\). It is assumed that the interface \(\Gamma _n = \partial \Omega ^s_n\) is strictly inside the full domain \(\Omega \) for all time instants \(t_n\). Therefore, there are no boundary conditions for the solid domain. For the fluid domain, the Dirichlet boundary condition (7)\(_1\) on \(\partial \Omega \) is treated essentially. Moreover, in case of \(\partial \Omega \) being entirely treated as a Dirichlet boundary, the fluid pressure can only be known up to a constant value and we therefore set \(p=0\) at some point of this boundary in order to ensure solvability of the fluid problem. Fig. 3 Discretisation of fluid and solid domains. Immersed finite element discretisation of the fluid–solid problem at time instant \(t_n\): fluid finite elements (left), solid finite elements (right); the hatched elements allow an accurate interpolation of the solution across the interface \(\Gamma _n\) and are referred to as cut elements. Figure 3 shows the configuration of Fig. 1 with an immersed finite element discretisation. A simple structured grid fills the domain \(\Omega \) without any awareness of the current location of \(\Gamma _n\). More precisely, the figure shows the respective discretisations of fluid and solid in the left and right pictures. Obviously, the elements which are traversed by the interface \(\Gamma _n\) have a physical and a fictitious side, such that the solution is well-defined up to the boundary. These fictitious element parts are highlighted in Fig. 3. Implicit geometry representation Although the analysis mesh is virtually independent of the configuration of solid and fluid domains, the expressions in the Newton method (26) require integrating over the interface \(\Gamma _n\) and the volumes \(\Omega ^s_n\) and \(\Omega ^f_n\), respectively. Therefore, it is necessary to know the location of the interface \(\Gamma _n\) on the element level and we employ a signed distance function $$\begin{aligned} {{\mathrm{dist}}}_{\Gamma _n} (\varvec{x}) = s(\varvec{x}) \min _{\varvec{y} \in \Gamma _n} | \varvec{x}- \varvec{y}|, \quad \text {with} \quad s(\varvec{x}) = \left\{ \begin{array}{ll} \phantom {-}1 &{}\quad \text {if } \varvec{x}\in \overline{\Omega }_n^s\\ -1 &{}\quad \text {if } \varvec{x}\in \Omega _n^f. \end{array}\right. \end{aligned}$$ This function represents \(\Gamma _n\) implicitly as the level set [48] \({{\mathrm{dist}}}_{\Gamma _n}(\varvec{x}) = 0\). Note that this choice of geometry representation has the effect of smoothing out surface features. Here, this poses no problem since the applications we consider all have a smooth interface from the onset. In case of fluid–solid interfaces with corners and edges, this choice has to be reviewed carefully, see, for instance, [42]. In the implementation, \({{\mathrm{dist}}}_{\Gamma _n}\) is represented by its interpolant using the finite element functions of the background mesh (here, piece-wise linear). The coefficients of this interpolation are the nodal values of the distance function, $$\begin{aligned} {{\mathrm{dist}}}_{\Gamma _n}(\varvec{x}) \approx \sum _i r_{n,i} N^r_i(\varvec{x}) \quad \text {with} \quad r_{n,i} = {{\mathrm{dist}}}_{\Gamma _n} (\varvec{x}_i). \end{aligned}$$ Note the dependence of the coefficients \(r_{n,i}\) on the time instant \(t_n\) due to the dynamics of the location of the interface \(\Gamma \). A direct implementation of this nodal evaluation is sketched below, and its computational cost is discussed next.
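A minimal two-dimensional sketch of the nodal evaluation (29): brute-force distance of each mesh point to a set of surface segments, with the sign supplied by a user-provided inside/outside test (the predicate `inside_solid` is a hypothetical input; in the method described here, the sign is determined by the solid domain \(\overline{\Omega }_n^s\)).

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Unsigned distance from point p to the segment [a, b] (2D)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def nodal_signed_distances(nodes, segments, inside_solid):
    """Brute-force evaluation of r_i = dist_Gamma(x_i), cf. (28)-(29).

    nodes        : (N, 2) array of mesh point coordinates
    segments     : list of (a, b) endpoint pairs approximating Gamma_n
    inside_solid : predicate x -> bool, True if x lies in the solid domain
    """
    r = np.empty(len(nodes))
    for i, x in enumerate(nodes):
        d = min(point_segment_distance(x, a, b) for a, b in segments)
        r[i] = d if inside_solid(x) else -d   # sign convention of (28)
    return r
```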
The calculation of \(r_{n,i}\) requires finding the surface element (line or triangle) which is closest to the point \(\varvec{x}_i\) of interest. Given N mesh points \(\varvec{x}_i\) and M surface elements, this task is of complexity \(\mathcal {O}(N \times M)\). There is a variety of algorithms with a lower complexity [58], but here we content ourselves with this brute-force approach; after all, this part of the computation is not time-critical. Cut elements Once the distance function (28) has been computed for all mesh points \(\varvec{x}_i\), the interface \(\Gamma _n\) is reconstructed in all elements that are traversed by it, that is, wherever the sign of the distance function changes. To this end, a linear approximation (line elements in two and triangles in three dimensions) is used to construct a polytope \(\Gamma _n^h\) that approximates \(\Gamma _n\). Such a situation for one element in two dimensions is depicted in Fig. 4. The element is at first decomposed into two triangles (in three dimensions, a hexahedron is decomposed into six tetrahedra [8]). The arrows in this figure represent the shortest distance from every grid point to the immersed interface \(\Gamma _n\). By linear interpolation along all edges, the intersection points (the red circles) are determined. Fig. 4 Cut element. Construction of the surrogate interface \(\Gamma _n^h\) based on the nodal values \(r_{n,i}\) of the distance function. After the calculation of the intersection points, piece-wise linear elements form the surrogate interface \(\Gamma _n^h\) in the solution process. Effectively, all terms that contain integrals over \(\Gamma _n\) are expressed as integrals over \(\Gamma _n^h\). In addition, the sub-regions of the cut element which belong to the fluid (\(\Omega _n^f\)) and the solid (\(\Omega _n^s\)) side are now polygons or polytopes for which a standard numerical integration is in general not possible. Therefore, these shapes are subdivided into triangles or tetrahedra on which Gauß quadrature rules [57, 59] are used, as is common in the implementation of XFEM [60]. The integration on the cut elements is therefore carried out with the same accuracy as on the volume elements strictly inside the domain. See [61] for an overview of the numerical implementation of integration over cut elements and an alternative approach based on the divergence theorem and surface integration. Alternatively, in [62] an approach for explicit time integration is given in which a surrogate boundary is used instead, thereby avoiding the cut element integration. Finally, the stability of the finite element basis used in (27) needs to be considered. Consider Fig. 3 and let us assume that all grid points hold degrees of freedom \(\varvec{d}_{n,i}\), \(\varvec{u}_{n,j}\), and \(p_{n,k}\). In fact, the quadratic shape functions used for the fluid velocity \(\varvec{u}\) lead to additional degrees of freedom, but they are ignored in this discussion for the sake of simplicity. These degrees of freedom can be classified by the intersection of their support with the domain of integration. Let \(N_i(\varvec{x})\) be any shape function of (27) and \(S_i = {{\mathrm{supp}}}(N_i)\) its support, that is, \(N_i(\varvec{x}) = 0\) for all \(\varvec{x}\notin S_i\).
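Returning briefly to the cut-element construction of Fig. 4: the intersection points are obtained edge by edge, by locating the zero of the linear interpolant of the nodal distance values. A minimal sketch:

```python
import numpy as np

def edge_intersections(coords, r, edges):
    """Reconstruct the cut of one element by the interface (cf. Fig. 4).

    coords : (n_nodes, D) nodal coordinates of the element
    r      : nodal values of the signed distance function
    edges  : list of (i, j) local node pairs forming the element edges
    Returns the intersection points on all edges where r changes sign.
    """
    points = []
    for i, j in edges:
        if r[i] * r[j] < 0.0:                 # sign change: the edge is cut
            t = r[i] / (r[i] - r[j])          # zero of the linear interpolant
            points.append((1.0 - t) * coords[i] + t * coords[j])
    return points
```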
Focusing on the discretisation of the solid displacements (the fluid side is treated analogously), the degrees of freedom are classified as follows: inactive if \(S_i \cap \Omega _n^s = \emptyset \); critical if \(S_i \cap \Omega _n^s \ne \emptyset \) and \(|S_i \cap \Omega _n^s| < \epsilon h\), using some predefined threshold \(\epsilon \); active otherwise. Note that this categorisation depends on the time instant \(t_n\) and therefore the finite element spaces in use change between the time instants. Inactive degrees of freedom do not pose any problem as they are simply discarded from the approximation (27). Similarly, active degrees of freedom do not need any special consideration and are treated as in any standard finite element method. The critical ones, however, need special consideration because the measure \(s_{n,i} = |S_i \cap \Omega _n^s|\) relates to the conditioning of the final system matrix. Obviously, if \(s_{n,i} \rightarrow 0\) the corresponding contributions to the system matrix vanish and the resulting linear system is ill-conditioned. There are various approaches which address this stability issue, see [63]. We choose the method proposed by [63], see also [8, 36], and precondition the system by constraining the critical degrees of freedom to active ones, $$\begin{aligned} \varvec{d}_{n,i} = \sum _{j \in J_n(i)} c_{ij} \varvec{d}_{n,j}, \end{aligned}$$ where \(\varvec{d}_{n,i}\) denotes a critical degree of freedom. The set \(J_n(i)\) contains suitably chosen active degrees of freedom and \(c_{ij}\) are the weights of this linear constraint. See [36, 63] for the technical details of this approach and [64] for an alternative approach of stabilising the finite element basis on cut cells. Note that introducing (30) into the finite element approximation (27) gives rise to a modified finite element basis, akin to the extended B-splines introduced by [63]. Effectively, the support of some shape functions in the vicinity of the immersed interface \(\Gamma _n\) is enlarged, as they now include linear combinations of shape functions that correspond to critical degrees of freedom. This implies that, even though the intersection of elements with the solid or fluid domain can be arbitrarily small, the shape function supports are bounded from below. In other words, the above introduced measure \(s_{n,i}\) of intersection between shape function support and domain of integration is always of a magnitude similar to the mesh size h. Note, alternatively, the ghost penalty approach from [34], where an additional term is added to the Nitsche method in order to provide stability for small cut elements. It remains to discuss the parameter \(\gamma \) of the weak incorporation of boundary conditions as introduced in (15). As shown in [16, 51], the global fluid–solid balance (12) can be interpreted as a fluid-only problem or a solid-only problem, using a space decomposition [52]. The latter is simply a problem of linear elasticity with a Robin boundary condition. Here, the parameter \(\gamma \) physically represents the stiffness of the support and \(\gamma > 0\) is a sufficient condition for the solvability of the problem [57]. For the fluid problem, on the other hand, the value of \(\gamma \) needs further attention. In order to ensure a stable inverse of the fluid saddle point problem, two conditions need to be satisfied: the \(\inf \)-\(\sup \) condition and the ellipticity of the bilinear form which only depends on \(\varvec{u}\) and \(\delta \varvec{u}\).
The former condition is already fulfilled by the choice of the finite element discretisation by Taylor-Hood elements. For the ellipticity condition, we have to analyse the term (31). Note the change of sign that occurs due to the choice of the normal vector \(\varvec{n}^{f} = - \varvec{n}\). We now require \(\tilde{a}^{f} (\varvec{u}, \varvec{u}) > 0\) for all \(\varvec{u}\ne \varvec{0}\). By introducing the abbreviation \(A(\varvec{u}, \delta \varvec{u})\) for the viscous term, this expression can be estimated as in (32). This estimate is based on the Cauchy-Schwarz inequality, Korn's inequality [57] to ensure the positivity of \(A(\varvec{u},\varvec{u})\), and a special inverse inequality (33). Details on the derivation of the estimate (32) and the inverse inequality (33) can be found in, among others, [37, 38]. Estimate (32) assures that for the choice \(\gamma > C_I^2\) the matrix block corresponding to \(A(\varvec{u}, \delta \varvec{u})\) has a stable inverse and, in combination with the \(\inf \)-\(\sup \) condition, a stable solution of the linear system is guaranteed. But the exact value of \(C_I\) from the inverse estimate (33) is not obvious. Ways to find estimates for this value are discussed in [38]. Especially attractive is the element-wise approach in which the problem is considered locally, taking into account that all elements which are strictly inside the domain contribute only the elliptic term \(A(\varvec{u}, \delta \varvec{u})\), but no interface terms, and therefore need not be considered. Following the steps of [38], and under the assumption of piece-wise linear finite element shape functions for \(\varvec{u}\), the estimate for \(C_I\) on an element \(\Omega _e\) becomes $$\begin{aligned} C_{I,e}^2 > 2 \mu ^f \frac{|\Omega _e \cap \Gamma |}{| \Omega _e \cap \Omega ^f| }. \end{aligned}$$ Note that there are possible geometric configurations in which this value is unbounded and therefore not suited to guarantee numerical stability. But the derivation of (34) does not consider the stabilisation technique that is employed here. The use of the linear constraints (30) effectively augments the support of shape functions near the boundary, and a value of the order $$\begin{aligned} \gamma > C_I^2 = \frac{\gamma _0 \mu ^f}{h}, \end{aligned}$$ where h is a measure of the element size, is sufficient even for relatively small numbers \(\gamma _0\). See [36] for a numerical study of this parameter in the context of fluid flow around moving boundaries. In our numerical results in "Example applications" section, a value of \(\gamma _0 = 1\) has been chosen unless noted otherwise and no stability issues were encountered. Interface update and particle tracking The Updated Lagrangian Method, as outlined in section "Updated Lagrangian method and linearisation", is now reconsidered. The aim is to change the solid configuration, but maintain a fixed mesh throughout the simulation. Instead of using a material mesh that moves along with the solid, as in a standard finite element method for large deformations, only the representation of the solid surface (the interface) is relocated. This is achieved by the following steps:
1. Extract a surface mesh \(\tilde{\Gamma }_n\) from the current level set data \({{\mathrm{dist}}}_{\Gamma _n}\) (Eq. (28) and the construction in Fig. 4), which delivers an explicit description of the interface.
2. Evaluate at every node \(\varvec{x}_i^\Gamma \) of this surface mesh the current displacement increment and update the coordinate of that node,
$$\begin{aligned} \varvec{x}^\Gamma _i \leftarrow \varvec{x}^\Gamma _i + (\varvec{d}_{n+1} - \varvec{d}_n)(\varvec{x}^\Gamma _i). \end{aligned}$$ The thereby updated surface mesh becomes \(\Gamma _{n+1}\) and is used for the signed distance function (29) in the next step. These surface operations require some comments. In principle, it is possible to work with one surface mesh throughout the simulation. But the surface \(\Gamma _n^h\) as seen by the immersed finite element solver (refer to Fig. 4) does not coincide with the original surface mesh \(\Gamma \). This implies that a displacement solution might not be available at the location of a node of the original surface mesh. Moreover, the use of a newly generated mesh from the level set data liberates the method from the surface mesh size: the surface mesh generated in this way always has the same resolution as the domain mesh. A pure level set approach, on the other hand, works only with an implicit geometry description, but requires an additional advection equation to be solved in every time step [48]. Such an equation poses numerical difficulties and would add another field of unknowns to our system. For this reason, we have chosen to combine both approaches. This approach of switching between explicit and implicit surface descriptions is depicted in Fig. 5. But the conversion from explicit to implicit and back to an explicit surface representation (as in the top line of this diagram) affects the volume enclosed by \(\Gamma \) and can lead to undesirable shrinking effects, especially in the case of relatively coarse meshes. For this reason, an artificial coordinate \(\tilde{\varvec{x}}_i\) is introduced before the surface location is updated, defined as $$\begin{aligned} \tilde{\varvec{x}}_i = \varvec{c} + \alpha (\varvec{x}^\Gamma _i - \varvec{c}) \quad \text {with} \quad \alpha = \root D \of { V_n / \tilde{V}_n } \end{aligned}$$ with the enclosed volumes \(V_n\) of \(\Gamma _n\) and \(\tilde{V}_n\) of \(\tilde{\Gamma }_n\), and \(\varvec{c}\) the centre of the domain \(\Omega ^s\); recall that \(D=2\) or \(D=3\) denotes the spatial dimension. Note that this coordinate scaling only compensates the gain or loss of volume incurred by the conversions between explicit and implicit surface representations. The volume changes due to the elastic deformation of the compressible material are not inhibited. Fig. 5 Sketch of the processing of the interface mesh. Given an explicit representation \(\Gamma _n\), the signed distance function is generated in order to obtain an implicit geometry representation; after the increment of the surface displacements is computed, a new explicit surface mesh \(\tilde{\Gamma }_n\) is generated whose geometry nodes are updated using the displacement increment and volume conservation between \(\Gamma _n\) and \(\tilde{\Gamma }_n\), see (37). Fig. 6 Configuration update. Change of the solid configuration from \(t_n\) to \(t_{n+1}\) and coordinate backtracking. In order to maintain the deformation history of the solid, the previous location of every grid node in the new configuration has to be determined. As shown in Fig. 6, the grid point \(\varvec{x}_i\) in the new configuration at time \(t_{n+1}\) results from its previous location \(\varvec{x}_i^*\) and the displacement increment from \(t_n\) to \(t_{n+1}\) at that location, i.e. $$\begin{aligned} \varvec{x}_i = \varvec{x}_i^* + (\varvec{d}_{n+1} - \varvec{d}_n)(\varvec{x}_i^*). \end{aligned}$$ Note that this expression is the same as in a standard Updated Lagrangian Method [44].
But the main difference is that here the new location \(\varvec{x}_i\) is given and the right-hand side is sought. Using a finite element geometry representation and the displacement trial as in (27), this equation becomes $$\begin{aligned} \varvec{x}( \varvec{\xi }_i^* ) + \varvec{d}_{n+1}(\varvec{\xi }_i^*) - \varvec{d}_n(\varvec{\xi }_i^*) - \varvec{x}_i = \varvec{0}, \end{aligned}$$ where \(\varvec{\xi }\) represents the local element coordinates and \(\varvec{\xi }^*_i\) is the solution to this nonlinear equation. In order to solve equation (39), first the element has to be found in which \(\varvec{x}^*_i = \varvec{x}(\varvec{\xi }_i^*)\) lies, and then a Newton method is used to obtain the value of \(\varvec{\xi }_i^*\). Searching for elements in a structured grid is a simple task and one can begin with the element that contains \(\varvec{x}_i - (\varvec{d}_{n+1} - \varvec{d}_n)(\varvec{x}_i)\) as an initial guess. If the Newton method does not converge in this element, its neighbours are considered. The Newton method itself consists of the iterations $$\begin{aligned} \begin{aligned} \left[ \frac{\partial \varvec{x}}{\partial \varvec{\xi }} + \frac{\partial (\varvec{d}_{n+1} - \varvec{d}_n)}{\partial \varvec{\xi }} \right] _{\varvec{\xi }=\varvec{\xi }_i^{(k)}} \Delta \varvec{\xi }&= \varvec{x}_i - \varvec{x}(\varvec{\xi }_i^{(k)}) - (\varvec{d}_{n+1} - \varvec{d}_n)(\varvec{\xi }_i^{(k)})\\ \varvec{\xi }_i^{(k+1)}&= \varvec{\xi }_i^{(k)} + \Delta \varvec{\xi }\end{aligned} \end{aligned}$$ and converges rapidly to \(\varvec{\xi }_i^*\). Once the point has been determined, the displacement history is transferred from \(\varvec{x}_i^*\) to \(\varvec{x}_i\). This kind of particle tracking can be found in [45] and avoids the advection of the solid displacement typical for a fully Eulerian approach [39, 40, 42, 65]. Fluid–solid coupling In order to finalise this section, the algorithmic steps of the devised method are summarised. Given the latest known configuration at \(t_n\) by means of the solid displacements \(\varvec{d}_n\) and the surface mesh \(\Gamma _n\), the following steps are performed:
1. Geometry immersion: Compute the signed distance function (28) based on the given surface mesh \(\Gamma _n\).
2. Fluid–solid balance: Solve problem (12) with the interface term (16) by a Newton method as shown in expression (26). After convergence, the new solid displacement \(\varvec{d}_{n+1}\) and the fluid state variables \(\varvec{u}_{n+1}\) and \(p_{n+1}\) are known. The spatial discretisation is performed as described in this section.
3. Geometry update: Extract a surface mesh \(\tilde{\Gamma }_n\) from the signed distance function. Update the node locations of this mesh based on the displacement increment \((\varvec{d}_{n+1} - \varvec{d}_n)\), taking into account the re-scaling as explained above. This yields the new interface location \(\Gamma _{n+1}\) and implies the solid and fluid domain locations.
4. Backtrack nodal locations: For every finite element node in the new solid domain \(\Omega _{n+1}^s\), find its previous location in \(\Omega ^s_n\). Transfer the field variables from the previous to the new location.
With the end of the last step, the new configuration at \(t_{n+1}\) is completely determined and the solid displacement history is known at every finite element node in \(\Omega _{n+1}^s\). A sketch of the Newton iteration (40) on a single element is given below.
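The following is a minimal sketch of the Newton iteration (40) on a single bilinear quadrilateral element; the element search and the neighbour fallback described above are omitted for brevity.

```python
import numpy as np

def shape_funs(xi):
    """Bilinear shape functions on the reference square [-1,1]^2
    and their derivatives with respect to the local coordinates."""
    s, t = xi
    N = 0.25 * np.array([(1-s)*(1-t), (1+s)*(1-t), (1+s)*(1+t), (1-s)*(1+t)])
    dN = 0.25 * np.array([[-(1-t), -(1-s)],
                          [ (1-t), -(1+s)],
                          [ (1+t),  (1+s)],
                          [-(1+t),  (1-s)]])
    return N, dN

def backtrack_local_coords(x_target, X_e, inc_e, tol=1e-10, max_iter=20):
    """Newton iteration (40) on one element: find xi* such that
    x(xi*) + (d_{n+1} - d_n)(xi*) = x_target.

    X_e   : (4, 2) nodal coordinates of the element
    inc_e : (4, 2) nodal values of the increment d_{n+1} - d_n
    """
    xi = np.zeros(2)                                  # start at element centre
    for _ in range(max_iter):
        N, dN = shape_funs(xi)
        res = x_target - N @ X_e - N @ inc_e          # right-hand side of (40)
        if np.linalg.norm(res) < tol:
            break
        J = X_e.T @ dN + inc_e.T @ dN                 # d(x + d_increment)/d(xi)
        xi = xi + np.linalg.solve(J, res)             # Newton update
    return xi
```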
Example applications In all examples, the elastic behaviour of the solid is a compressible Neo-Hookean material [53] with strain energy expressed in terms of the deformation gradient \(\varvec{F}\) $$\begin{aligned} W(\varvec{F}) = \frac{\lambda }{2} (\log J)^2 - \mu ^s \log J + \frac{\mu ^s}{2} ({{\mathrm{tr}}}\varvec{C} - 3 ), \quad J = \det \varvec{F} \quad \text {and} \quad \varvec{C} = \varvec{F}^\top \varvec{F}. \end{aligned}$$ The material parameters \(\lambda \) and \(\mu ^s\) are the Lamé parameters expressed in terms of Young's modulus E and Poisson's ratio \(\nu \), that is \(\lambda = \frac{E \nu }{(1-2\nu ) (1+\nu )}\) and \(\mu ^s = \frac{E}{2(1+\nu )}\). The fluid is incompressible Newtonian according to (5) with the dynamic viscosity \(\mu ^f\). Fig. 7 Test problem for solid solver. Geometry and fixed mesh (left); Lagrangian mesh for comparison (middle); and comparison of a measured displacement \(\varvec{u}^*\) (right). For the finite element method, we use lowest-order Taylor-Hood elements (\(Q_2/Q_1\)) for the fluid (velocity/pressure) and \(Q_1\) elements for the solid displacement [57]. An Euler backward time integration is used with constant step size \(\Delta t\). The coupling parameter \(\gamma \) is chosen as \(\gamma = \alpha \mu ^f / h\), where h is the characteristic element size, see Eq. (35). If not indicated otherwise, \(\alpha =1\) is the default choice. The convergence criterion for the Newton method (26) is the \(\ell ^2\) norm of the vector representing the displacement increment \(\Delta \varvec{d}\) divided by the number of solid degrees of freedom. The tolerance in all computations is chosen as \(10^{-10}\) and at most three iterations are observed. The same tolerance is used for the Newton method (40) in the coordinate backtracking. Note that in the examples the effective mesh size can be arbitrarily small while the interface moves through the finite element mesh. But due to the stabilisation, as presented in "Cut elements" section, the size of the intersection of the shape function support with the integration domain does not shrink to zero but remains of the order of h. For this reason, the use of the element size h of the embedding domain grid is a valid mesh characteristic. Before focusing on the fluid–solid applications, a pure solid example is considered. The aim of this example is to demonstrate the viability and accuracy of the devised Updated Lagrangian method for the solid domain based on the immersed finite element method with particle backtracking. Updated Lagrangian method In a first, preliminary example, the accuracy of the updated Lagrangian method, as introduced in section "Updated Lagrangian method and linearisation", together with the particle backtracking of the "Interface update and particle tracking" section is assessed. Therefore, a problem consisting only of a solid domain without fluid interaction is devised. The undeformed geometry of the solid domain is depicted in the left picture of Fig. 7. Unlike in the derivation of the presented fluid–solid coupling method and in the remaining examples, the boundary of the solid domain \(\Omega ^s\) partially overlaps with the boundary \(\partial \Omega \) of the embedding domain. Along this overlap the displacements are set to zero by employing essential boundary conditions. Moreover, the initial solid domain \(\Omega ^s_0\) has a subdomain \(\tilde{\Omega }^s_{0}\) in which a constant downward body force \(\varvec{f}_0 = - \tfrac{n}{2} \varvec{e}_2\) is applied, where \(0\le n \le 4\) is the number of the load step. A code sketch of the material law (41) is given below.
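The strain energy (41) and the Lamé relations are straightforward to implement; a minimal sketch (note that, as written, (41) uses the three-dimensional form \({{\mathrm{tr}}}\varvec{C} - 3\)):

```python
import numpy as np

def lame_parameters(E, nu):
    """Lamé parameters from Young's modulus E and Poisson's ratio nu."""
    lam = E * nu / ((1.0 - 2.0 * nu) * (1.0 + nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def neo_hooke_energy(F, lam, mu):
    """Strain energy (41) of the compressible Neo-Hookean material:
    W = lam/2 (log J)^2 - mu log J + mu/2 (tr C - 3),
    with J = det F and C = F^T F."""
    J = np.linalg.det(F)
    C = F.T @ F
    logJ = np.log(J)
    return 0.5 * lam * logJ**2 - mu * logJ + 0.5 * mu * (np.trace(C) - 3.0)

# Example with the parameters of the solid test problem:
lam, mu = lame_parameters(E=100.0, nu=0.3)
print(neo_hooke_energy(np.eye(3), lam, mu))   # 0.0 for the undeformed state
```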
The region of the applied body force is shown darker in the picture. The solid is hyperelastic according to (41) with parameters \(E=100\) and \(\nu =0.3\). In order to assess the quality of the solution of this problem, a pure Lagrangian approach with a body-conforming mesh is used for comparison, and the middle picture in Fig. 7 shows one of the meshes. Since the computational domain changes throughout the computation in the Updated Lagrangian approach, the applied body force in the latest known configuration \(\Omega ^s_n\) becomes $$\begin{aligned} \varvec{f}_n = \det \varvec{F}_0^n \varvec{f}_0. \end{aligned}$$ Figure 8 shows the deformed solid domain for all load steps and, for visual comparison, the deformed Lagrangian mesh for the final step. Without noticeable difference between the two approaches, the boundary of the solid domain moves freely through the fixed mesh. Fig. 8 Load steps of solid test problem. Deformed solid domain \(\Omega _n^s\) for the load steps \(0 \le n \le 4\) coloured by the signed distance function and the deformed Lagrangian mesh for the final step \(n=4\) (rightmost picture). For a more quantitative comparison, the right graph in Fig. 7 shows the modulus of the measured displacement \(\varvec{u}^*\) for the two approaches and various mesh sizes. Clearly, both approaches converge to very similar numerical values of the measured displacement. Note that the presented method for the solid part is tailored towards the analysis of large deformation problems. In a linearised setting, the distinction between the configurations as shown in Fig. 2 does not make sense and there would be no need for the update of a configuration. For this reason, analytic solutions for the convergence study are hardly available and we therefore rely on numerical reference solutions. Moreover, the methods we compare in this section operate in different configurations, so that we are restricted to comparing the results at individual points. Shear flow–convergence analysis Whereas the previous section shows the convergence of the results for a problem with only a solid domain, we aim to assess here the convergence behaviour of the method for a fluid–solid interaction problem. To this end, we compare the numerical solution from various grid sizes with the outcome of a highly refined grid. Due to the lack of an analytical solution, we use this fine-grid result as the reference solution in order to quantify the convergence. This concept of assessing the accuracy of the method has been employed by other authors, see, for instance, [33]. Fig. 9 Solid in shear flow. Setup (left), flow magnitude and deformed shape (middle), and convergence results for distance function and velocity field (right). To this end, we consider the problem as described in Fig. 9, a fully immersed hyperelastic solid domain \(\Omega ^s\) of the depicted shape whose material points in the dark red circle are held fixed. The radius of the upper and lower circular boundaries of the solid is 0.15, whereas the circle of fixed material points is half as large, with a radius of 0.075. The distance between the centres of the circular boundaries is 0.5 and the whole fluid box has the dimensions \(1.5 \times 1.5\). The surrounding fluid box is subject to a prescribed shear flow, which leads to a bending of the solid body. The fluid is Newtonian with viscosity \(\mu ^f=1\) and the parameters for the hyperelastic material as in (41) are \(E=1000\) and \(\nu =0.3\).
For the simulation, 10 time steps with step size \(\Delta t = 0.2\) are used, which yield the deformed shape shown in the middle picture of Fig. 9 and which corresponds to a quasi-static state. As reference solution, a fine grid of \(480\times 480\) elements, i.e. \(h=0.003125\), is used. The right graphic in Fig. 9 shows the \(L_2\)-norm errors of the distance function \({{\mathrm{dist}}}_\Gamma \) and the velocity field \(\varvec{u}\) for various grid sizes h with respect to the chosen reference at the final step of the simulation at \(t=2\). One can see that both measured errors behave similarly. They have an approximately linear decay for coarse grids, followed by an order higher than linear. The final order, which appears to be more than quadratic, is clearly owed to the choice of reference solution and is not a characteristic of the method. Due to the choice of the time stepping method, see (17), the convergence order is impeded and does not reach the quadratic behaviour that linear finite elements commonly exhibit for linear static problems [57]. Moreover, one has to bear in mind that the solution of nonlinear systems in every time step by the Newton method (26) and the particle backtracking (40) contribute to the overall error of the method. Shear flow–parameter study Consider the setup as depicted in the left picture of Fig. 10. An initially circular solid object with radius R is placed at the centre of a fluid box of size \(2L \times 2L\). The fluid has the viscosity \(\mu ^f\) and the solid the material parameters E and \(\nu \). At the top and bottom boundaries a horizontal velocity is prescribed in positive and negative directions, respectively. The entire computational domain \(\Omega = \Omega ^f \cup \Omega ^s\) is discretised by \(N \times N\) elements, leading to a constant mesh size of \(h = 2L / N\). The initial situation is the undeformed circle as shown in the picture; at \(t > 0\) the velocity boundary condition \(\bar{\varvec{u}}\) is applied. All simulations cover the time interval \(0 \le t \le 20\). Fig. 10 Shear flow analysis. Computational setup (left), deformed shapes of the solid for various moduli E (middle), and the definition of an equivalent ellipse (right, here the case \(E=2\) is shown by the thick black line). In the following, most of the problem parameters related to discretisation, material behaviour, geometry and boundary conditions are varied and their respective influence on the solution is studied. The choice of parameters is given in Table 1, where the standard value and the possible variations of each parameter are shown. To give an impression of the various outcomes, the right picture in Fig. 10 shows the shapes of the immersed interface \(\Gamma \) at \(t=20\) for various solid stiffness parameters E while keeping all other parameters at their standard value. Table 1 Parameters used in two-dimensional shear flow problem. As can be seen from the deformed shapes in Fig. 10, the circle converts into an ellipse-like shape, with the deformation obviously greater for the lower material stiffness E. In the following analysis of the influence of the parameters of Table 1, the shape of the solid domain is used. Therefore, the moments of inertia (43) of the solid domain are computed. In this two-dimensional analysis, the \(I_{ij}\) form a symmetric \(2\times 2\)-matrix whose eigenvalues \(I_1\) and \(I_2\) are the principal moments of inertia. We assume \(I_1 \le I_2\) in the following; a sketch of the assembly of \(I_{ij}\) is given below.
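The inertia matrix can be assembled from the quadrature points of the cut-cell triangulation of the solid domain. This sketch assumes the common centred definition of the second-moment matrix; the exact form of (43) is not reproduced in the text above.

```python
import numpy as np

def inertia_matrix(points, weights):
    """Second-moment (inertia) matrix of the solid domain, assembled from
    quadrature points and weights of its triangulation. Assumes the common
    definition I_ij = int (|x-c|^2 delta_ij - (x-c)_i (x-c)_j) dV about
    the centroid c (an assumption; (43) is not shown in this extract)."""
    c = np.average(points, axis=0, weights=weights)      # centroid
    dim = points.shape[1]
    I = np.zeros((dim, dim))
    for x, w in zip(points - c, weights):
        I += w * (np.dot(x, x) * np.eye(dim) - np.outer(x, x))
    return I
```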
Since the observed shapes are almost like a tilted ellipse, these principal moments of inertia allow one to determine the minor and major radii, \(r_1\) and \(r_2\), of an equivalent ellipse. With these radii, the eccentricity e and a shape parameter \(D_{12}\) [28] are defined: $$\begin{aligned} e = \sqrt{1 - \frac{r_1^2}{r_2^2}} = \sqrt{1 - \frac{I_1}{I_2}} \quad \text {and} \quad D_{12} = \frac{r_2 - r_1}{r_2 + r_1} = \frac{\sqrt{I_2} - \sqrt{I_1}}{\sqrt{I_2} + \sqrt{I_1}}. \end{aligned}$$ Moreover, the angle \(\theta \) by which the ellipse deviates from a horizontal orientation is also studied. Fig. 11 Mesh refinement. Temporal evolution of the shape eccentricity e (left) and convergence of the time-averaged quantities \(\bar{e}\) and \(\bar{\theta }\) (right). At first, the method parameters N, \(\Delta t\) and \(\alpha \) are considered. It turned out that the multiplier \(\alpha \) of the boundary term, as in the paragraph leading to Equation (35), did not show any noticeable influence on the monitored quantities, and it is thus omitted from the rest of the discussion. The left picture of Fig. 11 shows the eccentricity e plotted over the time of analysis \(0 \le t \le 20\) for the variation of N, the number of elements per direction. One can clearly see that there is a fluctuation of the values throughout time and that the amplitude of this fluctuation diminishes with increasing values of N. This becomes clearer when looking at the time-averaged values of e and \(\theta \) in the right picture. Here \(\bar{a}\) denotes the average of the quantity a over the time interval \(5 \le t \le 20\). The values of the average angle and the average eccentricity tend towards a specific value with increasing N, see Fig. 11. Similar observations are made for the time step variations. Now, the influence of the solid material's stiffness parameter E on the outcome of the solution is considered. The deformed shapes at \(t=20\) are already shown in Fig. 10. A decreasing value of E leads to a more pronounced flattening of the solid object and a lower angle of inclination \(\theta \). These observations are confirmed by the plots in Fig. 12, where the evolution of e and \(\theta \) are shown for all considered values of E. Fig. 12 Varying solid stiffness. Temporal evolution of the shape eccentricity e (left) and the principal direction \(\theta \) (right). The deformation of red blood cells in simple shear flow has been studied in [28] for various membrane stiffnesses. This study is based on the common model of these cells as a liquid-filled membrane. Nevertheless, there are strong similarities between the findings in [28] and the results shown in Fig. 12 in this work. In both cases, the shape deformation from initially spherical to elliptical and the angle variation take place mainly for \(t<3\). Moreover, a decrease of the material stiffness (membrane stiffness in [28] and solid stiffness E here) leads to an increase in shape eccentricity and a deviation of the principal direction from \(\pi /4\). Despite the different constitutive models (solid vs. membrane), the qualitative outcome is very similar. Fig. 13 Velocity fields. Stationary velocity for \(E=20\) (left) and \(E=2\) (right). Even though the fluctuations of e diminish for larger times, this state of equilibrium is dynamic. This means that the immersed body experiences a non-zero velocity along its surface \(\Gamma \) and thus is constantly moving in the circumferential direction. This type of motion is referred to as tank-treading and is typical of vesicles [66].
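Collecting the shape measures defined above into code, with the major axis taken as the principal axis belonging to the smaller moment \(I_1\):

```python
import numpy as np

def shape_parameters(I):
    """Eccentricity e, shape parameter D12 and inclination angle theta of
    the equivalent ellipse, from the symmetric 2x2 inertia matrix I."""
    vals, vecs = np.linalg.eigh(I)      # eigenvalues in ascending order
    I1, I2 = vals                       # principal moments, I1 <= I2
    e = np.sqrt(1.0 - I1 / I2)
    D12 = (np.sqrt(I2) - np.sqrt(I1)) / (np.sqrt(I2) + np.sqrt(I1))
    # the major axis is the principal axis with the smaller moment I1
    v = vecs[:, 0]
    theta = np.arctan2(v[1], v[0])
    return e, D12, theta
```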
Snapshots of the velocity fields in fluid and solid at \(t=20\) are shown in Fig. 13 for two different choices of E. Clearly, the solid body is subject to a rotating flow field and responds with internal rotation. For a lower material stiffness, this rotational motion is increased. It is important to note that this is not a rigid body rotation, because the solid domain's outer shape, which is non-circular, remains fixed while the material moves. The tank-treading is also confirmed by marking individual material particles and tracing their paths throughout the simulation. Figure 14 shows the trajectories for three material points and the same choices of E as in the previous figure. These particles move to their almost elliptical orbits during the initial deformation and then stay on these paths throughout the considered time. Fig. 14 Particle trajectories. Selected solid particles for \(E=20\) (left) and \(E=2\) (right). Next, the deformed shapes at \(t=20\) are displayed for variations of the fluid viscosity \(\mu ^f\), the applied velocity f and the radius of the circular solid domain at \(t=0\). For the chosen values of \(\mu ^f\), see Table 1, one observes that a higher viscosity increases the deformation of the solid body. Clearly, the viscous forces of the fluid flow are increased and have a stronger effect on the solid, see the left picture in Fig. 15. Obviously, the same happens for an increase of the applied velocity, as shown in the middle picture. The right picture of Fig. 15 shows the deformed shapes for various sizes of the solid body. Whereas the smallest solid with \(R=0.2\) poses almost no obstruction to the fluid flow, there is an increasing deformation (i.e. deviation from the initial circle) visible for larger values of R. The variation of the size of the fluid box, namely the half-width parameter L, did not reveal any significant alterations in the solution. Fig. 15 Deformed solid bodies. Representation at \(t=20\) for various fluid viscosities \(\mu ^f\) (left), applied velocities f (middle), and initial radii R (right). Fig. 16 Cauchy stress. Colour contours of the solid and fluid stress components \(\sigma _{ij}\) for the times \(t=1\), 2, 3, and 5 using \(E=2\). Finally, we consider for the softest solid with \(E=2\) the variation of the stress components \(\sigma _{ij}\) in the fluid and the solid domains at some time instants. Figure 16 displays contour plots of the stress components for four different times. It has to be emphasised that the plotted stress components refer to the Cartesian coordinate axes and not to the principal axes of the deformed solid. Note that there are strong indications that stresses regulate substantial biological processes in living cells [5]. The proposed method allows for future applications in which a detailed stress analysis is required for a deeper insight into such processes. Flow in a narrowed tube Fig. 17 Constricted pipe. Computational setup. Table 2 Parameter variations in constricted pipe flow. A circular solid object is placed in a pipe with a geometric constriction and subjected to a forced flow. The geometric dimensions are shown in Fig. 17 and the boundary conditions are a parabolic inflow from the left with average velocity \(\bar{\varvec{u}} = (0.1,0)\), an open boundary at the right end, and no-slip walls at the top and bottom boundaries. The constriction is defined by a cubic polynomial that reduces the pipe's diameter from 2 down to \(2\delta \), with values of \(\delta \) as given in Table 2; an assumed form of such a constriction profile is sketched below.
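The text specifies only that the constriction follows a cubic polynomial. A natural choice is the C1 cubic blend below; the transition interval \([x_0, x_1]\) and the smoothstep form are assumptions for illustration, not taken from the source.

```python
def half_width(x, x0, x1, delta):
    """Pipe half-width along the axis: 1 in the wide part, delta in the
    narrow part, blended by a cubic polynomial with zero slope at both
    ends of the transition interval [x0, x1] (an assumed smoothstep)."""
    if x <= x0:
        return 1.0
    if x >= x1:
        return delta
    t = (x - x0) / (x1 - x0)
    s = 3.0 * t**2 - 2.0 * t**3   # cubic with s(0)=0, s(1)=1, s'(0)=s'(1)=0
    return 1.0 + (delta - 1.0) * s
```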
The mesh used in all simulations is a structured \(250\times 100\) grid deformed to fit into the constricted pipe. Therefore, the elements have a constant length in the \(x_1\)-direction of \(h_1 = 0.02\). They are of square shape in the left (the wide) part of the pipe and correspondingly compressed in the \(x_2\)-direction in the right (the constricted) part. Hence, the mesh size in the vertical direction shrinks from \(h_2 = 0.02\) down to a value between 0.005 (\(\delta = 0.25\)) and 0.008 (\(\delta = 0.4\)). The size of the time steps is constant with \(\Delta t = 0.025\). The fluid viscosity is set to \(\mu ^f=1\), the solid's Poisson ratio is \(\nu =0.3\) and the stiffness modulus assumes the values from Table 2. The radius of the solid circle is always \(R=0.4\). Fig. 18 Solid deformation. Snapshots for the time instances \(t=0\), 1.25, 2.5, ..., 13.75 for the parameter choices \(E=100\) and \(\delta =0.25\). Since \(\delta \le R\) for all chosen values of \(\delta \), the solid cannot pass undeformed through the narrow pipe section. Figure 18 shows snapshots of the deformed solid at 12 different time instants between \(t=0\) and \(t=13.75\) with a constant time difference of 1.25. In this picture, the narrowest case, \(\delta = 0.25\), is shown. One can observe that the compressible solid initially shrinks, which is due to the hydrostatic pressure of the fluid environment, and moves with the fluid motion towards the constriction. The closer the solid gets to the narrow section, the more it deforms to a tampion-like shape, just wide enough to fit through the pipe. Fig. 19 Varying solid stiffness. Snapshots of the deformed solid for the time instance \(t=14\) and constriction size \(\delta =0.33\). Fig. 20 Varying constriction size \(\delta \). Snapshots of the deformed solid for the time instance \(t=14\) and a stiffness of \(E=100\); the dotted lines indicate the location of the boundary in each case. Fig. 21 Enclosed area A versus travelled distance \(X_1\). Displacement of the solid body's centroid for different stiffnesses E (left) and constriction sizes \(\delta \) (right); the dotted lines indicate the locations of the beginning and end of the transition from wide to narrow pipe sections. Figures 19, 20 show the deformed solid at the time instant \(t=14\). At this moment, the solid is located completely inside the narrow part of the pipe. Figure 19 shows the different deformed shapes for the various stiffness parameters E. Clearly, the softest material leads to the narrowest deformed state and is slightly more advanced, since it poses less disturbance to the fluid flow. In Fig. 20 the deformed shapes for a constant stiffness but different constriction sizes are shown. Obviously, a narrower pipe requires a larger deformation of the solid. Moreover, the fluid velocity is higher in the narrower section due to the mass balance of the incompressible fluid. Therefore, the solid has advanced more in the narrower pipe. Fig. 22 Velocity versus travelled distance. Varying stiffnesses E (left) and constriction sizes \(\delta \) (right); the dotted lines indicate the locations of the beginning and end of the transition from wide to narrow pipe sections. The above observations are quantified in Figs. 21, 22, where the enclosed area A of the solid body and the velocity are plotted versus the current position for all parameter variations. The initial shrinkage due to the fluid pressure is clearly visible in Fig. 21 and is more pronounced for lower values of E (softer material) and lower values of \(\delta \) (higher fluid pressure).
The area then stays constant while the solid travels towards the narrower section. When entering the transition region, it begins to shrink further, reaching a minimum size approximately upon entering the narrow section. The travelled distance is measured by the location of the centroid of the solid body with respect to its initial location. When inside the narrow section, the solid gets stretched and increases in area. The velocity of the solid body's centroid is shown in Fig. 22 for all parameter choices. At the beginning of the simulation, the solid picks up the velocity of the surrounding fluid, and this velocity increases when approaching the transition to the narrower section. Once it has entered this final part, the velocity stays approximately constant. In [2], three-dimensional experiments of the flow of liquid-filled capsules through a narrowed pipe have been carried out. Although the presented example is two-dimensional and a compressible solid rather than a capsule is subject to the fluid flow, the observed shapes [2] are similar to Figs. 19, 20. In the experiments, it has been found that with increasing capillary number Ca the rear end of the capsule becomes flatter and eventually, above some value of Ca, buckles inwards. Here, the values of Ca are \(1/E\) and do not fall into the range of inward buckling. Yet an increasing flattening with lower values of E (i.e., higher values of Ca) is visible in Fig. 19. Note that our model does not allow for contact between the immersed solid and the domain boundaries. As the solid squeezes through the narrowed section of the pipe, it never touches the boundaries; there remains a fluid gap between the interface and the boundary, see Fig. 18. This gap is an outcome of the simulation and not subject to artificially imposed distance constraints. Via the stabilisation techniques of "Cut elements" section, this implies a restriction on the mesh size. If too many neighbouring fluid elements have degenerate degrees of freedom (a geometric situation similar to a cusp), the employed stabilisation technique fails. Fig. 23 Flow visualisation. Streamlines, pressure and the surface traction for some combinations of solid stiffness E and pipe constriction \(\delta \) at time \(t=14\): the fluid domain is coloured by the fluid pressure (between 0 and 60, see colour bar) and the red line indicates the distribution of the traction \(\varvec{\sigma }\varvec{n}\) along the surface (divided by 1000). Finally, Fig. 23 shows, for a few parameter combinations, the velocity streamlines together with the fluid pressure and the surface traction \(\varvec{\sigma }\varvec{n}\). For the computation of the streamlines, the discrete velocity according to (17) has been used inside the solid domain. Clearly, the flow patterns do not differ significantly among the displayed images. But the fluid pressure is higher in case of a larger solid stiffness E, as is necessary in order to sufficiently deform the solid body. In case of a smaller pipe diameter, the fluid pressures are obviously larger. Accordingly, the surface traction becomes larger for higher values of E and smaller values of \(\delta \). This example is concluded by a comment on the advantage of immersed finite elements. The most common technique for the fully-coupled analysis of fluid–solid interaction is the Arbitrary Lagrangian-Eulerian (ALE) technique [26], in which the fluid mesh is deformed in order to accommodate the solid deformation and to maintain a usable analysis mesh.
Although ALE is a powerful method, the example presented here is expected not to be directly accessible to it. Figures 18, 19, 20 clearly show that the initial fluid mesh would be highly distorted when the solid body enters the narrow section of the pipe. Tedious re-meshing and solution mapping techniques would be required in an ALE approach. Three-dimensional shear flow Finally, a three-dimensional example is considered. Here the setup is similar to the shear flow of section "Shear flow–parameter study" but extended to the third dimension. Figure 24 (left) shows the initial configuration: a sphere of radius \(R=0.6\) located at the centre of a fluid box of dimension \([-1.4,1.4]^3\). All parameters are chosen as the standard values of Table 1 apart from the material stiffness parameter, which assumes the values \(E=5\) or \(E=10\), and the spatial discretisation is carried out by \(20^3\) elements of size \(h=0.14\). Fig. 24 Three-dimensional shear flow. Computational setup for the analysis: application of a velocity field on the top and bottom boundaries, all other boundaries remain open (left); deformed solid shapes for \(E=10\) (middle) and \(E=5\) (right) with streamlines. Fig. 25 Analysis of deformation. Principal moments of inertia of the deformed shape over time. Figure 24 displays the initial configuration and the final deformed shapes for the two chosen material parameters. As in the two-dimensional case, the solid assumes an elliptic shape that is inclined in the plane of the shear flow. In Fig. 25, the principal moments of inertia, computed from the \(3\times 3\)-matrix with coefficients (43), are shown for the time interval \(0 \le t \le 2.5\) and the two choices of E. The spheres flatten as discussed above and rotate about the axis perpendicular to the plane of shear by approximate angles of \(0.20\pi \) for \(E=10\) and \(0.17\pi \) for \(E=5\). As seen in Fig. 25, the moment of inertia around this out-of-plane axis stays almost constant throughout the simulation and is similar for both stiffnesses. The other two moments clearly deviate from their initial values as the elliptic shape in the plane of shear is formed. Conclusions A new approach for the numerical analysis of the interaction between viscous fluid flow and highly deformable solids has been presented. The method builds on previous works, such as [8, 36] for the analysis of fluid flow around moving and highly flexible boundaries. Derived from the basic balance equations for the quasi-static equilibrium of solid and fluid, the interface conditions are incorporated weakly and a global solid–fluid balance law is obtained. The formalism of an updated Lagrangian method is used for the description of the solid constituent. Its equilibrium is thus expressed in the latest known configuration and the deformation history is maintained by particle tracking between the newly computed and the previous state of the solid. By means of this choice, the advection equations and the shape derivatives in the linearisation of a fully Eulerian method are avoided. The analysis is carried out on one mesh which stays fixed in space and time. For the spatial discretisation, an immersed finite element method is employed with the mesh independent of the location of the solid–fluid configuration. By using a signed distance function, the location of the interface is given implicitly to the finite element solver.
The difficulty of accurate quadrature on the elements crossed by the interface is addressed, as is the possible ill-conditioning of the system of equations due to the small support of shape functions on such elements. Once the nonlinear iterations of the fully coupled fluid–solid system have converged and the new equilibrium has been found, the configuration is updated. To this end, an explicit representation of the surface is recovered from the level set, and this is updated by means of the displacement increments along the surface. Finally, every mesh node of the solid domain is tracked back to its previous location in order to transfer the displacement history. Due to the choice of a monolithic fluid–solid coupling, the method is unconditionally stable. Moreover, the full linearisation in the Updated Lagrangian framework leads to a fast convergence within every time step. Several example applications in two and three spatial dimensions are presented and the influence of all parameters of the method is studied. Being tailored towards the analysis of cell motility and microfluidic experimentation, we consider shear flow examples which reveal the basic characteristics of liquid-filled vesicles, such as tumbling and tank-treading behaviour. Moreover, the passing of a deformable object through a narrowed tube of diameter smaller than the body is analysed, as are the trajectories of an elastic solid in the vortex of a driven cavity flow. Especially the former application provides an important step in the direction of computational analysis of cell migration in confined spaces [67]. In view of these examples, we highlight the following features of the devised method. Based on simple balance laws, virtually any material law of solid and fluid constituents can be incorporated. In particular, active behaviour, like growth or self-propulsion, is a feasible extension. Moreover, the restriction to a quasi-static equilibrium is not essential and an adaptation of the method to fully dynamic solid–fluid interaction is straightforward. The developed immersed finite element method operates with a single analysis mesh for solid and fluid that is not subject to any deformation. This is particularly useful for the analysis of the narrowed tube and the driven cavity examples, in which the use of a body-fitted mesh is not possible in any standard way. Finally, we aim to emphasise that the approach presented here is virtually mesh-free: even though a volume finite element mesh is employed, it is not subject to any geometric restrictions and its generation is a trivial task.
References
1. Tarbell JM, Weinbaum S, Kamm RD. Cellular fluid mechanics and mechanotransduction. Ann Biomed Eng. 2005;33:1719–23.
2. Risso F, Collé-Paillot F, Zagzoule M. Experimental investigation of a bioartificial capsule flowing in a narrow tube. J Fluid Mech. 2006;547:149–73.
3. Tezduyar TE, Sathe S, Cragin T, Nanna B, Conklin BS, Pausewang J, Schwaab M. Modelling of fluid-structure interactions with the space-time finite elements: Arterial fluid mechanics. Int J Numer Methods Fluids. 2007;54:901–22.
4. Keller SR, Skalak R. Motion of a tank-treading ellipsoidal particle in a shear flow. J Fluid Mech. 1982;120:27–47.
5. Diamond S, Eskin S, McIntire L. Fluid flow stimulates tissue plasminogen activator secretion by cultured human endothelial cells. Science. 1989;243:1483–5.
6. Polacheck WJ, Li R, Uzel SG, Kamm RD. Microfluidic platforms for mechanobiology. Lab Chip. 2013;13:2252–67.
7. van der Meulen MC, Huiskes R. Why mechanobiology? A survey article.
J Biomech. 2002;35:401–14.
8. Rüberg T, Cirak F. A fixed-grid B-spline finite element technique for fluid-structure interaction. Int J Numer Methods Fluids. 2014;74:623–60.
9. Tian FB, Dai H, Luo H, Doyle JF, Rousseau B. Fluid-structure interaction involving large deformations: 3D simulations and applications to biological systems. J Comput Phys. 2014;258:451–69.
10. Lim C, Zhou E, Quek S. Mechanical models for living cells - a review. J Biomech. 2006;39:195–216.
11. Ohayon R, Felippa C. Advances in computational methods for fluid-structure interaction. Comput Methods Appl Mech Eng. 2001;190:2977–3292.
12. Bazilevs Y, Takizawa K, Tezduyar TE. Special issue on computational fluid mechanics and fluid-structure interaction. Comput Mech. 2011;48:245–348.
13. Tezduyar TE, Bazilevs Y. Advances in computational fluid mechanics and fluid-structure interactions: A tribute to Yoichiro Matsumoto on the occasion of his 60th birthday. Int J Numer Methods Fluids. 2011;65:1–340.
14. Bazilevs Y, Takizawa K, Tezduyar TE. Computational fluid-structure interaction: methods and applications. Hoboken: Wiley; 2012.
15. Heil M. An efficient solver for the fully coupled solution of large-displacement fluid-structure interaction problems. Comput Methods Appl Mech Eng. 2004;193:1–23.
16. Burman E, Fernández M. Stabilization of explicit coupling in fluid-structure interaction involving fluid incompressibility. Comput Methods Appl Mech Eng. 2009;198:766–84.
17. Felippa CA, Park K, Farhat C. Partitioned analysis of coupled mechanical systems. Comput Methods Appl Mech Eng. 2001;190(24):3247–70.
18. Küttler U, Wall WA. Fixed-point fluid-structure interaction solvers with dynamic relaxation. Comput Mech. 2008;43:61–72.
19. Mittal R, Iaccarino G. Immersed boundary methods. Annu Rev Fluid Mech. 2005;37:239–61.
20. Tezduyar TE. Finite element methods for flow problems with moving boundaries and interfaces. Arch Comput Methods Eng. 2001;8:83–130.
21. Takizawa K, Tezduyar TE. Multiscale space-time fluid-structure interaction techniques. Comput Mech. 2011;48:247–67.
22. Küttler U, Gee M, Förster C, Comerford A, Wall W. Coupling strategies for biomedical fluid-structure interaction problems. Int J Numer Methods Biomed Eng. 2010;26:305–21.
23. Takizawa K, Wright S, Moorman C, Tezduyar TE. Fluid-structure interaction modeling of parachute clusters. Int J Numer Methods Fluids. 2011;65:286–307.
24. Kramer R, Cirak F, Pantano C. Fluid-structure interaction simulation of an inflatable aerodynamic tension-cone decelerator. AIAA J. 2010;4608.
25. Bazilevs Y, Hsu MC, Scott M. Isogeometric fluid-structure interaction analysis with emphasis on non-matching discretizations, and with application to wind turbines. Comput Methods Appl Mech Eng. 2012;249:28–41.
26. Hirt C, Amsden AA, Cook J. An arbitrary Lagrangian-Eulerian computing method for all flow speeds. J Comput Phys. 1974;14:227–53.
27. Peskin C. The immersed boundary method. Acta Numer. 2002;11:479–517.
28. Eggleton CD, Popel AS. Large deformation of red blood cell ghosts in a simple shear flow. Phys Fluids. 1998;10:1834–45.
29. Zhang L, Gerstenberger A, Wang X, Liu WK. Immersed finite element method. Comput Methods Appl Mech Eng. 2004;193:2051–67.
30. Pozrikidis C. Interfacial dynamics for Stokes flow. J Comput Phys. 2001;169:250–301.
31. Veerapaneni SK, Gueyffier D, Zorin D, Biros G. A boundary integral method for simulating the dynamics of inextensible vesicles suspended in a viscous fluid in 2D. J Comput Phys. 2009;228(7):2334–53.
32. Veerapaneni SK, Rahimian A, Biros G, Zorin D. A fast algorithm for simulating vesicle flows in three dimensions. J Comput Phys.
2011;230(14):5610–34.
33. Valkov B, Rycroft CH, Kamrin K. Eulerian method for fluid-structure interaction and submerged solid-solid contact problems. 2014.
34. Burman E, Fernández MA. An unfitted Nitsche method for incompressible fluid-structure interaction using overlapping meshes. Comput Methods Appl Mech Eng. 2014;279:497–514.
35. Boffi D, Gastaldi L. A finite element approach for the immersed boundary method. Comput Struct. 2003;81(8):491–501.
36. Rüberg T, Cirak F. Subdivision-stabilised immersed B-spline finite elements for moving boundary flows. Comput Methods Appl Mech Eng. 2011;209–212:266–83.
37. Hansbo A, Hansbo P. An unfitted finite element method, based on Nitsche's method, for elliptic interface problems. Comput Methods Appl Mech Eng. 2002;191:5537–52.
38. Dolbow J, Harari I. An efficient finite element method for embedded interface problems. Int J Numer Methods Eng. 2009;78:229–52.
39. Laadhari A, Ruiz-Baier R, Quarteroni A. Fully Eulerian finite element approximation of a fluid-structure interaction problem in cardiac cells. Int J Numer Methods Eng. 2013;96:712–38.
40. Richter T, Wick T. Finite elements for fluid-structure interaction in ALE and fully Eulerian coordinates. Comput Methods Appl Mech Eng. 2010;199:2633–42.
41. He P, Qiao R. A full-Eulerian solid level set method for simulation of fluid-structure interactions. Microfluid Nanofluid. 2011;11(5):557–67.
42. Dunne T. An Eulerian approach to fluid-structure interaction and goal-oriented mesh adaptation. Int J Numer Methods Fluids. 2006;51(9–10):1017–39.
43. Wick T. Fully Eulerian fluid-structure interaction for time-dependent problems. Comput Methods Appl Mech Eng. 2013;255:14–26.
44. Bathe KJ, Ramm E, Wilson EL. Finite element formulations for large deformation dynamic analysis. Int J Numer Methods Eng. 1975;9:353–86.
45. Armero F, Love E. An arbitrary Lagrangian-Eulerian finite element method for finite strain plasticity. Int J Numer Methods Eng. 2003;57:471–508.
46. Fernández MÁ, Moubachir M. A Newton method using exact Jacobians for solving fluid-structure coupling. Comput Struct. 2005;83:127–42.
47. Nitsche J. Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, vol. 36, Springer; 1971. p. 9–15.
48. Sethian JA. Theory, algorithms, and applications of level set methods for propagating interfaces. Acta Numer. 1996;5:309–95.
49. Batchelor GK. An introduction to fluid dynamics. Cambridge: Cambridge University Press; 2000.
50. Ogden RW. Non-linear elastic deformations. New York: Courier Dover Publications; 1997.
51. Fernández M, Gerbeau JF, Grandmont C. A projection semi-implicit scheme for the coupling of an elastic structure with an incompressible fluid. Int J Numer Methods Eng. 2007;69:794–821.
52. LeTallec P, Mouro J. Fluid structure interaction with large structural displacements. Comput Methods Appl Mech Eng. 2001;190:3039–67.
53. Bonet J. Nonlinear continuum mechanics for finite element analysis. Cambridge: Cambridge University Press; 1997.
54. Stenberg R. On some techniques of approximating boundary conditions in the finite element method. J Comput Appl Math. 1995;63:139–48.
55. Annavarapu C, Hautefeuille M, Dolbow JE. A robust Nitsche's formulation for interface problems. Comput Methods Appl Mech Eng. 2012;225:44–54.
56. Bangerth W, Rannacher R. Finite element approximation of the acoustic wave equation: Error control and mesh adaptation. East-West J Numer Math. 1999;7:263–82.
57. Ern A, Guermond JL. Theory and practice of finite elements. New York: Springer; 2004.
58. Mauch S.
Efficient algorithms for solving static hamilton-jacobi equations. PhD Thesis, Calinfornia Institute of Technology; 2003. Hughes TJ. The finite element method: linear static and dynamic finite element analysis. New York: Courier Dover Publications; 2012. Moës N, Dolbow J, Belytschko T. A finite element method for crack growth without remeshing. Int J Numer Method Eng. 1999;46:131–50. Massing A, Larson MG, Logg A. Efficient implementation of finite element methods on nonmatching and overlapping meshes in three dimensions. SIAM J Sci Comput. 2013;35:C23–47. Zeng X, Farhat C. A systematic approach for constructing higher-order immersed boundary and ghost fluid methods for fluid-structure interaction problems. J Comput Phys. 2012;231(7):2892–923. Höllig K. Finite element methods with B-splines. SIAM Front Appl Math. 2003. Schott B, Wall W. A new face-oriented stabilized XFEM approach for 2D and 3D incompressible Navier-Stokes equations. Comput Method Appl Mech Eng. 2014;276:233–65. Richter T. A fully eulerian formulation for fluid-structure-interaction problems. J Comput Phys. 2013;233:227–40. Barthès-Biesel D. Motion of a spherical microcapsule freely suspended in a linear shear flow. J Fluid Mech. 1980;100:831–53. Wolf K, te Lindert M, Krause M, Alexander S, te Riet J, Willis AL, Hoffman RM, Figdor CG, Weiss SJ, Friedl P. Physical limits of cell migration: control by ecm space and nuclear deformation and tuning by proteolysis and traction force. J Cell Biol. 2013;201:1069–84. TR derived the mathematical model for the immersed FE method and carried out the numerical implementation. JMGA defined the conception of the underlying physical models. Both authors were fully involved in the preparation of this manuscript and the interpretation of the results. All authors read and approved the final manuscript. Multiscale in Mechanical and Biological Engineering (M2BE), University of Zaragoza, María de Luna 3, 50018, Zaragoza, Spain Thomas Rüberg & José Manuel Garcí Aznar Fundación ARAID, María de Luna 11, 50018, Zaragoza, Spain Thomas Rüberg José Manuel Garcí Aznar Correspondence to Thomas Rüberg. The support of the European Research Council (ERC), through project ERC-2012-StG 306751, and of the Spanish Ministry of Economy and Competitiveness, through project DPI2012-38090-C03-01 (partly financed by the European Union through the European Regional Development Fund), is gratefully acknowledged. Rüberg, T., Aznar, J.M.G. Numerical simulation of solid deformation driven by creeping flow using an immersed finite element method. Adv. Model. and Simul. in Eng. Sci. 3, 9 (2016). https://doi.org/10.1186/s40323-016-0061-0 Immersed finite elements Computational mechanobiology Nitsche's method Computational mechanics and medicine
CommonCrawl
Three-Parameter Weibull Distribution

The Weibull distribution is a continuous probability distribution that is commonly used in reliability engineering and statistical analysis. In addition to the two-parameter Weibull distribution, there is also a three-parameter Weibull distribution. The third parameter is the location parameter, also known as the threshold parameter, which determines the point at which the distribution begins. When the threshold parameter is zero, the three-parameter Weibull distribution reduces to the two-parameter Weibull distribution.

Probability Density Function (PDF)

The probability density function (PDF) of the three-parameter Weibull distribution is given by:
$$f(x) = \frac{\alpha}{\beta} \left(\frac{x-\gamma}{\beta}\right)^{\alpha-1} e^{-\left(\frac{x-\gamma}{\beta}\right)^\alpha}$$

Cumulative Distribution Function (CDF)

The cumulative distribution function (CDF) of the three-parameter Weibull distribution is given by:
$$F(x) = 1 - e^{-\left(\frac{x-\gamma}{\beta}\right)^\alpha}$$

where alpha (α) is the shape parameter, beta (β) is the scale parameter, and gamma (γ) is the location parameter. The three-parameter Weibull distribution is often used to model failure times that have a threshold, such as the time it takes for a device to fail after it has been turned on.

Mean and Variance

The mean and variance of the three-parameter Weibull distribution can be calculated using the formulas:

Mean: $$\mu = \beta\Gamma\left(1 + \frac{1}{\alpha}\right) + \gamma$$

Variance: $$\sigma^2 = \beta^2\left[\Gamma\left(1 + \frac{2}{\alpha}\right) - \Gamma^2\left(1 + \frac{1}{\alpha}\right)\right]$$

where beta is the scale parameter, alpha is the shape parameter, gamma is the location parameter, and Γ is the gamma function.

Using Excel

There is no separate formula for the three-parameter Weibull distribution in Excel. However, you can use the two-parameter WEIBULL.DIST function by shifting x by the location parameter (γ). The syntax for this is WEIBULL.DIST(x–γ, α, β, cum), where x is the value for which you want to calculate the probability, γ is the location parameter, α is the shape parameter, and β is the scale parameter (Excel's WEIBULL.DIST takes the shape parameter before the scale parameter). The last argument, "cum", is a logical value that determines whether the function returns the cumulative distribution function (CDF) or the probability density function (PDF). If cum is TRUE, WEIBULL.DIST returns the CDF; if FALSE, it returns the PDF.

In summary, the three-parameter Weibull distribution is useful for modelling failure times and other phenomena that exhibit a threshold. It has three parameters (alpha, beta, and gamma) that can be adjusted to fit the data and make predictions.
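The formulas above are easy to check numerically. Below is a small self-contained Python sketch (my own illustration, not from the original article) that evaluates the three-parameter Weibull PDF, CDF, mean, and variance; the parameter values in the usage example are arbitrary.

```python
import math

def weibull3_pdf(x, alpha, beta, gamma_loc):
    """PDF of the three-parameter Weibull: alpha=shape, beta=scale, gamma_loc=location."""
    if x <= gamma_loc:
        return 0.0                      # distribution only begins at the threshold
    z = (x - gamma_loc) / beta
    return (alpha / beta) * z ** (alpha - 1) * math.exp(-z ** alpha)

def weibull3_cdf(x, alpha, beta, gamma_loc):
    """CDF of the three-parameter Weibull."""
    if x <= gamma_loc:
        return 0.0
    return 1.0 - math.exp(-((x - gamma_loc) / beta) ** alpha)

def weibull3_mean_var(alpha, beta, gamma_loc):
    """Mean and variance using the gamma-function formulas given above."""
    mean = beta * math.gamma(1 + 1 / alpha) + gamma_loc
    var = beta ** 2 * (math.gamma(1 + 2 / alpha) - math.gamma(1 + 1 / alpha) ** 2)
    return mean, var

# Example with shape 1.5, scale 2, threshold 0.5; the CDF call mirrors
# the Excel formula WEIBULL.DIST(3 - 0.5, 1.5, 2, TRUE).
print(weibull3_cdf(3.0, 1.5, 2.0, 0.5))
print(weibull3_mean_var(1.5, 2.0, 0.5))
```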
Tensor Calculus with Word VBA macros

Presentation of Word VBA macros for helping with tensor equations! One of the things you often have to do when working with tensor equations is to expand things like Riemann tensors, covariant derivatives and Christoffel symbols. The expansions are shown below at (1) to (6). They all involve shuffling indices and introducing dummy indices which are used in 'contractions'. These are summations over the dummy index, so they are more like an expansion than a contraction. The expansions are fiddly and after a while my eyes start to pop out. You can see that they pretty much all turn into Christoffel symbols, the ##\Gamma_{\mu\nu}^\sigma## thing. That in turn can be expressed in terms of the metric and inverse metric: ##g_{\mu\nu}\ ,\ g^{\mu\nu}##. So if you know the metrics you can write out all the Christoffel symbols in terms of coordinates - proper equations. There are only slightly fewer than ##n^3## of these in ##n## dimensions. There's a reduction to only ##n^2\left(n+1\right)/2## coefficients because ##\Gamma_{\mu\nu}^\sigma=\Gamma_{\nu\mu}^\sigma## (that's called being 'torsion free', more like drudgery free). Even when it's only six, on the surface of a sphere, that process finally pops my eyes right out of their sockets, and it's all too easy to make a mistake at any stage. And it's 40 Christoffel symbols in the four dimensions of General Relativity (a quick check of these counts is sketched below).

When I came to Exercise 3.12 on 14 October I had to expand a Riemann tensor again. So I wrote some code to do it. That was quite hard and I decided to do covariant derivatives and Christoffel symbols while I was at it. Then I suffered from mission creep and decided to fully expand those ##n^2\left(n+1\right)/2## coefficients of the Christoffel symbol. There are lots of terms like ##g^{\sigma\lambda}\partial_\mu g_{\lambda\nu}## which very often vanish; it was quite hard to work out which they were and to use that information. I had to learn about the structure of MS equations from scratch. That may be the subject of another post. Three weeks slipped by and now I have this all-singing toolbox:

LxL: Insert inline equation
x: Insert equation on new line
x|(n): Insert numbered equation in new table row
x|x|(n): Insert two part numbered equation row
Grid: Operations for enhancing tabular equations
Expand: Expand equation containing Riemann, Christoffel and/or covariant derivative
Renumber: Renumber all equations and references
To Web: Prepare document for Web/MathJax in new window
Pick up: Pick up definitions from equations
Write Metrics: Write out (& calculate) picked up metrics
Write ##\Gamma##s: Write out full expansion of Christoffel symbols using metrics and coordinates

The box at the bottom is for information and error messages, at which point it might beep.
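As a quick sanity check on those counts, here is a tiny Python sketch (mine, not part of the Word macros) that tallies the independent Christoffel symbols for a few dimensions:

```python
# Count the independent Christoffel symbols Gamma^sigma_{mu nu} in n dimensions:
# n choices of upper index times n(n+1)/2 symmetric lower-index pairs, using the
# torsion-free symmetry Gamma^sigma_{mu nu} = Gamma^sigma_{nu mu}.
def independent_christoffels(n: int) -> int:
    return n * n * (n + 1) // 2

for n in (2, 3, 4):
    print(n, independent_christoffels(n))   # prints: 2 6, 3 18, 4 40
```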
Riemann tensor
\begin{align}
R_{\ \ \ \sigma\mu\nu}^\rho=\partial_\mu\Gamma_{\nu\sigma}^\rho-\partial_\nu\Gamma_{\mu\sigma}^\rho+\Gamma_{\mu\lambda}^\rho\Gamma_{\nu\sigma}^\lambda-\Gamma_{\nu\lambda}^\rho\Gamma_{\mu\sigma}^\lambda&\phantom {10000}(1)\nonumber
\end{align}
Covariant derivative
(2,0) tensor
\begin{align}
\nabla_\mu V^{\nu\rho}=\partial_\mu V^{\nu\rho}+\Gamma_{\mu\lambda}^\nu V^{\lambda\rho}+\Gamma_{\mu\lambda}^\rho V^{\nu\lambda}&\phantom {10000}(2)\nonumber
\end{align}
vector
\begin{align}
\nabla_\mu V^\nu=\partial_\mu V^\nu+\Gamma_{\mu\lambda}^\nu V^\lambda&\phantom {10000}(3)\nonumber
\end{align}
one form
\begin{align}
\nabla_\mu\omega_\nu=\partial_\mu\omega_\nu-\Gamma_{\mu\nu}^\lambda\omega_\lambda&\phantom {10000}(4)\nonumber
\end{align}
Other tensor
\begin{align}
\nabla_\mu U_{\ \ \lambda\ \ \kappa}^{\nu\ \rho}=\partial_\mu U_{\ \ \lambda\ \ \kappa}^{\nu\ \rho}+\Gamma_{\mu\sigma}^\nu U_{\ \ \lambda\ \ \kappa}^{\sigma\ \rho}+\Gamma_{\mu\sigma}^\rho U_{\ \ \lambda\ \ \kappa}^{\nu\ \sigma}-\Gamma_{\mu\lambda}^\sigma U_{\ \ \sigma\ \ \kappa}^{\nu\ \rho}-\Gamma_{\mu\kappa}^\sigma U_{\ \ \lambda\ \ \sigma}^{\nu\ \rho}&\phantom {10000}(5)\nonumber
\end{align}
Christoffel symbol
\begin{align}
\Gamma_{\mu\nu}^\sigma=\frac{1}{2}g^{\sigma\lambda}\left(\partial_\mu g_{\lambda\nu}+\partial_\nu g_{\mu\lambda}-\partial_\lambda g_{\mu\nu}\right)&\phantom {10000}(6)\nonumber
\end{align}

Macros available at Archive2019-11-06. Pdf file here: Tensor Calculus.pdf.

About the Macros

The code is implemented in three modules of Word VBA and one form. The macro to launch the Tensor Calculus form is Main.StartAll. The form is called TensorCalculus. Most buttons call another macro. Those can all also be activated from the Quick Access toolbar, which might have advantages, except that there are so many. The following list describes the action of each button and macro in more detail. There is one module called OldMacros which mostly contains code written some time ago and reflects my lack of knowledge about MS equations. It contains a few constants which may be changed. The first has an obvious meaning; I have it set to 12, compared to my normal font size of 10. The second two are used when producing MathJax / Latex.

EquationFontSize = 12
InlineDelimiter = "##"
DisplayDelimiter = "$$"

Insert in line equation
Button: LxL, Macro: OldMacros.InsertEquationInLine
Inserts an inline equation followed by a space, which is important so as not to change the font in the rest of the line. Uses EquationFontSize.

Insert equation on a new line
Button: x, Macro: OldMacros.InsertEquationNewLine
Inserts an equation on a new line. Best used on an empty line. Uses EquationFontSize.

Insert numbered equation on a new line
Button: x|(n), Macro: OldMacros.EquationTable2
Inserts a numbered equation on a new line. Best used on an empty line. Uses EquationFontSize. Example:
\begin{align}
=&\phantom {10000}(7)\nonumber
\end{align}
The equation contains the = sign and the number is one higher than the preceding equation number. Normally the table row is a bit higher (default TableHeight = 1.29 cm) and the table has no outline. If the insertion point is immediately below a similar line, the row is formed as another row of the table.

Insert numbered two part equation on a new line
Button: x|x|(n), Macro: OldMacros.EquationTable3
Inserts a numbered two part equation on a new line. Best used on an empty line. Uses EquationFontSize. Example:
\begin{align}
x&=&\phantom {10000}(8)\nonumber
\end{align}
In this case there are two equations in the row, the first containing ##x##.
Useful when there are a few lines needed to expand an expression; see the Christoffel symbol expansions (20)-(28) below for an example.

Show / Hide borders (added for Office 365)
Buttons: Show / Hide, Macros: OldMacros.BordersAll / OldMacros.BordersNone
Shows or hides all borders on a table. The latter is useful when equations get in a mess.

Box round table (added for Office 365)
Button: Box, Macro: OldMacros.EquationTable3
Puts a big box round a table. Useful when you get to the answer. Example:
\begin{align}
\therefore x=42&\phantom {10000}(9)\nonumber
\end{align}

Pack table rows closer
Button: Pack, Macro: OldMacros.Point8cmTableRows
Reduces selected table row heights to 0.8 cm.

Expand an equation
Button: Expand, Macro: Expander.ExpandSymbols
Expands all Riemann tensors ##R_{\ \ \ bcd}^a##, covariant derivatives ##\nabla_a## and Christoffel symbols ##\Gamma_{bc}^a## according to equations (1) to (6), and further generalizations in the case of covariant derivatives. The operand of a covariant derivative must be a straightforward tensor. Brackets are not tolerated. Metric compatibility (##\nabla_\sigma g_{\mu\nu}=0##) is not recognized. Riemann tensors are recognized by the ##R##. Dummy (summation) indices are found by GetNewDummyIndex. Using indices already in the equation is avoided by FindIndexesUsed. If existing indices are all roman then the new dummy indices will be roman, otherwise greek. If all indices are used then new dummy indices cannot be provided, so some bizarre symbol is inserted. It cannot be shown by the To Web function! (There is an image of it in the original post.) Christoffel symbols generated are not re-expanded; apply Expand again.

Renumber all equations
Button: Renumber, Macro: OldMacros.RenumberEquationsCode
Renumbers all equations in the document. Useful after new equations have been added in the middle of the document. Tip: when adding new equations in the middle of a document, start the numbering at, say, 100. Prompts and assists in saving the document before starting.

To Web: Create plain text and Latex suitable for a MathJax processor
Opens a new window and copies the whole document as plain text and Latex into it. With more editing it is suitable for a web page with a Latex / MathJax processor (such as Physics Forums). (Here's how I did it. See end of the post.) The equation editor must be in Latex mode; I have not found a way to check this or change it in code. Special processing occurs at tables and equations:
Inline equations are output as inline Latex, typically with ## delimiters.
Standalone equations are output as standalone Latex, typically with $$ delimiters.
Tables are assumed to have been created as numbered equations and generate Latex code using the \begin{align}, \end{align} structure with line delimiters and the equations and numbers inside. Other tables might be destroyed!
Delimiters can be altered with the InlineDelimiter, DisplayDelimiter constants.

Pick up Coordinates
Button: Coordinates, Macro: TensorCalculus.PickUpCoordinates_Click
Picks up coordinates from an equation like ##\left\{t,r,\theta,\phi\right\}## and displays them. These are used by Write Γs (see below).

Pick up Metric / Inverse Metric
Buttons: Metric / Inv. Metric, Macros: TensorCalculus.PickUpMetric_Click, TensorCalculus.PickUpInvMetric_Click
Picks up the metric or inverse metric from a selected matrix, see (11), and displays the dimensions. These are used by Write Metrics and Write Γs (see below).

Write metrics
Button: Write Metrics, Macro: TensorCalculus.WriteMetrics_Click
Writes out the metrics in an equation.
If the inverse metric is undefined and the metric is diagonal, the inverse metric will be calculated.

Write Christoffel Symbols
Button: Write Γs, Macro: Expander.WriteChristoffelSymbols
Writes out all the Christoffel symbols. The coordinates and both metrics must be loaded and of the same dimension. Christoffel symbols are in the form ##\Gamma_{r\theta}^\theta##, where ##r,\theta,\phi## are the coordinates, unless the first or second coordinate is ##x##, in which case they are in the form ##\Gamma_{13}^2##. Zero components in the metrics are eliminated. Derivatives of expressions not containing the variable being differentiated with respect to are also eliminated (a Python analogue of this test is sketched after the enhancements list below). For example
\begin{align}
\frac{\partial}{\partial x}\left({tz}^2\right)\rightarrow0&\phantom {10000}(10)\nonumber
\end{align}
Identical terms in ##g^{\sigma\lambda}\partial_\mu g_{\lambda\nu}+g^{\sigma\lambda}\partial_\nu g_{\mu\lambda}-g^{\sigma\lambda}\partial_\lambda g_{\mu\nu}## may be identified and added or subtracted. Further differentiation and simplification is not attempted. See (20) to (28) for examples.

Invert equation
Button: -, Macro: Main.InvertEquation
There is no button for this bit of code, which inverts an equation. It is a stub for testing InvertOmath, which is used to calculate inverse metrics. It illustrates some of the difficulties in expression manipulation.

This code will probably not work on Apple Macs unless operating with normal Word in a Windows shell. VBA help tells us that Visual Basic for the Macintosh does not support Unicode strings: ChrW(n) cannot return all Unicode characters for n values in the range of 128–65,535, as it does in the Windows environment. Instead, ChrW(n) attempts a "best guess" for Unicode values n greater than 127, so you should not use ChrW in the Macintosh environment. Use of Unicode strings is unavoidable here.

Possible Enhancements
Add Undo, see https://docs.microsoft.com/en-us/office/vba/word/concepts/working-with-word/working-with-the-undorecord-object
Discover how to control Latex on/off. Posted a question on msofficeforums.
Return focus to the document after a button is pressed in the Tensor Calculus dialog.
Renumber all equations should use the progress/error window in the dialog.
Avoid use of the function ToUnicode now that I understand more!
Pick up the metric from an equation such as ##{ds}^2={d\theta}^2+\sin^2{\theta}{d\phi}^2##.
Extract and replace Latex for use on compatible websites that do differentiation etc.
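As a rough illustration of the elimination rule behind (10), here is a small Python/SymPy analogue (my sketch, not the actual VBA logic): a partial derivative term is dropped whenever the expression does not contain the differentiation variable.

```python
import sympy as sp

t, x, z = sp.symbols('t x z')
expr = t * z**2

# Mimic the macro's test: if the expression does not involve x at all,
# the derivative term is eliminated as zero without any differentiation.
if not expr.has(x):
    print(f"d/dx({expr}) -> 0 (term eliminated)")
else:
    print(sp.diff(expr, x))
```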
Examples and notes

Surface of a sphere (2-sphere)

Ex 3.05 2-sphere geodesics and parallel transport.docx
\begin{align}
g_{ij}=\left(\begin{matrix}\mathrm{1}&\mathrm{0}\\\mathrm{0}&\sin^2{\theta}\\\end{matrix}\right)\ \ ,\ \ g^{ij}=\left(\begin{matrix}\mathrm{1}&\mathrm{0}\\\mathrm{0}&\sin^{-2}{\theta}\\\end{matrix}\right)&\phantom {10000}(11)\nonumber
\end{align}
coordinates ##\left(\theta,\phi\right)##

Alternate metric (with strict italicisation)
\begin{align}
{ds}^2={d\theta}^2+\sin^2{\theta}{d\phi}^2&\phantom {10000}(12)\nonumber
\end{align}
The Christoffel symbols were all 0 except
\begin{align}
\Gamma_{\phi\phi}^\theta&=-\sin{\theta}\cos{\theta}&\phantom {10000}(13)\nonumber\\
\Gamma_{\theta\phi}^\phi=\Gamma_{\phi\theta}^\phi&=\frac{2\sin{\theta}\cos{\theta}}{2\sin^2{\theta}}=\cot{\theta}&\phantom {10000}(14)\nonumber
\end{align}

Metric near Earth (Exercise 3.06)

Coordinates
\begin{align}
\left\{t,r,\theta,\phi\right\}&\phantom {10000}(15)\nonumber
\end{align}
Metric
$$ {ds}^2=-\left(1+2\Phi\right){dt}^2+\left(1-2\Phi\right){dr}^2+r^2\left({d\theta}^2+\sin^2{\theta}{d\phi}^2\right) $$
Expanding ##\Phi##, that is
$$ {ds}^2=\left(\frac{2GM}{r}-1\right){dt}^2+\left(1+\frac{2GM}{r}\right){dr}^2+r^2{d\theta}^2+r^2\sin^2{\theta}{d\phi}^2 $$
\begin{align}
g_{\mu\nu}&=\left(\begin{matrix}-\left(1+2\Phi\right)&0&0&0\\0&\left(1-2\Phi\right)&0&0\\0&0&r^2&0\\0&0&0&r^2\sin^2{\theta}\\\end{matrix}\right)&\phantom {10000}(16)\nonumber\\
g^{\mu\nu}&=\left(\begin{matrix}-\frac{1}{\left(1+2\Phi\right)}&0&0&0\\0&\frac{1}{\left(1-2\Phi\right)}&0&0\\0&0&\frac{1}{r^2}&0\\0&0&0&\frac{1}{r^2\sin^2{\theta}}\\\end{matrix}\right)&\phantom {10000}(17)\nonumber
\end{align}
where
$$ \Phi=-\frac{GM}{r} $$
So
\begin{align}
g_{\mu\nu}&=\left(\begin{matrix}\left(\frac{2GM}{r}-1\right)&\mathrm{0}&\mathrm{0}&\mathrm{0}\\\mathrm{0}&\left(1+\frac{2GM}{r}\right)&\mathrm{0}&\mathrm{0}\\\mathrm{0}&\mathrm{0}&r^2&\mathrm{0}\\\mathrm{0}&\mathrm{0}&\mathrm{0}&r^2\sin^2{\theta}\\\end{matrix}\right)&\phantom {10000}(18)\nonumber\\
g^{\mu\nu}&=\left(\begin{matrix}\frac{r}{2GM-r}&\mathrm{0}&\mathrm{0}&\mathrm{0}\\\mathrm{0}&\frac{r}{2GM+r}&\mathrm{0}&\mathrm{0}\\\mathrm{0}&\mathrm{0}&\frac{1}{r^2}&\mathrm{0}\\\mathrm{0}&\mathrm{0}&\mathrm{0}&\frac{1}{r^2\sin^2{\theta}}\\\end{matrix}\right)&\phantom {10000}(19)\nonumber
\end{align}
Eliminating 0's and duplicates we got the following. I simplified the expressions by hand after the ##\rightarrow## sign. The result is the same as I got so laboriously in Exercise 3.06. Hurrah.
\begin{align}
\Gamma_{tr}^t&=\Gamma_{rt}^t=\frac{1}{2}\left(\frac{r}{2GM-r}\frac{\partial}{\partial r}\left(\frac{2GM}{r}-1\right)\right)\rightarrow\frac{1}{2}\frac{r}{2GM-r}\frac{-2GM}{r^2}=\frac{GM}{r^2-2GMr}&\phantom {10000}(20)\nonumber\\
\Gamma_{tt}^r&=\frac{1}{2}\left(-\left(\frac{r}{2GM+r}\frac{\partial}{\partial r}\left(\frac{2GM}{r}-1\right)\right)\right)\rightarrow\frac{1}{2}\frac{-r}{2GM+r}\frac{-2GM}{r^2}=\frac{GM}{r^2+2GMr}&\phantom {10000}(21)\nonumber\\
\Gamma_{rr}^r&=\frac{1}{2}\left(\frac{r}{2GM+r}\frac{\partial}{\partial r}\left(1+\frac{2GM}{r}\right)\right)\rightarrow\frac{1}{2}\frac{r}{2GM+r}\frac{-2GM}{r^2}=\frac{-GM}{r^2+2GMr}&\phantom {10000}(22)\nonumber\\
\Gamma_{\theta\theta}^r&=\frac{1}{2}\left(-\left(\frac{r}{2GM+r}\frac{\partial}{\partial r}\left(r^2\right)\right)\right)\rightarrow\frac{1}{2}\frac{-2r^2}{2GM+r}=\frac{-r^2}{2GM+r}&\phantom {10000}(23)\nonumber\\
\Gamma_{\phi\phi}^r&=\frac{1}{2}\left(-\left(\frac{r}{2GM+r}\frac{\partial}{\partial r}\left(r^2\sin^2{\theta}\right)\right)\right)\rightarrow\frac{1}{2}\frac{-2r^2\sin^2{\theta}}{2GM+r}=\frac{-r^2\sin^2{\theta}}{2GM+r}&\phantom {10000}(24)\nonumber\\
\Gamma_{r\theta}^\theta&=\Gamma_{\theta r}^\theta=\frac{1}{2}\left(\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\right)\right)\rightarrow\frac{1}{r}&\phantom {10000}(25)\nonumber\\
\Gamma_{\phi\phi}^\theta&=\frac{1}{2}\left(-\left(\frac{1}{r^2}\frac{\partial}{\partial\theta}\left(r^2\sin^2{\theta}\right)\right)\right)\rightarrow-\sin{\theta}\cos{\theta}&\phantom {10000}(26)\nonumber\\
\Gamma_{r\phi}^\phi&=\Gamma_{\phi r}^\phi=\frac{1}{2}\left(\frac{1}{r^2\sin^2{\theta}}\frac{\partial}{\partial r}\left(r^2\sin^2{\theta}\right)\right)\rightarrow\frac{1}{r}&\phantom {10000}(27)\nonumber\\
\Gamma_{\theta\phi}^\phi&=\Gamma_{\phi\theta}^\phi=\frac{1}{2}\left(\frac{1}{r^2\sin^2{\theta}}\frac{\partial}{\partial\theta}\left(r^2\sin^2{\theta}\right)\right)\rightarrow\cot{\theta}&\phantom {10000}(28)\nonumber
\end{align}

Special tests for WriteChristoffelSymbols

These exercise nearly all paths in WriteChristoffel from the line
If (LinearTerm(1) = "0") And (LinearTerm(2) = "0") And (LinearTerm(3) = "0") Then

coordinates ##\left(x,y\right)##

metric for all Christoffel symbols
$$ g_{ij}=\left(\begin{matrix}xy11&xy12\\xy12&xy22\\\end{matrix}\right)\ ,\ g^{ij}=\left(\begin{matrix}ixy11&ixy12\\ixy12&ixy22\\\end{matrix}\right) $$
exercises the branches ElseIf LinearTerm(1) = LinearTerm(3) and ElseIf LinearTerm(2) = LinearTerm(3)
$$ g_{ij}=\left(\begin{matrix}\mathrm{1}&\mathrm{0}\\\mathrm{0}&xy\\\end{matrix}\right)\ ,\ g^{ij}=\left(\begin{matrix}\mathrm{1}&\mathrm{0}\\\mathrm{0}&\frac{1}{xy}\\\end{matrix}\right) $$
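As an independent cross-check of this machinery, the 2-sphere symbols (13)-(14) can be reproduced with a few lines of SymPy by applying equation (6) directly. This sketch is mine, not part of the Word macros; the names (coords, christoffel and so on) are arbitrary.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # the 2-sphere metric of (11)
g_inv = g.inv()
n = len(coords)

# Equation (6): Gamma^s_{m v} = (1/2) g^{s l} (d_m g_{l v} + d_v g_{m l} - d_l g_{m v})
def christoffel(s, m, v):
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[s, l]
        * (sp.diff(g[l, v], coords[m]) + sp.diff(g[m, l], coords[v])
           - sp.diff(g[m, v], coords[l]))
        for l in range(n)))

for s in range(n):
    for m in range(n):
        for v in range(m, n):            # torsion free: mu <= nu suffices
            G = christoffel(s, m, v)
            if G != 0:
                print(f"Gamma^{coords[s]}_{{{coords[m]}{coords[v]}}} =", G)

# Expected output (up to SymPy's formatting), matching (13) and (14):
#   Gamma^theta_{phiphi} = -sin(theta)*cos(theta)
#   Gamma^phi_{thetaphi} = cos(theta)/sin(theta), i.e. cot(theta)
```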
CommonCrawl
ACP, 12, 591–603, 2012 Atmos. Chem. Phys., 12, 591–603, 2012 https://doi.org/10.5194/acp-12-591-2012 Special issue: Atmospheric mercury processes: papers from the 10th ICMGP Research article 11 Jan 2012 Research article | 11 Jan 2012 Gas-particle partitioning of atmospheric Hg(II) and its effect on global mercury deposition H. M. Amos et al. Related subject area Subject: Gases | Research Activity: Atmospheric Modelling | Altitude Range: Troposphere | Science Focus: Physics (physical properties and processes) High-resolution hybrid inversion of IASI ammonia columns to constrain US ammonia emissions using the CMAQ adjoint model Yilin Chen, Huizhong Shen, Jennifer Kaiser, Yongtao Hu, Shannon L. Capps, Shunliu Zhao, Amir Hakami, Jhih-Shyang Shih, Gertrude K. Pavur, Matthew D. Turner, Daven K. Henze, Jaroslav Resler, Athanasios Nenes, Sergey L. Napelenok, Jesse O. Bash, Kathleen M. Fahey, Gregory R. Carmichael, Tianfeng Chai, Lieven Clarisse, Pierre-François Coheur, Martin Van Damme, and Armistead G. Russell Atmos. Chem. Phys., 21, 2067–2082, https://doi.org/10.5194/acp-21-2067-2021,https://doi.org/10.5194/acp-21-2067-2021, 2021 Ammonia (NH3) emissions can exert adverse impacts on air quality and ecosystem well-being. NH3 emission inventories are viewed as highly uncertain. Here we optimize the NH3 emission estimates in the US using an air quality model and NH3 measurements from the IASI satellite instruments. The optimized NH3 emissions are much higher than the National Emissions Inventory estimates in April. The optimized NH3 emissions improved model performance when evaluated against independent observation. Simulation of radon-222 with the GEOS-Chem global model: emissions, seasonality, and convective transport Bo Zhang, Hongyu Liu, James H. Crawford, Gao Chen, T. Duncan Fairlie, Scott Chambers, Chang-Hee Kang, Alastair G. Williams, Kai Zhang, David B. Considine, Melissa P. Sulprizio, and Robert M. Yantosca We simulate atmospheric 222Rn using the GEOS-Chem model to improve understanding of 222Rn emissions and characterize convective transport in the model. We demonstrate the potential of a customized global 222Rn emission scenario to improve simulated surface 222Rn concentrations and seasonality. We assess convective transport using observed 222Rn vertical profiles. Results have important implications for using chemical transport models to interpret the transport of trace gases and aerosols. Regional CO2 fluxes from 2010 to 2015 inferred from GOSAT XCO2 retrievals using a new version of the Global Carbon Assimilation System Fei Jiang, Hengmao Wang, Jing M. Chen, Weimin Ju, Xiangjun Tian, Shuzhuang Feng, Guicai Li, Zhuoqi Chen, Shupeng Zhang, Xuehe Lu, Jane Liu, Haikun Wang, Jun Wang, Wei He, and Mousong Wu We present a 6-year inversion from 2010 to 2015 for the global and regional carbon fluxes using only the GOSAT XCO2 retrievals. We find that the XCO2 retrievals could significantly improve the modeling of atmospheric CO2 concentrations and that the inferred interannual variations in the terrestrial carbon fluxes in most land regions have a better relationship with the changes in severe drought area or leaf area index, or are more consistent with the previous estimates about drought impact. The friagem event in the central Amazon and its influence on micrometeorological variables and atmospheric chemistry Guilherme F. Camarinha-Neto, Julia C. P. Cohen, Cléo Q. Dias-Júnior, Matthias Sörgel, José Henrique Cattanio, Alessandro Araújo, Stefan Wolff, Paulo A. F. Kuhn, Rodrigo A. F. 
Souza, Luciana V. Rizzo, and Paulo Artaxo Atmos. Chem. Phys., 21, 339–356, https://doi.org/10.5194/acp-21-339-2021,https://doi.org/10.5194/acp-21-339-2021, 2021 It was observed that friagem phenomena (incursion of cold waves from the high latitudes of the Southern Hemisphere to the Amazon region), very common in the dry season of the Amazon region, produced significant changes in microclimate and atmospheric chemistry. Moreover, the effects of the friagem change the surface O3 and CO2 mixing ratios and therefore interfere deeply in the microclimatic conditions and the chemical composition of the atmosphere above the rainforest. Modeling atmospheric ammonia using agricultural emissions with improved spatial variability and temporal dynamics Xinrui Ge, Martijn Schaap, Richard Kranenburg, Arjo Segers, Gert Jan Reinds, Hans Kros, and Wim de Vries This article is about improving the modeling of agricultural ammonia emissions. By considering land use, meteorology and agricultural practices, ammonia emission totals officially reported by countries are distributed in space and time. We illustrated the first step for a better understanding of the variability of ammonia emission, with the possibility of being applied at a European scale, which is of great significance for ammonia budget research and future policy-making. Quantifying methane emissions from Queensland's coal seam gas producing Surat Basin using inventory data and a regional Bayesian inversion Ashok K. Luhar, David M. Etheridge, Zoë M. Loh, Julie Noonan, Darren Spencer, Lisa Smith, and Cindy Ong With the sharp rise in coal seam gas (CSG) production in Queensland's Surat Basin, there is much interest in quantifying methane emissions from this area and from unconventional gas production in general. We develop and apply a regional Bayesian inverse model that uses hourly methane concentration data from two sites and modelled backward dispersion to quantify emissions. The model requires a narrow prior and suggests that the emissions from the CSG areas are 33% larger than bottom-up estimates. Large-eddy simulation of traffic-related air pollution at a very high-resolution in a mega-city: Evaluation against mobile sensors and insights for influencing factors Yanxu Zhang, Xingpei Ye, Shibao Wang, Xiaojing He, Lingyao Dong, Ning Zhang, Haikun Wang, Zhongrui Wang, Yun Ma, Lei Wang, Xuguang Chi, Aijun Ding, Mingzhi Yao, Yunpeng Li, Qilin Li, Ling Zhang, and Yongle Xiao Atmos. Chem. Phys. Discuss., https://doi.org/10.5194/acp-2020-1168,https://doi.org/10.5194/acp-2020-1168, 2020 Revised manuscript accepted for ACP Urban air quality varies drastically at street scale but traditional methods are too coarse to resolve it. We develop a 10 m resolution air quality model and apply it for traffic-related carbon monoxide air quality in a megacity, Nanjing. The model reveals a detailed geographical dispersion pattern of air pollution in and out of the road network and agrees well with validation dataset. The model can be a vigorous part of the smart city system and inform urban planning and air quality management. COVID-19 lockdowns highlight a risk of increasing ozone pollution in European urban areas Stuart K. Grange, James D. Lee, Will S. Drysdale, Alastair C. Lewis, Christoph Hueglin, Lukas Emmenegger, and David C. Carslaw The changes in mobility across Europe due to the COVID-19 lockdowns had consequences for air quality. We compare what was experienced, to estimates of what would have been without the lockdowns. 
Nitrogen dioxide (NO2), an important vehicle-sourced pollutant, decreased by a third. However, ozone (O3) increased in response to the lower NO2. Because NO2 is decreasing over time, increases in O3 can be expected in European urban areas and will require management to avoid future negative outcomes. Impact of Western Pacific Subtropical High on Ozone Pollution over Eastern China Zhongjing Jiang, Jing Li, Xiao Lu, Cheng Gong, Lin Zhang, and Hong Liao This study demonstrates that the intensity of Western Pacific Subtropical High (WPSH), a major synoptic pattern in the North Pacific during the summer season, can induce a dipole change of surface ozone pollution over Eastern China. Ozone concentration increases in the north and decreased in the south during the strong WPSH phase, and vice versa. The change of chemical processes associated with the WPSH change plays a decisive role, whereas natural emission of ozone precursors accounts for ~30 %. Errors in top-down estimates of emissions using a known source Wayne M. Angevine, Jeff Peischl, Alice Crawford, Christopher P. Loughner, Ilana B. Pollack, and Chelsea R. Thompson Emissions of air pollutants must be known for a wide variety of applications. Different methods of estimating emissions often disagree substantially. In this study, we apply standard methods to a well-known source, a power plant. We explore the uncertainty implied by the different answers that come from the different methods, different samples taken over several years, and different pollutants. We find that the overall uncertainty of emissions estimates is about 30 %. The impact of urban land-surface on extreme air pollution over central Europe Peter Huszar, Jan Karlický, Jana Ďoubalová, Tereza Nováková, Kateřina Šindelářová, Filip Švábik, Michal Belda, Tomáš Halenka, and Michal Žák The paper shows how extreme meteorological conditions change due to the urban land-cover forcing and how this translates to the impact on the extreme air pollution over central European cities. It focuses on ozone, nitrogen dioxide, and particulate matter with a diameter of less than 2.5 μm and shows that, while for the extreme daily maximum 8 h ozone, changes are same as for the mean ones, much larger modifications are calculated for extreme NO2 and PM2.5 compared to their mean changes. Impacts of future land use and land cover change on mid-21st-century surface ozone air quality: distinguishing between the biogeophysical and biogeochemical effects Lang Wang, Amos P. K. Tai, Chi-Yung Tam, Mehliyar Sadiq, Peng Wang, and Kevin K. W. Cheung We investigate the effects of future land use and land cover change (LULCC) on surface ozone air quality worldwide and find that LULCC can significantly influence ozone in North America and Europe via modifying surface energy balance, boundary-layer meteorology, and regional circulation. The strength of such "biogeophysical effects" of LULCC is strongly dependent on forest type and generally greater than the "biogeochemical effects" via changing deposition and emission fluxes alone. Technical note: Emission mapping of key sectors in Ho Chi Minh city, Vietnam using satellite derived urban land-use data Trang Thi Quynh Nguyen, Wataru Takeuchi, and Prakhar Misra This study provides annual emissions of transportation, manufacturing industries and construction and residential sectors at 1 km resolution from 2009 to 2016 for Ho Chi Minh city, Vietnam. 
We consider both Scope 1 – all direct emissions from the activities occurring within the city and Scope 2 that is indirect emissions from electricity purchased. Our originality is the use of satellite derived urban land-use morphological maps which allow emission mapping in study area. What have we missed when studying the impact of aerosols on surface ozone via changing photolysis rates? Jinhui Gao, Ying Li, Bin Zhu, Bo Hu, Lili Wang, and Fangwen Bao Light extinction of aerosols can decease surface ozone mainly via reducing photochemical production of ozone. However, it also leads to high levels of ozone aloft being entrained down to the surface which partly counteracts the reduction in surface ozone. The impact of aerosols is more sensitive to local ozone, which suggests that while controlling the levels of aerosols, controlling the local ozone precursors is an effective way to suppress the increase of ozone over China at present. Stratospheric impact on the Northern Hemisphere winter and spring ozone interannual variability in the troposphere Junhua Liu, Jose M. Rodriguez, Luke D. Oman, Anne R. Douglass, Mark A. Olsen, and Lu Hu Our paper quantifies and identifies the importance of stratospheric ozone influence on the tropospheric ozone IAV in Northern Hemisphere mid-high latitudes. Our analysis provides an in-depth understanding of how 3-D dynamics influences the O3 redistribution in the troposphere. These findings are particularly important considering the potential changes in these dynamical conditions in the future as a result of climate change Design and evaluation of CO2 observation network to optimize surface CO2 fluxes in Asia using observation system simulation experiments Jun Park and Hyun Mee Kim Observation network experiments were conducted to optimize the surface CO2 flux in Asia. The impacts of the redistribution of and additions to the existing observation network were evaluated. The addition experiments revealed that considering both the normalized self-sensitivity and ecoregion information can yield better simulated surface CO2 fluxes compared to random addition. This study provides useful information for future observation network design to estimate the surface CO2 flux. Ozone pollution over China and India: seasonality and sources Meng Gao, Jinhui Gao, Bin Zhu, Rajesh Kumar, Xiao Lu, Shaojie Song, Yuzhong Zhang, Beixi Jia, Peng Wang, Gufran Beig, Jianlin Hu, Qi Ying, Hongliang Zhang, Peter Sherman, and Michael B. McElroy A regional fully coupled meteorology–chemistry model, Weather Research and Forecasting model with Chemistry (WRF-Chem), was employed to study the seasonality of ozone (O3) pollution and its sources in both China and India. Influences of oceanic ozone deposition on tropospheric photochemistry Ryan J. Pound, Tomás Sherwen, Detlev Helmig, Lucy J. Carpenter, and Mat J. Evans Ozone is an important pollutant with impacts on health and the environment. Ozone is lost to plants, land and the oceans. Loss to the ocean is slow compared to all other types of land cover and has not received as much attention. We build on previous work to more accurately model ozone loss to the ocean. We find changes in the concentration of ozone over the oceans, notably the Southern Ocean, which improves model performance. Investigating the regional contributions to air pollution in Beijing: a dispersion modelling study using CO as a tracer Marios Panagi, Zoë L. Fleming, Paul S. Monks, Matthew J. Ashfold, Oliver Wild, Michael Hollaway, Qiang Zhang, Freya A. Squires, and Joshua D. 
Vande Hey In this paper, using dispersion modelling with emission inventories it was determined that on average 45 % of the total CO pollution that affects Beijing is transported from other areas. About half of the CO comes from beyond the immediate surrounding areas. Finally three classification types of pollution were identified and used to analyse the APHH winter campaign. The results can inform targeted control measures to be implemented in Beijing and the other regions to tackle air quality problems. Evaluation of NU-WRF model performance on air quality simulation under various model resolutions – an investigation within the framework of MICS-Asia Phase III Zhining Tao, Mian Chin, Meng Gao, Tom Kucsera, Dongchul Kim, Huisheng Bian, Jun-ichi Kurokawa, Yuesi Wang, Zirui Liu, Gregory R. Carmichael, Zifa Wang, and Hajime Akimoto One goal of the Model Inter-Comparison Study for Asia (MICS-Asia) Phase III is to identify strengths and weaknesses of current air quality models to provide insights into reducing uncertainties. This study identified that a 15 km grid would be the optimal horizontal resolution in terms of performance and resource usage to capture average and extreme air quality over East Asia and is thus suggested for use in future MICS-Asia modeling activities if the investigation domain remains the same. Urban canopy meteorological forcing and its impact on ozone and PM2.5: role of vertical turbulent transport Peter Huszar, Jan Karlický, Jana Ďoubalová, Kateřina Šindelářová, Tereza Nováková, Michal Belda, Tomáš Halenka, Michal Žák, and Petr Pišoft Urban surfaces alter meteorological conditions which consequently alter air pollution due to modified transport and chemical reactions. Here, we focus on a major component of this influence, enhanced vertical eddy diffusion. Using a regional climate model coupled to a chemistry transport model, we investigate how different representations of turbulent transport translate to urban canopy impact on ozone and PM2.5 concentrations and whether turbulence remains the most important component. Uncertainty analysis of a European high-resolution emission inventory of CO2 and CO to support inverse modelling and network design Ingrid Super, Stijn N. C. Dellaert, Antoon J. H. Visschedijk, and Hugo A. C. Denier van der Gon Emission data contain uncertainties introduced by the methodology and the data used. We quantified uncertainties in gridded emissions using the uncertainty in underlying data, showing that disaggregation in space and time significantly increases the uncertainty. Understanding uncertainties helps to interpret atmospheric measurements and the gap with modelled concentrations. Moreover, our analyses help identify regions with large uncertainties, which require further scrutiny. Climate benefits of proposed carbon dioxide mitigation strategies for international shipping and aviation Catherine C. Ivanovich, Ilissa B. Ocko, Pedro Piris-Cabezas, and Annie Petsonk The Paris Agreement set the goal of remaining well below a 2 &degC global temperature rise, but it is unclear how future emissions from international shipping and aviation will contribute to this threshold. Here we estimate that the sectors' future emissions of carbon dioxide will contribute a combined 0.12 &degC by the end of the century should no action be taken, but proposed mitigation policies have the potential to reduce this warming by almost 90 %. 
Objective evaluation of surface- and satellite-driven carbon dioxide atmospheric inversions Frédéric Chevallier, Marine Remaud, Christopher W. O'Dell, David Baker, Philippe Peylin, and Anne Cozic We present a way to rate the CO2 flux estimates made from inversion of a global atmospheric transport model. Our approach relies on accurate aircraft measurements in the free troposphere. It shows that some satellite soundings can now provide inversion results that are, despite their uncertainty, comparable in credibility to traditional inversions using the accurate but sparse surface network and that these inversions are, therefore, complementary for studies of the global carbon budget. Analysis of summer O3 in the Madrid air basin with the LOTOS-EUROS chemical transport model Miguel Escudero, Arjo Segers, Richard Kranenburg, Xavier Querol, Andrés Alastuey, Rafael Borge, David de la Paz, Gotzon Gangoiti, and Martijn Schaap In this work we optimise LOTOS-EUROS CTM for simulating tropospheric O3 during summer in the Madrid metropolitan area, one of the largest conurbations in the Mediterranean. Comparing the outputs from five set-ups with different combinations of spatial resolution, meteorological data and vertical structure, we conclude that the model benefits from fine horizontal resolution and highly resolved vertical structure. Running optimized configuration run, we interpret O3 variability during July 2016. Analysis of temporal and spatial variability of atmospheric CO2 concentration within Paris from the GreenLITE™ laser imaging experiment Jinghui Lian, François-Marie Bréon, Grégoire Broquet, T. Scott Zaccheo, Jeremy Dobler, Michel Ramonet, Johannes Staufer, Diego Santaren, Irène Xueref-Remy, and Philippe Ciais CO2 emissions within urban areas impact nearby and downwind concentrations. A different system, based on bi-wavelength laser measurements, has been deployed over Paris. It samples CO2 concentrations along horizontal lines, between a transceiver and a reflector. In this paper, we analyze the measurements provided by this system, together with the more classical in situ sampling and high-resolution modeling. We focus on the temporal and spatial variability of atmospheric CO2 concentrations. A typical weather pattern for ozone pollution events in North China Cheng Gong and Hong Liao Severe O3 pollution events (OPEs) were observed frequently in summer in North China. We found a typical weather pattern that was responsible for the 21 OPEs observed in North China in May to July of 2014–2017. This weather pattern is characterized by high daily maximum temperature, low relative humidity and an anomalous high-pressure system at 500 hPa. Under such a weather pattern, chemical production of O3 is high between 800 and 900 hPa, which is then transported downward to enhance O3 levels. The control of anthropogenic emissions contributed to 80 % of the decrease in PM2.5 concentrations in Beijing from 2013 to 2017 Ziyue Chen, Danlu Chen, Mei-Po Kwan, Bin Chen, Bingbo Gao, Yan Zhuang, Ruiyuan Li, and Bing Xu We employed Kolmogorov–Zurbenko filtering and WRF-CMAQ to quantify the relative contribution of meteorological variations and emission reduction to PM2.5 reduction in Beijing from 2013 to 2017, which is crucial to evaluate the Five-year Clean Air Action Plan. Both models suggested that despite favourable meteorological conditions, the control of anthropogenic emissions accounted for around 80 % of PM2.5 reduction in Beijing. Therefore, such a long-term clean air plan should be continued. 
Trans-Pacific transport and evolution of aerosols: spatiotemporal characteristics and source contributions Zhiyuan Hu, Jianping Huang, Chun Zhao, Yuanyuan Ma, Qinjian Jin, Yun Qian, L. Ruby Leung, Jianrong Bi, and Jianmin Ma This study investigates aerosol chemical compositions and relative contributions to total aerosols in the western US. The results show that trans-Pacific aerosols have a maximum concentration in the boreal spring, with the greatest contribution from dust. Over western North America, the trans-Pacific aerosols dominate the column-integrated aerosol mass and number concentration. However, near the surface, aerosols mainly originated from local emissions. Foreign influences on tropospheric ozone over East Asia through global atmospheric transport Han Han, Jane Liu, Huiling Yuan, Tijian Wang, Bingliang Zhuang, and Xun Zhang In the East Asian middle and upper troposphere, foreign ozone is 0.8–4.8 times more than its native counterpart in all the seasons. At the East Asian surface, the annual mean concentrations of foreign ozone and native ozone are comparable, being approximately 20 ppbv. The seasonal and interannual variations in foreign ozone over East Asia are closely related to the East Asian monsoon. Estimating ground-level CO concentrations across China based on the national monitoring network and MOPITT: potentially overlooked CO hotspots in the Tibetan Plateau Dongren Liu, Baofeng Di, Yuzhou Luo, Xunfei Deng, Hanyue Zhang, Fumo Yang, Michael L. Grieneisen, and Yu Zhan The spatiotemporal distributions of daily ground-level CO concentrations across China during 2013–2016 are derived by fusing the data from remote sensing and ground monitoring. The population–weighted CO was predicted to be 0.99 ± 0.30 mg m−3 and showed a decreasing trend of −0.021 ± 0.004 mg m−3 per year. The CO pollution was the most severe in the North China Plain. The hotspots in the Tibetan Plateau overlooked by the remote sensing were depicted by the data-fusion approach. Terrestrial ecosystem carbon flux estimated using GOSAT and OCO-2 XCO2 retrievals Hengmao Wang, Fei Jiang, Jun Wang, Weimin Ju, and Jing M. Chen The differences in inverted global and regional carbon fluxes from GOSAT and OCO-2 XCO2 from 1 January to 31 December 2015 are studied. We find significant differences for inverted terrestrial carbon fluxes on both global and regional scales. Overall, GOSAT XCO2 has a better performance than OCO-2, and GOSAT data can effectively improve carbon flux estimates in the Northern Hemisphere, while OCO-2 data, with the specific version used in this study, show only slight improvement. Diagnosing spatial error structures in CO2 mole fractions and XCO2 column mole fractions from atmospheric transport Thomas Lauvaux, Liza I. Díaz-Isaac, Marc Bocquet, and Nicolas Bousserez A small-size ensemble of mesoscale simulations has been filtered to characterize the spatial structures of transport errors in atmospheric CO2 mixing ratios. The extracted error structures in in situ and column CO2 show similar length scales compared to other meteorological variables, including seasonality, which could be used as proxies in regional inversion systems. How marine emissions of bromoform impact the remote atmosphere Yue Jia, Susann Tegtmeier, Elliot Atlas, and Birgit Quack An atmospheric inversion over the city of Cape Town: sensitivity analyses Alecia Nickless, Peter J. Rayner, Robert J. 
Scholes, Francois Engelbrecht, and Birgit Erni Different frameworks for an atmospheric inversion study over Cape Town, South Africa, are considered. We focused particularly on how sensitive the estimates of CO2 fluxes were to changes in the way the uncertainty in these estimates was specified and the impact different prior information had on the final flux estimates. We used atmospheric measurements from two new sites located near Cape Town: Robben Island and Hangklip lighthouses, which were specifically deployed for this inversion study. Modelling CO2 weather – why horizontal resolution matters Anna Agustí-Panareda, Michail Diamantakis, Sébastien Massart, Frédéric Chevallier, Joaquín Muñoz-Sabater, Jérôme Barré, Roger Curcoll, Richard Engelen, Bavo Langerock, Rachel M. Law, Zoë Loh, Josep Anton Morguí, Mark Parrington, Vincent-Henri Peuch, Michel Ramonet, Coleen Roehl, Alex T. Vermeulen, Thorsten Warneke, and Debra Wunch This paper demonstrates the benefits of using global models with high horizontal resolution to represent atmospheric CO2 patterns associated with evolving weather. The modelling of CO2 weather is crucial to interpret the variability from ground-based and satellite CO2 observations, which can then be used to infer CO2 fluxes in atmospheric inversions. The benefits of high resolution come from an improved representation of the topography, winds, tracer transport and CO2 flux distribution. Calibration of a multi-physics ensemble for estimating the uncertainty of a greenhouse gas atmospheric transport model Liza I. Díaz-Isaac, Thomas Lauvaux, Marc Bocquet, and Kenneth J. Davis We demonstrate that transport model errors, one of the main contributors to the uncertainty in regional CO2 inversions, can be represented by a small-size ensemble carefully calibrated with meteorological data. Our results also confirm transport model errors represent a significant fraction of the model–data mismatch in CO2 mole fractions and hence in regional inverse CO2 fluxes. Analysis of atmospheric CH4 in Canadian Arctic and estimation of the regional CH4 fluxes Misa Ishizawa, Douglas Chan, Doug Worthy, Elton Chan, Felix Vogel, and Shamil Maksyutov The Canadian Arctic has the potential for enhanced methane (CH4) emissions under global warming. However, the regional CH4 emission (fluxes) estimates range widely. This study analyzes recent Canadian Arctic CH4 observations and estimates the regional emissions. The additional observations yield robust CH4 flux estimates and enable the partitioning of the CH4 sources into wetland and forest fires. The results indicate that years with warmer summer conditions result in more wetland CH4 emissions. Accounting for the vertical distribution of emissions in atmospheric CO2 simulations Dominik Brunner, Gerrit Kuhlmann, Julia Marshall, Valentin Clément, Oliver Fuhrer, Grégoire Broquet, Armin Löscher, and Yasjka Meijer Atmospheric transport models are increasingly being used to estimate CO2 emissions from atmospheric CO2 measurements. This study demonstrates the importance of distributing CO2 emissions vertically in the model according to realistic profiles, since a major proportion of CO2 is emitted through tall stacks from power plants and industrial sources. With the traditional approach of emitting all CO2 at the surface, models may significantly overestimate the atmospheric CO2 levels. 
Urban source term estimation for mercury using a boundary-layer budget method Basil Denzler, Christian Bogdal, Cyrill Kern, Anna Tobler, Jing Huo, and Konrad Hungerbühler Mercury poses a threat to human health and the environment. Therefore, the reduction of emissions is a declared aim. Here, we quantified mercury emission for the city of Zurich, Switzerland, based on atmospheric measurements and box modeling. This so-called top-down approach allows us to better constrain mercury emissions from diffuse distributed sources. This is applicable to other regions and presents a cost-effective way of quantifying emissions, as a first step in the reduction thereof. Characterizing uncertainties in atmospheric inversions of fossil fuel CO2 emissions in California Kieran Brophy, Heather Graven, Alistair J. Manning, Emily White, Tim Arnold, Marc L. Fischer, Seongeun Jeong, Xinguang Cui, and Matthew Rigby We investigate potential errors and uncertainties related to the spatial and temporal prior representation of emissions and modelled atmospheric transport for the inversion of California's fossil fuel CO2 emissions. Our results indicate that uncertainties in posterior total state fossil fuel CO2 estimates arising from the choice of prior emissions or atmospheric transport model are on the order of 15 % or less for the ground-based network in California we consider. Intercomparison of atmospheric trace gas dispersion models: Barnett Shale case study Anna Karion, Thomas Lauvaux, Israel Lopez Coto, Colm Sweeney, Kimberly Mueller, Sharon Gourdji, Wayne Angevine, Zachary Barkley, Aijun Deng, Arlyn Andrews, Ariel Stein, and James Whetstone In this study, we use atmospheric methane concentration observations collected during an airborne campaign to compare different model-based emissions estimates from the Barnett Shale oil and natural gas production basin in Texas, USA. We find that the tracer dispersion model has a significant impact on the results because the models differ in their simulation of vertical dispersion. Additional work is needed to evaluate and improve vertical mixing in the tracer dispersion models. Attributing differences in the fate of lateral boundary ozone in AQMEII3 models to physical process representations Peng Liu, Christian Hogrefe, Ulas Im, Jesper H. Christensen, Johannes Bieser, Uarporn Nopmongcol, Greg Yarwood, Rohit Mathur, Shawn Roselle, and Tanya Spero This study represents an intercomparison of four regional-scale air quality simulations in order to understand the model similarities and differences in estimating the impact of ozone imported from outside of the US on the surface ozone within the US at process level. Vertical turbulent mixing stands out as a primary contributor to the model differences in inert tracers. Impacts of physical parameterization on prediction of ethane concentrations for oil and gas emissions in WRF-Chem Maryam Abdi-Oskouei, Gabriele Pfister, Frank Flocke, Negin Sobhani, Pablo Saide, Alan Fried, Dirk Richter, Petter Weibring, James Walega, and Gregory Carmichael This study presents a quantification of model uncertainties due to configurations and errors in the emission inventories. The analysis includes performing simulations with different configurations and comparisons with airborne and ground-based observations with a focus on capturing transport and emissions from the oil and gas sector. 
The presented results reflect the challenges that one may face when attempting to improve emission inventories by contrasting measured with modeled concentrations. An important mechanism of regional O3 transport for summer smog over the Yangtze River Delta in eastern China Jun Hu, Yichen Li, Tianliang Zhao, Jane Liu, Xiao-Ming Hu, Duanyang Liu, Yongcheng Jiang, Jianming Xu, and Luyu Chang Using observational and modeling studies, the importance of the mechanism driving regional O3 transport in the residual layer (RL) with respect to summer smog over the Yangtze River Delta region in eastern China was revealed. This mechanism was also examined in association with diurnal change in the atmospheric boundary layer. Regional O3 transport through the nocturnal RL is believed to have great implications for understanding urban and regional O3 pollution in this area. Rapid and reliable assessment of methane impacts on climate Ilissa B. Ocko, Vaishali Naik, and David Paynter As communities worldwide analyse options to reduce methane emissions from energy use, agriculture, and waste management, there is an immediate need to build confidence in rapid assessment tools other than standard climate metrics – which misrepresent impacts over all timescales. In this paper, we show that a simplified climate model can easily and rapidly provide scientifically robust climate responses to changes in methane emissions, thereby improving mitigation analysis and decision-making. Impact of physical parameterizations and initial conditions on simulated atmospheric transport and CO2 mole fractions in the US Midwest Liza I. Díaz-Isaac, Thomas Lauvaux, and Kenneth J. Davis Atmospheric inversions rely on the accurate representation of the atmospheric dynamics in order to produce reliable surface fluxes. In this work, we evaluate the sensitivity of a state-of-the-art mesoscale atmospheric model to the different physics parameterizations and forcing. We conclude that no model configuration is optimal across an entire region. Therefore, we recommend an ensemble approach or the assimilation of meteorological observations in future inversion studies. Seasonal ozone vertical profiles over North America using the AQMEII3 group of air quality models: model inter-comparison and stratospheric intrusions Marina Astitha, Ioannis Kioutsioukis, Ghezae Araya Fisseha, Roberto Bianconi, Johannes Bieser, Jesper H. Christensen, Owen R. Cooper, Stefano Galmarini, Christian Hogrefe, Ulas Im, Bryan Johnson, Peng Liu, Uarporn Nopmongcol, Irina Petropavlovskikh, Efisio Solazzo, David W. Tarasick, and Greg Yarwood This work is unique in the detailed analyses of modeled ozone vertical profiles from sites in North America through the collaboration of four research groups from the US and EU. We assess the air quality models' performance and model inter-comparison for ozone vertical profiles and stratospheric ozone intrusions. Lastly, we designate the important role of lateral boundary conditions in the ozone vertical profiles using chemically inert tracers. Quantifying the vertical transport of CHBr3 and CH2Br2 over the western Pacific Robyn Butler, Paul I. Palmer, Liang Feng, Stephen J. Andrews, Elliot L. Atlas, Lucy J. Carpenter, Valeria Donets, Neil R. P. Harris, Stephen A. Montzka, Laura L. Pan, Ross J. Salawitch, and Sue M. Schauffler Natural sources of short-lived bromoform and dibromomethane are important for determining the inorganic bromine budget in the stratosphere that drives ozone loss. 
Two new modelling techniques describe how different geographical source regions influence their atmospheric variability over the western Pacific. We find that it is driven primarily by open ocean sources, and we use atmospheric observations to help estimate their contributions to the upper tropospheric inorganic bromine budget. 2010–2016 methane trends over Canada, the United States, and Mexico observed by the GOSAT satellite: contributions from different source sectors Jian-Xiong Sheng, Daniel J. Jacob, Alexander J. Turner, Joannes D. Maasakkers, Joshua Benmergui, A. Anthony Bloom, Claudia Arndt, Ritesh Gautam, Daniel Zavala-Araiza, Hartmut Boesch, and Robert J. Parker Analysis of 7 years (2010–2016) of GOSAT methane trends over Canada, the contiguous US, and Mexico suggests that US methane emissions increased by 2.5 ± 1.4 % a⁻¹ over the 7-year period, with contributions from both oil–gas systems and livestock in the Midwest. Mexican emissions show a decrease that can be attributed to a decreasing cattle population. Canadian emissions show year-to-year variability driven by wetland emissions and correlated with wetland areal extent.
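To make the kind of figure quoted above concrete, the following sketch shows one conventional way of turning a short annual time series into a relative growth rate with a 1-sigma uncertainty, i.e. a number of the form 2.5 ± 1.4 % a⁻¹. It is an illustrative ordinary-least-squares calculation on synthetic placeholder values, not the inversion method used in the GOSAT study; all variable names and numbers are assumptions made for the example.

import numpy as np

years = np.arange(2010, 2017)  # a seven-year record, as in 2010-2016
xch4 = np.array([1795.0, 1801.0, 1807.0, 1812.0, 1820.0, 1828.0, 1835.0])  # ppb, synthetic placeholders

# Ordinary least squares fit xch4 ~ a + b*t, with t counted from the first year
t = (years - years[0]).astype(float)
A = np.column_stack([np.ones_like(t), t])
coef, residuals, rank, sv = np.linalg.lstsq(A, xch4, rcond=None)
a, b = coef  # a: level in the first year (ppb); b: slope (ppb per year)

# 1-sigma slope uncertainty from the residual variance
dof = len(t) - 2
sigma2 = residuals[0] / dof if residuals.size else 0.0
cov = sigma2 * np.linalg.inv(A.T @ A)
b_err = float(np.sqrt(cov[1, 1]))

# Express the trend relative to the initial level, in percent per year
print(f"trend: {100 * b / a:.2f} +/- {100 * b_err / a:.2f} % a^-1")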
Atmospheric mercury processes: papers from the 10th ICMGP
CommonCrawl
Journal of Fluid Mechanics Volume 906 - 10 January 2021 Assessing and improving the accuracy of synthetic turbulence generation J. W. Patterson, R. Balin, K. E. Jansen Published online by Cambridge University Press: 13 November 2020, R1 With the growing interest in scale-resolving simulations of spatially evolving boundary layers, synthetic turbulence generation (STG) has become a valuable tool for providing unsteady turbulent boundary conditions through a sum over a finite number of spatio-temporal Fourier modes with amplitude, direction and phase determined by a random number set. Recent developments of STG methods are designed to match target profiles for anisotropic and inhomogeneous Reynolds stresses. In this paper, it is shown that, for practical values of the number of modes, a given set of random numbers may produce Reynolds stress profiles that are 30 % off their target. To remedy this situation, the error in the STG stress prediction is decomposed into a steady-state bias and a purely unsteady part affecting the time convergence. Direct relationships between the random number vectors and both types of error are developed, allowing large collections of random number sets to be rapidly scanned and the best performers selected for a much improved agreement with the target. The process is verified for the inflow to a direct numerical simulation of a flat plate at $Re_\theta = 1000$. This paper demonstrates sufficient time convergence over a few flow-through times as well as a correction of the method's biases. The organizing centre for the flow around rapidly spinning cylinders Morten Brøns Published online by Cambridge University Press: 09 November 2020, F1 The flow around a rotating circular cylinder has a parameter regime with a complex pattern of periodic solutions and multiple steady states. Sierra et al. (J. Fluid Mech., vol. 905, 2020, A2) provide a complete bifurcation analysis of this regime. The numerical computations are guided by a qualitative analysis of the bifurcations stemming from a highly degenerate singular dynamical system. Surprisingly, the dynamics of the singular system itself cannot be realized as a specific flow, but acts mathematically as an organizer of the physical bifurcation diagram. JFM Papers Reducing aerofoil–turbulence interaction noise through chordwise-varying porosity Lorna J. Ayton, Matthew J. Colbrook, Thomas F.
Geyer, Paruchuri Chaitanya, Ennes Sarradj Published online by Cambridge University Press: 05 November 2020, A1 This paper considers the effects of smoothly varying chordwise porosity of a finite perforated plate on turbulence–aerofoil interaction noise. The aeroacoustic model is made possible through the use of a novel Mathieu function collocation method, rather than a traditional Wiener–Hopf approach which would be unable to deal with chordwise-varying quantities. The main focus is on two bio-inspired porosity distributions, modelled from air flow resistance data obtained from the wings of barn owls (Tyto alba) and common buzzards (Buteo buteo). Trailing-edge noise is much reduced for the owl-like distribution, but, perhaps surprisingly, so too is leading-edge noise, despite both wings having similar porosity values at the leading edge. A general monotonic variation is then considered, indicating that there may indeed be a significant acoustic impact of how the porosity is distributed along the whole chord of the plate, not just its values at the scattering edges. Through this investigation, it is found that a plate whose porosity continuously decreases from the trailing edge to a zero-porosity leading edge can, in fact, generate lower levels of trailing-edge noise than a plate whose porosity remains constant at the trailing-edge value. Reynolds number scaling of burning rates in spherical turbulent premixed flames Tejas Kulkarni, Romain Buttay, M. Houssem Kasbaoui, Antonio Attili, Fabrizio Bisetti In the flamelet regime of turbulent premixed combustion, the enhancement in the burning rates originates primarily from surface wrinkling. In this work we investigate the Reynolds number dependence of burning rates of spherical turbulent premixed methane/air flames in decaying isotropic turbulence with direct numerical simulations. Several simulations are performed by varying the Reynolds number, while keeping the Karlovitz number the same, and the temporal evolution of the flame surface is compared across cases by combining the probability density function of the radial distance of the flame surface from the origin with the surface density function formalism. Because the mean area of the wrinkled flame surface normalized by the area of a sphere with radius equal to the mean flame radius is proportional to the product of the turbulent flame brush thickness and peak surface density within the brush, the temporal evolution of the brush and peak surface density are investigated separately. The brush thickness is shown to scale with the integral scale of the flow, evolving due to decaying velocity fluctuations and stretch. When normalized by the integral scale, the wrinkling scale defined as the inverse of the peak surface density is shown to scale with Reynolds number across simulations and as turbulence decays. As a result, the area ratio and the burning rate are found to increase as ${Re}_{\lambda }^{1.13}$, in agreement with recent experiments on spherical turbulent premixed flames. We observe that the area ratio does not vary with turbulent intensity when holding the Reynolds number constant. Emergence of superwalking droplets Rahil N. Valani, Jack Dring, Tapio P. Simula, Anja C. Slim A new class of self-propelled droplets, coined superwalkers, has been shown to emerge when a bath of silicone oil is vibrated simultaneously at a given frequency and its subharmonic tone with a relative phase difference between them (Valani et al., Phys. Rev. Lett., vol. 123, 2019, 024503).
To understand the emergence of superwalking droplets, we explore their vertical and horizontal dynamics by extending previously established theoretical models for walkers driven by a single frequency to superwalkers driven by two frequencies. Here, we show that driving the bath at two frequencies with an appropriate phase difference raises every second peak and lowers the intermediate peaks in the vertical periodic motion of the fluid surface. This allows large droplets that could otherwise not walk to leap over the intermediate peaks, resulting in superwalking droplets whose vertical dynamics is qualitatively similar to normal walkers. We find that the droplet's vertical and horizontal dynamics are strongly influenced by the relative height difference between successive peaks of the bath motion, a parameter that is controlled by the phase difference. Comparison of our simulated superwalkers with the experiments of Valani et al. (2019) shows good agreement for small- to moderate-sized superwalkers. Attractors for the motion of a finite-size particle in a two-sided lid-driven cavity Haotian Wu, Francesco Romanò, Hendrik C. Kuhlmann The motion of a single spherical particle in a two-sided lid-driven cavity is investigated experimentally. The flow in which the particle moves is created by two facing cavity sidewalls which move with equal velocity in opposite directions. For a long cavity with width-to-height cross-sectional aspect ratio $\varGamma =W/H=1.6$ the flow field at Reynolds number ${Re}=400$ consists of steady spatially periodic three-dimensional convection cells. Nearly neutrally buoyant particles with radius in units of $H$ ranging from $1.1\times 10^{-2}$ to $7.1\times 10^{-2}$ are found to be attracted to periodic or quasi-periodic orbits in close vicinity of Kolmogorov–Arnold–Moser (KAM) tori of the unperturbed flow. Like the KAM tori the attractors of neutrally buoyant particles arise in mirror-symmetric pairs within each convection cell. The particle attractors are created by a dissipative effect in the dynamical system describing the particle motion which arises when the finite-size particle closely passes the moving walls. When the particle density deviates from that of the fluid, inertial attractors arise whose symmetry is broken by buoyancy, and other periodic attractors are created which do not have KAM tori as counterparts. The impact of an oil droplet on an oil layer on water Dohyung Kim, Jinseok Lee, Arijit Bose, Ildoo Kim, Jinkee Lee We present a study of droplet impingement on a two-layer liquid, specifically an oil droplet impinging on a layer of oil on water. In our experiments, the diameter and impact velocity of the droplet and the thickness of the oil layer were varied, and the maximum depth of the crater and the maximum height of the Worthington jet were measured. When the thickness of the oil layer was less than ${\sim }1.6$ times the droplet diameter, the depth of the crater depended on the thickness of the oil layer. Otherwise, the two-layer liquid behaved like a single layer. This observation is rationalized by considering the oil–water interface, whose deformation is negligible when the oil layer is thick but becomes significant when the oil layer is thinner. We define an effective Weber number for the two-layer liquid and show that the height of the Worthington jet is proportional to this effective Weber number. 
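A background note on the scaling just mentioned (the definition below is the standard single-layer one and is not taken from the abstract): for a droplet of density $\rho$ and diameter $D$ impacting at speed $U$ on a liquid whose relevant interface has surface tension $\sigma$, the classical impact Weber number compares inertia with capillarity, \begin{equation*} We = \frac{\rho U^{2} D}{\sigma}. \end{equation*} The 'effective' Weber number that Kim et al. define for the two-layer liquid is a modification of this quantity whose precise form is not reproduced here.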
Drying by pervaporation in elementary channel networks Benjamin Dollet, Kennedy Nexon Chagua Encarnación, Romain Gautier, Philippe Marmottant The drying dynamics inside a network of interconnected channels driven by pervaporation, e.g. by diffusion of water through a permeable material surrounding the channels, is studied. The channels are initially filled with water and a single air/water meniscus is initiated at the entrance of the network; drying proceeds as menisci progressively invade the network. The study is focused on elementary networks: simple branched networks without reconnections, or simple loops, in order to get a clear physical picture on which an understanding of drying on more complex networks, such as those encountered in leaves, could be built in the near future. Experiments are compared with models which elaborate on a previously published single-channel model (Dollet et al., J. R. Soc. Interface, vol. 16, 2019, 20180690). In branched networks, experiments reveal velocity discontinuities of the menisci as they split at the nodes. In loops, it is found that the drying rate depends on the number of menisci bounding a given connected water region; when there are two such menisci, a prediction of the dynamics of each of them is proposed, based on the pervaporation-induced hydrodynamics inside the channels. Experiments and model predictions compare favourably for the global drying rate. Some deviations are found for the dynamics of individual menisci, which are ascribed to the sensitivity of the dynamics to small fluctuations in wetting conditions. Model-based design of riblets for turbulent drag reduction Wei Ran, Armin Zare, Mihailo R. Jovanović Both experiments and direct numerical simulations have been used to demonstrate that riblets can reduce turbulent drag by as much as $10\,\%$, but their systematic design remains an open challenge. In this paper we develop a model-based framework to quantify the effect of streamwise-aligned spanwise-periodic riblets on kinetic energy and skin-friction drag in turbulent channel flow. We model the effect of riblets as a volume penalization in the Navier–Stokes equations and use the statistical response of the eddy-viscosity-enhanced linearized equations to quantify the effect of background turbulence on the mean velocity and skin-friction drag. For triangular riblets, our simulation-free approach reliably predicts drag-reducing trends as well as mechanisms that lead to performance deterioration for large riblets. We investigate the effect of height and spacing on drag reduction and demonstrate a correlation between energy suppression and drag reduction for appropriately sized riblets. We also analyse the effect of riblets on drag-reduction mechanisms and turbulent flow structures including very large-scale motions. Our results demonstrate the utility of our approach in capturing the effect of riblets on turbulent flows using models that are tractable for analysis and optimization. Uniform momentum zone scaling arguments from direct numerical simulation of inertia-dominated channel turbulence W. Anderson, Scott T. Salesky Inertia-dominated wall-sheared turbulent flows are composed of an inner and outer layer, where the former is occupied by the well-known autonomous inner cycle while the latter is composed of coherent structures with spatial extent comparable to the flow depth. 
In arbitrary streamwise–wall-normal planes, outer-layer structures instantaneously manifest as regions of quasi-uniform momentum – relative excesses and deficits about the Reynolds average – and for this reason are termed uniform momentum zones (UMZs). By virtue of this attribute, the interfacial zones between successive UMZs exhibit abrupt wall-normal gradients in streamwise momentum; these interfacial gradients cannot be explained by the notion of attached eddies, for which the vertical gradient goes as $(x_3^+)^{-1}$ in the outer layer, where $x_3^+$ is inner-normalized wall-normal position. Using data from direct numerical simulation (DNS) of channel turbulence across inertial regimes, we recover vertical profiles of Kolmogorov length a posteriori and show that $\eta ^+ \sim (x_3^+)^{1/4}$ , thereby requiring that ambient wall-normal gradients in streamwise velocity must scale as $(x_3^+)^{-1/2}$ . The data reveal that UMZ interfaces are responsible for these relatively larger wall-normal gradients. The DNS data afford a unique opportunity to interpret inner- and outer-layer structures simultaneously: we propose that UMZs – and the associated outer-layer dynamics – can be explained as the product of inner-layer bluff-body-like interactions, wherein wakes of quasi-uniform momentum emanate from the inner layer; wake-scaling arguments agree with observations from DNS. On the starting vortex generated by a translating and rotating flat plate D. I. Pullin, John E. Sader We consider the trailing-edge vortex produced in an inviscid fluid by the start-up motion of a two-dimensional flat plate. A general starting motion is studied that includes the initial angle-of-attack of the plate (which may be zero), individual time power laws for plate translational and rotational speeds and the pivot position for plate rotation. A vortex-sheet representation for a start-up separated flow at the trailing edge is developed whose time-wise evolution is described by a Birkhoff–Rott equation coupled to an appropriate Kutta condition. This description includes convection by the outer flow, rotation and vortex-image self-induction. It admits a power-law similarity solution for the (small-time) primitive vortex, leading to an equation set where each term carries its own time-wise power-law factor. A set of four general plate motions is defined. Dominant-balance analysis of this set leads to discovery of three distinct start-up vortex-structure types that form the basis for all vortex motion. The properties of each type are developed in detail for some special cases. Numerical and analytical solutions are described and transition between solution types is discussed. Singular and degenerate vortex behaviour is discovered which may be due to the absence of fluid viscosity. An interesting case is start-up motion with zero initial angle of attack coupled to power-law plate rotation for which time-series examples are given that can be compared to high Reynolds number viscous flows. Self-adaptive preferential flow control using displacing fluid with dispersed polymers in heterogeneous porous media Chiyu Xie, Wenhai Lei, Matthew T. Balhoff, Moran Wang, Shiyi Chen Published online by Cambridge University Press: 11 November 2020, A10 Preferential flow that leads to non-uniform displacement, especially in heterogeneous porous media, is usually unwelcome in most practical processes. 
We propose a self-adaptive preferential flow control mechanism by using dispersed polymers, which is strongly supported by experimental and numerical evidence. Our experiments are performed on a microchip with heterogeneous porous structures where oil is displaced by dispersed polymer microsphere particles. Even though the size of the particles is much smaller than the pore-throat size, the diversion effect of the dispersed microspheres is still demonstrated. Therefore, the plugging effect is not the major mechanism for preferential flow control by dispersed polymers. The mechanisms are further investigated by pore-scale modelling, which indicates that the dispersed polymers exhibit an ability to adapt to pressure and resistance in the porous flow field. In such an intelligent way, the displacing fluid with dispersed polymers smartly controls the preferential flow by inducing pressure fluctuations, and demonstrates better performance, in terms of both efficiency and cost, than the traditional method of simply increasing the viscosity. These insights can be applied to improve techniques in the field, such as enhanced oil recovery and soil wetting. On the physical mechanism of front–back asymmetry of non-breaking gravity–capillary waves Alexander Dosaev, Yuliya I. Troitskaya, Victor I. Shrira In nature, the wind waves of the gravity–capillary range are noticeably skewed forward. The salient feature of such waves is a characteristic pattern of capillary ripples on their crests. The train of these 'parasitic capillaries' is not symmetric with respect to the crest; it is localised on the front slope and decays towards the trough. Although understanding the front–back asymmetry of gravity–capillary waves is important for remote sensing and, potentially, for wave–wind interaction, the physical mechanisms causing this asymmetry have not been identified. Here, we address this gap by extensive numerical simulations of the Euler equations employing the method of conformal mapping for two-dimensional potential flow and taking into account wave generation by wind and dissipation due to molecular viscosity. On examining the role of various factors contributing to the wave profile front–back asymmetry: wind forcing, viscous stresses and the Reynolds stresses caused by ripples, we found, in the absence of wave breaking, the latter to be by far the most important. It is the lopsided ripple distribution which leads to the noticeable fore–aft asymmetry of the mean wave profile. We also found how the asymmetry depends on wavelength, steepness, wind, viscosity and surface tension. The results of the model are discussed in the context of the available experimental data on asymmetry of gravity–capillary waves in both the breaking and non-breaking regimes. A reasonable agreement of the model with the data has been found for the regime without breaking or microbreaking. Thermocapillary instabilities in a liquid layer subjected to an oblique temperature gradient Ramkarn Patne, Yehuda Agnon, Alexander Oron Stability analysis of a liquid layer subjected to an oblique temperature gradient (OTG) is carried out. The general linear stability analysis reveals a stabilization effect of the imposed horizontal component (horizontal temperature gradient, HTG) of the OTG on the long-wave instabilities introduced by the vertical component (vertical temperature gradient, VTG) of the OTG. This stabilization is due to the VTG induced by the prescribed HTG, which counteracts the imposed VTG.
The induced VTG arises from the advection of energy. As a result of the stabilization, the long-wave mode forms an island of instability in the $\eta$–$Ma_c$ plane, where $\eta$ and $Ma_c$ are the ratio of the strength of the imposed HTG to the imposed VTG components of the OTG, and the critical Marangoni number, respectively. However, for sufficiently high $\eta$, a new class of modes emerges with the critical Marangoni number scaling as $Ma_c \sim 1/\eta$. These modes originate as a result of the interaction between the thermocapillary flow caused by the imposed HTG on the one hand, and the VTG on the other, and remain the dominant modes of instability at higher $\eta$. A long-wave analysis is carried out; in its framework, the nonlinear evolution equation is derived and, based on it, linear and weakly nonlinear analyses are performed. An increase in $\eta$ changes the type of bifurcation from subcritical to supercritical. The numerical solution of the evolution equation around the critical parameter values validates the predictions of the weakly nonlinear analysis. The present study illustrates a possible use of imposing the HTG to prevent dry-spot formation and rupture of the film caused by the imposed VTG. Self-excited primary and secondary instability of laminar separation bubbles Daniel Rodríguez, Elmer M. Gennaro, Leandro F. Souza The self-excited instabilities acting on laminar separation bubbles in the absence of external forcing are studied by means of linear stability analysis and direct numerical simulation. Previous studies demonstrated the existence of a three-dimensional modal instability that becomes active for bubbles with peak reversed flow of approximately $7\,\%$ of the free-stream velocity, well below the ${\approx } 16\,\%$ required for the absolute instability of Kelvin–Helmholtz waves. Direct numerical simulations are used to describe the nonlinear evolution of the primary instability, which is found to correspond to a supercritical pitchfork bifurcation and results in fully three-dimensional flows with spanwise inhomogeneity of finite amplitude. An extension of the classic weakly non-parallel analysis is then applied to the bifurcated flows, which have a strong dependence on the cross-stream planes and a mild dependence on the streamwise direction. The spanwise distortion of the separated flow induced by the primary instability is found to strongly destabilize the Kelvin–Helmholtz waves, leading to their absolute instability and the appearance of a global oscillator-type instability. This sequence of instabilities triggers the laminar–turbulent transition without requiring external disturbances or actuation. The characteristic frequency and streamwise and spanwise wavelengths of the self-excited instability are in good agreement with those reported for low-turbulence wind-tunnel experiments without explicit forcing. This indicates that the inherent dynamics described by the self-excited instability can also be relevant when external disturbances are present. Energy transfer structures associated with large-scale motions in a turbulent boundary layer Wenkang Wang, Chong Pan, Jinjun Wang The role of large-scale motions (LSMs) in energy transfer is investigated by analysing wall-parallel velocity fields at low-to-moderate Reynolds number ($Re_{\tau }=1200\text{--}3500$), which are obtained via a two-dimensional (2-D) particle image velocimetry measurement with a large field of view. Two types of energy flux, i.e.
local interscale energy flux and in-plane spatial energy flux, are inspected in detail. Targeting the energy transfer in the large-scale regime, an anisotropic filter is designed based on the zero-crossing scale boundary in a 2-D energy transfer spectrum, across which the net energy flux is the maximum. This 'optimal' energy flux boundary separates the scale space into an energy-donating large-scale part and an energy-receiving small-scale one. The crossover energy flux, as well as the associated flow field structures, are studied by conditional statistics and linear stochastic estimation, in which the statistical spanwise symmetry is deliberately broken by designing special velocity gradient conditions for event probing. A strong connection between large-scale energy flux events and LSMs is found. Namely, forward scatter events have a higher probability of residing on the wavy flank of low-momentum LSMs, compared with the scenario of being clamped in the middle of two streamwise-aligned high- and low-momentum LSMs (Natrajan & Christensen, Phys. Fluids, vol. 18, issue 6, 2006, pp. 299–325). Meanwhile, pairs of positive and negative spatial transfer events tend to be located inside LSMs. It is thus argued that the meandering nature of LSMs, which forms the necessary velocity gradient, might play a determining role in the process of large-scale energy transfer. The spatial correlation between them is then schematized in a conceptual model, which explains most of the present observations. Superhydrophobic annular pipes: a theoretical study Darren G. Crowdy Analytical solutions are presented for longitudinal flow along a superhydrophobic annular pipe where one wall, either the inner or outer, is a fully no-slip boundary while the other is a no-slip wall decorated by a rotationally symmetric pattern of no-shear longitudinal stripes. Formulas are given for the effective slip length associated with laminar flow along the superhydrophobic pipe and the friction properties are characterized. It is shown how these new solutions generalize two solutions to mixed no-slip/no-shear boundary value problems due to Philip (Z. Angew. Math. Phys., vol. 23, 1972, pp. 353–372) for flow in a single-walled superhydrophobic pipe and a superhydrophobic channel. This is done by providing alternative representations of Philip's two solutions, including a useful new formula for the effective slip length for his channel flow solution. For a superhydrophobic annular pipe with inner-wall no-shear patterning there is an optimal pipe bore for enhancing hydrodynamic slip for a given pattern of no-shear stripes. These optimal pipes have a ratio of inner–outer pipe radii in the approximate range 0.5–0.6 and depend only weakly on the geometry of the surface patterning. Boundary point singularities are found to be deleterious to the slip, suggesting that, in designing slippery pipes, maximizing the size of uninterrupted no-shear regions is preferable to covering the same surface area with a larger number of smaller no-shear zones. The results add to a list of analytical solutions to mixed boundary value problems relevant to modelling superhydrophobic surfaces. Three-dimensional dynamics of falling films in the presence of insoluble surfactants Assen Batchvarov, Lyes Kahouadji, Cristian R. Constante-Amores, Gabriel Farah Norões Gonçalves, Seungwon Shin, Jalel Chergui, Damir Juric, Richard V. Craster, Omar K. Matar We study the effect of insoluble surfactants on the wave dynamics of vertically falling liquid films.
We use three-dimensional numerical simulations and employ a hybrid interface-tracking/level-set method, taking into account Marangoni stresses induced by gradients of interfacial surfactant concentration. Our numerical predictions for the evolution of the surfactant-free, three-dimensional wave topology are validated against the experimental work of Park & Nosoko (AIChE J., vol. 49, 2003, pp. 2715–2727). The addition of surfactants is found to influence significantly the development of horseshoe-shaped waves. At low Marangoni numbers, we show that the wave fronts exhibit spanwise oscillations before eventually acquiring a quasi-two-dimensional shape. In addition, the presence of Marangoni stresses is found to suppress the peaks of the travelling waves and preceding capillary wave structures. At high Marangoni numbers, a near-complete rigidification of the interface is observed. Reconstruction of turbulent flow fields from lidar measurements using large-eddy simulation Pieter Bauweraerts, Johan Meyers We investigate the reconstruction of a turbulent flow field in the atmospheric boundary layer from a time series of lidar measurements, using large-eddy simulations (LES) and a four-dimensional variational data assimilation algorithm. This leads to an optimisation problem in which the error between measurements and simulations is minimised over an observation time horizon (a generic form of such a cost function is sketched below, after this listing). We also consider reconstruction based on a Taylor's frozen turbulence (TFT) model as a point of comparison. To evaluate the approach, we construct a series of virtual lidar measurements from a fine-grid LES of a pressure-driven boundary layer. The reconstruction uses LES on a coarser mesh and smaller domain, and results are compared to the fine-grid reference. Two lidar scanning modes are considered: a classical plan-position-indicator mode, which sweeps the lidar beam in a horizontal plane, and a three-dimensional pattern that is based on a Lissajous curve. We find that normalised errors lie between $15\,\%$ and $25\,\%$ (error variance normalised by background variance) in the scanning region, and increase to $100\,\%$ over a distance that is comparable to the correlation length scale outside this scanning region. Moreover, LES outperforms TFT by 30 %–70 % depending on scanning mode and location. On the effect of electrostatic surface forces on dielectric falling films Wilko Rohlfs, Liam M. F. Cammiade, Manuel Rietz, Benoit Scheid The destabilization of a dielectric film flow due to an electrostatic surface force is investigated. A weighted residuals integral boundary-layer (WIBL) model is derived and validated against full numerical simulations. The equations of the WIBL model indicate that the electrostatic surface force contributes to the evolution equations in a similar mathematical way to the volumetric gravitational force. Contrary to gravity, an additional electrostatic contribution ($\chi_2$) arises, whose impact increases nonlinearly with decreasing capacitor plate distance. This nonlinear contribution causes a fold of the branch of solutions of the dynamical system and, thus, the co-existence of a low amplitude solution that is stable against infinitesimal disturbances and an unstable high amplitude solution. In time-dependent simulations, the fold coincides with the limit in the parameter space beyond which a finite-time blow-up occurs with an unsaturated growth of the main wave hump leading to wave pinch-off and drop formation. Thus, a phase diagram can be constructed by tracking this fold.
The shape of the main wave prior to blow-up depends on the electrostatic parameter $\chi _2$ . If this parameter is zero, the force is equivalent to a hanging film flow configuration and dripping occurs with a drop-shaped structure. With an increasing contribution of the parameter $\chi _2$ , Taylor-cone waves occur prior to finite-time blow-up, leading to jetting. Finally, the transition from stable to unstable waves is investigated in terms of the two dimensionless electric parameters, the Reynolds, and viscous dissipation numbers. Imposing the most amplified wavelength, a transition border between stable solutions and jetting is identified.
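As anticipated after the Bauweraerts & Meyers abstract above, a generic strong-constraint four-dimensional variational (4D-Var) cost function has the textbook form \begin{equation*} J(x_{0}) = \tfrac{1}{2}\,(x_{0}-x_{b})^{\mathsf{T}} B^{-1} (x_{0}-x_{b}) + \tfrac{1}{2}\sum_{k=0}^{K} \bigl(\mathcal{H}_{k}(x_{k})-y_{k}\bigr)^{\mathsf{T}} R_{k}^{-1} \bigl(\mathcal{H}_{k}(x_{k})-y_{k}\bigr), \qquad x_{k+1} = \mathcal{M}_{k}(x_{k}), \end{equation*} where $x_{0}$ is the initial state being reconstructed, $x_{b}$ a background (prior) state with error covariance $B$, $y_{k}$ the observations (here, the lidar measurements) with error covariance $R_{k}$, $\mathcal{H}_{k}$ the observation operator and $\mathcal{M}_{k}$ the forecast model (here, the LES). This is quoted as standard background only; the paper's exact formulation, weighting and regularisation may differ.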
CommonCrawl
Yang-Mills functional, geometry of the The geometry of the Yang–Mills equations (cf. Yang–Mills field) led to deep, purely mathematical insight, some of which is given below. For notations, see Yang–Mills functional. A symplectic approach in terms of Yang–Mills theory for a bundle on a closed $2$-manifold enabled M.F. Atiyah and R. Bott [a1] to explain old results of M.S. Narasimhan and C.S. Seshadri on moduli spaces of stable holomorphic vector bundles on Riemann surfaces. Among others, Atiyah and Bott derived formulas for the Betti numbers of these moduli spaces which had been obtained earlier by G. Harder and Narasimhan by number-theoretical methods. These moduli spaces carry additional geometrical structures, such as symplectic or, more generally, Poisson or even Kähler structures; see, e.g., [a1] or [a5]. An analogous formalism led to gauge-theoretic equations (now called Hermite–Einstein equations) whose solutions might be expected to exist on stable bundles over higher-dimensional Kähler manifolds. Much research on the Narasimhan–Seshadri moduli spaces is still (1998) going on, e.g. on the ring structure of their real cohomology; see, e.g., [a6] for more details and references. Spectacular results in the topology of $4$-manifolds have been obtained by S.K. Donaldson: Consider a principal $G$-bundle $\xi$ on an oriented Riemannian $4$-manifold $M$, for a compact Lie group $G$ with an invariant scalar product on its Lie algebra. The connections of interest are the so-called instantons, the solutions of the anti-self-dual Yang–Mills equation \begin{equation*} {}^{*} F_{A} = - F_{A}. \end{equation*} When $G$ is the circle group, the anti-self-duality equation is linear and the instantons are completely described by Hodge theory. When $G$ is $\operatorname{SU}(2)$ (say), the equation is a non-linear partial differential equation. Its space of solutions, the moduli space of instantons modulo gauge transformations, is generically (that is, for a generic choice of the metric on $M$) a finite-dimensional smooth manifold which is usually non-compact. In 1981, Donaldson had the insight that the algebraic topology of this moduli space could be used to analyze the topology of the manifold $M$ itself. This came as a complete surprise, since there was no conceptual understanding of how and why instantons were related to the structure of $4$-manifolds. Donaldson first discovered restrictions on the intersection form of a compact $4$-manifold and deduced that certain topological $4$-manifolds do not support any differentiable structure at all. Later he defined differentiable invariants of large classes of manifolds, nowadays called Donaldson polynomials, which were successful in distinguishing non-diffeomorphic differentiable structures on $4$-manifolds. This prompted an entirely new research area. See [a3] for details. In 1994, this research area was turned on its head by the introduction of a new kind of gauge theory, phrased in terms of the differential-geometric equations of N. Seiberg and E. Witten, which, in turn, had again originated from physics, more precisely, from supersymmetry considerations in quantum field theory. Some long-standing problems were solved. For example, P.B. Kronheimer and T.S. Mrowka solved the so-called Thom conjecture that algebraic curves should minimize the genus within a given homology class in $\mathbf{CP}^{2}$. Also, new and unexpected results were found, e.g. C.
In 1994, this research area was turned on its head by the introduction of a new kind of gauge theory, phrased in terms of differential-geometric equations by N. Seiberg and E. Witten, which, in turn, had again originated from physics, more precisely, from supersymmetry considerations in quantum field theory. Some long-standing problems were solved. For example, P.B. Kronheimer and T.S. Mrowka solved the so-called Thom conjecture that algebraic curves should minimize the genus within a given homology class in $\mathbf{CP} ^ { 2 }$. Also, new and unexpected results were found: e.g., C. Taubes established the non-existence of symplectic structures on certain $4$-manifolds and linked Seiberg–Witten invariants of symplectic manifolds with Gromov invariants, as well as gave simpler new proofs for existing results.

The Seiberg–Witten equations involve two entities, a $U ( 1 )$-connection and a spinor field, the most relevant notion being that of a $\operatorname{spin}^{c}$-structure on an oriented Riemannian $4$-manifold $M$. Associated with a $\operatorname{spin}^{c}$-structure on $M$ are bundles $V _ { \pm }$ of positive and negative spinors, a complex determinant line bundle $L = \operatorname { det } ( V _ { \pm } )$ and, furthermore, a canonical mapping $\sigma$ from $V _ { + } \times V _ { + }$ to $\Lambda _ { + } ^ { 2 }$. The equations involve a connection $A$ and a positive spinor $\phi \in \Gamma ( V _ { + } )$ and read \begin{equation*} D _ { A } \phi = 0, \end{equation*} \begin{equation*} F _ { A } ^ { + } = i \sigma ( \phi , \phi ); \end{equation*} here, $D _ { A }$ is the Dirac operator $D _ { A } : \Gamma ( V _ { + } ) \rightarrow \Gamma ( V _ { - } )$ arising from composition of the covariant derivative with Clifford multiplication. The solutions of these equations (these are certain monopoles) are the absolute minima of a certain functional, much as the Yang–Mills instantons minimize the ordinary Yang–Mills functional. The moduli space of these monopoles modulo bundle automorphisms is generically a smooth manifold and it is always compact. The compactness makes these spaces much easier to handle than the instanton moduli spaces. See [a2] and [a7] for more details and references. Many other aspects, for example links to Yang–Mills–Higgs bundles and Floer theory, have been omitted here; see, e.g., the cited references.

[a1] M.F. Atiyah, R. Bott, "The Yang–Mills equations over Riemann surfaces", Philos. Trans. R. Soc. London A, 308 (1982), pp. 523–615
[a2] S.K. Donaldson, "The Seiberg–Witten equations and $4$-manifold topology", Bull. Amer. Math. Soc., 33 (1996), pp. 45–70
[a3] S.K. Donaldson, P.B. Kronheimer, "The geometry of four-manifolds", Oxford Univ. Press (1991)
[a4] N.J. Hitchin, "Book reviews. T. Petrie and J. Randall: Connections, definite forms, and four-manifolds; S.K. Donaldson and P.B. Kronheimer: The geometry of four-manifolds", Bull. London Math. Soc., 25 (1993), pp. 499–502
[a5] J. Huebschmann, "Poisson geometry of certain moduli spaces", Lectures, 14th Winter School, Srní, Czech Republic, Jan. 1994, Rend. Circ. Mat. Palermo, Ser. II, 39 (1996), pp. 15–35
[a6] J. Huebschmann, "Review on a paper by L. Jeffrey and F. Kirwan", Math. Reviews, 98e (1998), 58088
[a7] D. Kotschick, "Gauge theory is dead! — long live gauge theory", Notices Amer. Math. Soc., 42 (1995), pp. 335–338

This article was adapted from an original article by Johannes Huebschmann (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. URL: http://encyclopediaofmath.org/index.php?title=Yang-Mills_functional,_geometry_of_the&oldid=50726
Boundedness vs. blow-up in a two-species chemotaxis system with two chemicals

Youshan Tao (Department of Applied Mathematics, Dong Hua University, Shanghai 200051) and Michael Winkler (Institut für Mathematik, Universität Paderborn, 33098 Paderborn)
Discrete & Continuous Dynamical Systems - B, November 2015, 20(9): 3165-3183. doi: 10.3934/dcdsb.2015.20.3165
Received August 2014; Revised April 2015; Published September 2015

We consider a model for two species interacting through chemotaxis in such a way that each species produces a signal which directs the respective motion of the other. Specifically, we shall be concerned with nonnegative solutions of the Neumann problem, posed in bounded domains $\Omega\subset \mathbb{R}^n$ with smooth boundary, for the system $$\begin{cases} u_t= \Delta u - \chi \nabla \cdot (u\nabla v), & x\in \Omega, \, t>0, \\ 0=\Delta v-v+w, & x\in \Omega, \, t>0, \qquad (\star)\\ w_t= \Delta w - \xi \nabla \cdot (w\nabla z), & x\in \Omega, \, t>0, \\ 0=\Delta z-z+u, & x\in \Omega, \, t>0, \end{cases}$$ with parameters $\chi \in \{\pm 1\}$ and $\xi\in \{\pm 1\}$, thus allowing the interaction to be of attraction-repulsion, attraction-attraction, or repulsion-repulsion type. It is shown that
$\bullet$ in the attraction-repulsion case $\chi=1$ and $\xi=-1$: if $n\le 3$, then for any nonnegative initial data $u_0\in C^0(\bar{\Omega})$ and $w_0\in C^0 (\bar{\Omega})$, there exists a unique global classical solution which is bounded;
$\bullet$ in the doubly repulsive case $\chi=\xi=-1$: the same holds true;
$\bullet$ in the attraction-attraction case $\chi=\xi=1$:
$-$ if either $n=2$ and $\int_\Omega u_0 + \int_\Omega w_0$ lies below some threshold, or $n\ge 3$ and $\|u_0\|_{L^\infty(\Omega)}$ and $\|w_0\|_{L^\infty(\Omega)}$ are sufficiently small, then solutions exist globally and remain bounded, whereas
$-$ if either $n=2$ and $m$ is suitably large, or $n\ge 3$ and $m>0$ is arbitrary, then there exist smooth initial data $u_0$ and $w_0$ such that $\int_\Omega u_0 + \int_\Omega w_0=m$ and such that the corresponding solution blows up in finite time.
In particular, these results demonstrate that the circular chemotaxis mechanism underlying ($\star$) goes along with essentially the same destabilizing features as known for the classical Keller-Segel system in the doubly attractive case, but totally suppresses any blow-up phenomenon when only one, or both, taxis directions are repulsive.

Keywords: Chemotaxis, attraction, repulsion, preventing blow-up.
Mathematics Subject Classification: Primary: 35A01, 35B44, 35K57, 35Q92, 92C17.
Citation: Youshan Tao, Michael Winkler. Boundedness vs. blow-up in a two-species chemotaxis system with two chemicals. Discrete & Continuous Dynamical Systems - B, 2015, 20(9): 3165-3183. doi: 10.3934/dcdsb.2015.20.3165
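For context, the "destabilizing features as known for the classical Keller-Segel system" refer to the well-known critical-mass dichotomy; in its simplest (parabolic-elliptic, two-dimensional) form it can be stated roughly as follows, with the caveat that the precise threshold depends on the setting (e.g. radial symmetry, or interior versus boundary blow-up): \begin{align*} &u_t = \Delta u - \chi \nabla\cdot(u\nabla v), \qquad 0 = \Delta v - v + u, \qquad x\in\Omega\subset\mathbb{R}^2, \, t>0:\\ &\int_\Omega u_0 < \frac{8\pi}{\chi} \;\Rightarrow\; \text{solutions are global and bounded}, \qquad \int_\Omega u_0 > \frac{8\pi}{\chi} \;\Rightarrow\; \text{finite-time blow-up is possible}. \end{align*} The theorem above shows that the four-component circular system ($\star$) inherits this mass-threshold behavior in the doubly attractive case only.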
Results for 'Adele Mccollum'

Hypertext. Adele McCollum & David Stuehler (1989). Inquiry: Critical Thinking Across the Disciplines 4(4):9-11.

Thinking Critically About Gender. David Stuehler & Adele McCollum (1989). Inquiry: Critical Thinking Across the Disciplines 4(2):5.

The Organization of Human Postural Movements: A Formal Basis and Experimental Synthesis. Lewis M. Nashner & Gin McCollum (1985). Behavioral and Brain Sciences 8(1):135-150.

Developmental Genetics and Early Hominid Craniodental Evolution. Melanie A. McCollum & Paul T. Sharpe (2001). BioEssays 23(6):481-493.

Constructions: A New Theoretical Approach to Language. Adele E. Goldberg (2003). Trends in Cognitive Sciences 7(5):219-224. A new theoretical approach to language has emerged in the past 10-15 years that allows linguistic observations about form-meaning pairings, known as 'constructions', to be stated directly. Constructionist approaches aim to account for the full range of facts about language, without assuming that a particular subset of the data is part of a privileged 'core'. Researchers in this field argue that unusual constructions shed light on more general issues, and can illuminate what is required for a complete account of language.

Elmer McCollum and the Disappearance of Rickets. Gale W. Rafter (1986). Perspectives in Biology and Medicine 30(4):527-534.

On the Decision Problem for Two-Variable First-Order Logic. Erich Grädel, Phokion Kolaitis & Moshe Vardi (1997). Bulletin of Symbolic Logic 3(1):53-69.

Argument Structure Constructions Versus Lexical Rules or Derivational Verb Templates. Adele E. Goldberg (2013). Mind and Language 28(4):435-465. The idea that correspondences relating grammatical relations and semantics (argument structure constructions) are needed to account for simple sentence types is reviewed, clarified, updated and compared with two lexicalist alternatives. Traditional lexical rules take one verb as 'input' and create (or relate) a different verb as 'output'. More recently, invisible derivational verb templates have been proposed, which treat argument structure patterns as zero derivational affixes that combine with a root verb to yield a new verb. While the derivational template perspective can address several problems that arise for traditional lexical rules, it still faces problems in accounting for idioms, which often contain specifications that are not appropriately assigned to individual verbs or derivational affixes (regarding adjuncts, modification, and inflection). At the same time, it is clear that verbs play a central role in determining their distribution. The balance between verbs and phrasal argument structure constructions is addressed via the Principles of Semantic Coherence and Correspondence together with a usage-based hierarchy of constructions that contains entries which can include particular verbs and other lexical material.
A Novel of Fuzzy PSS Based on New Objective Function in Multimachine Power System. Adel Akbarimajd & Nasser Yousefi (2016). Complexity 21(6):288-298.

The UNESCO Universal Declaration on Bioethics and Human Rights: Perspectives From Kenya and South Africa. Adèle Langlois (2008). Health Care Analysis 16(1):39-51. In October 2005, UNESCO (the United Nations Educational, Scientific and Cultural Organization) adopted the Universal Declaration on Bioethics and Human Rights. This was the culmination of nearly 2 years of deliberations and negotiations. As a non-binding instrument, the declaration must be incorporated by UNESCO's member states into their national laws, regulations or policies in order to take effect. Based on documentary evidence and data from interviews, this paper compares the declaration's universal principles with national bioethics guidelines and practice in Kenya and South Africa. It concentrates on areas of particular relevance to developing countries, such as protection of vulnerable persons and social responsibility. The comparison demonstrates the need for universal principles to be contextualised before they can be applied in a meaningful sense at national level. The paper also assesses the 'added value' of the declaration in terms of biomedical research ethics, given that there are already well-established international instruments on bioethics, namely the World Medical Association Declaration of Helsinki and the CIOMS (Council for International Organizations of Medical Sciences) guidelines on biomedical research. It may be that the added value lies as much in the follow-up capacity building activities being initiated by UNESCO as in the document itself.

Meaning and Necessity: Can Semantics Stop Same-Sex Marriage? Adèle Mercier (2007). Essays in Philosophy 8(1):14. Think of this paper as an exercise in applied philosophy of language. It has both semantic and deontic concerns. More than about the meaning of 'marriage,' it is about how one goes about determining the meaning of social kind terms like 'marriage'. But it is equally about the place of philosophy of language in the legislative sphere, and inter alia, about the roles and responsibilities of philosophers in public life.

A Perverse Case of the Contingent A Priori: On the Logic of Emasculating Language. Adèle Mercier (1995). Philosophical Topics 23(2):221-259.

Corpus Evidence of the Viability of Statistical Preemption. Adele E. Goldberg (2011). Cognitive Linguistics 22(1).

Three Elements of Stakeholder Legitimacy. Adele Santana (2012). Journal of Business Ethics 105(2):257-265. This paper focuses attention on the stakeholder attribute of legitimacy.
Drawing upon institutional and stakeholder theories, I develop a framework of stakeholder legitimacy based on its three aspects: legitimacy of the stakeholder as an entity, legitimacy of the stakeholder's claim, and legitimacy of the stakeholder's behavior. I assume that stakeholder legitimacy is socially constructed by management and that each of its three aspects exists in degree in the manager's perception. I discuss how these aspects interact and change over time, and propose an agenda for future research on stakeholder legitimacy.

Parameter Optimization of MIMO Fuzzy Optimal Model Predictive Control by APSO. Adel Taieb, Moêz Soltani & Abdelkader Chaari (2017). Complexity 2017:1-11.

Diagrams as Tools for Scientific Reasoning. Adele Abrahamsen & William Bechtel (2015). Review of Philosophy and Psychology 6(1):117-131. We contend that diagrams are tools not only for communication but also for supporting the reasoning of biologists. In the mechanistic research that is characteristic of biology, diagrams delineate the phenomenon to be explained, display explanatory relations, and show the organized parts and operations of the mechanism proposed as responsible for the phenomenon. Both phenomenon diagrams and explanatory relations diagrams, employing graphs or other formats, facilitate applying visual processing to the detection of relevant patterns. Mechanism diagrams guide reasoning about how the parts and operations work together to produce the phenomenon and what experiments need to be done to improve on the existing account. We examine how these functions are served by diagrams in circadian rhythm research.

Transparency and Social Responsibility Issues for Wikipedia. Adele Santana & Donna J. Wood (2009). Ethics and Information Technology 11(2):133-144. Wikipedia is known as a free online encyclopedia. Wikipedia uses largely transparent writing and editing processes, which aim at providing the user with quality information through a democratic collaborative system. However, one aspect of these processes is not transparent: the identity of contributors, editors, and administrators. We argue that this particular lack of transparency jeopardizes the validity of the information being produced by Wikipedia. We analyze the social and ethical consequences of this lack of transparency in Wikipedia for all users, but especially students; we assess the corporate social performance issues involved, and we propose courses of action to compensate for the potential problems. We show that Wikipedia has the appearance, but not the reality, of responsible, transparent information production.

Dynamic Analysis of Complex Synchronization Schemes Between Integer Order and Fractional Order Chaotic Systems with Different Dimensions. Adel Ouannas, Xiong Wang, Viet-Thanh Pham & Toufik Ziar (2017). Complexity 2017:1-12. We present new approaches to synchronize different dimensional master and slave systems described by integer order and fractional order differential equations. Based on fractional order Lyapunov approach and integer order Lyapunov stability method, effective control schemes to rigorously study the coexistence of some synchronization types between integer order and fractional order chaotic systems with different dimensions are introduced. Numerical examples are used to validate the theoretical results and to verify the effectiveness of the proposed schemes.
Student Academic Dishonesty: What Do Academics Think and Do, and What Are the Barriers to Action? Adele Thomas & Gideon P. De Bruin (2012). African Journal of Business Ethics 6(1):13. The aims of the study were to explore the awareness of and attitudes towards student academic dishonesty at a South African university, and to explore perceived personal and institutional barriers to taking action against such dishonesty. All full-time academic staff at the University of Johannesburg were anonymously surveyed during late 2009. The findings indicated a high level of awareness of student academic dishonesty, with few faculty members taking action against it. Four groups of barriers to preventing and acting on student academic dishonesty were identified, with two of these barrier groups being significantly related to willingness to report student academic dishonesty.

William McElroy, the McCollum–Pratt Institute, and the Transformation of Biology at Johns Hopkins, 1945–1960. Tulley Long (2009). Journal of the History of Biology 42(4):765-809. In 1948, a dynamic junior member of the Johns Hopkins Biology Department, William McElroy, became the first director of the McCollum–Pratt Institute for the Investigation of Micronutrient Elements. The Institute was founded at the university to further studies into the practicalities of animal nutrition. Ultimately, however, the Institute reflected McElroy's vision that all biological problems, including nutrition, could be best investigated through basic biochemical and enzyme studies. The Institute quickly became a hub of biochemical research over the following decade, producing foundational work on metabolism and a respected series of symposia. In this paper, I argue that McElroy's biochemical vantage on biology also permeated the traditionally morphological and embryological Biology Department at Hopkins. Largely due to the activity of McElroy and the Institute, the faculty, course offerings, and research underwent a radical reorientation toward biochemistry and molecular biology in the 1950s, even while maintaining a commitment to developmental biology. While the history of postwar biology is often told as the ascendancy of the "new" biology over "traditional" biology, the case of McElroy and the McCollum–Pratt Institute affords an opportunity for historical examination of biochemical and molecular science as a lens through which all branches of biology at an institution were reconceived and unified.

Explanation and Constructions: Response to Adger. Adele E. Goldberg (2013). Mind and Language 28(4):479-491.

Explanation: A Mechanist Alternative. William Bechtel & Adele Abrahamsen (2005). Studies in History and Philosophy of Biological and Biomedical Sciences 36(2):421-441. Explanations in the life sciences frequently involve presenting a model of the mechanism taken to be responsible for a given phenomenon. Such explanations depart in numerous ways from nomological explanations commonly presented in philosophy of science. This paper focuses on three sorts of differences. First, scientists who develop mechanistic explanations are not limited to linguistic representations and logical inference; they frequently employ diagrams to characterize mechanisms and simulations to reason about them.
Thus, the epistemic resources for presenting mechanistic explanations are considerably richer than those suggested by a nomological framework. Second, the fact that mechanisms involve organized systems of component parts and operations provides direction to both the discovery and testing of mechanistic explanations. Finally, models of mechanisms are developed for specific exemplars and are not represented in terms of universally quantified statements. Generalization involves investigating both the similarity of new exemplars to those already studied and the variations between them.

Bridging Boundaries Versus Breaking Boundaries: Psycholinguistics in Perspective. Adele A. Abrahamsen (1987). Synthese 72(3):355-388.

Need, Frames, and Time Constraints in Risky Decision-Making. Adele Diederich, Marc Wyszynski & Stefan Traub (2020). Theory and Decision 89(1):1-37. In two experiments, participants had to choose between a sure and a risky option. The sure option was presented either in a gain or a loss frame. Need was defined as a minimum score the participants had to reach. Moreover, choices were made under two different time constraints and with three different levels of induced need to be reached within a fixed number of trials. The two experiments differed with respect to the specific amounts to win and the need levels. The $2 \times 2 \times 3$ design was a within-subject design. Data were evaluated on an overall and on a group level, the latter based on participants' stated risk preference and on revealed preferences using cluster analysis across subjects. Overall, the results showed riskier behavior when the choice options were presented as losses as compared to gains and when the induced need was highest. Time limits enhanced the framing effect.

Phenomena and Mechanisms: Putting the Symbolic, Connectionist, and Dynamical Systems Debate in Broader Perspective. Adele A. Abrahamsen & William P. Bechtel (2006). In R. Stainton (ed.), Contemporary Debates in Cognitive Science. Blackwell. Cognitive science is, more than anything else, a pursuit of cognitive mechanisms. To make headway towards a mechanistic account of any particular cognitive phenomenon, a researcher must choose among the many architectures available to guide and constrain the account. It is thus fitting that this volume on contemporary debates in cognitive science includes two issues of architecture, each articulated in the 1980s but still unresolved: (1) Just how modular is the mind? A debate initially pitting encapsulated mechanisms against highly interactive ones. (2) Does the mind process language-like representations according to formal rules? A debate initially pitting symbolic architectures against less language-like architectures. Our project here is to consider the second issue within the broader context of where cognitive science has been and where it is headed. The notion that cognition in general, not just language processing, involves rules operating on language-like representations actually predates cognitive science. In traditional philosophy of mind, mental life is construed as involving propositional attitudes (that is, such attitudes towards propositions as believing, fearing, and desiring that they be true) and logical inferences from them.
On this view, if a person desires that a proposition be true and believes that if she performs a certain action it will become true, she will make the inference and perform the action.

Model Theory of Adeles I. Jamshid Derakhshan & Angus Macintyre (2022). Annals of Pure and Applied Logic 173(3):103074.

Hermeneutical Injustice and the Social Sciences: Development Policy and Positional Objectivity. James McCollum (2012). Social Epistemology 26(2):189-200. In Epistemic Injustice, Miranda Fricker employs the critical concept of hermeneutical injustice. Such injustice entails unequal participation in the epistemic practices of a community that often results in an inability of dominated subjects to understand their own experiences and have them understood by their community. I argue that hermeneutical injustice can be an aspect of institutions as well as communities, to the extent that they too engage in epistemic practices that seek to understand the problems and experiences of their constituents. My primary example is the case of development theory and international development agencies, where human beings were objectified in undesirable ways by the prevailing neoliberal economic theories that guided development practice. Here economic theory and the power to achieve its vision of unconstrained economic growth were combined in various organizations. Consequently such organizations systematically misunderstood the problems of the very people they were supposed to help. I argue that if hermeneutical injustice can be the result of the intersection of science and organizations, we need to create more participatory ways of gleaning information about social ills to alleviate institutionally mediated hermeneutical injustice.

Learning Argument Structure Generalizations. Adele E. Goldberg, Devin M. Casenhiser & Nitya Sethuraman (2004). Cognitive Linguistics 15(3).

Ethics and the Networked Business. Adele Santana, Antonino Vaccaro & Donna J. Wood (2009). Journal of Business Ethics 90(S4):661-681. Pushing through a logical continuum of closed- to open-system views of organizations necessarily changes the conceptualization of a firm from a strongly bounded entity to a configuration of networks and sub-networks, which exists and operates in a larger systemic network configuration. We unfold a classification of management processes corresponding to views of the firm along the closed/open-systems continuum. We examine ethical issues that are likely to devolve from these classes of management processes, and we suggest typical means by which managers will attempt to control their firms' exposure to such issues. The final class of management processes examined focuses on the achievement of outcomes that are mutually satisfactory in the set of networks and sub-networks that constitute the focal firm, and that support the sustainability of the whole system.
The article contributes to organizational theory, business ethics, and computer and information ethics by providing a comprehensive analysis of the impact of managerial views of the firm and of networks (virtual, social, informational) on managerial processes and on our understanding of how business ethics issues are linked to perceptions of what a firm is, does, and can do.

Pattern Destabilization and Emotional Processing in Cognitive Therapy for Personality Disorders. Adele M. Hayes & Carly Yasinski (2015). Frontiers in Psychology 6.

From Reactive to Endogenously Active Dynamical Conceptions of the Brain. Adele Abrahamsen & William Bechtel (unpublished). We contrast reactive and endogenously active perspectives on brain activity. Both have been pursued continuously in neurophysiology laboratories since the early 20th century, but the endogenous perspective has received relatively little attention until recently. One of the many successes of the reactive perspective was the identification, in the second half of the 20th century, of the distinctive contributions of different brain regions involved in visual processing. The recent prominence of the endogenous perspective is due to new findings of ongoing oscillatory activity in the brain at a wide range of time scales, exploiting such techniques as single-cell recording, EEG, and fMRI. We recount some of the evidence pointing to ways in which this endogenous activity is relevant to cognition and behavior. Our major objective is to consider certain implications of the contrast between the reactive and endogenous perspectives. In particular, we relate these perspectives to two different characterizations of explanation in the new mechanistic philosophy of science. In a basic mechanistic explanation, the operations of a mechanism are characterized qualitatively and as functioning sequentially until a terminating condition is realized. In contrast, a dynamic mechanistic explanation allows for non-sequential organization and emphasizes quantitative modeling of the mechanism's behavior. For example, with appropriate parameter values a set of differential equations can be used to demonstrate ongoing oscillations in a system organized with feedback loops. We conclude that the basic conception of mechanistic explanation is adequate for reactive accounts of brain activity, but dynamical accounts are required to explain sustained, endogenous activity.

On Communication-Based De Re Thought, Commitments De Dicto, and Word Individuation. Adèle Mercier (1998). In Robert Stainton & Kumiko Murasugi (eds.), Philosophy and Linguistics. Westview Press, pp. 85-111. Provides an account of how necessary subjective syntactic investments on the part of speakers affect the semantic contents of their words and the possibilities for their thought-contents.

The Time Window of Multisensory Integration: Relating Reaction Times and Judgments of Temporal Order. Adele Diederich & Hans Colonius (2015). Psychological Review 122(2):232-241.

The Nature of Generalization in Language. Adele E. Goldberg (2009). Cognitive Linguistics 20(1).
On the Nature of Marriage: Somerville on Same-Sex Marriage. Adèle Mercier (2008). The Monist 91(3-4):407-421.

Vashti McCollum and Separation of Church and State in the USA. Robert Bender (2012). The Australian Humanist (106):13. The USA constitution does not have a clause requiring any separation of church and state, and until 1948 there were no Supreme Court rulings to ensure that this was seen as a basic constitutional principle. Then in 1945 Vashti McCollum, a 33-year-old part-time square-dancing teacher from Champaign, Illinois, initiated a legal action that changed all that.

Cognitive Accessibility Predicts Word Order of Couples' Names in English and Japanese. Adele E. Goldberg & Karina Tachihara (2020). Cognitive Linguistics 31(2):231-249. We investigate the order in which speakers produce the proper names of couples they know personally in English and Japanese, two languages with markedly different constituent word orders. Results demonstrate that speakers of both languages tend to produce the name of the person they feel closer to before the name of the other member of the couple. In this way, speakers' unique personal histories give rise to a remarkably systematic linguistic generalization in both English and Japanese. Insofar as closeness serves as an index of cognitive accessibility, the current work demonstrates that systematicity emerges from a domain-general property of memory.

Dynamic Mechanistic Explanation: Computational Modeling of Circadian Rhythms as an Exemplar for Cognitive Science. William Bechtel & Adele Abrahamsen (2010). Studies in History and Philosophy of Science Part A 41(3):321-333. Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cognitive science, moreover, have significantly affected the way in which cognitive mechanisms are understood. Unable to employ some of the more common methods for conducting research on mechanisms, cognitive scientists' guiding ideas about mechanism have developed in conjunction with their styles of modeling. In particular, mental operations often are conceptualized as comparable to the processes employed in classical symbolic AI or neural network models. These models, in turn, have been interpreted by some as themselves intelligent systems since they employ the same type of operations as does the mind. For this paper, what is significant about these approaches to modeling is that they are constructed specifically to account for behavior and are evaluated by how well they do so, not by independent evidence that they describe actual operations in mental mechanisms.

Surface Generalizations: An Alternative to Alternations. Adele E. Goldberg (2002). Cognitive Linguistics 13(4).

The Inherent Semantics of Argument Structure: The Case of the English Ditransitive Construction. Adele E. Goldberg (1992). Cognitive Linguistics 3(1):37-74.

But Do We Need Universal Grammar? Comment on Lidz et al. Adele E. Goldberg (2004). Cognition 94(1):77-84.
"Fair Ones of a Purer Caste": White Women and Colonialism in Nineteenth-Century British Columbia. Adele Perry (1997). Feminist Studies 23(3):501.

Relatedness and Self-Definition: Two Dominant Themes in Middle-Class Americans' Life Stories. Chris McCollum (2002). Ethos: Journal of the Society for Psychological Anthropology 30(1-2):113-139.

Anatomy, and Biochemistry. Adele Diamond (2002). In Donald T. Stuss & Robert T. Knight (eds.), Principles of Frontal Lobe Function. Oxford University Press, p. 466.

Linguistic Generalization on the Basis of Function and Constraints on the Basis of Statistical Preemption. Florent Perek & Adele E. Goldberg (2017). Cognition 168:276-293.

The UNESCO Bioethics Programme. Adèle Langlois (2014). The New Bioethics 20(1):3-11.

Reflections on the Reproductive Sciences in Agriculture in the UK and US, ca. 1900-2000+. Adele E. Clarke (2007). Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 38(2):316-339. This paper provides a brief comparative overview of the development of the reproductive sciences especially in agriculture in the UK and the US. It begins with the establishment by F. H. A. Marshall in 1910 of the boundaries that framed the reproductive sciences as distinct from genetics and embryology. It then examines how and where the reproductive sciences were taken up in agricultural research settings, focusing on the differential development of US and UK institutions. The reproductive sciences were also pursued in medical and biological settings, and I discuss how the intersections among all three allowed the circulation of both ideas and scientists' careers. Across the twentieth century, scientific leadership in the reproductive sciences alternated between the UK and US, and these patterns are elucidated. I conclude with thoughts on future research that might emphasize the elaboration of industrialization processes in agriculture and new capacities to transform both reproductive processes and their products (life itself) as biopower comes to be more ambitiously understood as extending across all species.
A method to reduce ancestry related germline false positives in tumor only somatic variant calling

Rebecca F. Halperin, John D. Carpten, Zarko Manojlovic, Jessica Aldrich, Jonathan Keats, Sara Byron, Winnie S. Liang, Megan Russell, Daniel Enriquez, Ana Claasen, Irene Cherni, Baffour Awuah, Joseph Oppong, Max S. Wicha, Lisa A. Newman, Evelyn Jaigge, Seungchan Kim & David W. Craig

Abstract

Significant clinical and research applications are driving large scale adoption of individualized tumor sequencing in cancer in order to identify tumor-specific mutations. When a matched germline sample is available, somatic mutations may be identified using comparative callers. However, matched germline samples are frequently not available, such as with archival tissues, which makes it difficult to distinguish somatic from germline variants. While population databases may be used to filter out known germline variants, recent studies have shown that private germline variants result in an inflated false positive rate in unmatched tumor samples, and that the number of germline false positives in an individual may be related to ancestry.

First, we examined the relationship between germline false positives and ancestry. Then we developed and implemented a tumor only caller (LumosVar) that leverages differences in allelic frequency between somatic and germline variants in impure tumors. We used simulated data to systematically examine how copy number alterations, tumor purity, and sequencing depth should affect the sensitivity of our caller. Finally, we evaluated the caller on real data.

We find that the germline false-positive rate is significantly higher for individuals of non-European ancestry, largely due to the limited diversity in public polymorphism databases and due to population-specific characteristics such as admixture or recent expansions. Our Bayesian tumor only caller (LumosVar) is able to greatly reduce false positives from private germline variants, and our sensitivity is similar to predictions based on simulated data.

Taken together, our results suggest that studies of individuals of non-European ancestry would most benefit from our approach. However, high sensitivity requires sufficiently impure tumors and adequate sequencing depth. Even in impure tumors, there are copy number alterations that result in germline and somatic variants having similar allele frequencies, limiting the sensitivity of the approach. We believe our approach could greatly improve the analysis of archival samples in a research setting where the normal is not available.

Background

Next generation sequencing of tumours is widely used both for discovery of biologically important somatic variants and for personalizing treatment based on clinically relevant somatic variants. In both cases, accurate identification of somatic variants is crucial. Ideally, a constitutional DNA sample from the same individual is sequenced in parallel, so that somatic variants can be identified by comparing the tumour to the constitutional sequence. In some cases, the constitutional sample may not be available, such as with many archival samples. Tumour only sequencing is also frequently used in clinical practice [1]. Without a matched germline sequence it is difficult to differentiate between private germline variants and somatic variants [2]. In addition to using variant databases to filter out germline variants, it is also possible to use variant allele frequencies to differentiate between somatic and germline variants [3, 4].
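As a toy illustration of the allele-frequency idea (a calculation of ours, not taken from the paper): in a copy-neutral diploid region of a tumour sample with purity $f = 0.6$, a heterozygous germline variant sits near a variant allele frequency of $0.5$ regardless of purity, whereas a clonal heterozygous somatic variant is pulled down to $f/2$: \begin{align*} \phi^{\text{germline}} &= \frac{f \cdot 1 + (1-f) \cdot 1}{f \cdot 2 + (1-f) \cdot 2} = 0.5, & \phi^{\text{somatic}} &= \frac{f \cdot 1}{f \cdot 2 + (1-f) \cdot 2} = \frac{0.6}{2} = 0.3. \end{align*} The gap between the two distributions shrinks as $f \to 1$, which is why the approach requires sufficiently impure tumours, and it can vanish entirely under certain copy number alterations.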
We have implemented a Bayesian tumour-only somatic variant caller, LumosVar, that leverages both prior knowledge of population frequencies of germline and cancer mutations and the observed variant allele frequencies. Here we evaluate how the tumour content, copy number alterations, read depth, and ancestry of a sample affect the ability to detect somatic variants in an unmatched tumour sample.

Misidentifying germline variants as somatic can be problematic in several ways, depending on the mutation. First, the interpretation of a previously uncharacterized variant would be very different depending on whether it was thought to be somatic or germline. Healthy individuals typically have 130–400 rare non-synonymous germline variants [5]. Therefore, it is not surprising that a study sequencing a panel of 42 cancer genes in 175 participants uncovered 269 germline missense mutations of unknown significance [6]. According to ACMG guidelines, missense germline mutations that are rare and have not been functionally characterized should be reported as uncertain or likely benign [7]. Since tumour suppressors tend to have loss-of-function mutations throughout their length, it is unlikely that any specific mutation in a tumour suppressor is well characterized [8]. Clinical tumour sequencing tests, such as MSK-IMPACT, typically include uncharacterized non-synonymous mutations in known cancer genes in the main body of their report [9]. There are some variants, such as those in BRCA1 and BRCA2, that are known to occur in the germline and affect cancer risk, but may also occur as somatic mutations in tumours. A study of tumour with matched normal sequencing in over 1000 individuals identified known pathogenic germline variants in over 2% of the participants [10].

Jones et al. examined the presence of germline false positives when tumour samples were analysed without a matched normal sample [2]. They applied standard somatic variant calling tools, designed for matched tumour-normal pairs, to unmatched tumour-normal pairs. After filtering out putative germline variants by removing those found in public databases, they found that an average of 65% of the variants called somatic in the unmatched samples were private germline variants. Additionally, strict filtering removed a small number of mutations that were truly somatic. Most strikingly, they found that ~50% of patients in their cohort would have germline false positives in clinically actionable genes.

The advent of next generation sequencing has accelerated the discovery of cancer driver mutations. However, we have now reached a point where most of the common coding driver mutations have already been discovered [8]. Efforts are now turning to uncovering rare and noncoding driver mutations [11,12,13]. One prominent example of a noncoding driver mutation is the TERT promoter mutation [14]. Statistical methods may be used to prioritize somatic noncoding mutations for functional characterization [15, 16]. Germline variants may also contribute to cancer risk and progression [17]. Different statistical models are required to analyze germline and somatic alterations [18]. When a filtering approach is used, the false positive rate is much higher for noncoding variants, as most large scale sequencing projects have focused on coding regions (Additional file 1: Figure S8). Therefore, we believe our approach would be extremely valuable for discovering novel noncoding variants in unmatched tumor cohorts.
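To make the baseline concrete, here is a minimal sketch of the database-filtering rule that the text contrasts with LumosVar, assuming a candidate VCF that carries dbSNP IDs and a dbSNP variant-allele-origin annotation in its INFO column (the SAO key, where 0 = unspecified, 1 = germline, 2 = somatic, 3 = both). The file name, the SAO parsing, and the choice to treat unannotated dbSNP records as germline are illustrative assumptions, not LumosVar's actual implementation.

import gzip

# Toy filter: keep candidate somatic calls that are NOT known germline
# variants. Following the rule described in the text, a variant is treated
# as "known germline" when dbSNP lists its allele origin as germline (1)
# or unspecified (0); variants marked somatic (2), both (3), or absent
# from dbSNP remain candidates (potential somatic or private germline).

def sao_code(info_field):
    """Extract the SAO (variant allele origin) code from a VCF INFO string."""
    for entry in info_field.split(";"):
        if entry.startswith("SAO="):
            return int(entry.split("=")[1])
    return None  # record carries no SAO annotation

def passes_germline_filter(vcf_line):
    fields = vcf_line.rstrip("\n").split("\t")
    rsid, info = fields[2], fields[7]
    if rsid == ".":       # not in dbSNP: keep as candidate
        return True
    sao = sao_code(info)
    if sao in (2, 3):     # annotated somatic or both: keep as candidate
        return True
    # SAO of 0 or 1, or a dbSNP record without SAO: treat as known germline
    return False

with gzip.open("tumor_candidates.dbsnp_annotated.vcf.gz", "rt") as vcf:
    for line in vcf:
        if line.startswith("#"):
            continue
        if passes_germline_filter(line):
            print(line, end="")

The weakness of this baseline, as the surrounding text notes, is everything it cannot see: a private germline variant absent from dbSNP passes the filter and is miscalled somatic, which is exactly the failure mode that allele-frequency modeling is meant to address.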
As private germline variants are potential false positives in tumour only somatic variant calling, the number of private germline variants that an individual has would have a direct impact on the calling precision. The 1000 Genomes Project found more novel variants in populations of African ancestry compared to those of European ancestry [5]. We would therefore expect there to be differences in the number of private germline variants between populations of different ancestry. In this study, we explicitly examine how the number of private germline variants varies with ancestry and present a strategy to reduce false positives due to private germline variants in tumor-only somatic mutation calling. Our strategy uses a model of variant allele frequencies to improve classification of somatic versus germline variants. We estimate allelic copy number and clonal sample fractions to model the expected allele frequency distributions of somatic and germline variants. We use a Bayesian framework that integrates the allele frequency model with prior probabilities of somatic or germline status calculated from 1000 Genomes population frequencies and cancer mutation counts from COSMIC. In order to evaluate our model, we first use simulations to systematically examine the effects of tumour content, copy number, and read depth on the power to detect somatic variants. Then we use in silico dilutions and down-sampling experiments to examine these effects on real data. Finally, we evaluate the tumour only calling approach on tumour samples of different ancestry.

Methods

Nucleic acid isolation, library prep, and sequencing

Biospecimens were previously collected under approval by the Western Institutional Review Board (WIRB #20100721; WIRB #20141201; and WIRB #20031485). "Fresh-frozen" tumor biopsy specimens were used for determination of the tumor's genomic profile and quality was assessed for tumor cellularity, necrosis, crush artifact, etc. Constitutional (or inherited) germline variants were determined by sequencing genomic DNA previously isolated from whole blood provided at the time of biopsy. RNA and DNA were extracted from tumor biopsy specimens using the Qiagen AllPrep kit (Qiagen, Germantown, MD). Table 1 provides information regarding patients and samples.

Table 1: Samples and sequencing statistics

For fresh-frozen tissue, tissue from the needle biopsy was disrupted and homogenized in Buffer RLT Plus (Qiagen AllPrep DNA/RNA Mini Kit) using the Bullet Blender (Next Advance). Specifically, tissue was transferred to a microcentrifuge tube containing 600 μl of Buffer RLT Plus and stainless steel beads. The tissue was homogenized in the Bullet Blender at room temperature. The sample was centrifuged at full speed and the lysate was transferred to the Qiagen AllPrep DNA spin column. Genomic DNA purification was conducted as directed by the AllPrep DNA/RNA Mini Handbook (Qiagen). DNA was quantified using the Thermo Scientific Nanodrop spectrophotometer and quality was assessed from 260/280 and 260/230 absorbance ratios.

For blood germline tissue, the QIAamp DNA Blood Maxi Kit (Qiagen) was used to isolate DNA from 8 to 10 ml of whole blood. The protocol was conducted as written. Specifically, the buffy coat layer was isolated from whole blood by centrifugation. The volume of buffy coat was brought up to 5–10 ml with PBS and treated with Qiagen protease at 70 °C. 100% ethanol was added and the sample was applied to a QIAamp Maxi column and centrifuged. Samples were then washed with buffers AW1 and AW2 and eluted in 1000 μl of Buffer AE.
The Qubit 2.0 Fluorometer (Invitrogen) and the Nanodrop spectrophotometer (Thermo Scientific) were used to assess DNA quality and concentration. Slides mounted with 10 μm FFPE tissue sections were incubated in a thermal oven overnight at 60 °C. Deparaffinization was conducted on slides by three exchanges of xylene followed by washes in descending concentrations of ethanol (100, 95, 70, 50 and 20%) and a final wash in deionized water. Using a double-edge dissecting needle, tumor tissue was scraped into a 1.5 ml microcentrifuge tube containing 150 μl of Buffer PKD and 10 μl of proteinase K. Samples were vortexed to mix, incubated at 56 °C for 15 min and chilled on ice for 3 min. After centrifugation at 20,000 x g for 15 min, the supernatant was transferred to a new 1.5 ml centrifuge tube for RNA purification. The pellet was resuspended in 180 μl of Buffer ATL containing 40 μl of proteinase K, mixed by vortexing and incubated at 56 °C for 1 h followed by incubation at 90 °C for 2 h. After brief centrifugation, samples were treated with 4 μl of RNase A (100 mg/ml) at room temperature. Genomic DNA purification was conducted with automation on the QIAcube using the AllPrep DNA/RNA FFPE Kit and the QIAcube standard protocol, "Purification of DNA and total RNA including small RNAs from FFPE tissue sections, Version 2 (DNA purification)". Samples were eluted in 100 μl of Buffer ATE. Extracted DNAs were quantified using the Qubit 2.0 fluorometer (Invitrogen) and DNA quality was assessed on the Nanodrop by evaluating 260/280 and 260/230 absorbance ratios.

Paired tumor/normal exome libraries were constructed with KAPA Biosystems' Library Preparation Kit following the manufacturer's "with bead" protocol, with Agilent's XT2 adaptors and Agilent's SureSelect Human All Exon V5 + UTR baits. Captured libraries were sequenced on an Illumina HiSeq2500 using paired-end 83x83bp reads. Unmatched tumor libraries were sequenced using the SBS Kit V3 on the Illumina HiSeq. Tumor and normal genome libraries were sequenced with paired 125 bp reads using Illumina HiSeq2500 V4 chemistry running HCS 2.2 controller software and RTA 1.18.61. The five lanes of each library generated 2.819 billion reads for the tumor and 2.968 billion reads for the normal sample.

Alignment and assembly

Pipeline analysis is triggered when data is written from the sequencer to the analysis server in the form of BCL files. Using a queuing system with FAIL/COMPLETED status files, BCL files are converted to FASTQ files (raw sequence) and aligned to the genome using BWA-MEM (version 0.7.8) [19]. BWA-MEM aligns long query sequences against a large reference genome using a backward search with a Burrows–Wheeler transform. We used the reference genome from the 1000 Genomes Project, build hs37d5 with decoy contigs [b37d5], and Ensembl v74 for annotations [20]. For the samples containing molecular barcodes, the barcodes were appended to the RG tags using a custom script. BAM files were sorted with SAMtools (version 0.1.2), then merged and duplicate-marked using Picard MarkDuplicates.jar (version 1.111). Chastity-failed reads were marked in the BAM files through a custom script using the Picard Tools API (version 1.31). Targeted reassembly was performed using ABRA (version 0.94) [21]. These final BAM files were then used to identify genetic variants.

In silico dilutions and downsampling

For the in silico dilutions, BAMs of the tumor and normal samples were subsampled and then merged using samtools. For the downsampling experiment, samtools was used to subsample the tumor BAM.
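To make this concrete, the following is a minimal sketch of how such a dilution series might be scripted. The file names, random seed, and the assumption that the tumor and normal BAMs were sequenced to comparable depth are illustrative; the text above specifies only that samtools subsampling and merging were used.

```python
import subprocess

def subsample(in_bam, frac, out_bam, seed=42):
    """Keep roughly `frac` of the read pairs in `in_bam`.
    samtools view -s takes SEED.FRACTION as a single float."""
    assert 0.0 < frac < 1.0
    subprocess.run(
        ["samtools", "view", "-b", "-s", str(seed + frac), "-o", out_bam, in_bam],
        check=True)

def dilute(tumor_bam, normal_bam, tumor_frac, out_bam):
    """Mix tumor and normal reads so the tumor contributes ~`tumor_frac`
    of the merged file, approximating a lower-purity tumor sample
    (assumes the two input BAMs have similar depth)."""
    subsample(tumor_bam, tumor_frac, "tumor.sub.bam")
    subsample(normal_bam, 1.0 - tumor_frac, "normal.sub.bam")
    subprocess.run(["samtools", "merge", "-f", out_bam,
                    "tumor.sub.bam", "normal.sub.bam"], check=True)

# Example: an 80/20 tumor/normal mixture
# dilute("tumor.bam", "normal.bam", 0.8, "dilution_80.bam")
```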
Benchmark variant calling

Germline SNVs and INDELs were identified in the constitutional sample using GATK HaplotypeCaller (version 3.1-1) [22], samtools (version 1.2) [23], and FreeBayes (v0.9.21) [24]. Somatic SNVs and INDELs were identified using three different somatic variant callers: Seurat [25], Strelka [26], and MuTect [27]. After normalizing INDELs with vt [28], a custom script was used to merge the VCFs from the three callers. A set of ten constitutional samples was pooled as an unmatched reference sample for the somatic SNV and INDEL callers. Agreement of all callers was required to define a true variant (Additional file 2: Table S1). Positions with discordant calls were considered unknown and excluded from sensitivity and precision calculations.

Germline variant filtering

Known germline variants were identified by their presence in dbSNP (build 146) [29]. Since dbSNP does contain somatic variants, we used the "allele origin" field to exclude those variants from germline filtering. Variants with an allele origin listed as germline or unspecified were considered known germline variants, while those listed as somatic, both, or not present in dbSNP were considered potential somatic or private germline variants. For the filtering approach, variants called by all somatic callers in the pooled reference comparison, and not filtered out as known germline variants, were considered somatic calls.

We simulated somatic mutations in order to examine how read depth, tumor content, and copy number affect the power to detect somatic variants (a runnable sketch appears below, after the caller overview). The read depth of each mutation was drawn from a lognormal distribution, where $\overline{R}_T$ is the mean target coverage. The standard deviation was derived by fitting a lognormal distribution to the read depth distributions of several tumor samples.

$$ R_T \sim \mathrm{lognormal}\left(\overline{R}_T,\ 1.1\right) $$

The expected allele frequency of a somatic variant ($\phi^S$) is a function of the fraction of cells in the sample containing the variant ($f$), the total copy number ($N$), and the minor allele copy number ($M$):

$$ \phi^S = \frac{f\,(N - M)}{f\,N + 2\,(1 - f)} \qquad (1) $$

The read depth of the B allele, $R_B$, was drawn from a binomial distribution:

$$ R_B \sim \mathrm{binomial}\left(R_T,\ \phi^S\right) $$

Caller overview

The variant calling approach consists of four main steps. The first step involves analysis of a set of unmatched controls in order to calculate position quality scores and obtain average exon read depths for the copy number analysis. The second step involves calculating position quality scores of candidate variant positions in the tumor sample, taking into account both the quality scores from the unmatched controls and quality metrics from the tumor sample such as mapping quality scores and strand bias. The third step involves estimating the allelic copy number and clonal sample fractions. The model assumes that, due to clonal expansions, subsets of variants will occur in the same fraction of the cells in the sample. There is a fixed number of these subsets (determined by the user; 3 was used in the results presented here) that share the same sample fraction, and each variant (mutation or copy number) is assumed to belong to one of these subsets. The user may select any positive integer number of subsets to find. In an unrelated training dataset, we found that three works well for most samples.
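Before continuing with the estimation details of step three, here is a minimal sketch of the read-depth simulation described earlier in this section, assuming numpy. The conversion of the mean target coverage into numpy's log-scale lognormal parameters and the σ = 1.1 spread are illustrative assumptions; the authors fit the spread to real tumor read-depth distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_somatic_af(f, N, M):
    """Eq. 1: expected allele fraction of a somatic variant carried by a
    fraction f of cells, on the major allele of a segment with total
    copy number N and minor allele copy number M."""
    return f * (N - M) / (f * N + 2.0 * (1.0 - f))

def simulate_read_counts(mean_coverage, f, N, M, n=1000, sigma=1.1):
    """Draw total depths (lognormal) and B-allele depths (binomial) for
    n simulated somatic variants. sigma is a stand-in for the log-scale
    spread fitted to real tumor samples."""
    mu = np.log(mean_coverage) - sigma**2 / 2.0  # so E[R_T] = mean_coverage
    r_t = rng.lognormal(mu, sigma, size=n).round().astype(int).clip(min=1)
    r_b = rng.binomial(r_t, expected_somatic_af(f, N, M))
    return r_t, r_b

# Example: diploid region (N=2, M=1), 50% tumor fraction, 200X mean coverage
r_t, r_b = simulate_read_counts(200, f=0.5, N=2, M=1)
print((r_b / r_t)[:5])  # simulated variant allele fractions
```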
For step three, an expectation maximization approach is used to find the clonal sample fractions that best explain the data, assign the most likely copy number state to each segment, and assign a sample fraction to each variant. The fourth step involves finding the posterior probability that each candidate variant is somatic, germline heterozygous, or germline homozygous, based on the expected allelic fractions for somatic and germline variants, which take into account the allelic copy number and clonal sample fractions. The somatic and heterozygous germline variants are then used as input to step three. The caller iterates between step three and step four until the result converges (Additional file 3: Figure S1).

Position quality scores from unmatched controls

A quality score for each position in the exome is determined based on the assumption that positions that do not appear diploid in control samples are unreliable. The conditional probabilities of the data ($D$) given that the position is homozygous ($P(D \mid G_{AA})$), heterozygous ($P(D \mid G_{AB})$), or poorly mapped ($P(D \mid U)$) are calculated based on the number of reads supporting the B allele ($R_B$), the total number of reads ($R_T$), the mean base quality of the B allele ($Q_B^b$), and the mean mapping quality of the A or B alleles ($Q_A^m$ or $Q_B^m$). The read depths of the B allele are assumed to follow a binomial distribution with $R_B$ successes, $R_T$ trials, and a probability of success dependent on the genotype. For homozygous positions, the probability of observing reads supporting the B allele depends on the mean B allele base quality, and it is 0.5 for heterozygous positions.

$$ P\left(D \mid G_{AA}\right) = \mathrm{binomial}_{pmf}\left(R_B,\ R_T,\ 10^{\frac{-Q_B^b}{10}}\right) $$

$$ P\left(D \mid G_{AB}\right) = \mathrm{binomial}_{pmf}\left(R_B,\ R_T,\ 0.5\right) $$

$$ P\left(D \mid U\right) = 10^{\frac{-\min\left(Q_A^m,\ Q_B^m\right)}{10}} $$

The prior probabilities of homozygous ($\pi_{AA}$) or heterozygous ($\pi_{AB}$) are based on the population allele frequencies ($F_A$ and $F_B$), assuming Hardy–Weinberg equilibrium, and the prior probability that the position is unreliable is a constant ($\pi_U$) reflecting the percentage of the exome expected to be mappable [30]. The posterior probability that the position is unreliable given the data is:

$$ P\left(U \mid D\right) = \frac{P\left(D \mid U\right)\,\pi_U}{P\left(D \mid G_{AA}\right)\,\pi_{AA} + P\left(D \mid G_{AB}\right)\,\pi_{AB} + P\left(D \mid U\right)\,\pi_U} $$

The mean posterior probability that the position is unreliable is calculated across the unmatched controls for each position and then transformed to a Phred-like score (a minimal sketch of this computation appears at the end of this section).

Tumor quality metrics and filtering

Sixteen quality metrics are calculated at each position (see Table 2). Each metric has a PASS threshold and a REJECT threshold (see Additional file 4: Table S2). Each position that passes all of the PASS thresholds is assigned to the PASS training group, and each position that meets any of the REJECT criteria is assigned to the REJECT training group. A quadratic discriminant model is fit to the SNVs and INDELs separately. All of the positions are classified according to the respective quadratic discriminant model, and the posterior probability of belonging to the PASS group is used to filter the positions on quality.
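Returning to the position quality scores from unmatched controls: below is a minimal sketch of the mixture posterior, assuming scipy. The Hardy–Weinberg priors are written exactly as in the equations above; how they are normalized against the constant π_U is not specified in the text, so the π_U value here is a placeholder.

```python
from scipy.stats import binom

def p_unreliable(r_b, r_t, q_b_base, q_a_map, q_b_map, f_a, f_b, pi_u=0.05):
    """Posterior probability that a position is unreliable, given B-allele
    depth r_b, total depth r_t, mean B-allele base quality, mean mapping
    qualities, and population allele frequencies f_a, f_b."""
    err_b = 10 ** (-q_b_base / 10.0)               # B-allele base error rate
    p_d_aa = binom.pmf(r_b, r_t, err_b)            # homozygous model
    p_d_ab = binom.pmf(r_b, r_t, 0.5)              # heterozygous model
    p_d_u = 10 ** (-min(q_a_map, q_b_map) / 10.0)  # poorly mapped model
    pi_aa, pi_ab = f_a ** 2, 2 * f_a * f_b         # Hardy-Weinberg priors
    denom = p_d_aa * pi_aa + p_d_ab * pi_ab + p_d_u * pi_u
    return p_d_u * pi_u / denom

# Example: 3 of 60 reads support the B allele at a common SNP (F_B = 0.2)
print(p_unreliable(3, 60, q_b_base=30, q_a_map=60, q_b_map=40, f_a=0.8, f_b=0.2))
```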
Table 2 Model input parameters

Copy number and clonal sample fraction estimation

In order to properly determine the expected allelic fractions of germline and somatic variants, we need to estimate the allele specific copy number of clonal and subclonal copy number events, as well as the sample fractions of the main clonal and subclonal populations. Our model starts with segmented read count data and finds the most likely allelic copy number state for each segment given the observed mean exon read depths, the B-allele frequencies of the germline heterozygous variants in each segment, and the clonal and subclonal sample fractions. Using an expectation maximization approach, we find the clonal and subclonal sample fractions that maximize the probability of the observed mean exon read depths, germline heterozygous B-allele frequencies, and somatic variant B-allele frequencies. We also include prior probabilities of copy number states and sample fractions to favor solutions with more diploid copy number segments and intermediate sample fractions. The model assumes that at most one clone can have a copy number alteration in a given segment, and that the rest of the tumor cells, as well as the normal cells, are diploid in that segment. See Table 3 for key notation used in the model.

Table 3 Other model parameters and variables

The copy number segmentation is performed on the ratio of the tumor to the normal mean exon read depth using the circular binary segmentation implementation in the MATLAB Bioinformatics Toolbox. Somatic and germline variant allelic read counts are required as input to the expectation maximization step. In the initial iteration, likely germline and somatic variants are selected based on database frequencies. In subsequent iterations, the posterior probability described in the "Somatic variant calling" section below (Eq. 4) is used to select the positions considered somatic, and a similar germline posterior probability is used to select germline variants.

In the expectation maximization step, in addition to optimizing the clonal and subclonal sample fractions ($f$), we also optimize a centering parameter ($C$) and a parameter that controls the spread of the allelic fraction distributions ($W$). We aim to maximize the following sum of log likelihoods: 1) the likelihood of the exon read depths given the sample fraction and centering parameters, 2) the likelihood of the heterozygous variant B allele read depths given the sample fractions and $W$ parameter, and 3) the likelihood of the somatic variant B allele read depths given the sample fractions and $W$ parameter. Terms reflecting the prior probabilities of observing the copy number states and sample fractions are also included in the sum to maximize, as shown below. Here $X$ is the number of exons, $X^*$ is the number of copy number altered exons, $V$ is the number of heterozygous germline variants, and $Y$ is the number of somatic variants.
$$ \left\{f, W, C\right\} = \mathrm{argmax}\left(\frac{\sum_{n=1}^{X}\log L\left(C, f \mid R_{Tn}\right) + \sum_{h=1}^{V}\log L\left(f, W \mid R_{Bh}, R_{Th}\right) + \sum_{s=1}^{Y}\log L\left(f, W \mid R_{Bs}, R_{Ts}\right) + \sum_{n=1}^{X}\log \pi\left(N_n\right) + \sum_{n=1}^{X}\log \pi\left(M_n\right) + \sum_{s=1}^{Y}\log \pi\left(f_s\right) + \sum_{n=1}^{X^{*}}\log \pi\left(f_n\right)}{3X + X^{*} + V + 2Y}\right) $$

The likelihood of the exon read depth is modeled as a Poisson distribution with mean $\widehat{R}_{ni}$, which is calculated based on the observed exon read depths in the unmatched control samples (Eq. 2, below):

$$ L\left(C, f_i \mid R_{Tn}\right) = \mathrm{poisson}_{pdf}\left(\mathrm{round}\left(R_{Tn}\right),\ \mathrm{round}\left(\widehat{R}_{ni}\right)\right) $$

The likelihood of the heterozygous position minor allele read counts ($R_{Bh}$) is modeled as a beta binomial distribution with expected allelic fraction $\phi^G$ (Eq. 3):

$$ L\left(f_i, W_i \mid R_{Bh}, R_{Th}\right) = \mathrm{betabinomial}_{pmf}\left(R_{Bh},\ R_{Th},\ W_i\,\phi^G_i,\ W_i\left(1 - \phi^G_i\right)\right) $$

The likelihood of the somatic position minor allele read counts ($R_{Bs}$) is modeled as a beta binomial distribution with expected allelic fraction $\phi^S$ (Eq. 1):

$$ L\left(f, W \mid R_{Bs}, R_{Ts}\right) = \mathrm{betabinomial}_{pmf}\left(R_{Bs},\ R_{Ts},\ \min\left(W_{I_s}\phi^S_{I_s},\ W_{I_s}\left(1 - \phi^S_{I_s}\right)\right),\ \max\left(W_{I_s}\phi^S_{I_s},\ W_{I_s}\left(1 - \phi^S_{I_s}\right)\right)\right) $$

The prior distribution of $f$ is described as a beta distribution parameterized such that $f_\pi$ is the mode of the distribution:

$$ \pi(f) = \mathrm{beta}_{pdf}\left(f,\ \alpha_\pi,\ \frac{\alpha_\pi - 1}{f_\pi} - \alpha_\pi + 2\right) $$

In the expectation step, we first estimate the copy number and minor allele copy number of each segment. For each clone ($i$) and segment ($j$), the copy number ($N_{ij}$) and minor allele copy number ($M_{ij}$) are calculated based on the mean segment read depth in the tumor ($\overline{R_{Tj}}$) and controls ($\overline{R_{Cj}}$), and the mean segment B allele read depth at likely germline heterozygous positions in the tumor ($\overline{R_{HBj}}$):

$$ N_{ij} = \max\left[\mathrm{round}\left(\frac{C\,\frac{\overline{R_{Tj}}}{\overline{R_{Cj}}} - 2\left(1 - f_i\right)}{f_i}\right),\ 0\right] $$

$$ M_{ij} = \mathrm{round}\left(\frac{N_{ij}\,\overline{R_{HBj}}}{f_i\,\overline{R_{Tj}}} - \frac{1 - f_i}{2}\right) $$

Then the expected read depth is calculated for each exon ($n$) and clone:

$$ \widehat{R}_{ni} = \frac{f_i\,R_{Cn}\,N_j + 2\left(1 - f_i\right)R_{Cn}}{C} \qquad (2) $$

The expected allele frequencies at germline heterozygous positions are also determined for each clone:

$$ \phi^G_{ij} = \frac{f_i\,M_{ij}}{N_{ij}} + \frac{1 - f_i}{2} \qquad (3) $$

We find the clone $I_j$ that is most likely to have the copy number alteration in each segment, and then use the expected read depth and expected allele frequency corresponding to the most likely clone for each segment:

$$ I_j = \mathrm{argmax}_i\left(\frac{1}{Q_j}\sum_{n=1}^{Q_j} L\left(C, f_i \mid R_{Tn}\right) + \frac{1}{V_j}\sum_{h=1}^{V_j} L\left(f_i, W_i \mid R_{Bh}, R_{Th}\right)\right) $$

We are then able to find the expected allele frequencies for each somatic variant, for each sample fraction.
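Before turning to the somatic allele fraction cases below, here is a compact sketch of the expectation-step updates just described, for a single segment and clone. The clamping of M to non-negative values is an addition for numerical safety, not something stated in the text.

```python
def copy_number_state(f_i, C, seg_tumor_depth, seg_control_depth, seg_het_b_depth):
    """Estimate the integer copy number N and minor allele copy number M
    for one segment and one clone with sample fraction f_i (the N_ij and
    M_ij equations above); C is the centering parameter."""
    ratio = seg_tumor_depth / seg_control_depth
    n = max(round((C * ratio - 2.0 * (1.0 - f_i)) / f_i), 0)
    m = round(n * seg_het_b_depth / (f_i * seg_tumor_depth) - (1.0 - f_i) / 2.0)
    return n, max(m, 0)  # clamp M >= 0 (our addition)

def expected_exon_depth(f_i, C, exon_control_depth, n):
    """Eq. 2: expected tumor exon depth given copy number n in fraction f_i."""
    return (f_i * exon_control_depth * n + 2.0 * (1.0 - f_i) * exon_control_depth) / C

def expected_het_af(f_i, n, m):
    """Eq. 3: expected germline heterozygous B-allele fraction."""
    return f_i * m / n + (1.0 - f_i) / 2.0
```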
If the variant occurs in a copy-altered segment, then we must take into account whether the variant occurs on the major or minor copy of the chromosomal segment. Here we assume that a variant that occurs in the same sample fraction as the copy number alteration will be on the major allele, while variants in other sample fractions will occur on exactly one chromosomal copy:

$$ \phi^S_{ij} = \begin{cases} \dfrac{f_{I_j}\left(N_j - M_j\right)}{f_{I_j}\,N_j + 2\left(1 - f_{I_j}\right)} & i = I_j \\[2ex] \dfrac{f_i}{f_{I_j}\,N_j + 2\left(1 - f_{I_j}\right)} & i \ne I_j \wedge N_j > 0 \\[2ex] \dfrac{\min\left(1 - f_{I_j},\ f_i\right)}{2} & i \ne I_j \wedge N_j = 0 \end{cases} $$

We can then find the most likely clone for each somatic variant:

$$ I_s = \mathrm{argmax}_i\left(\mathrm{betabinomial}_{pmf}\left(R_{Bs},\ R_{Ts},\ \min\left(W_i\,\phi^S_{ij},\ W_i\left(1 - \phi^S_{ij}\right)\right),\ \max\left(W_i\,\phi^S_{ij},\ W_i\left(1 - \phi^S_{ij}\right)\right)\right)\right) $$

Somatic variant calling

The somatic variant calling model assumes that reads at a given position were generated under one of four mutually exclusive models: somatic mutation ($S$), germline heterozygous ($G_{AB}$), germline homozygous ($G_{AA}$), or other ($O$). We can then calculate the probability of the data given each of the models:

$$ P\left(D \mid S\right) = \max_{i=1\dots K}\ \mathrm{betabinomial}_{pmf}\left(R_B,\ R_T,\ \min\left(W_i\,\phi^S_{ij},\ W_i\left(1 - \phi^S_{ij}\right)\right),\ \max\left(W_i\,\phi^S_{ij},\ W_i\left(1 - \phi^S_{ij}\right)\right)\right) $$

$$ P\left(D \mid G_{AB}\right) = \mathrm{betabinomial}_{pmf}\left(R_B,\ R_T,\ W_i\,\phi^G_{ij},\ W_i\left(1 - \phi^G_{ij}\right)\right) $$

$$ P\left(D \mid G_{AA}\right) = \mathrm{betabinomial}_{pmf}\left(R_A,\ R_T,\ W_i\left(1 - 10^{\frac{-Q_B^b}{10}}\right),\ W_i\,10^{\frac{-Q_B^b}{10}}\right) $$

$$ P\left(D \mid O\right) = \mathrm{betabinomial}_{pmf}\left(R_T - R_A - R_B,\ R_T,\ W_i\left(1 - 10^{\frac{-Q_A^b}{10}}\right),\ W_i\,10^{\frac{-Q_A^b}{10}}\right) $$

The prior probability of a somatic mutation is based on the count of mutations at that position in COSMIC ($\omega$), and the prior probabilities of the germline genotypes are based on population allele frequencies ($F_A$, $F_B$):

$$ \pi_S = \rho\left(\omega + 1\right) $$

$$ \pi_{AB} = 2\,F_A\,F_B\left(1 - \pi_S\right) $$

$$ \pi_{AA} = F_A^2\left(1 - \pi_S\right) $$

$$ \pi_O = 1 - \pi_S - \pi_{AB} - \pi_{AA} $$

We can then calculate the posterior probability that the mutation is somatic:

$$ P\left(S \mid D\right) = \frac{P\left(D \mid S\right)\,\pi_S}{P\left(D \mid G_{AA}\right)\,\pi_{AA} + P\left(D \mid G_{AB}\right)\,\pi_{AB} + P\left(D \mid S\right)\,\pi_S + P\left(D \mid O\right)\,\pi_O} \qquad (4) $$
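The following is a minimal sketch of the four-model posterior (Eq. 4), assuming scipy's beta-binomial. The prior scaling ρ is a placeholder value; the text defines π_S only up to this COSMIC-count-based form.

```python
from scipy.stats import betabinom

def p_somatic(r_a, r_b, r_t, phi_s_by_clone, phi_g, W, q_a, q_b, f_a, f_b,
              omega, rho=1e-6):
    """Posterior probability that a variant is somatic under the four
    mutually exclusive models: somatic, germline het, germline hom, other.
    phi_s_by_clone: expected somatic allele fraction, one per clone."""
    ab = lambda p: (W * min(p, 1 - p), W * max(p, 1 - p))  # ordered shape params
    p_s = max(betabinom.pmf(r_b, r_t, *ab(phi)) for phi in phi_s_by_clone)
    p_ab = betabinom.pmf(r_b, r_t, W * phi_g, W * (1 - phi_g))
    e_b = 10 ** (-q_b / 10.0)  # B-allele base error rate
    p_aa = betabinom.pmf(r_a, r_t, W * (1 - e_b), W * e_b)
    e_a = 10 ** (-q_a / 10.0)
    p_o = betabinom.pmf(r_t - r_a - r_b, r_t, W * (1 - e_a), W * e_a)
    pi_s = rho * (omega + 1)          # COSMIC-count-informed somatic prior
    pi_ab = 2 * f_a * f_b * (1 - pi_s)
    pi_aa = f_a ** 2 * (1 - pi_s)
    pi_o = 1 - pi_s - pi_ab - pi_aa
    num = p_s * pi_s
    return num / (p_aa * pi_aa + p_ab * pi_ab + num + p_o * pi_o)

# Example: 12 of 100 reads support the B allele in a 30% pure diploid tumor,
# where the main clone's expected somatic allele fraction is 0.15
print(p_somatic(r_a=88, r_b=12, r_t=100, phi_s_by_clone=[0.15, 0.05],
                phi_g=0.5, W=100, q_a=30, q_b=30, f_a=1.0, f_b=0.0, omega=5))
```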
Current approaches for filtering out germline variants from potential somatic variants typically rely on comparison to databases containing large numbers of germline variants. A recent study has shown increased false positive germline variants in non-Caucasians [3]. We first sought to examine the dependence of private germline variation on ancestry, independent of prior databases, by utilizing 1000 Genomes Phase 3 data on 26 different populations (Additional file 5: Table S3). In this analysis, for each of the 2503 individuals, germline variants were counted as private if they were found in no other individual within Phase 3 of 1000 Genomes. Figure 1a shows the distribution of missense variants unique to each individual across the 26 different cohorts. Populations such as the Bengali from Bangladesh (BEB) show a significant number of private and rare variants due to both the uniqueness of this population within 1000 Genomes and a rapid recent population expansion. In particular, for the BEB population, there is considerable evidence that precisely distinguishing germline and somatic variation would require sequencing significantly greater numbers of individuals than for, say, the Finnish population. Evident from the violin plots, admixed populations such as Americans of African ancestry in the southwest USA (ASW) show a bimodal distribution, indicating a high degree of variability. As expected, some populations, such as the Puerto Rican participants (PUR), show a smaller number of unique variants consistent with their geographic isolation.

Fig. 1 Correlation between ancestry and the effectiveness of using database filters to identify somatic variants. a The distribution and number of variants unique to an individual across 2503 individuals from Phase 3 of 1000 Genomes, plotted as a violin plot for each of 26 different populations (indicated by their 3-letter code) and colored based on their ancestral super population. b The number of private variants for 150 not previously sequenced individuals, after filtering against 1000 Genomes and ExAC, shown by their principal components of common variation (>1%) as a color-coded bubble chart. c The distribution of variants within the groups in the PCA plot in b, for individuals clustering near those of European, Asian, and African ancestry

We extended this analysis by utilizing an additional set of 578 exome sequenced tumor/normal sets not previously included in existing databases. To obtain high quality variant calls, we utilized strict thresholds (marginally increasing false negatives) by excluding genes in highly homologous and paralogous regions, requiring greater than 20X coverage, and requiring that variants were called by two different germline variant callers (GATK HaplotypeCaller and FreeBayes). In this analysis, we limited our attention to single nucleotide variants that have a defined impact on protein transcription or translation and are not found in Phase 3 of 1000 Genomes, ExAC 3.0, ESP6500, or the ARIC 5600 cohort. Overall, we find approximately 100 to 200 private variants per individual. We then overlaid ancestry by PCA on common coding variants ascertained from exome sequencing of the germline. These results are summarized in Fig. 1b and c, where one observes significantly greater challenges in removing germline false positives for many populations of non-European ancestry. First, as shown in Fig. 1b, the number of private putatively functional variants for each individual is plotted in a bubble graph for the 2nd and 3rd principal components to distinguish non-African samples. The number of private variants for each individual is shown both by color and by size on the bubble chart, and the locations of individuals from 1000 Genomes are shown for orientation.
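A minimal sketch of the singleton-counting step used in the 1000 Genomes analysis above, reading a multi-sample VCF with only the standard library. Multi-allelic sites and missing genotypes are handled naively, and the file name is illustrative.

```python
import gzip
from collections import Counter

def private_variant_counts(vcf_path):
    """Count, per sample, variants whose alternate allele is observed in
    exactly one individual in the cohort (the 'private variant'
    definition above, applied naively to the GT field)."""
    counts = Counter()
    samples = []
    with gzip.open(vcf_path, "rt") as fh:
        for line in fh:
            if line.startswith("##"):
                continue
            fields = line.rstrip("\n").split("\t")
            if line.startswith("#CHROM"):
                samples = fields[9:]          # sample IDs from the header
                continue
            carriers = [s for s, gt in zip(samples, fields[9:])
                        if "1" in gt.split(":")[0]]  # carries the first ALT
            if len(carriers) == 1:            # singleton: private to one person
                counts[carriers[0]] += 1
    return counts

# counts = private_variant_counts("chr22.1kg.phase3.vcf.gz")
# print(counts.most_common(5))
```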
Importantly, this resolution of ancestry shows that even within a European ancestry cohort, there are many individuals who will still have a high number of private variants, resulting in a higher number of false positives when detecting somatic variants in tumor only samples by filtering. This effect is seen as we further examine three areas of Fig. 1b by grouping individuals heuristically, sectioning off those that cluster near the European 1000 Genomes individuals (EUR) for one cluster, those near the African 1000 Genomes individuals (AFR) for a second cluster, and those near the East and South Asian individuals in 1000 Genomes (SAS/EAS) for a third. Examining the mean number of missense variants after separating into these three approximate groups, we see that individuals clustering with those of European ancestry have a mean of 101 private missense variants, individuals clustering or admixed with individuals of African ancestry have a mean of 108 missense variants, and those clustering with individuals of South or East Asian ancestry have a mean of 117 missense variants. A one-way ANOVA between these groups shows significant differences in the number of private variants (p < 0.003). These results are consistent both within 1000 Genomes populations and within individuals not included in existing databases, showing that individuals of non-European ancestry have a greater number of private variants per individual. Still, the wide distribution among individuals of Caucasian ancestry indicates that ancestry alone, as driven by common variation, does not explain all of the variation. Admixture with individuals from populations that have recently undergone rapid expansions likely contributes to the considerable heterogeneity within populations. Taken together, these results are consistent with a lack of database diversity being a major, but not the exclusive, factor in the higher number of private variants for individuals of non-European descent. Additional population factors are likely also at play, such as recent population expansions that lead to a larger number of variants present only within the most recent generations.

The observation that most non-hypermutated cancers have approximately the same number of somatic mutations (~100) as private germline variants has significant implications for using tumor only sequencing in precision medicine. In Fig. 1c, the European-ancestry group (group 1) would have an approximate 50% false discovery rate with filtering alone, whereas the African-ancestry group would have over a 70% false discovery rate. Our results suggest filtering-based approaches are substantially more effective for European American individuals. Since using databases to filter germline variants from tumour only somatic variant calls does not appear sufficient, we examined integrating variant allele frequency information.

Framework for considering allele fraction shifts as a function of copy number and clonal heterogeneity

Since somatic variants will only occur in tumor cells, while germline variants occur in all cells, we can leverage differences in allele frequencies to differentiate between somatic and germline variants in impure tumor samples. In solid tumours, stromal cells and infiltrating lymphocytes are typically interspersed among the tumour cells [31, 32]. This normal cell contamination in tumours can be leveraged to differentiate somatic from germline variants.
For example, in a normal diploid region, a heterozygous germline variant should have an allele frequency around 50%, while a heterozygous somatic mutation in an impure tumour should have a lower allele frequency. Still, tumours often have many copy number alterations that will affect the expected allele frequencies of both germline and somatic variants. One approach, implemented by Smith et al., is to fit the distribution of allele frequencies of common germline variants in each segment and detect outliers as likely somatic variants [4]. We chose to explicitly model allelic copy number and clonal sample fractions so that we can examine how these factors impact the power to detect somatic variants. A conceptual overview of our approach is shown in Fig. 2, and a more detailed illustration is provided in Additional file 3: Figure S1.

Fig. 2 Overview of Variant Calling Strategy. After filtering candidate variant positions by quality, an EM approach is used to fit a model of clonal allelic copy number. The plots in the left-most column show example copy number plots for three conditions: the top panels showing high tumor content and moderate coverage, the middle panels high tumor content and high coverage, and the bottom panels moderate tumor content and moderate coverage. A one copy loss is detected in the segment indicated by the blue line in the left-most column. Next, the expected somatic and germline allelic fractions are modeled in the subsequent columns. The center two columns plot the expected allelic fractions for germline variants (grey), the somatic main clone (blue), and somatic subclones (green and red) for diploid regions (left) and one copy loss regions (right). We can see that with high tumor content and moderate coverage, the main clone distribution overlaps with the germline and is difficult to detect in the diploid region, while the red subclone is more difficult to detect in the one copy loss region. Increasing the coverage sharpens the distributions, making the somatic variants easier to detect. In the moderate tumor content sample, all clones are easy to differentiate from germline in the diploid region, but the main clone is hard to detect in the one copy loss region. Using these distributions to calculate conditional probabilities, as well as using 1000 Genomes population frequencies and COSMIC mutation counts to calculate prior probabilities, somatic and germline variants can be called. The right-most columns show plots of the allelic fractions of germline (grey) and somatic variants colored by clone. In these, an encircled '+' indicates the variant was detected and an empty 'o' indicates a false negative. As expected, in the high tumor content, moderate coverage condition, variants in the main clone are detected better in the deleted region, and the number of variants detected increases in the high coverage condition

A key aspect of our strategy is modeling the clonal and subclonal allele specific copy number alterations, which also affect the allele frequencies of both somatic and germline variants. The expected allele frequencies can be calculated (see Methods, Eqs. 1 and 3). Figure 3 illustrates how the expected allele frequencies for somatic and germline variants differ with tumor content for different copy number alterations. As we would expect, the biggest difference in allele frequencies between the somatic and germline variants occurs at the lowest tumor content, regardless of copy number state.
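To see where those differences shrink, the short script below evaluates the somatic (Eq. 1) and germline heterozygous (Eq. 3) expectations across tumor content for a few copy number states; it approximately reproduces the hard-to-detect ranges reported in the next paragraph (around one-third tumor content for copy-neutral LOH and around one-half for a one copy gain).

```python
import numpy as np

def somatic_af(f, N, M):
    """Eq. 1: expected somatic allele fraction for a variant on the
    major allele of an (N, M) segment in tumor fraction f."""
    return f * (N - M) / (f * N + 2 * (1 - f))

def germline_het_af(f, N, M):
    """Eq. 3: expected germline heterozygous B-allele fraction for the
    clone carrying the (N, M) copy number state."""
    return f * M / N + (1 - f) / 2

f = np.linspace(0.05, 0.95, 91)
for N, M in [(2, 1), (2, 0), (1, 0), (3, 1)]:
    gap = np.abs(somatic_af(f, N, M) - germline_het_af(f, N, M))
    print(f"N={N}, M={M}: somatic/germline AFs closest at tumor content "
          f"{f[gap.argmin()]:.2f}")
```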
In a normal diploid region, the difference in allele frequencies monotonically decreases as tumor content increases. However, other copy number states produce points of intermediate tumor content where the allele frequencies of somatic and germline variants are similar. Therefore, we would expect copy number alterations to make it more difficult to detect somatic variants based on allele frequencies.

Fig. 3 Allele Frequencies of Somatic and Germline Variants and Required Coverage for Somatic Variant Detection by Simulation. The top half of each graph shows the expected allele frequency of somatic (blue) and germline (red) variants by tumor content (x-axis) for different copy number states (plot titles; N indicates total copy number, M indicates minor allele copy number). The bottom half of each graph shows the coverage required (indicated by the color) to achieve the power indicated by the y-label. Black squares indicate that the detection power was not achieved even at the highest coverage evaluated. We can see that the closer the somatic and germline allele frequencies, the more difficult it is to detect somatic variants

We would expect that at higher read depth we would be able to measure allele frequency more precisely, and therefore be better able to detect somatic variants. The read depth required should depend on the tumor content and the copy number state. We used simulations to examine how the power to detect somatic variants depends on tumor content, mean target coverage, and copy number state. We simulated somatic variants in eight different copy number states, with sample fractions from 5% to 95% and mean target coverage from 50X to 3200X, with 1000 variants for each condition. Then we found the percentage that would be called somatic using the default thresholds. We can see in Fig. 3 that the read depth required depends greatly on the sample fraction and copy number state. In a diploid region (N = 2, M = 1), we would only need 200X mean target coverage to detect almost 80% of the somatic variants at a sample fraction of 50%. However, we would need 800X mean target coverage to detect a similar proportion of variants at a sample fraction of 75%, and 3200X coverage at a sample fraction of 85%. Copy number alterations reduce the power to detect somatic variants in specific ranges of sample fractions. For example, copy-neutral loss of heterozygosity (LOH) (N = 2, M = 0) makes it very hard to detect variants with a sample fraction around 35%–40%, while a one copy gain (N = 3, M = 1) makes it very hard to detect variants with a sample fraction around 50%–55%.

Evaluation dataset

A set of nine samples, consisting of two glioblastoma samples and seven triple negative breast cancer samples, was used to evaluate the tumor only caller. These included four African Americans, three European Americans, one Ghanaian, and one Hispanic. One of the glioblastoma samples was sequenced to 2101X mean target coverage for the downsampling and in silico dilution experiments. The other samples were sequenced to 454X–1012X mean target coverage. We used a consensus calling approach to define true somatic and true germline variants, as consensus calling typically outperforms any individual caller [33]. Using the strict criterion of detection by three out of three somatic variant callers (or two out of two for indels), we found that each sample had an average of 129 somatic mutations (range 75–196).
We also used three out of three consensus calling to define germline variants, and considered private those variants that did not appear in dbSNP. By these strict criteria, we found an average of 224 (range 126–319) private germline variants per sample.

Variant quality filtering

Strict quality filtering is required to exclude variants that are not mapped cleanly, because mapping artifacts may shift the measured allele frequency and result in an incorrect classification. We therefore adopt a two-tiered approach to variant quality filtering. About 80% of somatic variants and 78% of private germline variants have sufficient quality to call (Additional file 6: Figure S2). A much lower percentage of indels meet the strict quality criteria, as they are much more difficult to map. We find that increasing coverage increases the number of somatic and private germline variants that pass the strict quality criteria (Additional file 7: Figure S3), while changing the tumor content has little effect (Additional file 8: Figure S4).

Sample fraction and copy number calling

In the downsampling and in silico dilution experiments, we find that the main copy number events are consistently called, except at the lowest dilution (Additional file 9: Figure S5). For the one copy loss and LOH events, the sample fraction decreases linearly with the dilution, as expected. In our approach, we observe some ambiguity between calling a segment a one copy gain in the highest sample fraction and a higher level gain in a lower sample fraction. Though too small to see in the plot, there is also a ~0.2 megabase deletion on chromosome 9 encompassing CDKN2A that is detected as a two copy loss in all but the lowest dilution. The TNBC samples show a large number of gains and losses (Additional file 10: Figure S6). The large number of copy number alterations is evidence of genome instability, which is typical of triple negative breast cancer [34].

Somatic variant detection sensitivity

Only polymorphic variants appearing in dbSNP are considered false negatives for the filtering approach, since the same callers were used to define the truth set; the sensitivity of the filtering approach therefore does not represent the sensitivity of the callers. There were an average of 16 somatic variants found in dbSNP per sample (range 6–28). LumosVar's sensitivity varies greatly between samples (Fig. 4). We expect the power to detect somatic variants to depend on the sample fraction, copy number states, and read depth. Using the sample fraction and copy number state assigned to each variant, we simulated somatic variants and determined the proportion of simulated somatic variants that would be called somatic by our model (Fig. 5). These simulations predict very accurately the proportion of somatic variants that we are able to detect, which indicates that the samples with poor sensitivity have copy number states and sample fractions that are not conducive to detecting somatic variants by allele frequency. As we would expect, the sensitivity to detect somatic variants increases with coverage (Fig. 4). Also consistent with expectations, we see that the detection sensitivity is best at intermediate tumor content, where the somatic variants generally have the biggest difference in expected allele frequency from the germline variants (Fig. 4). We also find that we can adjust the caller threshold to tune the tradeoff between sensitivity and precision (Additional file 11: Figure S7).
Fig. 4 Comparison of Calls of True Somatic Variants and True Values of Variants Called Somatic. The graphs on the left show the calls of LumosVar (bottom bar in each pair) compared to the filtering approach (top bar in each pair) on true somatic variants. The size of the yellow portion of each bar indicates the number of true somatic variants falsely called germline heterozygous or homozygous, the grey represents true somatic variants that were filtered on quality or not detected as variants, and the blue represents true positive somatic calls. We can see that the filtering approach has better sensitivity (mean TPR 87%, range 78%–96%) compared to the tumor only caller (mean TPR 52%, range 27%–62%). The graphs on the right show the number of somatic calls by LumosVar (bottom bar in each pair) compared to the filtering approach (top bar in each pair) that are truly private germline heterozygous (red), germline heterozygous database variants (pink), homozygous (grey), or truly somatic (blue). We can see that the tumor only caller has better precision (mean PPV 75%, range 56%–89%) compared to the filtering approach (mean PPV 35%, range 19%–55%). The top pair of panels shows the comparison for eight of the nine evaluation samples. The middle pair of panels shows the comparison for an in silico dilution series performed using the ninth evaluation sample (GBMEA1), while the bottom pair shows a down-sampling experiment on the same sample

Fig. 5 Simulations were used to predict the power to detect each true somatic variant, assuming the sample fraction and copy number were correctly called. For each clone and each sample, the true positive rate is plotted against the power predicted from the simulations. The size of each bubble is proportional to the number of true positive variants in the clone, the color of the point represents the sample fraction of the clone, and the number indicates the sample number. As expected, the highest sample fraction clone has the worst predicted and observed sensitivity. The graph on the left includes all of the true somatic variants, and the graph on the right only includes those that pass the quality filters. We can see that the predicted power correlates well with the measured sensitivity, particularly when the low quality variants are excluded

Somatic variant detection precision

All of the private germline variants are called as false positives by the filtering approach. Because the number of private germline variants varies by ancestry, the positive predictive value of the filtering approach also depends on ancestry. For the samples of European American ancestry, the positive predictive value of the filtering approach ranges from 35 to 62%, while for the samples of Hispanic, African American, or African ancestry it ranges from 20 to 40%. LumosVar is able to correctly classify most of the private germline variants and has a much better positive predictive value (range 67–91%). While there still are some false positive germline variants, many are found in dbSNP (Fig. 4). Combining the filtering approach with the tumor only caller could further improve the positive predictive value.

Private germline variants are difficult to distinguish from somatic variants when a constitutional sample from the same individual is not available. These variants are not present in polymorphism databases, so they cannot be easily filtered out. Of critical importance, our results show that the number of private variants is dependent on ancestry.
Underlying these differences are the under-sampling of some populations within databases, along with population-specific characteristics such as admixture or recent rapid expansions. There are often logistical reasons why only a tissue sample may be available, but tumor tissue is often a mixture of tumor and surrounding stromal tissue. We demonstrated that a model leveraging deep sequencing to measure differences in allele frequencies between somatic and germline variants can be used to call somatic mutations with greater specificity than using population variant frequencies alone. We find that our allele frequency based strategy can reduce the number of false positives by two-thirds. However, the sensitivity of the allele frequency strategy is highly dependent on the tumor content and the copy number alteration profile of the sample, as well as the sequencing depth. Deep sequencing is important for these models: a minimum sequencing depth of 200–400X is needed, with even higher depth required for samples with high tumor content. We believe that the Bayesian calling strategy described here, along with appropriate sample collection and sequencing depth, will enable more accurate detection of somatic variants when germline samples are not available.

The intuitive next question is: what is the accuracy of tumor only sequencing? It turns out that accuracy is not the most informative statistic, since one is assured 99%+ accuracy due to the millions of true negatives, even if one reports zero variants in a hypermutated sample. Positive predictive value is a natural tool, but it brings forth a different problem. In the case of tumor only sequencing, the positive predictive value for variants called somatic will depend on the number of true mutations. The number of mutations, or mutational burden, varies by cancer type. Hypermutated phenotypes, often seen in melanomas, bladder, and lung cancers, can have a mutational burden 100-times higher than that seen in lymphomas. Recent data shows the importance of mutational burden as it correlates with the response to immune checkpoint blockade therapies [35]. Given the dependence of mutational burden on cancer type and the relationship between tumor only false positives and ancestry, a more complex picture appears. In some cases, ancestry and cancer type can stack in favor of a low false discovery rate. For example, cutaneous melanomas have a higher mutational burden and are more frequently found in individuals of European ancestry. However, acral melanomas have a low mutational burden and are much more frequently found in individuals of non-European descent (as compared to cutaneous melanomas). In this example, tumor only sequencing of a melanoma in a person of non-European descent would show a very low positive predictive value, while that of a European American would have a higher positive predictive value.

While the goal of the present study was to evaluate the benefits and limitations of leveraging allele frequencies to distinguish somatic and germline variants in unmatched tumor samples, in the process we have developed a tool that we have made available to the research community. We have clearly demonstrated that LumosVar has improved positive predictive value in calling somatic variants compared to database filtering, which is the most commonly used approach with unmatched tumor samples. However, the sensitivity of LumosVar is clearly too low for us to advocate its use in a clinical setting.
While future work could yield some improvements in sensitivity (such as through optimizing variant quality filtering), we believe that there are inherent limitations to using allelic fractions to distinguish somatic and germline variants, limitations that are clearly demonstrated by our simulations. When high sensitivity and specificity are required in a clinical setting, comparative analysis with a matched germline sample remains the ideal choice. When analyzing archival samples in a research setting, we believe LumosVar would be of great utility.

In addition to calling somatic and germline variants, LumosVar also calls allele specific copy number and assigns both mutations and copy number alterations to clonal sample fractions. There are a number of other tools that call allele specific copy number, such as ExomeCNV [36] and Sequenza [37], but these require a tumor/normal pair and do not identify subclones. There are also a number of tools that detect subclonal populations from mutations, such as PyClone [38], or from copy number, such as THetA [39]. A thorough comparison of LumosVar to these other approaches is beyond the scope of this work, but future work will focus on validating and benchmarking the copy number and clonality functions of LumosVar.

Overall, our results provide insight into how experimental design and sample characteristics can have a large impact on the sensitivity of allele frequency based tumor only somatic variant detection. Moderate tumor content is optimal and could be achieved through strategic sectioning of FFPE blocks. High sequencing depth is also critical to sensitivity, and as the cost of sequencing continues to decline, high depth sequencing is becoming more common practice. The researcher cannot control the copy number alterations of a tumor, but can be aware that cancer types that stray farther from diploid will be less amenable to this approach. The copy number model assumes that only one type of copy number event may occur in a given segment, an assumption that may sometimes be violated, particularly in cancer types with highly unstable genomes. It is possible that the inability of our model to completely capture the complexity of the triple negative breast cancer copy number profiles in our evaluation dataset contributed to some of our variant misclassification. However, our impression from visual inspection is that misclassification due to incorrect copy number calls stems from uncertainty in the placement of segmentation boundaries rather than from incorrect assignment of a copy number state within a segment. Since different copy number alterations each have a range of tumor content where the somatic and germline variants are most difficult to distinguish, it could be valuable to sequence different sections of the same tumor that may have different tumor content. We intend to extend our model to leverage multiple samples from the same patient.

The number of germline false positives detected in tumor only sequencing is dependent on the individual's ancestry. Our Bayesian framework, which integrates modeling of copy number and clonality, is able to greatly reduce the number of germline false positives. The sensitivity of our approach depends on tumor purity, coverage, and copy number alterations. With appropriate experimental design, our approach has the potential to be extremely useful for somatic variant calling when matched normal tissue is not available, particularly for individuals of non-European ancestry.
Availability and requirements

LumosVar requires Perl, Samtools, htslib, and the MATLAB runtime. The main inputs are BAM files, which may be generated by BWA. LumosVar is available for download at https://github.com/tgen/LumosVar.

Abbreviations

ACMG: American College of Medical Genetics and Genomics
ARIC: Atherosclerosis Risk In Communities
COSMIC: Catalogue of Somatic Mutations in Cancer
dbSNP: The Single Nucleotide Polymorphism database
ESP: NHLBI Exome Sequencing Project
ExAC: Exome Aggregation Consortium
FFPE: Formalin Fixed Paraffin Embedded
INDEL: Insertion or deletion
LOH: Loss of Heterozygosity
SNV: Single Nucleotide Variant
WIRB: Western Institutional Review Board

Also see Additional file 5: Table S3 for 1000 Genomes population codes.

References

1. Raymond VM, Gray SW, Roychowdhury S, Joffe S, Chinnaiyan AM, Parsons DW, et al. Germline findings in tumor-only sequencing: points to consider for clinicians and laboratories. J Natl Cancer Inst. 2016;108:djv351.
2. Jones S, Anagnostou V, Lytle K, Parpart-Li S, Nesselbush M, Riley DR, et al. Personalized genomic analyses for cancer mutation discovery and interpretation. Sci Transl Med. 2015;7:283ra53.
3. Garofalo A, Sholl L, Reardon B, Taylor-Weiner A, Amin-Mansour A, Miao D, et al. The impact of tumor profiling approaches and genomic data strategies for cancer precision medicine. Genome Med. 2016;8:79.
4. Smith KS, Yadav VK, Pei S, Pollyea DA, Jordan CT, De S. SomVarIUS: somatic variant identification from unpaired tissue samples. Bioinformatics. 2015:btv685.
5. The 1000 Genomes Project Consortium. An integrated map of genetic variation from 1,092 human genomes. Nature. 2012;491:56–65.
6. Kurian AW, Hare EE, Mills MA, Kingham KE, McPherson L, Whittemore AS, et al. Clinical evaluation of a multiple-gene sequencing panel for hereditary cancer risk assessment. J Clin Oncol. 2014;32:2001–9.
7. Richards CS, Bale S, Bellissimo DB, Das S, Grody WW, Hegde MR, et al. ACMG recommendations for standards for interpretation and reporting of sequence variations: revisions 2007. Genet Med. 2008;10:294–300.
8. Vogelstein B, Papadopoulos N, Velculescu VE, Zhou S, Diaz LA, Kinzler KW. Cancer genome landscapes. Science. 2013;339:1546–58.
9. Cheng DT, Mitchell TN, Zehir A, Shah RH, Benayed R, Syed A, et al. Memorial Sloan Kettering-integrated mutation profiling of actionable cancer targets (MSK-IMPACT): a hybridization capture-based next-generation sequencing clinical assay for solid tumor molecular oncology. J Mol Diagn. 2015;17:251–64.
10. Meric-Bernstam F, Brusco L, Daniels M, Wathoo C, Bailey AM, Strong L, et al. Incidental germline variants in 1000 advanced cancers on a prospective somatic genomic profiling protocol. Ann Oncol. 2016;27:795–800.
11. Leiserson MDM, Vandin F, Wu H-T, Dobson JR, Eldridge JV, Thomas JL, et al. Pan-cancer network analysis identifies combinations of rare somatic mutations across pathways and protein complexes. Nat Genet. 2014;47:106–14.
12. Khurana E, Fu Y, Chakravarty D, Demichelis F, Rubin MA, Gerstein M. Role of non-coding sequence variants in cancer. Nat Rev Genet. 2016;17:93–108.
13. Piraino SW, Furney SJ. Beyond the exome: the role of non-coding somatic mutations in cancer. Ann Oncol. 2016;27:240–8.
14. Vinagre J, Almeida A, Pópulo H, Batista R, Lyra J, Pinto V, et al. Frequency of TERT promoter mutations in human cancers. Nat Commun. 2013;4:2185.
15. Lawrence MS, Stojanov P, Polak P, Kryukov GV, Cibulskis K, Sivachenko A, et al. Mutational heterogeneity in cancer and the search for new cancer genes. Nature. 2013;499:214–8.
16. Fu Y, Liu Z, Lou S, Bedford J, Mu XJ, Yip KY, et al.
FunSeq2: a framework for prioritizing noncoding regulatory variants in cancer. Genome Biol. 2014;15. Available from: https://www.ncbi.nlm.nih.gov/pubmed/25273974.
17. Kilpivaara O, Aaltonen LA. Diagnostic cancer genome sequencing and the contribution of germline variants. Science. 2013;339:1559–62.
18. Li J, Poursat M-A, Drubay D, Motz A, Saci Z, Morillon A, et al. A dual model for prioritizing cancer mutations in the non-coding genome based on germline and somatic events. PLoS Comput Biol. 2015;11:e1004583.
19. Li H, Durbin R. Fast and accurate short read alignment with Burrows–Wheeler transform. Bioinformatics. 2009;25:1754–60.
20. Flicek P, Ahmed I, Amode MR, Barrell D, Beal K, Brent S, et al. Ensembl 2013. Nucleic Acids Res. 2013;41:D48–55.
21. Mose LE, Wilkerson MD, Hayes DN, Perou CM, Parker JS. ABRA: improved coding indel detection via assembly-based realignment. Bioinformatics. 2014;30:2813–5.
22. DePristo MA, Banks E, Poplin R, Garimella KV, Maguire JR, Hartl C, et al. A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat Genet. 2011;43:491–8.
23. Li H. A statistical framework for SNP calling, mutation discovery, association mapping and population genetical parameter estimation from sequencing data. Bioinformatics. 2011;27:2987–93.
24. Garrison E, Marth G. Haplotype-based variant detection from short-read sequencing. arXiv preprint arXiv:1207.3907. 2012. Available from: http://arxiv.org/abs/1207.3907.
25. Christoforides A, Carpten JD, Weiss GJ, Demeure MJ, Hoff DDV, Craig DW. Identification of somatic mutations in cancer through Bayesian-based analysis of sequenced genome pairs. BMC Genomics. 2013;14:302.
26. Saunders CT, Wong WSW, Swamy S, Becq J, Murray LJ, Cheetham RK. Strelka: accurate somatic small-variant calling from sequenced tumor–normal sample pairs. Bioinformatics. 2012;28:1811–7.
27. Cibulskis K, Lawrence MS, Carter SL, Sivachenko A, Jaffe D, Sougnez C, et al. Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nat Biotechnol. 2013;31:213–9.
28. Tan A, Abecasis GR, Kang HM. Unified representation of genetic variants. Bioinformatics. 2015;31:2202–4.
29. Sherry ST, Ward MH, Kholodov M, Baker J, Phan L, Smigielski EM, et al. dbSNP: the NCBI database of genetic variation. Nucleic Acids Res. 2001;29:308–11.
30. Lee H, Schatz MC. Genomic dark matter: the reliability of short read mapping illustrated by the genome mappability score. Bioinformatics. 2012;28:2097–105.
31. Pietras K, Östman A. Hallmarks of cancer: interactions with the tumor stroma. Exp Cell Res. 2010;316:1324–31.
32. Aran D, Sirota M, Butte AJ. Systematic pan-cancer analysis of tumour purity. Nat Commun. 2015;6:8971.
33. Ewing AD, Houlahan KE, Hu Y, Ellrott K, Caloian C, Yamaguchi TN, et al. Combining tumor genome simulation with crowdsourcing to benchmark somatic single-nucleotide-variant detection. Nat Methods. 2015;12:623–30.
34. Kwei KA, Kung Y, Salari K, Holcomb IN, Pollack JR. Genomic instability in breast cancer: pathogenesis and clinical implications. Mol Oncol. 2010;4:255.
35. Allen EMV, Miao D, Schilling B, Shukla SA, Blank C, Zimmer L, et al. Genomic correlates of response to CTLA-4 blockade in metastatic melanoma. Science. 2015;350:207–11.
36. Sathirapongsasuti JF, Lee H, Horst BAJ, Brunner G, Cochran AJ, Binder S, et al. Exome sequencing-based copy-number variation and loss of heterozygosity detection: ExomeCNV. Bioinformatics. 2011;27:2648–54.
37. Favero F, Joshi T, Marquard AM, Birkbak NJ, Krzystanek M, Li Q, et al.
Sequenza: allele-specific copy number and mutation profiles from tumor sequencing data. Ann Oncol. 2015;26:64–70.
38. Roth A, Khattra J, Yap D, Wan A, Laks E, Biele J, et al. PyClone: statistical inference of clonal population structure in cancer. Nat Methods. 2014;11:396–8.
39. Oesper L, Mahmoody A, Raphael BJ. THetA: inferring intra-tumor heterogeneity from high-throughput DNA sequencing data. Genome Biol. 2013;14:R80.

Acknowledgements

The authors would like to thank Dr. Sara Nasser, Austin Christofferson, and Tyler Izatt for help with data analysis, and Dr. Jeff Trent and Dr. Nicholas Schork for helpful discussion. The authors are grateful to the Ben and Catherine Ivy Foundation and the Multiple Myeloma Research Foundation.

Availability of data and materials

Data is being shared based on participants' informed consent and in accordance with the NIH Genomic Data Sharing Policy. Genotype data is available through dbGaP phs000748.v1.p1, and sequencing data for the dilution series is being shared as part of a separate dbGaP submission.

Author information

Center for Translational Innovation, Translational Genomics Research Institute, Phoenix, AZ, USA: Rebecca F. Halperin & Sara Byron
Department of Translational Genomics, University of Southern California, Los Angeles, CA, USA: John D. Carpten, Zarko Manojlovic & David W. Craig
Integrated Cancer Division, Translational Genomics Research Institute, Phoenix, AZ, USA: Jessica Aldrich, Jonathan Keats, Megan Russell, Irene Cherni & Seungchan Kim
Neurogenomics Division, Translational Genomics Research Institute, Phoenix, AZ, USA: Winnie S. Liang, Daniel Enriquez, Ana Claasen & David W. Craig
Henry Ford Health Systems, Detroit, MI, USA: Lisa A. Newman
University of Michigan, Ann Arbor, MI, USA: Max S. Wicha & Evelyn Jaigge
Komfo Anokye Teaching Hospital, Kumasi, Ghana: Baffour Awuah & Joseph Oppong

Authors' contributions

DWC, RFH, and JDC conceived and designed the study, aided analysis and interpretation of data, and were involved in drafting the manuscript. RFH implemented the software. SK aided in theoretical formulation and drafting of the manuscript. WSL, JK, AC, IC, BA, JO, MW, LN, and EJ were involved in generation of data and drafting of the manuscript. ZM, JA, SB, MR, and DE were involved in analysis of data and drafting of the manuscript. All authors have read and approved the final manuscript for publication.

Correspondence to John D. Carpten or David W. Craig.

Ethics approval and consent to participate

Only already existing de-identified data and biospecimens (both whole-blood and "fresh-frozen" tumor) previously collected under IRB approved studies (WIRB #20100721; WIRB #20141201; and WIRB #20031485) were used for this research.

Additional files

Additional file 1: Figure S8. Performance by Variant Type. The graphs on the left show the calls of LumosVar (bottom bar in each pair) compared to the filtering approach (top bar in each pair) on true somatic variants. The size of the yellow portion of each bar indicates the number of true somatic variants falsely called germline heterozygous or homozygous, the grey represents true somatic variants that were filtered on quality or not detected as variants, and the blue represents true positive somatic calls.
The graphs on the right show the number of somatic calls by LumosVar (bottom bar in pair) compared to the filtering approach (top bar in pair) that are truly germline private heterozygous (red), germline heterozygous database variants (pink), homozygous (grey) or truly somatic (blue). We can see that the proportion of false positives in the filtering approach is much higher for non-coding variants than for other variant types. (PNG 47 kb) Definition of True Variants. Describes the criteria for counting a variant as a true variant. Germline variants were called by HaplotypeCaller, samtools, and FreeBayes. Somatic variants were called by MuTect, Seurat, and Strelka. (DOCX 15 kb) Somatic Variant Calling Workflow. Illustrates a detailed workflow of the somatic variant calling process. The steps from "Transpose to pileup" and below are performed by the lumosVar software. (PNG 151 kb) Filtering Metrics. The criteria used to initially classify a variant in the training set for the quadratic discriminant model. (DOCX 14 kb) One Thousand Genomes Population Codes. Abbreviations used to describe the populations from the 1000 Genomes Project. (DOCX 17 kb) Mapping coverage as a function of variant calls. (PNG 38 kb) Variant Quality Filtering by Sample. Shows the number of variants of each type, in each quality filtering category. Each graph represents a variant type, each bar represents a sample, and the color of the bar represents the number of variants in each quality category. High-quality positions have a PT > 0.99. Low-quality positions have a PT < 0.99 but PV > 0.99. Artifacts have a PV < 0.99, and non-variants are not considered by the tumor-only caller (NaN). (PNG 49 kb) Variant Quality Filtering Across Dilutions. Shows the number of variants of each type, in each quality filtering category. Each graph represents a variant type, each bar represents a dilution, and the color of the bar represents the number of variants in each quality category. High-quality positions have a PT > 0.99. Low-quality positions have a PT < 0.99 but PV > 0.99. Artifacts have a PV < 0.99, and non-variants are not considered by the tumor-only caller (NaN). (PNG 43 kb) Copy Number and Sample Fraction Across Dilutions. The copy number (left), minor allele copy number (center) and sample fraction of the copy number events (right) are plotted as heatmaps. (PNG 35 kb) Additional file 10: Figure S6. Copy Number and Sample Fractions Across Sample Set. The copy number (left), minor allele copy number (center) and sample fraction of the copy number events (right) are plotted as heatmaps. (PNG 68 kb) Effect of Threshold on Sensitivity and Precision. The true positive rate (left) or positive predictive value (right) is plotted against the pSomatic threshold. Each line represents a different mean target coverage (top) or dilution (bottom). Only high-trust true somatic or private germline variants are included in this graph. As we would expect, the sensitivity decreases with the threshold, but the positive predictive value increases. We also find that higher coverage results in better sensitivity, but lower positive predictive value. At higher coverage, the threshold may be increased to improve the positive predictive value with less loss of sensitivity. (PNG 185 kb) Halperin, R.F., Carpten, J.D., Manojlovic, Z. et al. A method to reduce ancestry related germline false positives in tumor only somatic variant calling. BMC Med Genomics 10, 61 (2017).
https://doi.org/10.1186/s12920-017-0296-8 Somatic mutation Germline variant Tumor purity Copy number alterations
CommonCrawl
Foreign direct investment and productivity spillovers: a firm-level analysis of Bangladesh in comparison with Vietnam Md Arif-Ur-Rahman ORCID: orcid.org/0000-0002-3309-51011 & Kazuo Inaba1 Journal of Economic Structures volume 10, Article number: 17 (2021) Foreign direct investment (FDI) is expected to generate external effects—usually termed FDI spillovers—for a host country, and these spillovers are thought to have consequences on the productivity of domestic firms. Despite this strong expectation, the empirical findings on FDI spillover are still inconclusive. This study examines firm-level panel data to determine the effects of FDI spillover on firms' productivity in Bangladesh in comparison to Vietnam. We consider both the horizontal and vertical (backward and forward) spillover effects of FDI. We find evidence that Bangladeshi firms gain productivity improvement through intra-industry or horizontal linkages, whereas Vietnamese firms gain through backward linkages. Our findings suggest that increases in foreign presence in the same industry for Bangladesh and in downstream industries for Vietnam are related to increases in the output of domestic firms. The general belief regarding multinational corporations (MNCs) is that they possess superior production technologies and organizational techniques and tend to be more productive compared to domestic firms (Hymer 1976). MNCs allow local subsidiaries with foreign equity to get access to advanced technologies and techniques. This process in turn makes the local subsidiaries more productive while using a reduced level of input, and thus a higher level of total factor productivity (TFP) than other fully domestically owned firms. Foreign direct investment (FDI) is believed to be the preferred means through which technology transfers, as it can internalize better technologies at minimum or no additional cost (Rugman and Caves 1983). The potential of FDI to initiate technology transfer to local firms through productivity spillovers may be derived from the semi-public nature of technology and the way it is disseminated between firms. These are all neo-classical thoughts about spillover effects regarding FDI. Theoretically, it is proven that host-country firms gain from the externalities associated with foreign investment through productivity improvement and international integration (Costa and de Queiroz 2002). However, empirically there is no consensus regarding the externalities generated by foreign firms. Theoretical works suggest various channels through which knowledge and technology are transferred to domestic firms. The complexities associated with unraveling diverse effects in practice, as well as data limitations, have prevented researchers from providing compelling empirical evidence of externalities resulting from FDI. There is an ample number of studies on FDI spillover. However, among empirical studies, comparative firm-level analyses across countries have received relatively limited focus. The main reason behind this limited focus is the lack of comparable firm-level data for a set of countries. This study examines the influence of FDI spillover effects on firm productivity in Bangladesh in comparison to Vietnam. Both countries are emerging economies in Asia. Their economic development and constant improvements to their FDI policy frameworks have enabled these economies to become important destinations for investment.
The fundamental strength of Bangladesh is its favorable geographic location, putting it closer to the two big markets—India and China. It has the potential to perform as an economic passageway between South and East Asia. Moreover, foreign companies are motivated to invest because of Bangladesh's large home market with more than 170 million consumers, high economic growth, a fast-growing private sector, low production cost, available labor, etc. In addition, Bangladesh currently enjoys duty-free access to the EU and some other developed countries. As the South Asian Free Trade Area (SAFTA) comes into force, foreign investors will also enjoy duty-free access to India along with the EU and other developed countries. FDI inflows are expected to amplify because of the current infrastructural development work of power plants, bridges, metro rails, elevated expressways and other projects. Compared to the current escalation of FDI flows and the potential for further inflows, previous research on FDI spillover effects in Bangladesh is inadequate. It consists mostly of time-series analyses confined to FDI's macro-impact on economic growth. This study attempts to fill the research gap on Bangladesh regarding the firm-level analysis of FDI spillovers and their effect on productivity. In addition, this paper aims to compare the effects of FDI spillovers on the firm-level productivity of Bangladesh with that of Vietnam. Although Vietnam is currently positioned far ahead of Bangladesh in terms of attracting FDI, its historical trend (Fig. 1) reveals that before the 1990s, its FDI inflow over GDP was in line with that of Bangladesh. From the early 1990s on, Vietnam experienced a surge of FDI inflows, while Bangladesh failed to attract foreign investors. In terms of per capita GDP (Fig. 2), Bangladesh was slightly higher compared to Vietnam until 2001. Currently, the per capita GDP of Vietnam is far higher than that of Bangladesh.Footnote 1 It is interesting to study this scenario, in which, starting from similar specific economic conditions, one economy progressed over time, while another economy simply maintained its earlier position. Vietnam is a successful developing Southeast Asian nation that has adopted welcoming FDI as a part of its export-led development strategy. Historically, this region has a very good track record of attracting FDI. FDI inflows have significantly contributed to the strong economic growth and sustained development of this region. Ten Southeast Asian nations including Vietnam have formed a regional trade bloc named the Association of Southeast Asian Nations (ASEAN)Footnote 2 for the purpose of promoting governmental and economic cooperation and regional stability. Strong intra-ASEAN investments and robust investment from other Asian economies mainly contribute to the increasing trend of FDI flows in this region. Similarly, the robust increasing trend of FDI in Vietnam is contributed to mostly by ASEAN countries and other East Asian economic giants: China, Japan and the Republic of Korea. Fig. 1 Net FDI inflow (% of GDP). Fig. 2 Per capita GDP ($). Source: World Development Indicators (WDI). To the best of our knowledge, there exists no comparative study that specifically examines firm-level spillover effects for Bangladesh and Vietnam to date. This study examines the effect of FDI spillover transmission channels and compares their effects on the firm productivity of the two selected countries.
Commonly identified FDI spillover channels can be distinguished as intra-industrial and inter-industrial spillover. Intra-industrial and inter-industrial spillovers are commonly referred to as horizontal and vertical spillover (backward and forward), respectively.Footnote 3 According to theoretical expectation, the presence of foreign firms leads domestic firms in the same industry to experience productivity gain (horizontal spillover) through different channels, such as demonstration, competition, labor mobility, etc. First, the demonstration effect works through the copying of foreign firms' advanced technology, production strategies and organizational skills by domestic firms, thereby improving their productivity (Das 1987; Wang and Blomstrom 1992). Second, competition refers to a situation in which domestic firms are forced to improve production efficiency as foreign rivals enter the domestic market. Market concentration may fall via the process of competition, but the competition effect can also be negative. Fierce competition with foreign firms sometimes forces several domestic firms to exit the market, as they can no longer compete at all (Wang and Blomstrom 1992; Glass and Saggi 2002). Aitken and Harrison (1999) also term such an effect the "market stealing effect", stating that foreign firms actually switch demand away from domestic firms. Third, the migration of skilled and trained employees from foreign firms to domestic firms may result in positive knowledge spillover. Potential technological know-how and managerial skills spread to domestic firms. On the other hand, comparatively high salaries persuade skilled employees to switch from domestic firms to foreign firms, and thus create productivity losses (Fosfuri et al. 2001; Glass and Saggi 2002). Foreign firms usually prevent employee turnover by paying higher wages, as well. Many recent studies do not find robust empirical evidence of productivity benefits through horizontal or intra-industry spillovers to domestic firms. Javorcik (2004), Bwalya (2006), Barrios et al. (2004), Blalock and Gertler (2008), Damijan et al. (2008), and Kugler (2006) do not find evidence of horizontal FDI spillovers. Inter-industry, or vertical, spillover mainly results from the upstream–downstream business relationship between foreign firms and domestic firms. The vertical spillover mechanism works through backward and forward linkages. Backward spillover takes place when a domestic firm in an upstream sector experiences productivity gains through the process of supplying inputs to a downstream sector's foreign-owned firms. This can happen as foreign firms deliberately transfer knowledge to domestic input suppliers. To achieve better input supply, foreign-owned firms provide technological assistance as well as training for employees of host-country supplier firms (Lall 1978). High demand for locally produced intermediates and increased competition for foreign customers persuade domestic suppliers to improve their product quality and efficiency (Javorcik 2004). Forward linkages are not given much attention in the literature. Spillovers through forward linkages may occur from upstream foreign-invested suppliers of inputs supplying downstream domestic firms. A domestic firm can learn from the advanced technologies embodied in the inputs of its foreign-invested supplier (Grossman and Helpman 1993).
An increase in foreign investment in an upstream industry boosts competition and forces other suppliers in the same industry to improve their production efficiency in order to survive in business. As a consequence, downstream domestic firms might experience productivity improvements due to more efficiently produced inputs by all upstream firms (Newman et al. 2015). Researchers are now more interested in searching for the possibility of FDI spillover across industries. Schoors and van der Tol (2002) for Hungary, Javorcik (2004) for Lithuania, and Blalock (2002) for Indonesia all find positive spillover effects through backward and forward linkages. Similarly, Merlevede and Schoors (2005) find evidence of positive forward spillovers, but find backward spillovers only in the case of the export-oriented sectors of Romanian firms. Several studies focus on more than one economy. Konings (2001) and Barrios et al. (2004) find contrasting results for different European economies. While Konings (2001) finds negative FDI spillover effects on local firms in Bulgaria and Romania and no effect on Polish firms, Barrios et al. (2004) find positive spillover effects on firms in Spain and Ireland. Using the World Bank's firm-level survey data for five transitional economies (Poland, Moldova, Tajikistan, Uzbekistan, and the Kyrgyz Republic), Yasar and Morrison Paul (2007) find positive intra-industry spillover effects from foreign presence in domestic industries. Tondl and Fornero (2010) and Mühlen (2013) study spillover effects on Latin American economies. Tondl and Fornero (2010) find evidence for positive horizontal spillovers, whereas Mühlen (2013) finds negative spillover effects from foreign presence within industries. This study utilizes firm-level panel data to estimate productivity spillover effects from FDI. Comparable Bangladeshi and Vietnamese firm-level data for different years are taken from the Enterprise Surveys provided by the World Bank. The findings of this study suggest that the channels through which domestic firms gain productivity from the presence of foreign firms differ between the two countries. Our empirical findings support the presence of an intra-industry FDI spillover effect in Bangladeshi firms, while among Vietnamese firms, there is evidence of productivity spillovers through inter-industry backward linkages. Spillover through backward linkages can be explained as the firms' productivity being positively associated with the degree of potential contacts with foreign customers of the downstream sector. The next section of the paper discusses FDI positions and prospects in Bangladesh and Vietnam. Section 3 explains the dynamics of our dataset and its sources. Section 4 deals with the empirical framework and estimation issues of different spillover variables. Section 5 reports the empirical findings and discusses the results. This paper concludes with a brief summary of the findings in Sect. 6. FDI in Bangladesh and Vietnam Contemporary FDI environment Bangladesh gained independence in 1971 from Pakistan. During that time of war for liberation, a nationalist movement appeared among the people of Bangladesh that conferred on them the fortitude for freedom. However, the consequence of this nationalistic attitude was an inward-looking stance in economic policy. At that time, access by foreign companies was viewed negatively by policymakers. Because of this negative view, foreign companies were discouraged; until 1980, FDI in Bangladesh was very insignificant.
Then, in the 1990s, this approach changed and the government began encouraging FDI. Since then, a series of policy incentives has been offered to FDI investors from time to time. These incentives include tax holidays for a number of years, 100% foreign ownership, full profit repatriation, duty-free import of capital machinery, reinvestment of profits or dividends as FDI, work permits for foreign executives, export processing zone (EPZ) facilities, special economic zones (SEZs), flexible exit facilities, etc. FDI has tripled in Bangladesh over the past decade, from USD 1.086 billion in the year 2008 to USD 3.613 billion in 2018. However, this inflow of FDI only represents about 1% of Bangladesh's GDP, one of the lowest rates among emerging economies. Though the FDI inflow is rising, considering the current growth and size of the economy of Bangladesh, it still lags behind the desired level. Possible barriers to attracting foreign investors may include political unrest, scarcity of power and energy, lack of necessary land and infrastructure, lack of comprehensive policies regarding FDI, valuation challenges, repatriation restrictions, lack of institutional capacity to serve foreign investors, an underdeveloped financial market, etc. Despite such regulatory and institutional obstacles, Bangladesh has the opportunity to attract substantial FDI flows. Geographically, Bangladesh is located in an advantageous position between India, China and the ASEAN region. In 2018, JETRO's survey on Business Conditions of Japanese Companies in Asia and Oceania ranked Bangladesh above India and Myanmar. Now, foreign companies are showing interest in investing because of Bangladesh's large domestic market, high economic growth, low production cost, etc. In addition, Bangladesh currently enjoys duty-free access to the EU and some other developed countries. The government's current infrastructure development work (power plants, bridges, metro rails, and elevated expressways) and easing of FDI policy will increase the flow of FDI to Bangladesh. Adopting the strategy of welcoming FDI as a part of export-led development, Vietnam is a booming country. In 1986, through several economic and political reforms, the government of Vietnam opened the country to the global economy in a process known as Doi Moi (renovation). During the Doi Moi period of economic development, Vietnam aggressively sought international trade and foreign investment inflows. Initially, as part of the policy in the early 1990s, Vietnam extensively strengthened trade relations with Asian countries. In addition, with its available low-cost labor, Vietnam attracted attention from other regional economies as a promising new production site at that time. However, due to the Asian currency crisis of the late 1990s, FDI in Vietnam declined (see Fig. 1). After the crisis, bureaucratic and structural problems in its investment environment caused Vietnam to face difficulties in attracting and utilizing FDI effectively. By 2008, Vietnam's accession to the WTO in the previous year had raised the interest of foreign investors; thus, the country experienced a sharp increase in FDI. The recorded FDI in 2008 included a few large projects, such as a software park, a tourism complex, a petrochemical complex, etc. However, because of the severe 2008 global financial crisis, many of these registered projects were deferred or cancelled.
In 2015, Vietnam ranked as the world's fourth-highest attractor of FDI in terms of total investment capital, behind India, China and Indonesia.Footnote 4 Vietnam's achievement in attracting FDI has had a positive effect on the country's economic development. The contribution of FDI to its GDP was about 18% in 2015. Moreover, FDI accounted for about 4.2% of Vietnam's labor force in 2015 (Nguyen 2016). This contribution is likely to be even larger if indirect effects are taken into account. Recent participation in several bilateral and multilateral trade agreements has attracted a large amount of FDI into Vietnam. Its tax incentive framework, transparency and commitments with international trading partners influence foreign investors. The government of Vietnam actively works on market liberalization and other reforms as well. The recent reforms include the state-owned enterprise (SOE) sector, intellectual property rights, government procurement, e-commerce and the digital economy.Footnote 5 These reforms are important to maintaining Vietnam's economic competitiveness as a lucrative investment destination. Currently, labor is becoming expensive in China. Vietnam is enjoying the benefit of China's high labor cost as investors are considering Vietnam as the go-to place for manufacturing. FDI inflows in major sectors In 2019, the power, gas and petroleum sector attracted the largest FDI share in Bangladesh. This sector accounted for 36.9% of total FDI inflow, amounting to USD 1.061 billion. This was followed by manufacturing and then by the trade and commerce sector, which contributed 29.6 and 16.4%, respectively, toward total FDI inflows. According to the World Bank and the Bangladesh Power Development Board, the growth of the power sector in terms of capacity addition is notable and increased from 5 to 28% in the period from 2012 to 2018. In South Asia, Bangladesh's power sector is one of the fastest growing. It is expected that in the near future, Bangladesh's demand for electric power consumption will increase more in line with its GDP growth and the government's master plan to generate 24,000 MW of electricity by 2021, 40,000 MW by 2030, and 60,000 MW by 2041. Considering these issues, foreign investment is increasing in the power sector. Among manufacturing-sector industries, the textiles and clothing industry comprises the largest share of inward FDI. Currently, Bangladesh is the second-largest garment exporter in the world. This South Asian country enjoys tariff-free access to the EU, Canada, Australia and other major textile and garment markets. Motivated by the country's cheap labor, preferential location and government support, many international investors and famous fashion brands are investing in Bangladesh. In Vietnam's case, the manufacturing and processing sector accounts for 65% of total registered foreign investment capital, topping the list with a total capital of USD 24.6 billion. This industry is followed by real estate, then by retail and wholesale. As in previous years, manufacturing and processing industries continue to account for the major share of FDI. Industry experts say that Vietnam has gained the advantage due to MNCs shifting manufacturing to Vietnam as costs in China began to increase. This process has accelerated because of the ongoing US–China trade war as well. As in past years, Vietnam's real estate market continues to catch the attention of foreign and domestic investors.
Increased tourism and mega-infrastructure projects are pushing the demand for real estate. Different tourist spots such as Da Nang, Nha Trang, and Phu Quoc Island are becoming popular, and construction of many hotels and residential projects is ongoing. In addition, mega-projects such as the Hanoi and Ho Chi Minh City metros' construction are further expected to drive the demand for real estate. A fast-growing middle class is the core reason expediting the growth of investment in the retail and wholesale sector in Vietnam. Moreover, relaxation of certain restrictions, such as participation in the distribution system by foreign investors, has also aided growth. The data for empirical analysis are collected from the Enterprise Surveys provided by the World Bank. The sample provides firm-level data from different periodsFootnote 6 for Bangladesh and Vietnam. The datasets include firm-level information from the manufacturing, retail, wholesale and service industries. We assembled a sample from the individual panel datasets of the two countries. The World Bank used a standardized questionnaire to conduct the Enterprise Survey for all interviewed firms from various countries. This standardization gives us the opportunity to compare the firm-level data for two different Asian countries. The dataset contains roughly similar and related firm-level information for all firms, enabling us to collect comparable firm-level information across countries. The dataset provides information on companies' foreign ownership, size, age, sales, exports, imports, wages, materials costs, fixed costs, employees, financial obligations, etc. In addition to the World Bank Enterprise Survey, to estimate backward FDI spillover variables, we also used the input–output table provided by the Asian Development Bank. The sample includes 2917 and 3196 firms over 21 two-digit industrial classifications for Bangladesh and Vietnam, respectively. Our empirical analyses are not always based on all the firms' data, because depending on the particular model setting, the number of firms with complete data varies. In the original datasets, all monetary values were given in local currency units. For the purpose of our analysis, we standardize the monetary values by converting them into US dollars. To convert the local currencies, we use the purchasing power parity (PPP) conversion factorFootnote 7 (source: World Development Indicators, the World Bank). Unbalanced panel data have been used for empirical analysis. As the study examines both inter- and intra-industry spillovers, the allocation of the firms across industries is very important. Appendix II illustrates the distribution of firms over industries. The distribution of interviewed firms across industries roughly shows that the food and textiles and garments industries are the two major industries, together encompassing around 40 and 30% of total firms in the samples of Bangladesh and Vietnam, respectively. The textiles and garments industry comprises the greatest number of interviewed firms in both countries' samples, about 28% in Bangladesh and 17% in Vietnam. Having assessed the industrial structure of firms within the two countries, we now turn to a discussion of foreign ownership in the sample. For this study, we considered firms to be foreign-owned when at least a 10% share of capital is owned by foreign investors. Only 2.5% (72 out of 2917) of firms in the Bangladeshi sample and 11.2% (359 out of 3196) of firms in the Vietnamese sample met this classification.
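To make this data preparation concrete, the following is a minimal Python sketch, assuming a pooled survey extract with illustrative column names (sales_lcu, ppp_factor, foreign_share, country, and so on); these are not the Enterprise Survey's actual variable codes.

```python
import pandas as pd

# Hypothetical pooled Enterprise Survey extract; column names are
# illustrative, not the survey's actual variable codes.
firms = pd.read_csv("enterprise_survey_pooled.csv")

# Convert local-currency monetary values to USD using the WDI PPP
# conversion factor (local currency units per international dollar).
for col in ["sales", "labor_cost", "material_cost", "capital_cost"]:
    firms[col + "_usd"] = firms[col + "_lcu"] / firms["ppp_factor"]

# Classify a firm as foreign-owned when foreign investors hold at least
# a 10% share of its capital (foreign_share is in percent).
firms["foreign_owned"] = (firms["foreign_share"] >= 10).astype(int)

# Tabulate foreign presence by country, as in the shares reported above.
print(firms.groupby(["country", "foreign_owned"]).size())
```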
Appendix III shows the industry-wise presence of foreign ownership in terms of numbers of firms and sales shares for both countries. According to the number of firms, foreign presence is highest in the refined petroleum industry for both countries' samples. In Vietnam, this is followed by the transport machines, textiles and garments, and electric and electronics industries. In Bangladesh's case, no industry has a significant number of foreign-owned firms. In terms of sales, the machinery industry has the highest contribution from foreign investment in Vietnam: 72.6% of total sales in this industry have come from foreign-owned firms. Similarly, in the transport, transport machine, electric and electronics, textiles, food, and leather industries, significant shares of total sales are contributed by foreign firms. Despite the limited number of foreign firms in each industry, foreign penetration in terms of sales is very notable. For example, in the Vietnamese sample, foreign firms contribute about 42.7% of total sales in the food industry, though only 10.6% of firms have foreign presence in that industry. In comparison to Vietnam, Bangladeshi foreign penetration seems lower both in terms of numbers and sales shares. Appendices IV and V illustrate industry-wide comparisons of domestic and foreign firms' export intensity, use of foreign input and size for Bangladesh and Vietnam, respectively. Within both countries' samples, the export intensity of foreign-owned firms seems high compared to that of domestic firms. For most industries, the export share of total sales is high for foreign-owned firms. Similarly, in terms of input sources, we find that in contrast to the domestic firms, foreign firms use more foreign input than domestic input. Finally, firms with foreign presence tended to be large in size. The average numbers of full-time workers are higher for foreign-owned firms compared to domestically owned firms. Empirical model and estimation strategy Our empirical goal is to examine the correlation between firm productivity and foreign presence within and across industries. We follow the conventional model used by previous studies on FDI spillover effects by estimating a log-linear Cobb–Douglas production function. The production function is augmented with several variables apart from the regular inputs. The model is as follows: $$\ln Y_{ijt} = \delta_{0} + \delta_{1} \ln L_{ijt} + \delta_{2} \ln M_{ijt} + \delta_{3} \ln K_{ijt} + \delta_{4} \text{Foreign share}_{ijt} + \delta_{5} \text{Horizontal}_{jt} + \delta_{6} \text{Backward}_{jt} + \delta_{7} \text{Forward}_{jt} + \delta_{8} \text{Fin\_obstacle}_{ijt} + \delta_{9} \text{Size}_{ijt} + \delta_{10} \ln \text{Age}_{ijt} + \delta_{11} \text{Dummy}_{j} + \delta_{12} \text{Dummy}_{t} + \varepsilon_{ijt},$$ where i, j, and t index for firm, sector, and year, respectively. Dependent variable Yijt represents the real output, which is defined as total annual sales of firm i operating in sector j at time t. L, M, and K correspond to labor, materials, and capital, respectively, which are considered a firm's inputs for the production procedure. The annual cost of labor is used to measure labor (L). Material (M) is proxied by the total cost of raw materials and intermediate goods. Capital (K) is measured by the total costs of purchasing the individual firm's machinery, vehicles, equipment, land, and buildings.
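As a minimal sketch of how this baseline specification could be estimated, the snippet below uses statsmodels with industry and year dummies. It assumes the DataFrame from the previous sketch, with the spillover regressors (horizontal, backward, forward) already merged in at the industry-year level as constructed after the definitions below; all column names are illustrative.

```python
import numpy as np
import statsmodels.formula.api as smf

# Log-transform output and inputs (USD values from the previous sketch).
for var in ["sales", "labor_cost", "material_cost", "capital_cost"]:
    firms["ln_" + var] = np.log(firms[var + "_usd"])
firms["ln_age"] = np.log(firms["age"])

# Baseline augmented Cobb-Douglas regression with industry and time dummies.
ols = smf.ols(
    "ln_sales ~ ln_labor_cost + ln_material_cost + ln_capital_cost"
    " + foreign_share + horizontal + backward + forward"
    " + fin_obstacle + size + ln_age + C(industry) + C(year)",
    data=firms,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(ols.summary())
```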
The above baseline regression specification includes the share of foreign ownership of a firm to control for the specific effect of foreign presence. To estimate the intra-industry and inter-industry spillover effects of FDI, horizontal and vertical (backward and forward)Footnote 8 spillover variables are estimated by following the specifications in Javorcik (2004) and Kim (2015). Javorcik (2004) defines horizontal spillover as foreign equity participation averaged over all firms in a sector, weighted by each firm's share in sectoral output. Thus, in the following equation, \(\text{Horizontal}_{jt}\) represents the degree of foreign share of the total output of an industry j at time t. \(\text{Foreign share}_{ijt}\) and Yit denote the percentage share of foreign ownership and total sales of firm i at time t, respectively. Therefore, the numerator can be termed the total output of firms weighted by their foreign share in industry j at time t. In addition, the denominator is the total output of industry j at time t. $$\text{Horizontal}_{jt} = \frac{\sum_{i \,\text{for all}\, i \in j} \left( \text{Foreign share}_{it} * Y_{it} \right)}{\sum_{i \,\text{for all}\, i \in j} Y_{it}}.$$ The backward spillover captures the backward linkages between foreign buyers and domestic input suppliers. In the following equation, \(\text{Backward}_{jt}\) signifies the share of domestic firms' output in industry j supplied to foreign-owned firms in industry h. In other words, \(\text{Backward}_{jt}\) connotes the presence of foreign firms in the downstream industries that are supplied by upstream industry j at time t (Kim 2015): $$\text{Backward}_{jt} = \sum_{h \,\text{if}\, h \ne j} \alpha_{jht} * \text{Horizontal}_{ht},$$ where αjht indicates the proportion of industry j's output supplied to industry h at time t. The data for the backward linkage calculation are obtained from input–output matrices of each year of each individual country provided by the Asian Development Bank.Footnote 9 Products supplied for final consumption and import are excluded to calculate the value of αjht. This exclusion provides a better measure of the backward production linkage (Javorcik 2004). Within-industry input is also excluded because this effect is already considered while measuring horizontal spillover. A higher backward variable value indicates a large foreign presence in downstream industries supplied by upstream industry j and that the extent of intermediates supplied to industries with a foreign presence is larger. \(\text{Forward}_{jt}\) represents the weighted share of domestic firms' inputs in industry \(j\) purchased from foreign firms in industry x. \(\text{Export}_{it}\) indicates the export of intermediate goods by firm i at time t. Like Javorcik (2004), we also exclude exports from total output produced by foreign firms, as only intermediates sold in the local market are pertinent to this study. $$\text{Forward}_{jt} = \sum_{x \,\text{if}\, x \ne j} \sigma_{jxt} \left[ \frac{\sum_{i \,\text{for all}\, i \in x} \text{Foreign share}_{it} * \left( Y_{it} - \text{Export}_{it} \right)}{\sum_{i \,\text{for all}\, i \in x} \left( Y_{it} - \text{Export}_{it} \right)} \right].$$ Here, σjxt is the portion of intermediate inputs that industry x supplied to industry j in total inputs purchased by industry j at time t.
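The following is a minimal sketch of how these three spillover measures could be assembled from the firm data and the input-output coefficients. It assumes two hypothetical coefficient tables: io_up with columns (upstream, downstream, year, alpha) holding α_jht, and io_down with columns (upstream, downstream, year, sigma) holding σ_jxt, both with within-industry and final-use flows already excluded; all names are illustrative.

```python
# Horizontal_jt: output-weighted foreign share within each industry-year.
hz = (firms.assign(fs_out=firms["foreign_share"] / 100 * firms["sales_usd"])
           .groupby(["industry", "year"])
           .apply(lambda d: d["fs_out"].sum() / d["sales_usd"].sum())
           .rename("horizontal")
           .reset_index())

# Backward_jt = sum over downstream industries h != j of alpha_jht * Horizontal_ht.
bw = (io_up.merge(hz, left_on=["downstream", "year"],
                  right_on=["industry", "year"])
           .assign(term=lambda d: d["alpha"] * d["horizontal"])
           .groupby(["upstream", "year"])["term"].sum()
           .rename("backward")
           .reset_index())

# Forward_jt: same idea, but the foreign share is weighted by domestic sales
# only (sales minus exports) and aggregated with sigma_jxt over suppliers.
fw_hz = (firms.assign(dom=firms["sales_usd"] - firms["exports_usd"])
              .assign(fs_dom=lambda d: d["foreign_share"] / 100 * d["dom"])
              .groupby(["industry", "year"])
              .apply(lambda d: d["fs_dom"].sum() / d["dom"].sum())
              .rename("fw_share")
              .reset_index())
fw = (io_down.merge(fw_hz, left_on=["upstream", "year"],
                    right_on=["industry", "year"])
             .assign(term=lambda d: d["sigma"] * d["fw_share"])
             .groupby(["downstream", "year"])["term"].sum()
             .rename("forward")
             .reset_index())
```

The three resulting industry-year tables would then be merged back onto the firm panel before estimation.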
This value is also constructed by using the same input–output matrices used for the backward spillover. As before, within-industry inputs purchased are excluded. Finally, as other control variables, the financial obstacleFootnote 10 (fin_obstacle), size, and age of a firm are included in the regression to control for their effects on productivity. Financial obstacle measures the degree of obstacles a firm faces in accessing finance for its current operations. Additionally, industry dummy variables are included to control for industry-specific effects, because some industries may be more productive than others. Time dummy variables control for time variance and macroeconomic shocks. The above regression estimation is performed on the full sample of all firms (domestic and foreign owned) and on the sample of domestic firms, which is defined as firms that have less than 10% foreign capital share. Empirical findings Basic findings on FDI spillovers and productivity This section illustrates the association between various FDI spillover effects and firm output in different model specifications. Table 1 shows the OLS estimation results with time dummies for the sample of all firms (Columns 1 and 3) and domestic firms (Columns 2 and 4) for the Bangladesh (first panel) and Vietnam (second panel) samples. Table 1 FDI spillover and productivity: OLS (dependent variable = log output) As expected, inputs for the production process (i.e., labor, capital, and material) exhibit a significant positive effect on firms' outputs. In Columns 1 and 3, the coefficient of foreign ownership implies that a higher percentage of foreign shares in a firm's capital does not have a significant effect on firm output for both samples. Turning to the spillover variables, in the Bangladeshi case none of them seems to have a significant effect on firm sales. In comparison with the Vietnamese sample, the horizontal spillover shows a higher coefficient, though it is statistically insignificant. The coefficients of inter-industry spillovers are negative and statistically insignificant for the Bangladeshi sample. In Vietnam's case, evidence shows spillovers take place through backward linkages. A positive significant (at the 5% level of significance) coefficient of backward spillover constitutes evidence of productivity spillovers through contacts between domestic firms and their foreign customers in downstream sectors. Horizontal and forward linkages are not significant to firm output. Among other control variables, financial obstacles have negative effects on a firm's real output. Narrow access to finance for a firm's current operations hampers the productivity of that firm. Larger firms enjoy high productivity, as the size variable exhibits a positive significant coefficient. Firm age is not significant to firm output for either country. The OLS estimation is open to criticism because it poses a consistency problem. The assumption of exogeneity is an imperative condition for estimating the productivity model. Griliches and Mairesse (1998) noted that the exogeneity assumption of the production function estimation is usually violated in the case of an OLS estimation model. In the production model, capital is treated as a fixed factor, but the labor and material variables are endogenous in nature. As a consequence, there is a possibility of correlation between the unobserved productivity shock and the inputs. To alleviate the drawbacks of OLS, we use a fixed-effect model.
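For the fixed-effect model introduced above, a minimal sketch using the linearmodels package (firm and year effects, clustered standard errors) might look as follows; firm_id and the remaining columns are again illustrative assumptions carried over from the earlier sketches.

```python
import statsmodels.api as sm
from linearmodels.panel import PanelOLS

# Set a (firm, year) MultiIndex, as linearmodels expects for panel data.
panel = firms.set_index(["firm_id", "year"]).sort_index()

exog = sm.add_constant(panel[[
    "ln_labor_cost", "ln_material_cost", "ln_capital_cost",
    "foreign_share", "horizontal", "backward", "forward",
    "fin_obstacle", "size", "ln_age",
]])
fe = PanelOLS(panel["ln_sales"], exog,
              entity_effects=True, time_effects=True,
              drop_absorbed=True)  # drop regressors absorbed by the effects
print(fe.fit(cov_type="clustered", cluster_entity=True))
```

Note that any regressor that is constant within a firm over time is absorbed by the firm effects, which is why drop_absorbed=True is set in this sketch.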
Table 2 exhibits the results of the fixed-effectFootnote 11 model for both samples. Similar to the OLS estimation, the coefficients of inputs for the production process (labor, capital and material) exhibit a significant positive effect on firms' output. Though the coefficient of foreign ownership is insignificant for both countries' samples, it is positive for Bangladesh and negative for Vietnam. A positive coefficient of foreign ownership would imply that a higher percentage of foreign share in a firm's capital is associated with higher firm output. Table 2 FDI spillover and productivity: fixed effect (dependent variable = log output) Among the spillover variables, only horizontal spillover significantly affects firm output in the Bangladesh case. A positive significant coefficient is found for the horizontal spillover variable in both the full sample (Column 1) and the subsample of domestic firms (Column 2). The coefficients of the inter-industry spillover variables are negative and statistically insignificant to firm output. The positive significant coefficient of the horizontal spillover variable implies that the presence of foreign firms leads domestic firms in the same industry to realize productivity gains, which can be experienced through different channels. Domestic firms can improve productivity by copying foreign firms' advanced technology, production strategies and organizational skills. The migration of skilled and trained employees from foreign firms to domestic firms may result in positive knowledge spillover. Potential technological know-how and managerial skills may spread to domestic firms. In the case of Bangladesh, inter-industry or vertical spillovers that result from the upstream–downstream business relationship between foreign firms and domestic firms are not effective. In terms of the Vietnamese sample, among the spillover variables only backward spillover significantly affects firm output. A positive significant coefficient is found for the backward spillover variable in both the full sample (Column 3) and the subsample of domestic firms (Column 4). This finding strengthens the evidence that domestic firms that supply multinational firms are able to reap productivity improvements in the upstream sectors. The finding of productivity gain through backward linkage validates the recent argument that inter-industry rather than intra-industry spillovers occur. However, the horizontal and forward spillovers show positive coefficients but appear to be statistically insignificant in all regression specifications of the Vietnamese sample. The insignificant coefficient of the horizontal spillover variable implies a lack of evidence of spillovers taking place through intra-industry channels. This finding on horizontal spillover is consistent with the existing studies that found either negative or insignificant results (Aitken and Harrison 1999; Kathuria 2001; Javorcik 2004). The findings of the OLS and fixed-effect models suggest that Bangladeshi firms gain productivity improvement through horizontal linkages and Vietnamese firms gain productivity spillover through backward linkages. By nature, foreign firms try to set barriers to check the leakage of technology and management skills to their same-industry competitors. Javorcik (2004) states that foreign-invested firms within sectors compete with domestic firms and have every incentive to prevent their embodied knowledge and technologies from leaking to their domestic competitors.
Therefore, it is difficult to sustain productivity gains through intra-industry or horizontal linkages. For example, by imposing patents on new technology, foreign firms prevent technology leakages. Similarly, by paying higher salaries, they prevent employee turnover and thereby the loss of management skills. Conversely, foreign firms have no reason to check technology dispersion to their suppliers. To have a better input supply, foreign firms deliberately transfer knowledge to domestic input suppliers. Moreover, foreign-owned firms are willing to provide technological assistance and training for employees of host-country supplier firms (Lall 1978). The empirical findings might be influenced by the number of foreign firms in the sample. This sample represents the actual circumstances, in which there is a minimal number of foreign firms in the different industries of Bangladesh. If there were noteworthy numbers of foreign firms in different industries, then the business relationships with upstream or downstream industries might result in vertical spillovers. One of the main challenges for attracting FDI in Bangladesh is its position in the World Bank's "ease of doing business"Footnote 12 index. Despite its prospects, Bangladesh is still among the most difficult places to do business. Evidently, this depressing and unappealing business condition fails to inspire investor confidence. In overall global rankings of the World Bank's ease of doing business index, Vietnam is far better positioned compared to Bangladesh. In 2020, Bangladesh ranked 168th among 190 economies with a total score of 45, whereas Vietnam ranked 70th with a total score of 69.8. In 2019, Bangladesh ranked 176th and Vietnam ranked 69th out of 190 countries. Among the 10 areas of assessment, Vietnam did extremely well in credit availability (ranked 25th out of 190 economies), paying taxes (ranked 25th) and providing electricity (ranked 27th). The weakest performance was in resolving insolvency, for which it ranked 122nd. Among the 10 assessment topics, Bangladesh was far behind Vietnam in all criteria except protecting minority investors. Even among South Asian countries, Bangladesh was ranked the second lowest, just above Afghanistan (173). Using TFP as a proxy of firm productivity We changed the basic model by replacing the dependent variable, using TFP as the proxy of firm productivity instead of total output. To estimate the TFP, we first estimate a production function and use the resulting coefficients corresponding to a firm's inputs to compute the firm's TFP. We start with the standard Cobb–Douglas production function with constant returns to scale: $$Y_{ijt} = A_{ijt} L_{ijt}^{{\alpha_{L} }} M_{ijt}^{{\alpha_{M} }} K_{ijt}^{{\alpha_{K} }} ,$$ $$\ln Y_{ijt} = \ln A_{ijt} + \alpha_{L} \ln L_{ijt} + \alpha_{M} \ln M_{ijt} + \alpha_{K} \ln K_{ijt} .$$ As before, i, j, and t index for firm, sector and year, respectively, and Y, L, M, and K correspond to total output, labor, materials, and capital, respectively. Equation (2) transforms the production function into linear form by taking the natural logarithm of both sides of Equation (1). Here, log output Y is linearly related to the three basic factors of production: labor L, materials M, and capital K. The residual parts of output Y that are not explained by these three factors are attributed to firm-specific productivity, A, which is termed TFP. Put differently, if we regress lnY on lnL, lnM and lnK, the regression errors are the TFP or firm's productivity, lnA.
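As a minimal sketch of this residual-based TFP measure: estimate Eq. (2) by OLS, keep the residual as ln A, and then regress it on the foreign-presence and spillover terms. The dummies follow the description in the following text; all column names are illustrative assumptions from the earlier sketches.

```python
import statsmodels.formula.api as smf

# First stage: Eq. (2) by OLS; the residual is the firm's ln(TFP).
stage1 = smf.ols(
    "ln_sales ~ ln_labor_cost + ln_material_cost + ln_capital_cost"
    " + C(year) + C(country)",
    data=firms,
).fit()
firms["ln_tfp"] = stage1.resid  # aligns by index; rows dropped for
                                # missing data simply stay NaN

# Second stage: relate ln(TFP) to foreign presence and spillover terms.
stage2 = smf.ols(
    "ln_tfp ~ foreign_share + horizontal + backward + forward"
    " + fin_obstacle + size + ln_age",
    data=firms,
).fit(cov_type="HC1")
print(stage2.summary())
```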
Equation (2) is estimated using the OLS method.Footnote 13 We also included time and country dummies to control for differences in time and country effects. Table 3 exhibits the different FDI spillover effects on TFP. The share of foreign ownership in a firm has no significant relationship with its technological advancement or TFP. Similar to the earlier findings (Tables 1 and 2), spillover through inter-industry backward linkages positively affects firms' productivity improvement in the case of Vietnam. In the case of Bangladesh, intra-industry or horizontal spillover causes increased productivity. Among the other control variables, firm size has a positive association with firm productivity. Table 3 FDI spillover and productivity (dependent variable = log TFP) This study assesses the relationship between FDI spillovers and firm productivity for two emerging economies in Asia, Bangladesh and Vietnam. The disparities among the different FDI spillover channels in improving the productivity of firms in these countries are analyzed at the firm level. The empirical findings imply that Bangladeshi firms gain productivity improvement through intra-industry or horizontal linkages, and Vietnamese firms gain productivity through inter-industry spillover, specifically through backward linkages. So, Bangladeshi firms realize productivity gains through the presence of foreign-owned firms in the same industry, whereas for Vietnamese firms, an increase in foreign presence in downstream industries is related to a rise in the output of domestic firms in upstream industries. This significant effect of backward spillover on the productivity of Vietnamese firms is in congruence with the results of previous studies (Schoors and van der Tol 2002; Javorcik 2004; Blalock 2002) focused on vertical spillovers. This finding suggests that an increase in foreign presence in downstream industries is related to an increase in the output of domestic firms in upstream industries. We do not find intra-industry horizontal spillovers for Vietnamese firms, which supports earlier studies carried out for other developing and transition economies (Aitken and Harrison 1999; Djankov and Hoekman 2000; Konings 2001; Javorcik 2004). In terms of vertical linkages, particularly the backward linkage, foreign firms have no reason to check the spread of technology to their suppliers. To have a better input supply, foreign firms purposely transfer knowledge to domestic input suppliers. By this process of sharing advanced knowledge, domestic firms gain productivity. In contrast, foreign firms within a sector compete with domestic firms and set barriers to prevent their embodied knowledge and technologies from leaking to their domestic competitors. Moreover, competing with foreign firms can lead to the crowding out of domestic firms. Domestic firms that are unable to compete with foreign firms are forced to leave their businesses. In fact, through competition within the same industry, foreign firms redirect demand away from domestic firms. Bearing in mind the above challenging facts, it might be difficult for Bangladeshi firms to gain productivity through the foreign presence in the same industry. Furthermore, the domestic firms of Bangladesh lag far behind technologically advanced foreign firms. To gain from the current surge of foreign investment in Bangladesh, the government should support firms in upstream and downstream industries.
Then the business relationships with the foreign firms of upstream or downstream industries might result in vertical spillovers. Data unavailability is the main limitation in conducting firm-level analysis of FDI spillover on productivity. Information on the same firm for consecutive years is essential to construct sophisticated measures of firm productivity. As mentioned earlier, several related studies use the measures of productivity suggested by Olley and Pakes (1996) and Levinsohn and Petrin (2003). Unfortunately, we could not apply either strategy due to the unavailability of firm-level information for several consecutive financial years. Again, non-response by firms to specific vital questions also creates difficulty in analysis. For example, many firms seem reluctant to report sales/output, capital information, etc. Without such information, it is challenging to measure firm productivity. In addition, the number of foreign firms is very small in the samples of a few industries. A sample that fails to represent the actual extent of foreign presence in a country might yield misleading empirical results. Indeed, to obtain a complete understanding of the effect of FDI on the sampled countries, more research is required. In particular, confirming the findings of this study using different sophisticated alternative measures of firm productivity would be useful. Only improved data availability can help to ease this limitation. Moreover, deeper analysis of host-country and investor characteristics will add variation in the context of determining the extent of FDI spillovers through different channels. Intra-industry (within-industry) spillover Inter-industry (between-industry) spillover Horizontal spillover When the presence of foreign firms leads domestic firms in the same industry to experience productivity gains, it is termed horizontal spillover. It may happen through different channels, such as demonstration, competition, labor mobility, etc. Vertical spillover - Backward spillover Backward spillover takes place when a domestic firm in an upstream sector experiences productivity gains through the process of supplying inputs to a downstream sector's foreign-owned firms - Forward spillover Spillovers through forward linkages may occur from upstream foreign-invested suppliers of inputs supplying downstream domestic firms Graphical view See Table 4. Table 4 Distribution of firms by industries Table 5 Foreign penetration Table 6 Domestic vs. foreign firms: few selected issues: Bangladesh Table 7 Domestic vs. foreign firms: few selected issues: Vietnam Table 8 Ease of doing business index 2020 According to the World Development Indicators (WDI) data, in 2018, per capita GDP was $2,566.60 and $1,698.35 for Vietnam and Bangladesh, respectively. ASEAN members: Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam. See Appendix I for a brief clarification of the spillover channels. The 2016 FDI Report published by fDi Intelligence. ASEAN Business Guide: The economies of ASEAN and the opportunities they present, 2018 edition. Available at: https://assets.kpmg/content/dam/kpmg/sg/pdf/2018/07/ASEAN-Business-Guide-COMPLETE.pdf. Survey Periods: Vietnam 2005, 2009 and 2015; Bangladesh 2007, 2011 and 2013.
Definition given by the World Bank: "PPP conversion factor is the number of units of a country's currency required to buy the same amounts of goods and services in the domestic market as a US dollar would buy in the United States". The horizontal and vertical linkage variables are time-varying and sector-specific. The 2010 input–output matrices are used for samples prior to 2010 because the Asian Development Bank provides input–output tables only from 2010 onward. Financial obstacles are measured on a scale of 0 to 4 (0 = no obstacle; 1 = minor obstacle; 2 = moderate obstacle; 4 = severe obstacle). The Hausman test indicates that the fixed-effect model is appropriate. The World Bank ranks 190 economies on their ease of doing business. The rankings are measured by sorting the aggregate scores in 10 assessment areas. The rankings for Vietnam and Bangladesh in 2020 in the assessment areas are presented in Appendix VI. The OLS technique is criticized as a biased estimation method. In the literature, we found two common methods to measure TFP suggested by Olley and Pakes (1996) and Levinsohn and Petrin (2003). Unfortunately, neither strategy fits the data of this study. Aitken BJ, Harrison AE (1999) Do domestic firms benefit from direct foreign investment? Evidence from Venezuela. Am Econ Rev 89:605–618 Barrios S, Dimelis S, Louri H, Strobl E (2004) Efficiency spillovers from foreign direct investment in the EU periphery: a comparative study of Greece, Ireland, and Spain. Rev World Econ 140(4):688–705 Blalock G, Gertler P (2008) Welfare gains from foreign direct investment through technology transfer to local suppliers. J Int Econ 74:402–421 Blalock G (2002) Technology adoption from foreign direct investment and exporting: evidence from Indonesian manufacturing. PhD Thesis, Haas Business School, University of California Berkeley Bwalya SM (2006) Foreign direct investment and technology spillovers: evidence from panel data analysis of manufacturing firms in Zambia. J Dev Econ 81(2):514–526 Costa I, de Queiroz SR (2002) Foreign direct investment and technological capabilities in Brazilian industries. Res Policy 31:1431–1443 Damijan J, Rojec M, Majcen B, Knell M (2008) Impact of firm heterogeneity on direct and spillover effects of FDI: micro evidence from 10 transition countries. LIOS Discussion Paper Series 218 Das S (1987) Externalities and technology transfer through multinational corporations: a theoretical analysis. J Int Econ 22(1–2):171–182 Djankov S, Hoekman B (2000) Foreign investment and productivity growth in Czech enterprises. World Bank Econ Rev 14(1):49–64 Fosfuri A, Motta M, Ronde T (2001) Foreign direct investment and spillovers through worker mobility. J Int Econ 53(1):205–222 Glass AJ, Saggi K (2002) Multinational firms and technology transfer. Scand J Econ 104(4):495–513 Griliches Z, Mairesse J (1998) Production function: search for identification. In: Strom S (ed) Econometrics and economic theory in the 20th century. Cambridge University Press, Cambridge Grossman G, Helpman E (1993) Innovation and growth in the global economy. MIT Press Books. The MIT Press, Cambridge, MA Hymer S (1976) The international operations of national firms: a study of foreign direct investment. MIT Press, Cambridge, MA Javorcik BS (2004) Does foreign direct investment increase the productivity of domestic firms? In search of spillovers through backward linkages.
Am Econ Rev 94:605–627 Kathuria V (2001) Foreign firms, technology transfer and knowledge spillovers to Indian manufacturing firms—a stochastic frontier analysis. Appl Econ 33(5):625–642 Kim M (2015) Productivity spillovers from FDI and the role of domestic firm's absorptive capacity in South Korean manufacturing industries. Empirical Econ 48(2):807–827 Konings J (2001) The effects of direct foreign investment on domestic firms: evidence from firm-level panel data in emerging economies. Econ Transit 9(3):619–633 Kugler M (2006) The diffusion of externalities from foreign direct investment: the sectoral pattern of technological spillovers. Mimeo, University of Southampton Lall S (1978) Transnationals, domestic enterprises and industrial structure in host LDCs: a survey. Oxf Econ Pap 30(2):217–248 Levinsohn J, Petrin A (2003) Estimating production functions using inputs to control for unobservables. Rev Econ Stud 70(2):317–342 Merlevede B, Schoors K (2005) Conditional spillovers from FDI within and between sectors: evidence from Romania. Department of Economics and CERISE, University of Ghent Mühlen H (2013) Firm-level productivity spillovers from FDI in Latin American countries. Institute of Development Research and Development Policy, Ruhr-Universität Bochum, IEE Working Paper Series, No. 196 Newman C, Rand J, Talbot T, Tarp F (2015) Technology transfers, foreign investment and productivity spillovers. Eur Econ Rev 76:168–187 Nguyen T (2016) A review of foreign direct investment in Vietnam and implications for improvements. International Trade and Economic Series. https://www.tradeeconomics.com/wp-content/uploads/2019/07/A-review-of-foreign-direct-investments-in-Vietnam-and-implications-for-improvement-min.pdf Olley S, Pakes A (1996) The dynamics of productivity in the telecommunications equipment industry. Econometrica 64(6):1263–1297 Rugman A, Caves R (1983) Multinational enterprises and economic analysis. Can J Econ 16(4):742–744 Schoors K, Van Der Tol B (2002) Foreign direct investment spillovers within and between sectors: evidence from Hungarian data. Ghent University Working Paper 02/157, Belgium Tondl G, Fornero JA (2010) Sectoral productivity and spillover effects in Latin America. Austrian Institute for International Economics, FIW Working Paper Series, No. 53 Wang J-Y, Blomstrom M (1992) Foreign investment and technology transfer: a simple model. Eur Econ Rev 36(1):137–155 Yasar M, Morrison Paul CJ (2007) Firm performance and foreign direct investment: evidence from transition economies. Econ Bull 15(21):1–11 We would like to express our sincere gratitude to Professor David Flath (Graduate School of Economics, Ritsumeikan University) for his insightful comments during the research work. This work received partial funding from 'Grants‑in‑Aid for Scientific Research' provided by the Ministry of Education, Culture, Sports, Science and Technology. Graduate School of Economics, Ritsumeikan University, Noji Higashi 1-1-1, Kusatsu, Shiga, 525-8577, Japan Md Arif-Ur-Rahman & Kazuo Inaba Md Arif-Ur-Rahman Kazuo Inaba Both authors provided critical feedback and helped shape the research, analysis and manuscript. KI was involved in planning and supervised the work, and Md A-U-R processed the data, performed the analysis, and drafted the manuscript. Both authors discussed the results and commented on the manuscript. Both authors read and approved the final manuscript. Correspondence to Md Arif-Ur-Rahman.
It is to confirm that there are no known conflicts of interest associated with this study and there has been no significant financial support for this work that could have influenced its outcome. Arif-Ur-Rahman, M., Inaba, K. Foreign direct investment and productivity spillovers: a firm-level analysis of Bangladesh in comparison with Vietnam. Economic Structures 10, 17 (2021). https://doi.org/10.1186/s40008-021-00248-2 Revised: 01 September 2021 Foreign direct investments Vertical spillover
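To make the estimation notes above concrete, here is a minimal simulation sketch (not the authors' code; the variable names tfp and backward are hypothetical stand-ins for the paper's productivity and linkage measures) of why the Hausman test's preference for fixed effects matters: when an unobserved firm effect is correlated with a regressor, pooled OLS is biased, while the within (fixed-effects) estimator is not.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_firms, n_years = 200, 5
firm = np.repeat(np.arange(n_firms), n_years)
firm_effect = np.repeat(rng.normal(size=n_firms), n_years)

# Spillover regressor correlated with the unobserved firm effect.
backward = 0.7 * firm_effect + rng.normal(size=firm.size)
tfp = 0.3 * backward + firm_effect + rng.normal(size=firm.size)
df = pd.DataFrame({"firm": firm, "backward": backward, "tfp": tfp})

def ols_slope(x, y):
    # Univariate OLS slope on demeaned data.
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (x @ x)

# Pooled OLS ignores the firm effect and is biased upward here.
b_pooled = ols_slope(df["backward"].to_numpy(), df["tfp"].to_numpy())

# Within transform: demean by firm, sweeping out the fixed effect.
xw = (df["backward"] - df.groupby("firm")["backward"].transform("mean")).to_numpy()
yw = (df["tfp"] - df.groupby("firm")["tfp"].transform("mean")).to_numpy()
b_within = ols_slope(xw, yw)

print(f"pooled OLS: {b_pooled:.2f}   within (FE): {b_within:.2f}   true effect: 0.30")

The Hausman test formalizes exactly this kind of comparison (fixed versus random effects); a significant divergence between the two estimates, as reported above, points to the fixed-effects specification.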
Jan Kautz
Vice President of Learning and Perception Research @ NVIDIA
I lead the Learning and Perception Research Team at NVIDIA, working predominantly on computer vision problems — from low-level vision (denoising, super-resolution, computational photography) and geometric vision (structure from motion, SLAM, optical flow) to visual perception (detection, recognition, classification), as well as machine learning problems (deep learning, reinforcement learning, generative models).
ECCV: My team has 7 papers accepted to ECCV 2022. Please stop by our posters!
CVPR: My team has 10 papers accepted to CVPR 2022. Hope to see you there!
My team has 3 papers at NeurIPS 2021, with a focus on advanced generative modeling.
SIGGRAPH: We won SIGGRAPH 2021's Real-Time Live! Best in Show Award for our demo I am AI: AI-Driven Digital Avatar Made Easy.
NVIDIA Canvas, based on our GauGAN work, is now out as a native Windows app. It's being covered by Engadget, PetaPixel, TechCrunch and others.
Our work on Binary TTC: A Temporal Geofence for Autonomous Navigation was awarded a Best Student Paper Honorable Mention at CVPR 2021.
My team has over 10 papers accepted (including five orals) at CVPR 2021. Please "stop" by our posters and talks.
We have 3 papers accepted (including a spotlight) at ICLR 2021. We hope to see you online!
My team has 8 papers at NeurIPS 2020. We hope to see you at our presentations!
My team has 9 papers (including an oral and several spotlights) at ECCV 2020. We hope to see you online!
My team has 9 papers accepted (including two orals) at CVPR 2020. Please stop by our posters and talks.
Research Areas
Deep learning is an active area for us. We are interested in efficient deep learning, for both training and inference, which includes pruning, neural architecture search, and so forth.
Solving perception tasks is an important aspect of our work. In particular, we have recently focused on perception from videos as well as image collections using deep learning approaches.
My team is investigating deep learning-based approaches for efficient 3D reconstruction, processing of 3D data, as well as stereo and optical flow.
We investigate machine learning methods that can synthesize new images and videos, mostly based on generative adversarial networks.
Generative Models for Image Synthesis (NIPS 2017, CVPR 2018, ECCV 2018, NeurIPS 2018, NeurIPS 2019, NeurIPS 2020)
We have been working on image synthesis with generative models for a number of years. In particular, we have been working on GAN- and VAE-based models, with much of our work focused on image translation. We have presented high-quality image translation results on various challenging image translation tasks, including street scene image translation, animal image translation, and face image translation. You can explore some of these yourself, e.g., GauGAN and GANimal on NVIDIA's AI Playground.
Hand and Body Pose Estimation (ECCV 2020, CVPR 2020, ECCV 2018, CVPR 2018, CVPR 2016, IEEE F&G 2015)
We have been working on hand and body pose estimation over the past years. Our methods won the HANDS 2015 competition, the HANDS 2017 competition (tracking task), as well as the HANDS 2019 competition (task 3). Our recent methods do this via latent 2.5D heatmap regression (ECCV 2018), which can be combined with semi-supervised learning (CVPR 2018). We have further shown that a multi-view constraint allows for weakly-supervised pose learning (CVPR 2020) and that biomechanical constraints improve accuracy (ECCV 2020).
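The latent 2.5D heatmap regression mentioned above predicts, per joint, a 2D heatmap together with a depth map. As a rough sketch of the readout step such methods build on (an illustration under assumed tensor shapes, not the ECCV 2018 paper's exact formulation), a spatial soft-argmax converts raw heatmaps into image coordinates and softly pools the depth map at the same locations:

import numpy as np

def soft_argmax_2p5d(heatmaps, depthmaps, beta=10.0):
    # heatmaps, depthmaps: (J, H, W) raw per-joint network outputs (hypothetical).
    J, H, W = heatmaps.shape
    logits = heatmaps.reshape(J, -1) * beta
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # per-joint spatial softmax
    ys, xs = np.mgrid[0:H, 0:W]
    x = w @ xs.ravel()                               # expected x per joint
    y = w @ ys.ravel()                               # expected y per joint
    z = (w * depthmaps.reshape(J, -1)).sum(axis=1)   # softly pooled depth
    return np.stack([x, y, z], axis=1)               # (J, 3) 2.5D keypoints

# Toy check: one joint peaked at (row 12, col 20), constant depth 0.5.
hm = np.zeros((1, 64, 64)); hm[0, 12, 20] = 5.0
dm = np.full((1, 64, 64), 0.5)
print(soft_argmax_2p5d(hm, dm).round(2))             # approx. [[20. 12. 0.5]]

Because this readout is differentiable, the heatmaps can stay latent and be trained end-to-end from keypoint supervision alone, which is one of the properties such 2.5D formulations exploit.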
Efficient Deep Learning (CVPR 2020, CVPR 2019, ICLR 2017)
We endeavor to make our computer vision models as efficient as possible. To this end, we have been working on methods for pruning neural networks as well as on methods for architecture search. We have focused on structural pruning of neural network parameters in order to reduce computation, energy, and memory transfer costs during inference (CVPR 2019, ICLR 2017). Similarly, our unified neural architecture search (UNAS) method is latency-aware, while achieving high accuracy (CVPR 2020).
SPLATNet (CVPR 2018, Best Paper Honorable Mention)
We propose a network architecture for processing point clouds that directly operates on a collection of points represented as a sparse set of samples in a high-dimensional lattice. Our network uses sparse bilateral convolutional layers as building blocks. These layers maintain efficiency by using indexing structures to apply convolutions only on occupied parts of the lattice, and allow flexible specifications of the lattice structure enabling hierarchical and spatially-aware feature learning, as well as joint 2D-3D reasoning. It won "best paper honorable mention" at CVPR 2018.
Local Laplacian Filtering (Communications of the ACM 2015, ACM SIGGRAPH 2014)
Multi-scale manipulations are central to image editing but they are also prone to halos. Achieving artifact-free results requires sophisticated edge-aware techniques and careful parameter tuning. We address these shortcomings with the local Laplacian filters, which can achieve a broad range of effects using standard Laplacian pyramids. We show that they are closely related to anisotropic diffusion and to bilateral filtering. Building upon this result, we describe an acceleration scheme for local Laplacian filters on gray-scale images that yields speed-ups on the order of 50×.
Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation Y.-Y. Yeh, K. Nagano, S. Khamis, J. Kautz, M.-Y. Liu, T.-C. Wang ACM Transactions on Graphics (Proceedings SIGGRAPH Asia 2022) 41(6), December 2022 ArXiv Video Webpage
LANA: Latency Aware Network Acceleration P. Molchanov, J. Hall, H. Yin, N. Fusi, J. Kautz, A. Vahdat European Conference on Computer Vision (ECCV) PDF Supplemental ArXiv
Towards Annotation-efficient Segmentation via Image-to-image Translation E. Vorontsov, P. Molchanov, M. Gazda, C. Beckham, J. Kautz, S. Kadoury Medical Image Analysis 82, November 2022 ArXiv PDF
Neural Light Field Estimation for Outdoor Scenes with Differentiable Virtual Object Insertion Z. Wang, W. Chen, D. Acuna, J. Kautz, S. Fidler
GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras Y. Yuan, U. Iqbal, P. Molchanov, K. Kitani, J. Kautz IEEE Conference on Computer Vision and Pattern Recognition (CVPR) June 2022 (oral) PDF WebPage
A-ViT: Adaptive Tokens for Efficient Vision Transformer H. Yin, A. Vahdat, J. M. Alvarez, A. Mallya, J. Kautz, P. Molchanov
GradViT: Gradient Inversion of Vision Transformers A. Hatamizadeh, H. Yin, H. Roth, W. Li, J. Kautz, D. Xu, P. Molchanov
GroupViT: Zero-Shot Transfer to Semantic Segmentation with Text Supervision J. Xu, S. D. Mello, S. Liu, W. Byeon, T. Breuel, J. Kautz, X. Wang
CoordGAN: Self-Supervised Dense Correspondences Emerge from GANs J. Mu, S. Liu, S. D. Mello, Z. Yu, N. Vasconcelos, X. Wang, J. Kautz
FreeSOLO: Learning to Segment Objects without Annotations X. Wang, Z. Yu, S. D. Mello, J. Kautz, A. Anandkumar, C. Shen, J. M. Alvarez
Learning Continuous Environment Fields via Implicit Functions X. Li, S. D. Mello, X. Wang, M.-H. Yang, J. Kautz, S. Liu International Conference on Learning Representations (ICLR)
IJCV Learning Contrastive Representation for Semantic Correspondence T. Xiao, S. Liu, S. D. Mello, Z. Yu, J. Kautz, M.-H. Yang International Journal on Computer Vision (IJCV)
Displacement-Invariant Cost Computation for Stereo Matching Y. Zhong, C. Loop, W. Byeon, S. Birchfield, Y. Dai, K. Zhang, A. Kamenev, T. Breuel, H. Li, J. Kautz
Neural Interferometry: Image Reconstruction from Astronomical Interferometers using Implicit Neural Representations B. Wu, B. Eckart, C. Liu, J. Kautz AAAI Conference on Artificial Intelligence (AAAI)
Coupled Segmentation and Edge Learning Using Dynamic Graph Propagation Z. Yu, R. Huang, W. Byeon, S. Liu, G. Liu, T. Breuel, A. Anandkumar, J. Kautz Neural Information Processing Systems (NeurIPS)
A Contrastive Learning Approach for Training Variational Autoencoder Priors J. Aneja, A. Schwing, J. Kautz, A. Vahdat
Score-based Generative Modeling in Latent Space A. Vahdat, K. Kreis, J. Kautz
KAMA: 3D Keypoint Aware Body Mesh Articulation U. Iqbal, K. Xie, Y. Guo, J. Kautz, P. Molchanov International Conference on 3D Vision (3DV) PDF Supplemental
BMVC Hierarchical Contrastive Motion Learning for Video Action Recognition X. Yang, X. Yang, S. Liu, D. Sun, L. Davis, J. Kautz British Machine Vision Conference (BMVC)
ICCV Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting Z. Wang, J. Philion, S. Fidler, J. Kautz International Conference on Computer Vision (ICCV) October 2021 (oral) PDF Supplemental WebPage
Self-Supervised Object Detection via Generative Image Synthesis S. K. Mustikovela, S. De Mello, A. Prakash, U. Iqbal, S. Liu, T. Nguyen-Phuoc, C. Rother, J. Kautz
TPAMI Domain Stylization: A Fast Covariance Matching Framework towards Domain Adaptation A. Dundar, M.-Y. Liu, Z. Yu, T.-C. Wang, J. Zedlewski, J. Kautz IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 43(7), July 2021 PDF (soon) IEEE DL
Binary TTC: A Temporal Geofence for Autonomous Navigation A. Badki, O. Gallo, J. Kautz, P. Sen June 2021 (Best Student Paper Honorable Mention & oral)
Weakly-Supervised Physically Unconstrained Gaze Estimation R. Kothari, S. De Mello, U. Iqbal, W. Byeon, S. Park, J. Kautz
Learning to Track Instances without Video Annotations Y. Fu, S. Liu, U. Iqbal, S. De Mello, H. Shi, J. Kautz
See Through Gradients: Image Batch Recovery via GradInversion H. Yin, A. Mallya, A. Vahdat, J. M. Alvarez, J. Kautz, P. Molchanov
Self-Supervised Learning on 3D Point Clouds by Learning Latent Generative Models B. Eckart, W. Yuan, C. Liu, J. Kautz
DexYCB: A Benchmark for Capturing Hand Grasping of Objects Y.-W. Chao, W. Yang, A. Handa, Y. Xiang, Y. Narang, K. V. Wyk, U. Iqbal, P. Molchanov, J. Tremblay, S. Birchfield, J. Kautz, D. Fox
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models Z. Xiao, K. Kreis, J. Kautz, A. Vahdat May 2021 (spotlight)
Parameter Efficient Multimodal Transformers for Video Representation Learning S. Lee, Y. Yu, G. Kim, T. Breuel, J. Kautz, Y. Song
NVAE: A Deep Hierarchical Variational Autoencoder A. Vahdat, J. Kautz December 2020 (spotlight)
Online Adaptation for Consistent Mesh Reconstruction in the Wild X. Li, S. Liu, S. De Mello, K. Kim, X. Wang, M.-H. Yang, J. Kautz
Convolutional Tensor-Train LSTM for Spatio-Temporal Learning J. Su, W. Byeon, J. Kossaifi, F. Huang, J. Kautz, A. Anandkumar
Optical Gaze Tracking with Spatially-Sparse Single-Pixel Detectors R. Li, E. Whitmire, M. Stengel, B. Boudaoud, J. Kautz, D. Luebke, S. Patel, K. Aksit IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
Contrastive Learning for Weakly Supervised Phrase Grounding T. Gupta, A. Vahdat, G. Chechik, X. Yang, J. Kautz, D. Hoiem August 2020 (spotlight)
DeepGMR: Learning Latent Gaussian Mixture Models for Registration W. Yuan, B. Eckart, K. Kim, V. Jampani, D. Fox, J. Kautz
Joint Disentangling and Adaptation for Cross-Domain Person Re-Identification Y. Zou, X. Yang, Z. Yu, B. V. K. V. Kumar, J. Kautz August 2020 (oral)
Self-supervised Single-view 3D Reconstruction via Semantic Consistency X. Li, S. Liu, K. Kim, S. De Mello, V. Jampani, M.-H. Yang, J. Kautz
Weakly Supervised 3D Hand Pose Estimation via Biomechanical Constraints A. Spurr, P. Molchanov, U. Iqbal, O. Hilliges, J. Kautz
UFO2: A Unified Framework towards Omni-supervised Object Detection Z. Ren, Z. Yu, X. Yang, M.-Y. Liu, A. Schwing, J. Kautz
Angular Visual Hardness B. Chen, W. Liu, A. Garg, Z. Yu, A. Shrivastava, J. Kautz, A. Anandkumar
Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion H. Yin, P. Molchanov, J. M. Alvarez, Z. Li, A. Mallya, D. Hoiem, N. Jha, J. Kautz
UNAS: Differentiable Architecture Search Meets Reinforcement Learning A. Vahdat, A. Mallya, M.-Y. Liu, J. Kautz
Self-Supervised Viewpoint Learning from Image Collections S. K. Mustikovela, V. Jampani, S. De Mello, U. Iqbal, S. Liu, C. Rother, J. Kautz PDF Supplemental Video
Bi3D: Stereo Depth Estimation via Binary Classifications A. Badki, O. Gallo, A. Troccoli, K. Kim, P. Sen, J. Kautz
Meshlet Priors for 3D Mesh Reconstruction A. Badki, O. Gallo, P. Sen, J. Kautz
Weakly-Supervised 3D Human Pose Learning via Multi-view Images in the Wild U. Iqbal, P. Molchanov, J. Kautz
Two-shot Spatially-varying BRDF and Shape Estimation M. Boss, V. Jampani, K. Kim, H. Lensch, J. Kautz
Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera J. S. Yoon, K. Kim, O. Gallo, H. S. Park, J. Kautz
Instance-aware, Context-focused, and Memory-efficient Weakly-Supervised Object Detection Z. Ren, Z. Yu, X. Yang, M.-Y. Liu, Y. J. Lee, A. Schwing, J. Kautz
Exploiting Semantics for Face Image Deblurring Z. Shen, W.-S. Lai, T. Xu, J. Kautz, M.-H. Yang ArXiv DOI
NRMVS: Non-Rigid Multi-View Stereo M. Innmann, K. Kim, J. Gu, M. Niessner, C. Loop, M. Stamminger, J. Kautz March 2020, pages 2754-2763 PDF Video
Joint-task Self-supervised Learning for Temporal Correspondence X. Li, S. Liu, S. De Mello, X. Wang, M.-H. Yang, J. Kautz
Dancing to Music H.-Y. Lee, X. Yang, M.-Y. Liu, T.-C. Wang, Y.-D. Lu, M.-H. Yang, J. Kautz
Few-shot Video-to-Video Synthesis T.-C. Wang, M.-Y. Liu, A. Tao, G. Liu, J. Kautz, B. Catanzaro
Extreme View Synthesis I. Choi, O. Gallo, A. Troccoli, M. H. Kim, J. Kautz IEEE International Conference on Computer Vision (ICCV) PDF Code
SENSE: A Shared Encoder Network for Scene-flow Estimation H. Jiang, D. Sun, V. Jampani, Z. Lv, E. Learned-Miller, J. Kautz
Few-shot Adaptive Gaze Estimation S. Park, S. De Mello, P. Molchanov, U. Iqbal, O. Hilliges, J. Kautz
Learning Propagation for Arbitrarily-structured Data S. Liu, X. Li, V. Jampani, S. De Mello, J. Kautz
Few-shot Unsupervised Image-to-Image Translation M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, J. Kautz
Neural Inverse Rendering of an Indoor Scene from a Single Image S. Sengupta, J. Gu, K. Kim, G. Liu, D. Jacobs, J. Kautz
Unsupervised Video Interpolation Using Cycle Consistency F. Reda, D. Sun, A. Dundar, M. Shoeybi, G. Liu, K. Shih, A. Tao, J. Kautz, B. Catanzaro
Few-Shot Viewpoint Estimation H.-Y. Tseng, S. De Mello, J. Tremblay, S. Liu, S. Birchfield, M.-H. Yang, J. Kautz
Video Stitching for Linear Camera Arrays W.-S. Lai, O. Gallo, J. Gu, D. Sun, M.-H. Yang, J. Kautz
Joint Discriminative and Generative Learning for Person Re-identification Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, J. Kautz
STEP: Spatio-Temporal Progressive Learning for Video Action Detection X. Yang, X. Yang, M.-Y. Liu, F. Xiao, L. Davis, J. Kautz
PlaneRCNN: 3D Plane Detection and Reconstruction from a Single View C. Liu, K. Kim, J. Gu, Y. Furukawa, J. Kautz
Neural RGB→D Sensing: Depth and Uncertainty from a Video Camera C. Liu, J. Gu, K. Kim, S. Narasimhan, J. Kautz June 2019 (Best Paper Finalist & oral)
SCOPS: Self-Supervised Co-Part Segmentation W.-C. Hung, V. Jampani, S. Liu, P. Molchanov, M.-H. Yang, J. Kautz
Pixel Adaptive Convolutional Neural Networks H. Su, V. Jampani, D. Sun, O. Gallo, E. Learned-Miller, J. Kautz
Learning Linear Transformations for Fast Image and Video Style Transfer X. Li, S. Liu, J. Kautz, M.-H. Yang
Putting Humans in a Scene: Learning Affordance in 3D Indoor Environments X. Li, S. Liu, K. Kim, M.-H. Yang, X. Wang, J. Kautz
Importance Estimation for Neural Network Pruning P. Molchanov, A. Mallya, S. Tyree, I. Frosio, J. Kautz
Models Matter, So Does Training: An Empirical Study of CNNs for Optical Flow Estimation D. Sun, X. Yang, M.-Y. Liu, J. Kautz 2019 PDF Supplemental IEEE DL
Statistical Nearest Neighbors for Image Denoising I. Frosio, J. Kautz IEEE Transactions on Image Processing 28(2), February 2019, pages 723-728 Webpage Code PDF
A Fusion Approach for Multi-Frame Optical Flow Estimation Z. Ren, O. Gallo, D. Sun, M.-H. Yang, E. Sudderth, J. Kautz
Context-aware Synthesis and Placement of Object Instances D. Lee, S. Liu, J. Gu, M. Liu, M.-H. Yang, J. Kautz
Video-to-Video Synthesis T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, G. Liu, A. Tao, J. Kautz, B. Catanzaro Webpage Code PDF Supplemental
Separating Reflection and Transmission Images in the Wild P. Wieschollek, O. Gallo, J. Gu, J. Kautz
Hand Pose Estimation via Latent 2.5D Heatmap Regression U. Iqbal, P. Molchanov, T. Breuel, J. Gall, J. Kautz
Multimodal Unsupervised Image-to-image Translation X. Huang, M.-Y. Liu, S. Belongie, J. Kautz PDF Supplemental Code
Switchable Temporal Propagation Network S. Liu, J. Gu, S. De Mello, V. Jampani, G. Zhong, M.-H. Yang, J. Kautz
Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset Q. Guo, I. Frosio, O. Gallo, T. Zickler, J. Kautz
SEAL: A Framework Towards Simultaneous Edge Alignment and Learning Z. Yu, W. Liu, Y. Zou, C. Feng, S. Ramalingam, B. V. K. V. Kumar, J. Kautz
Superpixel Sampling Networks V. Jampani, D. Sun, M.-Y. Liu, M.-H. Yang, J. Kautz
A Closed-form Solution to Photorealistic Image Stylization Y. Li, M.-Y. Liu, X. Li, M.-H. Yang, J. Kautz
Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation Z. Lv, K. Kim, A. Troccoli, D. Sun, J. Rehg, J. Kautz
HGMR: Hierarchical Gaussian Mixtures for Adaptive 3D Registration B. Eckart, K. Kim, J. Kautz
EOE: Expected Overlap Estimation Over Unstructured Point Cloud Data
Budget-Aware Activity Detection with A Recurrent Policy Network B. Mahasseni, X. Yang, P. Molchanov, J. Kautz
PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume
SPLATNet: Sparse Lattice Networks for Point Cloud Processing H. Su, V. Jampani, D. Sun, S. Maji, E. Kalogerakis, M.-H. Yang, J. Kautz June 2018 (Best Paper Honorable Mention Award & oral)
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, B. Catanzaro PDF Webpage Code
Geometry-Aware Learning of Maps for Camera Localization S. Brahmbhatt, J. Gu, K. Kim, J. Hays, J. Kautz June 2018 (spotlight)
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation H. Jiang, D. Sun, V. Jampani, M.-H. Yang, E. Learned-Miller, J. Kautz
Depth-Based 3D Hand Pose Estimation: From Current Achievements to Future Goals S. Yuan, G. Garcia-Hernando, B. Stenger, G. Moon, J. Y. Chang, K. M. Lee, P. Molchanov, J. Kautz, S. Honari, L. Ge, J. Yuan, X. Chen, G. Wang, F. Yang, K. Akiyama, Y. Wu, Q. Wan, M. Madadi, S. Escalera, S. Li, D. Lee, I. Oikonomidis, A. Argyros, T.-K. Kim
Learning Superpixels with Segmentation-Aware Affinity Loss W.-C. Tu, M.-Y. Liu, V. Jampani, D. Sun, S.-Y. Chien, M.-H. Yang, J. Kautz
Deep Semantic Face Deblurring
MoCoGAN: Decomposing Motion and Content for Video Generation S. Tulyakov, M.-Y. Liu, X. Yang, J. Kautz
Improving Landmark Localization with Semi-Supervised Learning S. Honari, P. Molchanov, S. Tyree, C. Pal, P. Vincent, J. Kautz
Making Convolutional Networks Recurrent for Visual Sequence Learning X. Yang, P. Molchanov, J. Kautz
ReBlur2Deblur: Deblurring Videos via Self-Supervised Learning H. Chen, J. Gu, O. Gallo, M.-Y. Liu, A. Veeraraghavan, J. Kautz International Conference on Computational Photography (ICCP)
ICRA Synthetically Trained Networks for Learning Human-Readable Plans from Real-World Demonstrations J. Tremblay, T. To, A. Molchanov, S. Tyree, J. Kautz, S. Birchfield International Conference on Robotics and Automation (ICRA)
Learning Binary Residual Representations for Domain-specific Video Streaming Y.-H. Tsai, M.-Y. Liu, D. Sun, M.-H. Yang, J. Kautz February 2018 (spotlight) PDF Video Blog
Learning Adaptive Parameter Tuning for Image Processing J. Dong, I. Frosio, J. Kautz Electronic Imaging, Image Processing: Algorithms and Systems XVI
Unsupervised Image-to-Image Translation M.-Y. Liu, T. Breuel, J. Kautz Neural Information Processing Systems (NIPS)
Learning Affinity via Spatial Propagation Networks S. Liu, S. De Mello, J. Gu, G. Zhong, M.-H. Yang, J. Kautz
Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting R. Maier, K. Kim, D. Cremers, J. Kautz, M. Niessner October 2017, pages 3133-3141
A Lightweight Approach for On-The-Fly Reflectance Estimation K. Kim, J. Gu, S. Tyree, P. Molchanov, M. Niessner, J. Kautz October 2017, pages 20-28 (oral)
Cascaded Scene Flow Prediction using Semantic Segmentation Z. Ren, D. Sun, J. Kautz, E. Sudderth International Conference on 3D Vision
Multiframe Scene Flow with Piecewise Rigid Motion V. Golyanik, K. Kim, R. Maier, M. Niessner, D. Stricker, J. Kautz October 2017 (spotlight) PDF ArXiv
Computational Zoom: A Framework for Post-Capture Image Composition ACM Transactions on Graphics (Proceedings SIGGRAPH 2017) 36(4), July 2017, pages 46:1-46:14
Mixed-primary Factorization for Dual-frame Computational Displays F.-C. Huang, D. Pajak, J. Kim, J. Kautz, D. Luebke 36(4), July 2017, pages 149:1-149:13
CVPRW Reconstructing Intensity Images from Binary Spatial Gradient Cameras S. Jayasuriya, O. Gallo, J. Gu, T. Aila, J. Kautz IEEE CVPR Embedded Vision Workshop
Binary gradient cameras extract edge and temporal information directly on the sensor, allowing for low-power, low-bandwidth, and high-dynamic-range capabilities — all critical factors for the deployment of embedded computer vision systems. However, these types of images require specialized computer vision algorithms and are not easy to interpret by a human observer. In this paper we propose to recover an intensity image from a single binary spatial gradient image with a deep autoencoder. Extensive experimental results on both simulated and real data show the effectiveness of the proposed approach.
Dynamic Facial Analysis: From Bayesian Filtering to Recurrent Neural Network J. Gu, S. De Mello, X. Yang, J. Kautz July 2017, pages 1531-1540
Facial analysis in videos, including head pose estimation and facial landmark localization, is key for many applications such as facial animation capture, human activity recognition, and human-computer interaction. In this paper, we propose to use a recurrent neural network (RNN) for joint estimation and tracking of facial features in videos. We are inspired by the fact that the computation performed in an RNN bears resemblance to Bayesian filters, which have been used for tracking in many previous methods for facial analysis from videos. Bayesian filters used in these methods, however, require complicated, problem-specific design and tuning. In contrast, our proposed RNN-based method avoids such tracker-engineering by learning from training data, similar to how a convolutional neural network (CNN) avoids feature-engineering for image classification. As an end-to-end network, the proposed RNN-based method provides a generic and holistic solution for joint estimation and tracking of various types of facial features from consecutive video frames. Extensive experimental results on head pose estimation and facial landmark localization from videos demonstrate that the proposed RNN-based method outperforms frame-wise models and Bayesian filtering. In addition, we create a large-scale synthetic dataset for head pose estimation, with which we achieve state-of-the-art performance on a benchmark dataset.
Polarimetric Multi-View Stereo Z. Cui, J. Gu, B. Shi, P. Tan, J. Kautz July 2017, pages 369-378
Multi-view stereo relies on feature correspondences for 3D reconstruction, and thus is fundamentally flawed in dealing with featureless scenes. In this paper, we propose polarimetric multi-view stereo, which combines per-pixel photometric information from polarization with epipolar constraints from multiple views for 3D reconstruction. Polarization reveals surface normal information, and is thus helpful to propagate depth to featureless regions. Polarimetric multi-view stereo is completely passive and can be applied outdoors in uncontrolled illumination, since the data capture can be done simply with either a polarizer or a polarization camera. Unlike previous work on shape-from-polarization which is limited to either diffuse polarization or specular polarization only, we propose a novel polarization imaging model that can handle real-world objects with mixed polarization. We prove there are exactly two types of ambiguities on estimating surface azimuth angles from polarization, and we resolve them with graph optimization and iso-depth contour tracing. This step significantly improves the initial depth map estimates, which are later fused together for complete 3D reconstruction. Extensive experimental results demonstrate high-quality 3D reconstruction and better performance than state-of-the-art multi-view stereo methods, especially on featureless 3D objects, such as ceramic tiles, office room with white walls, and highly reflective cars in the outdoors.
GA3C: GPU-based A3C for Deep Reinforcement Learning M. Babaeizadeh, I. Frosio, S. Tyree, J. Clemons, J. Kautz International Conference on Learning Representations
We introduce and analyze the computational aspects of a hybrid CPU/GPU implementation of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. Our analysis concentrates on the critical aspects to leverage the GPU's computational power, including the introduction of a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. We also show the potential for the use of larger DNN models on a GPU. Our TensorFlow implementation achieves a significant speed-up compared to our CPU-only implementation, and it will be made publicly available to other researchers.
Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning P. Molchanov, S. Tyree, T. Karras, T. Aila, J. Kautz
We propose a new framework for pruning convolutional kernels in neural networks to enable efficient inference, focusing on transfer learning where large and potentially unwieldy pretrained networks are adapted to specialized tasks. We interleave greedy criteria-based pruning with fine-tuning by backpropagation — a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on an efficient first-order Taylor expansion to approximate the absolute change in training cost induced by pruning a network component. After normalization, the proposed criterion scales appropriately across all layers of a deep CNN, eliminating the need for per-layer sensitivity analysis. The proposed criterion demonstrates superior performance compared to other criteria, such as the norm of kernel weights or average feature map activation.
IEEE TCI Loss Functions for Neural Networks for Image Processing H. Zhao, O. Gallo, I. Frosio, J. Kautz IEEE Transactions on Computational Imaging 3(1), March 2017, pages 47-57 IEEE DL ArXiv
Neural networks are becoming central in several areas of computer vision and image processing and different architectures have been proposed to solve specific problems. The impact of the loss layer of neural networks, however, has not received much attention in the context of image processing: the default and virtually only choice is L2. In this paper we bring attention to alternative choices. We study the performance of several losses, including perceptually-motivated losses, and propose a novel, differentiable error function. We show that the quality of the results improves significantly with better loss functions, even when the network architecture is left unchanged.
Computational Bounce Flash for Indoor Portraits L. Murmann, A. Davis, J. Kautz, F. Durand 35(6), December 2016, pages 190:1-190:9 PDF Supplemental WWW
Portraits taken with direct flash look harsh and unflattering because the light source comes from a small set of angles very close to the camera. Advanced photographers address this problem by using bounce flash, a technique where the flash is directed towards other surfaces in the room, creating a larger, virtual light source that can be cast from different directions to provide better shading variation for 3D modeling. However, finding the right direction to point a bounce flash requires skill and careful consideration of the available surfaces and subject configuration. Inspired by the impact of automation for exposure, focus and flash metering, we automate control of the flash direction for bounce illumination. We first identify criteria for evaluating flash directions, based on established photography literature, and relate these criteria to the color and geometry of a scene. We augment a camera with servomotors that rotate the flash head, and additional sensors (a fisheye and 3D sensors) to gather information about potential bounce surfaces. We present a simple numerical optimization criterion that finds directions for the flash that consistently yield compelling illumination and demonstrate the effectiveness of our various criteria in common photographic configurations.
NIPSW NIPS Workshop on Efficient Methods for Deep Neural Networks
ACM MM Multilayer and Multimodal Fusion of Deep Neural Networks for Video Classification ACM Multimedia October 2016, pages 978-987 (oral)
This paper presents a novel framework to combine multiple layers and modalities of deep neural networks for video classification. We first propose a multilayer strategy to simultaneously capture a variety of levels of abstraction and invariance in a network, where the convolutional and fully connected layers are effectively represented by our proposed feature aggregation methods. We further introduce a multimodal scheme that includes four highly complementary modalities to extract diverse static and dynamic cues at multiple temporal scales. In particular, for modeling the long-term temporal information, we propose a new structure, FC-RNN, to effectively transform pre-trained fully connected layers into recurrent layers. A robust boosting model is then introduced to optimize the fusion of multiple layers and modalities in a unified way. In the extensive experiments, we achieve state-of-the-art results on two public benchmark datasets: UCF101 and HMDB51.
Accelerated Generative Models for 3D Point Cloud Data B. Eckart, K. Kim, A. Troccoli, A. Kelly, J. Kautz June 2016, pages 5497-5505 (spotlight oral)
Finding meaningful, structured representations of 3D point cloud data (PCD) has become a core task for spatial perception applications. In this paper we introduce a method for constructing compact generative representations of PCD at multiple levels of detail. As opposed to deterministic structures such as voxel grids or octrees, we propose probabilistic subdivisions of the data through local mixture modeling, and show how these subdivisions can provide a maximum likelihood segmentation of the data. The final representation is hierarchical, compact, parametric, and statistically derived, facilitating run-time occupancy calculations through stochastic sampling. Unlike traditional deterministic spatial subdivision methods, our technique enables dynamic creation of voxel grids according to the application's needs. In contrast to other generative models for PCD, we explicitly enforce sparsity among points and mixtures, a technique which we call expectation sparsification. This leads to a highly parallel hierarchical Expectation Maximization (EM) algorithm well-suited for the GPU and real-time execution. We explore the trade-offs between model fidelity and model size at various levels of detail, our tests showing favorable performance when compared to octree and NDT-based methods.
Online Detection and Classification of Dynamic Hand Gestures with Recurrent 3D Convolutional Neural Networks P. Molchanov, X. Yang, S. Gupta, K. Kim, S. Tyree, J. Kautz June 2016, pages 4207-4215 PDF Video WWW
Automatic detection and classification of dynamic hand gestures in real-world systems intended for human computer interaction is challenging as: 1) there is a large diversity in how people perform gestures, making detection and classification difficult; 2) the system must work online in order to avoid noticeable lag between performing a gesture and its classification; in fact, a negative lag (classification before the gesture is finished) is desirable, as feedback to the user can then be truly instantaneous. In this paper, we address these challenges with a recurrent three-dimensional convolutional neural network that performs simultaneous detection and classification of dynamic hand gestures from multi-modal data. We employ connectionist temporal classification to train the network to predict class labels from in-progress gestures in unsegmented input streams. In order to validate our method, we introduce a new challenging multi-modal dynamic hand gesture dataset captured with depth, color and stereo-IR sensors. On this challenging dataset, our gesture recognition system achieves an accuracy of 83.8%, outperforms competing state-of-the-art algorithms, and approaches human accuracy of 88.4%. Moreover, our method achieves state-of-the-art performance on SKIG and ChaLearn2014 benchmarks.
IEEE IVS Towards Selecting Robust Hand Gestures for Automotive Interfaces S. Gupta, P. Molchanov, X. Yang, K. Kim, S. Tyree, J. Kautz IEEE Intelligent Vehicles Symposium 2016
Driver distraction is a serious threat to automotive safety. The visual-manual interfaces in cars are a source of distraction for drivers. Automotive touch-less hand gesture-based user interfaces can help to reduce driver distraction and enhance safety and comfort. The choice of hand gestures in automotive interfaces is central to their success and widespread adoption. In this work we evaluate the recognition accuracy of 25 different gestures for state-of-the-art computer vision-based gesture recognition algorithms and for human observers. We show that some gestures are consistently recognized more accurately than others by both vision-based algorithms and humans. We further identify similarities in the hand gesture recognition abilities of vision-based systems and humans. Lastly, by merging pairs of gestures with high misclassification rates, we propose ten robust hand gestures for automotive interfaces, which are classified with high and equal accuracy by vision-based algorithms.
Robust Model-based 3D Head Pose Estimation G. P. Meyer, S. Gupta, I. Frosio, D. Reddy, J. Kautz December 2015, pages 3649-3657 PDF CVF
We introduce a method for accurate three-dimensional head pose estimation using a commodity depth camera. We perform pose estimation by registering a morphable face model to the measured depth data, using a combination of particle swarm optimization (PSO) and the iterative closest point (ICP) algorithm, which minimizes a cost function that includes a 3D registration and a 2D overlap term. The pose is estimated on the fly without requiring an explicit initialization or training phase. Our method handles large pose angles and partial occlusions by dynamically adapting to the reliable visible parts of the face. It is robust and generalizes to different depth sensors without modification. On the Biwi Kinect dataset, we achieve best-in-class performance, with average angular errors of 2.1, 2.1 and 2.4 degrees for yaw, pitch, and roll, respectively, and an average translational error of 5.9 mm, while running at 6 fps on a graphics processing unit.
CGF Interactive Sketch-Driven Image Synthesis D. Turmukhambetov, N. Campbell, D. Goldman, J. Kautz Computer Graphics Forum 34(8), December 2015, pages 130-142 PDF DOI
We present an interactive system for composing realistic images of an object under arbitrary pose and appearance specified by sketching. Our system draws inspiration from a traditional illustration workflow: The user first sketches rough 'masses' of the object, as ellipses, to define an initial abstract pose that can then be refined with more detailed contours as desired. The system is made robust to partial or inaccurate sketches using a reduced-dimensionality model of pose space learnt from a labelled collection of photos. Throughout the composition process, interactive visual feedback is provided to guide the user. Finally, the user's partial or complete sketch, complemented with appearance requirements, is used to constrain the automatic synthesis of a novel, high-quality, realistic image.
UIST Joint 5D Pen Input for Light Field Displays J. Tompkin, S. Muff, J. McCann, H. Pfister, J. Kautz, M. Alexa, W. Matusik ACM User Interface Software and Technology (UIST) PDF Video Slides WWW
Light field displays allow viewers to see view-dependent 3D content as if looking through a window; however, existing work on light field display interaction is limited. Yet, they have the potential to parallel 2D pen and touch screen systems which present a joint input and display surface for natural interaction. We propose a 4D display and interaction space using a dual-purpose lenslet array, which combines light field display and light field pen sensing, and allows us to estimate the 3D position and 2D orientation of the pen. This method is simple and fast (150 Hz), with position accuracy of 2–3 mm and precision of 0.2–0.6 mm from 0–350 mm away from the lenslet array, and orientation accuracy of 2° and precision of 0.2–0.3° within 50°. Further, we 3D print the lenslet array with embedded baffles to reduce out-of-bounds cross-talk, and use an optical relay to allow interaction behind the focal plane. We demonstrate our joint display/sensing system with interactive light field painting.
MLMD: Maximum Likelihood Mixture Decoupling for Fast and Accurate Point Cloud Registration October 2015, pages 241-249
Registration of Point Cloud Data (PCD) forms a core component of many 3D vision algorithms such as object matching and environment reconstruction. In this paper, we introduce a PCD registration algorithm that utilizes Gaussian Mixture Models (GMM) and a novel dual-mode parameter optimization technique which we call mixture decoupling. We show how this decoupling technique facilitates both faster and more robust registration by first optimizing over the mixture parameters (decoupling the mixture weights, means, and covariances from the points) before optimizing over the 6DOF registration parameters. Furthermore, we frame both the decoupling and registration process inside a unified, dual-mode Expectation Maximization (EM) framework, for which we derive a Maximum Likelihood Estimation (MLE) solution along with a parallel implementation on the GPU. We evaluate our MLE-based mixture decoupling (MLMD) registration method over both synthetic and real data, showing better convergence for a wider range of initial conditions and higher speeds than previous state-of-the-art methods.
An Adaptive Acceleration Structure for Screen-space Ray Tracing S. Widmer, D. Pajak, A. Schulz, K. Pulli, J. Kautz, M. Goesele, D. Luebke High-Performance Graphics 2015 August 2015, pages 67-76
We propose an efficient acceleration structure for real-time screen-space ray tracing. The hybrid data structure represents the scene geometry by combining a bounding volume hierarchy with local planar approximations. This enables fast empty space skipping while tracing and yields exact intersection points for the planar approximation. In combination with an occlusion-aware ray traversal our algorithm is capable of quickly tracing even multiple depth layers. Compared to prior work, our technique improves the accuracy of the results, is more general, and allows for advanced image transformations, as all pixels can cast rays in arbitrary directions. We demonstrate real-time performance for several applications, including depth-of-field rendering, stereo warping, and screen-space ray traced reflections.
EGSR Filtering Environment Illumination for Interactive Physically-Based Rendering in Mixed Reality S. Mehta, K. Kim, D. Pajak, K. Pulli, J. Kautz, R. Ramamoorthi Eurographics Symposium on Rendering 2015 PDF Supplemental Video DOI
Physically correct rendering of environment illumination has been a long-standing challenge in interactive graphics, since Monte-Carlo ray-tracing requires thousands of rays per pixel. We propose accurate filtering of a noisy Monte-Carlo image using Fourier analysis. Our novel analysis extends previous works by showing that the shape of illumination spectra is not always a line or wedge, as in previous approximations, but rather an ellipsoid. Our primary contribution is an axis-aligned filtering scheme that preserves the frequency content of the illumination. We also propose a novel application of our technique to mixed reality scenes, in which virtual objects are inserted into a real video stream so as to become indistinguishable from the real objects. The virtual objects must be shaded with the real lighting conditions, and the mutual illumination between real and virtual objects must also be determined. For this, we demonstrate a novel two-mode path tracing approach that allows ray-tracing a scene with image-based real geometry and mesh-based virtual geometry. Finally, we are able to de-noise a sparsely sampled image and render physically correct mixed reality scenes at over 5 fps on the GPU.
Modeling Object Appearance using Context-Conditioned Component Analysis D. Turmukhambetov, N. Campbell, S. Prince, J. Kautz
Subspace models have been very successful at modeling the appearance of structured image datasets when the visual objects have been aligned in the images, for example faces. Even with extensions that allow for global transformations or dense warps of the image, the set of visual objects whose appearance may be modeled by such methods is limited. They are unable to account for visual objects where occlusion leads to changing visibility of different parts of the object (without a strict layered structure) and where a one-to-one mapping between parts is not preserved, for example bunches of bananas contain different numbers of bananas but each individual banana shares an appearance subspace. In this work we remove the image space alignment limitations of existing subspace models by conditioning the models on a shape dependent context that allows for complex, non-linear structure within the appearance of the visual object to be captured and shared. This allows us to exploit the advantages of subspace appearance models with non-rigid, deformable objects whilst also dealing with complex occlusions and varying numbers of parts. We demonstrate the effectiveness of our new model with examples of structured in-painting and appearance transfer.
Hand Gesture Recognition with 3D Convolutional Neural Networks P. Molchanov, S. Gupta, K. Kim, J. Kautz IEEE CVPR Workshop on Observing and Understanding Hands in Action June 2015, winner of the VIVA hand gesture challenge
Touchless hand gesture recognition systems are becoming important in automotive user interfaces as they improve safety and comfort. Various computer vision algorithms have employed color and depth cameras for hand gesture recognition, but robust classification of gestures from different subjects performed under widely varying lighting conditions is still challenging. We propose an algorithm for drivers' hand gesture recognition from challenging depth and intensity data using 3D convolutional neural networks. Our solution combines information from multiple spatial scales for the final prediction. It also employs spatio-temporal data augmentation for more effective training and to reduce potential overfitting. Our method achieves a correct classification rate of 77.5% on the VIVA challenge dataset.
Locally Non-rigid Registration for Mobile HDR Photography O. Gallo, A. Troccoli, J. Hu, K. Pulli, J. Kautz
Image registration for stack-based HDR photography is challenging. If not properly accounted for, camera motion and scene changes result in artifacts in the composite image. Unfortunately, existing methods to address this problem are either accurate, but too slow for mobile devices, or fast, but prone to failing. We propose a method that fills this void: our approach is extremely fast—under 700ms on a commercial tablet for a pair of 5MP images—and prevents the artifacts that arise from insufficient registration quality.
APP. OPTICS Slim Near Eye Display Using Pinhole Aperture Arrays K. Aksit, J. Kautz, D. Luebke Applied Optics 54(11), April 2015, pages 3422-3427 WWW DOI
We report a new technique for building a wide-angle, lightweight, thin form factor, cost effective, easy to manufacture near-eye Head-Mounted Display (HMD) for virtual reality applications. Our approach adopts an aperture mask containing an array of pinholes and a screen as a source of imagery. We demonstrate a proof-of-concept HMD prototype with a binocular field of view (FOV) of 70° × 45°, or total diagonal FOV of 83°. This FOV should increase with increasing display panel size. The optical angular resolution supported in our prototype can go down to 1.4 - 2.1 arcmin by adopting a display with 20 - 30 µm pixel pitch.
Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid S. Paris, S. Hasinoff, J. Kautz Communications of the ACM – Research Highlight 58(3), March 2015, pages 81-91 CACM CACM Cover CACM Interview
The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. However, because it is constructed with spatially invariant Gaussian kernels, the Laplacian pyramid is widely believed to be unable to represent edges well and to be ill-suited for edge-aware operations such as edge-preserving smoothing and tone mapping. To tackle these tasks, a wealth of alternative techniques and representations have been proposed, e.g., anisotropic diffusion, neighborhood filtering, and specialized wavelet bases. While these methods have demonstrated successful results, they come at the price of additional complexity, often accompanied by higher computational cost or the need to post-process the generated results. In this paper, we show state-of-the-art edge-aware processing using standard Laplacian pyramids. We characterize edges with a simple threshold on pixel values that allows us to differentiate large-scale edges from small-scale details. Building upon this result, we propose a set of image filters to achieve edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping. The advantage of our approach is its simplicity and flexibility, relying only on simple point-wise nonlinearities and small Gaussian convolutions; no optimization or post-processing is required. As we demonstrate, our method produces consistently high-quality results, without degrading edges or introducing halos.
Speaker-Following Video Subtitles Y. Hu, J. Kautz, Y. Yu, W. Wang ACM Transactions on Multimedia Computing, Communications, and Applications 11(2), January 2015, pages 32:1-32:17 ACM DL ArXiv
We propose a new method for improving the presentation of subtitles in video (e.g., TV and movies). With conventional subtitles, the viewer has to constantly look away from the main viewing area to read the subtitles at the bottom of the screen, which disrupts the viewing experience and causes unnecessary eyestrain. Our method places on-screen subtitles next to the respective speakers to allow the viewer to follow the visual content while simultaneously reading the subtitles. We use novel identification algorithms to detect the speakers based on audio and visual information. Then the placement of the subtitles is determined using global optimization. A comprehensive usability study indicated that our subtitle placement method outperformed both conventional fixed-position subtitling and another previous dynamic subtitling method in terms of enhancing the overall viewing experience and reducing eyestrain.
FlexISP: A Flexible Camera Image Processing Framework F. Heide, M. Steinberger, Y.-T. Tsai, N. Rouf, D. Pajak, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, K. Pulli 33(6), December 2014, pages 231:1-231:13 PDF Supplemental Webpage
Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.
Device Effect on Panoramic Video+Context Tasks F. Pece, J. Tompkin, H. Pfister, J. Kautz, C. Theobalt Conference on Visual Media Production (CVMP) 2014
Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance is yet untested on this imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even if participants felt less capable, but tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems.
ACCVW User Directed Multi-View-Stereo Y. Doron, N. Campbell, J. Starck, J. Kautz Workshop on User-Centred Computer Vision (at ACCV) PDF Talk
Depth reconstruction from video footage and image collections is a fundamental part of many modelling and image-based rendering applications. However, real-world scenes often contain limited texture information, repeated elements and other ambiguities which remain challenging for fully automatic algorithms. This paper presents a technique that combines intuitive user constraints with dense multi-view stereo reconstruction. By providing annotations in the form of simple paint strokes, a user can guide a multi-view stereo algorithm and avoid common failure cases. We show how smoothness, discontinuity and depth ordering constraints can be incorporated directly into a variational optimization framework for multi-view stereo. Our method avoids the need for heuristic approaches that edit a depth-map in a sequential process, and avoids requiring the user to accurately segment object boundaries or to directly model geometry. We show how with a small amount of intuitive input, a user may create improved depth maps in challenging cases for multi-view-stereo.
PMBP: PatchMatch Belief Propagation for Correspondence Field Estimation F. Besse, A. W. Fitzgibbon, C. Rother, J. Kautz International Journal of Computer Vision 110(1), October 2014, pages 2-13
PatchMatch (PM) is a simple, yet very powerful and successful method for optimizing continuous labelling problems. The algorithm has two main ingredients: the update of the solution space by sampling and the use of the spatial neighbourhood to propagate samples. We show how these ingredients are related to steps in a specific form of belief propagation (BP) in the continuous space, called max-product particle BP (MP-PBP). However, MP-PBP has thus far been too slow to allow complex state spaces. In the case where all nodes share a common state space and the smoothness prior favours equal values, we show that unifying the two approaches yields a new algorithm, PMBP, which is more accurate than PM and orders of magnitude faster than MP-PBP. To illustrate the benefits of our PMBP method we have built a new stereo matching algorithm with unary terms which are borrowed from the recent PM Stereo work and novel realistic pairwise terms that provide smoothness. We have experimentally verified that our method is an improvement over state-of-the-art techniques at the sub-pixel accuracy level.
Highly Overparameterized Optical Flow Using PatchMatch Belief Propagation M. Hornacek, F. Besse, J. Kautz, A. Fitzgibbon, C. Rother European Conference on Computer Vision (ECCV) 2014 September 2014, pages 220-234
Motion in the image plane is ultimately a function of 3D motion in space. We propose to compute optical flow using what is ostensibly an extreme overparameterization: depth, surface normal, and frame-to-frame 3D rigid body motion at every pixel, giving a total of 9 DoF. The advantages of such an overparameterization are twofold: first, geometrically meaningful reasoning can be called upon in the optimization, reflecting possible 3D motion in the underlying scene; second, the 'fronto-parallel' assumption implicit in the use of traditional matching pixel windows is ameliorated because the parameterization determines a plane-induced homography at every pixel. We show that optimization over this high-dimensional, continuous state space can be carried out using an adaptation of the recently introduced PatchMatch Belief Propagation (PMBP) energy minimization algorithm, and that the resulting flow fields compare favorably to the state of the art on a number of small- and large-displacement datasets.
Fast Local Laplacian Filters: Theory and Applications M. Aubry, S. Paris, S. Hasinoff, J. Kautz, F. Durand ACM Transactions on Graphics (Presented at SIGGRAPH 2014) 33(5), August 2014, pages 167:1-167:15 PDF WWW
Multi-scale manipulations are central to image editing but they are also prone to halos. Achieving artifact-free results requires sophisticated edge-aware techniques and careful parameter tuning. These shortcomings were recently addressed by the local Laplacian filters, which can achieve a broad range of effects using standard Laplacian pyramids. However, these filters are slow to evaluate and their relationship to other approaches is unclear. In this paper, we show that they are closely related to anisotropic diffusion and to bilateral filtering. Our study also leads to a variant of the bilateral filter that produces cleaner edges while retaining its speed. Building upon this result, we describe an acceleration scheme for local Laplacian filters on gray-scale images that yields speed-ups on the order of 50x. Finally, we demonstrate how to use local Laplacian filters to alter the distribution of gradients in an image. We illustrate this property with a robust algorithm for photographic style transfer.
Learning a Manifold of Fonts N. Campbell, J. Kautz 33(4), August 2014, pages 91:1-91:11
The design and manipulation of typefaces and fonts is an area requiring substantial expertise; it can take many years of study to become a proficient typographer. At the same time, the use of typefaces is ubiquitous; there are many users who, while not experts, would like to be more involved in tweaking or changing existing fonts without suffering the learning curve of professional typography packages. Given the wealth of fonts that are available today, we would like to exploit the expertise used to produce these fonts, and to enable everyday users to create, explore, and edit fonts. To this end, we build a generative manifold of standard fonts. Every location on the manifold corresponds to a unique and novel typeface, and is obtained by learning a non-linear mapping that intelligently interpolates and extrapolates existing fonts. Using the manifold, we can smoothly interpolate and move between existing fonts. We can also use the manifold as a constraint that makes a variety of new applications possible. For instance, when editing a single character, we can update all the other glyphs in a font simultaneously to keep them compatible with our changes.
At the same time, the use of typefaces is ubiquitous; there are many users who, while not experts, would like to be more involved in tweaking or changing existing fonts without suffering the learning curve of professional typography packages. Given the wealth of fonts that are available today, we would like to exploit the expertise used to produce these fonts, and to enable everyday users to create, explore, and edit fonts. To this end, we build a generative manifold of standard fonts. Every location on the manifold corresponds to a unique and novel typeface, and is obtained by learning a non-linear mapping that intelligently interpolates and extrapolates existing fonts. Using the manifold, we can smoothly interpolate and move between existing fonts. We can also use the manifold as a constraint that makes a variety of new applications possible. For instance, when editing a single character, we can update all the other glyphs in a font simultaneously to keep them compatible with our changes. Cascaded Displays: Spatiotemporal Superresolution using Offset Pixel Layers F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, D. Luebke 33(4), August 2014, pages 60:1--60:11 We demonstrate that layered spatial light modulators (SLMs), subject to fixed lateral displacements and refreshed at staggered intervals, can synthesize images with greater spatiotemporal resolution than that afforded by any single SLM used in their construction. Dubbed cascaded displays, such architectures enable superresolution flat panel displays (e.g., using thin stacks of liquid crystal displays (LCDs)) and digital projectors (e.g., relaying the image of one SLM onto another). We introduce a comprehensive optimization framework, leveraging non-negative matrix and tensor factorization, that decomposes target images and videos into multi-layered, time-multiplexed attenuation patterns—offering a flexible trade-off between apparent image brightness, spatial resolution, and refresh rate. Through this analysis, we develop a real-time dual-layer factorization method that quadruples spatial resolution and doubles refresh rate. Compared to prior superresolution displays, cascaded displays place fewer restrictions on the hardware, offering thin designs without moving parts or the necessity of temporal multiplexing. Furthermore, cascaded displays are the first use of multi-layer displays to increase apparent temporal resolution. We validate these concepts using two custom-built prototypes: a dual-layer LCD and a dual-modulation liquid crystal on silicon (LCoS) projector, with the former emphasizing head-mounted display (HMD) applications. Error Analysis of Estimators that Use Combinations of Stochastic Sampling Strategies for Direct Illumination K. Subr, D. Nowrouzezahrai, W. Jarosz, J. Kautz, K. Mitchell Computer Graphics Forum (Proceedings EGSR 2014) 33(4), July 2014, pages 93-102 We present a theoretical analysis of error of combinations of Monte Carlo estimators used in image synthesis. Importance sampling and multiple importance sampling are popular variance-reduction strategies. Unfortunately, neither strategy improves the rate of convergence of Monte Carlo integration. Jittered sampling (a type of stratified sampling), on the other hand is known to improve the convergence rate. Most rendering software optimistically combine importance sampling with jittered sampling, hoping to achieve both. We derive the exact error of the combination of multiple importance sampling with jittered sampling. 
Error Analysis of Estimators that Use Combinations of Stochastic Sampling Strategies for Direct Illumination K. Subr, D. Nowrouzezahrai, W. Jarosz, J. Kautz, K. Mitchell Computer Graphics Forum (Proceedings EGSR 2014) 33(4), July 2014, pages 93-102 We present a theoretical analysis of the error of combinations of Monte Carlo estimators used in image synthesis. Importance sampling and multiple importance sampling are popular variance-reduction strategies. Unfortunately, neither strategy improves the rate of convergence of Monte Carlo integration. Jittered sampling (a type of stratified sampling), on the other hand, is known to improve the convergence rate. Most rendering software optimistically combines importance sampling with jittered sampling, hoping to achieve both. We derive the exact error of the combination of multiple importance sampling with jittered sampling. In addition, we demonstrate a further benefit to the convergence rate from introducing negative correlations (antithetic sampling) between estimates. As with importance sampling, antithetic sampling is known to reduce error for certain classes of integrands without affecting the convergence rate. In this paper, our analysis and experiments reveal that importance and antithetic sampling, if used judiciously and in conjunction with jittered sampling, may improve convergence rates. We show the impact of such combinations of strategies on the convergence rate of estimators for direct illumination. Hierarchical Subquery Evaluation for Active Learning on a Graph O. Mac Aodha, N. Campbell, J. Kautz, G. Brostow To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent just between runs on the same dataset. We propose perplexity-based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability, and to release the potential of Expected Error Reduction. Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget. Low-Cost Subpixel Rendering for Diverse Displays T. Engelhardt, T.-W. Schmidt, J. Kautz, C. Dachsbacher Subpixel rendering increases the apparent display resolution by taking into account the subpixel structure of a given display. In essence, each subpixel is addressed individually, allowing the underlying signal to be sampled more densely. Unfortunately, naïve subpixel sampling introduces color aliasing, as each subpixel only displays a specific color (usually R, G, and B subpixels are used). As previous work has shown, chromatic aliasing can be reduced significantly by taking the sensitivity of the human visual system into account. In this work, we find optimal filters for subpixel rendering for a diverse set of 1D and 2D subpixel layout patterns. We demonstrate that these optimal filters can be approximated well with analytical functions. We incorporate our filters into GPU-based multisample antialiasing to yield subpixel rendering at a very low cost (1-2 ms filtering time at HD resolution). We also show that texture filtering can be adapted to perform efficient subpixel rendering. Finally, we analyze the findings of a user study we performed, which underpin the increased visual fidelity that our optimal filters achieve for diverse display layouts.
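To make the stratification argument in the two sampling-analysis entries above concrete, this toy experiment estimates a 1D integral with plain uniform sampling versus jittered (one sample per stratum) sampling and compares the empirical variance of the estimators; the integrand is an arbitrary smooth example chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x) ** 2          # smooth test integrand on [0, 1]

uniform = lambda n: rng.random(n)
jittered = lambda n: (np.arange(n) + rng.random(n)) / n  # one sample per stratum

def estimator_variance(sampler, n, trials=2000):
    return np.var([f(sampler(n)).mean() for _ in range(trials)])

for n in (16, 64, 256):
    print(n, estimator_variance(uniform, n), estimator_variance(jittered, n))
# Uniform variance falls like O(1/n); jittered falls much faster for smooth
# integrands, which is the convergence-rate improvement discussed above.
```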
Bitmap Movement Detection: HDR for Dynamic Scenes F. Pece, J. Kautz Journal of Virtual Reality and Broadcasting 10(2), December 2013, pages 1-13 (extended CVMP 2010 paper) Exposure Fusion and other HDR techniques generate well-exposed images from a bracketed image sequence while reproducing a large dynamic range that far exceeds the dynamic range of a single exposure. Common to all these techniques is the problem that the smallest movements in the captured images generate artefacts (ghosting) that dramatically affect the quality of the final images. This limits the use of HDR and Exposure Fusion techniques because common scenes of interest are usually dynamic. We present a method that adapts Exposure Fusion, as well as standard HDR techniques, to allow for dynamic scenes without introducing artefacts. Our method detects clusters of moving pixels within a bracketed exposure sequence with simple binary operations. We show that the proposed technique is able to deal with a large amount of movement in the scene and different movement configurations. The result is a ghost-free and highly detailed exposure-fused image at a low computational cost. 3D-Printing Spatially Varying BRDFs O. Roullier, B. Bickel, J. Kautz, W. Matusik, M. Alexa IEEE Computer Graphics and Applications 33(6), November/December 2013, pages 48-57 A new method fabricates custom surface reflectance and spatially varying bidirectional reflectance distribution functions (svBRDFs). Researchers optimize a microgeometry for a range of normal distribution functions and simulate the resulting surface's effective reflectance. Using the simulation's results, they reproduce an input svBRDF's appearance by distributing the microgeometry on the printed material's surface. This method lets people print svBRDFs on planar samples with current 3D printing technology, even with a limited set of printing materials. It extends naturally to printing svBRDFs on arbitrary shapes. The Shading Probe: Fast Appearance Acquisition for Mobile AR D. A. Calian, K. Mitchell, D. Nowrouzezahrai, J. Kautz ACM SIGGRAPH Asia 2013 Technical Briefs November 2013, pages 20:1-20:4 The ubiquity of mobile devices with powerful processors and integrated video cameras is re-opening the discussion on practical augmented reality (AR). Despite this technological convergence, several issues prevent reliable and immersive AR on these platforms. We address one such problem: the shading of virtual objects and the determination of lighting that remains consistent with the surrounding environment. We design a novel light probe and exploit its structure to permit an efficient reformulation of the rendering equation that is suitable for fast shading on mobile devices. Unlike prior approaches, our shading probe directly captures the shading, and not the incident light, in a scene. As such, we avoid costly and unreliable radiometric calibration and side-step the need for complex shading algorithms. Moreover, we can tailor the shading probe's structure to better handle common lighting scenarios, such as outdoor settings. We achieve high-performance shading of virtual objects in an AR context, incorporating plausible local global-illumination effects, on mobile platforms.
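A minimal sketch in the spirit of the Bitmap Movement Detection entry above: median-threshold bitmaps make differently exposed frames comparable, and binary operations then flag pixels whose bitmap value disagrees across the bracket. The thresholding and voting rule here are illustrative simplifications, not the paper's exact algorithm.

```python
import numpy as np

def median_threshold_bitmap(gray, tol=4):
    """Binarize around the median so the bitmap is roughly exposure-invariant;
    pixels within +/- tol of the median are marked unreliable."""
    med = np.median(gray)
    bitmap = gray > med
    valid = np.abs(gray.astype(np.int32) - med) > tol
    return bitmap, valid

def movement_mask(frames):
    """Mark pixels whose bitmap value disagrees across a bracketed sequence."""
    bitmaps, valids = zip(*(median_threshold_bitmap(f) for f in frames))
    b, v = np.stack(bitmaps), np.stack(valids)
    all_set = np.all(b | ~v, axis=0)      # every valid frame says 'above median'
    all_clear = np.all(~b | ~v, axis=0)   # every valid frame says 'below median'
    return ~(all_set | all_clear)         # disagreement => candidate movement

# Clusters of True pixels in the returned mask would then be grown and
# filled from a single exposure before fusing the bracket.
```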
Video Collections in Panoramic Contexts J. Tompkin, F. Pece, R. Shah, S. Izadi, J. Kautz, C. Theobalt ACM Symposium on User Interface Software and Technology (UIST) 2013 Video collections of places show contrasts and changes in our world, but current interfaces to video collections make it hard for users to explore these changes. Recent state-of-the-art interfaces attempt to solve this problem for 'outside➞in' collections, but cannot connect 'inside➞out' collections of the same place which do not visually overlap. We extend the focus+context paradigm to create a video-collections+context interface by embedding videos into a panorama. We build a spatio-temporal index and tools for fast exploration of the space and time of the video collection. We demonstrate the flexibility of our representation with interfaces for desktop and mobile flat displays, and for a spherical display with joypad and tablet controllers. We study with users the effect of our video-collections+context system on spatio-temporal localization tasks, and find significant improvements to accuracy and completion time in visual search tasks compared to existing systems. We measure the usability of our interface with the System Usability Scale (SUS) and task-specific questionnaires, and find that our system scores higher. On Visual Realism of Synthesized Imagery E. Reinhard, A. Efros, J. Kautz, H.-P. Seidel Proceedings of the IEEE 101(9), September 2013, pages 1998-2007 Traditionally, computer graphics has been concerned with producing imagery that is as physically accurate as possible. But accurate physical simulation of geometry, lighting, and material properties of a visual scene can be cumbersome and time consuming. At the same time, human vision is far from accurate, which offers an enormous opportunity to create imagery at a reduced computational cost as well as with less reliance on human modelers. As a result, a recent trend is toward accepting perceptual plausibility instead of physical accuracy as a guiding principle in the design of modeling and rendering systems. This requires us to understand visual realism, which involves both learning statistical regularities of the world, for instance, by employing huge amounts of data, as well as humans' visual perception of it. This paper addresses issues related to understanding realism, presents several applications, and discusses what this interesting approach may lead to in the future. Preference and Artifact Analysis for Video Collections of Places J. Tompkin, M. H. Kim, K. I. Kim, J. Kautz, C. Theobalt ACM Transactions on Applied Perception (Presented at ACM SAP) 10(3), August 2013, pages 13:1-13:19 Emerging interfaces for video collections of places attempt to link similar content with seamless transitions. However, the automatic computer vision techniques that enable these transitions have many failure cases which lead to artifacts in the final rendered transition. Under these conditions, which transitions are preferred by participants and which artifacts are most objectionable? We perform an experiment with participants comparing seven transition types, from movie cuts and dissolves to image-based warps and virtual camera transitions, across five scenes in a city. We condition this experiment on slight and considerable view-change cases, and we analyze the feedback from participants to find their preferences for transition types and artifacts. We discover that transition preference varies with view change, that automatically rendered transitions are significantly preferred even with some artifacts, and that dissolve transitions are comparable to less-sophisticated rendered transitions. This leads to insights into what visual features are important to maintain in a rendered transition, and to an artifact ordering within our transitions. Fourier Analysis of Stochastic Sampling Strategies for Assessing Bias and Variance in Integration K. Subr, J. Kautz Each pixel in a photorealistic, computer-generated picture is calculated by approximately integrating all the light arriving at the pixel from the virtual scene.
A common strategy to calculate these high-dimensional integrals is to average the estimates at stochastically sampled locations. The strategy with which the sampled locations are chosen is of utmost importance in deciding the quality of the approximation, and hence of the rendered image. We derive connections between the spectral properties of stochastic sampling patterns and the first- and second-order statistics of estimates of integration using the samples. Our equations provide insight into the assessment of stochastic sampling strategies for integration. We show that the amplitude of the expected Fourier spectrum of sampling patterns is a useful indicator of the bias when used in numerical integration. We deduce that estimator variance is directly dependent on the variance of the sampling spectrum over multiple realizations of the sampling pattern. We then analyse Gaussian jittered sampling, a simple variant of jittered sampling, which allows a smooth trade-off of bias for variance in uniform (regular grid) sampling. We verify our predictions using spectral measurement, quantitative integration experiments, and qualitative comparisons of rendered images. Content-adaptive Lenticular Prints J. Tompkin, S. Heinzle, J. Kautz, W. Matusik Lenticular prints are a popular medium for producing automultiscopic glasses-free 3D images. Traditionally, the light field emitted by such prints has a fixed spatial and angular resolution, a trade-off which is defined by the width of the individual lenslets as well as the number of pixels underneath each of those lenslets. We increase both perceived angular and spatial resolution by modifying the lenslet array to better match the content of a given light field. Our optimization algorithm analyzes the input light field and computes an optimal lenslet size, shape, and arrangement that best matches the input light field given a set of output parameters. The resulting lenticular print shows higher detail and smoother motion parallax compared to fixed-size lens arrays. We demonstrate our technique using rendered simulations and by 3D printing lens arrays, and we validate our approach in simulation with a user study. Fully-Connected CRFs with Non-Parametric Pairwise Potentials N. Campbell, K. Subr, J. Kautz IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2013 Conditional Random Fields (CRFs) are used for diverse tasks, ranging from image denoising to object recognition. For images, they are commonly defined as a graph with nodes corresponding to individual pixels and pairwise links that connect nodes to their immediate neighbors. Recent work has shown that fully-connected CRFs, where each node is connected to every other node, can be solved efficiently under the restriction that the pairwise term is a Gaussian kernel over a Euclidean feature space. In this paper, we generalize the pairwise terms to a non-linear dissimilarity measure that is not required to be a distance metric. To this end, we use an efficient embedding technique to estimate an approximate Euclidean feature space, in which the pairwise term can still be expressed as a Gaussian kernel. We demonstrate that the use of non-parametric models for the pairwise interactions, conditioned on the input data, greatly increases expressive power whilst maintaining efficient inference.
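To show the machinery behind the fully-connected CRF entry above, here is a deliberately naive O(N²) mean-field sketch with a Gaussian pairwise kernel over per-pixel feature vectors. Real implementations replace the dense kernel product with fast high-dimensional filtering; all parameter names here are illustrative.

```python
import numpy as np

def mean_field_dense_crf(unary, feats, compat, bandwidth=1.0, iters=5):
    """unary: (N, L) negative log-probabilities; feats: (N, D) per-pixel
    features; compat: (L, L) label compatibility (e.g., Potts)."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian pairwise kernel
    np.fill_diagonal(K, 0.0)                   # no self-interaction
    Q = np.exp(-unary); Q /= Q.sum(1, keepdims=True)
    for _ in range(iters):
        msg = K @ Q                            # aggregate neighbor beliefs
        E = unary + msg @ compat.T             # unary + pairwise energy
        Q = np.exp(-E); Q /= Q.sum(1, keepdims=True)
    return Q

rng = np.random.default_rng(0)
N, L = 50, 3
Q = mean_field_dense_crf(-np.log(rng.dirichlet(np.ones(L), N)),
                         rng.random((N, 2)), compat=1.0 - np.eye(L))
```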
Accurate Binary Image Selection from Inaccurate User Input K. Subr, S. Paris, C. Soler, J. Kautz Computer Graphics Forum (Proceedings Eurographics 2013) 32(2), May 2013, pages 41-50 Selections are central to image editing, e.g., they are the starting point of common operations such as copy-pasting and local edits. Creating them by hand is particularly tedious, and scribble-based techniques have been introduced to assist the process. By interpolating a few strokes specified by users, these methods generate precise selections. However, most of the algorithms assume a 100% accurate input, and even small inaccuracies in the scribbles often degrade the selection quality, which imposes an additional burden on users. In this paper, we propose a selection technique tolerant to input inaccuracies. We use a dense conditional random field (CRF) to robustly infer a selection from possibly inaccurate input. Further, we show that patch-based pixel similarity functions yield more precise selections than simple point-wise metrics. However, efficiently solving a dense CRF is only possible in low-dimensional Euclidean spaces, and the metrics that we use are high-dimensional and often non-Euclidean. We address this challenge by embedding pixels in a low-dimensional Euclidean space with a metric that approximates the desired similarity function. The results show that our approach performs better than previous techniques and that two options are sufficient to cover a variety of images depending on whether the objects are textured. PanoInserts: Practical Spatial Teleconferencing F. Pece, W. Steptoe, F. Wanner, S. Julier, T. Weyrich, J. Kautz, A. Steed ACM Conference on Human Factors in Computing Systems (CHI) 2013 April 2013, pages 1319-1328 (Best Paper Honorable Mention Award) We present PanoInserts: a teleconferencing system that uses smartphone cameras to create a surround-plus-video representation of meeting places. We take a static panoramic image of a location and insert live video windows from smartphones. We use a combination of marker- and image-based tracking to position the video inserts within the panorama, and transmit this representation to a remote viewer. We report findings from a user study comparing our system against fully panoramic video and conventional webcam video conferencing for two tasks: 1) determining where objects are positioned at a remote location, and 2) instructing a confederate to place objects in the remote location. Results indicate that our system performs comparably to full panoramic video systems and significantly better than standard video conferencing in tasks that require an accurate surround representation of a remote space. We discuss the representational properties and usability of the system for video communication applications. Interactive Viewpoint Video Textures P. Levieux, J. Tompkin, J. Kautz December 2012, pages 11-17 We propose an approach to interactively explore video textures from different viewpoints. Scenes can be played back continuously and in a temporally coherent fashion from any camera location along a path. Our algorithm takes as input short videos from a set of discrete camera locations, and does not require contemporaneous capture — data is acquired by moving a single camera. We analyze this data to find optimal transitions within each video (equivalent to video textures) and to find good transition points between spatially distinct videos. We propose a spatio-temporal view synthesis approach that dynamically creates intermediate frames to maintain temporal coherence.
We demonstrate our approach on a variety of scenes with stochastic or repetitive motions, and we analyze the limits of our approach and failure-case artifacts. Two-frame Stereo Photography in Low-light Settings: A Preliminary Study K. Subr, G. Bradbury, J. Kautz Image-pairs captured using binocular stereo-vision cameras are increasingly used to reconstruct partial 3D information. Matching corresponding points in a left-right image pair is a crucial step in this reconstruction, and one that is both slow and surprisingly fragile. The reconstruction problem is exacerbated by noise or blur in the input images because of the potential ambiguities they introduce in the matching process. For scenes that are poorly illuminated, it is necessary to make a combination of three adjustments: to increase the size of the aperture to allow more light; to increase the duration of exposure; and to increase the sensor gain (ISO). These adjustments potentially introduce defocus, motion blur, and noise, all of which adversely affect reconstruction. We present an exploratory study of their relative effects on reconstruction by comparing the performance of a few reconstruction algorithms over the space of exposures. Beaming: An Asymmetric Telepresence System A. Steed, W. Steptoe, W. Oyekoya, F. Pece, T. Weyrich, J. Kautz, D. Friedman, A. Peer, M. Solazzi, F. Tecchia, M. Bergamasco, M. Slater The Beaming project recreates, virtually, a real environment; using immersive VR, remote participants can visit the virtual model and interact with the people in the real environment. The real environment doesn't need extensive equipment and can be a space such as an office or meeting room, domestic environment, or social space. 3D-Printing of Non-Assembly, Articulated Models J. Calì, D. A. Calian, C. Amati, R. Kleinberger, A. Steed, J. Kautz, T. Weyrich 31(6), November 2012, pages 130:1-130:8 Additive manufacturing (3D printing) is commonly used to produce physical models for a wide variety of applications, from archaeology to design. While static models are directly supported, it is desirable to also be able to print models with functional articulations, such as a hand with joints and knuckles, without the need for manual assembly of joint components. Apart from having to address limitations inherent to the printing process, this poses a particular challenge for articulated models that should be posable: to allow the model to hold a pose, joints need to exhibit internal friction to withstand gravity, without their parts fusing during 3D printing. This has not been possible with previous printable joint designs. In this paper, we propose a method for converting 3D models into printable, functional, non-assembly models with internal friction. To this end, we have designed an intuitive workflow that takes an appropriately rigged 3D model, automatically fits novel 3D-printable and posable joints, and provides an interface for specifying rotational constraints. We show a number of results for different articulated models, demonstrating the effectiveness of our method.
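As a toy version of the within-video transition analysis in the Interactive Viewpoint Video Textures entry above: a video texture scores every frame pair by visual distance and jumps from frame i to frame j when j resembles the frame that would naturally follow i. This is the classic video-textures recipe in miniature, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def transition_candidates(frames, top_k=5, min_jump=4):
    """frames: (T, H, W) grayscale video. Returns the top_k (i, j) pairs where
    playing frame j after frame i is likely to look seamless."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(np.float64)
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).mean(-1)  # (T, T) distances
    scored = []
    for i in range(T - 1):
        for j in range(T):
            if abs(j - (i + 1)) >= min_jump:       # skip trivial continuations
                scored.append((d2[i + 1, j], i, j))  # j should resemble frame i+1
    scored.sort()
    return [(i, j) for _, i, j in scored[:top_k]]
```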
Acting Rehearsal in Collaborative Multimodal Mixed Reality Environments W. Steptoe, J.-M. Normand, O. Oyekoya, F. Pece, E. Giannopoulos, F. Tecchia, A. Steed, T. Weyrich, J. Kautz, M. Slater PRESENCE: Teleoperators and Virtual Environments 21(4), Fall 2012, pages 406-422 This paper presents our experience of using our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the "destination-visitor" paradigm, which we define and motivate. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful. Background Inpainting for Videos with Dynamic Objects and a Free-moving Camera M. Granados, K. I. Kim, J. Tompkin, J. Kautz, C. Theobalt We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, so long as the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. In a final step, intensity differences between sources are smoothed using gradient-domain fusion. Our frame alignment process assumes that the scene can be approximated using piecewise planar geometry: a set of homographies is estimated for each frame pair, and one is selected for aligning each pixel such that the color discrepancy is minimized and the epipolar constraints are maintained. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimation of absolute camera positions and per-frame per-pixel depth maps.
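The final smoothing step in the Background Inpainting entry above uses gradient-domain fusion. Below is a minimal 1D illustration with a Jacobi iteration: the filled region keeps the source's gradients while its boundary values are pinned to the target, which removes the intensity seam. The solver and all names are illustrative.

```python
import numpy as np

def gradient_domain_blend_1d(target, source, lo, hi, iters=5000):
    """Rewrite target[lo:hi] so it follows source's gradients there, with
    boundary values pinned to target[lo-1] and target[hi] (1 <= lo < hi < len-1)."""
    out = target.astype(np.float64).copy()
    g = np.diff(source.astype(np.float64))      # desired gradients
    x = out[lo:hi].copy()
    for _ in range(iters):                      # Jacobi sweeps on the 1D Poisson eq.
        left = np.concatenate(([out[lo - 1]], x[:-1]))
        right = np.concatenate((x[1:], [out[hi]]))
        x = 0.5 * (left + right + g[lo - 1:hi - 1] - g[lo:hi])
    out[lo:hi] = x
    return out

t = np.linspace(0.0, 1.0, 100)
target = 10.0 + 5.0 * t                          # dark, smooth background
source = 40.0 + np.sin(10.0 * t)                 # bright, textured source
blended = gradient_domain_blend_1d(target, source, 30, 70)
# 'blended' keeps the source's wiggles inside [30, 70) but shows no seam.
```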
Match Graph Construction for Large Image Databases K. I. Kim, J. Tompkin, M. Theobald, J. Kautz, C. Theobalt How best to efficiently establish correspondence among a large set of images or video frames is an interesting unanswered question. For large databases, the high computational cost of performing pair-wise image matching is a major problem. However, for many applications, images are inherently sparsely connected, and so current techniques try to correctly estimate small potentially matching subsets of databases upon which to perform expensive pair-wise matching. Our contribution is to pose the identification of potential matches as a link prediction problem in an image correspondence graph, and to propose an effective algorithm to solve this problem. Our algorithm facilitates incremental image matching: initially, the match graph is very sparse, but it becomes dense as we alternate between link prediction and verification. We demonstrate the effectiveness of our algorithm by comparing it with several existing alternatives on large-scale databases. Our resulting match graph is useful for many different applications. As an example, we show the benefits of our graph construction method to a label propagation application which propagates user-provided sparse object labels to other instances of that object in large image collections. PMBP: PatchMatch Belief Propagation for Correspondence Field Estimation F. Besse, C. Rother, A. Fitzgibbon, J. Kautz British Machine Vision Conference (BMVC) 2012 September 2012, pages 132:1-132:11 (Best Industrial Impact Award) PatchMatch is a simple, yet very powerful and successful method for optimizing continuous labelling problems. The algorithm has two main ingredients: the update of the solution space by sampling and the use of the spatial neighbourhood to propagate samples. We show how these ingredients are related to steps in a specific form of belief propagation in the continuous space, called Particle Belief Propagation (PBP). However, PBP has thus far been too slow to allow complex state spaces. We show that unifying the two approaches yields a new algorithm, PMBP, which is more accurate than PatchMatch and orders of magnitude faster than PBP. To illustrate the benefits of our PMBP method, we have built a new stereo matching algorithm with unary terms which are borrowed from the recent PatchMatch Stereo work and novel realistic pairwise terms that provide smoothness. We have experimentally verified that our method is an improvement over state-of-the-art techniques at the sub-pixel accuracy level. Videoscapes: Exploring Sparse, Unstructured Video Collections J. Tompkin, K. I. Kim, J. Kautz, C. Theobalt The abundance of mobile devices and digital cameras with video capture makes it easy to obtain large collections of video clips that contain the same location, environment, or event. However, such an unstructured collection is difficult to comprehend and explore. We propose a system that analyzes collections of unstructured but related video data to create a Videoscape: a data structure that enables interactive exploration of video collections by visually navigating - spatially and/or temporally - between different clips. We automatically identify transition opportunities, or portals. From these portals, we construct the Videoscape, a graph whose edges are video clips and whose nodes are portals between clips. Now structured, the videos can be interactively explored by walking the graph or by geographic map. Given this system, we gauge preference for different video transition styles in a user study, and generate heuristics that automatically choose an appropriate transition style. We evaluate our system using three further user studies, which allows us to conclude that Videoscapes provides significant benefits over related methods. Our system leads to previously unseen ways of interactive spatio-temporal exploration of casually captured videos, and we demonstrate this on several video collections.
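A small sketch of the link-prediction idea in the Match Graph Construction entry above: given a sparse set of verified matches, rank unverified image pairs by how many verified neighbors they share, and send only the top-ranked pairs to expensive pairwise matching. The common-neighbor score is a standard link-prediction baseline, used here in place of the paper's predictor.

```python
def predict_links(edges, num_nodes, top_k=10):
    """edges: iterable of verified (i, j) matches. Rank absent pairs by the
    number of shared verified neighbors (common-neighbor score)."""
    neigh = [set() for _ in range(num_nodes)]
    for i, j in edges:
        neigh[i].add(j)
        neigh[j].add(i)
    scored = []
    for i in range(num_nodes):
        for j in range(i + 1, num_nodes):
            if j not in neigh[i]:
                score = len(neigh[i] & neigh[j])
                if score > 0:
                    scored.append((score, i, j))
    scored.sort(reverse=True)
    return [(i, j) for _, i, j in scored[:top_k]]

# Incremental matching alternates: verify the predicted pairs, add the
# confirmed ones to `edges`, and predict again as the graph densifies.
```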
Interactive Light-Field Painting J. Tompkin, S. Muff, S. Jakuschevskij, J. McCann, J. Kautz, M. Alexa, W. Matusik ACM SIGGRAPH 2012 — Emerging Technologies Since Sutherland's seminal SketchPad work in 1964, direct interaction with computers has been compelling: we can directly touch, move, and change what we see. Direct interaction is a major contribution to the success of smartphones and tablets, but the world is not flat. While existing technologies can display realistic multi-view stereoscopic 3D content reasonably well, interaction within the same 3D space often requires extensive additional hardware. This project presents a cheap and easy system that uses the same lenslet array for both multi-view autostereoscopic display and 3D light-pen position sensing. The display provides multi-user, glasses-free autostereoscopic viewing with motion parallax. A single near-infrared camera located behind the lenslet array is used to track a light pen held by the user. Full 3D position tracking is accomplished by analysing the pattern produced when light from the pen shines through the lenslet array. This light pen can be used to directly draw into a displayed light field, or as input for object manipulation or defining parametric lines. The system has a number of advantages. First, it inexpensively provides both multi-view autostereoscopic display and 3D sensing with 1:1 mapping. A review of the literature indicates that this has not been offered in previous interactive content-creation systems. Second, because the same lenslet array provides both 3D display and 3D sensing, the system design is extremely simple, inexpensive, and easy to build and calibrate. The demo at SIGGRAPH 2012 shows a variety of interesting interaction styles with a prototype implementation: freehand drawing, polygonal and parametric line drawing, model manipulation, and model editing. Interactive Multi-perspective Imagery from Photos and Videos H. Lieng, J. Tompkin, J. Kautz 31(2), May 2012, pages 285-293 Photographs usually show a scene from a single perspective. However, as commonly seen in art, scenes and objects can be visualized from multiple perspectives. Making such images manually is time consuming and tedious. We propose a novel system for designing multi-perspective images and videos. First, the images in the input sequence are aligned using structure from motion. This enables us to track feature points across the sequence. Second, the user chooses portal polygons in a target image into which different perspectives are to be embedded. The corresponding image regions from the other images are then copied into these portals. Due to the feature tracking and automatic warping, this approach is considerably faster than current tools. We explore a wide range of artistic applications using our system with image and video data, such as looking around corners and up and down staircases, recursive multi-perspective imaging, cubism, and panoramas. How Not to Be Seen — Object Removal from Videos of Crowded Scenes M. Granados, J. Tompkin, K. Kim, O. Grau, J. Kautz, C. Theobalt Removing dynamic objects from videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video completion that can deal with complex scenes containing dynamic background and non-periodically moving objects. We build upon the idea that the spatio-temporal hole left by a removed object can be filled with data available in other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters, and that has the desirable convergence properties for a graph-cut-based optimization.
We provide an interface to guide the completion process that both reduces computation time and allows for efficient correction of small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are more difficult than those demonstrated by existing methods. State of the Art in Interactive Global Illumination T. Ritschel, T. Grosch, C. Dachsbacher, J. Kautz The interaction of light and matter in the world surrounding us is of striking complexity and beauty. Since the very beginning of computer graphics, adequate modeling of these processes and their efficient computation have been an intensively studied research topic and are still not a solved problem. The inherent complexity stems from the underlying physical processes as well as the global nature of the interactions that let light travel within a scene. This article reviews the state of the art in interactive global illumination computation, that is, methods that generate an image of a virtual scene in less than one second with an as-exact-as-possible, or plausible, solution to the light transport. Additionally, the theoretical background and attempts to classify the broad field of methods are described. The strengths and weaknesses of different approaches, when applied to the different visual phenomena arising from light interaction, are compared and discussed. Finally, the article concludes by highlighting design patterns for interactive global illumination and a list of open problems. Towards Moment Imagery: Automatic Cinemagraphs J. Tompkin, F. Pece, K. Subr, J. Kautz The imagination of the online photographic community has recently been sparked by so-called cinemagraphs: short, seamlessly looping animated GIF images created from video in which only parts of the image move. These cinemagraphs capture the dynamics of one particular region in an image for dramatic effect, and provide the creator with control over what part of a moment to capture. We create a cinemagraph authoring tool combining video motion stabilisation, segmentation, interactive motion selection, motion loop detection and selection, and cinemagraph rendering. Our work pushes toward the easy and versatile creation of moments that cannot be represented with still imagery. Adapting Standard Video Codecs for Depth Streaming F. Pece, J. Kautz, T. Weyrich Joint Virtual Reality Conference (JVRC) 2011 September 2011, pages 1-8 Cameras that can acquire a continuous stream of depth images are now commonly available, for instance the Microsoft Kinect. It may seem that one should be able to stream these depth videos using standard video codecs, such as VP8 or H.264. However, the quality degrades considerably as the compression algorithms are geared towards standard three-channel (8-bit) colour video, whereas depth videos are single-channel but have a higher bit depth. We present a novel encoding scheme that efficiently converts the single-channel depth images to standard 8-bit three-channel images, which can then be streamed using standard codecs. Our encoding scheme ensures that the compression affects the depth values as little as possible. We show results obtained using two common video encoders (VP8 and H.264) as well as the results obtained when using JPEG compression. The results indicate that our encoding scheme performs much better than simpler methods.
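To make the depth-streaming entry above concrete, here is the naive baseline it improves on: packing a 16-bit depth map into an 8-bit three-channel image by a most/least-significant-byte split. This round-trips losslessly, but the LSB channel looks like noise to a codec and compresses poorly, which is exactly why a carefully shaped encoding performs better.

```python
import numpy as np

def encode_depth(depth16):
    """Naive packing: MSB, LSB, and an unused zero channel."""
    msb = (depth16 >> 8).astype(np.uint8)
    lsb = (depth16 & 0xFF).astype(np.uint8)
    return np.stack([msb, lsb, np.zeros_like(msb)], axis=-1)

def decode_depth(rgb):
    return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1].astype(np.uint16)

depth = np.random.randint(0, 2 ** 16, size=(4, 4), dtype=np.uint16)
assert np.array_equal(decode_depth(encode_depth(depth)), depth)  # lossless round trip
# After lossy video compression, a small error in the MSB channel jumps the
# decoded depth by 256 units; robust schemes spread depth across channels instead.
```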
Video-based Characters – Creating New Human Performances from a Multi-view Video Database F. Xu, Y. Liu, C. Stoll, J. Tompkin, G. Bharaj, Q. Dai, H.-P. Seidel, J. Kautz, C. Theobalt We present a method to synthesize plausible video sequences of humans according to user-defined body motions and viewpoints. We first capture a small database of multi-view video sequences of an actor performing various basic motions. This database needs to be captured only once and serves as the input to our synthesis algorithm. We then apply a marker-less model-based performance capture approach to the entire database to obtain pose and geometry of the actor in each database frame. To create novel video sequences of the actor from the database, a user animates a 3D human skeleton with novel motion and viewpoints. Our technique then synthesizes a realistic video sequence of the actor performing the specified motion based only on the initial database. The first key component of our approach is a new efficient retrieval strategy to find appropriate spatio-temporally coherent database frames from which to synthesize target video frames. The second key component is a warping-based texture synthesis approach that uses the retrieved most-similar database frames to synthesize spatio-temporally coherent target video frames. For instance, this enables us to easily create video sequences of actors performing dangerous stunts without them being placed in harm's way. We show through a variety of result videos and a user study that we can synthesize realistic videos of people, even if the target motions and camera views are different from the database content. Edge-Aware Color Appearance M. H. Kim, T. Ritschel, J. Kautz ACM Transactions on Graphics (Presented at ACM SIGGRAPH 2011) 30(2), April 2011, pages 13:1-13:9 Color perception is recognized to vary with surrounding spatial structure, but the impact of edge smoothness on color has not been studied in color appearance modeling. In this work, we study the appearance of color under different degrees of edge smoothness. A psychophysical experiment was conducted to quantify the change in perceived lightness, colorfulness, and hue with respect to edge smoothness. We confirm that color appearance, in particular lightness, changes noticeably with increased smoothness. Based on our experimental data, we have developed a computational model that predicts this appearance change. The model can be integrated into existing color appearance models. We demonstrate the applicability of our model on a number of examples. Display-aware Image Editing W.-K. Jeong, K. Johnson, I. Yu, J. Kautz, H. Pfister, S. Paris IEEE International Conference on Computational Photography (ICCP) 2011 April 2011, pages 1-8 We describe a set of image editing and viewing tools that explicitly take into account the resolution of the display on which the image is viewed. Our approach is twofold. First, we design editing tools that process only the visible data, which is particularly useful for images that are large compared to the display. This encompasses a variety of cases such as multi-image panoramas and high-resolution medical data. While existing techniques cannot run at interactive rates when the image size approaches or exceeds a gigapixel, our algorithms address this challenge by processing only the visible data and being highly data-parallel. Second, we propose an adaptive way to set viewing parameters such as brightness and contrast.
We let the users set different parameter values for different locations and scales, thereby enabling the exploration of various renditions of these large images. We demonstrate the efficiency of our approach on different display and image sizes. Since the computational complexity of rendering a view depends on the display resolution and not on the actual input image resolution, we achieve interactive image editing even on a 16-gigapixel image. Variance Soft Shadow Mapping B. Yang, Z. Dong, J. Feng, H.-P. Seidel, J. Kautz Computer Graphics Forum (Proceedings Pacific Graphics 2010) 29(7), September 2010, pages 2127-2134 We present variance soft shadow mapping (VSSM) for rendering plausible soft shadows in real time. VSSM is based on the theoretical framework of percentage-closer soft shadows (PCSS) and exploits recent advances in variance shadow mapping (VSM). Our new formulation allows for the efficient computation of (average) blocker distances, a common bottleneck in PCSS-based methods. Furthermore, we avoid incorrectly lit pixels commonly encountered in VSM-based methods by appropriately subdividing the filter kernel. We demonstrate that VSSM renders high-quality soft shadows efficiently (usually over 100 fps) for complex scene settings. Its speed is at least one order of magnitude faster than PCSS for large penumbras. Interactive On-Surface Signal Deformation T. Ritschel, T. Thormaehlen, C. Dachsbacher, J. Kautz, H.-P. Seidel 29(4), July 2010, pages 36:1-36:8 We present an interactive system for the artistic control of visual phenomena visible on surfaces. Our method allows the user to intuitively reposition shadows, caustics, and indirect illumination using a simple click-and-drag user interface working directly on surfaces. In contrast to previous approaches, the positions of the lights or objects in the scene remain unchanged, enabling localized edits of individual shading components. Our method facilitates the editing by computing a mapping from one surface location to another. Based on this mapping, we can not only edit shadows, caustics, and indirect illumination but also other surface properties, such as color or texture, in a unified way. This is achieved using an intuitive user interface that allows the user to specify position constraints with drag-and-drop or sketching operations directly on the surface. Our approach requires no explicit surface parametrization and handles scenes with arbitrary topology. We demonstrate the applicability of the approach to interactive editing of shadows, reflections, refractions, textures, caustics, and diffuse indirect light. The effectiveness of the system in achieving an artistic goal is evaluated by a user study.
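A compact sketch of the variance shadow mapping test that the Variance Soft Shadow Mapping entry above builds on: each filtered shadow-map texel stores the mean and mean-square of occluder depth, and Chebyshev's inequality bounds the fraction of unoccluded receivers. Parameter names are illustrative.

```python
import numpy as np

def vsm_visibility(mean_z, mean_z2, receiver_z, min_var=1e-4):
    """Chebyshev upper bound on the probability that a receiver at depth
    receiver_z is lit, given filtered depth moments from the shadow map."""
    var = np.maximum(mean_z2 - mean_z ** 2, min_var)
    d = receiver_z - mean_z
    p_max = var / (var + d ** 2)
    return np.where(d <= 0.0, 1.0, p_max)   # fully lit in front of the occluders

# Because the two moment channels are plain images, they can be box- or
# Gaussian-filtered (even mipmapped) before the test, which is what makes
# VSM-style shadows pre-filterable and average blocker depth cheap to estimate.
```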
Acquisition and Analysis of Bispectral Bidirectional Reflectance and Reradiation Distribution Functions M. Hullin, J. Hanika, B. Ajdin, J. Kautz, H.-P. Seidel, H. Lensch In fluorescent materials, light from a certain band of incident wavelengths is reradiated at longer wavelengths, i.e., with a reduced per-photon energy. While fluorescent materials are common in everyday life, they have received little attention in computer graphics. In particular, no bidirectional reradiation measurements of fluorescent materials have been available so far. In this paper, we extend the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer between wavelengths, resulting in a Bispectral Bidirectional Reflectance and Reradiation Distribution Function (bispectral BRRDF). Using a bidirectional and bispectral measurement setup, we acquire reflectance and reradiation data of a variety of fluorescent materials, including vehicle paints, paper, and fabric, and compare their renderings with RGB, RGB×RGB, and spectral BRDFs. Our acquisition is guided by a principal component analysis on complete bispectral data taken under a sparse set of angles. We show that in order to faithfully reproduce the full bispectral information for all other angles, only a very small number of wavelength pairs needs to be measured at a high angular resolution. Micro-Rendering for Scalable, Parallel Final Gathering T. Ritschel, T. Engelhardt, T. Grosch, H.-P. Seidel, J. Kautz, C. Dachsbacher Recent approaches to global illumination for dynamic scenes achieve interactive frame rates by using coarse approximations to geometry, lighting, or both, which limits scene complexity and rendering quality. High-quality global illumination renderings of complex scenes are still limited to methods based on ray tracing. While conceptually simple, these techniques are computationally expensive. We present an efficient and scalable method to compute global illumination solutions at interactive rates for complex and dynamic scenes. Our method is based on parallel final gathering running entirely on the GPU. At each final gathering location we perform micro-rendering: we traverse and rasterize a hierarchical point-based scene representation into an importance-warped micro-buffer, which allows for BRDF importance sampling. The final reflected radiance is computed at each gathering location using the micro-buffers and is then stored in image-space. We can trade quality for speed by reducing the sampling rate of the gathering locations in conjunction with bilateral upsampling. We demonstrate the applicability of our method to interactive global illumination, the simulation of multiple indirect bounces, and to final gathering from photon maps. Real-time Indirect Illumination with Clustered Visibility Z. Dong, T. Grosch, T. Ritschel, J. Kautz, H.-P. Seidel Vision, Modeling, and Visualization Workshop (VMV) 2009 Visibility computation is often the bottleneck when rendering indirect illumination. However, recent methods based on instant radiosity have demonstrated that accurate visibility is not required for indirect illumination. To exploit this insight, we cluster a large number of virtual point lights, which represent the indirect illumination when using instant radiosity, into a small number of virtual area lights. This allows us to compute visibility using recent real-time soft shadow algorithms. Such approximate and fractional from-area visibility is faster to compute and avoids banding when compared to exact binary from-point visibility. Our results show that the perceptual error of this approximation is negligible and that we achieve real-time frame rates for large and dynamic scenes.
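A toy version of the clustering step in the Clustered Visibility entry above, assuming plain k-means on VPL positions with intensity-weighted centroids; the actual system also accounts for orientation and flux when forming virtual area lights.

```python
import numpy as np

def cluster_vpls(positions, intensities, k=8, iters=20, seed=0):
    """Group virtual point lights into k clusters; each cluster stands in for
    a virtual area light at the intensity-weighted centroid."""
    rng = np.random.default_rng(seed)
    centers = positions[rng.choice(len(positions), k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for c in range(k):
            m = labels == c
            if m.any():
                w = intensities[m] / intensities[m].sum()
                centers[c] = (w[:, None] * positions[m]).sum(0)
    flux = np.array([intensities[labels == c].sum() for c in range(k)])
    return centers, flux, labels

rng = np.random.default_rng(1)
centers, flux, labels = cluster_vpls(rng.random((500, 3)), rng.random(500) + 0.1)
```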
Perceptual Influence of Approximate Visibility in Indirect Illumination I. Yu, A. Cox, M. H. Kim, T. Ritschel, T. Grosch, C. Dachsbacher, J. Kautz ACM Transactions on Applied Perception (Presented at APGV 2009) 6(4), September 2009, pages 24:1-24:14 In this paper we evaluate the use of approximate visibility for efficient global illumination. Traditionally, accurate visibility is used in light transport. However, the indirect illumination we perceive on a daily basis is rarely of a high-frequency nature, as the most significant aspect of light transport in real-world scenes is diffuse, and thus displays a smooth gradation. This raises the question of whether accurate visibility is perceptually necessary in this case. To answer this question, we conduct a psychophysical study on the perceptual influence of approximate visibility on indirect illumination. This study reveals that accurate visibility is not required and that certain approximations may be introduced. Real-Time Global Illumination C. Dachsbacher, J. Kautz ACM SIGGRAPH 2009 August 2009, Courses Global illumination is an important factor in creating realistic scenes and provides visual cues for understanding scene geometry. However, global illumination is very costly, and only recently has it become viable to render scenes with global illumination effects at interactive frame rates by exploiting the parallelism and programmability of modern GPUs. These recent GPU-based algorithms enable the computation of global illumination solutions for fully dynamic scenes and are of interest to both the academic research community and practitioners of interactive computer graphics. In this course, we will give a concise overview of recent GPU-based global illumination techniques that support fully dynamic scenes, compare them, and discuss their various strengths and weaknesses. After introducing the necessary foundation (rendering equation, direct vs. indirect illumination, etc.), we cover the three main streams of real-time global illumination techniques: virtual point lights, screen-space techniques, and hierarchical finite elements. For each sub-topic, we first give a brief overview of the basic idea and continue with recent GPU-based methods sharing the same basic idea. Modeling Human Color Perception under Extended Luminance Levels M. H. Kim, T. Weyrich, J. Kautz 28(3), August 2009, pages 27:1-27:9 Display technology is advancing quickly, with peak luminance increasing significantly, enabling high-dynamic-range displays. However, perceptual color appearance under extended luminance levels has not been studied, mainly due to the unavailability of psychophysical data. Therefore, we conduct a psychophysical study in order to acquire appearance data for many different luminance levels (up to 16,860 cd/m2) covering most of the dynamic range of the human visual system. These experimental data allow us to quantify human color perception under extended luminance levels, yielding a generalized color appearance model. Our proposed appearance model is efficient, accurate, and invertible. It can be used to adapt the tone and color of images to different dynamic ranges for cross-media reproduction while maintaining appearance that is close to human perception. Visio-lization: Generating Novel Facial Images U. Mohammed, S. J. D. Prince, J. Kautz Our goal is to generate novel realistic images of faces using a model trained from real examples. This model consists of two components: first, we consider face images as samples from a texture with spatially varying statistics and describe this texture with a local non-parametric model.
Second, we learn a parametric global model of all of the pixel values. To generate realistic faces, we combine the strengths of both approaches and condition the local non-parametric model on the global parametric model. We demonstrate that with an appropriate choice of local and global models it is possible to reliably generate new realistic face images that do not correspond to any individual in the training data. We extend the model to cope with considerable intra-class variation (pose and illumination). Finally, we apply our model to editing real facial images: we demonstrate image in-painting, interactive techniques for improving synthesized images, and modifying facial expressions. Capturing Multiple Illuminations using Time and Color Multiplexing B. De Decker, J. Kautz, T. Mertens, P. Bekaert Many vision and graphics problems, such as relighting, structured light scanning, and photometric stereo, need images of a scene under a number of different illumination conditions. It is typically assumed that the scene is static. To extend such methods to dynamic scenes, dense optical flow can be used to register adjacent frames. This registration becomes inaccurate if the frame rate is too low with respect to the degree of movement in the scene. We present a general method that extends time multiplexing with color multiplexing in order to better handle dynamic scenes. Our method allows for packing more illumination information into a single frame, thereby reducing the number of required frames over which optical flow must be computed. Moreover, color-multiplexed frames lend themselves better to reliably computing optical flow. We show that our method produces better results compared to time multiplexing alone. We demonstrate its application to relighting, structured light scanning, and photometric stereo in dynamic scenes. Consistent Scene Illumination using a Chromatic Flash M. H. Kim, J. Kautz Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) 2009 May 2009, pages 83-89 Flash photography is commonly used in low-light conditions to prevent noise and blurring artifacts. However, flash photography commonly leads to a mismatch between scene illumination and flash illumination, due to the bluish light that flashes emit. Not only does this change the atmosphere of the original scene illumination, it also makes it difficult to perform white balancing because of the illumination differences. Professional photographers sometimes apply colored gel filters to the flashes in order to match the color temperature. While effective, this is impractical for the casual photographer. We propose a simple but powerful method to automatically match the correlated color temperature of the auxiliary flash light with that of the scene illumination, allowing for well-lit photographs while maintaining the atmosphere of the scene. Our technique consists of two main components. We first estimate the correlated color temperature of the scene, e.g., during image preview. We then adjust the color temperature of the flash to the scene's correlated color temperature, which we achieve by placing a small trichromatic LCD in front of the flash. We demonstrate the effectiveness of this approach with a variety of examples. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography T. Mertens, J. Kautz, F.
Van Reeth 28(1), March 2009, pages 161-171 (extended version of PG'07) We propose a technique for fusing a bracketed exposure sequence into a high-quality image, without converting to HDR first. Skipping the physically-based HDR assembly step simplifies the acquisition pipeline. This avoids camera response curve calibration and is computationally efficient. It also allows for including flash images in the sequence. Our technique blends multiple exposures, guided by simple quality measures like saturation and contrast. This is done in a multi-resolution fashion to account for the brightness variation in the sequence. The resulting image quality is comparable to existing tone mapping operators. Imperfect Shadow Maps for Efficient Computation of Indirect Illumination T. Ritschel, T. Grosch, M. Kim, H.-P. Seidel, C. Dachsbacher, J. Kautz We present a method for interactive computation of indirect illumination in large and fully dynamic scenes based on approximate visibility queries. While the high-frequency nature of direct lighting requires accurate visibility, indirect illumination mostly consists of smooth gradations, which tend to mask errors due to incorrect visibility. We exploit this by approximating visibility for indirect illumination with imperfect shadow maps—low-resolution shadow maps rendered from a crude point-based representation of the scene. These are used in conjunction with a global illumination algorithm based on virtual point lights, enabling indirect illumination of dynamic scenes at real-time frame rates. We demonstrate that imperfect shadow maps are a valid approximation to visibility, which makes the simulation of global illumination an order of magnitude faster than using accurate visibility. Real-Time, All-Frequency Shadows in Dynamic Scenes T. Annen, Z. Dong, T. Mertens, P. Bekaert, H.-P. Seidel, J. Kautz Shadow computation in dynamic scenes under complex illumination is a challenging problem. Methods based on precomputation provide accurate, real-time solutions, but are hard to extend to dynamic scenes. Specialized approaches for soft shadows can deal with dynamic objects but are not fast enough to handle more than one light source. In this paper, we present a technique for rendering dynamic objects under arbitrary environment illumination, which does not require any precomputation. The key ingredient is a fast, approximate technique for computing soft shadows, which achieves several hundred frames per second for a single light source. This allows for approximating environment illumination with a sparse collection of area light sources and yields real-time frame rates. Exponential Shadow Maps T. Annen, T. Mertens, H.-P. Seidel, E. Flerackers, J. Kautz Graphics Interface 2008 May 2008, pages 155-161 Rendering high-quality shadows in real-time is a challenging problem. Shadow mapping has proved to be an efficient solution, as it scales well for complex scenes. However, it suffers from aliasing problems. Filtering the shadow map alleviates aliasing, but unfortunately, native hardware-accelerated filtering cannot be applied, as the shadow test has to take place beforehand. We introduce a simple approach to shadow map filtering, by approximating the shadow test using an exponential function. This enables us to pre-filter the shadow map, which in turn allows for high-quality hardware-accelerated filtering. Compared to previous filtering techniques, our technique is faster, consumes less memory, and produces fewer artifacts.
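A small sketch of the exponential approximation in the Exponential Shadow Maps entry above, assuming the standard ESM formulation: storing exp(c·z) in the map makes the shadow test a product, so any linear filter applied to the map filters the test itself. The sharpness constant c and the clamping are illustrative; over-darkening fixes are omitted.

```python
import numpy as np

C = 80.0  # sharpness constant: larger C approaches the hard binary test

def esm_visibility(filtered_exp_cz, receiver_depth):
    """Approximate shadow test: E[exp(C * z_occluder)] * exp(-C * d_receiver)."""
    return np.clip(filtered_exp_cz * np.exp(-C * receiver_depth), 0.0, 1.0)

depth_map = np.random.rand(8, 8)          # occluder depths in [0, 1]
warped = np.exp(C * depth_map)            # stored, linearly filterable quantity
filtered_tap = warped.mean()              # stand-in for one hardware filter tap
vis = esm_visibility(filtered_tap, receiver_depth=0.9)
```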
Interactive Global Illumination Based on Coherent Surface Shadow Maps T. Ritschel, T. Grosch, J. Kautz, H.-P. Seidel Interactive rendering of global illumination effects is a challenging problem. While precomputed radiance transfer (PRT) is able to render such effects in real time, the geometry is generally assumed to be static. This work proposes to replace the precomputed lighting response used in PRT by precomputed depth. Precomputing depth has the same cost as precomputing visibility, but allows visibility tests for moving objects at runtime using simple shadow mapping. For this purpose, a compression scheme for a high number of coherent surface shadow maps (CSSMs) covering the entire scene surface is developed. CSSMs allow visibility tests between all surface points against all points in the scene. We demonstrate the effectiveness of CSSM-based visibility using a novel combination of the lightcuts algorithm and hierarchical radiosity, which can be efficiently implemented on the GPU. We demonstrate interactive n-bounce diffuse global illumination, with a final glossy bounce and many high-frequency effects: general BRDFs, texture and normal maps, and local or distant lighting of arbitrary shape and distribution, all evaluated per pixel. Furthermore, all parameters can vary freely over time; the only requirement is rigid geometry. Characterization for High Dynamic Range Imaging M. Kim, J. Kautz Eurographics 2008 April 2008, pages 691-698 In this paper we present a new practical camera characterization technique to improve color accuracy in high dynamic range (HDR) imaging. Camera characterization refers to the process of mapping device-dependent signals, such as digital camera RAW images, into a well-defined color space. This is a well-understood process for low dynamic range (LDR) imaging and is part of most digital cameras, usually mapping from the raw camera signal to the sRGB or Adobe RGB color space. This paper presents an efficient and accurate characterization method for high dynamic range imaging that extends previous methods originally designed for LDR imaging. We demonstrate that our characterization method is very accurate even in unknown illumination conditions, effectively turning a digital camera into a measurement device that measures physically accurate radiance values, both in terms of luminance and color, rivaling more expensive measurement instruments. Exposure Fusion T. Mertens, J. Kautz, F. Van Reeth Pacific Graphics 2007 (see the extended journal version above) Interactive Global Illumination Using Implicit Visibility Z. Dong, J. Kautz, C. Theobalt, H.-P. Seidel October 2007, pages 77-86 Rendering global illumination effects for dynamic scenes at interactive frame rates is a computationally challenging task. Much of the computation time needed is spent during visibility queries between individual scene elements, and it is nearly impossible to update this information in real time, even for moderately complex scenes. In this paper, we propose a global illumination approach for dynamic scenes that runs at near-real-time frame rates on a single PC. Our method is inspired by the principles of hierarchical radiosity and tackles the visibility problem by implicitly evaluating mutual visibility while constructing a hierarchical link structure between scene elements. By means of the same efficient and easy-to-implement framework, we are able to reproduce a large variety of complex lighting effects for moderately sized scenes, such as interreflections, environment map lighting, as well as area light sources.
Efficient Reflectance and Visibility Approximations for Environment Map Rendering
P. Green, J. Kautz, F. Durand
We present a technique for approximating isotropic BRDFs and precomputed self-occlusion that enables accurate and efficient prefiltered environment map rendering. Our approach uses a nonlinear approximation of the BRDF as a weighted sum of isotropic Gaussian functions. Our representation requires a minimal amount of storage, can accurately represent BRDFs of arbitrary sharpness, and is above all efficient to render. We precompute visibility due to self-occlusion and store a low-frequency approximation suitable for glossy reflections. We demonstrate our method by fitting our representation to measured BRDF data, yielding high visual quality at real-time frame rates.

Interactive Editing and Modeling of Bidirectional Texture Functions
J. Kautz, S. Boulos, F. Durand
While measured Bidirectional Texture Functions (BTF) enable impressive realism in material appearance, they offer little control, which limits their use for content creation. In this work, we interactively manipulate BTFs and create new BTFs from flat textures. We present an out-of-core approach to manage the size of BTFs and introduce new editing operations that modify the appearance of a material. These tools achieve their full potential when selectively applied to subsets of the BTF through the use of new selection operators. We further analyze the use of our editing operators for the modification of important visual characteristics such as highlights, roughness, and fuzziness. Results compare favorably to the direct alteration of micro-geometry and reflectances of ground-truth synthetic data.

APGV
Is Accurate Occlusion of Glossy Reflections Necessary?
O. Kozlowski, J. Kautz
Symposium on Applied Perception in Graphics and Visualization 2007, July 2007, pages 91-98
Much research in recent times has been conducted towards real-time rendering of accurate glossy reflections under direct, natural illumination, including correct occlusions. The view-dependent nature of these reflections will always cause this computation to be expensive unless heavily approximated. There also remains a question as to whether humans are even capable of noticing the difference in accuracy, or whether our perception of the realism of the scene remains unchanged, in which case the extra effort expended in rendering accurate reflections is effectively wasted. We conduct a user study to analyse any decline in perceived realism of glossy scenes rendered with a variety of specular occlusion approximations under a multitude of BRDFs, lighting environments and camera orientations. We demonstrate that although no one approximation is always suitable, it is rare to have a scene whose computational complexity cannot be decreased to some degree.

Convolution Shadow Maps
T. Annen, T. Mertens, P. Bekaert, H.-P. Seidel, J. Kautz
June 2007, pages 51-60
We present Convolution Shadow Maps, a novel shadow representation that affords efficient arbitrary linear filtering of shadows. Traditional shadow mapping is inherently non-linear w.r.t. the stored depth values, due to the binary shadow test. We linearize the problem by approximating the shadow test as a weighted summation of basis terms. We demonstrate the usefulness of this representation, and show that hardware-accelerated anti-aliasing techniques, such as tri-linear filtering, can be applied naturally to Convolution Shadow Maps.
Our approach can be implemented very efficiently on current-generation graphics hardware, and offers real-time frame rates.

Interactive Illumination with Coherent Shadow Maps
T. Ritschel, T. Grosch, J. Kautz, S. Müller
We present a new method for interactive illumination computations based on precomputed visibility using coherent shadow maps (CSMs). It is well known that visibility queries dominate the cost of physically based rendering. Precomputing all visibility events, for instance in the form of many shadow maps, enables fast queries and allows for real-time computation of illumination, but requires prohibitive amounts of storage. We propose a lossless compression scheme for visibility information based on shadow maps that efficiently exploits coherence. We demonstrate a Monte Carlo renderer for direct lighting using CSMs that runs entirely on graphics hardware. We support spatially varying BRDFs, normal maps, and environment maps, all with high frequencies, spatial as well as angular. Multiple dynamic rigid objects can be combined in a scene. As opposed to precomputed radiance transfer techniques, which assume distant lighting, our method includes distant lighting as well as local area lights of arbitrary shape, varying intensity, or anisotropic light distribution that can freely vary over time.

Packet-Based Whitted and Distribution Ray Tracing
S. Boulos, D. Edwards, J. Lacewell, J. Kniss, J. Kautz, I. Wald, P. Shirley
Much progress has been made toward interactive ray tracing, but most research has focused specifically on ray casting. A common approach is to use "packets" of rays to amortize cost across sets of rays. Whether "packets" can be used to reduce the cost of reflection and refraction rays is unclear. The issue is complicated since such rays do not share common origins and often have less directional coherence than viewing and shadow rays. Since the primary advantage of ray tracing over rasterization is the computation of global effects, such as accurate reflection and refraction, this lack of knowledge should be corrected. We are also interested in exploring whether distribution ray tracing, due to its stochastic properties, further erodes the effectiveness of techniques used to accelerate ray casting. This paper addresses the question of whether packet-based ray algorithms can be effectively used for more than visibility computation. We show that by choosing an appropriate data structure and a suitable packet assembly algorithm we can extend the idea of "packets" from ray casting to Whitted-style and distribution ray tracing, while maintaining efficiency.

Physically-Based Reflectance for Games
N. Hoffman, D. Baker, J. Kautz
July 2006, Courses
This course discusses the practical implementation of physically-principled reflectance models in interactive graphics and video games, in current practice as well as upcoming technologies. The course begins with the visual phenomena important to the perception of reflectance in real-world materials, which it uses as background for the underlying theory and derivation of common reflectance models. After introducing the current game development pipeline, from content creation to rendering, the course then discusses rendering techniques for implementing reflectance models in games --- with emphasis on real-world trade-offs such as shader performance, content creation efficiency, resource size considerations, and overall rendering quality.
The course will help a researcher understand constraints in the game development pipeline, and it will help a game developer understand the physical phenomena underlying reflectance models.

Texture Transfer Using Geometry Correlation
T. Mertens, J. Kautz, J. Chen, P. Bekaert, F. Durand
June 2006, pages 273-284
Texture variation on real-world objects often correlates with underlying geometric characteristics and creates a visually rich appearance. We present a technique to transfer such geometry-dependent texture variation from an example textured model to new geometry in a visually consistent way. It captures the correlation between a set of geometric features, such as curvature, and the observed diffuse texture. We perform dimensionality reduction on the overcomplete feature set, which yields a compact guidance field that is used to drive a spatially varying texture synthesis model. In addition, we introduce a method to enrich the guidance field when the target geometry strongly differs from the example. Our method transfers elaborate texture variation that follows geometric features, which gives 3D models a compelling photorealistic appearance.

View-Dependent Precomputed Light Transport Using Nonlinear Gaussian Function Approximations
P. Green, J. Kautz, W. Matusik, F. Durand
ACM Symposium on Interactive 3D Graphics and Games (I3D) 2006, March 2006, pages 7-14
We propose a real-time method for rendering rigid objects with complex view-dependent effects under distant all-frequency lighting. Existing precomputed light transport approaches can render rich global illumination effects, but high-frequency view-dependent effects such as sharp highlights remain a challenge. We introduce a new representation of the light transport operator based on sums of Gaussians. The nonlinear parameters of our representation enable 1) arbitrary bandwidth, because scale is encoded as a direct parameter, and 2) high-quality interpolation across view and mesh triangles, because we interpolate the mean direction of the Gaussians, thereby preventing linear cross-fading artifacts. However, fitting the precomputed light transport data to this new representation requires solving a nonlinear regression problem that is more involved than traditional linear and nonlinear (truncation) approximation techniques. We present a new data fitting method based on optimization that includes energy terms aimed at enforcing artifact-free interpolation. We demonstrate that our method achieves high visual quality with a small storage cost and an efficient rendering algorithm.

Precomputed Radiance Transfer: Theory and Practice
J. Kautz, J. Lehtinen, P.-P. Sloan
Interactive rendering of realistic objects under general lighting models poses three principal challenges. Handling complex light transport phenomena like shadows, inter-reflections, caustics and sub-surface scattering is difficult to do in real time. Integrating these effects over large area light sources compounds the difficulty, and finally real objects have complex spatially-varying BRDFs. Precomputed Radiance Transfer (PRT) encapsulates a family of techniques that partially addresses these challenges. PRT is an active area of research that has relevance to both the academic research community and practitioners of interactive computer graphics.
This technique and its variants are being actively investigated in the game development community, and there is quite a lot of interest due to the recent appearance of PRT techniques in games such as "Halo 2". This course covers these techniques, compares them, and discusses their various strengths and weaknesses.

Efficient Rendering of Local Subsurface Scattering
Tom Mertens, Jan Kautz, Philippe Bekaert, Frank Van Reeth, Hans-Peter Seidel
A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image- or texture-space, and which is even amenable to implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.

Real-Time Shadowing Techniques
J. Kautz, M. Stamminger, T. Akenine-Moeller, E. Chan, W. Heidrich, M. Kilgard
Shadows heighten realism and provide important visual cues about the spatial relationships between objects. But integration of robust shadowing techniques in real-time rendering is not an easy task. In this course on how shadows are incorporated in real-time rendering, attendees learn basic shadowing techniques and more advanced techniques that exploit new features of graphics hardware. The course begins with shadowing techniques using shadow maps. After an introduction to shadow maps and general improvements of this technique (filtering, depth bias, omnidirectional lights, etc.), the first section describes two methods for reducing sampling artifacts: perspective shadow maps and silhouette maps. Both techniques can significantly improve shadow quality, but they require careful implementation. The course continues with extensions of the shadow mapping method that allow soft shadows from linear and area light sources. The second part of the course discusses recent advances in efficient and robust implementation of shadow volumes on graphics hardware and then shows how shadow volumes can be extended to generate accurate soft shadows from area lights. Finally, the course summarizes real-time shadowing from full lighting environments using the technique of precomputed radiance transfer. The course explains the differences among these algorithms and their strengths and weaknesses. Implementation details, often omitted in technical papers, are provided. And throughout the course, the tradeoffs between quality and performance are illustrated for the different techniques.

Hemispherical Rasterization for Self-Shadowing of Dynamic Objects
J. Kautz, J. Lehtinen, T. Aila
We present a method for interactive rendering of dynamic models with self-shadows due to time-varying, low-frequency lighting environments. In contrast to previous techniques, the method is not limited to static or pre-animated models. Our main contribution is a hemispherical rasterizer, which rapidly computes visibility by rendering blocker geometry into a 2D occlusion mask with correct occluder fusion. The response of an object to the lighting is found by integrating the visibility function at each of the vertices against the spherical harmonic functions and the BRDF.
This yields transfer coefficients that are then multiplied by the lighting coefficients to obtain the final, shadowed exitant radiance. No precomputation is necessary and memory requirements are modest. The method supports both diffuse and glossy BRDFs.

A Self-Shadow Algorithm for Dynamic Hair using Clustered Densities
T. Mertens, J. Kautz, P. Bekaert, F. van Reeth
Self-shadowing is an important factor in the appearance of hair and fur. In this paper we present a new rendering algorithm to accurately compute shadowed hair at interactive rates using graphics hardware. No constraint is imposed on the hair style, and its geometry can be dynamic. Similar to previously presented methods, a 1D visibility function is constructed for each line of sight of the light source view. Our approach differs from other work by treating the hair geometry as a 3D density field, which is sampled on the fly using simple rasterization. The rasterized fragments are clustered, effectively estimating the density of hair along a ray. Based on this estimate, the visibility function is constructed. We show that realistic self-shadowing of thousands of individual dynamic hair strands can be rendered at interactive rates using consumer graphics hardware.

Spherical Harmonic Gradients for Mid-Range Illumination
T. Annen, J. Kautz, F. Durand, H.-P. Seidel
Spherical harmonics are often used for compact description of incident radiance in low-frequency but distant lighting environments. For interaction with nearby emitters, computing the incident radiance at the center of an object only is not sufficient. Previous techniques then require expensive sampling of the incident radiance field at many points distributed over the object. Our technique alleviates this costly requirement using a first-order Taylor expansion of the spherical-harmonic lighting coefficients around a point. We propose an interpolation scheme based on these gradients requiring far fewer samples (one is often sufficient). We show that the gradient of the incident-radiance spherical harmonics can be computed for little additional cost compared to the coefficients alone. We introduce a semi-analytical formula to calculate this gradient at run-time and describe how a simple vertex shader can interpolate the shading. The interpolated representation of the incident radiance can be used with any low-frequency light-transfer technique.

Decoupling BRDFs from Surface Mesostructures
J. Kautz, M. Sattler, R. Sarlette, R. Klein, H.-P. Seidel
We present a technique for the easy acquisition of realistic materials and mesostructures, without acquiring the actual BRDF. The method uses the observation that under certain circumstances the mesostructure of a surface can be acquired independently of the underlying BRDF. The acquired data can be used directly for rendering with little preprocessing. Rendering is possible using an offline renderer but also using graphics hardware, where it achieves real-time frame rates. Compelling results are achieved for a wide variety of materials.

Hardware Lighting and Shading: A Survey
J. Kautz
Computer Graphics Forum 23(1), March 2004, pages 85-112
Traditionally, hardware rasterizers only support the Phong lighting model in combination with Gouraud shading using point light sources. However, the Phong lighting model is strictly empirical and physically implausible. Gouraud shading also tends to undersample the highlight unless a highly tessellated surface is used.
Hence, higher-quality hardware-accelerated lighting and shading has gained much interest in the past five years. The research on hardware lighting and shading is two-fold. On the one hand, better lighting models for local illumination (assuming point light sources but evaluated per pixel) were demonstrated to be amenable to hardware implementation. On the other hand, recent research has demonstrated that even area lights, represented as environment maps, can be combined with complex lighting models. In both areas, many articles have been published, making it hard to decide which algorithm is well-suited for which application. This state-of-the-art report will review all relevant articles in both areas, and list advantages and disadvantages of each algorithm.

C&G
Advanced Environment Mapping in VR Applications
J. Kautz, K. Daubert, H.-P. Seidel
28(1), February 2004, pages 99-104
In this paper, we propose a simple approach for rendering diffuse and glossy reflections using environment maps. This approach is geared towards VR applications, where realism and fast rendering are important. We exploit certain properties of diffuse reflections and certain features of graphics hardware for glossy reflections. This results in a very fast, single-pass rendering algorithm, which even allows the incident lighting to be varied dynamically.

T. Mertens, J. Kautz, P. Bekaert, H.-P. Seidel, F. Van Reeth
A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image-space, and which is even amenable to implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.

Interactive Rendering of Translucent Objects
H. Lensch, M. Goesele, P. Bekaert, J. Kautz, M. Magnor, J. Lang, H.-P. Seidel
22(2), 2003, pages 195-205
This paper presents a rendering method for translucent objects, in which viewpoint and illumination can be modified at interactive rates. In a preprocessing step, the impulse response to incoming light impinging at each surface point is computed and stored in two different ways: The local effect on close-by surface points is modeled as a per-texel filter kernel that is applied to a texture map representing the incident illumination. The global response (i.e., light shining through the object) is stored as vertex-to-vertex throughput factors for the triangle mesh of the object. During rendering, the illumination map for the object is computed according to the current lighting situation and then filtered by the precomputed kernels. The illumination map is also used to derive the incident illumination on the vertices, which is distributed via the vertex-to-vertex throughput factors to the other vertices. The final image is obtained by combining the local and global response. We demonstrate the performance of our method for several models.

Interactive Rendering of Translucent Deformable Objects
Realistic rendering of materials such as milk, fruits, wax, marble, and so on, requires the simulation of subsurface scattering of light. This paper presents an algorithm for plausible reproduction of subsurface scattering effects.
Unlike previously proposed work, our algorithm allows lighting, viewpoint, subsurface scattering properties, as well as object geometry to be changed interactively. The key idea of our approach is to use a hierarchical boundary element method to solve the integral describing subsurface scattering when using a recently proposed analytical BSSRDF model. Our approach is inspired by hierarchical radiosity with clustering. The success of our approach is in part due to a semi-analytical integration method that allows the needed point-to-patch form-factor-like transport coefficients to be computed efficiently and accurately where other methods fail. Our experiments show that high-quality renderings of translucent objects consisting of tens of thousands of polygons can be obtained from scratch in fractions of a second. An incremental update algorithm further speeds up rendering after material or geometry changes.

Efficient Light Transport Using Precomputed Visibility
K. Daubert, W. Heidrich, J. Kautz, J.-M. Dischler, H.-P. Seidel
Global illumination algorithms usually spend the majority of time on visibility computations. It therefore seems natural to reuse visibility information acquired at one point for different computations. For example, once we've established the visibility between two points in a scene, we can use this information for multiple light paths in which different amounts of energy are transported between the points. This is particularly advantageous in cases where we need to compute multiple images with varying illumination or camera settings. Researchers have developed several approaches where illumination information computed for one point in the scene is reused for nearby points. Because these methods store illumination information (irradiance or incident radiance) at discrete points, it isn't possible to reuse the information for light source changes. In addition, finding the desired information for one point in space requires a search through the data structure. Although we can perform this search in logarithmic expected time, the resulting memory access patterns are irregular and can significantly affect performance. We take a different approach. Instead of storing and reusing illumination information, we directly reuse visibility information stored in a regular fashion that allows for constant-time lookups. Our method is a generalization of Heidrich et al.'s method for height fields to different geometries such as general parametric surfaces, triangle meshes without a global parameterization, and volumes. For each case we propose efficient algorithms for computing direct and indirect illumination, which also account for shadows. Using the method of dependent tests - a variant of Monte Carlo integration - we can access the visibility in a structured fashion. This allows for efficient memory access patterns in software implementations and lets us use graphics hardware for the light transport.

Matrix Radiance Transfer
J. Lehtinen, J. Kautz
ACM Symposium on Interactive 3D Graphics 2003, April 2003, pages 59-64
Precomputed Radiance Transfer allows interactive rendering of objects illuminated by low-frequency environment maps, including self-shadowing and interreflections. The expensive integration of incident lighting is partially precomputed and stored as matrices. Incorporating anisotropic, glossy BRDFs into precomputed radiance transfer has been previously shown to be possible, but none of the previous methods offer real-time performance.
We propose a new method, matrix radiance transfer, which significantly speeds up exit radiance computation and allows anisotropic BRDFs. We generalize the previous radiance transfer methods to work with a matrix representation of the BRDF and optimize exit radiance computation by expressing the exit radiance in a new, directionally locally supported basis set instead of the spherical harmonics. To determine exit radiance, our method performs four dot products per vertex, in contrast to previous methods, where a full matrix-vector multiply is required. Image quality can be controlled by adapting the number of basis functions. We compress our radiance transfer matrices through principal component analysis (PCA). We show that it is possible to render directly from the PCA representation, which also enables the user to trade interactively between quality and speed.

Image-Based Reconstruction of Spatial Appearance and Geometric Detail
H. Lensch, J. Kautz, M. Goesele, W. Heidrich, H.-P. Seidel
ACM Transactions on Graphics 22(2), April 2003, pages 234-257
Real-world objects are usually composed of a number of different materials that often show subtle changes even within a single material. Photorealistic rendering of such objects requires accurate measurements of the reflection properties of each material, as well as the spatially varying effects. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs, leading to a truly spatially varying BRDF representation. Real-world objects often also have fine geometric detail that is not represented in an acquired mesh. To increase the detail, we derive normal maps even for non-Lambertian surfaces using our measured BRDFs. A high-quality model of a real object can be generated with relatively little input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.

JGT
Real-Time Halftoning
J. Kautz, H.-P. Seidel
Journal of Graphics Tools 7(4), 2002, pages 27-32
We present a real-time hardware-accelerated method for rendering objects using halftoning. It is solely based on texture mapping and creates the impression of a printed image, although the lighting and the objects can be changed and manipulated on-the-fly.

GAME GEMS
Rendering with Handcrafted Shading Models
Game Programming Gems 3
Quite a few techniques have been proposed for implementing more complex and realistic shading models with graphics hardware, making them useful for games. Still, these techniques are rarely used, probably for two reasons: complex implementation issues and unintuitive parameters for the shading models used. We propose to use a simple technique called "NDF shading". It allows an artist to handcraft shading models; shape and color of highlights are simply stored in a bitmap. The technique uses per-pixel shading, and can also be used in conjunction with bump mapping; anisotropic shading models can also be created.

Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments
P.-P. Sloan, J. Kautz, J. Snyder
21(3), July 2002, pages 527-536
We present a new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics. As a preprocess, a novel global transport simulator creates functions over the object's surface representing transfer of arbitrary, low-frequency incident lighting into transferred radiance, which includes global effects like shadows and interreflections from the object onto itself. At run-time, these transfer functions are applied to actual incident lighting. Dynamic, local lighting is handled by sampling it close to the object every frame; the object can also be rigidly rotated with respect to the lighting and vice versa. Lighting and transfer functions are represented using low-order spherical harmonics. This avoids aliasing and evaluates efficiently on graphics hardware by reducing the shading integral to a dot product of 9- to 25-element vectors for diffuse receivers. Glossy objects are handled using matrices rather than vectors. We further introduce functions for radiance transfer from a dynamic lighting environment through a preprocessed object to neighboring points in space. These allow soft shadows and caustics from rigidly moving objects to be cast onto arbitrary, dynamic receivers. We demonstrate real-time global lighting effects with this approach.

EGWR
Fast, Arbitrary BRDF Shading for Low-Frequency Lighting Using Spherical Harmonics
J. Kautz, P.-P. Sloan, J. Snyder
Eurographics Workshop on Rendering 2002
Real-time shading using general (e.g., anisotropic) BRDFs has so far been limited to a few point or directional light sources. We extend such shading to smooth, area lighting using a low-order spherical harmonic basis for the lighting environment. We represent the 4D product function of BRDF times the cosine factor (dot product of the incident lighting and surface normal vectors) as a 2D table of spherical harmonic coefficients. Each table entry represents, for a single view direction, the integral of this product function times lighting on the hemisphere expressed in spherical harmonics. This reduces the shading integral to a simple dot product of 25-component vectors, easily evaluated on PC graphics hardware. Non-trivial BRDF models require rotating the lighting coefficients to a local frame at each point on an object, currently forming the computational bottleneck. Real-time results can be achieved by fixing the view to allow dynamic lighting, or vice versa. We also generalize a previous method for precomputed radiance transfer to handle general BRDF shading. This provides shadows and interreflections that respond in real time to lighting changes on a preprocessed object of arbitrary material (BRDF) type.

GRAPHICS HARDWARE
Real-Time Bump Map Synthesis
J. Kautz, W. Heidrich, H.-P. Seidel
Eurographics/SIGGRAPH Workshop on Graphics Hardware 2001, August 2001, pages 109-114
In this paper we present a method that automatically synthesizes bump maps at arbitrary levels of detail in real time. The only input data we require is a normal density function; the bump map is generated according to that function. It is also used to shade the generated bump map. The technique allows infinite zooming into the surface, because more (consistent) detail can be created on the fly. The shading of such a surface is consistent when displayed at different distances to the viewer (assuming that the surface structure is self-similar).
The bump map generation and the shading algorithm can also be used separately.

Image-Based Reconstruction of Spatially Varying Materials
The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering, both the general surface properties as well as the spatially varying effects of the object are needed. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model the local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs, leading to a truly spatially varying BRDF representation. A high-quality model of a real object can be generated with relatively little input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.

Hardware Accelerated Displacement Mapping for Image Based Rendering
In this paper, we present a technique for rendering displacement-mapped geometry using current graphics hardware. Our method renders a displacement by slicing through the enclosing volume. The alpha test is used to render only the appropriate parts of every slice. The slices need not be aligned with the base surface; e.g., it is possible to do screen-space-aligned slicing. We then extend the method to be able to render the intersection between several displacement-mapped polygons. This is used to render a new kind of image-based object based on images with depth, which we call image-based depth objects. This technique can also directly be used to accelerate the rendering of objects using the image-based visual hull. Other warping-based IBR techniques can be accelerated in a similar manner.

Achieving Real-Time Realistic Reflectances
J. Kautz, J. Blow, C. Blasband, A. Ahmad, M. McCool
Game Developer Magazine, January and February 2001, pages 32-37 and 38-45
Within the game development community, several current approaches address the illumination problem. Point lights (with optional fog, distance, or shadow attenuation) are often used to determine the amount of light that arrives at a surface. Directional light sources and light maps effectively serve this purpose as well. Unfortunately, sophisticated models of reflectance have not really made an appearance in games. In terms of reflectance, most games to date use the Phong reflectance model or rely on strict intensity modulation to determine how surfaces reflect the light that strikes them. While this is not a bad thing, Phong reflectance and intensity modulation are limited in the type of lighting phenomena they are capable of simulating. Consequently, they are unable to reproduce the appearance that we observe of many real-world materials. This two-part series of articles focuses on the reflectance aspect of lighting. We will discuss a technique that implements more general reflectance models for a wide variety of surface materials, for example velvet, copper, and others. This is called separable decomposition and is an effective and efficient way to incorporate physically accurate reflection models and ultimately increase the level of realism in a game.
The technique can be used in conjunction with point light sources, directional light sources, light maps, shadows, and fog, since each of these influences only the illumination component of lighting and does not affect the reflectance model.

Towards Interactive Bump Mapping with Anisotropic Shift-Variant BRDFs
August 2000, pages 51-58 (Best Paper Award - 2nd Place)
In this paper a technique is presented that combines interactive hardware-accelerated bump mapping with shift-variant anisotropic reflectance models. We show an evolutionary path: some simpler reflectance models can be rendered at interactive rates on current low-end graphics hardware, while features of future graphics hardware can be exploited for more complex models. We show how our method can be applied to some well-known reflectance models, namely the Banks model, Ward's model, and an anisotropic version of the Blinn-Phong model, but it is not limited to these models. Furthermore, we take a close look at the necessary capabilities of the graphics hardware, identify problems with current hardware, and discuss possible enhancements.

Illuminating Micro Geometry Based on Precomputed Visibility
W. Heidrich, K. Daubert, J. Kautz, H.-P. Seidel
Many researchers have been arguing that geometry, bump maps, and BRDFs present a hierarchy of detail that should be exploited for efficient rendering purposes. In practice, however, this is often not possible due to inconsistencies in the illumination for these different levels of detail. For example, while bump map rendering often only considers direct illumination and no shadows, geometry-based rendering and BRDFs will mostly also respect shadowing effects, and in many cases even indirect illumination caused by scattered light. In this paper, we present an approach for overcoming these inconsistencies. We introduce an inexpensive method for consistently illuminating height fields and bump maps, as well as simulating BRDFs based on precomputed visibility information. With this information we can achieve a consistent illumination across the levels of detail. The method we propose offers significant performance benefits over existing algorithms for computing the light scattering in height fields and for computing a sampled BRDF representation using a virtual gonioreflectometer. The performance can be further improved by utilizing graphics hardware, which then also allows for interactive display. Finally, our method also approximates the changes in illumination when the height field, bump map, or BRDF is applied to a surface with a different curvature.

A Unified Approach to Prefiltered Environment Maps
J. Kautz, P.-P. Vázquez, W. Heidrich, H.-P. Seidel
Different methods for prefiltered environment maps have been proposed, each of which has different advantages and disadvantages. We present a general notation for prefiltered environment maps, which will be used to classify and compare the existing methods. Based on that knowledge we develop three new algorithms:
1. A fast hierarchical prefiltering method that can be utilized for all previously proposed prefiltered environment maps.
2. A technique for hardware-accelerated prefiltering of environment maps that achieves interactive rates even on low-end workstations.
3. Anisotropic environment maps using the Banks model.

Approximation of Glossy Reflection with Prefiltered Environment Maps
J. Kautz, M. D. McCool
A method is presented that can render glossy reflections with arbitrary isotropic bidirectional reflectance distribution functions (BRDFs) at interactive rates using texture mapping. This method is based on the well-known environment map technique for specular reflections. Our approach uses a single- or multi-lobe representation of bidirectional reflectance distribution functions, where the shape of each radially symmetric lobe is also a function of view elevation. This approximate representation can be computed efficiently using local greedy fitting techniques. Each lobe is used to filter specular environment maps during a preprocessing step, resulting in a three-dimensional environment map. For many BRDFs, simplifications using lower-dimensional approximations, coarse sampling with respect to view elevation, and small numbers of lobes can still result in a convincing approximation to the true surface reflectance.

Interactive Rendering with Arbitrary BRDFs using Separable Approximations
A separable decomposition of bidirectional reflectance distributions (BRDFs) is used to implement arbitrary reflectances from point sources on existing graphics hardware. Two-dimensional texture mapping and compositing operations are used to reconstruct samples of the BRDF at every pixel at interactive rates. A change of variables, the Gram-Schmidt halfangle/difference vector parameterization, improves separability. Two decomposition algorithms are also presented. The singular value decomposition (SVD) minimizes RMS error. The normalized decomposition is fast and simple, using no more space than what is required for the final representation.

Canned Lightsources
W. Heidrich, J. Kautz, P. Slusallek, H.-P. Seidel
Complex luminaries and lamp geometries can greatly increase the realism of synthetic images. Unfortunately, the correct rendering of illumination from complex lamps requires costly global illumination algorithms to simulate the indirect illumination reflected or refracted by parts of the lamp. Currently, this simulation has to be repeated for every scene in which a lamp is to be used, and even for multiple instances of a lamp within a single scene. In this paper, we separate the global illumination simulation of the interior lamp geometry from the actual scene rendering. The lightfield produced by a given lamp is computed using any of the known global illumination algorithms. Afterwards, a discretized version of this lightfield is stored away for later use as a lightsource. We describe how this data can be efficiently utilized to illuminate a given scene using a number of different rendering algorithms, such as ray tracing and hardware-based rendering.

Vice President of Learning and Perception Research, NVIDIA, USA
Head of the learning and perception research group at NVIDIA.
Senior Director of Visual Computing and Machine Learning Research, NVIDIA, USA
Director of Visual Computing Research, NVIDIA, USA
Senior Research Manager, NVIDIA, USA
Head of the visual computing research group at NVIDIA.
Senior Research Scientist, NVIDIA, USA
Research in computational photography and computer vision.
Professor of Visual Computing, University College London, UK
Associate Professor (Reader), University College London, UK
Associate Professor (Senior Lecturer), University College London, UK
Assistant Professor (Lecturer), University College London, UK
Research in visual computing, teaching and supervision of students (BSc, MSc, PhD).
Post-Doctoral Researcher, Massachusetts Institute of Technology, USA
Working on appearance editing and realistic, real-time rendering.
PhD Student, Max-Planck-Institut für Informatik, Germany
Received PhD (summa cum laude).
Graduate Student, University of Waterloo, Canada
Received MMath.
Student, University Erlangen-Nürnberg, Germany
Received Diplom-Informatiker (MSc in Computer Science).
NVIDIA | Learning and Perception Research Group
2 Technology Park Drive | Westford, MA 01886
Why is Dirichlet's L-function called "L-function"?

In analytic number theory, the function
$$ L(s,\chi_m) = \sum_{n=1}^\infty \frac{\chi_m(n)}{n^s} $$
is called the Dirichlet L-function and has many important uses in the study of prime numbers. In particular, if $\chi_1$ is the trivial Dirichlet character, we have the identity
$$ L(s,\chi_1)=\zeta(s). $$
Why were such functions called "L-functions"? What is the functional significance of the letter L?

It's the original notation used by Dirichlet. The reason why he chose L, without commenting on the choice, rather than some other letter is not known. Chances are there is not much of a reason, and he could just as well have chosen another letter. Answers to the MathOverflow question "Why are they called L-functions?" mainly also assert this, but also present some theories, including:

1. It just fitted in naturally with the rest of the notation of a paper.
2. It is for Legendre.
3. It is for Lejeune.

I rather doubt 3. The plausibility of 2. is based on the original context being closely linked to Legendre, but it's still a bit of a stretch (and presented rather tongue-in-cheek on MO).
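(A numeric aside, not part of the original exchange.) The series defined in the question is easy to check by truncation; in the sketch below, chi1 and chi4 are the trivial character and the nontrivial character mod 4, and the truncation level N is an arbitrary choice.

```python
# Truncated Dirichlet series L(s, chi) = sum_{n>=1} chi(n)/n^s.
import math

def L(s, chi, N=200000):
    return sum(chi(n) / n**s for n in range(1, N + 1))

chi1 = lambda n: 1                                        # trivial character
chi4 = lambda n: 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

print(L(2, chi1), math.pi**2 / 6)   # L(s, chi_1) = zeta(s): both ~1.644934
print(L(2, chi4))                   # ~0.915966, Catalan's constant
```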
Lower Bounds for Parallel and Randomized Convex Optimization
Jelena Diakonikolas, Cristóbal Guzmán

We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Prior to our work, lower bounds for parallel convex optimization algorithms were only known in a small fraction of the settings considered in this paper, mainly applying to Euclidean ($\ell_2$) and $\ell_\infty$ spaces. Our work provides a more general and streamlined approach for proving lower bounds in the setting of parallel convex optimization.

Diakonikolas, J. & Guzmán, C. (2019). Lower Bounds for Parallel and Randomized Convex Optimization. Proceedings of the Thirty-Second Conference on Learning Theory, in PMLR 99:1132-1157. http://proceedings.mlr.press/v99/diakonikolas19c.html
On optimal Bayesian classification and risk estimation under multiple classes
Lori A. Dalton & Mohammadmahdi R. Yousefi

A recently proposed optimal Bayesian classification paradigm addresses optimal error rate analysis for small-sample discrimination, including optimal classifiers, optimal error estimators, and error estimation analysis tools with respect to the probability of misclassification under binary classes. Here, we address multi-class problems and optimal expected risk with respect to a given risk function, which are common settings in bioinformatics. We present Bayesian risk estimators (BRE) under arbitrary classifiers, the mean-square error (MSE) of arbitrary risk estimators under arbitrary classifiers, and optimal Bayesian risk classifiers (OBRC). We provide analytic expressions for these tools under several discrete and Gaussian models and present a new methodology to approximate the BRE and MSE when analytic expressions are not available. Of particular note, we present analytic forms for the MSE under Gaussian models with homoscedastic covariances, which are new even in binary classification.

Classification in biomedicine is often constrained by small samples, so that understanding properties of the error rate is critical to ensure the scientific validity of a designed classifier. While classifier performance is typically evaluated by employing distribution-free training-data error estimators such as cross-validation, leave-one-out, or bootstrap, a number of studies have demonstrated that these methods are highly problematic in small-sample settings [1]. Under real data, and even under simple synthetic Gaussian models, cross-validation has been shown to suffer from a large variance [2] and often has nearly zero correlation, or even negative correlation, with the true error [3, 4]. Among other problems, this directly leads to severely optimistic reporting biases when selecting the best results among several datasets [5] or when selecting the best classification rule among several candidates [6], and to difficulties with performance reproducibility [7]. Furthermore, there are typically no accuracy guarantees for error estimators when applied under small samples. Distribution-free bounds on the mean-square error (MSE) of an error estimator, or on its square root, the root-mean-square (RMS), with respect to the true error rate are typically either unavailable or unhelpful under small samples. Consider leave-one-out error estimation for a discrete histogram rule that breaks ties with class 0. The following is a distribution-free RMS bound [8]:
$$ \text{RMS}(\widehat{\varepsilon}_{\text{loo}}(\mathcal{S}) \,|\, \theta) \leq \sqrt{\frac{1+6/e}{n}+\frac{6}{\sqrt{\pi \left(n-1\right)}}}, \tag{1} $$
where $\mathcal{S}$ is a random sample, $\theta$ is a feature-label distribution, and $n$ is the sample size. To guarantee an RMS less than 0.5 for all distributions, this bound indicates that a sample size of at least $n = 209$ would be required. Typically, the error of a classifier should be between 0 and 0.5, so that an RMS of 0.5 is trivially guaranteed.

Rather than a distribution-free approach, recent work takes a Bayesian approach to address these problems. The idea is to assume the true distributions characterizing classes in the population are members of an uncertainty class of models. We also assume that members of the uncertainty class are weighted by a prior distribution, and after observing a sample, we update the prior to a posterior distribution.
For a given classifier we may find an optimal MSE error estimator, called a Bayesian error estimator (BEE) [9, 10], and evaluate the MSE of any arbitrary error estimator [11, 12]. These quantities are found by conditioning on the sample in hand and averaging with respect to the unknown population distribution via the posterior, rather than by conditioning on the distribution and averaging over random samples as in (1). Not only does the Bayesian framework supply more powerful error estimators, but the sample-conditioned MSE allows us to evaluate the accuracy of error estimation. The Bayesian framework also facilitates optimal Bayesian classification (OBC), which provides decision boundaries to minimize the BEE [13, 14]. In this way, the Bayesian framework can be used to optimize both error estimation and classification.

Classifier design and analysis in the Bayesian framework have previously been developed for binary classification with respect to the probability of misclassification. However, it is common in small-sample classification problems to be faced with classification under multiple classes, and for different types of error to be associated with different levels of risk or loss. A few classical classification algorithms naturally permit multiple classes and arbitrary loss functions; for example, a plug-in rule takes the functional form of an optimal Bayes decision rule under a given modeling assumption and substitutes sample estimates of model parameters in place of the true parameters. This can be done with linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) for multiple classes with arbitrary loss functions, which essentially assume that the underlying class-conditional densities are Gaussian with equal or unequal covariances, respectively. Most training-data error estimation methods, for instance cross-validation, can also be generalized to handle multiple classes and arbitrary loss functions. However, it is expected that the same difficulties encountered under binary classes with simple zero-one loss functions (where the expected risk reduces to the probability of misclassification) will carry over to the more general setting, as they have in ROC curve estimation [15]. Support vector machines (SVM) are inherently binary but can be adapted to incorporate penalties that influence risk by implementing slack terms or applying a shrinkage or robustifying objective function [16, 17]. It is also common to construct multi-class classifiers from binary classifiers using the popular "one-versus-all" or "all-versus-all" strategies [18]. The former method builds several binary classifiers by discriminating one class, in turn, against all others, and at a given test point reports the class corresponding to the highest classification score. The latter discriminates between each combination of pairs of classes and reports a majority vote. However, it is unclear how one may assess the precise effect of these adaptations on the expected risk.

We are thus motivated to generalize the BEE, sample-conditioned MSE, and OBC to treat multiple classes with arbitrary loss functions. We will present analogous concepts of Bayesian risk estimation (BRE), the sample-conditioned MSE for risk estimators, and optimal Bayesian risk classification (OBRC). We will show that the BRE and OBRC can be represented in the same form as the expected risk and Bayes decision rule with unknown true densities replaced by effective densities.
This approach is distinct from the simple plug-in rule discussed earlier, since the form of the effective densities may not be the same as the individual densities represented in the uncertainty class. We will also develop an interpretation of the conditional MSE based on an effective joint density, which is new even under binary classes with a zero-one loss function. Furthermore, we will provide analytic solutions under several models: discrete spaces with Dirichlet priors (discrete models) and Gaussian distributions with known, independent scaled identity, independent arbitrary, homoscedastic scaled identity, and homoscedastic arbitrary covariance models, all with conjugate priors (Gaussian models). We provide expressions for the BRE and conditional MSE for arbitrary classification in the discrete model and binary linear classification in the Gaussian model. The analytic form that we provide for the MSE of arbitrary error estimators under homoscedastic models is completely new without an analog in prior work under binary classification and zero-one loss. For models in which an analytic form for the BRE and conditional MSE are unavailable, for instance, under multi-class or non-linear classification in the Gaussian model, we also discuss efficient methods to approximate these quantities. In particular, we present a new computationally efficient method to approximate the conditional MSE based on the effective joint density. We denote random quantities with capital letters, e.g., Y; realizations of random variables with lowercase letters, e.g., y; and vectors in bold, e.g., X and x. Matrices will generally be in bold upper case, e.g., S. Spaces will be denoted by a stylized font, e.g., \(\mathcal {X}\). Distributions with conditioning will be made clear through the function arguments; for instance, we write the distribution of X given Y as f(x | y). The probability space of expectations will be made clear by denoting random quantities in the expectation and conditioning, e.g., the expectation of Y conditioned on the random variable X and the event C=c is denoted by E[Y | X,c]. When the region of integration in an integral is omitted then this region is the whole space. Any exceptions in notation will be defined throughout. Bayes decision theory We next review concepts from classical Bayes decision theory. Consider a classification problem in which we are to predict one of M classes, y=0,…,M−1, from a sample drawn in feature space \(\mathcal {X}\). Let X and Y denote a random feature vector and its corresponding random label. Let f(y | c) be the probability mass function of Y, parameterized by a vector c, and for each y, let f(x | y,θ y ) be the class-y-conditional density of X, parameterized by a vector θ y . The full feature-label distribution is parameterized by c and θ={θ 0,…,θ M−1}. Let λ(i,y) be a loss function quantifying a penalty in predicting label i when the true label is y. The conditional risk in predicting label i for a given point, x, is defined as $$\begin{array}{*{20}l} R(i, \mathbf{x}, \mathbf{c}, \theta) &= \text{E}[\lambda(i, Y) \, | \, \mathbf{x}, \mathbf{c}, \theta] \\ &= \sum_{y = 0}^{M-1} \lambda(i, y) f(y \, | \,\mathbf{x}, \mathbf{c}, \theta) \\ &= \frac{\sum_{y = 0}^{M-1} \lambda(i, y) f(y \, | \, \mathbf{c}) f(\mathbf{x} \, | \, y, \theta_{y})} {\sum_{y = 0}^{M-1} f(y \, | \, \mathbf{c}) f(\mathbf{x} \, | \, y, \theta_{y})}. 
\end{array} $$ The expected risk of a given classification rule, \(\psi :\mathcal {X} \rightarrow \{0, \ldots, M-1\}\), is given by $$\begin{array}{*{20}l} R(\psi, \mathbf{c}, \theta) &= \text{E}[R(\psi(\mathbf{X}), \mathbf{X}, \mathbf{c}, \theta) \, | \, \mathbf{c}, \theta] \\ &= \sum_{y = 0}^{M-1} \sum_{i = 0}^{M-1} \lambda(i, y) f(y \, | \, \mathbf{c}) \varepsilon^{i, y}(\psi, \theta_{y}), \end{array} $$ $$ \varepsilon^{i, y}(\psi, \theta_{y}) = \int_{\Gamma_{i}} f(\mathbf{x} \, | \, y, \theta_{y}) d\mathbf{x} = \text{P}(\mathbf{X} \in \Gamma_{i} \, | \, y, \theta_{y}) $$ is the probability that a class-y point will be assigned class i by the classifier ψ, and the \(\Gamma _{i} = \{\mathbf {x} \in \mathcal {X}: \psi (\mathbf {x}) = i\}\) partition the sample space into decision regions. A Bayes decision rule (BDR) minimizes expected risk or, equivalently, the conditional risk at each fixed point x: $$\begin{array}{*{20}l} \psi_{\text{BDR}}\left(\mathbf{x}\right) &= \text{arg} \min_{i \in \{0, \ldots, M-1\}} R(i, \mathbf{x}, \mathbf{c}, \theta). \end{array} $$ By convention, we break ties with the lowest index, i∈{0,…,M−1}, minimizing R(i,x,c,θ). Optimal Bayesian risk classification In practice, the feature-label distribution is unknown so that we must train a classifier and estimate risk or error with data. The Bayesian framework resolves this by assuming the true feature-label distribution is a member of a parameterized uncertainty class. In particular, assume that c is the probability mass function of Y, that is, c={c 0,…,c M−1}∈Δ M−1, where f(y | c)=c y and Δ M−1 is the standard M−1 simplex defined by c y ∈[0,1] for y∈{0,…,M−1} and \(\sum _{y = 0}^{M-1} c_{y} = 1\). Also assume \(\theta _{y} \in \mathcal {T}_{y}\) for some parameter space \(\mathcal {T}_{y}\), and \(\theta \in \mathcal {T} = \mathcal {T}_{0} \times \ldots \times \mathcal {T}_{M-1}\). Let C and Θ denote random vectors for parameters c and θ, respectively. Finally, assume C and Θ are independent prior to observing data and assign prior probabilities, π(c) and π(θ). Priors quantify uncertainty we have about the distribution before observing the data. Although non-informative priors may be used as long as the posterior is normalizable, informative priors can supplement the classification problem with information to improve performance when the sample size is small. This is key for problems with limited or expensive data. Under mild regularity conditions, as we observe sample points, this uncertainty converges to a certainty on the true distribution parameters, where more informative priors may lead to faster convergence [12]. For small samples, the performance of Bayesian methods depends heavily on the choice of prior. Performance tends to be modest but more robust with a non-informative or weakly informative prior. Conversely, informative priors offer the potential for great performance improvement, but if the true population distribution is not well represented in the prior, then performance may be poor. This trade-off is acceptable as long as the prior is an accurate reflection of available scientific knowledge so that one is reasonably sure that catastrophic results will not occur. If multiple models are scientifically reasonable but result in different inferences, and if it is not possible to determine which model is best from data or prior knowledge, then the range of inferences must be considered [19]. 
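To make the classical rule concrete, the following minimal sketch (Python with numpy/scipy, used here purely for illustration) evaluates the conditional risk in (2) and applies the tie-breaking convention above. The function names and the toy two-class setup are our own placeholders, not part of the formal development.

```python
import numpy as np
from scipy.stats import multivariate_normal

def conditional_risk(lam, c, class_densities, x):
    """Conditional risk R(i, x, c, theta) of (2) for every candidate label i:
    a loss-weighted average of the class posteriors f(y | x, c, theta)."""
    M = len(c)
    joint = np.array([c[y] * class_densities[y](x) for y in range(M)])
    posterior = joint / joint.sum()        # f(y | x, c, theta)
    return np.asarray(lam) @ posterior     # entry i is R(i, x, c, theta)

def bayes_decision_rule(lam, c, class_densities, x):
    """BDR: minimize the conditional risk at x; np.argmin returns the first
    (lowest-index) minimizer, matching the tie-breaking convention above."""
    return int(np.argmin(conditional_risk(lam, c, class_densities, x)))

# Toy two-class example: zero loss on the diagonal, and a heavier penalty
# for predicting class 0 when the truth is class 1.
lam = [[0.0, 2.0],
       [1.0, 0.0]]
c = [0.5, 0.5]
densities = [multivariate_normal(mean=[0, 0]).pdf,
             multivariate_normal(mean=[1, 1]).pdf]
print(bayes_decision_rule(lam, c, densities, np.array([0.4, 0.6])))
```

The plug-in rules discussed in the introduction correspond to calling such a routine with sample estimates of c and the class-conditional densities substituted for the true parameters.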
For the sake of illustration, in simulations, we will utilize either low-information priors or a simple prior construction method for microarray data, although modeling and prior construction remain important problems [20]. Let S be a sample, that is, a realization of n independent labeled points drawn from \(\mathcal {X}\). Also let \(\mathbf {x}_{i}^{y}\) denote the ith sample point in class y and n y denote the number of sample points observed from class y. Given a sample, the priors are updated to posterior densities: $$\begin{array}{*{20}l} f(\mathbf{c}, \theta \,| \, S) &\propto \pi(\mathbf{c}) \pi(\theta) \prod_{y = 0}^{M-1} \prod_{i = 1}^{n_{y}} f(\mathbf{x}_{i}^{y}, y \,| \, \mathbf{c}, \theta_{y}), \end{array} $$ where the product on the right is the usual likelihood function. Since \(f(\mathbf {x}_{i}^{y}, y \,| \, \mathbf {c}, \theta _{y}) = c_{y} \,f(\mathbf {x}_{i}^{y} \, | \, y, \theta _{y})\), we may write f(c,θ | S)=f(c | S) f(θ | S), where $$ f(\mathbf{c} \, | \, S) \propto \pi(\mathbf{c}) \prod_{y = 0}^{M-1} (c_{y})^{n_{y}} $$ $$ f(\theta \, | \,S) \propto \pi(\theta) \prod_{y = 0}^{M-1} \prod_{i = 1}^{n_{y}} f(\mathbf{x}_{i}^{y} \, | \, y, \theta_{y}) $$ are marginal posteriors of C and Θ. Thus, independence between C and Θ is preserved in the posterior. Constants of proportionality are found by normalizing the integral of posteriors to 1. When the prior density is proper, this all follows from Bayes' rule; otherwise, (7) and (8) are taken as definitions, where we require posteriors to be proper. f(c | S) depends on the prior and sampling method used. For instance, if C is known, then π(c) and f(c | S) are both point masses at the known value of C. Under separate sampling, in which the number of sample points in each class is fixed to an arbitrary value prior to sampling, f(c | S)=π(c). Under random sampling, the sample size is fixed at n and the number of points observed from each class is determined by independent draws from the feature-label distribution. Given a Dirichlet prior on C with hyperparameters α={α 0,…,α M−1}, a special case being α 0=…=α M−1=1 for a uniform distribution on Δ M−1, then under random sampling the posterior on C is still Dirichlet with hyperparameters \(\alpha _{y}^{\ast }=\alpha _{y}+n_{y}\). Defining \(\alpha _{+}^{\ast } = \sum _{i = 0}^{M-1} \alpha _{i}^{\ast }\), we also have for y≠z, $$\begin{array}{*{20}l} \text{E}\left[ C_{y} \, | \, S\right] &= \frac{\alpha_{y}^{\ast }}{\alpha_{+}^{\ast}}, \end{array} $$ $$\begin{array}{*{20}l} \text{E}\left[ {C_{y}^{2}} \, | \, S \right] &= \frac{\alpha_{y}^{\ast} \left(1 + \alpha_{y}^{\ast} \right)} {\alpha_{+}^{\ast}\left(1 + \alpha_{+}^{\ast}\right)}, \end{array} $$ $$\begin{array}{*{20}l} \text{E}\left[ C_{y} C_{z} \, |\, S \right] &= \frac{\alpha_{y}^{\ast} \alpha_{z}^{\ast}} {\alpha_{+}^{\ast}\left(1 + \alpha_{+}^{\ast}\right)}. \end{array} $$ Bayesian risk estimation We define the BRE to be the minimum mean-square error (MMSE) estimate of the expected risk or, equivalently, the conditional expectation of the expected risk given observations. Given a sample, S, and a classifier, ψ, that is not informed by θ, thanks to posterior independence between C and Θ, the BRE is given by, $$\begin{array}{*{20}l} \widehat{R}(\psi, S) &= \text{E}[R(\psi, \mathbf{C}, \Theta) \, | \,S] \\ &= \sum_{y = 0}^{M-1} \sum_{i = 0}^{M-1} \lambda(i, y) \text{E}[f(y \, | \, \mathbf{C}) \, | \, S] \text{E} [\varepsilon^{i, y}\left(\psi, \Theta \right) \, | \, S]. 
\end{array} $$ If we assume that {X,Y} and \(\mathcal {S}\) are independent given C and Θ, then $$\begin{array}{*{20}l} f(y \, | \, S) &= \int f(y \, | \, \mathbf{c}) f(\mathbf{c} \, | \, S) d\mathbf{c} \\ &= \text{E} \left[ f(y \,| \, \mathbf{C}) \, | \, S \right], \end{array} $$ $$\begin{array}{*{20}l} f\left(\mathbf{x} \, | \, y, S\right) &=\int f\left(\mathbf{x} \, | \, y, \theta_{y}\right) f\left(\theta_{y} \, | \, S\right) d\theta_{y} \\ &= \text{E}[f(\mathbf{x} \, | \, y, \Theta_{y}) \, | \, S ]. \end{array} $$ We may thus write the BRE in (12) as $$\begin{array}{*{20}l} \widehat{R}(\psi, S) &= \sum_{y = 0}^{M-1} \sum_{i = 0}^{M-1} \lambda(i, y) f(y \,| \, S) \widehat{\varepsilon }^{i, y}(\psi, S), \end{array} $$ where \(\widehat {\varepsilon }^{i, y}(\psi, S) = \text {E} [\varepsilon ^{i, y}\left (\psi, \Theta \right) \, | \, S]\) is the posterior probability of assigning a class-y point to class i, $$\begin{array}{*{20}l} \widehat{\varepsilon }^{i, y}(\psi, S) &= \text{E}\left[ \int_{\Gamma_{i}} f\left(\mathbf{x} \, | \, y, \Theta_{y} \right) d\mathbf{x}\left\vert\!\vphantom{\int_{\Gamma_{i}} f\left(\mathbf{x} \, | \, y, \Theta_{y} \right) d\mathbf{x}S}\right. S \right] \\ &= \int_{\Gamma_{i}} \text{E}\left[\,f\left(\mathbf{x} \, | \, y, \Theta_{y} \right) \, | \, S \right] d\mathbf{x} \\ &= \int_{\Gamma_{i}}f\left(\mathbf{x} \, | \,y, S\right) d\mathbf{x} \end{array} $$ $$\begin{array}{*{20}l} &= \text{P}\left(\mathbf{X} \in \Gamma_{i} \, | \, y, S\right). \end{array} $$ The second equality follows from Fubini's theorem, and in the last equality, X is a random vector drawn from the density in the integrand of (16). We also have f(y | S)=E[C y | S], which depends on the prior for C and is easily found, for instance, from (9) under Dirichlet posteriors. Comparing (3) and (15), observe that f(y | S) and f(x | y,S) play roles analogous to f(y | c) and f(x | y,θ y ) in Bayes decision theory. We thus call f(x | y,S) the effective class-y conditional density or simply the effective density. Whereas the BRE addresses overall classifier performance across the entire sample space, \(\mathcal {X}\), we may also consider classification at a fixed point, \(\mathbf {x} \in \mathcal {X}\). We define the Bayesian conditional risk estimator (BCRE) for class i∈{0,…,M−1} at point \(\mathbf {x} \in \mathcal {X}\) to be the MMSE estimate of the conditional risk: $$\begin{array}{*{20}l} \widehat{R}(i, \mathbf{x}, S) &= \text{E}[R(i, \mathbf{x}, \mathbf{C}, \Theta) \, | \,S] \\ &= \sum_{y = 0}^{M-1} \lambda(i, y) \text{E}[f(y \, | \, \mathbf{x}, \mathbf{C}, \Theta) \, | \, S ]. \end{array} $$ Again assuming {X,Y} and \(\mathcal {S}\) are independent given C and Θ, and if we further assume X is independent from C, Θ, and \(\mathcal {S}\), then, $$\begin{array}{*{20}l} \text{E}[f(y \,| \, \mathbf{x}, \mathbf{C}, \Theta) \, | \, S ] &= \int f(y \, | \, \mathbf{x}, \mathbf{c}, \theta) f(\mathbf{c}, \theta \, | \, S) d\mathbf{c}d\theta \\ &= \int f(y, \mathbf{c}, \theta \, | \, \mathbf{x}, S) d\mathbf{c}d\theta \\ &= f(y \,| \, \mathbf{x}, S). \end{array} $$ Applying Bayes' rule, $$\begin{array}{*{20}l} f(y \,| \, \mathbf{x}, S) &= \frac{f(y \,| \, S) f(\mathbf{x} \, | \, y, S)} {\sum_{y = 0}^{M-1} f(y \, | \,S) f(\mathbf{x} \, | \, y, S)}, \end{array} $$ and applying this to (18), we have $$\begin{array}{*{20}l} \widehat{R}(i, \mathbf{x}, S) &= \frac{\sum_{y = 0}^{M-1} \lambda(i, y)\, f(y \,| \, S) f(\mathbf{x} \, | \, y, S)} {\sum_{y = 0}^{M-1} f(y \,| \, S)\, f(\mathbf{x} \, | \, y, S)}. 
\end{array} $$ This is analogous to (2) in Bayes decision theory. Furthermore, given a classifier ψ, $$\begin{array}{*{20}l} \text{E}\left[ \widehat{R}(\psi(\mathbf{X}), \mathbf{X}, S)\left\vert\!\vphantom{\widehat{R}(\psi(\mathbf{X}), \mathbf{X}, S)S}\right. S \right] &= \sum_{i = 0}^{M-1} \int_{\Gamma_{i}} \widehat{R}(i, \mathbf{X}, S)\, f(\mathbf{x} \, | \, S) d\mathbf{x} \\ &= \widehat{R}(\psi, S), \end{array} $$ where \(f(\mathbf {x} \, | \, S) = \sum _{y = 0}^{M-1} f(y \, | \, S) f(\mathbf {x} \, | \, y, S)\) is the marginal distribution of x given S. Hence, the BRE of ψ is the mean of the BCRE across the sample space. For binary classification, \(\widehat {\varepsilon }^{i, y}(\psi, S)\) has been solved in closed form as components of the BEE for both discrete models under arbitrary classifiers and Gaussian models under linear classifiers, so the BRE with an arbitrary loss function is available in closed form for both of these models. When closed-form solutions for \(\widehat {\varepsilon }^{i, y}(\psi, S)\) are not available, from (17), \(\widehat {\varepsilon }^{i, y}(\psi, S)\) may be approximated for all i and a given fixed y by drawing a large synthetic sample from f(x | y,S) and evaluating the proportion of points assigned class i. The final approximate BRE can be found by plugging the approximate \(\widehat {\varepsilon }^{i, y}(\psi, S)\) for each y and i into (15). A number of practical considerations for BEEs addressed under binary classification naturally carry over to multiple classes, including robustness to false modeling assumptions [9, 10] and a prior calibration method for microarray data analysis using features discarded by feature selection and a method-of-moments approach [21]. Furthermore, classical frequentist consistency holds for BREs on fixed distributions in the parameterized family owing to the convergence of posteriors in both the discrete and Gaussian models [12]. We define the OBRC to minimize the BRE, that is, $$ \psi_{\text{OBRC}} = \text{arg} \inf_{\psi \in \mathcal{C}} \widehat{R}\left(\psi, S\right), $$ where \(\mathcal {C}\) is a family of classifiers. If \(\mathcal {C}\) is the set of all classifiers with measurable decision regions, it can be shown that ψ OBRC exists and is given for any \(\mathbf {x} \in \mathcal {X}\) by $$ \psi_{\text{OBRC}}\left(\mathbf{x}\right) = \text{arg} \min_{i \in \{0, \ldots, M-1\}} \widehat{R}(i, \mathbf{x}, S). $$ Analogously to the relationship between the BRE and expected risk, the OBRC has the same functional form as the BDR with f(y | S) substituted for the true class probability, f(y | c), and f(x | y,S) substituted for the true density, f(x | y,θ y ), for all y. Closed-form OBRC are available for any model in which f(x | y,S) has been found, including discrete and Gaussian models [13]. A number of important properties also carry over, including invariance to invertible transformations, pointwise convergence to the Bayes classifier, and robustness to false modeling assumptions. Sample-conditioned MSE of risk estimation In a typical small-sample classification scenario, a classifier is trained from data and a risk estimate found for the true risk of this classifier. A key question arises: How close is the risk estimate to the actual risk? 
A Bayesian approach answers this question with the sample-conditioned MSE of the BRE relative to the true expected risk: $$\begin{array}{*{20}l} \text{MSE} (\widehat{R }(\psi, S) \, | \, S) &= \text{E}\left[ \left(R\left(\psi, \mathbf{C}, \Theta \right) -\widehat{R }(\psi, S)\right)^{2}\bigg| {S} \right] \\ &=\text{Var}\left(R(\psi, \mathbf{C}, \Theta) \, | \,S\right). \end{array} $$ This MSE is precisely the quantity that the BRE minimizes, and it quantifies the accuracy of \(\widehat {R}\) as an estimator of R, conditioned on the actual sample in hand. Thanks to posterior independence between C and Θ, it can be decomposed: $$\begin{array}{*{20}l} &\text{MSE}(\widehat{R }(\psi, S) \, | \, S) \\ &= \left(\sum_{y = 0}^{M-1} \sum_{z = 0}^{M-1} \sum_{i = 0}^{M-1} \sum_{j = 0}^{M-1} \lambda(i, y) \lambda(j, z) \text{E}\left[ C_{y} C_{z} \, | \, S \right] \right.\\ &\qquad \times\left. \text{E}\left[ \varepsilon^{i, y}(\psi, \Theta_{y}) \varepsilon^{j, z}(\psi, \Theta_{z}) \,| \, S \right] {\vphantom{\sum_{y = 0}^{M-1}}}\right) - (\widehat{R}(\psi, S))^{2}, \end{array} $$ where we have applied (3) in (23) and noted \(\text {E}\left [\,f(y \, | \, \mathbf {C})\right. \left. f\!(z \, | \, \mathbf {C}) \, | \, S{\vphantom {\sum }} \right ] = \text {E}\left [ C_{y} C_{z} \, | \, S \right ]\). Second-order moments of C y depend on our prior for C and can be found, for instance, from (10) and (11) under Dirichlet posteriors. Hence, evaluating the conditional MSE of the BRE boils down to evaluating the BRE itself, \(\widehat {R}(\psi, S)\), and evaluating expressions of the form E[ε i,y(ψ,Θ y )ε j,z(ψ,Θ z ) | S]. Furthermore, if we additionally assume Θ 0,…,Θ M−1 are pairwise independent, then when y≠z, $$\begin{array}{*{20}l} \text{E} \left[ \varepsilon^{i, y}(\psi, \Theta_{y}) \varepsilon^{j, z}(\psi, \Theta_{z}) \, | \, S \right] = \,\widehat{\varepsilon}^{i, y}(\psi, S) \widehat{\varepsilon}^{j, z}(\psi, S), \end{array} $$ where \(\widehat {\varepsilon }^{i, y}(\psi, S)\), given in (16), is a component of the BRE. The conditional MSE of an arbitrary risk estimate, \(\widehat {R }_{\bullet }(\psi, S)\), is also of interest and may be easily found from the BRE and the MSE of the BRE: $$\begin{array}{*{20}l} & \text{MSE}(\widehat{R }_{\bullet }(\psi, S) \,| \, S) \\ &= \text{E}\left[\left(R\left(\psi, \mathbf{C}, \Theta \right) -\widehat{R }_{\bullet }(\psi, S)\right)^{2}\bigg| {S} \right] \\ & =\text{MSE}(\widehat{R }(\psi, S) \, | \,S) + (\widehat{R }(\psi, S)-\widehat{R }_{\bullet }(\psi, S))^{2}. \end{array} $$ In this form, the optimality of the BRE is clear. For binary classification with zero-one loss, the sample-conditioned MSE of the BRE converges to zero almost surely as sample size increases, for both discrete models under arbitrary classifiers and Gaussian models with independent covariances under linear classifiers [12]. Closed-form expressions for the MSE are available in these models. In this work, we extend this to multi-class discrimination under discrete models and binary linear classification under homoscedastic Gaussian models. For cases where closed-form solutions are unavailable, in the next section, we present a method to approximate the MSE. Efficient computation The following new interpretation for \(\text {E} \left [ \varepsilon ^{i, y}(\psi, \Theta _{y}) \right. \left. \varepsilon ^{j, z} (\psi, \Theta _{z}) \, | \, S \vphantom {\sum {}}\right ]\) is useful in both deriving analytic forms for and approximating the MSE. 
From (4), $$\begin{array}{*{20}l} &\text{E}\left[ \varepsilon^{i, y}(\psi, \Theta_{y}) \varepsilon^{j, z}(\psi, \Theta_{z}) \, | \, S \right] \\ &= \int_{\mathcal{T}} \int_{\Gamma_{i}} f(\mathbf{x} \, | \, y, \theta_{y}) d\mathbf{x} \int_{\Gamma_{j}} f(\mathbf{w}\,| \, z, \theta_{z}) d\mathbf{w} f(\theta \, | \, S)d\theta \\ &= \int_{\Gamma_{i}} \int_{\Gamma_{j}} \int_{\mathcal{T}} f(\mathbf{x} \, | \, y, \theta_{y}) f(\mathbf{w} \, | \, z, \theta_{z}) f(\theta \, | \, S) d\theta d\mathbf{w} d\mathbf{x}, \end{array} $$ where we have again applied Fubini's theorem. Further, we may write $$\begin{array}{*{20}l} &\text{E}\left[ \varepsilon^{i, y}(\psi, \Theta_{y}) \varepsilon^{j, z}(\psi, \Theta_{z}) \,| \, S \right] \\ &\qquad\qquad= \int_{\Gamma_{i}} \int_{\Gamma_{j}} f(\mathbf{x}, \mathbf{w} \, | \,y, z, S) d\mathbf{w} d\mathbf{x} \end{array} $$ $$\begin{array}{*{20}l} &\qquad\qquad= \text{P}\left(\mathbf{X} \in \Gamma_{i} \cap \mathbf{W} \in \Gamma_{j} \, | \,y, z, S\right), \end{array} $$ where X and W are random vectors drawn from an effective joint density, defined using similar independence assumptions as in (14): $$\begin{array}{*{20}l} f(\mathbf{x}, \mathbf{w} \,| \, y, z, S) &= \int f(\mathbf{x} \, | \, y, \theta_{y}) f(\mathbf{w} \, | \, z, \theta_{z}) f(\theta \, | \, S) d\theta. \end{array} $$ The marginal densities of X and W under f(x,w | y,z,S) are precisely the effective density, i.e., $$\begin{array}{*{20}l} &\int_{\mathcal{X}} f(\mathbf{x}, \mathbf{w} \,| \, y, z, S) d\mathbf{w} \\ &= \int_{\mathcal{X}} \int_{\mathcal{T}} f(\mathbf{x} \,| \, y, \theta_{y}) f(\mathbf{w} \, | \, z, \theta_{z}) f(\theta \, | \, S) d\theta d\mathbf{w} \\ &= \int_{\mathcal{T}} f(\mathbf{x} \, | \, y, \theta_{y}) \int_{\mathcal{X}} f(\mathbf{w} \, | \, z, \theta_{z}) d\mathbf{w} f(\theta \, | \,S) d\theta \\ &= \int_{\mathcal{T}_{y}} f(\mathbf{x} \, | \, y, \theta_{y}) f(\theta_{y} \, | \, S) d\theta_{y} \\ &= f(\mathbf{x} \, | \, y, S), \end{array} $$ where f(θ y | S) is the marginal posterior density of Θ y . Further, we have an effective conditional density of W given X, $$\begin{array}{*{20}l} f(\mathbf{w} \,| \, \mathbf{x}, y, z, S) &= \frac{f(\mathbf{x}, \mathbf{w} \, | \, y, z, S)}{f(\mathbf{x} \, | \, y, S)} \\ &= \int f(\mathbf{w} \, | \, z, \theta_{z}) \frac{f(\mathbf{x} \,| \, y, \theta_{y}) f(\theta \, | \, S)} {\int f(\mathbf{x} \, | \, y, \theta_{y}^{\prime}) f(\theta_{y}^{\prime} \, | \, S) d\theta_{y}^{\prime}} d\theta \\ &= \int f(\mathbf{w} \, | \, z, \theta_{z}) f(\theta \, | \, S \cup \{\mathbf{x}, y\}) d\theta \\ &= f(\mathbf{w} \, | \, z, S \cup \{\mathbf{x}, y\}), \end{array} $$ where we have used the fact that the fractional term in the integrand of the second equality is of the same form as the posterior defined in (8), updated with a new independent sample point with feature vector x and class y. Hence, the effective joint density may be easily found, once the effective density is known. Furthermore, from (29), we may approximate E[ε i,y(ψ,Θ y )ε j,z(ψ,Θ z ) | S] by drawing a large synthetic sample from f(x | y,S), drawing a single point, w, from the effective conditional density f(w | z,S∪{x,y}) for each x, and evaluating the proportion of pairs, (x,w), for which x∈Γ i and w∈Γ j . Additionally, since x is marginally governed by the effective density, from (17) we may approximate \(\widehat {\varepsilon }^{i,y}(\psi, S)\) by evaluating the proportion of x in Γ i . 
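The sampling scheme just described reduces to a few lines of code. Below is an illustrative numpy sketch, assuming the caller supplies a trained classifier psi and model-specific samplers for the effective density and the effective conditional density; all three callables, and their names, are our own placeholders. It returns Monte Carlo approximations of (17) and (29), from which (15) and (24) follow.

```python
import numpy as np

def approx_moments(psi, sample_eff, sample_eff_cond, M, n_draws=100_000):
    """Monte Carlo approximation of eps_hat^{i,y}(psi, S) in (17) and of
    E[eps^{i,y}(psi, Theta_y) eps^{j,z}(psi, Theta_z) | S] in (29).

    psi(X)                   -> predicted labels for the rows of X
    sample_eff(y, n)         -> n draws from the effective density f(x | y, S)
    sample_eff_cond(X, y, z) -> one draw per row of X from f(w | z, S u {x, y})
    """
    eps_hat = np.zeros((M, M))            # eps_hat[i, y]
    second = np.zeros((M, M, M, M))       # second[i, y, j, z]
    for y in range(M):
        X = sample_eff(y, n_draws)        # X ~ f(x | y, S)
        ix = psi(X)
        for i in range(M):
            eps_hat[i, y] = np.mean(ix == i)
        for z in range(M):
            W = sample_eff_cond(X, y, z)  # W | X ~ f(w | z, S u {x, y})
            jw = psi(W)
            for i in range(M):
                for j in range(M):
                    second[i, y, j, z] = np.mean((ix == i) & (jw == j))
    return eps_hat, second

def bre(lam, fy, eps_hat):
    """BRE of (15): sum over i, y of lam(i, y) f(y | S) eps_hat^{i,y}."""
    return float(np.sum(np.asarray(lam) * np.asarray(fy)[None, :] * eps_hat))

def conditional_mse(lam, E_CC, second, bre_value):
    """Sample-conditioned MSE of the BRE from (24); E_CC[y, z] = E[C_y C_z | S],
    e.g., from (10) and (11) under Dirichlet posteriors."""
    lam = np.asarray(lam)
    quad = np.einsum("iy,jz,yz,iyjz->", lam, lam, np.asarray(E_CC), second)
    return float(quad - bre_value ** 2)
```

When Θ 0 ,…,Θ M−1 are pairwise independent and y≠z, the product form (25) may of course be substituted for the sampled second moment.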
Evaluating the OBRC, BRE, and conditional MSE requires obtaining \(\text{E}[C_{y} \,|\, S]\), \(\text{E}[C_{y}^{2} \,|\, S]\), and \(\text{E}[C_{y} C_{z} \,|\, S]\) based on the posterior for C and finding the effective density, f(x | y,S), and the effective joint density, f(x,w | y,z,S), based on the posterior for Θ. At a fixed point, x, one may then evaluate the posterior probability of each class, f(y | x,S), from (19) and the BCRE from (20). The OBRC is then found from (22) or, equivalently, by choosing the class, i, that minimizes \(\sum _{y = 0}^{M-1} \lambda (i, y) \text {E}[C_{y} \, | \, S] f(\mathbf {x} \, | \, y, S)\). For any classifier, the BRE is given by (15) with \(\widehat {\varepsilon }^{i,y}(\psi, S)\) given by (16) (or equivalently (17)) using the effective density, f(x | y,S). The MSE of the BRE is then given by (24), where \(\text{E}[\varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,z}(\Theta_{z}) \,|\, S]\) is given by (25) when \(\Theta_{0}, \ldots, \Theta_{M-1}\) are pairwise independent and y≠z, and is otherwise found from (28) (or equivalently (29)) using the effective joint density, f(x,w | y,z,S). The MSE of an arbitrary risk estimator can also be found from (26) using the BRE and the MSE of the BRE. We summarize these tools for several discrete and Gaussian models in Appendices 1 (Discrete models), 2 (Gaussian models), and 3 (Effective joint density lemma) by providing the effective density, the effective joint density (or a related density), \(\widehat {\varepsilon }^{i,y}(\psi, S)\), and \(\text{E}[\varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,z}(\Theta_{z}) \,|\, S]\). Simulation setup and results In this section, we examine several synthetic data simulations, where random distributions and samples are generated from a low-information prior, and demonstrate the performance gain and optimality of Bayesian methods within the Bayesian framework. We also examine performance with informed priors in two real datasets. Classification rules We consider five classification rules: OBRC, linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), linear support vector machine (L-SVM), and radial basis function SVM (RBF-SVM). We implement OBRC under Gaussian models and use built-in MATLAB functions to implement LDA and QDA. For a collection of binary-labeled training sample points, an SVM classifier finds a maximal-margin hyperplane based on a well-behaved optimization objective function and a set of constraints. When the data are not perfectly linearly separable, introducing slack variables in the optimization procedure leads to soft-margin classifiers for which mislabeled sample points are allowed. The resulting hyperplane in the feature space is called L-SVM. Alternatively, the underlying feature space can be transformed to a higher-dimensional space where the data become linearly separable. The equivalent classifier back in the original feature space will generally be non-linear [22, 23]. When the kernel function is a Gaussian radial basis function, we call the corresponding classifier RBF-SVM. We used the package LIBSVM, which, by default, implements a one-versus-one approach for multi-class classification [24]. Since SVM classifiers optimize relative to their own objective function (for example, hinge loss), rather than expected risk, we exclude them from our analysis when using a non-zero-one loss function. For all classification rules, we calculate the true risk defined in (3) and (4).
We find the exact value when a formula is available; otherwise, we use a test sample of at least 10,000 points generated from the true feature-label distributions, stratified relative to the true class prior probabilities. This yields an approximation of the true risk with RMS \(\leq 1/\sqrt {4 \times 10,000} = 0.005\) [8]. Risk estimation rules We consider four risk estimation methods: BRE, 10-fold cross-validation (CV), leave-one-out (LOO), and 0.632 bootstrap (boot). When we do not have closed-form formulae for the BRE, we approximate it by drawing a sample of 1,000,000 points from the effective density of each class. In CV, the training data, S, is randomly partitioned into 10 stratified folds, \(S^{(i)}\) for i=1,2,…,10. Each fold, in turn, is held out of the classifier design step as the test set, and a surrogate classifier is designed on the remaining folds, \(S \setminus S^{(i)}\), as the training set. The risk of each surrogate classifier is estimated using \(S^{(i)}\). The resulting risk values from all surrogate classifiers are then averaged to obtain the CV estimate. To reduce the "internal variance" arising from random selection of the partitions, we average the CV estimates over 10 repetitions (10 randomly generated partitions of S). If the number of folds equals the sample size, n, then each fold consists of a single point and we obtain the LOO risk estimate. Bootstrap risk estimators are calculated using bootstrap samples of size n, where in each bootstrap sample, points are drawn, with replacement, from the original training dataset. A surrogate classifier is designed on the bootstrap sample and its risk estimated using the sample points left out of the bootstrap sample. The basic bootstrap estimator is the expectation of this risk with respect to the bootstrap sampling distribution. The expectation is usually approximated by Monte Carlo repetitions (100 in our simulations) over a number of independent bootstrap samples. This estimate is known to be high-biased. To reduce bias, the 0.632 bootstrap reports a linear combination of this estimate, with weight 0.632, and the low-biased resubstitution risk estimate, with weight 0.368 [25–27]. Under linear classification, the sample-conditioned MSE from (24) is found analytically by evaluating \(\text{E}[\varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,y}(\Theta_{y}) \,|\, S]\) from (52), plugging in the appropriate values for k and \(\gamma^{2}\) depending on the covariance model; \(\text{E}[\varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,z}(\Theta_{z}) \,|\, S]\) for z≠y is found via (25) for independent and (53) for homoscedastic covariance models, again plugging in the appropriate values for k and \(\gamma^{2}\). When analytic forms are not available, the sample-conditioned MSE is approximated as follows. In independent covariance models, for each sample point generated to approximate the BRE, we draw a single point from the effective conditional density with y=z, giving 1,000,000 sample point pairs to approximate \(\text{E}[\varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,y}(\Theta_{y}) \,|\, S]\) for each y. In homoscedastic covariance models, to find the BRE, we have 1,000,000 points available from the effective density for each y. We generate an additional 1,000,000×(M−1) synthetic points for each y, thus allocating 1,000,000 synthetic points to each combination of y and z. For each of these points, we draw a single point from the effective conditional density of a class-z point given a class-y point. For each y and z, the corresponding 1,000,000 point pairs are used to approximate \(\text{E}[\varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,z}(\Theta_{z}) \,|\, S]\).
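For reference, the resampling estimators described above admit a compact implementation. The sketch below (numpy; train_fn, which maps a training set to a classifier, and the loss matrix lam are illustrative placeholders) implements repeated stratified CV and the 0.632 bootstrap essentially as just described; setting folds equal to the sample size recovers LOO.

```python
import numpy as np

def empirical_risk(lam, y_true, y_pred):
    """Average loss lam[i, y] of predicted labels i against true labels y."""
    return float(np.mean(np.asarray(lam)[y_pred, y_true]))

def cv_risk(train_fn, lam, X, y, folds=10, repeats=10, seed=0):
    """Repeated stratified k-fold CV risk estimate; folds=len(y) gives LOO
    (use repeats=1 there, since the LOO partition is deterministic)."""
    rng = np.random.default_rng(seed)
    risks = []
    for _ in range(repeats):
        fold_of = np.empty(len(y), dtype=int)
        for cls in np.unique(y):          # deal shuffled class indices to folds
            idx = rng.permutation(np.flatnonzero(y == cls))
            fold_of[idx] = np.arange(len(idx)) % folds
        for f in range(folds):
            test = fold_of == f
            if not test.any():
                continue
            psi = train_fn(X[~test], y[~test])    # surrogate classifier
            risks.append(empirical_risk(lam, y[test], psi(X[test])))
    return float(np.mean(risks))

def boot632_risk(train_fn, lam, X, y, B=100, seed=0):
    """0.632 bootstrap: 0.632 * out-of-bag bootstrap risk (high-biased)
    plus 0.368 * resubstitution risk (low-biased)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    oob = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)         # draw with replacement
        out = np.setdiff1d(np.arange(n), idx)    # points left out of the bootstrap
        if out.size:
            psi = train_fn(X[idx], y[idx])
            oob.append(empirical_risk(lam, y[out], psi(X[out])))
    psi_full = train_fn(X, y)
    resub = empirical_risk(lam, y, psi_full(X))
    return 0.632 * float(np.mean(oob)) + 0.368 * resub
```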
Synthetic data In synthetic data simulations, we assume all classes are equally likely and that the data is stratified, giving an equal number of sample points from each class. We further assume Gaussian feature-label distributions. Table 1 lists all models and prior distributions used. We implement both a low number of features (D=2) and a high number of features (D=20), with independent arbitrary, homoscedastic arbitrary, and independent identity covariance priors. Under each type of prior, we consider classification under a non-zero-one loss function for binary classification and a zero-one loss function for multiple classes. For each prior model and a fixed sample size, we evaluate classification performance in a Monte Carlo estimation loop with 10,000 iterations. In each iteration, we follow a two-step procedure for sample generation: (1) generate random feature-label distribution parameters from the prior (each serving as the true underlying feature-label distribution) and (2) generate a random sample of size n from this fixed feature-label distribution. The generated random sample is used to train classifiers and evaluate their true risk. In the non-zero-one loss case, we also estimate risk and evaluate its accuracy using the performance metrics discussed earlier. We vary the sample size throughout and analyze its effect on performance. Table 1 Synthetic data classification settings and prior models Real data We consider two real datasets. The first is a breast cancer dataset containing 295 sample points [28], which will be used to demonstrate binary classification under a non-zero-one loss function. The second is composed of five different cancer types from The Cancer Genome Atlas (TCGA) project, which demonstrates multi-class classification under zero-one loss. In all real-data simulations, we assume that c y is known and equal to the proportion of class-y sample points in the whole dataset. We form a Monte Carlo estimation loop to evaluate classification and risk estimation, where we iterate 1000 times with the breast cancer dataset and 10,000 times with the TCGA dataset. In each iteration, we obtain a stratified training sample of size n, i.e., we select a subset of the original dataset, keeping the proportion of points in class y as close as possible to c y for every y. We use these training points to design several classifiers, while the remaining sample points are used as holdout data to approximate the true risk of each designed classifier. For the breast cancer dataset, we also use the training data to estimate risk and find the sample-conditioned MSE of the BRE. We vary sample size and analyze its effect on performance. To implement Bayesian methods, we assume Gaussian distributions with arbitrary independent covariances in all real-data simulations. We calibrate hyperparameters, defined in Appendix Appendix 2: Gaussian models, using a variant of the method-of-moments approach presented in [21]. In particular, we construct a calibration dataset from features not used to train the classifier and set ν y =s y /t y , \(\kappa _{y} = 2({s_{y}^{2}}/u_{y})\,+\,D\,+\,3\), m y =[m y ,…,m y ], and S y =(κ y −D−1)s y I D , where m y is the mean of the means of features among class-y points of the calibration dataset, and s y is the mean of the variances of features in class y. t y is the variance of the means of features in class y, where the 10 % of the means with the largest absolute value are discarded. 
Likewise, u y is the variance of the variances of features in class y, where the 10 % of the variances with the largest value are discarded. In the breast cancer data, 180 patients are assigned to class 0 (good prognosis) and 115 to class 1 (bad prognosis) in a 70-feature prognosis profile. A correct prognosis is associated with 0 loss, wrongly declaring a good prognosis incurs a loss of 1, and wrongly declaring a bad prognosis incurs a loss of 2. We use pre-selected features for classifier training, originally published in [29]. When D=2, these features are CENPA and BBC3, and when D=5, we also add CFFM4, TGFB3, and DKFZP564D0462. Rather than discard the 70 − D features not used for classification, we use these features to calibrate priors using the method-of-moments approach described above. For our second dataset, we downloaded level-3 microarray data from the TCGA data portal for five different kinds of cancers: breast invasive carcinoma (BRCA) with 593 sample points, colon adenocarcinoma (COAD) with 174 sample points, kidney renal clear cell carcinoma (KIRC) with 72 sample points, lung squamous cell carcinoma (LUSC) with 155 sample points, and ovarian serous cystadenocarcinoma (OV) with 562 sample points. We pooled all the sample points into a single dataset, removed features with missing values in any cancer type (17,016 features remained out of 17,814), and quantile-normalized the data with the median of the ranked values. We pre-select features for classifier training and prior calibration using the full dataset and one of two methods, which both operate in two phases: in phase 1, we pass D+100 features, and in phase 2, we select D features from those passing phase 1. The D features passing both phases are used for classifier training, and the features passing phase 1 but not phase 2 are used for prior calibration. The first feature selection method (FS-1) passes features that minimize a score evaluating separation between classes in phase 1 and selects features that minimize a score evaluating Gaussianity of the classes in phase 2. To evaluate separation between classes in phase 1, for each pair of classes, we obtain t-test p-values for each feature and rank these across all features, low p-values being assigned a lower rank, and finally, we report the rank product score for each feature over all 10 pairs of classes. To evaluate Gaussianity in phase 2, for each class, we rank Shapiro-Wilk test p-values across all features passing phase 1, high p-values being assigned a lower rank, and report the rank product score for each feature across all five classes. The second feature selection method (FS-2) passes features minimizing the rank product score from Shapiro-Wilk tests applied to all 17,016 features in phase 1, and in phase 2, we select D features from those passing phase 1 using sequential forward search (SFS) with LDA classification and resubstitution risk as the optimization criterion. Models 1 and 2 focus on the effect of risk on classification and risk estimation performance. In Fig. 1, we evaluate the performance of risk estimators and classifiers under model 1. Graphs in the left column present the mean, averaged over all 10,000 sample realizations, of the true risk and all risk estimators considered for LDA, QDA, and OBRC classification. Note for small samples of size n=20 and LDA or QDA classification, surrogate classifiers in the bootstrap risk estimator are occasionally undefined depending on the realized bootstrap sample. 
These events are thrown out so that only a subset of the original 10,000 sample realizations are used to approximate the mean bootstrap risk estimator. The graphs on the right column of Fig. 1 present the square root of the mean, averaged over all sample realizations, of the square difference between the true risk and each risk estimator, which we call the empirical RMS. The square root of the mean, averaged over all sample realizations, of the sample-conditioned MSE of the BRE from (24), which we call the Bayesian RMS, is also shown. Mean risks and RMS for model 1, three classification rules (LDA, QDA, and OBRC), and all risk estimators. a Mean risk under LDA; b RMS risk under LDA; c mean risk under QDA; d RMS risk under QDA; e mean risk under OBRC; f RMS risk under OBRC The BRE is an unbiased estimator, so the mean true risk and mean BRE curves should be aligned with enough iterations, which is observed. The empirical and Bayesian RMS both approximate the unconditional RMS so that these curves should also be aligned with enough iterations, as observed. Furthermore, the BRE is theoretically optimal in both the sample-conditioned and unconditioned RMS, and as expected, the empirical and Bayesian RMS curves for BRE under each classification rule outperform all other risk estimation rules. Thus, the BRE yields a significant improvement over classical risk estimators in terms of both bias and RMS performance within the Bayesian model. If we compare classification rules, the RMS of BRE is consistently lower for OBRC than LDA and QDA, although there is no theoretical guarantee for this. Similar curves for model 2 are provided in Fig. 2. To illustrate how the sample-conditioned MSE may be used to assess the accuracy of a risk estimate, suppose that we have observed a sample, trained a classifier, and obtained the BRE and the MSE of the BRE. For this fixed sample, but random parameters in the Bayesian model, the true risk has a mean equal to the BRE and a variance equal to the sample-conditioned MSE so that the random variable \(Z = (\widehat {R} - R)/\text {RMS}(\widehat {R}|S)\) must have zero mean and unit variance. This holds for any classification rule. In Fig. 3, we present quantile-quantile (Q-Q) plots of the sample quantiles of Z versus theoretical quantiles from a standard normal distribution. Figure 3a provides Q-Q plots with realizations of Z taken under OBRC classification and BRE risk estimation in model 1 with various sample sizes, along with a 45° reference line, and Fig. 3 b provides similar graphs for model 2. Observe that Z appears approximately standard normal, particularly under large sample sizes. Under smaller samples, Z appears more positively skewed but has approximately zero mean and unit variance. Q-Q plots for other classifiers are similar. Q-Q plots of Z under OBRC and BRE. a Model 1; b model 2 In Figs. 4 and 5, we provide examples of decision boundaries for models 3 and 4, respectively, which focus on the effect of multiple classes in two dimensions. Under model 3, where we assume independent covariances, the decision boundaries of OBRC are most similar to QDA, although they are in general of a polynomial order. Under model 4, where we assume homoscedastic covariances, OBRC is most similar to LDA, although the decision boundaries are not necessarily linear. Example decision boundaries for model 3 with multi-class classification. a LDA; b QDA; c OBRC; d L-SVM; e RBF-SVM In Fig. 
6, we present the mean and standard deviation of the true risk with respect to all sample realizations as a function of sample size for models 3 and 4. OBRC outperforms all other classification rules with respect to mean risk, as it must, since the OBRC is defined to minimize mean risk. Although there is no guarantee that OBRC should minimize risk variance, in these examples, the risk variance is lower than in all other classification rules. The performance gain is particularly significant for small samples. Consider Figs. 6 a and 6 b, where we observe that, at a sample size of 10, the risk of OBRC has a mean of about 0.16 and standard deviation of about 0.065, whereas the risk of the next best classifier, RBF-SVM, has a mean of about 0.22 and standard deviation of about 0.09. True risk statistics for models 3 and 4 and five classification rules (LDA, QDA, OBRC, L-SVM, and RBF-SVM). a Model 3, mean; b model 3, standard deviation; c model 4, mean; d model 4, standard deviation Figure 7 provides the performance of risk estimators under OBRC classification in models 5 and 6, demonstrating performance in 20 dimensions with independent scaled identity covariance priors. Settings in model 5 are designed to produce a low mean risk and model 6 a high mean risk. Graphs in the left column present the mean true risk, averaged over all 10,000 sample realizations; the center column presents empirical and Bayesian RMS curves; and the right column presents Q-Q plots of Z. As in Figs. 1 and 2, the BRE appears unbiased, the empirical and Bayesian RMS curves are aligned, and the RMS curves are optimal. From the Q-Q plots, the distribution of Z appears to be skinny-tailed even under large n, although it is approximately zero mean and unit variance. Mean risks, RMS, and Q-Q plots of Z for models 5 and 6, OBRC classification, and all risk estimators. a Mean risk under model 5; b RMS risk under model 5; c Q-Q plots of Z under model 5; d mean risk under model 6; e RMS risk under model 6; f Q-Q plots of Z under model 6 In Fig. 8, we present the mean and standard deviation of the true risk of all classifiers as a function of sample size for models 7 and 8, where model 7 is designed to produce a low mean risk and model 8 a high mean risk. OBRC again outperforms all other classification rules with respect to mean risk, as it should. There is no guarantee that OBRC should minimize risk variance, and although risk variance is lowest for OBRC in Fig. 8 b, in Fig. 8 d it is actually highest. Performance gain is particularly significant for small samples. In Figs. 9 and 10, we evaluate the performance of risk estimators and classifiers under the breast cancer dataset for D=2 and D=5, respectively. Graphs in the left column present the mean true risk and mean risk estimates, graphs in the center column present the empirical RMS of all risk estimates and the Bayesian RMS for the BRE, and graphs in the right column present the Q-Q plots of Z for various sample sizes. LDA, QDA, and OBRC are presented in the top, center, and bottom rows, respectively. Although the BRE is theoretically unbiased and minimizes RMS when averaged across random distributions in the uncertainty class, when applied to a specific dataset or distribution, we now observe a bias (in the left column) and a discrepancy between the empirical and Bayesian RMS (in the center column). 
In particular, for all classifiers under D=2 and for LDA under D=5 we observe a high bias; for QDA and OBRC under D=5 we observe a low bias; and in all cases, the Bayesian RMS lies below the empirical RMS. That being said, the empirical RMS of the BRE still outperforms that of the distribution-free resampling risk estimators (LOO, CV, and boot). Although resampling estimators are nearly unbiased, they suffer from such large variance under small samples that the BRE, despite imperfections in the Gaussianity assumption and prior construction method, may still outperform them in practice thanks to optimization. Turning to classifier performance, in these simulations, LDA appears to outperform QDA and OBRC with independent arbitrary covariances. Keep in mind that Bayesian methods are not guaranteed to be optimal in all datasets and all settings but, rather, are only optimal within the assumed model. In fact, OBRC with homoscedastic arbitrary covariances (not shown in the figures) performs as well as, or significantly better than, LDA, suggesting that covariances in this problem are approximately homoscedastic. From the Q-Q plots, Z deviates from the reference standard normal CDF, with a clear shift in the mean and sometimes the variance. For instance, under LDA classification with D=2 and n=70 (corresponding to Fig. 9 c), the mean of Z is 0.76 and the standard deviation is 1.08, and under OBRC classification with D=5 and n=70 (corresponding to Fig. 10 i), the mean of Z is −1.08 and the standard deviation is 1.49. Mean risks, RMS, and Q-Q plots of Z for the breast cancer dataset, D=2, three classification rules (LDA, QDA, and OBRC), and all risk estimators. a Mean risk under LDA; b RMS risk under LDA; c Q-Q plots of Z under LDA; d mean risk under QDA; e RMS risk under QDA; f Q-Q plots of Z under QDA; g mean risk under OBRC; h RMS risk under OBRC; i Q-Q plots of Z under OBRC Mean risks, RMS, and Q-Q plots of Z for the breast cancer dataset, D=5, three classification rules (LDA, QDA, and OBRC), and all risk estimators. a Mean risk under LDA; b RMS risk under LDA; c Q-Q plots of Z under LDA; d mean risk under QDA; e RMS risk under QDA; f Q-Q plots of Z under QDA; g mean risk under OBRC; h RMS risk under OBRC; i Q-Q plots of Z under OBRC In Fig. 11, we present the mean of the true risk with respect to random samples from the TCGA dataset, as a function of sample size, for different feature selection methods and selected feature set sizes. Due to covariance estimation problems, QDA cannot be trained for D=20 in this range of sample sizes. OBRC with calibrated priors consistently outperforms the other rules under small samples and performs robustly under large samples. These results depend on the particular features selected; note that LDA may have an advantage under FS-2, which minimizes the apparent error of LDA classifiers. True risk mean for the TCGA dataset and five classification rules (LDA, QDA, OBRC, L-SVM, and RBF-SVM). a FS-1, D=2; b FS-2, D=2; c FS-1, D=5; d FS-2, D=5; e FS-1, D=20; f FS-2, D=20 In real applications, data rarely satisfy modeling assumptions, for instance, Gaussianity, and there may be a concern that performance will suffer. Firstly, keep in mind the need to validate assumptions in the Bayesian model. For example, Gaussianity tests and homoscedasticity tests may be used to validate these underlying assumptions. Our real-data simulations demonstrate a few examples of how Gaussianity tests may be used in conjunction with Bayesian methods.
Secondly, previous works have shown that Bayesian methods are relatively robust to deviations from a Gaussianity assumption [10, 14]. This is observed, for instance, in Figs. 9 and 10. Thirdly, inference from non-informative priors may serve as a reference. The OBRC under non-informative priors and an arbitrary homoscedastic covariance model behaves similarly to LDA and under an arbitrary independent covariance model behaves similarly to QDA [13, 14]. Thus, the OBRC can be seen as unifying and optimizing these classifiers. This applies in Fig. 11, where OBRC with an appropriate covariance model and non-informative prior performs indistinguishably from LDA. The conditional MSE is also an immensely useful tool to quantify the accuracy of a risk estimator. For instance, one may employ the MSE for censored sampling by collecting batches of sample points until the sample-conditioned MSE reaches an acceptable level, and either an acceptable risk has been achieved or it has been determined that an acceptable risk cannot be achieved. Lastly, although we provide analytic solutions under discrete and Gaussian models, the basic theory for this work does not require these assumptions. For instance, recent work in [30] develops a Bayesian Poisson model for RNA-Seq data, where Bayesian error estimators and optimal Bayesian classifiers are obtained using Markov chain Monte Carlo (MCMC) techniques. We have extended optimal Bayesian classification theory to multiple classes and arbitrary loss functions, giving rise to Bayesian risk estimators, the sample-conditioned MSE for arbitrary risk estimators, and optimal Bayesian risk classifiers. We have developed a new interpretation of the conditional MSE based on effective joint densities, which is useful in developing analytic forms and approximations for the conditional MSE. We also provide new analytic solutions for the conditional MSE under homoscedastic covariance models. Simulations based on several synthetic Gaussian models and two real microarray datasets also demonstrate good performance relative to existing methods. Appendix 1: Discrete models Consider a discrete sample space, \(\mathcal {X} = \{1, 2, \ldots, b\}\). Let \({p_{x}^{y}}\) be the probability that a point from class y is observed in bin \(x\in \mathcal {X}\), and let \({U_{x}^{y}}\) be the number of sample points observed from class y in bin x. Note \(n_{y}=\sum _{x=1}^{b}{U_{x}^{y}}\). The discrete Bayesian model defines \(\Theta _{y}=[\!{P_{1}^{y}},\ldots, {P_{b}^{y}}]\), with parameter space \(\mathcal {T}_{y} = \Delta ^{b-1}\). For each y, we define Dirichlet priors on Θ y with hyperparameters \(\mathbf {\alpha }^{y} = \{{\alpha _{1}^{y}}, \ldots, {\alpha _{b}^{y}}\}\): $$\pi (\theta_{y}) \propto \prod_{x=1}^{b}({p_{x}^{y}})^{{\alpha_{x}^{y}}-1}. $$ Assume that Θ y are mutually independent. Uniform priors are achieved when \({\alpha _{x}^{y}}=1\) for all x and y. Given data, the posteriors are again Dirichlet with updated hyperparameters, \(\alpha _{x}^{y\ast } = {\alpha _{x}^{y}} + {U_{x}^{y}}\) for all x and y. For proper posteriors, \(\alpha _{x}^{y\ast }\) must all be positive for all x and y. The effective density is thus given by: $$\begin{array}{*{20}l} f(x \, | \, y, S) &= {\text{E}[P_{x}^{y}} \, | \, S] = \frac{\alpha_{x}^{y\ast}}{\alpha_{+}^{y\ast}}, \end{array} $$ where \(\alpha _{+}^{y\ast } = \sum _{x = 1}^{b} \alpha _{x}^{y\ast }\). 
It follows that $$\begin{array}{*{20}l} \widehat{\varepsilon }^{i,y}\left(\psi, S\right) &= \sum_{x=1}^{b} \frac{\alpha_{x}^{y\ast}}{\alpha_{+}^{y\ast}} I_{\psi(x)=i}. \end{array} $$ The effective joint density, f(x,w | y,z,S), for y=z, can be found from properties of Dirichlet distributions. For any y∈{0,…,M−1} and \(x, w \in \mathcal {X}\), we have $$\begin{array}{*{20}l} f(x, w \, | \,y, y, S) &= \text{E}\left[ {P_{x}^{y}} {P_{w}^{y}} \,| \, S \right] = \frac{\alpha_{x}^{y\ast} (\alpha_{w}^{y\ast} + \delta_{xw})}{\alpha_{+}^{y\ast}\left(\alpha_{+}^{y\ast} + 1\right)}, \end{array} $$ where δ xw equals 1 if x=w and 0 otherwise. From (28), $$\begin{array}{*{20}l} &\text{E}\left[ \varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,y}(\Theta_{y}) \, | \, S\right] \\ &\qquad = \sum_{x = 1}^{b} \sum_{w = 1}^{b} \frac{\alpha_{x}^{y\ast} \left(\alpha_{w}^{y\ast} + \delta_{xw} \right)} {\alpha_{+}^{y\ast} (\alpha_{+}^{y\ast} + 1)} I_{\psi(x) = i} I_{\psi(w) = j} \\ &\qquad = \frac{\widehat{\varepsilon}^{i,y}(\psi, S) \left(\alpha_{+}^{y\ast} \widehat{\varepsilon}^{j,y}(\psi, S) + \delta_{ij}\right) } {\alpha_{+}^{y\ast} + 1}. \end{array} $$ When y≠z, \(\text{E}\left[ \varepsilon^{i,y}(\Theta_{y}) \varepsilon^{j,z}(\Theta_{z}) \, | \, S\right]\) may be found from (25). Appendix 2: Gaussian models Suppose \(\mathcal {X}\) is a D-dimensional space in which each point is represented by a column vector and each class-y conditional distribution is Gaussian with mean vector μ y and covariance matrix Σ y . We will consider independent covariance models, where the Σ y are mutually independent prior to observing the data, and homoscedastic covariance models, where the Σ y are identical for all y [13]. We will also consider three structures for the covariance: known, scaled identity, and arbitrary. Throughout, we use μ y and Σ y to denote both random quantities and their realizations, and we use Σ y ≻0 to denote a valid covariance matrix, i.e., a symmetric, positive definite matrix. We will find analytic forms for the BRE and conditional MSE under binary linear classifiers, ψ, of the form $$ \psi(\mathbf{x}) = \left\{ \begin{array}{ll}0~~\text{if}~ g(\mathbf{x}) \leq 0,\\ 1 ~~\text{otherwise}{,} \end{array} \right. $$ where g(x)=a T x+b for some vector a and scalar b, and a superscript T denotes matrix transpose. Known covariance Assume that Σ y ≻0 is known so that Θ y =μ y with parameter space \(\mathcal {T}_{y} = \mathbb {R}^{D}\). We assume the μ y s are mutually independent and use the following prior: $$ \pi (\mathbf{\mu}_{y}) \propto \left|\mathbf{\Sigma}_{y}\right|^{-\frac{1}{2}} \exp \left(- \frac{\nu_{y}}{2}\left(\mathbf{\mu}_{y} -\mathbf{m}_{y}\right)^{T} \mathbf{\Sigma}_{y}^{-1}\left(\mathbf{\mu}_{y} -\mathbf{m}_{y}\right) \right), $$ with hyperparameters \(\nu _{y} \in \mathbb {R}\) and \(\mathbf {m}_{y} \in \mathbb {R}^{D}\), where |·| denotes a determinant. When ν y >0, this is a Gaussian distribution with mean m y and covariance Σ y /ν y . Under this model, the posterior is of the same form as the prior, with updated hyperparameters $$\begin{array}{*{20}l} \nu_{y}^{\ast} &= \nu_{y} + n_{y}, \\ \mathbf{m}_{y}^{\ast} &= \mathbf{m}_{y} + n_{y} \frac{\widehat{\mathbf{\mu} }_{y} - \mathbf{m}_{y}}{\nu_{y}+n_{y}}, \end{array} $$ where \(\widehat {\mathbf {\mu } }_{y}\) is the usual sample mean of the training points in class y. We require \(\nu _{y}^{\ast } > 0\) for a proper posterior.
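As an illustration, the conjugate update (35) is a one-liner; the sketch below (numpy, with illustrative names) returns the posterior hyperparameters for one class.

```python
import numpy as np

def update_known_cov(nu_y, m_y, X_y):
    """Conjugate update (35) for the known-covariance model: returns the
    posterior hyperparameters (nu_y*, m_y*) for one class, given the n_y
    training points in the rows of X_y."""
    n_y = X_y.shape[0]
    mu_hat = X_y.mean(axis=0)                      # sample mean of class y
    nu_star = nu_y + n_y
    m_star = m_y + n_y * (mu_hat - m_y) / nu_star  # shrink mu_hat toward m_y
    return nu_star, m_star
```

Note that \(\mathbf{m}_{y}^{\ast}\) is a convex combination of the prior mean and the sample mean, with weights \(\nu_{y}/\nu_{y}^{\ast}\) and \(n_{y}/\nu_{y}^{\ast}\), and the posterior of μ y is again Gaussian with mean \(\mathbf{m}_{y}^{\ast}\) and covariance \(\mathbf{\Sigma}_{y}/\nu_{y}^{\ast}\).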
The effective density was shown in [13] to be the following Gaussian distribution: $$ f(\mathbf{x} \, | \, y, S) \sim \mathcal{N} \left(\mathbf{m}^{\ast}_{y}, \frac{\nu^{\ast}_{y} + 1}{\nu^{\ast}_{y}} \mathbf{\Sigma}_{y}\right). $$ To find the BRE for a linear classifier, let P=(−1)i g(X). From the effective density, $$ f(p \, | \, y, S) \sim \mathcal{N} \left((-1)^{i} g(\mathbf{m}^{\ast}_{y}), \frac{\nu^{\ast}_{y} + 1}{\nu^{\ast}_{y}} \mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a}\right). $$ $$\begin{array}{*{20}l} \widehat{\varepsilon}^{i,y}(\psi, S) &= \text{P}((-1)^{i} g(\mathbf{X}) \leq 0 \, | \,y, S) \\ &= \text{P}(P \leq 0 \, | \, y, S) \\ &= \Phi\left(- \frac{(-1)^{i} g(\mathbf{m}^{\ast}_{y})}{\sqrt{\mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a}}} \sqrt{\frac{\nu^{\ast}_{y}}{\nu^{\ast}_{y} + 1}} \right), \end{array} $$ where Φ(x) is the standard normal CDF. This result was also found in [10]. To find the MSE under linear classification, note f(w | x,y,z,S) is of the same form as f(x | y,S) with posterior hyperparameters updated with {x,y} as a new sample point. Hence, for y=z, $$ f(\mathbf{w} \, | \, \mathbf{x}, y, y, S) \sim \mathcal{N} \left(\mathbf{m}^{\ast}_{y} + \frac{\mathbf{x} - \mathbf{m}^{\ast}_{y}}{\nu^{\ast}_{y} + 1}, \frac{\nu^{\ast}_{y} + 2}{\nu^{\ast}_{y} + 1} \mathbf{\Sigma}_{y} \right), $$ and the effective joint density is thus given by $$ f(\mathbf{x}, \mathbf{w} \, | \, y, y, S) \sim \mathcal{N} \left(\left[ \begin{array}{ll} \mathbf{m}^{\ast}_{y} \\ \mathbf{m}^{\ast}_{y} \\ \end{array} \right],\left[ \begin{array}{ll} \frac{\nu^{\ast}_{y} + 1}{\nu^{\ast}_{y}} \mathbf{\Sigma}_{y} & \frac{1}{\nu^{\ast}_{y}} \mathbf{\Sigma}_{y} \\ \frac{1}{\nu^{\ast}_{y}} \mathbf{\Sigma}_{y} & \frac{\nu^{\ast}_{y} + 1}{\nu^{\ast}_{y}} \mathbf{\Sigma}_{y} \\ \end{array} \right] \right). $$ Now let Q=(−1)j g(W). Since X and W are governed by the effective joint density in (40): $$\begin{array}{*{20}l} &f(p, q\,| \, y, y, S) \sim \\ &\mathcal{N} \left(\left[ \begin{array}{ll} (-1)^{i} g(\mathbf{m}^{\ast}_{y}) \\ (-1)^{j} g(\mathbf{m}^{\ast}_{y}) \\ \end{array} \right], \left[ \begin{array}{ll} \frac{\nu^{\ast}_{y} + 1}{\nu^{\ast}_{y}} \mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a} & \frac{(-1)^{i+j}}{\nu^{\ast}_{y}} \mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a} \\ \frac{(-1)^{i+j}}{\nu^{\ast}_{y}} \mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a} & \frac{\nu^{\ast}_{y} + 1}{\nu^{\ast}_{y}} \mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a} \\ \end{array} \right] \right). \end{array} $$ Hence, from (29), we have $${} {\fontsize{8.1}{12}{\begin{aligned} \text{E} \left[ \varepsilon^{i,y}(\psi, \Theta_{y}) \varepsilon^{j,y}(\psi, \Theta_{y}) \, | \, S \right] &= \text{P}(P \leq 0 \cap Q \leq 0 \,| \, y, S)\\ &= \Phi\left(- \frac{(-1)^{i} g(\mathbf{m}^{\ast}_{y})}{\sqrt{\mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a}}} \sqrt{\frac{\nu^{\ast}_{y}}{\nu^{\ast}_{y} + 1}}, \right.\\ &\quad\left.- \frac{(-1)^{j} g(\mathbf{m}^{\ast}_{y})}{\sqrt{\mathbf{a}^{T} \mathbf{\Sigma}_{y} \mathbf{a}}} \sqrt{\frac{\nu^{\ast}_{y}}{\nu^{\ast}_{y} + 1}}, \frac{(-1)^{i+j}}{\nu_{y}^{\ast}+1}\right), \end{aligned}}} $$ where Φ(x,y,ρ) is the joint CDF of two standard normal random variables with correlation ρ. When y≠z, E[ε i,y(ψ,Θ y )ε j,z(ψ,Θ z ) | S] is found from (25). Homoscedastic arbitrary covariance Assume Θ y =[μ y ,Σ], where the parameter space of μ y is \(\mathbb {R}^{D}\) and the parameter space of Σ consists of all symmetric positive definite matrices. 
Further, assume a conjugate prior in which the μ y s are mutually independent given Σ so that $$ \pi (\theta)=\left(\prod_{y = 0}^{M-1} \pi (\mathbf{\mu}_{y} \, | \, \mathbf{\Sigma})\right)\pi (\mathbf{\Sigma}), $$ where π(μ y | Σ) is as in (34) with hyperparameters \(\nu _{y} \in \mathbb {R}\) and \(\mathbf {m}_{y} \in \mathbb {R}^{D}\), and $$\begin{array}{*{20}l} \pi (\mathbf{\Sigma})& \propto \left|\mathbf{\Sigma}\right|^{-\frac{\kappa +D+1}{2}} \exp \left(- \frac{1}{2} \text{trace} \left(\mathbf{S} \mathbf{\Sigma}^{-1}\right) \right), \end{array} $$ with hyperparameters \(\kappa \in \mathbb {R}\) and S, a symmetric D×D matrix. If ν y >0, then π(μ y | Σ) is Gaussian with mean m y and covariance Σ/ν y . If κ>D−1 and S≻0, then π(Σ) is an inverse-Wishart distribution with hyperparameters κ and S. If in addition κ>D+1, the mean of Σ exists and is given by E[Σ]=S/(κ−D−1); thus, S determines the shape of the expected covariance. The posterior is of the same form as the prior with the same updated hyperparameters given by (35) and $$\begin{array}{*{20}l} \kappa^{\ast} &=\kappa +n, \\ \mathbf{S}^{\ast} &= \mathbf{S} + \sum_{y = 0}^{M-1} (n_{y}-1)\widehat{\mathbf{\Sigma} }_{y} + \textstyle\frac{\nu_{y}n_{y}}{\nu_{y}+n_{y}}(\widehat{\mathbf{\mu} }_{y}-\mathbf{m}_{y})(\widehat{\mathbf{\mu} }_{y}-\mathbf{m}_{y})^{T}, \end{array} $$ where \(\widehat {\mathbf {\Sigma } }_{y}\) is the usual sample covariance of training points in class y (\(\widehat {\mathbf {\Sigma }}_{y} = 0\) if n y ≤1). The posteriors are proper if \(\nu _{y}^{\ast } >0\), κ ∗>D−1 and S ∗≻0. The effective density for class y is multivariate student t with k=κ ∗−D+1 degrees of freedom, location vector \(\mathbf {m}_{y}^{\ast }\), and scale matrix \(\frac {\nu _{y}^{\ast }+1}{k \nu _{y}^{\ast }} \mathbf {S}^{\ast }\) [13]. In other words, $$ f(\mathbf{x} \, | \, y, S) \sim t \left(k, \mathbf{m}_{y}^{\ast}, \frac{\nu_{y}^{\ast}+1}{k \nu_{y}^{\ast}} \mathbf{S}^{\ast} \right). $$ To find the BRE under a binary linear classifier of the form (33), let P=(−1)i g(X). Since P is an affine transformation of a multivariate student t random variable, it has a non-standardized student t distribution [31]: $$ f(p \, | \, y, S) \sim t \left(k, m_{iy}, \frac{\nu_{y}^{\ast}+1}{k \nu_{y}^{\ast}} \gamma^{2} \right), $$ where \(m_{\textit {iy}} = (-1)^{i} g(\mathbf {m}_{y}^{\ast })\) and γ 2=a T S ∗ a. The CDF of a non-standardized student t distribution with d degrees of freedom, location parameter m, and scale parameter s 2 is well known, and at zero, it is given by [32], $$\frac{1}{2} - \frac{\text{sgn}(m)}{2} I\left(\frac{m^{2}}{m^{2} + d s^{2}}; \frac{1}{2}, \frac{d}{2}\right), $$ where I(x;a,b) is an incomplete regularized beta function. Hence, $$\begin{array}{*{20}l} \widehat{\varepsilon}^{i,y}(\psi, S) &= \frac{1}{2} - \frac{\text{sgn}(m_{iy})}{2} I\left(\frac{m_{iy}^{2}}{m_{iy}^{2} + \frac{\nu_{y}^{\ast}+1}{\nu_{y}^{\ast}} \gamma^{2}}; \frac{1}{2}, \frac{k}{2}\right). \end{array} $$ This result was also found in [10]. The effective conditional density for y=z is solved by updating all of the hyperparameters associated with class y with the new sample point, {x,y}, resulting in: $$\begin{array}{*{20}l} &f(\mathbf{w} \, | \, \mathbf{x}, y, y, S) \sim t \left(k+1, \mathbf{m}_{y}^{\ast} + \frac{\mathbf{x} - \mathbf{m}_{y}^{\ast }}{\nu_{y}^{\ast}+1}, \right. \\ & \qquad\qquad\qquad \left. 
To find the BRE under a binary linear classifier of the form (33), let $P = (-1)^{i}g(\mathbf{X})$. Since $P$ is an affine transformation of a multivariate student t random variable, it has a non-standardized student t distribution [31]:
$$ f(p \,|\, y, S) \sim t\left(k,\ m_{iy},\ \frac{\nu_y^{\ast}+1}{k\nu_y^{\ast}}\gamma^2\right), $$
where $m_{iy} = (-1)^{i}g(\mathbf{m}_y^{\ast})$ and $\gamma^2 = \mathbf{a}^T\mathbf{S}^{\ast}\mathbf{a}$. The CDF of a non-standardized student t distribution with $d$ degrees of freedom, location parameter $m$, and scale parameter $s^2$ is well known; at zero, it is given by [32]
$$ \frac{1}{2} - \frac{\text{sgn}(m)}{2}\,I\left(\frac{m^2}{m^2 + ds^2};\ \frac{1}{2},\ \frac{d}{2}\right), $$
where $I(x; a, b)$ is the regularized incomplete beta function. Hence,
$$ \widehat{\varepsilon}^{i,y}(\psi, S) = \frac{1}{2} - \frac{\text{sgn}(m_{iy})}{2}\,I\left(\frac{m_{iy}^2}{m_{iy}^2 + \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\gamma^2};\ \frac{1}{2},\ \frac{k}{2}\right). $$
This result was also found in [10]. The effective conditional density for $y = z$ is obtained by updating all of the hyperparameters associated with class $y$ with the new sample point $\{\mathbf{x}, y\}$, resulting in
$$ f(\mathbf{w} \,|\, \mathbf{x}, y, y, S) \sim t\left(k+1,\ \mathbf{m}_y^{\ast} + \frac{\mathbf{x} - \mathbf{m}_y^{\ast}}{\nu_y^{\ast}+1},\ \frac{\nu_y^{\ast}+2}{(k+1)(\nu_y^{\ast}+1)}\left(\mathbf{S}^{\ast} + \mathbf{S}_y(\mathbf{x})\right)\right), $$
where
$$ \mathbf{S}_y(\mathbf{x}) = \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}(\mathbf{x} - \mathbf{m}_y^{\ast})(\mathbf{x} - \mathbf{m}_y^{\ast})^T. $$
For $y \neq z$, $f(\mathbf{w} \,|\, \mathbf{x}, y, z, S)$ is of the same form as the effective density with only the hyperparameters associated with the covariance, $\kappa^{\ast}$ and $\mathbf{S}^{\ast}$, updated:
$$ f(\mathbf{w} \,|\, \mathbf{x}, y, z, S) \sim t\left(k+1,\ \mathbf{m}_z^{\ast},\ \frac{\nu_z^{\ast}+1}{(k+1)\nu_z^{\ast}}\left(\mathbf{S}^{\ast} + \mathbf{S}_y(\mathbf{x})\right)\right). $$
To find the conditional MSE of the BRE, let $Q = (-1)^{j}g(\mathbf{W})$. For $y = z$,
$$ f(q \,|\, \mathbf{x}, y, y, S) \sim t\left(k+1,\ m_{iy} + \frac{p - m_{iy}}{\nu_y^{\ast}+1},\ \frac{\nu_y^{\ast}+2}{(k+1)(\nu_y^{\ast}+1)}\left(\gamma^2 + \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}\left(p - m_{iy}\right)^2\right)\right), $$
where we have used the fact that $(-1)^{i}\mathbf{a}^T(\mathbf{x} - \mathbf{m}_y^{\ast}) = p - m_{iy}$. When $y \neq z$,
$$ f(q \,|\, \mathbf{x}, y, z, S) \sim t\left(k+1,\ m_{jz},\ \frac{\nu_z^{\ast}+1}{(k+1)\nu_z^{\ast}}\left(\gamma^2 + \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}\left(p - m_{iy}\right)^2\right)\right). $$
Since the dependency on $\mathbf{X}$ has been reduced to a dependency on $P$ alone in both of the above distributions, we may write $f(q \,|\, \mathbf{x}, y, z, S) = f(q \,|\, p, y, z, S)$ for all $y$ and $z$. Lemma 1 in Appendix 3 produces an effective joint density given an effective density and an effective conditional density of a specified form. The distributions $f(p \,|\, y, S)$ and $f(q \,|\, p, y, z, S)$ are precisely in the form required by this lemma with $D = 1$. Hence, $[P, Q]^T$ follows a bivariate student t distribution: when $y = z$,
$$ f(p, q \,|\, y, y, S) \sim t\left(k,\ \left[\begin{array}{c} m_{iy} \\ m_{iy} \end{array}\right],\ \frac{\gamma^2}{k}\left[\begin{array}{cc} \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}} & \frac{(-1)^{i+j}}{\nu_y^{\ast}} \\ \frac{(-1)^{i+j}}{\nu_y^{\ast}} & \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}} \end{array}\right]\right), $$
and when $y \neq z$,
$$ f(p, q \,|\, y, z, S) \sim t\left(k,\ \left[\begin{array}{c} m_{iy} \\ m_{jz} \end{array}\right],\ \frac{\gamma^2}{k}\left[\begin{array}{cc} \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}} & 0 \\ 0 & \frac{\nu_z^{\ast}+1}{\nu_z^{\ast}} \end{array}\right]\right). $$
Thus, $\text{E}[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,y}(\Theta_y) \,|\, S]$ can be found from (29). In particular, when $y = z$,
$$\begin{aligned} \text{E}\left[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,y}(\Theta_y) \,|\, S\right] &= \text{P}(P \leq 0 \cap Q \leq 0 \,|\, y, y, S) \\ &= \mathbf{T}\left(-\frac{m_{iy}}{\gamma}\sqrt{\frac{k\nu_y^{\ast}}{\nu_y^{\ast}+1}},\ -\frac{m_{jy}}{\gamma}\sqrt{\frac{k\nu_y^{\ast}}{\nu_y^{\ast}+1}},\ \frac{(-1)^{i+j}}{\nu_y^{\ast}+1},\ k\right), \end{aligned}$$
and when $y \neq z$,
$$\begin{aligned} \text{E}\left[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,z}(\Theta_z) \,|\, S\right] &= \text{P}(P \leq 0 \cap Q \leq 0 \,|\, y, z, S) \\ &= \mathbf{T}\left(-\frac{m_{iy}}{\gamma}\sqrt{\frac{k\nu_y^{\ast}}{\nu_y^{\ast}+1}},\ -\frac{m_{jz}}{\gamma}\sqrt{\frac{k\nu_z^{\ast}}{\nu_z^{\ast}+1}},\ 0,\ k\right), \end{aligned}$$
where $\mathbf{T}(x, y, \rho, d)$ is the joint CDF of two standard student t random variables with correlation $\rho$ and $d$ degrees of freedom.
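Both closed forms above reduce to standard special functions. The hedged sketch below (ours, with hypothetical names) evaluates (46) via scipy.special.betainc and the bivariate t CDF $\mathbf{T}$ via scipy.stats.multivariate_t, whose cdf method requires a recent SciPy release:

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import multivariate_t

def bre_t(m_iy, gamma2, k, nu_star):
    """BRE under a t effective density, Eq. (46): the CDF at zero of a
    non-standardized student t with k degrees of freedom."""
    q = m_iy**2 / (m_iy**2 + (nu_star + 1.0) / nu_star * gamma2)
    return 0.5 - 0.5 * np.sign(m_iy) * betainc(0.5, k / 2.0, q)

def T(x, y, rho, d):
    """Joint CDF of two standard student t variables with correlation rho
    and d degrees of freedom (multivariate_t.cdf needs SciPy >= 1.10)."""
    shape = np.array([[1.0, rho], [rho, 1.0]])
    return multivariate_t(loc=[0.0, 0.0], shape=shape, df=d).cdf([x, y])

def second_moment_t(m_iy, m_jy, gamma2, k, nu_star, sign_ij):
    """E[eps^{i,y} eps^{j,y} | S] for y = z; sign_ij = (-1)**(i + j)."""
    c = np.sqrt(k * nu_star / (nu_star + 1.0))
    g = np.sqrt(gamma2)
    return T(-m_iy / g * c, -m_jy / g * c, sign_ij / (nu_star + 1.0), k)
```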
Independent arbitrary covariance

Assume $\Theta_y = [\mathbf{\mu}_y, \mathbf{\Sigma}_y]$, where the parameter space of $\mathbf{\mu}_y$ is $\mathbb{R}^D$ and the parameter space of $\mathbf{\Sigma}_y$ consists of all symmetric positive definite matrices. The independent arbitrary covariance model assumes a conjugate prior with independent $\Theta_y$ and
$$ \pi(\theta_y) = \pi(\mathbf{\mu}_y \,|\, \mathbf{\Sigma}_y)\pi(\mathbf{\Sigma}_y), $$
where $\pi(\mathbf{\mu}_y \,|\, \mathbf{\Sigma}_y)$ is of the same form as in (34) with hyperparameters $\nu_y \in \mathbb{R}$ and $\mathbf{m}_y \in \mathbb{R}^D$, and $\pi(\mathbf{\Sigma}_y)$ is of the same form as in (42) with hyperparameters $\kappa_y \in \mathbb{R}$ and $\mathbf{S}_y$, a symmetric $D \times D$ matrix. The posterior is of the same form as the prior with updated hyperparameters given by (35) and
$$\begin{aligned} \kappa_y^{\ast} &= \kappa_y + n_y, \\ \mathbf{S}_y^{\ast} &= \mathbf{S}_y + (n_y - 1)\widehat{\mathbf{\Sigma}}_y + \frac{\nu_y n_y}{\nu_y + n_y}(\widehat{\mathbf{\mu}}_y - \mathbf{m}_y)(\widehat{\mathbf{\mu}}_y - \mathbf{m}_y)^T. \end{aligned}$$
The posteriors are proper if $\nu_y^{\ast} > 0$, $\kappa_y^{\ast} > D-1$, and $\mathbf{S}_y^{\ast} \succ 0$. The effective density for class $y$ is multivariate student t as in (44), with $k_y = \kappa_y^{\ast} - D + 1$ and $\mathbf{S}_y^{\ast}$ in place of $k$ and $\mathbf{S}^{\ast}$, respectively [13]. Further, (45) also holds with $m_{iy} = (-1)^{i}g(\mathbf{m}_y^{\ast})$ and with $k_y$ and $\gamma_y^2 = \mathbf{a}^T\mathbf{S}_y^{\ast}\mathbf{a}$ in place of $k$ and $\gamma^2$, respectively. Under binary linear classification, $\widehat{\varepsilon}^{i,y}(\psi, S)$ is given by (46) with $k_y$ and $\gamma_y^2$ in place of $k$ and $\gamma^2$. The same result was found in [10]. $\text{E}[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,y}(\Theta_y) \,|\, S]$ is solved similarly to before, resulting in (47), (50), (51), and ultimately (52), with $k_y$, $\mathbf{S}_y^{\ast}$, and $\gamma_y^2$ in place of $k$, $\mathbf{S}^{\ast}$, and $\gamma^2$, respectively. $\text{E}[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,z}(\Theta_z) \,|\, S]$ for $y \neq z$ is found from (25).
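Because only per-class hyperparameters change, the earlier sketches carry over almost verbatim. A hypothetical illustration of the per-class updates and the substitutions into bre_t (defined in the previous sketch), again assuming the standard form of (35) and $g(\mathbf{x}) = \mathbf{a}^T\mathbf{x} + b$:

```python
import numpy as np

def posterior_updates_class(Xy, m_y, nu_y, kappa_y, S_y):
    """Per-class conjugate updates for the independent arbitrary-covariance
    model; Xy holds the n_y > 0 training points of class y. The nu*/m*
    updates again assume the standard form of Eq. (35)."""
    ny = len(Xy)
    mu_hat = Xy.mean(axis=0)
    Sigma_hat = np.cov(Xy, rowvar=False) if ny > 1 else np.zeros_like(S_y)
    d = (mu_hat - m_y).reshape(-1, 1)
    m_star = (nu_y * m_y + ny * mu_hat) / (nu_y + ny)
    S_star = S_y + (ny - 1) * Sigma_hat + (nu_y * ny / (nu_y + ny)) * (d @ d.T)
    return m_star, nu_y + ny, kappa_y + ny, S_star

def bre_independent(i, a, b, m_star, nu_star, kappa_star, S_star, D):
    """Substitutions into bre_t (see earlier sketch) for this model."""
    k_y = kappa_star - D + 1                 # per-class degrees of freedom
    gamma2_y = a @ S_star @ a                # gamma_y^2 = a' S_y* a
    m_iy = ((-1) ** i) * (a @ m_star + b)    # assumes g(x) = a'x + b
    return bre_t(m_iy, gamma2_y, k_y, nu_star)
```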
Homoscedastic scaled identity covariance

In the homoscedastic scaled identity covariance model, $\mathbf{\Sigma}_y$ is assumed to have a scaled identity structure; that is, $\Theta_y = [\mathbf{\mu}_y, \sigma^2]$, where $\mathbf{\Sigma}_y = \sigma^2\mathbf{I}_D$ and $\mathbf{I}_D$ is the $D \times D$ identity matrix. The parameter space of $\mathbf{\mu}_y$ is $\mathbb{R}^D$ for all $y$, and that of $\sigma^2$ is $(0, \infty)$. We also assume the $\mathbf{\mu}_y$ are mutually independent given $\sigma^2$:
$$ \pi(\theta) = \left(\prod_{y=0}^{M-1}\pi(\mathbf{\mu}_y \,|\, \sigma^2)\right)\pi(\sigma^2), $$
where $\pi(\mathbf{\mu}_y \,|\, \sigma^2)$ is of the same form as (34) with hyperparameters $\nu_y$ and $\mathbf{m}_y$, and
$$ \pi(\sigma^2) \propto \left|\sigma^2\right|^{-\frac{(\kappa+D+1)D}{2}}\exp\left(-\frac{\text{trace}(\mathbf{S})}{2\sigma^2}\right), $$
with hyperparameters $\kappa \in \mathbb{R}$ and $\mathbf{S}$, a symmetric $D \times D$ real matrix. When $\nu_y > 0$, $\pi(\mathbf{\mu}_y \,|\, \sigma^2)$ is Gaussian with mean $\mathbf{m}_y$ and covariance $\mathbf{\Sigma}_y/\nu_y$, and when $(\kappa+D+1)D > 2$ and $\mathbf{S} \succ 0$, $\pi(\sigma^2)$ is a univariate inverse-Wishart distribution. If in addition $(\kappa+D+1)D > 4$, then $\text{E}[\sigma^2] = \frac{\text{trace}(\mathbf{S})}{(\kappa+D+1)D - 4}$.

The form of (57) has been designed so that the posterior is of the same form as the prior, with the same hyperparameter update equations given in the arbitrary covariance models, (35) and (43). We require $\nu_y^{\ast} > 0$, $(\kappa^{\ast}+D+1)D > 2$, and $\mathbf{S}^{\ast} \succ 0$ for a proper posterior. The effective density for class $y$ is multivariate student t with $k = (\kappa^{\ast}+D+1)D - 2$ degrees of freedom [13]:
$$ f(\mathbf{x} \,|\, y, S) \sim t\left(k,\ \mathbf{m}_y^{\ast},\ \frac{\nu_y^{\ast}+1}{k\nu_y^{\ast}}\,\text{trace}(\mathbf{S}^{\ast})\,\mathbf{I}_D\right). $$
Let $P = (-1)^{i}g(\mathbf{X})$. Since $P$ is an affine transformation of a multivariate student t random variable, it again has the same form as in (45), with $k = (\kappa^{\ast}+D+1)D - 2$, $m_{iy} = (-1)^{i}g(\mathbf{m}_y^{\ast})$, and $\gamma^2 = \text{trace}(\mathbf{S}^{\ast})\,\mathbf{a}^T\mathbf{a}$. Following the same steps as in the homoscedastic arbitrary covariance model, under binary linear classification $\widehat{\varepsilon}^{i,y}(\psi, S)$ is given by (46) with the appropriate choice of $k$, $m_{iy}$, and $\gamma^2$. This result was found in [10]. The effective conditional density for $y = z$ is obtained by updating all of the hyperparameters associated with class $y$ with the new sample point $\{\mathbf{x}, y\}$:
$$ f(\mathbf{w} \,|\, \mathbf{x}, y, y, S) \sim t\left(k+D,\ \mathbf{m}_y^{\ast} + \frac{\mathbf{x} - \mathbf{m}_y^{\ast}}{\nu_y^{\ast}+1},\ \frac{\nu_y^{\ast}+2}{(k+D)(\nu_y^{\ast}+1)}\,\text{trace}(\mathbf{S}^{\ast} + \mathbf{S}_y(\mathbf{x}))\,\mathbf{I}_D\right), $$
where $\mathbf{S}_y(\mathbf{x})$ is given by (48). When $y \neq z$, the effective conditional density is found by updating only the hyperparameters associated with the covariance, $\kappa^{\ast}$ and $\mathbf{S}^{\ast}$, with the point $\{\mathbf{x}, y\}$. Thus,
$$ f(\mathbf{w} \,|\, \mathbf{x}, y, z, S) \sim t\left(k+D,\ \mathbf{m}_z^{\ast},\ \frac{\nu_z^{\ast}+1}{(k+D)\nu_z^{\ast}}\,\text{trace}(\mathbf{S}^{\ast} + \mathbf{S}_y(\mathbf{x}))\,\mathbf{I}_D\right). $$
Lemma 1 in Appendix 3 is used to find an effective joint density. When $y = z$,
$$ f(\mathbf{x}, \mathbf{w} \,|\, y, y, S) \sim t\left(k,\ \left[\begin{array}{c} \mathbf{m}_y^{\ast} \\ \mathbf{m}_y^{\ast} \end{array}\right],\ \frac{\text{trace}(\mathbf{S}^{\ast})}{k}\left[\begin{array}{cc} \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\mathbf{I}_D & \frac{1}{\nu_y^{\ast}}\mathbf{I}_D \\ \frac{1}{\nu_y^{\ast}}\mathbf{I}_D & \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\mathbf{I}_D \end{array}\right]\right), $$
and when $y \neq z$,
$$ f(\mathbf{x}, \mathbf{w} \,|\, y, z, S) \sim t\left(k,\ \left[\begin{array}{c} \mathbf{m}_y^{\ast} \\ \mathbf{m}_z^{\ast} \end{array}\right],\ \frac{\text{trace}(\mathbf{S}^{\ast})}{k}\left[\begin{array}{cc} \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\mathbf{I}_D & \mathbf{0}_D \\ \mathbf{0}_D & \frac{\nu_z^{\ast}+1}{\nu_z^{\ast}}\mathbf{I}_D \end{array}\right]\right). $$
$\text{E}[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,z}(\Theta_z) \,|\, S]$ can be found from (29) by defining $P = (-1)^{i}g(\mathbf{X})$ and $Q = (-1)^{j}g(\mathbf{W})$. Following the same steps as in the homoscedastic arbitrary covariance model, one can show that this expectation is equivalent to (52) when $y = z$ and (53) when $y \neq z$, where we plug in the appropriate values for $k$, $m_{iy}$, and $\gamma^2$.
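Again, only $k$ and $\gamma^2$ change relative to the arbitrary-covariance case, so (46) can be reused directly. A hypothetical wrapper, building on the bre_t sketch above (the independent variant below is identical with $\kappa_y^{\ast}$ and $\mathbf{S}_y^{\ast}$ in place of $\kappa^{\ast}$ and $\mathbf{S}^{\ast}$):

```python
import numpy as np

def bre_scaled_identity(i, a, b, m_star_y, nu_star_y, kappa_star, S_star):
    """Scaled-identity substitutions into bre_t (defined earlier):
    k = (kappa* + D + 1) D - 2 and gamma^2 = trace(S*) a'a."""
    D = len(m_star_y)
    k = (kappa_star + D + 1) * D - 2
    gamma2 = np.trace(S_star) * (a @ a)
    m_iy = ((-1) ** i) * (a @ m_star_y + b)  # assumes g(x) = a'x + b
    return bre_t(m_iy, gamma2, k, nu_star_y)
```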
Independent scaled identity covariance

Now assume that $\mathbf{\Sigma}_y$ has a scaled identity structure; that is, $\Theta_y = [\mathbf{\mu}_y, \sigma_y^2]$, where $\mathbf{\Sigma}_y = \sigma_y^2\mathbf{I}_D$, and that the parameter space of $\mathbf{\mu}_y$ is $\mathbb{R}^D$ and that of $\sigma_y^2$ is $(0, \infty)$ for all $y$. Also assume the $\Theta_y$ are mutually independent, with
$$ \pi(\theta_y) = \pi(\mathbf{\mu}_y \,|\, \sigma_y^2)\pi(\sigma_y^2), $$
where $\pi(\mathbf{\mu}_y \,|\, \sigma_y^2)$ is of the same form as in (34) with hyperparameters $\nu_y \in \mathbb{R}$ and $\mathbf{m}_y \in \mathbb{R}^D$, and $\pi(\sigma_y^2)$ is of the same form as in (57) with hyperparameters $\kappa_y \in \mathbb{R}$ and $\mathbf{S}_y$, a symmetric $D \times D$ real matrix. The posterior is of the same form as the prior with the same hyperparameter update equations as in (35) and (55). We require $\nu_y^{\ast} > 0$, $(\kappa_y^{\ast}+D+1)D > 2$, and $\mathbf{S}_y^{\ast} \succ 0$ for a proper posterior. The effective density for class $y$ is multivariate student t, as in (58), with $k_y = (\kappa_y^{\ast}+D+1)D - 2$ and $\mathbf{S}_y^{\ast}$ in place of $k$ and $\mathbf{S}^{\ast}$, respectively [13]. Under binary linear classification, $\widehat{\varepsilon}^{i,y}(\psi, S)$ is given by (46) with $m_{iy} = (-1)^{i}g(\mathbf{m}_y^{\ast})$ and with $k_y$ and $\gamma_y^2 = \text{trace}(\mathbf{S}_y^{\ast})\,\mathbf{a}^T\mathbf{a}$ in place of $k$ and $\gamma^2$. The effective joint density $f(\mathbf{x}, \mathbf{w} \,|\, y, y, S)$ is solved as before, resulting in (59) and (61) with $k_y$ and $\mathbf{S}_y^{\ast}$ in place of $k$ and $\mathbf{S}^{\ast}$, respectively. Further, $\text{E}[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,y}(\Theta_y) \,|\, S]$ is solved from (51), resulting in (52), with $k_y$ and $\gamma_y^2$ in place of $k$ and $\gamma^2$, respectively. $\text{E}[\varepsilon^{i,y}(\Theta_y)\,\varepsilon^{j,z}(\Theta_z) \,|\, S]$ for $y \neq z$ is found from (25).

Appendix 3: Effective joint density lemma

The lemma below is used to derive the effective joint densities of the Gaussian models in Appendix 2: Gaussian models.

Lemma 1. Suppose $\mathbf{X}$ is multivariate student t, given by
$$ f(\mathbf{x}) \sim t\left(k,\ \mathbf{m}_y^{\ast},\ \frac{\nu_y^{\ast}+1}{k\nu_y^{\ast}}\gamma^2\mathbf{I}_D\right). $$
Further, suppose $\mathbf{W}$ conditioned on $\mathbf{X} = \mathbf{x}$ is multivariate student t, given by
$$ f(\mathbf{w} \,|\, \mathbf{x}) \sim t\left(k+D,\ \mathbf{m}_z^{\ast} + I\,\frac{\mathbf{x} - \mathbf{m}_y^{\ast}}{\nu_y^{\ast}+1},\ \frac{1}{k+D}\,J\left(\gamma^2 + \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}(\mathbf{x} - \mathbf{m}_y^{\ast})^T(\mathbf{x} - \mathbf{m}_y^{\ast})\right)\mathbf{I}_D\right), $$
where either $I = 0$ and $J = \frac{\nu_z^{\ast}+1}{\nu_z^{\ast}}$, or $I = 1$ and $J = \frac{\nu_y^{\ast}+2}{\nu_y^{\ast}+1}$. Then the joint density is multivariate student t:
$$ f(\mathbf{x}, \mathbf{w}) \sim t\left(k,\ \left[\begin{array}{c} \mathbf{m}_y^{\ast} \\ \mathbf{m}_z^{\ast} \end{array}\right],\ \frac{\gamma^2}{k}\left[\begin{array}{cc} \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\mathbf{I}_D & I\,\frac{1}{\nu_y^{\ast}}\mathbf{I}_D \\ I\,\frac{1}{\nu_y^{\ast}}\mathbf{I}_D & K\,\mathbf{I}_D \end{array}\right]\right), $$
where $K = \frac{\nu_z^{\ast}+1}{\nu_z^{\ast}}$ when $I = 0$ and $K = \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}$ when $I = 1$.
Proof. After some simplification, one can show
$$\begin{aligned} f(\mathbf{x}, \mathbf{w}) &= f(\mathbf{x})\,f(\mathbf{w} \,|\, \mathbf{x}) \\ &\propto \left(1 + \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}(\mathbf{x} - \mathbf{m}_y^{\ast})^T(\gamma^2\mathbf{I}_D)^{-1}(\mathbf{x} - \mathbf{m}_y^{\ast})\right)^{-\frac{k+D}{2}} \\ &\quad\times\left|\gamma^2\mathbf{I}_D + \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}(\mathbf{x} - \mathbf{m}_y^{\ast})^T(\mathbf{x} - \mathbf{m}_y^{\ast})\,\mathbf{I}_D\right|^{\frac{k+2D-1}{2}} \\ &\quad\times\left|\gamma^2\mathbf{I}_D + \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}(\mathbf{x} - \mathbf{m}_y^{\ast})^T(\mathbf{x} - \mathbf{m}_y^{\ast})\,\mathbf{I}_D + \frac{1}{J}\left(\mathbf{w} - \mathbf{m}_z^{\ast} - I\,\frac{\mathbf{x} - \mathbf{m}_y^{\ast}}{\nu_y^{\ast}+1}\right)\left(\mathbf{w} - \mathbf{m}_z^{\ast} - I\,\frac{\mathbf{x} - \mathbf{m}_y^{\ast}}{\nu_y^{\ast}+1}\right)^T\right|^{-\frac{k+2D}{2}}. \end{aligned}$$
Simplifying further, we obtain
$$ f(\mathbf{x}, \mathbf{w}) \propto \left(\gamma^2 + \frac{\nu_y^{\ast}}{\nu_y^{\ast}+1}(\mathbf{x} - \mathbf{m}_y^{\ast})^T(\mathbf{x} - \mathbf{m}_y^{\ast}) + \frac{1}{J}\left(\mathbf{w} - \mathbf{m}_z^{\ast} - I\,\frac{\mathbf{x} - \mathbf{m}_y^{\ast}}{\nu_y^{\ast}+1}\right)^T\left(\mathbf{w} - \mathbf{m}_z^{\ast} - I\,\frac{\mathbf{x} - \mathbf{m}_y^{\ast}}{\nu_y^{\ast}+1}\right)\right)^{-\frac{k+2D}{2}}. $$
If $I = 0$, then it can be shown that
$$ f(\mathbf{x}, \mathbf{w}) \propto \left(1 + \left[\begin{array}{c} \mathbf{x} - \mathbf{m}_y^{\ast} \\ \mathbf{w} - \mathbf{m}_z^{\ast} \end{array}\right]^T\mathbf{\Lambda}^{-1}\left[\begin{array}{c} \mathbf{x} - \mathbf{m}_y^{\ast} \\ \mathbf{w} - \mathbf{m}_z^{\ast} \end{array}\right]\right)^{-\frac{k+2D}{2}}, \qquad \mathbf{\Lambda} = \left[\begin{array}{cc} \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\gamma^2\mathbf{I}_D & \mathbf{0}_D \\ \mathbf{0}_D & \frac{\nu_z^{\ast}+1}{\nu_z^{\ast}}\gamma^2\mathbf{I}_D \end{array}\right]. $$
Similarly, if $I = 1$, it can be shown that
$$ f(\mathbf{x}, \mathbf{w}) \propto \left(1 + \left[\begin{array}{c} \mathbf{x} - \mathbf{m}_y^{\ast} \\ \mathbf{w} - \mathbf{m}_z^{\ast} \end{array}\right]^T\mathbf{\Lambda}^{-1}\left[\begin{array}{c} \mathbf{x} - \mathbf{m}_y^{\ast} \\ \mathbf{w} - \mathbf{m}_z^{\ast} \end{array}\right]\right)^{-\frac{k+2D}{2}}, \qquad \mathbf{\Lambda} = \left[\begin{array}{cc} \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\gamma^2\mathbf{I}_D & \frac{1}{\nu_y^{\ast}}\gamma^2\mathbf{I}_D \\ \frac{1}{\nu_y^{\ast}}\gamma^2\mathbf{I}_D & \frac{\nu_y^{\ast}+1}{\nu_y^{\ast}}\gamma^2\mathbf{I}_D \end{array}\right], $$
which completes the proof.
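Although the proof is algebraic, Lemma 1 is easy to sanity-check numerically. The following sketch (ours, with arbitrary made-up hyperparameter values) samples from $f(\mathbf{x})$ and $f(\mathbf{w} \,|\, \mathbf{x})$ in the $I = 1$ case and compares the empirical covariance of $[\mathbf{X}, \mathbf{W}]$ with the covariance implied by the stated joint t, namely $\gamma^2\mathbf{\Lambda}'/(k-2)$ for $k > 2$:

```python
import numpy as np
from scipy.stats import multivariate_t

# Monte Carlo sanity check of Lemma 1 (I = 1 case) with made-up values:
# sample X ~ f(x), then W | X = x ~ f(w | x), and compare the empirical
# covariance of [X, W] against the covariance implied by the stated joint.
rng = np.random.default_rng(0)
D, k, nu, gamma2 = 2, 10.0, 3.0, 1.5
m = np.zeros(D)                 # take m_y* = m_z* = 0 for simplicity
J = (nu + 2.0) / (nu + 1.0)     # I = 1 case

n = 20_000
X = multivariate_t(loc=m, shape=(nu + 1) / (k * nu) * gamma2 * np.eye(D),
                   df=k, seed=rng).rvs(n)
W = np.empty_like(X)
for idx, x in enumerate(X):
    s2 = gamma2 + nu / (nu + 1.0) * (x - m) @ (x - m)
    W[idx] = multivariate_t(loc=m + (x - m) / (nu + 1.0),
                            shape=J * s2 / (k + D) * np.eye(D),
                            df=k + D, seed=rng).rvs()

empirical = np.cov(np.hstack([X, W]), rowvar=False)
implied = gamma2 / (k - 2.0) * np.block(
    [[(nu + 1) / nu * np.eye(D), (1 / nu) * np.eye(D)],
     [(1 / nu) * np.eye(D), (nu + 1) / nu * np.eye(D)]])
print(np.round(empirical, 2))
print(np.round(implied, 2))     # the two matrices should roughly agree
```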
References

[1] ER Dougherty, A Zollanvari, UM Braga-Neto, The illusion of distribution-free small-sample classification in genomics. Curr. Genomics 12(5), 333–341 (2011).
[2] UM Braga-Neto, ER Dougherty, Is cross-validation valid for small-sample microarray classification? Bioinformatics 20(3), 374–380 (2004).
[3] B Hanczar, J Hua, ER Dougherty, Decorrelation of the true and estimated classifier errors in high-dimensional settings. EURASIP J. Bioinforma. Syst. Biol. 2007 (Article ID 38473), 12 (2007).
[4] UM Braga-Neto, ER Dougherty, Exact performance of error estimators for discrete classifiers. Pattern Recogn. 38(11), 1799–1814 (2005).
[5] MR Yousefi, J Hua, C Sima, ER Dougherty, Reporting bias when using real data sets to analyze classification performance. Bioinformatics 26(1), 68 (2010).
[6] MR Yousefi, J Hua, ER Dougherty, Multiple-rule bias in the comparison of classification rules. Bioinformatics 27(12), 1675–1683 (2011).
[7] MR Yousefi, ER Dougherty, Performance reproducibility index for classification. Bioinformatics 28(21), 2824–2833 (2012).
[8] L Devroye, L Gyorfi, G Lugosi, A Probabilistic Theory of Pattern Recognition. Stochastic Modelling and Applied Probability (Springer, New York, 1996).
[9] LA Dalton, ER Dougherty, Bayesian minimum mean-square error estimation for classification error–part I: definition and the Bayesian MMSE error estimator for discrete classification. IEEE Trans. Signal Process. 59(1), 115–129 (2011).
[10] LA Dalton, ER Dougherty, Bayesian minimum mean-square error estimation for classification error–part II: the Bayesian MMSE error estimator for linear classification of Gaussian distributions. IEEE Trans. Signal Process. 59(1), 130–144 (2011).
[11] LA Dalton, ER Dougherty, Exact sample conditioned MSE performance of the Bayesian MMSE estimator for classification error–part I: representation. IEEE Trans. Signal Process. 60(5), 2575–2587 (2012).
[12] LA Dalton, ER Dougherty, Exact sample conditioned MSE performance of the Bayesian MMSE estimator for classification error–part II: consistency and performance analysis. IEEE Trans. Signal Process. 60(5), 2588–2603 (2012).
[13] LA Dalton, ER Dougherty, Optimal classifiers with minimum expected error within a Bayesian framework–part I: discrete and Gaussian models. Pattern Recogn. 46(5), 1301–1314 (2013).
[14] LA Dalton, ER Dougherty, Optimal classifiers with minimum expected error within a Bayesian framework–part II: properties and performance analysis. Pattern Recogn. 46(5), 1288–1300 (2013).
[15] B Hanczar, J Hua, C Sima, J Weinstein, M Bittner, ER Dougherty, Small-sample precision of ROC-related estimates. Bioinformatics 26, 822–830 (2010).
[16] H Xu, C Caramanis, S Mannor, S Yun, Risk sensitive robust support vector machines, in Proceedings of the 48th IEEE Conference on Decision and Control, CDC 2009 (IEEE, New York, 2009), pp. 4655–4661.
[17] H Xu, C Caramanis, S Mannor, Robustness and regularization of support vector machines. J. Mach. Learn. Res. 10, 1485–1510 (2009).
[18] CM Bishop, Pattern Recognition and Machine Learning, vol. 4 (Springer, New York, 2006).
[19] A Gelman, JB Carlin, HS Stern, DB Rubin, Bayesian Data Analysis, vol. 2, 3rd edn. (2014).
[20] MS Esfahani, ER Dougherty, Incorporation of biological pathway knowledge in the construction of priors for optimal Bayesian classification. IEEE/ACM Trans. Comput. Biol. Bioinform. 11(1), 202–218 (2014).
[21] LA Dalton, ER Dougherty, Application of the Bayesian MMSE estimator for classification error to gene expression microarray data. Bioinformatics 27(13), 1822–1831 (2011).
[22] BE Boser, IM Guyon, VN Vapnik, A training algorithm for optimal margin classifiers, in Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92 (ACM, New York, 1992), pp. 144–152.
[23] C Cortes, V Vapnik, Support-vector networks. Mach. Learn. 20(3), 273–297 (1995).
[24] C-C Chang, C-J Lin, LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 27:1–27:27 (2011).
[25] B Efron, Bootstrap methods: another look at the jackknife. Ann. Stat. 7(1), 1–26 (1979).
[26] B Efron, RJ Tibshirani, An Introduction to the Bootstrap (CRC Press, Boca Raton, FL, 1994).
[27] B Efron, Estimating the error rate of a prediction rule: improvement on cross-validation. J. Am. Stat. Assoc. 78(382), 316–331 (1983).
[28] MJ van de Vijver, YD He, LJ van 't Veer, H Dai, AAM Hart, DW Voskuil, GJ Schreiber, JL Peterse, C Roberts, MJ Marton, M Parrish, D Atsma, A Witteveen, A Glas, L Delahaye, T van der Velde, H Bartelink, S Rodenhuis, ET Rutgers, SH Friend, R Bernards, A gene-expression signature as a predictor of survival in breast cancer. N. Engl. J. Med. 347(25), 1999–2009 (2002).
[29] A Zollanvari, UM Braga-Neto, ER Dougherty, On the sampling distribution of resubstitution and leave-one-out error estimators for linear classifiers. Pattern Recogn. 42(11), 2705–2723 (2009).
[30] JM Knight, I Ivanov, ER Dougherty, MCMC implementation of the optimal Bayesian classifier for non-Gaussian models: model-based RNA-Seq classification. BMC Bioinformatics 15(1), 401 (2014).
[31] S Kotz, S Nadarajah, Multivariate t Distributions and Their Applications (Cambridge University Press, New York, 2004).
[32] NL Johnson, S Kotz, N Balakrishnan, Continuous Univariate Distributions, vol. 2, 2nd edn. (John Wiley & Sons, Hoboken, NJ, 1995).

Acknowledgements

The results published here are in part based upon data generated by The Cancer Genome Atlas (TCGA) established by the NCI and NHGRI. Information about TCGA and the investigators and institutions who constitute the TCGA research network can be found at http://cancergenome.nih.gov. The work of LAD is supported by the National Science Foundation (CCF-1422631 and CCF-1453563).

Author information

Lori A. Dalton and Mohammadmahdi R. Yousefi, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA. Lori A. Dalton, Department of Biomedical Informatics, The Ohio State University, Columbus, OH 43210, USA. Correspondence to Lori A. Dalton.

Authors' contributions

LAD and MRY contributed to the main idea, designed and implemented the algorithms, designed and carried out the simulation, analyzed the results, and drafted the manuscript. Both authors read and approved the final manuscript.

Open Access

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Dalton, L.A., Yousefi, M.R., On optimal Bayesian classification and risk estimation under multiple classes. EURASIP J. Bioinform. Syst. Biol. 2015, 8 (2015). https://doi.org/10.1186/s13637-015-0028-3

Keywords: Risk estimation; Multi-class classification; Bayesian estimation; Minimum mean-square error; Small samples
Match Fishtank Math - Grade 6

The instructional materials reviewed for Match Fishtank, Grade 6 meet expectations for alignment to the CCSSM. The instructional materials meet expectations for Gateway 1, focus and coherence, by focusing on the major work of the grade and being coherent and consistent with the Standards. The instructional materials meet expectations for Gateway 2, rigor and balance and practice-content connections, by reflecting the balances in the Standards and helping students meet the Standards' rigorous expectations by giving appropriate attention to the three aspects of rigor. The materials meet expectations for meaningfully connecting the Standards for Mathematical Content and the Standards for Mathematical Practice (MPs).

The instructional materials reviewed for Match Fishtank Grade 6 meet the expectations for assessing grade-level content and, if applicable, content from earlier grades. The materials do not assess topics before the grade level in which the topic should be introduced. Unit Assessments were examined for this indicator, and all materials are available digitally and through downloadable PDFs. For example:

Unit 1 Test, Question 3, "Slater used 6 black Legos and 18 green Legos to build a tower. What was the ratio of the number of black Legos to the number of Legos in the tower? Answer choices: a. 1:3, b. 1:4, c. 1:6, d. 1:9." (6.RP.1)

Unit 1 Test, Question 7, "Wyatt hiked 6 miles in 2 hours. At this same rate, what is the total number of miles Wyatt could hike in 9 hours?" (6.RP.3.b)

Unit 2 Test, Question 1, "Roya paid $48 for 12 cartons of orange juice. What is the unit rate per carton of orange juice that Roya paid? Answer choices: a. $3, b. $4, c. $6, d. $12." (6.RP.2)

Unit 5 Test, Question 4, "Which phrase is a description of 2m + 7? Answer choices: a. 7 more than 2 times m, b. 2 more than 7 times m, c. 2 times the sum of 7 and m, d. 7 times the sum of 2 and m." (6.EE.2.a)

Unit 6 Test, Question 2, "A shelf has four books on it. The weight, in pounds, of each of the four books on the shelf is listed below: 2.5, 3.2, 2.7, 2.3. Which inequality represents the weight, w, of any book chosen from the shelf? Answer choices: a. w > 2.3, b. w < 2.4, c. w > 3.2, d. w < 3.3." (6.EE.B)

The instructional materials reviewed for Match Fishtank Grade 6 meet expectations for students and teachers using the materials as designed, devoting the large majority of class time to the major work of the grade. The instructional materials devote approximately 75% of instructional time to the major work of the grade.

The instructional materials reviewed for Match Fishtank Grade 6 meet expectations for spending a majority of instructional time on major work of the grade. The approximate number of chapters (units, modules, topics, etc.) devoted to major work of the grade (including assessments and supporting work connected to the major work) is six out of eight units, which is approximately 75%. The number of lessons devoted to major work of the grade (including assessments and supporting work connected to the major work) is 88 out of 118, which is approximately 75%.

The instructional materials reviewed for Match Fishtank Grade 6 meet expectations that supporting work enhances focus and coherence simultaneously by engaging students in the major work of the grade. Unit 1, Understanding and Representing Ratios, Lesson 3, Anchor Problem 3, 6.NS.4 connects with 6.RP.1 by using common factors in order to find equivalent ratios: "Pam and her brother both open savings accounts. Each begins with a balance of 0 dollars. For every $2 that Pam saves in her account, her brother saves $5 in his account. a) Determine a ratio to describe the money in Pam's account to the money in her brother's account. b) Create two equivalent ratios that describe the amount of money in Pam's account and the amount of money in her brother's account."

Unit 3, Multi-digit and Fraction Computation, Lesson 5, Target Task, supporting standard 6.G.1 connects with 6.NS.1 through solving a real-world problem involving area. "There are other ways to think about division of fractions. Try these two questions. They both use division, but why? And how do you know what to divide by what? 1. The water level in the reservoir has gone down 2 1/2 feet in the last month and a half. How fast is the water level going down per month? 2. Farmer Schmidt owns 3/4 of a square mile of land. Her field is a rectangle. One side is 2/3 of a mile. How long is the other side?"

Unit 5, Numerical and Algebraic Expressions, Lesson 4, Target Task connects supporting standard 6.G.2 to 6.EE.2 as students use the formulas to find the volume and surface area of rectangular prisms. "A cube has 6 sides, each with an area of $$s^2$$ square units. The surface area of a cube is the total of all 6 sides and is represented by the formula $$S = 6s^2$$. Find the surface area of a cube with the side lengths below. a. s = 3 inches b. s = 1.2 cm c. s = 2/3 feet."

Unit 7, Geometry, Lesson 6, the Target Task fosters coherence between the clusters as students apply the standards from 6.G.A and 6.RP to determine the area of the trapezoid: "Find the deck area around the pool. The deck area is the white area in the diagram."

The suggested amount of time and expectations for teachers and students of the materials are viable for one school year as written and would not require significant modifications. As designed, the instructional materials can be completed in 143 days. Included in the 143 days are each unit's 12 to 18 lessons, which contain a mixture of Anchor Problems, Problem Set Guidance, a Target Task, and a Mastery Response. These components align to the number of minutes needed to complete each part as provided in the Pacing Guide. Based on the pacing guide, the suggested lesson time frame is 60 minutes.

Content from prior or future grades is clearly identified and related to grade-level work. Prior grade knowledge is explicitly related to grade-level concepts. The "future standards" align the work with future grade-level standards. Each lesson provides the teacher with current standards and foundational standards, which are identified under the "Standards" tab. Through the Unit Overview, Tips for Teachers, and Unit Summary, teachers are provided explicit connections to prior and future knowledge for each standard. Unit 1, Understanding and Representing Ratios, "In fourth and fifth grade, students learned the difference between multiplicative and additive comparisons and they interpreted multiplication as a way to scale. Students will access these prior concepts in this unit as they investigate patterns and structures in ratio tables and use multiplication to create equivalent ratios. The work students do in this unit connects directly to Unit 2: Rates & Percent and re-appears in Unit 6: Equations and Inequalities when students analyze and graph relationships between independent and dependent variables.
Beyond sixth grade, students extend their understanding of ratios and rates to investigate proportional relationships in seventh grade. This sets the groundwork for the study of functions, linear equations, and systems of equations, which students will study in eighth grade and high school."

Unit 2, Unit Rates and Percent addresses 6.RP.2, 6.RP.3, 6.RP.3.b, 6.RP.3.c, 6.RP.3.d, and 6.RP.4. Foundational standards, "covered in previous units or grades that are important background for the unit," are 4.MD.1, 4.MD.2, 5.MD.1, 5.NF.3, 5.NF.4.a, 5.NF.4.b, 5.NF.5, 5.NF.5.a, 5.NF.5.b, 5.NF.6, 4.NF.4.c, 4.NF.6, and 5.NBT.6 from previous grades, and 6.RP.1 from a previous unit.

Unit 5, Numerical and Algebraic Expressions, "In elementary school, students used variables to represent unknown quantities, and they evaluated and described numerical expressions without exponents. They used the commutative property to enhance their understanding of multiplication and addition, and they used the distributive property when modeling partial areas. All of these concepts come together and support student understanding in this sixth-grade unit. Immediately following this unit, sixth graders will start a unit, Equations and Inequalities, where they will use algebra to model and solve real-world problems. They will also revisit percentages using new skills with expressions and equations to efficiently solve percent problems. In seventh and eighth grades, students continue to simplify and solve more complex expressions and equations using the same tools learned in this unit." The unit identifies foundational standards (5.OA.1, 5.OA.2, 4.OA.3, 4.NBT.5, and 5.MD.B) as well as future standards (7.EE.1, 7.EE.4). Additionally, 6.EE.5, 6.EE.7, and 6.EE.9 are considered future standards for this lesson, as they are not identified until the next unit, Unit 6: Equations and Inequalities.

Lessons include connections between grade-level work, standards from earlier grades, and future knowledge. These can include problems from Open Up Resources Grade 6-8 Mathematics, Open Middle, Illustrative Mathematics, and EngageNY, Great Minds. For example:

Unit 1, Understanding and Representing Ratios, Lesson 1 objective, "define ratio and use ratio language to describe associations between two or more quantities." This lesson supports 6.RP.1 and links back to foundational standards from grade 4, 4.OA.2 and 4.MD.1, as evident in Anchor Problem 3, "Abigail mixed 2 cups of white paint with 6 tablespoons (T) of blue paint." Students write at least four ratio statements to describe the situation.

Unit 2, Unit Rates and Percent, Lesson 2 includes Foundational Standards: 6.RP.1 and Future Connections: 7.RP.1, 7.RP.2, 7.RP.3. Lesson objective: define rate and unit rate and find rates from situations involving ratios. (6.RP.2, 6.RP.3.b)

Unit 5, Numerical and Algebraic Expressions, Lesson 6, teachers are directed to a Problem Set from EngageNY, which moves students from writing fractions to writing algebraic expressions as fractions. Students begin with writing 1 ÷ 2 without the division sign, then a ÷ 2, and then proceed to the Problem Set. Problem 1: "Rewrite the expressions using the division symbol and as a fraction: a. three divided by 4, b. the quotient of m and 11, c. 4 divided by the sum of h and 7, d. the quantity x minus 3 divided by y." Problem 2: "Draw a model to show that x ÷ 3 is the same as x/3." (6.EE.2)

The instructional materials attend to the full intent of the grade-level standards by giving all students extensive work with grade-level problems.
The Anchor Problem(s) help students make sense of the mathematics of the lesson as outlined in the Criteria for Success and Objective by providing them multiple opportunities to engage in the grade-level content in meaningful ways. The Problem Set Guidance provides students the opportunity to work with problems in a variety of formats to integrate and extend concepts and skills. The Target Task is aligned to the Objective and designed to cover key concepts from the lesson and identify any misconceptions students have. It also serves as an indicator of student understanding or mastery of the Objective. For example:

Unit 1, Understanding and Representing Ratios, Lesson 16, Target Task, students solve a part:whole ratio problem using a tape diagram (6.RP.3). For example, the Target Task states, "When Carla looked out at the school parking lot, she noticed that for every 2 minivans, there were 5 other types of vehicles. If there are 161 vehicles, how many of them are minivans?"

Unit 3, Multi-Digit and Fraction Computation, Lesson 2, Anchor Problem states, "Leonard made 1/4 of a gallon of lemonade and poured all of it into 3 glasses, divided equally. How much lemonade is in each glass? Write a division problem and draw a visual model." (5.NF.7)

Unit 4, Rational Numbers, Lesson 6, the Target Task states, "Christina is trying to order the numbers -3 and -2 1/2 from least to greatest. She makes the claim below. Christina's claim: 'I know that 2 1/2 is less than 3. So, -2 1/2 must be less than -3.' Is Christina correct in her thinking? Explain why or why not. Use a number line to support your reasoning." (6.NS.6.c)

Unit 6, Equations and Inequalities, Lesson 7, Problem Set Guidance links to Illustrative Mathematics, Fruit Salad: "A fruit salad consists of blueberries, raspberries, grapes, and cherries. The fruit salad has a total of 280 pieces of fruit. There are twice as many raspberries as blueberries, three times as many grapes as cherries, and four times as many cherries as raspberries. How many cherries are there in the fruit salad?" (6.RP.A.3, 6.EE.B.7)

Standard 6.SP.5.b, describing the nature of the attribute under investigation, including how it was measured and its units of measurement, is not addressed in any lesson, although it is listed in the Unit Overview for Unit 8: Statistics.

Prior knowledge is explicitly identified and linked to grade-level work. For example:

Unit 3, Multi-Digit and Fraction Computation, Lesson 1, Tips for Teachers, reviews multiplication and division concepts learned in elementary grades as it introduces the sixth-grade standard: "This lesson is approaching 6.NS.1. It reaches back to concepts students learned in earlier grades around multiplication and division in order for students to be able to extend on these concepts in following lessons in the unit."

Unit 4, Rational Numbers, Lesson 1 objective states, "Extend the number line to include negative numbers. Define integers." (6.NS.6, 6.NS.6.c) This is connected to prior knowledge from grade 3, understanding a fraction as a number on the number line and representing fractions on a number line diagram, which students use to extend the number line to integers. (3.NF.2)

Unit 6, Equations and Inequalities, Lesson Overview, teachers are reminded that this lesson brings together several concepts and skills students have worked on throughout the year.
For example, some of those concepts are as follows: tables of equivalent values (6.RP.3), writing equations (6.EE.1), plotting points (6.NS.8), and determining values in ratio relationships (6.RP.3). Foundational skills that link to several lessons are also identified; for example, some of those skills are as follows: 5.OA.3, 6.RP.3, and 6.NS.6.c. This lesson contains several links to prior lessons, such as Unit 3, Lesson 5, Target Task: "There are other ways to think about division of fractions. Try these two questions. They both use division, but why? And how do you know what to divide by what?" Question 1: "The water level in the reservoir has gone down 2 1/2 feet in the last month and a half. How fast is the water level going down per month?" Question 2: "Farmer Schmidt owns 3/4 of a square mile of land. Her field is a rectangle. One side is 2/3 of a mile. How long is the other side?" (6.NS.1)

The instructional materials for Match Fishtank Grade 6 meet expectations that the materials foster coherence through connections at a single grade, where appropriate and required by the standards. The materials include learning objectives that are visibly shaped by CCSSM cluster headings and problems and activities that connect two or more clusters in a domain or two or more domains, when these connections are natural and important. The Units are divided into Lessons focused on domains. Grade 6 standards are clearly identified in the Pacing Guide, Standard Map Document, and a CCSSM Lesson Map found in the Unit Summary of each Unit. Additionally, each lesson identifies the objectives that address specific clusters. Instructional materials shaped by cluster headings include the following examples:

Unit 1, Understanding and Representing Ratios, Lesson 1, Objective, "Define ratio and use ratio language to describe associations between two or more quantities." (6.RP.A)

Unit 2, Unit Rates and Percent, Lesson 2, Objective, "Find unit rates and use them to solve problems." (6.RP.A)

Unit 3, Multi-digit and Fraction Computation, Lesson 5, Objective, "Solve and write story problems involving division with fractions." (6.NS.A)

Unit 5, Numerical and Algebraic Expressions, Lesson 3, Objective, "Use variables to write algebraic expressions." (6.EE.A)

Unit 6, Equations and Inequalities, Lesson 8, Objective, "Define and identify solutions to inequalities." (6.EE.B)

Unit 6, Equations and Inequalities, Lesson 12, Objective, "Write equations for and graph ratio situations. Define independent and dependent variables." (6.EE.C)

Unit 7, Geometry, Lesson 10, Objective, "Find volume of rectangular prisms with whole number and fractional edge lengths using unit and fractional unit cubes." (6.G.A)

Unit 2, Unit Rates and Percent, Lesson 2, 6.RP.2 and 6.RP.3.b are connected when students demonstrate an understanding of a unit rate through drawing a tape diagram. Anchor Problem 2 states, "Adam's Fruit Farm also has 100 acres, but he grows more than just apples. Oranges take up 60 acres of his farm. What percent of Adam's farm is oranges? What percent is not oranges? Draw a 10 x 1 tape diagram to model the situation."

Unit 5, Numerical and Algebraic Expressions, Lesson 4, connects 6.EE.A and 6.NS.A as students evaluate expressions involving multiplication and division of fractions and decimals. Anchor Problem 1 states, "A square prism is shown below. The formula $$V = s^2h$$ can be used to find the volume of the square prism.
What is the volume of the prism when the side length of the base measures 1.5 inches and the height measures 8 inches?"

Unit 6, Equations and Inequalities, Lesson 12, connects 6.RP.A and 6.EE.C as students write equations in situations involving ratios. Anchor Problem 1 states, "A recipe for sugar cookies calls for 1 cup of sugar for every 2 cups of flour." Questions posed: a. "Use the ratio to complete the table." b. "If you know the number of cups of sugar, s, in the recipe, how can you determine the number of cups of flour to use? What equation represents this relationship?" c. "If you know the number of cups of flour, f, in the recipe, how can you determine the number of cups of sugar to use? What equation represents this relationship?" d. "In each equation, what is the independent variable and what is the dependent variable?"

Unit 7, Geometry, Lesson 9, 6.G.1 supports 6.NS.3 as students use their understanding of integers to represent polygons on the coordinate plane. Anchor Problem 1 states, "A new park is being built in the city. In the park, there will be a cemented walkway that will wind through the park. The walkway will be completely enclosed by a short, gated fence that will line the path on all sides of the path. Each square unit in the coordinate grid represents 1 square yard." Questions posed: a. "The city budget includes enough funds to include 50 square feet of cement and 60 yards of fencing. Will the budget cover the necessary expenses for cement and fencing? Defend your answer." b. "Two statues will be placed at point (-1,2) and (-1,-4). How far apart, in units, are the two statues?"

Unit 8, Statistics, Lesson 9 connects 6.SP.A and 6.SP.B as students begin to understand spread and variability of data sets. Anchor Problem 2 states, "Jamie is planning to cover a wall with red wallpaper. The dimensions of the wall are shown below." Questions posed: a. "How many square feet of wallpaper are required to cover the wall?" b. "Wallpaper comes in long rectangular strips that are 24 inches wide. If Jamie lays the strips of wallpaper vertically, how many strips will she use and how long will each strip be? Explain." c. "If Jamie lays the strips of wallpaper horizontally, can she cover the wall without wasting any wallpaper? Explain."

The instructional materials for Match Fishtank Grade 6 meet expectations that the materials develop conceptual understanding of key mathematical concepts, especially where called for in specific standards or cluster headings. All units begin with a Unit Summary and indicate where conceptual understanding is emphasized, if appropriate. Lessons begin with Anchor Problem(s) that include Guiding Questions designed to help teachers build their students' conceptual understanding. The instructional materials include problems and questions that develop conceptual understanding throughout the grade level, especially where called for in the standards (6.RP.A and 6.EE.3). For example:

Unit 1, Understanding and Representing Ratios, Lesson 1, Anchor Problem 1, students develop conceptual understanding when introduced to ratios through the use of diagrams. An example of this is as follows: "In a recipe for oatmeal raisin cookies, the ratio of teaspoons of cinnamon to cups of raisins is 4:8. Draw a diagram to represent the quantities, and write two other ratio statements for the situation." (6.RP.A)

Unit 1, Understanding and Representing Ratios, Lesson 4, introduces students to the concept of equivalent ratios.
An example of this is Anchor Problem 1: "Are Heather and Audrey's ratios equivalent? Explain how you know." (6.RP.1)

Unit 2, Unit Rates and Percent, Lesson 4, Anchor Problem 1, students are given two different prices for jugs of honey. Anchor Problem 1 states, "Would you rather buy one 5-pound jug of honey for $15.35, or three 1.5-pound bottles of honey for $14.39? Justify your answer." (6.RP.3)

Unit 5, Numerical and Algebraic Expressions, Lesson 9, Anchor Problem 1 uses an area model to show the distributive property conceptually: "Two rectangles were combined to create a larger rectangle, as shown below. Write as many expressions as you can to represent the area of the larger, outer rectangle." Guiding question: "How do your expressions connect back to the area model?" (6.EE.3)

Unit 5, Numerical and Algebraic Expressions, Lesson 9, Anchor Problem 2 uses tape diagrams to discover the concept of using the distributive property to produce equivalent expressions: "The tape diagram represents the expression 3x + 4y. Draw a tape diagram that shows twice the value of 3x + 4y." Guiding Questions: "What does it mean to take 'twice the value' of an expression? What does this look like in a diagram? Rearrange your diagram to group together the same values. What property is this? How does grouping your diagram in this way help you write a new expression?" (6.EE.3)

Unit 6, Equations and Inequalities, Lesson 13, Anchor Problem 1, students relate variables to the coordinate plane. Students use tables to discover relationships between dependent and independent variables and graph them appropriately: "Determine which variable is dependent and which variable is independent. Make a table showing the number of pencils for 3 – 7 packages. Plot the points in the coordinate plane. If Sarah has 168 pencils, how many packages did she purchase?" (6.EE.9)

Unit 1, Understanding and Representing Ratios, Lesson 2, Target Task, students are asked to draw a picture and name two ratios for each given situation: "To make papier-mâché paste, mix 2 parts of water with 1 part of flour. A farm is selling 3 pounds of peaches for $5. A person walks 6 miles in 2 hours." (6.RP.A)

Unit 3, Multi-digit and Fraction Computation, Lesson 2, Target Task, students use diagrams to develop the concept of division of fractions by whole numbers: "How much lemonade is in each glass? Write a division problem and draw a visual model." (6.NS.1)

Unit 4, Rational Numbers, Lesson 3, Problems 1 & 2 (Open Up Resources: Grade 6, Unit 7, Lesson 1) give students an opportunity to work independently to demonstrate conceptual understanding of rational numbers by answering questions about temperature, elevation, and sea levels, and in some cases to represent points on a vertical number line: "Here are two tables that show the elevations of highest points on land and lowest points in the ocean. Distances are measured from sea level. Drag the points marking the mountains and trenches to the vertical number line and answer the questions: a. Which point in the ocean is the lowest in the world? What is its elevation? b. Which mountain is the highest in the world? What is its elevation? c. If you plot the elevations of the mountains and trenches on a vertical number line, what would 0 represent? What would points above 0 represent? What about points below 0? d. Which is farther from sea level: the deepest point in the ocean, or the top of the highest mountain in the world? Explain."
(6.NS.5)

Unit 5, Numerical and Algebraic Expressions, Lesson 7, Problem Set Guidance (Open Middle, Equivalent Expressions 1), students work independently to explore the use of whole numbers to create equivalent expressions: "Using the whole numbers from 1-9 in the boxes below, create two expressions that are equivalent to one another. You can use each whole number at most once." (6.EE.3)

Unit 5, Numerical and Algebraic Expressions, Lesson 9, Target Task, students complete the following: "For each problem, draw a diagram to represent the expression. Then use the diagram to write an equivalent expression. a. 4(2m + n) b. 5x + 15." (6.EE.3)

Unit 6, Equations and Inequalities, Lesson 8, Target Task, students define and identify solutions to inequalities. Students are given a list of values and asked, "Which of the following values are solutions to the inequality $$5x - 8 \leq 42$$? Select all that apply." (6.EE.5)

The instructional materials for Match Fishtank Grade 6 meet the expectations that they attend to those standards that set an expectation of procedural skill and fluency. The structure of the lessons includes several opportunities to develop fluency and procedural skills; for example, in each lesson the Anchor Problem(s) provide students with a variety of problem types to practice procedural skills. The instructional materials provide opportunities for students to independently demonstrate procedural skill and fluency throughout the grade level, especially where called for by the standards (6.NS.2, 6.NS.3, 6.EE.A). For example, students independently demonstrate fluency in the following:

Unit 3, Multi-Digit and Fraction Computation, Lesson 9, Target Task, Problem 1, students practice fluently adding, subtracting, and multiplying decimals: "Calculate the product: 78.93 × 32.4." (6.NS.3)

Unit 3, Multi-Digit and Fraction Computation, Lesson 10, Target Task, students are given the opportunity to independently demonstrate procedural skill in division of multi-digit numbers using the standard algorithm by responding to the question, "Use the standard algorithm to solve 392,196 ÷ 87. Check your answer using multiplication." (6.NS.2)

Unit 3, Multi-Digit and Fraction Computation, Lesson 11, Target Task, Problem 1, students use the division algorithm to develop and maintain fluency in dividing whole numbers and decimals: "Find the decimal value of 3 ÷ 50 using any strategy. Then find the quotient using long division and show the answers are the same." (6.NS.2, 6.NS.3)

Unit 5, Numerical and Algebraic Expressions, Lesson 2, the Illustrative Mathematics activity Exponent Experimentation 2 is recommended as a Problem Set for the objective to evaluate numerical expressions involving whole-number exponents. This task supports fluency as students practice working with operations, decomposing numbers, and recognizing perfect squares and perfect cubes: "Here are some different ways to write the number 16: a) $$2^4$$ b) $$12 - (2^1+2^2)+ 500/50$$ c) $$2^3 + 2^3$$ d) $$2/3 \times 48^1 - (1+3)^2$$. Find at least three different ways to write each value below. Include at least one exponent in all of the expressions you write. a. 81 b. $$2^5$$ c. 64/9" (6.EE.1)

Unit 5, Numerical and Algebraic Expressions, Lesson 2, Anchor Problem 2 uses students' understanding of area models to compare the values of expressions using exponents and area models of squares and rectangles; for example: "Four expressions are shown below along with four area diagrams. Match each expression to a diagram.
Then evaluate the expression and find the area of the diagram to demonstrate they are equivalent." (6.EE.1)

The instructional materials provide opportunities for students to independently demonstrate procedural skills. These can include problems from Open Up Resources Grade 6-8 Mathematics, Open Middle, and EngageNY, Great Minds. For example:

Unit 5, Numerical and Algebraic Expressions, Lesson 9, students generate equivalent expressions. For example, Problem Set Guidance, Open Middle Distributive Property, states, "Fill in the boxes below using the whole numbers 0 through 9 no more than one time each so that you can make a true equation." (6.EE.4)

Unit 6, Equations and Inequalities, Lesson 10, students work toward developing procedural skills in writing inequalities for real-world conditions. Anchor Problem 3 states, "Two similar situations are described below. Situation A: A backpack can hold at most 8 books. Situation B: A backpack can hold at most 8 pounds. Draw a graph for each situation to represent the solution set. Compare and contrast the two graphs." Problem Set Guidance provides additional practice: "Write an equation to represent each situation and then solve the equation. Andre drinks 15 ounces of water, which is 3/5 of a bottle. How much does the bottle hold? Use x for the number of ounces of water the bottle holds." (6.EE.8)

In the Problem Set Guidance and Target Tasks, students engage with problems that have real-world contexts and opportunities for application, especially where called for by the standards (6.RP.3, 6.NS.1, 6.EE.7, 6.EE.9). The instructional materials include multiple opportunities for students to engage in routine and non-routine application of mathematical skills and knowledge. Students have opportunities to independently demonstrate the use of mathematics flexibly in a variety of contexts. These can include problems from Open Up Resources Grade 6-8 Mathematics, Open Middle, MARS Formative Assessment Lessons, Robert Kaplinsky, Yummy Math, EngageNY - Great Minds, and others.

Examples of routine application include, but are not limited to, those that present familiar situations and/or appear in CCSSM Table 1: Common Addition and Subtraction Situations and Table 2: Common Multiplication and Division Situations. For example:

Unit 1, Understanding and Representing Ratios, Lesson 1, Anchor Problem 3, "To make green-colored water, Brian mixes drops of green food dye and cups of water in a ratio of 4:3. a. Draw a double number line to represent the ratio of drops of green food dye to cups of water. b. Use your double number line to find 2 equivalent ratios. c. Brian's friend, Evan, uses a ratio of 20 drops of green food dye to 15 cups of water. Will Evan's water be the same color green as Brian's? Explain your reasoning." (6.RP.3)

Unit 3, Multi-Digit and Fraction Computation, Lesson 6, Target Task, "You are stuck in a big traffic jam on the freeway and you are wondering how long it will take to get to the next exit, which is 1 1/2 miles away. You are timing your progress and find that you can travel 2/3 of a mile in one hour. If you continue to make progress at this rate, how long will it be until you reach the exit? Solve the problem with a diagram and explain your answer. Then write and solve an equation and show that it is the same as what you got in your diagram." (6.NS.1)

Unit 6, Equations and Inequalities, Lesson 3, Anchor Problem 2, "At a market, a farmer sells apples for $1.33 per pound.
At the end of a weekend, the farmer made $74.48 from selling apples. Which equation can be used to determine x, the number of pounds of apples the farmer sold over the weekend?" (6.EE.7)

Unit 6, Equations and Inequalities, Lesson 7, Anchor Problem 3, "The school librarian, Mr. Marker, knows the library has 1,400 books, but he wants to reorganize how the books are displayed on the shelves. Mr. Marker needs to know how many fiction, nonfiction, and resource books are in the library. He knows that there are four times as many fiction books as resource books. There are half as many nonfiction books as fiction books. a. If these are the only types of books in the library, how many of each type of book are in the library? b. Draw a tape diagram to represent the books in the library, and then write and solve an equation to determine how many of each type of book there are in the library." (6.EE.6, 6.EE.7)

Unit 6, Equations and Inequalities, Lesson 13, Target Task, "Arian wants to save 20% of his paychecks in a savings account. a. Write an equation to represent the amount Arian should save, s, from a paycheck in the amount of p dollars. b. Create a table of values with at least three or four different paycheck amounts. c. Plot the values in the coordinate plane to show the relationship between the amount Arian saves and the amount Arian earns. d. Explain how you could use your graph to find how much of a $60 paycheck Arian should put into his savings account." (6.EE.9, 6.RP.3.a)

Examples of non-routine application include, but are not limited to, those with real-world contexts that are unfamiliar, novel, and/or unrehearsed. For example:

Unit 2, Unit Rates and Percent, Lesson 14, Problem Set Guidance allows students to apply strategies and organize information and their workspace to keep track of their solution pathway: "Two congruent squares, ABCD and PQRS, have side length 15. They overlap to form the 15 by 25 rectangle AQRD shown. What percent of the area of rectangle AQRD is shaded?" One possible solution to this non-routine problem uses an equation to find the overlap in terms of given information, which reflects the mathematical ideas described in cluster 6.EE.B. (6.RP.2, 6.RP.3, 6.RP.3.c, 6.RP.3.d)

Unit 3, Multi-Digit and Fraction Computation, Lesson 5, Anchor Problem 2, Handout 2, students solve and write story problems involving division with fractions. For example: a. "Make up and solve two of your own slicing problems. In Problem A, you should not have any cheese left over, and in Problem B, you must have some cheese left over." b. "For each problem, you need to determine how much cheese you start off with: how long is your block of cheese? You also need to say how thick you want the slices of cheese to be—or you can decide how many slices you will need in total. Keep in mind that the thickness of each slice should be between 1/32 and 1/2 inches thick." c. "After you create your problems, make a poster showing each problem and its solution. Each solution should include an explanation, at least one calculation, and a diagram." (6.NS.1)

Unit 3, Multi-Digit and Fraction Computation, Lesson 6, Anchor Problem 1, students solve problems involving division with fractions: "It requires 3/4 of a credit to play a video game for one minute. Emma has 7/8 credits. Can she play for more or less than one minute? Explain how you know. How long can Emma play the video game with her 7/8 credits? How many different ways can you show the solution?"
The instructional materials in Grade 6 provide opportunities for students to independently demonstrate the use of mathematics flexibly in a variety of contexts. For example: Unit 1, Understanding and Representing Ratios, Lesson 18, Target Task, "In a bag of jelly beans, there are purple jelly beans (grape) and red jelly beans (cherry). For every 4 purple jelly beans, there are 7 red jelly beans. There are 902 jelly beans in the bag. How many of each flavor are there? Choose a strategy to solve the problem. Explain why you chose this strategy and how it shows your solution." (6.RP.3) Unit 3, Multi-Digit and Fraction Computation, Lesson 3, Target Task, "A jar has 5 tablespoons of honey in it. One serving of honey is 3/4 of a tablespoon. How many servings of honey are in the jar?" (6.NS.1) Unit 6, Equations and Inequalities, Lesson 1, Target Task, "Draw a tape diagram or balance for each equation or situation below. a. You purchase 4 gift cards, each in the same amount. You spend a total of $60. b. x + 6 = 18. c. 6x = 18." (6.EE.6, 6.EE.7) Unit 6, Equations and Inequalities, Lesson 5, Target Task, Problem 2, "Lee filled several jars with 1/4 cup of water in each jar. He used a total of 8 cups of water. Let j represent the number of jars that Lee filled. Write and solve an equation to find out how many jars Lee filled." (6.EE.7) Unit 6, Equations and Inequalities, Lesson 7, Target Task, "A town's total allocation for firefighters' wages and benefits in a new budget is $600,000. If wages are calculated at $40,000 per firefighter and benefits at $20,000 per firefighter, write an equation whose solution is the number of firefighters the town can employ if they spend their whole budget. Solve the equation." (6.EE.6, 6.EE.7) The instructional materials for Match Fishtank Grade 6 meet expectations that the three aspects of rigor are not always treated together and are not always treated separately. All three aspects of rigor are present in the instructional materials. Many of the lessons incorporate two aspects of rigor with an emphasis on application. Student practice includes all three aspects of rigor, though there are fewer questions for conceptual understanding. Unit 3, Multi-Digit and Fraction Computation, Lesson 2, Anchor Problem 1, students develop conceptual understanding for dividing fractions. "Leonard made 1/4 of a gallon of lemonade and poured all of it into 3 glasses, divided equally. How much lemonade is in each glass? Write a division problem and draw a visual model." (6.NS.1) Unit 4, Rational Numbers, Lesson 8, Anchor Problem 3, students write inequalities to compare rational numbers in real-world contexts to develop procedural and fluency skills. "The elevation of New Orleans, Louisiana, is 7 feet below sea level. The elevation of Coachella, California, is -72 feet. Write an inequality to compare the two cities." (6.NS.7.a and b) Unit 8, Statistics, Lesson 8, Problem Set Guidance, students use what they have learned about mean and median and apply it to describe the center, spread, and overall shape of data. The Problem Set Guidance is as follows: "If a new student walked into our class, how many pockets might the new student be wearing? Which mathematical measure might be the best one to use for such a prediction? Explain your answer. Create a new set of data, different from your class, that has the same mean and median as your class data." (6.SP.2, 6.SP.5.d)
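To illustrate the last task with hypothetical numbers (ours, not the materials'): if the class data were {1, 2, 2, 3, 7}, with mean 3 and median 2, then {0, 2, 2, 5, 6} is a different data set with the same mean and median, since

$$\frac{0 + 2 + 2 + 5 + 6}{5} = 3 \quad \text{and} \quad \text{median} = 2.$$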
Unit 2, Unit Rates and Percent, Lesson 5, Target Task, students engage in procedural fluency and application as they solve, "Market Place is selling chicken for $4.50 per pound. Stop and Buy is selling 5 pounds of chicken for $23.75. You need to buy 8 pounds of chicken. At these rates, which store is cheaper? How much cheaper is it?" (6.RP.3) Unit 4, Rational Numbers, Lesson 2, Target Task, students develop procedural skills and fluency while demonstrating conceptual understanding: "Write the integer that describes each of the following situations. Then represent the integer on a horizontal or vertical number line. Include the value 0 on your number line and use an appropriate scale. a. A deposit made of $15, b. A withdrawal of $75, c. A credit of $110, d. A temperature of 15 degrees below 0." (6.NS.5) Unit 6, Equations and Inequalities, Lesson 2, "Define a solution to an equation as the value of the variable that, when substituted in, makes the equation a true statement." Students develop procedural skill and fluency, and conceptual understanding through application, as they test solutions using substitution and begin to translate situations to equations. Problem Set Guidance is as follows (Grade 6 Mathematics Sample, ER Item Claim 2, Version 1.0): "Ana is saving to buy a bicycle that costs $135. She has saved $98 and wants to know how much more money she needs to buy the bicycle. The equation 135 = x + 98 models this situation, where x represents the additional amount of money Ana needs to buy the bicycle. When substituting for x, which value(s), if any, from the set {0, 37, 98, 135, 233} will make the equation true? Explain what this means in terms of the amount of money needed and the cost of the bicycle." (6.EE.5) The instructional materials reviewed for Match Fishtank Grade 6 meet expectations that the Standards for Mathematical Practice are identified and used to enrich mathematics content within and throughout the grade level. Each Unit Summary contains descriptions of how the Standards for Mathematical Practice are addressed and what mathematically proficient students should do. An example is the Unit Summary for Unit 3, Multi-Digit and Fraction Computation, "By examining the structure of concrete models and patterns that emerge from these structures, students make sense of concepts such as multiplying by a reciprocal of a fraction when dividing or using long division as a shorthand to partial quotients (MP.8)." Lessons usually include indications of Mathematical Practices (MPs) within a lesson in one or more of the following sections: Criteria for Success, Tips for Teachers, or Anchor Problem Notes. An example in Unit 1, Understanding and Representing Ratios, Lesson 2, Criteria For Success is as follows: "Use drawings of ratios as a tool to better understand how two quantities are associated with each other (MP.1)." Lesson 4, Anchor Problem 2: "Continue to monitor student responses for accurate use of language as they describe ratios and their process of determining equivalent ratios (MP.6)." Another example is in Lesson 9, Tips for Teachers: "Students engage in MP.1 in this three-act task as they analyze the information given to them and determine how they can use equivalent ratios to fix the mix-up. They must map out their own strategy and check their answers, making adjustments as needed. Students also discover how they can apply ratio reasoning to support them in understanding the math in the problem and determine a solution (MP.4)."
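Worked arithmetic for two of the items quoted above (our illustrations, not taken from the materials). For the chicken comparison, Stop and Buy charges $23.75/5 = $4.75 per pound, so 8 pounds cost

$$8 \times \$4.50 = \$36.00 \quad \text{vs.} \quad 8 \times \$4.75 = \$38.00,$$

making Market Place cheaper by $2.00. For the bicycle-savings item, only 37 from the given set works, since

$$135 = x + 98 \;\Rightarrow\; x = 135 - 98 = 37.$$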
In some Problem Set Guidances, MPs are identified within the problem. An example is in Unit 6, Equations and Inequalities, Lesson 3, MARS Formative Assessment Lesson. It states, "This lesson also relates to all the Standards for Mathematical Practice, with a particular emphasis on: 2. Reason abstractly and quantitatively, 4. Model with mathematics, and 7. Look for and make use of structure." MP.1 is connected to mathematical content in Unit 7, Geometry, Lesson 6, as students analyze diagrams to make sense of them in context and determine the math strategies they can use in their solution. They determine any missing measurements they may need and add labels or markings to the diagrams as needed, organizing their work along the way. An example is in the Tips for Teachers section. It is as follows: "Finally, students should ask themselves at the end if their answer makes sense for the context (MP.1)." Anchor Problem 1, "A carpenter is building a new wall for a house that he is renovating. He knows that there will be a door and a window in the wall. Around the door and window, he uses wooden board to create the wall. A blueprint of the wall is shown below. How much wooden board, in square feet, does the carpenter need to build the wall? Explain your reasoning." MP.4: In Unit 2, Unit Rates and Percent, Lesson 5, Criteria for Success, students apply the mathematics they know "to model and solve complex problems involving rate." The Anchor Problem is as follows: "Chris and David run along a bike path toward a pond. Chris can run 3 miles in 30 minutes, and David can run 5 miles in 60 minutes. They both start running at the same time at the start of the bike path, shown below. a. If both Chris and David run at their current rates, how long will it take each one to get to the pond? b. Who will be closer to the pond after 90 minutes? How much farther ahead will this person be in front of the other person?" Students have opportunities to take different approaches and to organize and explain their strategies so that others, who may have taken a different approach, can follow their line of thinking. MP.8 is connected to mathematical content in Unit 3, Multi-Digit and Fraction Computation, Lesson 4, in Anchor Problem 1. It states, "The number 3 is divided by unit fractions 1/2, 1/3, 1/4, and 1/5. For each division problem, draw a visual model to represent the problem and to find the solution. Then complete the rest of the chart and answer the questions that follow. What pattern do you notice? What generalization can you make? Explain your reasoning. Notes: For the multiplication problem, students may think of 1/2 × ? = 3. This is not incorrect, as it is the related multiplication problem of the division problem shown. However, the focus of this problem is observing the pattern when dividing by a unit fraction; specifically, 3 × 2 = ? (MP.8)."
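The pattern this Anchor Problem drives at can be summarized (our illustration):

$$3 \div \frac{1}{2} = 6 = 3 \times 2, \quad 3 \div \frac{1}{3} = 9 = 3 \times 3, \quad 3 \div \frac{1}{4} = 12 = 3 \times 4, \quad \text{so in general } 3 \div \frac{1}{n} = 3n.$$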
The instructional materials reviewed for Match Fishtank Grade 6 meet expectations that the instructional materials carefully attend to the full meaning of each practice standard. Unit 2, Unit Rates and Percent, Lesson 1, Anchor Problem 1 recommends, "Rather than a teacher-led problem, this is a good opportunity to have students work in small groups and determine which strategy they would like to use. Groups can compare different strategies, and the class can discuss which approach they think is best." The problem is as follows: "Chichén Itzá was a Mayan city in what is now Mexico. The picture below shows El Castillo, also known as the pyramid of Kukulcán, which is a pyramid located in the ruins of Chichén Itzá. The temple at the top of the pyramid is approximately 24 meters above the ground, and there are 91 steps leading up to the temple. How high above the ground would you be if you were standing on the 50th step? The 33rd step? The 80th step?" Unit 7, Geometry, Lesson 17, Target Task, "Kelly has a rectangular fish aquarium that measures 18 inches long, 8 inches wide, and 12 inches tall. What is the maximum amount of water the aquarium can hold? If Kelly wanted to put a protective covering over the four glass walls and top of the aquarium, how much material will the cover need?" This problem encourages students to make sense of the problem as they conceptualize volume and area through sketching and labeling the dimensions of the aquarium with the appropriate measurements to answer the questions. Unit 6, Equations and Inequalities, Lesson 7, Target Task, students "reason abstractly and quantitatively as they use symbols to represent situations, define their variables, and then interpret their numerical solutions in context": "A town's total allocation for firefighters' wages and benefits in a new budget is $600,000. If wages are calculated at $40,000 per firefighter and benefits at $20,000 per firefighter, write an equation whose solution is the number of firefighters the town can employ if they spend their whole budget. Solve the equation." Unit 8, Statistics, Lesson 5, Anchor Problem 2, "Students demonstrate an understanding of the mean or average, as well as an understanding of the relationship between the mean and the data values from which it was calculated." The problem is as follows: "After finding the average or fair share payment for each person, Person E decides to not take the job because he would be making less money. a. If Person E leaves, then what is the new average payment of persons A–D? b. What impact did Person E leaving have on the average payment?" MP.4: Students model with mathematics. Unit 3, Multi-Digit and Fraction Computation, Lesson 5, Anchor Problems 1 and 2 [Strategic Education Research Partnership (SERP), "No Matter How You Slice It"], students model a real-world application using division of fractions. For example, "If you know the length of a block of cheese, can you determine how many slices it can make? Suppose you get a new block and you know how thick you want your slices. What do you need to know in order to tell how many sandwiches you can make?" Unit 7, Geometry, Lesson 9, Anchor Problem 2, students model with mathematics by using a 9 x 9 grid to design a garden while making adjustments as they try to meet the requirements. "You are responsible for a small plot of land that measures 9 ft. x 9 ft. in the community garden. You want to include $$48\ \text{ft}^2$$ of gardening space, and you want your garden to have a rectangular shape and a triangle shape. Draw a possible plan for your garden on the grid below. Make sure you do not go outside of the 9 ft. x 9 ft. space."
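For the firefighter budget task quoted above, the intended equation and its solution work out as follows (our illustration, not taken from the materials):

$$40000f + 20000f = 600000 \;\Rightarrow\; 60000f = 600000 \;\Rightarrow\; f = 10 \text{ firefighters}.$$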
MP.5: Students use appropriate tools strategically. Unit 5, Numerical and Algebraic Expressions, Lesson 5, Anchor Problem 1, "A company hires five people for the same job for one week. The amount that each person is paid for the week is shown in the table below." (A table is provided.) "Person D states that the payments are not fair since each person is doing the same job and brings the same set of skills to the job. Everyone agrees that they should all get paid the same amount. How much should each person get paid so that everyone gets the same amount? Assume that the company will spend the same amount as it currently is." Students can use a variety of tools to solve the problem. Unit 7, Geometry, Lesson 1, Anchor Problem 1, "A parallelogram is shown below. a. What strategies could you use to find the area of the parallelogram? b. Follow Steps 1-4 of this GeoGebra applet Area of Parallelogram to explore the area of parallelograms. Try out different parallelograms by moving the red and blue dots. c. In general, how can you find the area of any parallelogram?" Students can choose their own strategy to solve the problem. MP.6: Students attend to precision. Unit 1, Understanding and Representing Ratios, Lesson 4, Anchor Problem 2, students express numerical answers with a degree of precision appropriate to the context of the problem, using the correct symbols: "How can you create ratios equivalent to 5:6? Create equivalent ratios and reason about how you can create one that correctly matches with 54 cashews." Unit 5, Numerical and Algebraic Expressions, Lesson 11, students "Define variables for real-world contexts with precision." For example, in the Target Task, "Abel runs at a constant rate. The table below shows how far Abel has run after a certain number of hours. Write an expression to represent the number of miles Abel ran after h hours." Unit 5, Numerical and Algebraic Expressions, Lesson 2, Anchor Problem 1, "Evaluate the following numerical expressions: a. 2(5+(3)(2)+4) b. 2((5+3)(2+4)) c. 2(5+3(2+4)). Can the parentheses in any of these expressions be removed without changing the value of the expression?" Unit 8, Statistics, Lesson 4, Anchor Problem 1 presents three histograms and students answer the following questions: "Describe the shape of each distribution and explain what it means about the data set. Which graph is skewed left? Skewed right? Symmetrical? If these histograms represented the wages that people at a company earned, which company would you want to work at? Why? (Assume the same scale in each graph.)" When explaining their choice for part (b), students use structural features of the distributions in constructing their arguments. MP.8: Students look for and express regularity in repeated reasoning. Unit 6, Equations and Inequalities, Lesson 6, Anchor Problem 2, students generalize "through repeated reasoning, an equation to represent the relationship between percent, whole, and part: percent x whole = part." "30% of what number is 12? Solve this problem first by drawing a diagram. Then write and solve an equation to verify your solution. a. Does the 12 represent the whole or the part? b. What would a tape diagram look like? c. What would a double number line look like? d. How can you use your diagram to find the missing value? e. What equation can be used to solve percent problems? f. What does the equation look like for these values? g. How will you solve the equation? h. Does your answer match what you found from the diagram?"
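Worked answers for the two computational items above (our illustrations, not taken from the materials). The three numerical expressions evaluate to

$$2(5+(3)(2)+4) = 2(15) = 30, \quad 2((5+3)(2+4)) = 2(48) = 96, \quad 2(5+3(2+4)) = 2(23) = 46,$$

and the percent task generalizes to percent × whole = part:

$$0.30\,w = 12 \;\Rightarrow\; w = \frac{12}{0.30} = 40.$$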
Why is that the case?" The instructional materials reviewed for Match Fish Tank Grade 6 meet the expectations that the instructional materials prompt students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics. Unit 1, Understanding and Representing Ratios, Lesson 11, Problem Set Guidance, (EngageNY Mathematics Grade 6 Mathematics, Module 1, Topic B, Lesson 10, Exit Ticket ad Problem Set, Exercise 1) is an example. Students are given a table of Hours Worked v. Pay, which has an error and instructed, "The following tables were made incorrectly. Find the mistakes that were made, create the correct ratio table, and state the ratio that was used to make the correct ratio table." Unit 4, Rational Numbers, Lesson 4, Target Task is as follows: "Jane completes several example problems that ask her to find the opposite of the opposite of a number, and for each example, the result is a positive number. Jane concludes that when she takes the opposite of the opposite of a number, the result will always be positive. Do you agree with Jane? Use the number line below to support and justify your answer." Unit 5, Numerical and Algebraic Expressions, Lesson 8, Target Task is as follows: "Students were asked to write a pair of equivalent expressions. The work of four students is shown below. Harry ab=a+a3+a+b+b+b, Iris $$3a^2 = 3 \times 3 x\times a \times a$$, Jill a + a + 1 + a + 2 = 3a + 3, Kevin 2a + 3b = 2 + a + 3 + b. Which student(s) wrote an equivalent pair of expressions? Justify your answer." Unit 7, Geometry, Lesson 2, Target Task 2 is as follows: "Dan and Joe are responsible for cutting the grass on the local high school soccer field. Joe draws a diagonal line through the field, as shown in the diagram below, and says that each person is responsible for cutting the grass on one side of the line. Dan says that is not fair because he will have to cut more grass than Joe. Is Dan correct? Why or why not?" Unit 2, Unit Rates and Percent, Lesson 2, Problem Set Guidance,(Open Up Resources Grade 6 Unit 8 Practice Problems, Lesson 13), is as follows: "When he sorts the class's scores on the last test, the teacher notices that exactly 12 students scored better than Clare and exactly 12 students scored worse than Clare. Does this mean that Clare's score on the test is the median? Explain your reasoning." Unit 3, Mulit-digit and Fraction Computation, Lesson 1, Target Task is as follows: "Seventy-two students in the sixth-grade class are going on a field trip to the aquarium. The math teacher writes this division problem to represent how the students will be grouped for the field trip: 72 ÷ 6=? Abe says, 'This means that there are 6 students in each group.' Sam says, 'This means there are 6 groups of students.' Who is correct? Explain your reasoning and draw a diagram to support your answer." Unit 5, Numerical and Algebraic Expressions, Lesson 10, Problem Set Guidance (Open Up Resources Grade 6 Unit 6 Practice Problems, Lesson 11, Problem 2), is as follows: "Priya rewrites the expression 8y−24 as 8(y−3). Han rewrites 8y−24 as 2(4y−12). Are Priya's and Han's expressions each equivalent to 8y−24? Explain your reasoning." Unit 8, Statistics, Lesson 8, Anchor Problem 2 is as follows: "At the University of North Carolina (UNC) in the mid-1980's, the average starting salary for a Geography major was over $100,000 (equivalent to almost $300,000 today). 
Unit 8, Statistics, Lesson 8, Anchor Problem 2 is as follows: "At the University of North Carolina (UNC) in the mid-1980s, the average starting salary for a Geography major was over $100,000 (equivalent to almost $300,000 today). At that same time, basketball star Michael Jordan was drafted into the NBA with one of the highest salaries in the league. He had just graduated from UNC with a degree in Geography. Explain why the mean is a misleading measure of center to represent the salary of geography students at UNC. What measure of center would better represent the salary of geography students at UNC? Explain your reasoning." Unit 1, Understanding and Representing Ratios, Lesson 5, Anchor Problem 1 is as follows: "A restaurant that specializes in making pancakes makes 1 batch of pancakes using a ratio of 2 cups of flour to 3 cups of milk. How much flour and milk will the restaurant use to make 2 batches of pancakes? To make 3 batches? Show your reasoning using a visual representation of your choice. On a busy Saturday, the restaurant uses 36 cups of milk for pancakes. How much flour does the restaurant use for the pancakes, and how many batches is this? Show your reasoning using a visual representation of your choice." Teachers are instructed to guide students in constructing viable arguments with the following Guiding Questions, "What visual representations could you use to represent the ratios in this problem? Which of these representations are reasonable to use for part (a)? Why? Which of these representations are reasonable to use for part (b)? Why? Are there any representations that would work well for both parts of the problem? For part (a) but not (b)? If the restaurant made 7 batches of pancakes on Sunday, how much milk and flour did the restaurant use? How does your visual representation help you see this?" Unit 4, Rational Numbers, Lesson 1, Anchor Problem 1, teachers are prompted to facilitate a discussion between students. "An extension of this problem could have students working in pairs, where one student makes a claim similar to Andrea, and the other student agrees or disagrees and explains his or her reasoning." Unit 4, Rational Numbers, Lesson 9, Notes for Anchor Problem 3 are as follows: "This is a good opportunity to have students work in pairs, perhaps after some initial independent time to determine their own responses. Students should defend their reasoning for choosing sometimes, always, or never, using counterexamples where relevant." Unit 5, Numerical and Algebraic Expressions, Lesson 8, Problem 3, teachers are prompted to allow time for students to share their solutions and explain their reasoning: "Are the two expressions below equivalent?" Guiding Questions: a. "Do the variables x and y represent the same number?" b. "Draw tape diagrams for the expressions to see which are equivalent." c. "How can you use substitution to determine or verify your answer?" Notes: "This is a good opportunity for students to use counterexamples in their explanations to show the two expressions are not equivalent." Unit 5, Numerical and Algebraic Expressions, Lesson 9, Anchor Problem 3, teachers are prompted to have students review four expressions written on the board to determine which are correct. Guiding Questions provided are: a. "How did Sam think about the perimeter?" b. "Where did he get the 2?" c. "How did Joanna think about perimeter?" d. "How is it different from Sam?" e. "How did Kiyo think about perimeter?" f. "How did Erica think about perimeter?" g. "Whose thinking was she close to and why?" The notes add, "Students analyze each of the expressions to understand how each one may have been created... Share and discuss students' analyses of the expressions so they may hear various arguments from the class."
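For the Saturday question in the pancake task above, the ratio reasoning works out as (our illustration):

$$36 \text{ cups milk} \div 3 \text{ cups milk per batch} = 12 \text{ batches}, \qquad 12 \times 2 = 24 \text{ cups flour}.$$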
Unit 7, Geometry, Lesson 1, Anchor Problem 1, teachers are prompted to allow time for students to share their thinking about their solutions to an area problem. There are guiding questions to ask students that support critiquing the work that they did in class. "Have students discuss in pairs first to compare responses. If students disagree with which parallelograms are marked correctly, have each student explain his or her thinking and justify his or her reasoning. Use the guiding questions to prompt student thinking." Vocabulary is introduced at the Unit Level and reinforced through a Vocabulary Glossary and in the Criteria For Success. For example: Each Unit has a vocabulary list with the terms and notation that students learn or use in the unit. For example, in Unit 1, Understanding and Representing Ratios, the vocabulary includes the following words: ratio, part to part ratio, part to whole ratio, multiplicative relationship, ratio table, double number line, equivalent ratio, and tape diagram. Unit 4, Rational Numbers, Lesson 9, Criteria For Success, "Define absolute value as the distance from zero on a number line. Understand that absolute value is a distance or magnitude and, therefore, is always positive or zero. Absolute value is never negative." Unit 4, Rational Numbers, Lesson 4, Anchor Problem 1 states, "Use this Anchor Problem to introduce and define opposite numbers, including the fact that zero is its own opposite." Unit 7, Geometry, Lesson 8, Anchor Problem 1 is as follows: "When students discuss their strategies in pairs, listen for how students explain their work. How are they describing the shapes they work with? How are they explaining their process of finding measurements and areas? Ensure students are accurate and precise in their explanations." The Match Fishtank Grade 6 materials support students at the lesson level by providing new vocabulary terms in bold print, and definitions are provided within the sentence where the term is found. Additionally, Anchor Problem Guiding Questions allow students to use new vocabulary in meaningful ways. For example: Unit 2, Unit Rates and Percent, Lesson 2, students "Define and understand a rate, associated with a ratio a:b, as a/b units of the first quantity per 1 unit of the second quantity. For example, if a person walks 6 miles in 2 hours, the person is traveling at a rate of 3 miles per hour." Unit 7, Geometry, Lesson 14, Anchor Problem 1 states, "A set of prisms and a set of pyramids are shown. Define and identify face, vertex, edge, and base in the various prisms and pyramids." There are eight units in each grade level. Each unit presents lessons in a consistent structure. During the Anchor Problems, which include guided instruction, step-by-step procedures, and problem solving, students work on examples and problems to learn new concepts. Problem Set Guidance contains a variety of exercises that allow students to independently master and demonstrate their understanding of the material. These can include problems from Open Up Resources Grade 6-8 Mathematics, Open Middle, MARS Formative Assessment Lessons, Robert Kaplinsky, Yummy Math, EngageNY - Great Minds, and others. Each lesson concludes with a Target Task intended for formative assessment. For example: Unit 1, Understanding and Representing Ratios, Lesson 1, Anchor Problem 1, students describe the association between two groups of shapes using ratios.
In the Problem Set Guidance (Open Up Resources Grade 6 Unit 2 Practice Problems, Lesson 1, Problems 1-3), students are given additional practice with this skill. In Problem 3, they are shown a picture of two animals: "Write two different sentences that use ratios to describe the number of eyes and legs in this picture." Unit 2, Unit Rates and Percent, Lesson 3, Anchor Problem 3, students find unit rates and use them to solve problems. An example is as follows: "Emeline can type 2 pages in 8 minutes. What are two rates associated with this ratio? Emeline has a deadline in 18 minutes, at which point she needs to be done typing an article. What is the greatest number of pages that Emeline can type before her deadline? Emeline has to type a 7-page article. How much time will it take her?" The Guiding Questions for teachers help to lead the student through the problem-solving process. Unit 3, Multi-Digit and Fraction Computation, Lesson 3, Anchor Problem 1, students use a visual model to solve a problem that involves dividing a whole number by a fraction. In the Problem Set Guidance (Illustrative Mathematics Dividing by One-Half), students are given additional practice with that skill. An example is as follows: "Solve the four problems below. Which of the following problems can be solved by finding 3 ÷ 1/2?" Unit 4, Rational Numbers, Lesson 4, Anchor Problem 1, students define and label opposite numbers on a number line through direct instruction and guiding questions. Unit 7, Geometry, Lesson 2, Anchor Problems, students find the area of right triangles and are introduced to the general formula for the area of a triangle. Students continue to practice this skill in the Problem Set Guidance. Unit 8, Statistics, Lesson 3, Anchor Problem 3, students create histograms from tally charts to represent and interpret a data set. In the Problem Set Guidance (EngageNY Mathematics Grade 6 Mathematics, Module 6, Topic A, Lesson 4, Exit Ticket), additional problems allow for independent practice and mastery. The lessons follow a logical, consistent format that intentionally sequences assignments, provides for a natural progression, and leads to full understanding for students. The instructional materials for Match Fishtank Grade 6 meet the expectations that the instructional materials prompt students to produce content in a variety of ways. For example: Unit 1, Understanding and Representing Ratios, Lesson 2: Students use double number lines to represent ratios and identify equivalent ratios. Unit 3, Multi-Digit and Fraction Computation, Lesson 2: Students use visual models to divide whole numbers by fractions. Unit 4, Rational Numbers, Lesson 6: Students use a number line to explain the order of rational numbers. Unit 6, Equations and Inequalities, Lesson 1: Students use tape diagrams and balances to represent and solve equations. Unit 1, Understanding and Representing Ratios, Lesson 1, Anchor Problem 1 uses a ratio shapes handout allowing students to cut out and sort/group shapes as they describe ratios of groups of objects. Unit 3, Multi-Digit and Fraction Computation, Lesson 14, Anchor Problem 3 and Problem Set Guidance both use domino cards and a "Factor Game" to teach and reinforce prime factorization. The game is connected to written methods as students transition to writing numbers as a product of prime factors. Unit 8, Statistics, Lesson 3, Problem Set Guidance links to StatKey Descriptive Statistics for One Quantitative Variable to create virtual histograms from data sets.
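The unit-rate arithmetic behind the Emeline typing item above (our illustration): the two rates are 2/8 = 1/4 page per minute and 8/2 = 4 minutes per page, so

$$18 \times \frac{1}{4} = 4.5 \;\Rightarrow\; 4 \text{ whole pages before the deadline}, \qquad 7 \times 4 = 28 \text{ minutes for the 7-page article}.$$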
In the materials, facilitator notes for each lesson include questions that are provided in Anchor Problem notes for the teacher to guide students' mathematical development and to elicit students' understanding. The materials indicate that the questions provided are intended to provoke thinking and provide facilitation through the mathematical practices, as well as influencing students to think through their work. For example: Unit 2, Unit Rates and Percent, Lesson 4, Anchor Problem 2 and Guiding Questions: "Are you able to compare the two options just using the price? Why or why not? Are you able to compare the two options just using the pounds of honey? Why or why not? Why would the unit rate of cost per pound be a useful tool to compare? How can you find the unit rate for each option? What does each unit rate tell you about the options?" Unit 4, Rational Numbers, Lesson 3, Anchor Problem 2 and Guiding Questions: "What does zero feet mean in this situation? Did the two friends travel in the same or different directions? How do you know? Sketch a number line to represent the elevations of the friends." Unit 7, Geometry, Lesson 12, Anchor Problem 1 and Guiding Questions: "What is the absolute value of each x−coordinate? Of each y−coordinate? What does it mean if two points are symmetrical about an axis? How far is each point from the axis? Would the points (−2,5) and (2,5) line up vertically or horizontally? Which axis would you fold along to match (−2,5) with (2,5)?" Unit 8, Statistics, Lesson 4, Anchor Problem 1 and Guiding Questions: "How are the first two graphs similar? How are they different from the third graph? Which graph would you describe as symmetrical? Why? What features make it symmetrical? A skewed distribution has values that are not typical of the rest of the data. These skewed data points can be on the low or the high end. Which graph would you say is skewed left (is skewed toward the smaller values or has a "tail" to the left)? Which graph would you say is skewed right (is skewed toward the larger values or has a "tail" to the right)?" A Tips for Teachers section is present in all lessons and supports teachers with resources and an overview of the lesson. These Tips for Teachers sections provide some guidance on how to present the content. For example: Unit 3, Multi-Digit and Fraction Computation, Lesson 4, Tips for Teachers: In this lesson, students use visual models and patterns to develop the general rule for dividing by fractions. The focus of this lesson is on the development of the rule rather than on the use of it. Students will have opportunities for a lot of practice in upcoming lessons. There is guidance for teachers in the form of Guiding Questions and Notes to use when implementing Anchor Problems, Problem Set Guidance, and Target Tasks. Problem Set Guidance sections are optional problem sets within student-facing materials. There is no student edition, so guidance for ancillary materials is not needed. The Teacher Tools include several handouts that address the instructional approach of the program. An example is as follows: "Components of a Math Lesson (Grades 6-12)". In addition, there are handouts regarding several instructional strategies. Examples are as follows: "A Guide to Academic Discourse" and "A Guide to Supporting English Learners". The strategies are not identified as research-based.
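The answers the coordinate-symmetry guiding questions (Unit 7, Lesson 12 above) point toward can be made concrete (our illustration): the points (−2,5) and (2,5) share the same y-coordinate, so they line up horizontally, and since

$$|-2| = |2| = 2,$$

each point lies 2 units from the y-axis, so folding along the y-axis matches (−2,5) with (2,5).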
Unit 3, Multi-Digit and Fraction Computation, Lesson 3, Anchor Problem 3 states, "There are often common misconceptions around how to treat any remaining fractions of the wholes. In this case, we can count four 2/3 cups in the visual, and we see that there is one third of a cup left over. However, we are counting in servings of 2/3 cup, not 1 cup, so the remaining piece represents 1/2 of a 2/3 cup serving. If students struggle with this concept, there is an additional problem from Illustrative Mathematics' "Cup of Rice" referenced in the Problem Set Guidance that addresses this further." Unit 6, Equations and Inequalities, Lesson 5, Anchor Problem 2: "A common misconception with problems involving division is for students to divide to find the variable instead of multiply, especially in a problem like the one above where 10 is divisible by 5. Use the diagram to visually represent x as the value that is being divided into 5 pieces, where each piece is equal to 10. Substituting the value of 2 into the equation for x is another way to demonstrate it as an incorrect answer." Each lesson is designed with teacher-led Anchor Problems, Problem Set Guidance, and Target Tasks. The lessons contain multiple opportunities for practice with an assortment of problems. The Anchor Problems provide the teacher with guiding questions and notes in order to provide feedback for students' learning. For example: Unit 2, Unit Rates and Percent, Lesson 11, Anchor Problem 2, Notes include the following: "This is also a great opportunity to engage students' number sense skills. Discuss the different ways that students can solve 22/100 × 150. For example, 22/100 can be rewritten as 11/50 or as 0.22. There is further simplification that can happen with the 150 in the numerator and 100 or 50 in the denominator to simplify the problem a great deal." Unit 5, Numerical and Algebraic Expressions, Lesson 3, Anchor Problem 2 is as follows: students write expressions to match several situations. The guiding questions for teachers ask students to use their expressions to solve problems. In the Notes, the following is included: "Use this Anchor Problem to introduce students to simple expressions that involve variables to represent quantities (MP.2). If students get stuck, have them try out different values for the variables to see how the quantities interact with each other." Each unit provides an answer key for the Unit Assessment. The answer key provides each item number and the targeted standard. For example: Unit 8, Statistics, Assessment Item 5 correlates with 6.SP.2. Unit 3, Multi-Digit and Fraction Computation, Assessment Item 6 correlates with 6.NS.3. Unit 7, Geometry, Assessment Item 5b states, in the "Correct Answer and Scoring Guidance": Solution: "No, I do not agree with the student. Triangle A and Triangle B have the same measurements for the base and height, so they will have the same area. (Or equivalent) 1 pt for determining the student's claim is incorrect; 1 pt for justification/explanation." The Lesson Structure provides support for sequencing instruction. Each lesson includes a list of key skills and concepts that students should practice. The program overview states that the lesson core consists of Anchor Problems that lend themselves better to whole group instruction, small group guided discovery, or both. The guiding questions help scaffold and/or extend each Anchor Problem, but there is no instruction for teachers on how to do this or handle student misconceptions.
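The simplification suggested in the Unit 2, Lesson 11 Notes above works out as (our illustration):

$$\frac{22}{100} \times 150 = \frac{11}{50} \times 150 = 11 \times 3 = 33.$$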
The Tips for Teachers and Anchor Problem Notes include limited strategies to help teachers sequence or scaffold lessons: "Ask students," "Encourage students to look closely," "Remind students of a definition," or "Point out to students." Examples include the following: Unit 3, Multi-Digit and Fraction Computation, Lesson 6, Anchor Problem 1 Notes are as follows: "Encourage students to show and share multiple ways to solve this problem. One solution may include a tape diagram, another a number line approach. You can also reason through this solution using rates and determining that 1/8 of a credit is the same as 30 minutes. Discuss how writing 1/4 with a common denominator to 7/8 can offer new ways to solve the problem." Unit 6, Equations and Inequalities, Lesson 5, Anchor Problem, Notes are as follows: "Ask students how they would read the equation aloud, what operations are involved, how they would solve an equation with that operation, etc. If students struggle with the fraction, try replacing it with a whole number to gain a better understanding of the operations. However, be sure to replace the fraction and discuss the fractional answer." Students engage in tasks throughout lessons in the Anchor Problems, Target Tasks, Problem Set Activities, the 3-Act Math Modeling, and the Mathematics Assessment Project activities. They all present multiple entry points for students. For example: Unit 1, Understanding and Representing Ratios, Lesson 9, Target Task is as follows: "Fix the Egg Mixup: What I did: 2 eggs, 4 tablespoons flour; What I should have done: 2 eggs, 3 tablespoons flour." Presenting this scenario without posing a question allows multiple entry points and varied strategies for students. ELL students have support to facilitate their regular and active participation in learning mathematics (e.g., modifying vocabulary words within word problems). The ELL Design is highlighted in the teaching tools document, "A Guide to Supporting English Learners", which includes strategies that are appropriate for all students but not targeted to any other specific group of learners. There are no general statements about ELL students and other special populations within the units or lessons. Specific strategies for support, accommodations, and/or modifications are mentioned in "A Guide to Supporting English Learners" that include sensory, graphic, and interactive scaffolding; oral language protocols, which include many cooperative learning strategies, some of which are mentioned in Teacher Notes; and using graphic organizers with emphasis on lighter or heavier scaffolding. An example is as follows: Oral Language Protocols provide structured routines to allow students opportunities to practice and acquire academic language. Several structures are provided with an explanation of ways to incorporate them, including Turn and Talk, Simultaneous Round Table, Rally Coach, Talking Chips, Numbered Heads Together, and Take a Stand. Ways to adapt the lessons or suggestions to incorporate them are not included within lessons, units, or summaries. Different cultural names and situations are represented in the materials, e.g., Sorah, John, Alberto, Beth, and Pedro. The materials avoid pronouns, referencing a role instead, e.g., the banker, a biologist, a scientist, the soccer team. "A Guide to Supporting English Learners" provides cooperative learning and grouping strategies which can be used with all students. However, there are very few strategies mentioned in the instructional materials.
There are no directions or examples for teachers in the materials to adapt the lessons, or suggestions on when and how to incorporate them. For example: Unit 8, Statistics, Lesson 1 includes the following suggestion: "This is a good opportunity to have students work in pairs in order to explain why a question is/is not a statistical question and to listen to other students explain their reasoning as well." The instructional materials reviewed for Match Fishtank Grade 6 integrate technology, including interactive tools, virtual manipulatives/objects, and dynamic mathematics software, in ways that engage students in the Mathematical Practices (MPs). Teachers and students have access to math tools and virtual manipulatives within a given activity or task, when appropriate. These include GeoGebra, Desmos, and other independent resources. For example: Unit 5, Numerical and Algebraic Expressions, Lesson 6 uses the following technology: students use a Desmos applet to match expressions to verbal statements. Unit 7, Geometry, Lesson 14 uses the following technology: students have opportunities to use the GeoGebra applet to create three-dimensional shapes that open up to reveal the net. Digital materials reviewed for Match Fishtank Grade 6 are included as part of the core materials. They are web-based and compatible with multiple internet browsers, e.g., Internet Explorer, Firefox, Google Chrome, Safari, etc. In addition, materials are "platform neutral," i.e., they are compatible with multiple operating systems such as Windows and Apple and are not proprietary to any single platform. Materials allow for the use of tablets and mobile devices including iPads, laptops, Chromebooks, MacBooks, and other devices that connect to the internet with an applicable browser.
AV-EAST 2020
Affine Varieties: Embeddings, Automorphisms, Structure and Topology (AV-EAST-2020)
Dedicated to the 70th anniversary of Shulim Kaliman
July 27-31, 2020. The Euler International Mathematical Institute, Saint-Petersburg, Russia. See contacts.

Ivan Arzhantsev (Moscow), Gene Freudenburg (Western Michigan), Frank Kutzschebauch (Bern)
Alisa Chistopolskaya, Sergey Gaifullin, Alexander Perepechko, Anton Shafarevich, Yulia Zaitseva

Affine algebraic geometry studies algebraic subvarieties of complex affine space $\mathbb{C}^n$. It emerged as an independent subject at the appearance of three celebrated results in the decade of the 1970s: the topological characterization of the affine plane given by Ramanujam (1971), the Abhyankar-Moh-Suzuki Theorem (1975), and the Cancellation Theorem of Fujita, Miyanishi and Sugie (1979). A major contributor to this field in the following generation is Shulim Kaliman, whose work this conference is intended to honor on the occasion of his 70th birthday. Among his many achievements, several of Kaliman's results are now fundamental to our understanding of $\mathbb{C}^3$: he showed that polynomials with general $\mathbb{C}^2$-fibers are variables (2002), that every free $(\mathbb{C},+)$ action on $\mathbb{C}^3$ is a translation (2004), and he contributed to the solution of the linearization problem for $(\mathbb{C},\times)$ actions on $\mathbb{C}^3$ (1997). Along with algebraic geometry, Kaliman has made significant contributions to the theory of several complex variables and to symplectic geometry (the "Kaliman modification"); he has created many synergies between these areas of mathematics.

The conference will feature 10-12 prominent plenary speakers from among Kaliman's research colleagues, such as Mikhail Zaidenberg, with additional talks that include younger researchers. Kaliman's work draws from an impressive range of tools, not only in algebra and geometry but also in topology and analysis. The title of the conference reflects this: the embeddings, automorphisms, structure and topology of affine varieties are themes running throughout Kaliman's work. The talks will focus on current trends within these themes, and will not only bring together active researchers in the field but will also seek to engage students and early career researchers in the study of affine algebraic geometry.

Daniel Daigle (Ottawa) - TBA
Adrien Dubouloz (Dijon) - TBA
Hubert Flenner (Bochum) - TBC
Franc Forstnerič (Ljubljana) - TBA
Leonid Makar-Limanov (Rehovot) - TBA
Masayoshi Miyanishi (Osaka) - TBA
Lucy Moser-Jauslin (Dijon) - TBA
Karol Palka (Warsaw) - TBA
Vladimir Popov (Moscow) - TBA
Peter Russell (Montreal) - TBA
Immanuel van Santen (Basel) - TBA
Mikhail Zaidenberg (Grenoble) - TBA

The registration will open soon; please contact us by e-mail with any questions.

The conference will take place at EIMI. The main accommodation is the Andersen Hotel, 4A Chapygina street. Other options will be added later.

Contact: [email protected], Pesochnaya nab. 10, St. Petersburg 197022, Russia
Acta Oceanologica Sinica, 2020, 39(11): 69-81. doi: 10.1007/s13131-020-1676-z

Citation: Yongfeng Qi, Chenjing Shang, Huabin Mao, Chunhua Qiu, Changrong Liang, Linghui Yu, Jiancheng Yu, Xiaodong Shang. Spatial structure of turbulent mixing of an anticyclonic mesoscale eddy in the northern South China Sea[J]. Acta Oceanologica Sinica, 2020, 39(11): 69-81. doi: 10.1007/s13131-020-1676-z

Spatial structure of turbulent mixing of an anticyclonic mesoscale eddy in the northern South China Sea

Yongfeng Qi 1,†, Chenjing Shang 2,†, Huabin Mao 1,3, Chunhua Qiu 4, Changrong Liang 1, Linghui Yu 1, Jiancheng Yu 5, Xiaodong Shang 1

1. State Key Laboratory of Tropical Oceanography, South China Sea Institute of Oceanology, Chinese Academy of Sciences, Guangzhou 510301, China
2. Shenzhen Key Laboratory of Marine Bioresources and Eco-environmental Science, College of Life Science and Oceanography, Shenzhen University, Shenzhen 518060, China
3. Ocean College, Zhejiang University, Zhoushan 316021, China
4. The Center for Coastal Ocean Science and Technology, School of Marine Sciences, Sun Yat-sen University, Guangzhou 510275, China
5. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China

Funds: The National Key R&D Plan of China under contract Nos 2017YFC0305904, 2017YFC0305804 and 2016YFC1401404; the National Natural Science Foundation of China under contract Nos 41876023, 41630970, 41806037, 41706137 and 41806033; the Guangdong Science and Technology Project under contract Nos 2019A1515111044, 2018A0303130047 and 2017A030310332; the Guangzhou Science and Technology Project under contract No. 201707020037; the Natural Science Foundation of Shenzhen University under contract No. 2019078; the Dedicated Fund for Promoting High-quality Economic Development in Guangdong Province (Marine Economic Development Project) under contract No. GDOE[2019]A03; the Independent Research Project Program of State Key Laboratory of Tropical Oceanography under contract Nos LTOZZ1902 and LTO1909.

Corresponding author: E-mail: [email protected]

Received: 2020-06-23. Accepted: 2020-08-10. Available online: 2020-12-28.

Abstract: Upper turbulent mixing in the interior and surrounding areas of an anticyclonic eddy in the northern South China Sea (SCS) was estimated from underwater glider data (May 2015) in the present study, using the Gregg-Henyey-Polzin parameterization and the Thorpe-scale method. The observations revealed a clear asymmetrical spatial pattern of turbulent mixing in the anticyclonic eddy area. Enhanced diffusivity (on the order of $10^{-3}$ m$^2$/s) was found at the posterior edge of the anticyclonic mesoscale eddy; on the anterior side, diffusivity was one order of magnitude lower on average. This asymmetrical pattern was highly correlated with the eddy kinetic energy. Higher shear variance on the posterior side, which is conducive to the triggering of shear instability, may be the main mechanism for the elevated diffusivity. In addition, the generation and growth of sub-mesoscale motions that are fed by mesoscale eddies on their posterior side may also promote the occurrence of strong mixing in the studied region.
The results of this study help improve our knowledge regarding turbulent mixing in the northern SCS.

Keywords: mesoscale eddy, turbulent mixing, South China Sea, GHP parameterization, Thorpe-scale method

†These authors contributed equally to this work.
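As background on the two methods named in the abstract, their standard textbook forms (not equations reproduced from this paper) are the Thorpe-scale dissipation estimate and the Osborn relation:

$$\varepsilon \approx 0.64\, L_T^2 N^3, \qquad K_\rho = \Gamma\,\frac{\varepsilon}{N^2}, \quad \Gamma \approx 0.2,$$

where $L_T$ is the Thorpe scale obtained by resorting density overturns, $N$ is the buoyancy frequency, and $\Gamma$ is the mixing efficiency; the factor 0.64 follows from the empirical relation $L_O \approx 0.8\, L_T$ between the Ozmidov and Thorpe scales (Osborn, 1980; Dillon, 1982).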
A mesoscale eddy pair southwest of Taiwan and its influence on deep circulation. Journal of Geophysical Research: Oceans, 118(12): 6479–6494. doi: 10.1002/2013JC008994 [71] Zhao Zhongxiang. 2014. Internal tide radiation from the Luzon Strait. Journal of Geophysical Research: Oceans, 119(8): 5434–5448. doi: 10.1002/2014JC010014 [72] Zhao Zhongxiang, Klemas V, Zheng Quanan, et al. 2004. Remote sensing evidence for baroclinic tide origin of internal solitary waves in the northeastern South China Sea. Geophysical Research Letters, 31(6): L06302 [73] Zheng Quanan, Xie Lingling, Zheng Zhiwen, et al. 2017. Progress in research of mesoscale eddies in the South China Sea. Advances in Marine Science, 35(2): 131–158 [74] Zhou Chun, Zhao Wei, Tian Jiwei, et al. 2014. Variability of the deep-water overflow in the Luzon Strait. Journal of Physical Oceanography, 44(11): 2972–2986. doi: 10.1175/JPO-D-14-0113.1 Yongfeng Qi1,†, Chenjing Shang2,†, Huabin Mao1,3, Chunhua Qiu4, Changrong Liang1, Linghui Yu1, Jiancheng Yu5 1. State Key Laboratory of Tropical Oceanography, South China Sea Institute of Oceanology, Chinese Academy of Sciences, Guangzhou 510301, China 2. Shenzhen Key Laboratory of Marine Bioresources and Eco-environmental Science, College of Life Science and Oceanography, Shenzhen University, Shenzhen 518060, China 3. Ocean College, Zhejiang University, Zhoushan 316021, China 4. The Center for Coastal Ocean Science and Technology, School of Marine Sciences, Sun Yat-sen University, Guangzhou 510275, China 5. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China Abstract: Upper turbulent mixing in the interior and surrounding areas of an anticyclonic eddy in the northern South China Sea (SCS) was estimated from underwater glider data (May 2015) in the present study, using the Gregg-Henyey-Polzin parameterization and the Thorpe-scale method. The observations revealed a clear asymmetrical spatial pattern of turbulent mixing in the anticyclonic eddy area. Enhanced diffusivity (on the order of 10–3 m2/s) was found at the posterior edge of the anticyclonic mesoscale eddy; on the anterior side, diffusivity was one order of magnitude lower on average. This asymmetrical pattern was highly correlated with the eddy kinetic energy. Higher shear variance on the posterior side, which is conducive to the triggering of shear instability, may be the main mechanism for the elevated diffusivity. In addition, the generation and growth of sub-mesoscale motions that are fed by mesoscale eddies on their posterior side may also promote the occurrence of strong mixing in the studied region. The results of this study help improve our knowledge regarding turbulent mixing in the northern SCS. The South China Sea (SCS) is the largest semi-enclosed marginal sea in the northwestern Pacific Ocean. Its large-scale currents are driven by the East Asian monsoon (Xie et al., 2003).
It has been reported that the SCS experiences a range of multiscale dynamical processes including wind- and density-driven circulation (Qu, 2000; Wang et al., 2011), strong internal waves (Alford et al., 2015; Huang et al., 2016), enhanced turbulent mixing (Tian et al., 2009; Liang et al., 2017), and energetic mesoscale eddies (Wang et al., 2003; Zhang et al., 2013; Qiu et al., 2019b). Among these processes, mesoscale eddies with strong kinetic energies play an important role in the dynamics across a range of scales (Chelton et al., 2011) and are a key transport mechanism of oceanic material (Zhang et al., 2014). Mesoscale eddies in the northern SCS have received much attention in the past few decades; their activity is evident in both hydrographic datasets and satellite sea-level anomaly data (Li et al., 1998; Li and Pohlmann, 2002; Yuan et al., 2007; Chow et al., 2008; Wang et al., 2003, 2008; Chen et al., 2011; Chu et al., 2014; Zhang et al., 2016; Qiu et al., 2019b). Previous studies have examined eddy structures, eddy life cycles (their origination, movement, development, and decay), and the associated transport of energy and matter in the northern SCS (Wang et al., 2005; Nan et al., 2015; Zhang et al., 2016; Zheng et al., 2017; Qiu et al., 2019a). The prevailing dynamical paradigm is that oceanic eddies are generated through hydrodynamic instabilities of ocean currents, releasing the available potential and kinetic energy built up by large-scale winds and surface buoyancy fluxes (Gill et al., 1974; Ferrari and Wunsch, 2009). The formation of mesoscale eddies in the SCS occurs mainly through shedding from the Kuroshio intrusion and its instability (Li et al., 1998; Jia et al., 2004; Yuan et al., 2006; Zhang et al., 2017) and through the action of the local wind-stress jet (Wang et al., 2008). Eddies from the Luzon Strait usually follow one of three tracks, namely southwestward to Hainan Island, westward to the 1 000-m isobath, or southward (Nan et al., 2011b). A proportion of eddies can drift southwestward along the continental slope for more than a thousand kilometers and last for several months after their formation from a Kuroshio intrusion (Chen et al., 2011; Nan et al., 2011b; He et al., 2018). During their propagation, mesoscale eddies can modulate basin-scale circulation (Yang and Liu, 2003), affect small-scale internal waves (Xie et al., 2015), induce cross-shelf flow (Wang et al., 2018), and evoke sub-mesoscale motions (Zhang et al., 2016). In addition to mesoscale eddy processes, turbulent mixing in the northern SCS has received much attention in the last twenty years. The northern SCS is a hotspot of turbulent mixing (St. Laurent, 2008); it is reported that turbulent mixing rates in the northern SCS can be two orders of magnitude higher than those in the Pacific Ocean (Tian et al., 2009; Klymak et al., 2011). The elevated turbulent mixing in the SCS drives water exchange between the SCS and the Pacific (Zhou et al., 2014) and plays a key role in driving the SCS circulation (Qu et al., 2006). The breaking of internal waves (e.g., internal solitary waves, internal tides, and near-inertial internal waves) is considered the dominant factor driving the high-level mixing in the northern SCS (Yang et al., 2016; Bai et al., 2019). Most previous studies of turbulent mixing in the SCS were based on widely spaced station measurements, with station separations reaching dozens of kilometers.
However, with such low spatial resolution, it is difficult to depict the spatial characteristics of mixing associated with mesoscale motions (such as mesoscale eddies). Much attention has been paid to mesoscale eddies and turbulent mixing individually; however, the influence of mesoscale eddies on turbulent mixing in the SCS has only recently received attention. Several mechanisms by which mesoscale eddies influence turbulent mixing have been reported. Anticyclonic and cyclonic eddies have different effects on turbulent mixing in the northern SCS, with the former strengthening and the latter weakening mixing (Yang et al., 2014). Potential causes of the enhanced mixing within anticyclonic eddies include the breaking of strong internal tides radiating from the Luzon Strait and the breaking of near-inertial waves radiating from the eddy itself (Zhang et al., 2016), the eddy-reinforced downward propagation of near-inertial waves (Yang et al., 2014), and the near-inertial waves generated by the interaction between mesoscale eddies and the unique bottom topography (Sun et al., 2016). Recently, Yang et al. (2017) found that turbulent mixing in the surface mixed layer at the periphery of anticyclonic eddies was several times stronger than that in the eddy center; the more energetic sub-mesoscale motions in the periphery were a key factor producing this spatial pattern of mixing. Mesoscale eddies in the SCS can dissipate effectively over complex rough topography, and the generation of sub-mesoscale motions and lee waves are two pathways for the transfer of mesoscale eddy energy down to small dissipation scales (Yang et al., 2019). Although some researchers have studied the influence of mesoscale eddies on turbulent mixing in the SCS, more observations, especially high-spatial-resolution observations, are needed to examine the spatial structure of turbulent mixing within mesoscale eddies and to identify the effects of physical processes such as sub-mesoscale motions. Here, based on glider measurements obtained in May 2015, we present a high-spatial-resolution (less than 4 km) assessment of the spatial structure of turbulent mixing within and surrounding an anticyclonic eddy in the northern SCS. 2. Data and methods 2.1. Data The primary data source was a Chinese underwater glider (Sea-Wing) deployed in the northern SCS (Fig. 1a). The glider passed through the center of an anticyclonic mesoscale eddy on May 18, 2015. The glider was released at 21°N, 119°E on April 1, 2015, traveled southwestward for more than 700 km, and was finally retrieved at 18°N, 114°E on June 1, 2015 (indicated by the black line in Fig. 1a). Temperature, salinity, and pressure were measured by a Seabird CTD installed on the glider. The glider profiled the water column to a depth of 1 000 m and captured 205 vertical temperature and salinity profiles. Figure 1. Trajectories of the anticyclonic eddy and underwater glider. a. Sea level anomaly (SLA) data for May 18, 2015. The red line indicates the track of the studied anticyclonic eddy. The black line shows the track of the underwater glider. The studied anticyclonic eddy is marked with a blue dashed line. The locations of the eddy center and glider on May 3, May 8, May 13, May 18, May 23, and May 28 are indicated. b. Absolute dynamic topography (ADT) on March 1, 2015. The red line indicates the track of the studied anticyclonic eddy. The yellow dashed line in a and the white dashed line in b show an ADT of 118 cm.
The bathymetry of the South China Sea is shown with the 200 m, 1 000 m, 2 000 m, and 3 000 m isobaths overlaid in a and b. To examine the surface characteristics of the studied anticyclonic eddy, satellite altimeter-based sea level anomaly (SLA) and absolute dynamic topography (ADT) data distributed by the Copernicus Marine Environment Monitoring Service (CMEMS, http://marine.copernicus.eu) were obtained. The SLA dataset merged observations from different altimetry satellites (Jason-3, Sentinel-3A, HY-2A, Saral/AltiKa, Cryosat-2, Jason-2, Jason-1, T/P, ENVISAT, GFO, and ERS1/2) and was geophysically and meteorologically corrected for tides, ionospheric effects, and atmospheric pressure; the gridded spatial and temporal resolutions are (1/4)° and one day, respectively. The SLA data represent the difference between the ADT and the mean dynamic topography (MDT). The specific product used was SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047, and a full description of the dataset is available at http://cmems-resources.cls.fr/documents/PUM/CMEMS-SL-PUM-008-032-051.pdf. The global (1/12)° reanalysis product of the Hybrid Coordinate Ocean Model (HYCOM), available at http://hycom.org/dataserver/glb-reanalysis, was used to investigate the surface velocity accompanying the mesoscale eddy (Section 4). This product assimilates multiple observational datasets, including satellite altimeter and sea-surface temperature (SST) data and in-situ T-S profiles from different instruments (e.g., CTDs, XBTs, and Argo floats). The HYCOM simulations had 40 vertical layers with daily archives and included three-dimensional velocity and T-S fields. It was previously reported that the HYCOM product performs well in simulating eddies in the northern SCS (Park and Farmer, 2013; Zhang et al., 2017). The Aviso eddy atlas (https://www.aviso.altimetry.fr) was adopted to display the variation in the physical fields of the eddy. This dataset contains a range of physical data, including eddy radius, eddy amplitude, and eddy rotation speed. The data-processing method for this dataset is described in detail by Chelton et al. (2011). Historical temperature and salinity data from the Argo dataset (http://argo.ucsd.edu) were also utilized to identify the origin of the eddy. Sea surface wind data from a blended wind dataset (Zhang et al., 2006) were used to exclude the effect of surface wind fields on mixing. The blended wind data combined multiple satellite observations and were used to fill any data gaps in both time and space (http://www.ncdc.noaa.gov/data-access/marineocean-data/blended-global/blended-sea-winds). 2.2. Gregg-Henyey-Polzin parameterization The Gregg-Henyey-Polzin (GHP) parameterization is one of the most widely used methods of quantifying ocean turbulence from CTD measurements (Gregg et al., 2003; Kunze et al., 2006). The GHP scaling is based on internal wave–wave interaction theory and was first developed by Henyey et al. (1986). We employed the GHP parameterization to quantify the diapycnal diffusivities from the measurements of the CTD installed on the glider. The GHP parameterization is expressed as follows (in the strain-based form of Kunze et al., 2006): $K_{\rho}=K_{0}\dfrac{\left\langle \xi_{z}^{2}\right\rangle^{2}}{{}_{\mathrm{GM}}\left\langle \xi_{z}^{2}\right\rangle^{2}}\,h_{1}(R_{w})\,j(f/N)$, with $h_{1}(R_{w})=\dfrac{3(R_{w}+1)}{2\sqrt{2}\,R_{w}\sqrt{R_{w}-1}}$ and $j(f/N)=\dfrac{f\,\mathrm{arccosh}(N/f)}{f_{30}\,\mathrm{arccosh}(N_{0}/f_{30})}$, where K0=5×10–6 m2/s, $\left\langle \xi_{z}^{2}\right\rangle$ represents the fine-scale internal wave strain variance inferred from observations, and ${}_{\mathrm{GM}}\left\langle \xi_{z}^{2}\right\rangle$ is the strain variance inferred from the Garrett and Munk (GM) spectrum (Garrett and Munk, 1972, 1975).
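For illustration, the following minimal Python sketch evaluates the GHP diffusivity in the Kunze et al. (2006) strain-based form given above. It is not the processing code used in this study; the function names and the example strain-variance values are hypothetical.

```python
import numpy as np

K0 = 5e-6          # background diffusivity K_0, m^2/s
N0 = 5.2e-3        # GM reference buoyancy frequency N_0, s^-1
OMEGA = 7.292e-5   # Earth's rotation rate, rad/s

def coriolis(lat_deg):
    """Coriolis frequency f at a given latitude (rad/s)."""
    return 2.0 * OMEGA * np.sin(np.deg2rad(lat_deg))

def ghp_diffusivity(strain_var, strain_var_gm, N, lat_deg, Rw=7.0):
    """Strain-based GHP diapycnal diffusivity K_rho (m^2/s)."""
    f = abs(coriolis(lat_deg))
    f30 = abs(coriolis(30.0))
    # shear/strain-ratio correction h1(Rw)
    h1 = 3.0 * (Rw + 1.0) / (2.0 * np.sqrt(2.0) * Rw * np.sqrt(Rw - 1.0))
    # latitude/stratification correction j(f/N); requires N > f
    j = (f * np.arccosh(N / f)) / (f30 * np.arccosh(N0 / f30))
    return K0 * (strain_var / strain_var_gm) ** 2 * h1 * j

# hypothetical example: one 300-m segment near 19.9 deg N with N = 2e-3 s^-1
print(ghp_diffusivity(strain_var=0.1, strain_var_gm=0.05, N=2e-3, lat_deg=19.9))
```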
In the GM model, an open-ocean internal wave field is assumed, with a fixed buoyancy frequency (N0=5.2×10–3 s–1) at a latitude of 30°. In the expression above, f and N are the Coriolis and buoyancy frequencies, respectively, and Rw is the shear/strain variance ratio, which was set to 7 as suggested by Kunze et al. (2006) and as used by Yang et al. (2016) to quantify the mixing in the SCS. To quantify the strain $\left\langle \xi_{z}^{2}\right\rangle$, the glider profiles were first separated into half-overlapping 300-m-long segments, starting from the bottom and excluding the data in the surface mixed layer. The internal wave strain was estimated from the buoyancy frequency as $\xi_{z}=({N}^{2}-\overline{{N}^{2}})/\overline{{N}^{2}}$, where $\overline{{N}^{2}}$ is the mean value based on quadratic fitting to each buoyancy frequency segment. The strain variance was obtained as $\left\langle \xi_{z}^{2}\right\rangle=\displaystyle\int\nolimits_{\mathrm{min}(k_{z})}^{\mathrm{max}(k_{z})}S[\xi_{z}](k_{z})\,\mathrm{d}k_{z}$. For the strain variance integration, the minimum integrated wavenumber was set to 0.042 rad/m, corresponding to a vertical wavelength of λz=150 m; this was chosen because lower wavenumbers might be influenced by strong background stratification in the pycnocline (Kunze et al., 2006; Jing and Wu, 2013). The upper bound was set to 0.419 rad/m, corresponding to a vertical wavelength of λz=15 m. The GM strain variance was computed over the same wavenumber band as follows: ${}_{\mathrm{GM}}\left\langle \xi_{z}^{2}\right\rangle=\dfrac{{\text π}E_{0}bj_{*}}{2}\displaystyle\int\nolimits_{\mathrm{min}(k_{z})}^{\mathrm{max}(k_{z})}\dfrac{k_{z}^{2}}{(k_{z}+k_{*})^{2}}\,\mathrm{d}k_{z}$, where E0=6.3×10–5 is the dimensionless energy level; b=1 300 m is the scale depth of the thermocline; $j_{*}$=3 is the reference mode number; and $k_{*}=\left({\text π}{j}_{*}N\right)/\left(b{N}_{0}\right)$ is the reference wavenumber. The GHP scaling was initially developed for the open ocean, and its applicability in marginal seas should be further examined due to its potential limitations (Polzin et al., 2014). To improve the reliability of the results, we also used the Thorpe-scale method to estimate the diffusivity. 2.3. Thorpe-scale method Based on the well-established relationship between the Ozmidov scale ($L_{\mathrm{O}}=\sqrt{\varepsilon/N^{3}}$, where $\varepsilon$ is the dissipation rate) and the Thorpe scale (LT; Dillon, 1982), and on Osborn's relationship between the dissipation rate and the diapycnal diffusivity ($K_{\rho}=\varGamma\varepsilon/N^{2}$, where $\varGamma$ is the mixing efficiency, selected as 0.2) (Osborn, 1980), the diapycnal diffusivity can be related to LT as follows: $K_{\rho}=\varGamma\left(0.8L_{T}\right)^{2}N$, where the factor 0.8 comes from taking $L_{\mathrm{O}}=0.8L_{T}$ (Dillon, 1982). Here, LT is calculated as the RMS of the displacements of water parcels within an overturn, and the buoyancy frequency N is evaluated using the gradient of the reordered density profile. The displacement is defined as the depth difference between a measured potential temperature profile and its reordered version (Mater et al., 2015). In this study, we applied this method to estimate the diffusivity and compared it with the GHP results to improve the reliability of the results (a minimal code sketch of this estimate is given below). 3.1. Water mass characteristics of the anticyclonic eddy and its origin The glider transited directly through the center of the eddy (Fig. 1). To identify the origin of the anticyclonic eddy, historical temperature and salinity data obtained from the Argo data center for the Kuroshio Current in the western Pacific (19°–22°N, 121°–122.5°E) were compared with the in-situ T/S data.
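A minimal sketch of the Thorpe-scale estimate described in Section 2.3 is given below. It is illustrative only, not the authors' code; the sign conventions, array layout, and names are assumptions.

```python
import numpy as np

GAMMA = 0.2   # mixing efficiency (Osborn, 1980)

def thorpe_diffusivity(z, theta, N):
    """Thorpe-scale diapycnal diffusivity for one overturning patch.

    z     : depth (m), increasing downward, uniform spacing assumed
    theta : measured potential temperature over the patch
    N     : buoyancy frequency of the reordered profile (s^-1)
    """
    order = np.argsort(theta)[::-1]             # sort to a stable warm-over-cold profile
    displacement = z[order] - z                 # Thorpe displacement of each parcel
    L_T = np.sqrt(np.mean(displacement ** 2))   # Thorpe scale = RMS displacement
    return GAMMA * (0.8 * L_T) ** 2 * N         # K_rho = Gamma * (0.8 * L_T)^2 * N
```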
The historical profile of the Kuroshio Current was derived by averaging 654 data profiles. First, we separated the observed glider data into the following three categories (Fig. 2): (1) the region within the eddy (115.5°–117°E); (2) the left region outside the eddy (114°–115.5°E); and (3) the right region outside the eddy (117°–118.5°E). The left region outside the eddy corresponded to the anterior area of the eddy's movement, and the right region outside the eddy corresponded to its posterior area. When the water mass characteristics of the three regions are compared, a notable feature is that the upper-layer water within the eddy differed from that in the regions outside the eddy (both left and right), indicating different water origins. Figure 2. Mean T-S diagrams of water masses within and outside of the studied eddy (with contours of potential density $\sigma_{\theta}$ in kg/m3 overlaid). The blue, red, and yellow lines show data for the water forming the within-eddy (115.5°–117°E), outside-eddy left (114°–115.5°E), and outside-eddy right (117°–118.5°E) sections, respectively. The black dashed line shows data for the Kuroshio Current water based on historical Argo T-S profiles within the black boxes of the inset figure. The T-S characteristics in each region of the eddy were then compared with those of the Kuroshio Current (Fig. 2). For the Kuroshio Current, the T-S curve showed a reversed "S" shape, indicating that the water was warmer and saltier in the upper layer (<300 m) but colder and fresher in the intermediate layer (300–1 000 m); the upper-layer salinity maximum and intermediate-layer salinity minimum reached 34.87 psu and 34.22 psu, respectively. For the northern SCS water recorded by the glider (the left and right regions outside the eddy), the water was cooler and fresher in the upper layer but warmer and saltier in the intermediate layer; the upper-layer salinity maximum and intermediate-layer salinity minimum reached 34.68 psu and 34.26 psu, respectively. Based on their similar water mass characteristics, the upper-layer water in the anticyclonic eddy probably originated from the Kuroshio Current. To confirm the origin of the eddy, the movement track was derived from the SLA and ADT datasets. Following the identification criteria and tracking algorithm proposed by Chelton et al. (2011), the track of the studied eddy is shown in Fig. 1b; the eddy emerged southwest of Taiwan Island and, after its formation, rotated in the nearby area for approximately one month before drifting southwestward along the continental slope. The similar water mass characteristics of the eddy and the Kuroshio Current water indicate that there was no notable exchange between the eddy and the northern SCS water during its southwestward movement, although the traveled distance spanned four degrees of latitude and crossed the complex bottom topography of the Dongsha Islands area (Fig. 1). The ADT map shows that, before its formation, the eddy was shed from the Kuroshio intrusion. Figure 1b shows the ADT for the Luzon Strait on March 1, 2015. The white line indicates an ADT of 118 cm, suggesting that the eddy originated from the Kuroshio Current. The shedding of eddies from the Kuroshio intrusion has also been observed by other researchers (Caruso et al., 2006; Jia and Chassignet, 2011; Nan et al., 2011a; Hu et al., 2012; Guo et al., 2013; Zhang et al., 2017). Recently, Zhang et al.
(2017) reported that the barotropic instability of the Kuroshio Loop Current constitutes the primary generation mechanism for eddy shedding from the Kuroshio Current to the northern SCS. 3.2. Variation of the physical field of the anticyclonic eddy The eddy continuously weakened as it moved southwestward. Based on the physical field data (Aviso eddy atlas data) for the period between May 3 and May 18 (Fig. 3), the radius of the eddy decreased by half, from 140 km to 70 km, with the main change occurring between May 12 and May 14. The eddy rotational speed decreased from 32 cm/s on May 3 to 27.2 cm/s on May 13 and then increased to approximately 29.2 cm/s on May 18. At the same time, the amplitude of the eddy decreased from 19 cm on May 3 to 11 cm on May 18. These physical field variations indicate that both the kinetic energy and the potential energy of the eddy were weakening during this period. Energy dissipation is one of the main mechanisms driving the variation of the physical field characteristics of eddies (Zhang et al., 2016); accordingly, we focus here on the mixing processes and the diffusivity of the eddy (Section 3.4). Figure 3. Variations in the physical field of the studied anticyclonic eddy, including the eddy radius (a), rotational speed (b), and amplitude (c). 3.3. Structure of the anticyclonic eddy Matching the underwater glider track with the SLA satellite data shows that the glider cut through the eddy between May 13 and May 23, 2015 (Fig. 1). On May 18, the anticyclonic eddy was centered at 19.72°N, 116.13°E, when it had a radius (R) of approximately 137 km (Fig. 1a). Vertical profiles of potential temperature, salinity, potential density, and the baroclinic geostrophic velocities along the glider track are shown in Fig. 4. Here, the baroclinic geostrophic current vg at depth z was calculated from the thermal wind relationship (Qiu et al., 2019b): $v_{\mathrm{g\text{-}glider}}=v_{0}-\dfrac{g}{f\rho_{0}}\displaystyle\int\nolimits_{z_{0}}^{z}\dfrac{\partial\rho}{\partial s}\,\mathrm{d}z$ and $v_{\mathrm{g}}=\dfrac{v_{\mathrm{g\text{-}glider}}}{\cos\alpha}$, where $v_{\mathrm{g\text{-}glider}}$ is the geostrophic velocity perpendicular to the glider path, α is the angle between the glider path and the line connecting the glider location and the eddy center, s is the along-track distance, ρ0 is the reference density (1 025 kg/m3), ρ is the potential density of seawater, and f is the local Coriolis parameter. As such, the geostrophic velocities were obtained by integrating the thermal wind relationship from a reference depth (z0) to the calculation depth (z). The reference depth (z0) was set to 1 000 m, where the velocity v0 was assumed to be zero. Figure 4. Vertical profiles of physical fields of the eddy. Potential temperature (a), salinity (b), potential density along the glider track (c), and baroclinic geostrophic velocity across the track (d). The white dashed lines indicate the eddy center derived from SLA data. The positive and negative values in d represent northward and southward velocities, respectively. These calculations showed that a trough existed in the contours of temperature, salinity, and potential density; depressed contours were notable between 100 m and 300 m, with the depressed center being consistent with the position of the altimeter-derived eddy center (the white dashed line in Fig. 4).
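A minimal sketch of this thermal-wind integration, together with the fine-structure shear variance used later in the discussion of Fig. 10, is given below. It is illustrative only; the array layout and function names are assumptions, not the authors' code.

```python
import numpy as np

G = 9.81        # gravitational acceleration, m/s^2
RHO0 = 1025.0   # reference density rho_0, kg/m^3

def geostrophic_velocity(rho, z, s, f, alpha, v0=0.0):
    """Cross-track geostrophic velocity from the thermal-wind relation.

    rho   : potential density, shape (nz, ns); row 0 is the reference depth z0
    z     : depth (m), shape (nz,), starting at z0 = 1000 m where v = v0
    s     : along-track distance (m), shape (ns,)
    f     : local Coriolis parameter (s^-1)
    alpha : angle between glider path and glider-to-eddy-center line (rad)
    """
    drho_ds = np.gradient(rho, s, axis=1)   # horizontal density gradient
    dz = np.gradient(z)
    # cumulative integral from the reference level along each profile
    vg_glider = v0 - (G / (f * RHO0)) * np.cumsum(drho_ds * dz[:, None], axis=0)
    return vg_glider / np.cos(alpha)        # correct for the path orientation

def shear_variance(u, v, z):
    """Fine-structure shear variance S^2 = (du/dz)^2 + (dv/dz)^2."""
    return np.gradient(u, z, axis=0) ** 2 + np.gradient(v, z, axis=0) ** 2
```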
This indicates that the altimeter-derived location of the eddy center was reliable. The vertical profiles of salinity display a salty region between 100 m and 200 m, where salinity ≥34.8 psu (Fig. 4b), similar to the Kuroshio Current waters (Qu, 2000). The salty water originated from the Kuroshio Current, with Hu et al. (2012) pointing out that anticyclonic eddies carry high-salinity subsurface waters from the Northwest Pacific into the northern SCS (Fig. 2). The vertical distribution of baroclinic geostrophic velocities is shown in Fig. 4d. To the left of the eddy center, the velocity was positive; to the right, it was negative. The velocity increased in magnitude away from the eddy center, reaching maxima of approximately –0.8 m/s at 117°E and 0.8 m/s at 115.3°E. The locations of maximum velocity corresponded to the edge of the anticyclonic eddy; outside of the eddy edge, velocities began to drop. Geostrophic speeds above 20 cm/s extended to a depth of almost 600 m. In the northern South China Sea, it has been confirmed that the axes of mesoscale eddies tilt strongly southwestward from the surface to the bottom (Zhang et al., 2016); the tilting distance reaches up to ~100 km for both anticyclonic and cyclonic eddies from the surface to a depth of 2 000 m. A similar tilt was observed in this study, most notably in the geostrophic velocities (Fig. 4d): in the center of the eddy, the zero-velocity contour tilted westward by around 0.3° of longitude from the surface to 1 000 m, consistent with the results reported by Zhang et al. (2016). 3.4. Diapycnal diffusivities 3.4.1. GHP parameterization results A section of the inferred diffusivity ($K_{\rho}$) based on the GHP scaling is shown in Fig. 5. The diffusivity values ranged from a minimum of less than 10–5 m2/s to a maximum of >10–2.5 m2/s and displayed a marked spatial asymmetry. Strong mixing, with diffusivity values on the order of 10–3 m2/s, was observed on the posterior side of the anticyclonic mesoscale eddy. On the anterior side, diffusivity was much weaker, with a maximum of approximately 10–4 m2/s. Figure 5. Estimates of the GHP parameterization. a. 3D view of diffusivity, Kρ, along the glider track. The colors indicate SLA values on May 18, 2015; the black line shows the edge of the anticyclonic eddy, identified following Chelton et al. (2011); and the red dashed line indicates the eddy center. b. Diffusivities averaged by depth for the eddy's anterior side (blue) and posterior side (red). The dashed lines indicate the averaged values. c. Variation in diffusivity by longitude. The vertical red and black dashed lines indicate the eddy center and edge, respectively. To further identify the spatial asymmetry in the eddy diffusivity, the diffusivity profiles were sorted and averaged for the posterior side of the motion (between 116.5° and 117.7°E) and the anterior side (between 115° and 116°E) (Fig. 5b). The resulting composites show significant differences: from 100 m to 800 m, the depth range effectively influenced by the eddy, the diffusivity on the posterior side of the eddy was approximately seven times higher than that on the anterior side. We also averaged the diffusivity in each profile and then displayed these averaged diffusivities by longitude (Fig. 5c). The maximum diffusivity was recorded at the posterior edge of the eddy, gradually weakening eastward towards the Luzon Strait and rapidly decreasing southwestward. Using the GHP parameterization and Argo data, Yang et al.
(2014) reported that depth-averaged diffusivity values have a linearly decreasing trend southwestwards from the Luzon Strait towards Hainan Island. We obtained a similar result, whereby diffusivity near the Luzon Strait was generally observed to be higher than that in the southwestern region of the glider track (Fig. 5c). However, due to the influence of the anticyclonic eddy, the diffusivity did not show the clear linear trend reported by Yang et al. (2014). 3.4.2. Thorpe-scale results An example of an overturn with a depth range from 475 m to 505 m was identified at 19.86°N, 117.031°E in the northern SCS, where the water depth is about 1 800 m (Fig. 6). The potential temperature in this part of the water column was nearly uniform in the vertical. The sorted and original potential temperature profiles show an obvious overturn, with cooler water overlying warmer water. Figure 6. An example of an overturn identified at 19.86°N, 117.031°E in the northern SCS. a. Vertical profiles of potential temperature. b. An overturn detected from the original and sorted potential temperature profiles in the depth range indicated by the orange box in a. Based on the detected overturns, the spatial pattern of diffusivity between 100 m and 800 m was reconstructed from the Thorpe-scale estimates (Fig. 7). A small number of overturns were detected between 115.5° and 118°E, and few overturns existed west of 115.5°E. A disadvantage of the Thorpe-scale method is that overturns cannot be easily detected in the upper layer where stratification is strong. As shown in Fig. 7, few overturns were detected in the upper layer at depths shallower than 300 m. Abundant overturns were detected where the glider operated near the Luzon Strait (longitude >118°E), which is probably related to the active internal waves there. We averaged the diffusivity in each profile and then displayed these averaged diffusivities by longitude (Fig. 8). Before the averaged diffusivity was obtained, locations without detected overturns in Fig. 7 were assigned the background diffusivity of 1×10–5 m2/s. The results show that the diffusivity at the posterior edge was greater than that at the anterior edge. Figure 7. The spatial pattern of diffusivity reconstructed from Thorpe-scale estimates. Figure 8. Variation in diffusivity derived with the Thorpe-scale method by longitude. The vertical red and black dashed lines indicate the eddy center and edge, respectively. A comparison of the GHP scaling results (Fig. 5c) and the Thorpe-scale results (Fig. 8) shows common characteristics: the similar turbulence levels and spatial patterns suggest that the elevated diffusivities on the posterior side of the eddy found in this study are reliable. In the upper ocean, mixing is mainly driven by wind-stress stirring and buoyancy flux (Shay and Gregg, 1986). The action of Reynolds stress on wind-induced vertical shear is limited to the upper layer (12.5-m depth) under low wind conditions (Callaghan et al., 2014). Furthermore, the effects of buoyancy flux only extend to the bottom of the mixed layer (Shang et al., 2017). In our study, we considered data that were collected below 100 m, and wind speeds were in the range of 1–8 m/s during the observation period. Wind data were obtained from the blended wind dataset (Zhang et al., 2006); given these low wind conditions, there is no evidence to suggest that the elevated diffusivities below 100 m (Fig. 5a) resulted from wind stress or buoyancy flux.
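The longitude-dependent averages in Figs 5c and 8 follow a simple recipe: fill profiles without overturn detections with the background value, depth-average each profile, and bin by longitude. A minimal sketch is given below (illustrative only; the function name and bin layout are assumptions).

```python
import numpy as np

BACKGROUND_K = 1e-5   # background diffusivity, m^2/s

def mean_diffusivity_by_longitude(lon, K, lon_bins):
    """Depth-averaged diffusivity per profile, bin-averaged by longitude.

    lon : longitude of each profile, shape (n_prof,)
    K   : diffusivity, shape (n_prof, nz), NaN where no overturn was detected
    """
    K_filled = np.where(np.isnan(K), BACKGROUND_K, K)   # fill gaps with background
    prof_mean = K_filled.mean(axis=1)                   # depth-average each profile
    idx = np.digitize(lon, lon_bins)
    return np.array([prof_mean[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(lon_bins))])
```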
We tracked the eddy kinetic energy (EKE) corresponding to the moving path of the anticyclonic mesoscale eddy and compared this with the bin-averaged diffusivities in relation to longitude (Fig. 9a). Our analysis showed that the diffusivity varied simultaneously with the EKE, with the coefficient of determination (R2) of the linear fit being 0.17. This indicates that the pattern of observed diffusivity was likely caused by the eddy and was related to the surface velocity and EKE. Figure 9. Comparison of eddy kinetic energy, diffusivity, and surface velocity. a. Eddy kinetic energy (EKE) corresponding to the moving path of the studied anticyclonic mesoscale eddy and the bin-averaged diffusivities by longitude. R2 is the coefficient of determination of the linear fit. b. Bin-averaged surface velocity by longitude. The velocity was derived from HYCOM. It is known that internal tides provide one of the major dynamical pathways for the transfer of large-scale energy in the ocean to small-scale turbulent dissipation and mixing. Numerical simulations and observations have confirmed that the Luzon Strait is a key generation region of baroclinic internal waves (Zhao et al., 2004; Zhao, 2014; Alford et al., 2015; Wang et al., 2016). After their generation, low-mode internal waves can propagate into the northern SCS. The studied area is a region with active internal waves, the breaking of which may provide a particularly potent energy source for ocean mixing (Gregg et al., 2003). Previous studies have confirmed that, in the South China Sea, the mixing weakens with distance from the Luzon Strait (Wang et al., 2016; Yang et al., 2016). As shown in Figs 5 and 8, diffusivity tended towards larger values close to the Luzon Strait, whereas the elevated diffusivities between 116° and 117.5°E cannot be attributed simply to the internal-wave energy source but, rather, to the mesoscale eddy. Numerical simulations and satellite remote sensing observations have confirmed that interactions between internal waves and mesoscale eddies are active in this area (Xie et al., 2015). It is necessary to understand the dynamical mechanism of the enhanced mixing at the posterior edge of the anticyclonic eddy (i.e., 116° to 117.5°E). As such, the fine-structure shear variances were analyzed using the geostrophic velocities inferred from the glider data (Fig. 10a), where the shear variance was calculated as $S^{2}=\left(\partial u/\partial z\right)^{2}+\left(\partial v/\partial z\right)^{2}$. Elevated shear variance was found in the region affected by the anticyclonic eddy, with the maximum shear variance at a depth of about 100 m (Fig. 10a). We also compared the shear variance among three regions, namely the posterior edge (between 116.8° and 117.2°E), the anterior edge (between 114.8° and 115.2°E), and the peripheral region (between 118° and 118.4°E). The maximum S2 values at the posterior edge reached 3×10–6 s–2, larger than those in the anterior and peripheral regions (Fig. 10b). Furthermore, the S2 values at the posterior edge were larger than those in the other two regions throughout the water column. The higher shear variance on the posterior side, which is prone to triggering shear instabilities, may be the main mechanism for the corresponding elevated diffusivity. Figure 10. Fine-structure shear variances derived from geostrophic velocities. a. Section display. b. Averaged shear variance in the posterior edge region (red), anterior edge region (blue), and the peripheral region (yellow).
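As an illustration of the linear-fit diagnostic quoted for Fig. 9a, a minimal sketch is given below. The paper does not state whether the fit used the diffusivity or its logarithm, so the plain-variable version is shown; the function name is hypothetical.

```python
import numpy as np

def linear_fit_r2(x, y):
    """Least-squares fit y = a*x + b and its coefficient of determination R^2."""
    a, b = np.polyfit(x, y, 1)
    residual = y - (a * x + b)
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```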
The surface currents in the posterior edge region of the eddy were larger than those in the other two regions (Fig. 9b). In some cases, the velocity at the base of the eddy was distinct from, or even showed the opposite trend to, that within the eddy, so strengthened shear can be generated in areas of stronger surface currents (Liang and Thurnherr, 2011; Zhang et al., 2014). The higher shears on the posterior side can also be caused by the eddy tilt. Because eddy propagation in the northern SCS follows the regional sloping bottom topography, the topographic β effect, which exerts a stronger influence on the water column near the bottom in a stratified ocean, is likely the cause of the observed vertically tilting structures (Zhang et al., 2016). As shown in Fig. 4d, the westward tilt of the eddy reached 0.3° of longitude from the surface to 1 000 m, which means that the higher shears on the posterior side were partly contributed by the eddy tilt and the background velocity field. The higher shear on the posterior side may have led to the breaking of internal waves, which could have caused the relatively high diffusivity values in the thermocline. Alternatively, internal waves may become trapped by the higher shear on the posterior side of eddies, thereby transferring momentum to the deeper ocean and facilitating mixing (Booker and Bretherton, 1967; Zhang et al., 2014). Sub-mesoscale motion is a common physical process in the ocean and plays an important role in energy cascades from large-scale to small-scale motion (Fu and Ferrari, 2008; Chelton et al., 2011). In Fig. 4d, two velocity cores with maximum absolute geostrophic velocity values (approximately 0.8 m/s) were found at the edges of the eddy. In some cases, strong turbulent mixing is found at the edge of an eddy and is controlled by the shear generated by the geostrophic velocity (Liu et al., 2017). Also, with such high velocities, strong horizontal shear generates instability and favors sub-mesoscale eddy formation (Capet et al., 2008; Thomas and Ferrari, 2008). Sub-mesoscale motion has been previously reported at the posterior edge of anticyclonic eddies. For example, Qiu et al. (2019a) analyzed drifter buoy data and found clear signals of sub-mesoscale motion at the posterior edge of an anticyclonic eddy in the northern SCS but not at the anterior edge. Mesoscale eddies can feed the generation and growth of sub-mesoscale motions; for example, Zhang et al. (2016) reported that sub-mesoscale motions account for more than 50% of the dissipation of anticyclonic eddies in the northern SCS. In our study, sub-mesoscale signals could be seen in the density anomalies (Fig. 11), which exhibited significant oscillations. This spatial variability may be caused by ocean convection or other sub-mesoscale structures (Su et al., 2016; Qiu et al., 2019a). As shown in Fig. 11, the density anomaly at the posterior edge of the eddy had more significant oscillations than that at the anterior edge. Here the density anomaly is calculated as $\rho'=\rho-\rho_{\mathrm{b}}(z)$, where $\rho_{\mathrm{b}}(z)$ is a depth-dependent background density defined as the horizontal mean over the domain of interest between 114.5° and 118°E. To better validate the existence of sub-mesoscale motions in the glider data, a 50-km high-pass filter was applied to the density anomaly at each depth level to eliminate the effects of larger-scale fluctuations (a minimal sketch of this step and the following one is given below).
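The sketch below illustrates the high-pass filtering just described together with the standard-deviation diagnostic described next. It is illustrative only: a spectral high-pass with uniform profile spacing is assumed, since the paper does not specify the filter implementation, and the names are hypothetical.

```python
import numpy as np

def submesoscale_std(dist_km, rho_anom, cutoff_km=50.0):
    """Per-profile std of the high-pass-filtered density anomaly.

    dist_km  : along-track distance of the profiles (km), uniform spacing assumed
    rho_anom : density anomaly, shape (nz, n_prof), depth levels 60-300 m
    """
    d = dist_km[1] - dist_km[0]
    k = np.fft.rfftfreq(rho_anom.shape[1], d=d)   # along-track wavenumber, cycles/km
    spec = np.fft.rfft(rho_anom, axis=1)
    spec[:, k <= 1.0 / cutoff_km] = 0.0           # remove scales longer than 50 km
    hp = np.fft.irfft(spec, n=rho_anom.shape[1], axis=1)
    return hp.std(axis=0)                         # std over the 60-300 m depth range
```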
We then calculated, for each profile, the longitude-dependent standard deviation (std) of the high-pass-filtered data between depths of 60 m and 300 m (Fig. 12), because the vertical influence of the studied mesoscale eddy was mainly concentrated in that depth range. It is supposed that the std of the density anomaly can indicate the richness of sub-mesoscale motion to some extent. Although the high-pass-filtered data may include information on internal waves and tides, the variation in the std of the density anomaly on the scale of kilometers caused by internal waves and tides is probably not significant. Figure 12 shows that the std of the density anomaly at the posterior edge was higher than that at the anterior edge, indicating that sub-mesoscale motions were more abundant there. It should be noted that the energy cascade from mesoscale to sub-mesoscale eddies cannot be captured by the GHP parameterization, which is based on wave–wave interaction theory, but can be captured by the Thorpe-scale method. As shown by the Thorpe-scale results in Fig. 8, the mixing at the posterior edge was enhanced, which indicates the effect of sub-mesoscale motions on the enhancement of vertical mixing. Based on these observations, it is reasonable to suggest that downward energy transfer to sub-mesoscale motion plays an important role in the dissipation of oceanic eddies, which may be a common phenomenon in the ocean. Figure 11. Contour map of density anomalies ρ'. The white line shows the core of the eddy. The anterior and posterior edges are marked with pink boxes. Figure 12. Longitude-dependent standard deviation of the density anomaly between 60 m and 300 m. In this study, we estimated the spatial structure of turbulent mixing of an anticyclonic eddy in the northern SCS from underwater glider data (May 2015) using the Gregg-Henyey-Polzin parameterization and the Thorpe-scale method. Highly enhanced diffusivity rates on the order of 10–3 m2/s were found at the posterior edge of the studied anticyclonic eddy. In the anterior edge region, diffusivity was one order of magnitude lower, consistent with the previously reported background values for turbulent mixing in the northern SCS (Zhang et al., 2016; Shang et al., 2017). Potential mechanisms for these high diffusivity values include the effects of higher background vertical shear variance due to the anticyclonic eddy, internal waves trapped by shear, and sub-mesoscale motion fed by the mesoscale eddy. In summary, the diffusivity data were highly asymmetrical in the mesoscale eddy area, with a non-uniform horizontal and vertical structure. Although mixing processes in the northern SCS have been widely studied, previous work has focused on large-scale data with resolutions greater than dozens of kilometers (Tian et al., 2009; Liang et al., 2017). Here, we identified mixing patterns based on a spatial resolution of approximately 4 km, which was able to characterize sub-mesoscale motion. We believe that the observed high-resolution mixing patterns within the anticyclonic eddy will help to improve our knowledge of the turbulent mixing process in the northern SCS. The Chinese underwater glider was provided by the State Key Laboratory of Robotics, Shenyang Institute of Automation. We acknowledge CMEMS for sea-level anomaly data, HYCOM for surface velocity data, the Aviso eddy atlas for physical field data, the Blended sea wind dataset for sea-surface wind data, and the Argo dataset for historical temperature and salinity data.
The numerical simulation was supported by the High Performance Computing Division and the HPC managers Wei Zhou and Dandan Sui of the South China Sea Institute of Oceanology.
Journal of Geophysical Research: Ocean, 119(8): 5434–5448. [74] Zhou Chun, Zhao Wei, Tian Jiwei, et al. 2014. Variability of the deep-water overflow in the Luzon Strait. Journal of Physical Oceanography, 44(11): 2972–2986. followshare Copyright © Acta Oceanologica Sinica Govern Body: China Association for Science and Technology Sponsor: Chinese Society for Oceanography E-mail: [email protected] Website: http://www.aosocean.com/ Supported by: Beijing Renhe Information Technology Co. Ltd DownLoad: Full-Size Img PowerPoint
CommonCrawl
Analog/RF Performance of T-Shape Gate Dual-Source Tunnel Field-Effect Transistor

Shupeng Chen (ORCID: orcid.org/0000-0002-2897-7089), Hongxia Liu, Shulong Wang, Wei Li, Xing Wang & Lu Zhao. Nanoscale Research Letters, volume 13, Article number: 321 (2018).

In this paper, a silicon-based T-shape gate dual-source tunnel field-effect transistor (TGTFET) is proposed and investigated by TCAD simulation. As a contrastive study, the structure, characteristics, and analog/RF performance of TGTFET, LTFET, and UTFET are discussed. The gate overlap introduced by the T-shape gate can enhance the efficiency of the tunneling junction. The dual-source regions in TGTFET increase the on-state current (ION) by offering a doubled tunneling junction area. In order to further improve the device performance, an n+ pocket is introduced in TGTFET to further increase the band-to-band tunneling rate. Simulation results reveal that the TGTFET's ION and switching ratio (ION/IOFF) reach 81 μA/μm and 6.7 × 10^10 at a gate-to-source voltage (Vg) of 1 V. The average subthreshold swing of TGTFET (SSavg, from 0 to 0.5 V Vg) reaches 51.5 mV/dec, and the minimum subthreshold swing of TGTFET (SSmin, at 0.1 V Vg) reaches 24.4 mV/dec. Moreover, it is found that TGTFET has strong robustness against the drain-induced barrier lowering (DIBL) effect. The effects of doping concentration, geometric dimensions, and applied voltage on device performance are investigated in order to establish a TGTFET design guideline. Furthermore, the transconductance (gm), output conductance (gds), gate-to-source capacitance (Cgs), gate-to-drain capacitance (Cgd), cut-off frequency (fT), and gain bandwidth (GBW) of TGTFET reach 232 μS/μm, 214 μS/μm, 0.7 fF/μm, 3.7 fF/μm, 11.9 GHz, and 2.3 GHz at a drain-to-source voltage (Vd) of 0.5 V, respectively. Benefiting from its structural advantages, TGTFET obtains better DC/AC characteristics than UTFET and LTFET. In conclusion, this considerably good performance makes TGTFET a very attractive choice for the next generation of low-power and analog/RF applications.

The scaling down of metal-oxide-semiconductor field-effect transistors (MOSFETs) brings significant improvements in integrated circuit (IC) power consumption, switching characteristics, circuit function, and IC density [1, 2]. However, the irreconcilable contradiction between the scaling of the supply voltage and the reduction of the off-state leakage current (IOFF) will finally result in unacceptably high power consumption [3]. At the same time, reliability degradation caused by short-channel effects (SCEs) becomes more and more serious [4, 5]. In order to address these problems, it is effective to reduce the subthreshold swing (SS) and the supply voltage of the devices. Based on the band-to-band tunneling mechanism, tunnel field-effect transistors (TFETs) reach a subthreshold swing (SS) smaller than 60 mV/dec and can effectively reduce the supply voltage [6,7,8,9,10]. Moreover, due to the existence of the tunneling junction near the source, a TFET usually has a small gate-to-source capacitance (Cgs) [1, 11], which is beneficial to the device frequency performance. Recent studies show that the TFET seems to be a promising candidate for future low-power applications [12,13,14,15,16] and analog/RF applications [17,18,19].
However, due to the small effective tunneling area, the limited tunneling current is an inherent disadvantage of the conventional P-I-N TFET, which leads to a low on-state operating current (ION). In order to improve TFET performance, many new structures have been proposed in recent years [20,21,22,23,24,25]. Benefiting from the recessed gate, the L-shape tunnel field-effect transistor (LTFET) [23, 24] and the U-shape tunnel field-effect transistor (UTFET) [25] have been proposed to obtain a high ION with a compact device structure. However, there is still much room for improvement in LTFET and UTFET, and more effort needs to be spent on studying the analog/RF performance of these devices.

In this paper, a T-shape gate dual-source tunnel field-effect transistor (TGTFET) is put forward and studied by TCAD simulation. The designed TGTFET doubles the tunneling junction area compared with LTFET and UTFET. The gate overlap introduced by the designed T-shape gate can enhance the band-to-band tunneling rate (BBT rate). The simulation results show that the proposed TGTFET gains a higher ION (8.1 × 10^−5 A/μm at Vd = 1 V) than the LTFET and UTFET under the same conditions. Both the SSmin (at Vg = 0.1 V) and the SSavg (0~0.5 V Vg) of TGTFET are lower than 60 mV/dec (24.4 mV/dec and 51.5 mV/dec, respectively). TGTFET gains better input/output characteristics (gm = 232 μS/μm, gds = 214 μS/μm) than the UTFET and LTFET. Moreover, the capacitance characteristics of TGTFET, UTFET, and LTFET are discussed in detail. Finally, TGTFET gains better analog/RF performance (fT = 11.9 GHz and GBW = 2.3 GHz) compared to UTFET and LTFET. As a result, a TGTFET with considerably good performance can be obtained.

The structure of this paper is as follows: the "Methods" section includes the description of the structure and the parameters of TGTFET, LTFET [23, 24], and UTFET [25], as well as the TCAD simulation methods. The "Results and Discussion" section includes the description of the simulation results; in this section, the mechanism, characteristics, and analog/RF performance of TGTFET are studied and compared with the LTFET and UTFET, and the influence of the device parameters on TGTFET is analyzed in detail. The "Conclusions" section gives a conclusion of this paper.

Methods

The structure of the T-shape gate dual-source tunnel field-effect transistor (TGTFET) is illustrated in Fig. 1. The shape of the gate is similar to the letter "T" (green region). The dual-source regions are located on the two sides of the gate (sapphire regions). Two n+ pockets (yellow regions) are inserted to increase the channel tunneling rate [20,21,22]. The n+ drain is placed at the bottom of the channel. Therefore, the T-shaped gate overlaps the n+ pockets in both the vertical and lateral directions. In this way, the electric field at the top of the tunneling junction can be increased. The electric field enhancement causes the energy band to bend more steeply. Finally, the electron tunneling rate is enhanced due to the corner electric field enhancement [26].

Fig. 1 Schematic of the proposed T-shape gate dual-source tunnel field-effect transistor (TGTFET)

Figure 2 shows the device structures of LTFET [23, 24], UTFET [25], and TGTFET. The gate overlap can help to enhance the tunneling efficiency of TGTFET. The dual-source regions in TGTFET can double the tunneling junction area compared with LTFET and UTFET.
Fig. 2 Comparison of a the proposed TGTFET, b UTFET, and c LTFET

The parameters of the silicon-based TGTFET, UTFET, and LTFET used in the simulations are as follows: Hs = 30 nm (height of the source region), Hg = 40 nm (height of the recessed gate), Wg = 6 nm (width of the gate region), Hc = 15 nm (height of the channel region), Tp = 5 nm (thickness of the n+ pocket), ϕ = 4.33 eV (gate work function), Tox = 2 nm (thickness of the HfO2 gate dielectric), NS = 1 × 10^20 cm^−3 (p+ source doping concentration), ND = 1 × 10^19 cm^−3 (n+ drain doping concentration), Nsub = 1 × 10^17 cm^−3 (p− substrate doping concentration), and NP = 5 × 10^18 cm^−3 (n+ pocket doping concentration). The width coefficient in the simulation is set by default to 1 μm.

Simulations of TGTFET, UTFET, and LTFET are carried out in the Silvaco Atlas TCAD tools. The non-local BTBT model is introduced in this simulation to take the spatial variation of the energy bands into account, which improves the accuracy of the BTBT tunneling process. The Lombardi mobility model is considered to make the channel mobility more accurate (by considering surface scattering, including the transverse field and doping concentration). Fermi statistics and the band-gap narrowing model are taken into account to fit the effect of the highly doped regions. The Shockley–Read–Hall recombination model is taken into account in this paper, too.

Results and Discussion

Device Mechanism and DC Characteristics with Different Parameters

Figure 3a shows the transfer characteristics of the TGTFET with and without the gate overlap. With the additional gate overlap, ION increases from 7.5 × 10^−5 to 8.1 × 10^−5 A/μm at Vg = Vd = 1 V. Figure 3b shows the transfer characteristic curves of TGTFET, UTFET, and LTFET. In order to make the comparison more accurate, the simulation models and geometric dimensions of these three devices are set to be identical. As a result, the TGTFET has about a twofold increase in ION compared with LTFET and UTFET, as shown in Fig. 3b. SSmin of TGTFET is 24.4 mV/dec at Vg = 0.1 V, and SSavg is 51.5 mV/dec for 0 V < Vg < 0.5 V. The switching ratios (ION/IOFF) are 6.7 × 10^10 at Vg = Vd = 1 V and 6.5 × 10^8 at Vg = Vd = 0.5 V.

Fig. 3 Simulated a transfer characteristics of TGTFET with/without gate overlap and b transfer characteristics of TGTFET, UTFET, and LTFET

Figure 4a, b shows the BBT rate of TGTFET with and without a 5-nm gate overlap. From Fig. 4c, we can clearly see that the device with a 5-nm gate overlap has a wider electron tunneling area under the device surface, which can lead to the increase of ION.

Fig. 4 Simulated BBT electron tunneling rate diagrams of a the device without gate overlap, b the device with 5-nm gate overlap, and c the BBT electron tunneling rate of the two devices, at 1 nm below the device surface; Vg = Vd = 1 V

Figure 5a, b shows the 3D diagrams of the electric fields of TGTFET with and without gate overlap. Two electric field peaks appear in the TGTFET with a 5-nm gate overlap, as shown in the dashed circle in Fig. 5a. No electric field peak appears in Fig. 5b, owing to the absence of the gate overlap. Figure 5c shows the energy band structure under the surface of the device. The inset in Fig. 5c shows the cut-line location. With the gate overlap, a larger tunneling window can be obtained. Thus, a higher BBT rate and ION can be achieved.

Fig. 5 3D schematic diagrams of the electric fields of the device a with overlap and b without overlap; simulated c energy band diagrams from source to pocket region (1 nm below the oxide interface)
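As a side note, subthreshold-swing figures such as the SSmin and SSavg quoted above are typically extracted numerically from the simulated transfer curves. The following is a minimal sketch of that extraction; the transfer curve used here is a hypothetical placeholder, not the simulated TGTFET characteristic:

```python
import numpy as np

# Hypothetical Id(Vg) samples; replace with the exported TCAD transfer curve.
vg = np.linspace(0.0, 1.0, 101)            # gate-to-source voltage (V)
id_ = 1e-13 * 10.0 ** (vg / 0.050)         # placeholder: ideal 50 mV/dec device (A/um)

# Point subthreshold swing: SS = dVg / d(log10 Id), reported in mV/dec.
ss = 1e3 / np.gradient(np.log10(id_), vg)
print("SS_min   = %.1f mV/dec" % ss.min())

# Average SS over 0 V <= Vg <= 0.5 V, matching the SSavg definition in the text.
i0, i1 = np.searchsorted(vg, 0.0), np.searchsorted(vg, 0.5)
ss_avg = 1e3 * (vg[i1] - vg[i0]) / (np.log10(id_[i1]) - np.log10(id_[i0]))
print("SS_avg   = %.1f mV/dec" % ss_avg)

# Switching ratio between the on state (Vg = 1 V) and the off state (Vg = 0 V).
print("Ion/Ioff = %.1e" % (id_[-1] / id_[0]))
```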
Figure 6 shows the effects of the n+ pocket on the performance of the TGTFET. IOFF increases rapidly with the increase of the n+ pocket doping concentration, as shown in Fig. 6a. A lower SS and a greater ION can be obtained by decreasing the thickness of the n+ pocket (Tp) from 7 to 3 nm when NP = 5 × 10^18 cm^−3, as shown in Fig. 6b. At the same time, no significant subthreshold current is noted in Fig. 6b. It can be confirmed from Fig. 6a that a relatively low doping concentration of the n+ pocket will help to suppress the subthreshold current.

Fig. 6 Simulated drain currents with different n+ pocket a concentrations and b thicknesses at Vd = 1 V

The impact of the gate height (Hg) and the channel thickness (Hc) is shown in Fig. 7a, b, separately. A small ION and SS improvement appears as Hg increases. This is because, when Hg = 35 nm, there is an obvious energy band hump on the on-state current path, which becomes a certain obstacle to the lucky electrons (electrons which have passed the tunneling junction), as shown in Fig. 7c, and this can result in an ION decrease. When Hg increases, the energy band hump is weakened, which causes the ION and SS improvement. A slight ION improvement is obtained with decreasing Hc, as shown in Fig. 7b. However, a severe degradation of the subthreshold characteristic can be observed when Hc decreases to 5 nm. This can be explained by the increasing subthreshold tunneling current at the corner of the n+ pocket, as shown in Fig. 8. Figure 8a shows the obvious off-state band-to-band tunneling phenomenon when Hc = 5 nm, while Fig. 8b shows the IOFF current density when Hc = 5 nm.

Fig. 7 Simulated transfer characteristics of TGTFET with a different Hg, b different Hc, and c the conduction band hump on the current path

Fig. 8 Simulated diagrams of the off-state a BTBT electron tunneling rate and b current density when Hc = 5 nm

As shown in Fig. 9, the influence of the drain-to-source voltage (Vd) is also taken into account in this paper. For Vd < 0.6 V, ION increases obviously with increasing Vd, as shown in Fig. 9a. This is explained by the fact that the potential of the p-channel grows slowly in response to the increasing Vd, which results in a decreasing resistance of the p-channel. For Vd > 1.8 V, shown in Fig. 9b, ION almost does not increase with increasing Vd, but IOFF increases considerably. This is because the subthreshold tunneling current at the corner of the n+ pocket increases rapidly with increasing Vd. Finally, for 0.6 V < Vd < 1.8 V, TGTFET exhibits good and stable performance. As a result, TGTFET is robust to drain-induced barrier lowering (DIBL) and exhibits a good and stable performance over a large applied-voltage dynamic range.

Fig. 9 Simulated drain currents for a Vd ≤ 1 V and b Vd ≥ 1 V

Analog/RF Performance of TGTFET, UTFET, and LTFET

Figure 10 shows the transfer characteristics and transconductance curves of TGTFET, UTFET, and LTFET at Vd = 0.5 V. The transconductance (gm) can be obtained from the first derivative of the transfer characteristic curve, as shown in Eq. (1) [27,28,29]:

$$ g_{\mathrm{m}} = dI_{\mathrm{ds}} / dV_{\mathrm{gs}} \quad (1) $$

Fig. 10 a Transfer characteristics and b transconductance curves of TGTFET, UTFET, and LTFET at Vd = 0.5 V

As a result, the maximum transconductance of TGTFET (232 μS/μm) is about two times larger than that of UTFET (120 μS/μm) and LTFET (110 μS/μm), as shown in Fig. 10. This benefits from the current gain contributed by the dual source and the gate overlap. Figure 11 shows the output characteristics, output conductance (gds), and output impedance (Ro) curves of the TGTFET, UTFET, and LTFET.
As shown in Fig. 11a, it can be clearly seen that the output current of the devices increases with the increase of Vd, but when Vd reaches above 0.6 V, the output current tends to saturate. Through observation, it is easy to find that the output current of TGTFET is two times larger than that of UTFET and LTFET. Figure 11b shows the output conductance (gds) and output impedance (Ro) curves of the TGTFET, UTFET, and LTFET. The gds can be obtained through differentiation of the output current, as shown in Eq. (2) [27, 29], while Ro can be expressed as the reciprocal of the output conductance:

$$ g_{\mathrm{ds}} = dI_{\mathrm{ds}} / dV_{\mathrm{ds}} \quad (2) $$

Fig. 11 a Output characteristics, b output conductance (gds), and c output impedance (Ro) curves of the TGTFET, UTFET, and LTFET

Due to its advantage in output current, TGTFET gains the highest gds and the minimum Ro of these three devices. Under a 1-V gate bias condition, TGTFET obtains a maximum gds of 214 μS/μm and a minimum Ro of 4.6 kΩ/μm at Vd = 0.45 V. Under the same gate bias condition, UTFET and LTFET obtain maximum gds of 113 μS/μm and 105 μS/μm, and minimum Ro of 9.0 kΩ/μm and 9.6 kΩ/μm, at Vd = 0.4 V.

Moreover, in Fig. 11, it is not difficult to find that the linear region of the device output characteristics shows a certain nonlinearity. As shown in Fig. 11a, Ro first decreases and then increases with increasing Vd. Some research groups have described the corresponding physical process for this phenomenon [7, 30], but there are still some points that have not been explained clearly. As we know, Ro is determined by the resistance of the channel region and of the tunneling junction. When Vd < 0.4 V, Ro decreases with increasing Vd. Consider the following situation: when Vd = 0 V and Vg = 1 V, none of the lucky electrons can be swept to the drain side, and almost all the electrons are trapped in the channel region by a relatively high drain barrier, as shown in the red dotted frame in Fig. 12a, b. When 0 V < Vd < 0.4 V, with increasing Vd, the drain barrier becomes weaker (as shown in Fig. 12b). Thus, the electrons trapped in the channel region can pass over the drain barrier and then be collected by the drain. This is a thermal excitation process of electrons from channel to drain. Since the tunneling junction has been completely turned on (when Vg = 1 V), the tunneling current is always in a state of excess, and the resistance introduced by the tunneling junction can be ignored. At this time, Ro is determined by the channel resistance, which is in turn decided by the electron thermal excitation process across the drain barrier. Thus, Ro decreases with increasing Vd. When Vd > 0.6 V, the three devices gradually enter the saturation region and Ro becomes larger. This is because, when Vd is large, almost all the electrons that pass through the tunneling junction are swept to the drain side by the relatively high electric field. The tunneling current becomes the limit of the drain current. In this condition, Ro is mainly determined by the tunneling junction. However, the tunneling efficiency cannot increase significantly while Vd is increasing: Vd has a small effect on the energy band structure of the tunneling junction (n+ pocket side), as shown in Fig. 12b. As a result, the tunneling current cannot increase obviously, and there is almost no ION increase with continually increasing Vd (when Vd > 0.6 V), which means the impedance increases. Moreover, when 0.4 V < Vd < 0.6 V, Ro is determined by both the channel resistance and the tunneling junction.
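To make Eqs. (1) and (2) concrete, the sketch below shows how gm, gds, and Ro would typically be extracted from exported I-V sweeps by numerical differentiation; the two curves are hypothetical placeholders, not the simulated device data:

```python
import numpy as np

# Hypothetical exported sweeps; replace with the TCAD I-V data.
vg = np.linspace(0.0, 1.0, 201)                        # Vg sweep at fixed Vd
id_vg = 1e-4 / (1.0 + np.exp(-(vg - 0.5) / 0.08))      # placeholder transfer curve (A/um)

vd = np.linspace(0.0, 1.0, 201)                        # Vd sweep at fixed Vg
id_vd = 2e-4 * np.tanh(vd / 0.3)                       # placeholder output curve (A/um)

# Eq. (1): gm = dIds/dVgs
gm = np.gradient(id_vg, vg)
print("peak gm  = %.0f uS/um" % (1e6 * gm.max()))

# Eq. (2): gds = dIds/dVds, with Ro as its reciprocal
gds = np.gradient(id_vd, vd)
ro = 1.0 / gds
print("peak gds = %.0f uS/um" % (1e6 * gds.max()))
print("min Ro   = %.1f kOhm/um" % (1e-3 * ro.min()))
```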
Fig. 12 a Schematic diagram of the energy band at Vd = 0 V and Vg = 1 V. b Simulation results of the energy band diagram at different Vd biases

It can be concluded from the above analysis that the Ro of a TFET is influenced by both the tunneling process and the channel electron thermal excitation process. The dominant physical mechanism for Ro shifts as Vd varies. As a result, Ro first decreases and then increases, thus causing the nonlinearity of the output characteristics. Incidentally, from the observation of Fig. 11b, it is easy to find that the output impedance of TGTFET is much smaller than that of the UTFET and LTFET. This is due to the better tunneling efficiency, which benefits from the dual-source and lateral gate overlap structure of TGTFET.

Figure 13 shows the energy band structures of TGTFET, UTFET, and LTFET at different applied voltages. The red dotted lines in the inset represent the position at which the energy band is drawn (15 nm below the surface, at half the height of the source region). It can be seen that, as Vd increases from 0.1 to 0.5 V, the band structures of TGTFET, UTFET, and LTFET show an obvious trend of bending. This is because the drain voltage can pull down the electric potential of the tunneling junction near the drain side. This indicates that, for TGTFET, UTFET, and LTFET, the increase of Vd from 0.1 to 0.5 V is beneficial to the tunneling efficiency. However, when Vd > 0.5 V, the change of the energy band with increasing Vd is negligible. This is consistent with the analysis results of Fig. 12b.

Fig. 13 The energy band structures of a TGTFET, b UTFET, and c LTFET at Vg = 1 V

As we know, the gate capacitance (Cgg) of a device can greatly affect the frequency characteristics of integrated circuits. For TGTFET, UTFET, and LTFET, Cgg generally consists of Cgs (gate-to-source capacitance) and Cgd (gate-to-drain capacitance). Therefore, the characteristics of Cgg, Cgs, and Cgd are of great significance for evaluating the frequency characteristics and analog application ability of the devices. Especially for TFETs, the capacitance characteristics are quite different from those of MOSFETs. Because of the existence of the tunneling junction at the source side, a TFET usually has a small Cgs [1, 11]. Therefore, the Cgg of a TFET is mainly determined by Cgd. Figure 14 shows the capacitances of TGTFET, UTFET, and LTFET versus Vg at Vd = 0.5 V and Vd = 0 V, separately.

Fig. 14 Capacitance of TGTFET versus Vg at a Vd = 0 V and b Vd = 0.5 V. Capacitance of UTFET versus Vg at c Vd = 0 V and d Vd = 0.5 V. Capacitance of LTFET versus Vg at e Vd = 0 V and f Vd = 0.5 V

From the observation of Fig. 14a, b, it is easy to find that the Cgs of TGTFET at a 1-V gate voltage is 0.15 fF/μm at Vd = 0 V and 0.7 fF/μm at Vd = 0.5 V, which is far smaller than Cgd (5.8 fF/μm at Vd = 0 V and 3.7 fF/μm at Vd = 0.5 V). Thus, the Cgg of TGTFET is mainly determined by Cgd. When Vd = 0 V, Cgg and Cgd increase rapidly with increasing Vg, as shown in Fig. 14a. This is because, with the increase of Vg, electrons aggregate toward the gate interface in the device channel, which makes the capacitance rise rapidly. When Vd = 0.5 V, Cgd does not increase obviously until Vg increases beyond 0.6 V, as shown in Fig. 14b. This is because, when Vg is low, only a few lucky electrons can pass through the tunneling junction and go into the channel.
Some of these lucky electrons participate in the recombination process, and most of the others are rapidly collected by the drain due to the 0.5-V drain voltage. Therefore, it is very difficult for these lucky electrons to stay in the device channel. However, as Vg increases, the number of lucky electrons increases rapidly. At this point, neither the drain collection nor the electron-hole recombination process can rapidly deplete these lucky electrons. Thus, the electron concentration in the channel increases and the capacitance rises rapidly. As a result, the capacitance characteristic curve tends to shift to the right as Vd increases, as shown in Fig. 14a, b. The above analysis and phenomena are also applicable to UTFET and LTFET, as shown in Fig. 14c–f. In addition, the gate capacitance of UTFET at Vd = 0 V and 0.5 V reaches 6.2 fF/μm and 5.1 fF/μm, respectively, and that of the LTFET reaches 3.4 fF/μm and 2.7 fF/μm, respectively. Since there is no direct overlap between the LTFET's gate and drain, and the distance between the gate and drain is relatively large, LTFET has the best capacitance characteristics and the smallest Cgg. In contrast, there is a direct overlap between the UTFET's gate and drain. Therefore, electrons near the drain side are more easily controlled by the gate, thus resulting in a large Cgg for UTFET. For TGTFET, although the distance between the gate and drain is small, there is a lightly doped channel region which can isolate the gate and drain. Thus, the capacitance of TGTFET is better than that of the UTFET, but slightly inferior to that of the LTFET.

Figure 15 shows the Cgd characteristics of TGTFET, UTFET, and LTFET versus Vd at different Vg. From the observation of Fig. 15a–c, it is not difficult to find that the Cgd characteristics of these three devices are similar. That is, for a fixed Vg, Cgd decreases with the increase of Vd. On the other hand, for a fixed Vd, Cgd increases with the increase of Vg.

Fig. 15 Cgd characteristics of a TGTFET, b UTFET, and c LTFET versus Vd at different Vg

As we know, both the cut-off frequency (fT) and the gain bandwidth (GBW) are criteria for evaluating the frequency characteristics of devices. fT depends on the ratio of gm to Cgg, as shown in Eq. (3) [30, 31]. For a certain DC gain that equals 10, GBW can be expressed by the ratio of gm to Cgd, as shown in Eq. (4) [17]:

$$ f_T = \frac{g_{\mathrm{m}}}{2\pi C_{\mathrm{gs}}\sqrt{1 + 2C_{\mathrm{gd}}/C_{\mathrm{gs}}}} \approx \frac{g_{\mathrm{m}}}{2\pi\left(C_{\mathrm{gs}} + C_{\mathrm{gd}}\right)} = \frac{g_{\mathrm{m}}}{2\pi C_{\mathrm{gg}}} \quad (3) $$

$$ \mathrm{GBW} = g_{\mathrm{m}} / \left(2\pi \cdot 10\, C_{\mathrm{gd}}\right) \quad (4) $$
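As a rough sanity check of Eqs. (3) and (4), one can plug in the values quoted earlier for TGTFET at Vg = 1 V and Vd = 0.5 V (gm = 232 μS/μm, Cgs = 0.7 fF/μm, Cgd = 3.7 fF/μm). Note that the peak fT and GBW reported below occur at whatever bias maximizes gm/Cgg and gm/Cgd, which need not coincide with this single bias point, so the estimate is only an order-of-magnitude check:

```python
import math

# Quoted TGTFET small-signal values (per um of gate width).
gm = 232e-6    # transconductance at Vd = 0.5 V (S/um)
cgs = 0.7e-15  # gate-to-source capacitance at Vg = 1 V, Vd = 0.5 V (F/um)
cgd = 3.7e-15  # gate-to-drain capacitance at Vg = 1 V, Vd = 0.5 V (F/um)
cgg = cgs + cgd

ft = gm / (2.0 * math.pi * cgg)           # Eq. (3): fT ~ gm / (2*pi*Cgg)
gbw = gm / (2.0 * math.pi * 10.0 * cgd)   # Eq. (4): GBW for a DC gain of 10

# Prints roughly 8.4 GHz and 1.0 GHz: below the reported 11.9/2.3 GHz peaks,
# as expected for a single off-peak bias point.
print("fT  ~ %.1f GHz" % (ft / 1e9))
print("GBW ~ %.1f GHz" % (gbw / 1e9))
```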
Figure 16 shows the characteristic curves of the fT and GBW of TGTFET, UTFET, and LTFET. Benefiting from its structural advantages, such as the dual source and the lateral gate overlap introduced by the T-shaped gate, TGTFET obtains the most outstanding frequency characteristics compared with UTFET and LTFET. Under the condition Vd = 0.5 V, the fT and GBW of TGTFET reach maximum values of 11.9 GHz and 2.3 GHz, respectively. Benefiting from the long distance between gate and drain and the absence of gate/drain overlap, LTFET obtains a small Cgg and good frequency characteristics: the fT and GBW of LTFET reach 8.7 GHz and 2.1 GHz, separately. The capacitance characteristics of UTFET are inferior to those of TGTFET and LTFET because of the direct gate/drain overlap. As a result, the maximum fT and GBW of UTFET can only reach 4.1 GHz and 0.5 GHz, separately.

Fig. 16 The characteristic curves of a fT and b GBW of TGTFET, UTFET, and LTFET versus Vg at Vd = 0.5 V

Conclusions

In this paper, a T-shape gate dual-source tunnel field-effect transistor (TGTFET) with good performance is proposed and investigated. The structure, the mechanism, and the influence of the device parameters on the characteristics of TGTFET are discussed. In addition, the characteristics of TGTFET, UTFET, and LTFET are discussed and compared in this paper. The dual-source regions are introduced to double the area of the tunneling junction. The gate overlap and the n+ pockets can obviously enhance the tunneling efficiency of the tunneling junction in TGTFET. Finally, a TGTFET with impressive characteristics (ION = 8.1 × 10^−5 A/μm, ION/IOFF = 6.7 × 10^10, and SSmin = 24.4 mV/dec) is obtained. At the same time, TGTFET is robust to DIBL, which means TGTFET can exhibit a good and stable performance over a large applied-voltage dynamic range. Furthermore, the analog/RF performance of TGTFET is studied and compared with UTFET and LTFET. Key parameters such as the input/output characteristics, capacitance characteristics, GBW, and fT are analyzed. Benefiting from the absence of a direct overlap between the gate and drain, TGTFET obtains a relatively small Cgd and Cgg. Finally, a TGTFET with remarkable frequency characteristics (fT = 11.9 GHz and GBW = 2.3 GHz) is obtained. In conclusion, it is expected that TGTFET can be one of the promising alternatives for the next generation of devices in low-power and analog/RF applications.

Abbreviations

Cgd: Gate-to-drain capacitance
Cgs: Gate-to-source capacitance
fT: Cut-off frequency
GBW: Gain bandwidth
gds: Output conductance
gm: Transconductance
Hc: Height of the channel layer
Hg: Height of the gate electrode
Hs: Height of the source layer
LTFET: L-shape gate tunnel field-effect transistor
ND: Doping concentration of the n+ drain
NP: Doping concentration of the n+ pocket
NS: Doping concentration of the p+ source
Nsub: Doping concentration of the p− substrate
Ro: Output impedance
TGTFET: T-shape gate dual-source tunnel field-effect transistor
Tox: Thickness of the HfO2 gate dielectric
Tp: Thickness of the n+ pocket
UTFET: U-shape gate tunnel field-effect transistor
Vd: Drain-to-source voltage
Vg: Gate-to-source voltage
Wg: Width of the gate electrode

References

[1] Ionescu AM, Riel H. Tunnel field-effect transistors as energy-efficient electronic switches. Nature, 2011. doi: 10.1038/nature10679
[2] Vijayvargiya V, Vishvakarma SK. Effect of drain doping profile on double-gate tunnel field-effect transistor and its influence on device RF performance. IEEE Transactions on Nanotechnology, 2014. doi: 10.1109/TNANO.2014.2336812
[3] Kim D, Lee Y, Cai J, et al. Low power circuit design based on heterojunction tunneling transistors (HETTs). IEEE ISLPED, 2009. doi: 10.1109/TVLSI.2012.2213103
[4] Hiblot G, et al. Accurate boundary condition for short-channel effect compact modeling in MOS devices. IEEE Transactions on Electron Devices, 2015. doi: 10.1109/TED.2014.2368395
[5] Bangsaruntip S, Cohen GM, Majumdar A, et al. Universality of short-channel effects in undoped-body silicon nanowire MOSFETs. IEEE Electron Device Letters, 2010. doi: 10.1109/LED.2010.2052231
[6] Madan J, Chaujar R. Gate drain underlapped-PNIN-GAA-TFET for comprehensively upgraded analog/RF performance. Superlattices and Microstructures, 2017. doi: 10.1016/j.spmi.2016.12.034
[7] Singh G, Amin SI, Anand S, et al. Design of Si0.5Ge0.5 based tunnel field effect transistor and its performance evaluation. Superlattices and Microstructures, 2016. doi: 10.1016/j.spmi.2016.02.027
[8] Huang Q, Huang R, Zhan Z, et al. A novel Si tunnel FET with 36 mV/dec subthreshold slope based on junction depleted-modulation through striped gate configuration. IEEE IEDM, 2012. doi: 10.1109/IEDM.2012.6479005
[9] Avci UE, Young IA. Heterojunction TFET scaling and resonant-TFET for steep subthreshold slope at sub-9nm gate-length. IEEE IEDM, 2013. doi: 10.1109/IEDM.2013.6724559
[10] Choi WY, Park BG, Lee JD, et al. Tunneling field-effect transistors (TFETs) with subthreshold swing (SS) less than 60 mV/dec. IEEE Electron Device Letters, 2007. doi: 10.1109/LED.2007.901273
[11] Appenzeller J, Lin YM, Knoch J, et al. Comparing carbon nanotube transistors - the ideal choice: a novel tunneling device design. IEEE Transactions on Electron Devices, 2005. doi: 10.1109/TED.2005.859654
[12] Villalon A, Le Carval G, Martinie S, et al. Further insights in TFET operation. IEEE Transactions on Electron Devices, 2014. doi: 10.1109/TED.2014.2325600
[13] Nagavarapu V, Jhaveri R, Woo JCS. The tunnel source (PNPN) n-MOSFET: a novel high performance transistor. IEEE Transactions on Electron Devices, 2008. doi: 10.1109/TED.2008.916711
[14] Gupta N, Makosiej A, Anghel C, et al. Ultra-low-power compact TFET flip-flop design for high-performance low-voltage applications. IEEE ISQED, 2016. doi: 10.1109/ISQED.2016.7479184
[15] Gupta N, Makosiej A, Vladimirescu A, et al. 3T-TFET bitcell based TFET-CMOS hybrid SRAM design for ultra-low power applications. DATE, 2016. doi: 10.3850/9783981537079_0462
[16] Chen S, Wang S, Liu H, et al. Symmetric U-shaped gate tunnel field-effect transistor. IEEE Transactions on Electron Devices, 2017. doi: 10.1109/TED.2017.2647809
[17] Chen S, Liu H, Wang S, et al. Analog/RF performance of two tunnel FETs with symmetric structures. Superlattices and Microstructures, 2017. doi: 10.1016/j.spmi.2017.07.013
[18] Li W, Liu H, Wang S, et al. Reduced Miller capacitance in U-shaped channel tunneling FET by introducing heterogeneous gate dielectric. IEEE Electron Device Letters, 2017. doi: 10.1109/LED.2017.2661318
[19] Wang Q, Wang S, Liu H, et al. Analog/RF performance of L- and U-shaped channel tunneling field-effect transistors and their application as digital inverters. Japanese Journal of Applied Physics, 2017. doi: 10.7567/JJAP.56.064102
[20] Abdi DB, Kumar MJ. In-built N+ pocket p-n-p-n tunnel field-effect transistor. IEEE Electron Device Letters, 2014. doi: 10.1109/LED.2014.2362926
[21] Cao W, Yao CJ, Jiao GF, et al. Improvement in reliability of tunneling field-effect transistor with p-n-i-n structure. IEEE Transactions on Electron Devices, 2011. doi: 10.1109/TED.2011.2144987
[22] Mallik A, Chattopadhyay A, Guin S, et al. Impact of a spacer–drain overlap on the characteristics of a silicon tunnel field-effect transistor based on vertical tunneling. IEEE Transactions on Electron Devices, 2013. doi: 10.1109/TED.2013.2237776
[23] Kim SW, Choi WY, Sun MC, et al. Design guideline of Si-based L-shaped tunneling field-effect transistors. Japanese Journal of Applied Physics, 2012. doi: 10.1143/JJAP.51.06FE09
[24] Kim SW, Kim JH, Liu TJK, et al. Demonstration of L-shaped tunnel field-effect transistors. IEEE Transactions on Electron Devices, 2016. doi: 10.1109/TED.2015.2472496
[25] Wang W, Wang PF, Zhang CM, et al. Design of U-shape channel tunnel FETs with SiGe source regions. IEEE Transactions on Electron Devices, 2014. doi: 10.1109/TED.2013.2289075
[26] Morita Y, Mori T, Migita S, et al. Performance enhancement of tunnel field-effect transistors by synthetic electric field effect. IEEE Electron Device Letters, 2014. doi: 10.1109/LED.2014.2323337
[27] Boucart K, Ionescu AM. Length scaling of the double gate tunnel FET with a high-k gate dielectric. Solid-State Electronics, 2007. doi: 10.1016/j.sse.2007.09.014
[28] Narang R, Saxena M, Gupta RS, et al. Linearity and analog performance analysis of double gate tunnel FET: effect of temperature and gate stack. International Journal of VLSI Design & Communication Systems (VLSICS), 2011. doi: 10.1007/978-3-642-22543-7_47
[29] Gupta SK, Baishya S. Analog and RF performance evaluation of dual metal double gate high-k stack (DMDG-HKS) MOSFETs. Journal of Nano- and Electronic Physics, 2013. Available: https://jnep.sumdu.edu.ua/en/component/search/index.php?option=com_content&task=full_article&id=984
[30] Akram MW, Ghosh B. Analog performance of double gate junctionless tunnel field effect transistor. Journal of Semiconductors, 2014. doi: 10.1088/1674-4926/35/7/074001
[31] Mohankumar N, Syamal B, Sarkar CK. Investigation of novel attributes of single halo dual-material double gate MOSFETs for analog/RF applications. Microelectronics Reliability, 2009. doi: 10.1016/j.microrel.2009.06.006

Acknowledgements

In particular, we thank Dr. Wenxing Tian for the discussion and help in the process of writing this manuscript. This research is supported by the National Natural Science Foundation of China (Grant Nos. 61434007 and 61504100), the Foundation for Fundamental Research of China (Grant No. JSZL2016110B003), and the Major Fundamental Research Program of Shaanxi (Grant No. 2017ZDJC-26).

Author affiliation: School of Microelectronics, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices of Education, Xidian University, Xi'an, 710071, China (Shupeng Chen, Hongxia Liu, Shulong Wang, Wei Li, Xing Wang & Lu Zhao).

Authors' contributions: SC put forward the innovative results in this manuscript and completed the simulation and article writing. HL and SW supported the completion of this work and helped with the format modification and detailed discussion. WL, XW, and LZ participated in the format modification and detailed discussion. All authors read and approved the final manuscript. Correspondence to Hongxia Liu or Shulong Wang.

Shupeng Chen was born in 1987. He received his B.S., M.S., and Ph.D. degrees in Microelectronics from Xidian University in 2010, 2013, and 2017, respectively. He joined Xidian University in 2018. His current research interests include advanced CMOS device designs and steep-switching device designs.

Hongxia Liu was born in 1968. She received her B.S., M.S., and Ph.D. degrees in Microelectronics from Northwest University, Xi'an Jiaotong University, and Xidian University, Xi'an, China, in 1990, 1995, and 2002, respectively. She has been a professor of Microelectronics at Xidian University since 2002. Her current research interests include advanced CMOS and ultralow-power device designs and reliability.

Shulong Wang was born in 1983. He received his B.S. in Electronic Information Science and Technology from Xidian University in 2006 and received his M.S.
and Ph.D. degrees in Electronic Science and Technology from Xidian University in 2009 and 2014, respectively. He joined Xidian University in 2014. His current research interests include advanced CMOS device designs and their applications to ultralow-power integrated circuits.

Wei Li was awarded a B.S. degree from the School of Electrical Engineering at Tianjin University of Technology, Tianjin, China, in 2014. He attended Xidian University from 2014 to 2015. Since 2017, he has been working toward the Ph.D. degree at the School of Microelectronics.

Xing Wang was born in 1991. He received his Ph.D. degree in Microelectronics from Xidian University in 2017. He joined Xidian University in 2017. His current research interests include high-k materials and advanced CMOS device designs.

Lu Zhao was born in 1992. She received her B.S. degree in Microelectronics from Xidian University in 2014. She is currently working toward her Ph.D. degree in Microelectronics at Xidian University. Her current research interests include high-k materials and advanced Ge-based CMOS device designs and fabrications.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite as: Chen, S., Liu, H., Wang, S. et al. Analog/RF Performance of T-Shape Gate Dual-Source Tunnel Field-Effect Transistor. Nanoscale Res Lett 13, 321 (2018). https://doi.org/10.1186/s11671-018-2723-y

Keywords: T-shaped gate; Recessed gate; Tunnel field-effect transistor (TFET); Analog/RF performance
Reaction intermediates of MnO2 catalyzed H2O2 decomposition reaction

Manganese dioxide catalyzes the decomposition of hydrogen peroxide to water and oxygen gas. But what are the intermediates in this catalyzed reaction?

Tags: inorganic-chemistry, catalysis, decomposition

Reaction conditions

As Watts et al. have shown, the decomposition products of this Fenton-like reaction strongly depend on the $\mathrm{pH}$ of the solution [1]. If performed under acidic conditions, the reaction generates mostly hydroxyl radicals, but no reductants (which would be the hydroperoxide and superoxide anions). If, conversely, the reaction is held under neutral conditions, Watts et al. have shown that the reaction produces significantly more of the aforementioned reductants.

Do et al. have conducted further research into the reaction mechanism at $\mathrm{pH}=7$ and concluded that the observed kinetics may be approximated as a pseudo-first-order reaction, where the ratio $\ce{[H2O2]}/\ce{[MnO2]}$ is vital to the description of the decomposition rate [2]. They have further shown that reactive intermediates, such as superoxide and hydroperoxide anions, are generated by the reaction. After slightly shifting the pH towards alkaline conditions, the production rates for the reactive anions increased drastically.

Reaction mechanism

Do et al. have meticulously assembled a table with a proposed reaction pathway (the table image is not reproduced here; see ref. [2]). And finally, I quote one of their closing statements [2]:

"The existence of hydroperoxide/superoxide anion implies that the suggested reaction mechanism could be explained by hydrogen peroxide being decomposed, not only directly on the surface of manganese oxide, but also through a propagation reaction involving intermediates such as hydroperoxide/superoxide anion in solution."

[1] Watts R, Sarasa J, Loge F, Teel A. Oxidative and Reductive Pathways in Manganese-Catalyzed Fenton's Reactions. J. Environ. Eng., 131(1), 2005, 158–164.
[2] Do S-H, Batchelor B, Lee H-K, Kong S-H. Hydrogen peroxide decomposition on manganese oxide (pyrolusite): Kinetics, intermediates, and mechanism. Chemosphere, 75(1), 2009, 8–12.

(answer by tschoppi)

I am adding a more recent study to complement the answer given by tschoppi. This paper addresses the kinetics and mechanism of the decomposition of $\ce{H2O2}$ on transition metal oxide surfaces. However, the paper only considers $\ce{ZrO2}$, $\ce{TiO2}$, and $\ce{Y2O3}$. Although $\ce{MnO2}$ is also a transition metal oxide, $\ce{MnO2}$ may be different enough to lead to different pathways or kinetics. The authors even state:

"effects such as solution pH, type of oxide, temperature, and oxide particle size have profound effects on the kinetics and energetics of this type of reactions"

Thus, take the results of the study with a grain of salt. I will focus more on the mechanism part of the paper. According to the paper:

"The kinetic experiments on the decomposition of $\ce{H2O2}$ together with the experiments on $\ce{HO•}$ detection show the existence of an adsorption step prior to decomposition. This type of process is also predicted with the DFT calculations. The decomposition of $\ce{H2O2}$ follows a similar mechanism for the three metal oxides studied. The obtained transition states are largely mediated by hydrogen bonding between $\ce{H2O2}$ and surface $\ce{HO}$ groups. Nevertheless, direct interaction between the oxygen atoms of $\ce{H2O2}$ and the metal atoms present in the oxide was also observed in the geometries of the transition states."
"The formation of two $\ce{HO}$ radicals as the primary product of the decomposition of $\ce{H2O2}$ is confirmed with both the DFT calculations and the experiments. One of these radicals can further abstract a $\ce{H}$ atom initially bound to a surface $\ce{O}$ and form $\ce{H2O}$. The other $\ce{HO}$ radical can adsorb to the surface by forming bonding states with the metal cation."

Claudio M. Lousada, Adam Johannes Johansson, Tore Brinck, and Mats Jonsson. Mechanism of $\ce{H2O2}$ Decomposition on Transition Metal Oxide Surfaces. J. Phys. Chem. C, 2010, 116(17).

(answer by CoffeeIsLife)

Comment: "You said that the statement 'effects such as solution pH, type of oxide, temperature, and oxide particle size have profound effects on the kinetics and energetics of this type of reactions' makes the paper not that credible. Could you please state the reason why you said so? That would be really helpful for me to further understand the problem statement." – Omkar Kedge
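As an aside on the kinetics mentioned in the first answer: a pseudo-first-order decomposition means the peroxide concentration decays as $\ce{[H2O2]}(t) = C_0 e^{-k_{obs} t}$, with the catalyst surface term lumped into $k_{obs}$. Below is a minimal sketch of fitting that model, using made-up illustration values rather than the data of ref. [2]:

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-first-order model: [H2O2](t) = C0 * exp(-k_obs * t).
def pseudo_first_order(t, c0, k_obs):
    return c0 * np.exp(-k_obs * t)

# Hypothetical concentration-vs-time data (mol/L vs minutes).
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])
c = np.array([0.100, 0.082, 0.067, 0.045, 0.020, 0.009])

(c0_fit, k_fit), _ = curve_fit(pseudo_first_order, t, c, p0=(0.1, 0.04))
print("C0 = %.3f mol/L, k_obs = %.3f 1/min, half-life = %.1f min"
      % (c0_fit, k_fit, np.log(2) / k_fit))
```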
DFT and TD-DFT calculation of new thienopyrazine-based small molecules for organic solar cells

Mohamed Bourass, Adil Touimi Benjelloun, Mohammed Benzakour, Mohammed Mcharfi, Mohammed Hamidi, Si Mohamed Bouzzine, and Mohammed Bouachrine. Chemistry Central Journal, 2016, 10:67. Accepted: 20 October 2016.

Six novel organic donor-π-acceptor (D-π-A) molecules used for bulk heterojunction (BHJ) organic solar cells, based on thienopyrazine, were studied by density functional theory (DFT) and time-dependent DFT (TD-DFT) approaches, to shed light on how the order of the π-conjugation influences the performance of the solar cells. The electron-acceptor group was 2-cyanoacrylic acid for all compounds, whereas the electron-donor unit was varied, and its influence was investigated. The TD-DFT method, combined with a hybrid exchange-correlation functional using the Coulomb-attenuating method (CAM-B3LYP) in conjunction with a polarizable continuum model of solvation (PCM) together with a 6-31G(d,p) basis set, was used to predict the excitation energies and the absorption and emission spectra of all molecules. The trend of the calculated HOMO–LUMO gaps compares nicely with the spectral data. In addition, the estimated values of the open-circuit photovoltage (Voc) for these compounds are presented in two cases, with PC60BM and with PC71BM. The study of the structural, electronic, and optical properties of these compounds could help in designing more efficient functional photovoltaic organic materials.

Keywords: π-conjugated molecules; Thienopyrazine derivatives; Organic solar cells; TD-DFT; Optoelectronic properties; Voc (open-circuit voltage)

Organic bulk heterojunction (BHJ) solar cells are considered one of the promising alternatives for renewable energy. This is attributed to their several advantages, namely the possibility of fabricating flexible large-area devices and their low cost compared to other alternatives based on inorganic materials [1, 2]. Generally, organic BHJ solar cells are based on a mixture of an electron-donor material (an organic material) and an electron-acceptor material such as PCBM or its derivatives, and they have been utilized with the aim of harvesting sunlight. Over the past few years, considerable effort has been focused on improving organic solar cell (OSC) performance to achieve power conversion efficiencies (PCE) of 10%. The following strategies have been adopted for this purpose [3–13]: (1) design of new photoactive materials able to increase the efficiency of photoconversion, such as fullerenes and π-conjugated semiconducting polymers; (2) use of functional layers for buffering, charge transport, optical spacing, etc.; and (3) morphological tuning of photoactive films by post-annealing, solvent drying, or processing using additives. After many efforts, organic BHJ solar cells based on a semiconducting polymer (PSC) as the electron donor and PCBM as the electron acceptor showed impressive performance in converting solar energy to electrical energy. Finally, the power conversion efficiency (PCE) was improved to the range of 7–9.2% [14–21] for single-layer PSCs and 10.6% [14] for tandem-structured PSCs. These kinds of polymer-based solar cells have potential applications in next-generation solar cells compared to dye-sensitized solar cells (DSSCs) and inorganic thin-film cells.
On the other hand, considerable research has been directed toward developing efficient small organic molecules for use as semiconductors and improving their performance in organic solar cells (OSCs), with the near-term goal of achieving a PCE comparable to that of polymer solar cells (PSCs) [22–24]. Small-molecule organic semiconductors are more suitable than polymer-based ones for mass production, because the latter suffer from poor reproducibility of the average molecular weight, high dispersity, and difficulties in purification. Recently, small-molecule organic solar cells (SMOSCs) with PCEs exceeding 6% have been reported [25], thus making solution-processed SMOSCs strong competitors to PSCs. This inspires us to develop new low-band-gap small molecules for organic solar cell applications. In order to achieve a high current density in SMOSCs, one needs new donor molecules that can efficiently absorb sunlight in the maximum solar-flux region (500–900 nm) of the solar spectrum, because the energy conversion efficiency of small-molecule organic solar cells is directly tied to the light-harvesting ability of the electron-donor molecules. In addition, to obtain a high open-circuit voltage (Voc), the HOMO levels of the donor molecules should lie below −5.0 eV, since this quantity is calculated from the difference between the HOMO level of the donor and the LUMO level of the acceptor material. Most small-molecule organic semiconductors used in solar cells have a push–pull structure comprising electron donors and acceptors, in order to enhance the intramolecular charge transfer (ICT); the band gap then becomes narrow, yielding a higher molar absorptivity [22–25]. A common strategy to enhance the power conversion efficiency is to design low-band-gap conjugated molecules as alternating (D-A) or (D-π-A) structures, because this improves the exciton charge transfer and transport [26]. Different authors have described in recent studies the importance of compounds with a D-π-A structure and their role in the elaboration of organic solar cells [27–29]. Organic materials based on thienopyrazine have been used as donor units and still receive considerable attention for their exceptional optoelectronic properties [30, 31]. Knowledge about the optoelectronic properties of these new materials can help with the design of new materials with optimized properties for solar energy conversion. In our previous works [32, 33], we have reported a theoretical study of the photovoltaic properties of a series of D-π-A structures of thienopyrazine derivatives as photoactive components of organic BHJ solar cells. In order to obtain materials with more predominant capability, the development of novel structures is now being undertaken following molecular engineering guidelines; theoretical studies on the electronic structures of these materials have been carried out in order to rationalize the properties of known materials and predict those of unknown ones [26]. As is known, knowledge of the HOMO and LUMO levels of the materials is crucial in studying organic solar cells. The HOMO and LUMO energy levels of the donor and of the acceptor compounds are an important factor for photovoltaic devices, as they determine whether charge transfer will happen between donor and acceptor.
Thienopyrazine derivatives would be much more promising for developing panchromatic materials for photovoltaics, and would thus provide much higher efficiencies, if new absorption bands could be created in the visible-light region. In this paper, we report a strategy to control the band gap and different optoelectronic properties, using the DFT method on a series of non-symmetrical branched molecules based on thienopyrazine as a central core and cyanoacrylic acid as the end group, connected with different π-conjugated groups Xi, as shown in Fig. 1. We think that the presented study of the compounds listed in Fig. 1, concerning their structural, electronic, and optical properties, could help to design more efficient functional photovoltaic organic materials, with the aim of finding the best material to be used as an electron donor in a BHJ solar cell device.

Fig. 1 Chemical structure of the studied compounds Pi (i = 1–6)

Computational methods

All calculations were carried out using density functional theory (DFT) with the B3LYP (Becke three-parameter Lee–Yang–Parr) exchange-correlation functional [34]. 6-31G(d,p) was used as the basis set for all atoms (C, N, H, O, S). Recently, Tretiak and Magyar [35] have demonstrated that charge-transfer states in D-π-A structures can be described correctly only when a large fraction of HF exchange is used. A newly designed functional, the long-range Coulomb-attenuating method (CAM-B3LYP), takes long-range interactions into account by comprising 81% of B88 and 19% of HF exchange at short range, and 35% of B88 and 65% of HF exchange at long range [36]. Furthermore, CAM-B3LYP has been used extensively in recent work and has demonstrated its ability to predict the excitation energies and absorption spectra of D-π-A molecules [37–40]. Therefore, in this work, the TD-CAM-B3LYP method has been used to simulate the vertical excitation energies and electronic absorption spectra. It is important to take the solvent effect into account in theoretical calculations when seeking to reproduce or predict experimental spectra with reasonable accuracy. The polarizable continuum model (PCM) [41] has emerged in the last two decades as one of the most effective tools to treat bulk solvent effects for both ground and excited states. In this work, the integral equation formalism polarizable continuum model (IEF-PCM) [42, 43] was used to calculate the excitation energies. The oscillator strengths and excited-state energies were investigated using TD-DFT calculations on the fully DFT-optimized geometries. Using the HOMO and LUMO energy values of a molecule, the chemical potential, electronegativity, and chemical hardness can be calculated as follows [44]:

$$\mu = (E_{HOMO} + E_{LUMO})/2 \quad \text{(chemical potential)}$$

$$\eta = (E_{LUMO} - E_{HOMO})/2 \quad \text{(chemical hardness)}$$

$$\chi = -(E_{HOMO} + E_{LUMO})/2 \quad \text{(electronegativity)}$$

All calculations were performed using the Gaussian 09 package [45].
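To make the computational protocol above concrete, the sketch below generates skeleton Gaussian 09 inputs of the kind the Methods describe: a B3LYP/6-31G(d,p) ground-state optimization, followed by a TD-CAM-B3LYP single point with six excited states and IEF-PCM chloroform. The file naming and the geometry placeholder are hypothetical; real inputs would contain the Cartesian coordinates of each Pi structure:

```python
# Sketch: write skeleton Gaussian 09 inputs matching the described protocol.
OPT_ROUTE = "# opt b3lyp/6-31g(d,p)"                    # ground-state optimization
TD_ROUTE = ("# cam-b3lyp/6-31g(d,p) td=(nstates=6) "    # vertical excitations
            "scrf=(iefpcm,solvent=chloroform)")         # IEF-PCM in chloroform

TEMPLATE = """%chk={name}_{step}.chk
{route}

{name} {step} step (thienopyrazine D-pi-A candidate)

0 1
{geometry}

"""

for name in ["P1", "P2", "P3", "P4", "P5", "P6"]:
    for step, route in [("opt", OPT_ROUTE), ("td", TD_ROUTE)]:
        with open(f"{name}_{step}.gjf", "w") as handle:
            handle.write(TEMPLATE.format(name=name, step=step, route=route,
                                         geometry="<Cartesian coordinates here>"))
```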
Ground state geometry

The optimized structures of all molecules, obtained at the B3LYP/6-31G(d,p) level, are presented in Fig. 2.

Fig. 2 Optimized geometries obtained by B3LYP/6-31G(d,p) of the studied molecules

Figure 2 shows the definition of the torsional angles Φ1 and Φ2, between D and the π-spacer and between A and the π-spacer, respectively; the intramolecular charge transfer (ICT) path is represented by the π-spacer, and the bridge bonds between D and the π-spacer and between A and the π-spacer are marked as LB1 and LB2, respectively, using compound P1 as an example (see Fig. 2). The torsional angles Φ1 and Φ2 measure the deviation from coplanarity of the π-spacer with the donor and the acceptor, and LB1 and LB2 are the bond lengths connecting the π-spacer to the donor and to the acceptor. The torsional angles (Φ1 and Φ2) and bridge lengths (LB1 and LB2) are listed in Table 1.

Table 1 Optimized selected bond lengths and bond angles of the studied molecules obtained at the B3LYP/6-31G(d,p) level [bond lengths in angstroms (Å); bond and dihedral angles in degrees (°)]

As shown in Table 1, all calculations have been done at the DFT/B3LYP/6-31G(d,p) level. The large torsional angles Φ1 of the compounds P1, P2, P3, P4, P5, and P6 suggest that strong steric hindrance exists between the donor and the π-spacer. For P2, the dihedral angle Φ1 formed between the donor group and the π-spacer is 0.78°, indicating a smaller conjugation effect compared to the other compounds, where near-coplanarity can be observed; but this geometry of P2 allows the formation of π-stacked aggregation to be inhibited efficiently. Furthermore, the dihedral angles Φ2 of all compounds are very small (2.77°, 2.95°, 2.85°, 2.82°, 2.84°, and 2.76°), which indicates that the acceptor (cyanoacrylic unit) is coplanar with the π-spacer (thiophene–thienopyrazine–thiophene). In the excited state (S1), we remark that the dihedral angles Φ1 of all compounds are significantly decreased in comparison with those in the ground state (S0), except for P2 and P6, for which Φ1 is almost the same as in the ground state. This indicates that the nature of the S1 state of the molecular skeleton of all compounds is different from that of the S0 state, and the more complete coplanarity in the S1 state favors fast transfer of the photo-induced electron upon excitation from S0 to S1. Shorter bridge bonds between the π-spacer and the donor (LB1), and between the π-spacer and the acceptor (LB2), favor the ICT within the D-π-A molecules. In the ground state (S0), the calculated critical bond lengths LB1 and LB2 are in the range of 1.421–1.462 Å, showing a more pronounced C=C character (except for compound P6), which enhances the π-electron delocalization, thus decreasing the LB of the studied compounds and favoring intramolecular charge transfer (ICT). On the other hand, upon photoexcitation to the excited state (S1), the bond lengths and torsional angles of these compounds significantly decrease in comparison with those in the ground state (S0), especially the linkage between the π-spacer and the acceptor moiety (LB2). These results indicate that the connection between the acceptor group (2-cyanoacrylic acid) and the π-bridge is crucial for a highly enhanced ICT character, which is important for the red shift of the absorption spectra.

Electronic properties

Among the electronic applications of these materials is their use in organic solar cells; we note that theoretical knowledge of the HOMO and LUMO energy levels of the components is crucial in studying organic solar cells. The HOMO and LUMO energy levels of the donor and of the acceptor components of photovoltaic devices are very important factors that determine whether effective charge transfer will happen between donor and acceptor. Experimentally, the HOMO and LUMO energies are obtained from an empirical formula based on the onset of the oxidation and reduction peaks measured by cyclic voltammetry. In theory, however, the HOMO and LUMO energies can be calculated directly by DFT.
However, it is noticeable that solid-state packing effects are not included in the DFT calculations, and these tend to affect the HOMO and LUMO energy levels in a thin film compared to an isolated molecule as considered in the calculations. Even if these calculated energy levels are not exact, it is possible to use them to obtain information by comparing similar oligomers or polymers. The frontier orbitals HOMO and LUMO and the band gaps of the six compounds (P1, P2, P3, P4, P5, and P6), calculated at the B3LYP/6-31G(d,p) level, are listed in Table 2. The values of the HOMO/LUMO energies are −5.025/−3.057 eV for P1, −5.276/−3.293 eV for P2, −5.091/−3.099 eV for P3, −5.139/−3.124 eV for P4, −5.155/−3.140 eV for P5, and −5.330/−3.159 eV for P6, and the corresponding values of the energy gaps are 1.968 eV for P1, 1.983 eV for P2, 1.992 eV for P3, 2.015 eV for P4, 2.015 eV for P5, and 2.171 eV for P6. The calculated band gaps Eg of the studied model compounds increase in the following order: P1 < P2 < P3 < P4 = P5 < P6. The much lower Eg of P1, P2, and P3 compared to that of P6 indicates a significant effect of intramolecular charge transfer, which would make the absorption spectra red-shifted. The Eg values of P1, P2, and P3 are smaller than that of P6; this is clearly due to the effect of the electron-donor unit, which is stronger in P1, P2, and P3 than in the other compounds. All molecules present a low energy gap and are expected to have outstanding photophysical properties, especially P1.

Table 2 Calculated EHOMO and ELUMO levels, energy gap (Eg), dipole moment (ρ), and other quantum chemical parameters, namely electronegativity (χ), chemical potential (μ), and chemical hardness (η), of the studied compounds, obtained at the B3LYP/6-31G(d,p) level (table values not reproduced here)

Quantum chemical parameters

Generally, a molecule having a large dipole moment possesses a strong asymmetry in the distribution of electronic charge; it can therefore be more reactive and more sensitive to changes of its electronic structure and electronic properties under an external electric field. From Table 2, we can observe that the dipole moments (ρ) of compounds P1 and P4 are greater than those of the other compounds; therefore, we can say that these compounds are more reactive than the others, and indeed, these compounds are more favorable for liberating electrons to PCBM. On another side, we note that PCBM has the smallest value of the chemical potential (μ = −4.9 eV) compared to the six compounds (P1, P2, P3, P4, P5, and P6) (see Table 2); there is thus a tendency for electrons to escape from a compound Pi, which has a high chemical potential, to PCBM, which has a small chemical potential. Therefore, PCBM behaves as an electron acceptor, and the other compounds Pi behave as electron donors. Regarding the electronegativity, we remark that PCBM has a higher value of electronegativity than the other compounds (P1, P2, P3, P4, P5, and P6) (Table 2); thus PCBM is the compound that is able to attract electrons from the other compounds. On the other hand, we remark that the PCBM compound has a high value of chemical hardness (η) in comparison with the other six compounds; this indicates that it is very difficult for PCBM to liberate electrons, while the other compounds are good candidates to donate electrons to PCBM (see Table 2).

Figure 3 shows the frontier molecular orbitals of all six compounds (computed at the B3LYP/6-31G(d,p) level). The FMOs of all six models have analogous distribution characteristics.
All the HOMOs show typical aromatic features, with electron delocalization over the whole conjugated molecule, and are mainly localized on the donor parts and the conjugated spacer, whereas the LUMOs are concentrated on the π-spacer and the acceptor moiety (cyanoacrylic unit). In addition, the HOMO possesses an anti-bonding character between consecutive subunits, while the LUMO of all oligomers shows a bonding character between adjacent fragments, so the lowest-lying singlet states correspond to electronic transitions of π–π* type. Therefore, the photoexcited electron will be transferred from the donor moiety to the acceptor group during the excitation process, which benefits the injection of photoexcited electrons into the LUMO of the semiconductor (PCBM). We also note that the acceptor group (–CCNCOOH) of all compounds contributes considerably to the LUMOs, which could lead to strong electronic coupling with the PCBM surface upon photoexcitation, improve the electron injection efficiency, and subsequently enhance the short-circuit current density Jsc.

Figure 3: The contour plots of the HOMO and LUMO orbitals of the studied compounds Pi

Photovoltaic properties

The power conversion efficiency (PCE) is the parameter most commonly used to compare the performance of solar cells. To describe it, several important parameters must be evaluated, namely the short-circuit current density (JSC), the open-circuit voltage (VOC), the fill factor (FF), and the incident light power (Pinc). The PCE was calculated according to Eq. (1):

$$PCE = \frac{J_{SC}\,V_{OC}\,FF}{P_{inc}} \qquad (1)$$

where JSC is estimated as the maximum current that flows in the device under illumination when no voltage is applied; it depends on the morphology of the device and on the lifetime and mobility of the charge carriers [46]. The maximum open-circuit voltage (VOC) of the BHJ is determined by the difference between the HOMO of the donor (the π-conjugated molecule) and the LUMO of the acceptor, taking into account the energy lost during photo-charge generation [47, 48]. It has been found that VOC does not depend strongly on the work functions of the electrodes [49, 50]. The theoretical values of the open-circuit voltage VOC of the BHJ solar cell were calculated from the following expression [47, 48]:

$$V_{OC} = \frac{1}{e}\left(\left|E_{HOMO}^{Donor}\right| - \left|E_{LUMO}^{Acceptor}\right|\right) - 0.3 \qquad (2)$$

where e represents the elementary charge and the value of 0.3 V is an empirical factor. Scharber et al. [48] proposed Eq. (2) using −4.3 eV as the LUMO energy of PC71BM. In addition, a low LUMO of the π-conjugated compound and a high LUMO of the electron acceptor (PC71BM, PC60BM) increase the value of VOC, which contributes to a high solar-cell efficiency [48, 50]. The theoretical open-circuit voltages VOC of the studied molecules range from 1.499 to 1.804 V with PC60BM and from 0.425 to 0.73 V with PC71BM (Table 3); these values are sufficient for efficient electron injection into the LUMO of the acceptor.

Table 3: Energy values of ELUMO (eV), EHOMO (eV), Egap (eV), the open-circuit voltage Voc (V), and LUMOdonor − LUMOacceptor of the studied molecules obtained at the B3LYP/6-31G(d,p) level [columns include Voc/PC60BM and LD − LA for PC60BM and PC71BM]
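As a numerical illustration of Eqs. (1) and (2), the sketch below reproduces the lowest Voc quoted in the text for the PC71BM case (0.425 V, from the P1 HOMO against the −4.3 eV acceptor LUMO used by Scharber et al.). The Jsc, FF, and Pinc inputs are hypothetical placeholders, since the paper does not report them.

```python
# Minimal sketch of Eqs. (1) and (2), assuming orbital energies in eV so that
# the 1/e factor reduces to unity and Voc comes out in volts.

def voc_scharber(e_homo_donor_ev: float, e_lumo_acceptor_ev: float) -> float:
    """Scharber-style open-circuit voltage estimate (Eq. 2)."""
    return abs(e_homo_donor_ev) - abs(e_lumo_acceptor_ev) - 0.3

def pce(jsc_ma_cm2: float, voc_v: float, ff: float, pinc_mw_cm2: float) -> float:
    """Power conversion efficiency (Eq. 1), returned as a percentage."""
    return 100.0 * jsc_ma_cm2 * voc_v * ff / pinc_mw_cm2

# P1 HOMO from Table 2 (-5.025 eV) against the -4.3 eV acceptor LUMO:
voc = voc_scharber(-5.025, -4.3)          # = 0.425 V, as quoted in the text
# Hypothetical Jsc = 10 mA/cm^2, FF = 0.65, AM1.5G Pinc = 100 mW/cm^2:
print(voc, pce(10.0, voc, 0.65, 100.0))   # ~2.8 % under these assumptions
```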
Moreover, Table 3 and Fig. 4 show that the differences (LD − LA) between the LUMO energy levels of the newly designed donors (P1, P2, P3, P4, P5, and P6) and that of the acceptor PC60BM are larger than 0 eV, except for P2. The same holds for PC71BM: the (LD − LA) differences are also larger than 0 eV, which ensures efficient electron transfer from the donor to the acceptor (PC60BM, PC71BM), except for P2 with PC60BM, whose difference lies below 0 eV. This makes electron transfer from compound P2 to the LUMO of PC60BM very difficult (the LUMO of P2 lies below the LUMO of PC60BM).

Figure 4: Sketch of the B3LYP/6-31G(d,p)-calculated energies of the HOMO and LUMO levels of the studied molecules

Therefore, all the studied molecules can be used in BHJ devices, because the electron injection process from the excited molecule to the conduction band of PCBM, and the subsequent regeneration, are feasible in an organic sensitized solar cell. The ideal performance of a donor can be assessed from the position of its [ELUMO (donor) − ELUMO (acceptor)] energy and its band gap (Fig. 5). Theoretically, a maximum energy conversion efficiency of about 10 % could be achieved for conjugated polymers and oligomers [51, 52]: an oligomer having a LUMO energy level between −3.8 and −4.0 eV and a band gap between 1.2 and 1.9 eV has a theoretical power conversion efficiency between 8 and 10 %. In a tandem configuration, combining two polymers with band gaps of 1.8 and 1.5 eV, or 1.5 and 1.2 eV, in two separate active layers could increase the effectiveness of a complete device, achieving a theoretical energy conversion efficiency of about 15 %. We note that the highest power conversion efficiency that could be achieved is 4 % for P2 and 3 % for P3.

Figure 5: Calculated efficiency under AM1.5G illumination for single-junction devices based on composites consisting of a donor with a variable band gap and LUMO level and an acceptor with a variable LUMO level [34]

To understand the electronic transitions of our compounds, quantum calculations of the electronic absorption spectra in the gas phase and in solvent (chloroform) were performed at the TD-DFT/CAM-B3LYP/6-31G(d,p) level. The calculated absorption wavelengths (λmax), oscillator strengths (f), and vertical excitation energies (E) for the gas phase and for chloroform were obtained and are listed in Table 4. The spectra show a similar profile for all compounds, with one main intense band from 548.16 to 591.46 nm in the gas phase and from 574.33 to 625.38 nm in chloroform solution, assigned to ICT transitions. From Table 4, we find that as the donor group changes, the first vertical excitation energies (E) decrease in the order P6 > P5 > P4 > P2 > P3 > P1 in both phases (gaseous and solvated), showing a red shift on passing from P6 to P1. The transition with the largest oscillator strength is the most probable transition from the ground state to an excited state; it corresponds to the HOMO → LUMO excitation in both the gas phase and chloroform solution and is a π–π* transition. These results indicate that all molecules have only one band in the visible region (λabs > 400 nm) (Fig. 6), and P1 could harvest more light at longer wavelengths, which is beneficial for further increasing the photo-to-electric conversion efficiency of the corresponding solar cells. Thus, the lowest-lying transition can be tuned by the different π-spacers.
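The wavelengths and vertical excitation energies quoted above are related by λ = hc/E; a minimal conversion sketch follows. The example value is taken from the red edge of the chloroform band quoted in the text.

```python
# Minimal sketch: interconvert vertical excitation energy (eV) and
# absorption wavelength (nm) via lambda = hc/E ~ 1239.84 / E.

EV_NM = 1239.84193  # h*c in eV*nm

def ev_to_nm(e_ev: float) -> float:
    return EV_NM / e_ev

def nm_to_ev(lam_nm: float) -> float:
    return EV_NM / lam_nm

# Red edge of the chloroform absorption band quoted above (625.38 nm):
print(nm_to_ev(625.38))  # ~1.98 eV, consistent with the calculated band gaps
```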
Table 4: Absorption spectra data obtained by TD-DFT for the title compounds at CAM-B3LYP/6-31G(d,p)-optimized geometries in the gas phase and in solvent (chloroform) [columns for each phase: MO/character, λabs (nm), Eex (eV); main transition HOMO → LUMO]

Figure 6: Simulated UV–visible optical absorption spectra of the title compounds, calculated at the TD-DFT/CAM-B3LYP/6-31G(d,p) level in chloroform solvent

To study the photoluminescence emission properties of the studied compounds Pi (i = 1 to 6), the TD-DFT/CAM-B3LYP method was applied to the geometry of the lowest singlet excited state optimized at the CAM-B3LYP/6-31G(d,p) level, and the calculated emissions with the strongest oscillator strengths are presented in Table 5. The emission arising from the S1 state is assigned to a π* → π, LUMO → HOMO transition for all molecules. Analysis of the transition configurations of the fluorescence shows that the calculated fluorescence is simply the reverse process of the lowest-lying absorption. Moreover, the observed red shift of the photoluminescence (PL) spectra on passing from P1 to P6 is in reasonable agreement with the absorption results. We also note that relatively high Stokes shift (SS) values are obtained for all compounds: P1 (179.64 nm), P2 (176.64 nm), P3 (181.49 nm), P4 (178.33 nm), P5 (177.26 nm), and P6 (152.68 nm) (Table 5). Compounds with a small Stokes shift undergo minimal conformational reorganization between the ground and excited states, which hinders intermolecular charge transfer and delays electron injection from the LUMO of the compounds to the LUMO of PCBM. The Stokes shift, defined as the difference between the absorption and emission maxima (EVA − EVE), is usually related to the bandwidths of both the absorption and emission bands [53].

Table 5: Emission spectra data obtained by TD-DFT for the title compounds at B3LYP/6-31G(d,p)-optimized geometries in chloroform solvent [columns: excited state, main composition, λmax,emis (nm), ΔE (eV), radiative lifetimes (ns); main transition LUMO → HOMO]

Excited state lifetimes

The radiative lifetimes (in au) for spontaneous emission were computed using the Einstein transition probabilities according to the following formula [54]:

$$\tau = \frac{c^{3}}{2\,(E_{Flu})^{2}\,f}$$

where c is the velocity of light, EFlu is the excitation energy, and f is the oscillator strength. The computed lifetimes (τ) of the title compounds are listed in Table 5. An increase in the lifetime of Pi will retard the charge recombination process and enhance the efficiency of the photovoltaic cells; long radiative lifetimes facilitate electron transfer, upon photoexcitation, from the LUMO of the electron donor to the LUMO of the electron acceptor and thus lead to high light-emitting efficiency. The radiative lifetimes of the studied compounds range from 7.11 to 7.61 ns and increase in the following order: P4 < P1 < P2 < P5 < P3 < P6. This result is sufficient to obtain a high light-emitting efficiency, especially for P6.

Conclusions

We have used density functional theory to investigate the geometries and electronic properties of several thienopyrazine derivatives with an alternating donor-π-acceptor structure. Modification of the chemical structures can greatly modulate and improve the electronic and optical properties of the pristine studied materials.
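The lifetime formula above is stated in atomic units, while Table 5 reports nanoseconds; the sketch below shows one way to perform that conversion. The physical constants are standard; the example energy and oscillator strength are hypothetical, since the Table 5 values are not reproduced here.

```python
# Minimal sketch of the radiative lifetime tau = c^3 / (2 * E_flu^2 * f),
# evaluated in atomic units and converted to nanoseconds.

C_AU = 137.035999          # speed of light in atomic units (1/alpha)
HARTREE_EV = 27.211386     # 1 hartree in eV
AU_TIME_S = 2.4188843e-17  # atomic unit of time in seconds

def radiative_lifetime_ns(e_flu_ev: float, f_osc: float) -> float:
    """Radiative lifetime (ns) from fluorescence energy (eV) and oscillator strength."""
    e_au = e_flu_ev / HARTREE_EV
    tau_au = C_AU**3 / (2.0 * e_au**2 * f_osc)
    return tau_au * AU_TIME_S * 1e9

# Hypothetical example: E_flu = 1.6 eV, f = 1.5 -> ~6 ns, the same order of
# magnitude as the 7.11-7.61 ns lifetimes reported above.
print(radiative_lifetime_ns(1.6, 1.5))
```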
The electronic properties of new conjugated materials based on thienopyrazine, heterocyclic units, and different acceptor moieties have been computed using the 6-31G(d,p) basis set at the density functional B3LYP level, in order to guide the synthesis of novel materials with specific electronic properties. The concluding remarks are:

The band gaps predicted with DFT-B3LYP/6-31G(d,p) are in the range of 1.968–2.171 eV; a small band gap makes the displacement of electrons from the donor through the spacer to the acceptor very easy. The much lower Eg of P1, P2, and P3 compared with the other compounds indicates a significant intramolecular charge-transfer effect.

The theoretical open-circuit voltages Voc of the studied molecules range from 1.499 to 1.804 V with PC60BM and from 0.425 to 0.73 V with PC71BM; these values are sufficient for efficient electron injection.

From these results, all the studied molecules can be used in BHJ devices, because the electron injection process from the excited molecule to the conduction band of PCBM, and the subsequent regeneration, are feasible in an organic sensitized solar cell. The highest power conversion efficiency that could be achieved is 4 % for P2 and 3 % for P3.

TD-DFT calculations at the TD-CAM-B3LYP/6-31G(d,p) level were used to reproduce the optical transitions and predict the excited and emission states; the predicted emission wavelengths for P1, P2, P3, P4, P5, and P6 are 805.02, 794.65, 801.53, 793.82, 790.72, and 727.01 nm, respectively.

The decrease in the band gap of these six materials leads to longer absorption wavelengths; the best candidates for use as electron donors in photovoltaic cells are those with a small band gap and long absorption wavelengths, and thus all compounds (P1–P6) are appropriate for this role.

MB, ATB, MB, and MM performed the quantum calculations, analyzed and interpreted the materials data and analysis tools, and wrote the paper. MH, SMB, and MB proposed the studied compounds and checked the analysis and interpretation of the materials data and analysis tools. All authors read and approved the final manuscript. This work was supported by the Volubilis Program (No. MA/11/248) and the convention CNRST/CNRS (Project chimie 1009).

ECIM/LIMME, Faculty of Sciences Dhar El Mahraz, University Sidi Mohamed Ben Abdallah, Fez, Morocco
Equipe d'Electrochimie et Environnement, Faculté des Sciences et Techniques, University Moulay Ismaïl, Meknes, Morocco
Centre Régional des Métiers d'Education et de Formation, BP 8, Errachidia, Morocco
ESTM, (LASMAR), University Moulay Ismaïl, Meknes, Morocco

References
Sariciftci NS, Heeger AJ, Nalwa HS (1997) Handbook of organic conductive molecules and polymers. Wiley, New York, p 414
Chen HY, Hou J, Zhang S, Liang Y, Yang G, Yang Y, Li G (2009) Polymer solar cells with enhanced open-circuit voltage and efficiency. Nat Photonics 3(11):649–653
Hoppe H, Sariciftci NS (2006) Morphology of polymer/fullerene bulk heterojunction solar cells. J Mater Chem 16(1):45–61
Helgesen M, Søndergaard R, Krebs FC (2010) Advanced materials and processes for polymer solar cell devices. J Mater Chem 20(1):36–60
Park SH, Roy A, Beaupre S, Cho S, Coates N, Moon JS, Heeger AJ (2009) Bulk heterojunction solar cells with internal quantum efficiency approaching 100%. Nat Photonics 3(5):297–302
Price SC, Stuart AC, Yang L, Zhou H, You W (2011) Fluorine substituted conjugated polymer of medium band gap yields 7% efficiency in polymer-fullerene solar cells. J Am Chem Soc 133(12):4625–4631
Zhou H, Yang L, Stuart AC, Price SC, Liu S, You W (2011) Development of fluorinated benzothiadiazole as a structural unit for a polymer solar cell of 7% efficiency. Angew Chem 123(13):3051–3054
Ma W, Yang C, Gong X, Lee K, Heeger AJ (2005) Thermally stable, efficient polymer solar cells with nanoscale control of the interpenetrating network morphology. Adv Funct Mater 15(10):1617–1622
Yang C, Lee JK, Heeger AJ, Wudl F (2009) Well-defined donor–acceptor rod–coil diblock copolymers based on P3HT containing C60: the morphology and role as a surfactant in bulk-heterojunction solar cells. J Mater Chem 19(30):5416–5423
Lee K, Kim JY, Park SH, Kim SH, Cho S, Heeger AJ (2007) Air-stable polymer electronic devices. Adv Mater 19(18):2445–2449
Lee JK, Coates NE, Cho S, Cho NS, Moses D, Bazan GC, Heeger AJ (2008) Efficacy of TiOx optical spacer in bulk-heterojunction solar cells processed with 1,8-octanedithiol. Appl Phys Lett 92(24):3308
Peet J, Kim JY, Coates NE, Ma WL, Moses D, Heeger AJ, Bazan GC (2007) Efficiency enhancement in low-bandgap polymer solar cells by processing with alkane dithiols. Nat Mater 6(7):497–500
Lee JK, Ma WL, Brabec CJ, Yuen J, Moon JS, Kim JY, Heeger AJ (2008) Processing additives for improved efficiency from bulk heterojunction solar cells. J Am Chem Soc 130(11):3619–3623
You J, Dou L, Yoshimura K, Kato T, Ohya K, Moriarty T, Yang Y (2013) A polymer tandem solar cell with 10.6% power conversion efficiency. Nat Commun 4:1446
Chu TY, Lu J, Beaupré S, Zhang Y, Pouliot JR, Wakim S, Tao Y (2011) Bulk heterojunction solar cells using thieno[3,4-c]pyrrole-4,6-dione and dithieno[3,2-b:2′,3′-d]silole copolymer with a power conversion efficiency of 7.3%. J Am Chem Soc 133(12):4250–4253
Sharma SS, Sharma GD, Mikroyannidis JA (2011) Improved power conversion efficiency of bulk heterojunction poly(3-hexylthiophene):PCBM photovoltaic devices using small molecule additive. Sol Energy Mater Sol Cells 95(4):1219–1223
Son HJ, Wang W, Xu T, Liang Y, Wu Y, Li G, Yu L (2011) Synthesis of fluorinated polythienothiophene-co-benzodithiophenes and effect of fluorination on the photovoltaic properties. J Am Chem Soc 133(6):1885–1894
Amb CM, Chen S, Graham KR, Subbiah J, Small CE, So F, Reynolds JR (2011) Dithienogermole as a fused electron donor in bulk heterojunction solar cells. J Am Chem Soc 133(26):10062–10065
Small CE, Chen S, Subbiah J, Amb CM, Tsang SW, Lai TH, So F (2012) High-efficiency inverted dithienogermole-thienopyrrolodione-based polymer solar cells. Nat Photonics 6(2):115–120
Dou L, You J, Yang J, Chen CC, He Y, Murase S, Yang Y (2012) Tandem polymer solar cells featuring a spectrally matched low-bandgap polymer. Nat Photonics 6(3):180–185
Nat Photonics 6(3):180–185View ArticleGoogle Scholar He Z, Zhong C, Su S, Xu M, Wu H, Cao Y (2012) Enhanced power-conversion efficiency in polymer solar cells using an inverted device structure. Nat Photonics 6(9):591–595View ArticleGoogle Scholar Roncali J (2009) Molecular bulk heterojunctions: an emerging approach to organic solar cells. Acc Chem Res 42(11):1719–1730View ArticleGoogle Scholar Walker B, Kim C, Nguyen TQ (2010) Small molecule solution-processed bulk heterojunction solar cells. Chem Mater 23(3):470–482View ArticleGoogle Scholar Demeter D, Rousseau T, Leriche P, Cauchy T, Po R, Roncali J (2011) Manipulation of the open-circuit voltage of organic solar cells by desymmetrization of the structure of acceptor–donor–acceptor molecules. Adv Funct Mater 21(22):4379–4387View ArticleGoogle Scholar Sun Y, Welch GC, Leong WL, Takacs CJ, Bazan GC, Heeger AJ (2012) Solution-processed small-molecule solar cells with 6.7 % efficiency. Nat Mater 11:44–48View ArticleGoogle Scholar Bundgaard E, Krebs FC (2007) Large-area photovoltaics based on low band gap copolymers of thiophene and benzothiadiazole or benzo-bis (thiadiazole). Sol Energy Mater Sol Cells 91(11):1019–1025View ArticleGoogle Scholar Tian H, Yang X, Cong J, Chen R, Teng C, Liu J, Sun L (2010) Effect of different electron donating groups on the performance of dye-sensitized solar cells. Dyes Pigm 84(1):62–68View ArticleGoogle Scholar Han H, Liang M, Tang K, Cheng X, Zong X, Sun Z, Xue S (2011) Molecular design of triarylamine dyes incorporating phenylene spacer and the influence of alkoxy substituent on the performance of dye-sensitized solar cells. J Photochem Photobiol A 225(1):8–16View ArticleGoogle Scholar Kono T, Murakami TN, Nishida JI, Yoshida Y, Hara K, Yamashita Y (2012) Synthesis and photo-electrochemical properties of novel thienopyrazine and quinoxaline derivatives, and their dye-sensitized solar cell performance. Org Electron 13(12):3097–3101View ArticleGoogle Scholar Campos LM, Tontcheva A, Günes S, Sonmez G, Neugebauer H, Sariciftci NS, Wudl F (2005) Extended photocurrent spectrum of a low band gap polymer in a bulk heterojunction solar cell. Chem Mater 17(16):4031–4033View ArticleGoogle Scholar Nietfeld JP, Schwiderski RL, Gonnella TP, Rasmussen SC (2011) Structural effects on the electronic properties of extended fused-ring Thieno [3, 4-b] pyrazine analogues. J Org Chem 76(15):6383–6388View ArticleGoogle Scholar Bourass M et al (2013) DFT theoretical investigations of p-conjugated molecules based on thienopyrazine and different acceptor moieties for organic photovoltaic cells. J Saudi Chem Soc. doi:10.1016/j.jscs.2013.01.003 Google Scholar Bourass M, Fitri A, Benjelloun AT, Mcharfi M, Hamidi M, Serein-Spirau F, Bouachrine M (2013) DFT and TDDFT investigations of new thienopyrazine-based dyes for solar cells: Effects of electron donor groups. Der Pharma Chemica 5(5):144–153Google Scholar Becke AD (1993) Density-functional thermochemistry. III. The role of exact exchange. J Chem Phys 98(7):5648–5652View ArticleGoogle Scholar Magyar RJ, Tretiak S (2007) Dependence of spurious charge-transfer excited states on orbital exchange in TDDFT: large molecules and clusters. J Chem Theory Comput 3(3):976–987View ArticleGoogle Scholar Yanai T, Tew DP, Handy NC (2004) A new hybrid exchange-correlation functional using the Coulomb-attenuating method (CAM-B3LYP). 
Preat J (2010) Photoinduced energy-transfer and electron-transfer processes in dye-sensitized solar cells: TDDFT insights for triphenylamine dyes. J Phys Chem C 114(39):16716–16725
Camino B, De La Pierre M, Ferrari AM (2013) Photoelectrochemical properties of the CT1 dye: a DFT study. J Mol Struct 1046:116–123
Irfan A, Jin R, Al-Sehemi AG, Asiri AM (2013) Quantum chemical study of the donor-bridge-acceptor triphenylamine based sensitizers. Spectrochim Acta Part A Mol Biomol Spectrosc 110:60–66
Jungsuttiwong S, Tarsang R, Sudyoadsuk T, Promarak V, Khongpracha P, Namuangruk S (2013) Theoretical study on novel double donor-based dyes used in high efficient dye-sensitized solar cells: the application of TDDFT study to the electron injection process. Org Electron 14(3):711–722
Tomasi J, Mennucci B, Cammi R (2005) Quantum mechanical continuum solvation models. Chem Rev 105(8):2999–3094
Cossi M, Barone V (2001) Time-dependent density functional theory for molecules in liquid solutions. J Chem Phys 115(10):4708–4717
Adamo C, Barone V (2000) A TDDFT study of the electronic spectrum of s-tetrazine in the gas-phase and in aqueous solution. Chem Phys Lett 330(1):152–160
Pearson RG (1986) Absolute electronegativity and hardness correlated with molecular orbital theory. Proc Natl Acad Sci 83(22):8440–8441
Frisch MJ, Trucks GW, Schlegel HB, Scuseria GE, Robb MA, Cheeseman JR, Montgomery JA Jr, Vreven T, Kudin KN, Burant JC, Millam JM, Iyengar SS, Tomasi J, Barone V, Mennucci B, Cossi M, Scalmani G, Rega N, Petersson GA, Nakatsuji H, Hada M, Ehara M, Toyota K, Fukuda R, Hasegawa J, Ishida M, Nakajima T, Honda Y, Kitao O, Nakai H, Klene M, Li X, Knox JE, Hratchian HP, Cross JB, Adamo C, Jaramillo J, Gomperts R, Stratmann RE, Yazyev O, Austin AJ, Cammi R, Pomelli C, Ochterski JW, Ayala PY, Morokuma K, Voth GA, Salvador P, Dannenberg JJ, Zakrzewski VG, Dapprich S, Daniels AD, Strain MC, Farkas O, Malick DK, Rabuck AD, Raghavachari K, Foresman JB, Ortiz JV, Cui Q, Baboul AG, Clifford S, Cioslowski J, Stefanov BB, Liu G, Liashenko A, Piskorz P, Komaromi I, Martin RL, Fox DJ, Keith T, Al-Laham MA, Peng CY, Nanayakkara A, Challacombe M, Gill PMW, Johnson B, Chen W, Wong MW, Gonzalez C, Pople JA (2009) Gaussian 09, Revision A02. Gaussian Inc, Wallingford CT
Shaheen SE, Brabec CJ, Sariciftci NS, Padinger F, Fromherz T, Hummelen JC (2001) 2.5% efficient organic plastic solar cells. Appl Phys Lett 78(6):841–843
Wu Z, Fan B, Xue F, Adachi C, Ouyang J (2010) Organic molecules based on dithienyl-2,1,3-benzothiadiazole as new donor materials for solution-processed organic photovoltaic cells. Sol Energy Mater Sol Cells 94(12):2230–2237
Scharber MC, Mühlbacher D, Koppe M, Denk P, Waldauf C, Heeger AJ, Brabec CJ (2006) Design rules for donors in bulk-heterojunction solar cells—towards 10% energy-conversion efficiency. Adv Mater 18(6):789–794
Brabec CJ, Cravino A, Meissner D, Sariciftci NS, Fromherz T, Rispens MT, Hummelen JC (2001) Origin of the open circuit voltage of plastic solar cells. Adv Funct Mater 11(5):374–380
Frohne H, Shaheen SE, Brabec CJ, Müller DC, Sariciftci NS, Meerholz K (2002) Influence of the anodic work function on the performance of organic solar cells. ChemPhysChem 3(9):795–799
Koster LJA, Mihailetchi VD, Blom PWM (2006) Bimolecular recombination in polymer/fullerene bulk heterojunction solar cells. Appl Phys Lett 88(5):052104
Minnaert B, Burgelman M (2007) Efficiency potential of organic bulk heterojunction solar cells. Prog Photovoltaics Res Appl 15(8):741–748
May V, Kühn O (2000) Intramolecular electronic transitions. In: Charge and energy transfer dynamics in molecular systems, 3rd edn. Wiley, New York, pp 255–307
Lukeš V, Aquino A, Lischka H (2005) Theoretical study of vibrational and optical spectra of methylene-bridged oligofluorenes. J Phys Chem A 109(45):10232–10238
Identification of stable quantitative trait loci (QTLs) for fiber quality traits across multiple environments in Gossypium hirsutum recombinant inbred line population

Muhammad Jamshed, Fei Jia, Juwu Gong, Koffi Kibalou Palanga, Yuzhen Shi, Junwen Li, Haihong Shang, Aiying Liu, Tingting Chen, Zhen Zhang, Juan Cai, Qun Ge, Zhi Liu, Quanwei Lu, Xiaoying Deng, Yunna Tan, Harun or Rashid, Zareen Sarfraz, Murtaza Hassan, Wankui Gong & Youlu Yuan

BMC Genomics volume 17, Article number: 197 (2016)

The identification of quantitative trait loci (QTLs) that are stable and consistent across multiple environments and populations plays an essential role in marker-assisted selection (MAS). In the present study, we used 28,861 simple sequence repeat (SSR) markers, including 12,560 Gossypium raimondii (D genome) sequence-based SSR markers, to identify polymorphisms between two upland cotton strains, 0–153 and sGK9708. A total of 851 polymorphic primers were finally selected, used to genotype 196 recombinant inbred lines (RILs) derived from a cross between 0–153 and sGK9708, and used to construct a linkage map. The RIL population was evaluated for fiber quality traits in six locations in China over five years. Stable QTLs identified in this intraspecific cross could be used in future cotton breeding programs with fewer obstacles. The map covered a distance of 4,110 cM, representing about 93.2 % of the upland cotton genome, with an average distance of 5.2 cM between adjacent markers. We identified 165 QTLs for fiber quality traits, of which 47 were determined to be stable across multiple environments. Most of these QTLs aggregated into clusters with two or more traits. A total of 30 QTL clusters were identified, consisting of 103 QTLs. Sixteen clusters in the At sub-genome comprised 44 QTLs, whereas 14 clusters in the Dt sub-genome included 59 QTLs for fiber quality. Four chromosomes, namely chromosome 4 (c4), c7, c14, and c25, were rich in clusters, harboring 5, 4, 5, and 6 clusters, respectively. A meta-analysis was performed with BioMercator V4.2 to integrate QTLs from the 11 environmental datasets on the RIL population of the above-mentioned parents and from previous QTL reports. Among the 165 identified QTLs, 90 were identified as common QTLs, whereas the remaining 75 were determined to be novel QTLs. The broad sense heritability estimates of the fiber quality traits were high for fiber length (0.93), fiber strength (0.92), fiber micronaire (0.85), and fiber uniformity (0.80), but low for fiber elongation (0.27). Meta-clusters on c4, c7, c14, and c25 were identified as stable QTL clusters and were considered more valuable for MAS to improve the fiber quality of upland cotton.

Multiple environmental evaluations of an intraspecific RIL population were conducted to identify stable QTLs. Meta-QTL analyses identified common chromosomal regions that play an important role in fiber development. The QTLs identified in the present study are therefore ideal candidates for MAS in cotton breeding programs aimed at improving fiber quality.

Background

Cotton (genus Gossypium) is a well-known and highly important industrial crop grown in more than 80 countries located in tropical and subtropical regions [1]. It is an important source of natural fiber, seed oil, and protein [2]. The genus Gossypium comprises approximately 45 diploid species and five tetraploid species. Two tetraploid species, G. hirsutum and G.
barbadense, and two diploid species, G. herbaceum and G. arboreum, have been extensively cultivated around the world, with G. hirsutum accounting for >90 % of total world production; it is generally referred to as upland cotton [3]. Upland cotton has a high yield potential, whereas G. barbadense has superior fiber quality attributes that give it a 30–50 % price advantage over upland cotton [4]; however, the low yield and poor adaptation of G. barbadense restrict its production to specific regions of the world. The global requirements of the growing human population and recent advances in spinning technology justify the need for increased cotton fiber yield and improved cotton fiber traits. Fiber quality traits and yield components are quantitative traits that are negatively correlated [5]. It is therefore very difficult to improve all of these traits simultaneously using conventional breeding procedures, which would also be laborious and time-consuming [6]. Marker-assisted selection (MAS) can break the linkage among these traits, because it directly selects genetic markers tightly linked to quantitative trait loci (QTLs), rather than following the conventional procedure of indirectly selecting strains with superior phenotypic performance for breeding. Recent developments in the field of molecular markers have allowed plant breeders to identify and evaluate complex agronomic traits. The construction of a molecular genetic map is a foundation for the genetic dissection of economically and agronomically important traits, MAS, and map-based cloning [7]. The first molecular linkage map of cotton was constructed in 1994 [8]. Since then, several genetic maps have been constructed from both interspecific [9–14] and intraspecific crosses [15–20] to explore the cotton genome and identify QTLs. However, most fiber QTLs obtained from interspecific crosses have limited application to upland cotton breeding programs [21, 22], because most markers used in interspecific crosses do not show polymorphism in intraspecific crosses [23]. Saturated intraspecific upland cotton maps are useful but more challenging to construct because of the markedly low rate of polymorphism of molecular markers within G. hirsutum. To overcome this obstacle, scientists have employed different mapping populations or used whole-genome sequence-based markers. Populations involving more than two parents have higher polymorphism rates in intraspecific crosses, namely from 6.6 to 13.7 %, ensuring greater genetic diversity and facilitating the identification of more QTLs [19, 23, 24]. Recently, physical genome drafts of G. raimondii [25, 26], G. arboreum [27], and G. hirsutum [28, 29] have been completed, which can be utilized in the construction of high-density linkage maps and the investigation of complex traits such as fiber quality. A previous study suggested that the tetraploid species originated from hybridization between two diploid species, G. arboreum (A genome) and G. raimondii (D genome), about 1–2 million years ago [2]. Furthermore, more QTLs for fiber traits have been mapped to the Dt sub-genome of upland cotton than to the At sub-genome, suggesting that the Dt sub-genome may play an important role in fiber development [30–32]. A high-coverage genetic map constructed by Tang et al. [33] with SSR markers developed from G. raimondii BAC-end sequences revealed that these D genome-based primers are widely distributed and suitable for whole-genome mapping.
Therefore, because of the importance of the Dt sub-genome in determining fiber quality traits [23], we used D genome (G. raimondii) sequence-based SSR primers [26], together with SSR primers from the Cotton Marker Database (http://www.cottonmarker.org/), to construct an intraspecific linkage map. Previously, Sun et al. [18] reported a linkage map based on an intraspecific cross of the upland cotton cultivars sGK9708 and 0–153. They used 200 SSR markers to construct a genetic map and identified 50 QTLs for fiber quality in the F2, F2:3, and RIL populations in four environments. We added 603 primers to our published genetic map and identified QTLs for fiber quality in 11 environments, including the four previously reported environments [18] (Table 1), to augment our previous results from the same intraspecific RIL (F6:8) population of upland cotton. Furthermore, we conducted a meta-analysis with BioMercator V4.2 [34] using the fiber QTLs identified in the present study, those previously reported in the F2, F2:3, and RIL populations [18], and those generated by the meta-analyses of Said et al. [35, 36], along with three subsequent QTL studies [33, 37, 38]. We identified some stable and consistent QTLs that aggregated into clusters in upland cotton. These QTL clusters could be highly valuable for MAS to improve the fiber quality of upland cotton.

Table 1: Details of the 11 environments used to evaluate the 196 RILs along with their parents

Assessment of phenotypic performance

The five fiber traits segregated continuously, and transgressive segregation was observed. Very low absolute skewness and kurtosis values showed that these traits were normally distributed (Table 2). The results of the correlation analyses of fiber quality traits in the RILs are presented in Table 3. Positive correlations were observed between any two of the traits fiber elongation (FE), fiber length (FL), fiber strength (FS), and fiber uniformity (FU), at a significance level of 0.01. Fiber micronaire (FM) was negatively correlated with FL and FS. ANOVA revealed that the fiber quality traits showed significant environmental and genetic effects (P < 0.01, Table 4). Broad sense heritability was also estimated for all fiber traits as defined elsewhere [39]. Fiber elongation had the lowest heritability (0.27), whereas the heritability of the other fiber traits was high, ranging from 0.80 (FU) to 0.93 (FL).

Table 2: The observed phenotypic performance (mean values) of fiber quality traits of the two parents and the RILs in 11 environments
Table 3: Correlation analyses among fiber quality traits based on eleven environments for the RILs
Table 4: ANOVA and broad sense heritability of fiber quality traits in the RIL population

Construction of a genetic map

In the present study, we obtained 851 primer pairs that were clearly polymorphic between the two parents, 0–153 and sGK9708. These 851 primer pairs generated 997 loci: 132 pairs produced two loci, 13 pairs yielded three loci, and two pairs resulted in four loci. All 997 loci were used in the construction of a linkage map. A total of 793 loci were grouped into 76 linkage groups. Seventy-three groups were assigned to the 26 chromosomes of upland cotton (Additional file 1). Three groups could not be associated with any chromosome; we named these "UD" followed by a serial number.
The total recombinant length of this map was 4,110 cM, representing approximately 93.2 % [40] of the total length of the cotton genome, with an average distance of 5.2 cM between adjacent markers. The At sub-genome spanned 1,635 cM and consisted of 269 markers on 37 linkage groups, with an average distance of 6.1 cM between adjacent markers. Thirty-six groups were assigned to the Dt sub-genome, comprising 524 markers spanning 2,327.4 cM, with an average of 4.6 cM between adjacent loci (Table 5). Chromosomes c4, c5, c14, c16, and c25 had more markers than the other chromosomes. Among these, c25 had 113 loci encompassing 204 cM, with an average distance of 1.9 cM between adjacent markers. The smallest group, c11, had eight markers and a total length of 37.8 cM.

Table 5: Genomic distributions of SSR markers and identified QTLs

Segregation distortion of SSR markers

Segregation distortion is a common occurrence in plants [41], including cotton [7]. We observed severe segregation distortion, at a rate of about 45 % (Table 5). Among the 361 distorted loci, 241 (67.1 %) favored sGK9708 alleles and 119 (32.9 %) favored 0–153 alleles. A total of 36 segregation distortion regions (SDRs) were detected on 20 chromosomes (Additional file 1). The At sub-genome contained 10 SDRs, whereas the Dt sub-genome comprised 26 SDRs. The largest SDR, on c25, consisted of 26 distorted loci. The highest number of SDRs on one chromosome was five, observed on c16 and c25. One chromosome (c21) contained three SDRs, six chromosomes (c4, c13, c14, c18, c20, and c26) comprised two SDRs, and the remaining 11 chromosomes (c2, c5–c9, c15, c17, c19, c23, and c24) harbored only one SDR.

Collinearity between the linkage and physical map

The collinearity of loci between the linkage map and the G. hirsutum physical map for the various chromosomes is presented in Fig. 1. Some loci whose physical locations were not confirmed were excluded from the analysis. The overall locus order on the genetic map was in agreement with the order of the corresponding sequences on the At and Dt sub-genomes of G. hirsutum. In the At sub-genome (c1–c13), 1.76 GB corresponded to 1,635 cM, whereas in the Dt sub-genome (c14–c26), 774 Mb was equivalent to 2,327 cM.

Figure 1: Collinearity analyses between the genetic map of 0–153 and the physical map of G. hirsutum. a Collinearity between c1–c13 of the genetic map (total distance 1,635 cM) and the corresponding sequence of the At sub-genome (1.16 GB) of G. hirsutum. b Collinearity between c14–c26 of the genetic map (total distance 2,327 cM) and the corresponding sequence of the Dt sub-genome (776 Mb) of G. hirsutum

QTL mapping of fiber quality traits

A total of 165 QTLs for the five fiber traits were identified on 24 chromosomes using the composite interval mapping method [42]. Forty-seven QTLs identified in a minimum of 3 and a maximum of 10 environments were declared stable QTLs; 12 of these were described as stable in our previous report [18], whereas 35 were novel. QTL analysis based on the physical map confirmed 69 QTLs, including 43 stable ones. Two chromosomes, c14 and c25, had the most QTLs. No QTL was detected on c1 or c8. Fifty-eight QTLs were identified on At sub-genome chromosomes, whereas 107 QTLs were localized to Dt sub-genome chromosomes. QTL positions, with their observed phenotypic variance (PV) and nearest loci, are listed in Additional file 2 and graphically presented in Additional file 1.
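The bookkeeping used above to call a QTL stable (the same trait with overlapping confidence intervals detected in at least three environments) can be automated; the sketch below is a minimal illustration of that rule with hypothetical interval data, not the actual pipeline used in this study.

```python
# Minimal sketch: flag a QTL as "stable" when detections of the same trait on
# the same chromosome have overlapping confidence intervals in >= 3
# environments. The interval data below are hypothetical.

def overlaps(a: tuple, b: tuple) -> bool:
    """True if two (start_cM, end_cM) confidence intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# (trait, chromosome, CI start, CI end, environment)
detections = [
    ("FL", "c14", 10.0, 18.0, "E1"),
    ("FL", "c14", 12.0, 20.0, "E2"),
    ("FL", "c14", 11.0, 17.5, "E3"),
    ("FL", "c14", 55.0, 60.0, "E4"),  # same trait, non-overlapping interval
]

seed = detections[0]
hits = [d for d in detections
        if d[:2] == seed[:2] and overlaps(seed[2:4], d[2:4])]
envs = {d[4] for d in hits}
print("stable" if len(envs) >= 3 else "not stable", sorted(envs))
```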
Fiber length

In total, 31 QTLs for FL were detected on 11 chromosomes, including c4, c6, c7, c14, c16, c18, c21, c22, c23, c24, and c25 (Additional file 3). The highest number of QTLs on one chromosome was six (c25). Four chromosomes, c6, c16, c22, and c24, harbored only one QTL each. Twelve QTLs for FL were identified in only one environment, and five QTLs were detected in two environments. Fourteen QTLs were identified in three or more environments and declared stable QTLs. Nine stable QTLs for FL, on c4, c7, c16, c23, and c25, carried favorable alleles from parent 0–153, whereas five stable QTLs, on c14, c18, and c21, carried favorable alleles from parent sGK9708. The QTL on c4, qFL-C4-2, was identified in three environments, explaining 5.8–8.1 % of the observed PV. Two QTLs on c7, qFL-C7-1 and qFL-C7-2, were also identified in the four environments described in our previous report [18]. The QTL qFL-C7-1 was stable and identified in three environments, explaining 5.8–12.1 % of the observed PV. Three QTLs on c14, qFL-C14-1, qFL-C14-2, and qFL-C14-3, were identified in 8, 6, and 3 environments, explaining 8.1–13.1 %, 7.1–11.5 %, and 6.3–8.1 % of the observed PVs, respectively. The QTL qFL-C14-2 was also identified in three environments in our previous report [18]. The QTL on c16, qFL-C16-1, was identified in three environments, explaining 5.7–7.5 % of the observed PV. The QTL on c18, qFL-C18-3, was identified in a single environment in our previous report [18] and now in four environments, explaining 5.2–11.0 % of the detected PV. The QTL on c21, qFL-C21-1, was identified in seven environments, explaining 8.7–23.6 % of the observed PV. The QTL on c23, qFL-C23-2, was identified in a single environment in our previous report [18] and now in three environments, explaining 9.0–14.9 % of the observed PV. Five QTLs on c25, qFL-C25-2, qFL-C25-3, qFL-C25-4, qFL-C25-5, and qFL-C25-6, were identified in 4, 6, 5, 5, and 3 environments, explaining 5.2–10 %, 6.8–9.4 %, 6.8–11.8 %, 6.5–10.5 %, and 8.6–10.6 % of the observed PVs, respectively. Two QTLs, qFL-C25-2 and qFL-C25-3, were also previously identified in four environments [18]. In total, 14 of the 31 QTLs, including 11 stable ones, were also identified in the QTL analysis based on the physical map.

Fiber strength

A total of 35 QTLs for FS were identified on 13 chromosomes, including c4, c6, c7, c9, c11, c12, c13, c14, c18, c19, c21, c23, and c25 (Additional file 3). The highest number of QTLs on one chromosome was seven (c25). Five chromosomes, c6, c9, c11, c12, and c19, harbored a single QTL each. Twenty-one QTLs for FS were identified in only one environment, and six QTLs were identified in two environments. Eight QTLs were detected in three or more environments and declared stable QTLs. Six stable QTLs for FS, on c7 and c25, carried favorable alleles from parent 0–153, whereas two stable QTLs on c14 carried favorable alleles from parent sGK9708. The QTL on c7, qFS-C7-1, was identified in 10 environments, explaining 12.2–26.7 % of the observed PV. The QTL qFS-C7-2 was identified in seven environments, explaining 7.9–11.2 % of the observed PV. Both stable QTLs were also previously identified in four environments [18]. The QTL on c14, qFS-C14-3, was identified in eight environments, explaining 4.9–13.7 % of the observed PV. The QTL qFS-C14-4 was identified in four environments, explaining 5.4–8.5 % of the detected PV.
Four QTLs on c25, qFS-C25-3, qFS-C25-4, qFS-C25-5, and qFS-C25-6, were identified in 3, 5, 6, and 7 environments, explaining 7.9–17.0 %, 8.4–15.0 %, 5.4–15.0 %, and 6.4–15.8 % of the observed PVs, respectively. Two QTLs, qFS-C25-3 and qFS-C25-4, were also earlier identified in four environments [18]. All eight stable QTLs were also detected and confirmed through the physical map analysis.

Fiber elongation

For the FE trait, 32 QTLs were identified on 13 chromosomes, including c3, c4, c7, c10, c13, c14, c15, c19, c21, c22, c23, c25, and c26, explaining 3.15–17.9 % of the observed PV (Additional file 3). The highest number of QTLs on one chromosome was eight (c25). Six chromosomes, c10, c13, c21, c22, c23, and c26, harbored a single QTL each. Eighteen QTLs were identified in one environment, whereas four QTLs were identified in two environments. Ten QTLs for FE were detected in three or more environments and described as stable QTLs. Six stable QTLs for FE, on c4, c22, and c25, carried favorable alleles from parent 0–153, whereas four stable QTLs on c14 carried favorable alleles from parent sGK9708. Two QTLs on c4, qFE-C4-2 and qFE-C4-3, were identified in three and five environments, explaining 4.6–8.5 % and 5.5–12.4 % of the observed PVs, respectively. The QTL qFE-C4-2 was also previously identified in four environments [18]. Four QTLs on c14, qFE-C14-1, qFE-C14-2, qFE-C14-3, and qFE-C14-4, were identified in 4, 3, 4, and 3 environments, explaining 8.8–17.9 %, 7.4–14.3 %, 8–15 %, and 9.8–11.8 % of the observed PVs, respectively. The QTL on c22, qFE-C22-1, was identified in three environments, explaining 7.2–13.8 % of the observed PV. Three stable QTLs on c25, qFE-C25-4, qFE-C25-5, and qFE-C25-6, were identified in 3, 4, and 3 environments, explaining 5.6–9.4 %, 5.6–10.1 %, and 6.8–10.4 % of the observed PVs, respectively. The QTL qFE-C25-4 was also earlier identified in three environments [18]. In total, 13 QTLs, including the 10 stable ones, were also identified and confirmed through the physical map-based QTL analysis.

Fiber uniformity

For FU, 32 QTLs were identified on 14 chromosomes, including c2, c4, c5, c6, c7, c10, c12, c13, c14, c16, c18, c19, c23, and c25, explaining 1.8–18.2 % of the observed PV (Additional file 3). The highest number of QTLs on one chromosome was seven (c25). Seven chromosomes, c5, c6, c12, c13, c18, c19, and c23, harbored a single QTL each. Twenty QTLs were identified in one environment, whereas seven QTLs were detected in two environments. Five QTLs for FU were identified as stable QTLs. Three stable QTLs for FU, on c7, c13, and c25, carried favorable alleles from parent 0–153, whereas two stable QTLs on c14 carried favorable alleles from parent sGK9708. The QTL on c7, qFU-C7-1, was identified in six environments, explaining 7.0–18.2 % of the observed PV; it was also previously identified in the F2:3 and RIL populations in two environments [18]. The QTL on c13, qFU-C13-1, was identified in three environments, explaining 4.4–6.5 % of the observed PV; it is the same QTL that we earlier identified in two environments [18]. Two stable QTLs on c14, qFU-C14-2 and qFU-C14-3, were identified in five and four environments, explaining 6.7–14.2 % and 7.6–10.1 % of the observed PVs, respectively. The QTL on c25, qFU-C25-5, was identified in four environments, explaining 6.4–8.0 % of the observed PV; it was also earlier identified in four environments [18]. In total, 13 QTLs, including the five stable ones, were also confirmed through the QTL analysis based on the physical map.
Fiber micronaire

A total of 35 QTLs were identified for FM on 16 chromosomes, including c3, c4, c5, c6, c7, c10, c13, c14, c15, c16, c17, c20, c21, c23, c24, and c25 (Additional file 3). The highest number of QTLs on one chromosome was six (c25). Eight chromosomes, c3, c7, c10, c13, c17, c21, c23, and c24, harbored a single QTL each. Eighteen QTLs were identified in one environment, whereas seven QTLs were identified in two environments. Ten QTLs were identified as stable QTLs. Three stable QTLs, on c3, c4, and c16, carried favorable alleles from parent 0–153, whereas seven stable QTLs, on c7, c14, and c25, carried favorable alleles from parent sGK9708. The QTL on c3, qFM-C3-1, was identified in three environments, explaining 5.3–5.6 % of the observed PV; it was also identified in one environment in our previous report [18]. The QTL on c4, qFM-C4-2, was identified in four environments, explaining 7.7–8.7 % of the observed PV. The QTL on c7, qFM-C7-1, was identified in five environments, explaining 9.6–16.7 % of the observed PV. The QTL on c16, qFM-C16-3, was identified in three environments, explaining 5.2–7.9 % of the observed PV; it was also identified in one environment in our previous report [18]. Two QTLs on c14, qFM-C14-2 and qFM-C14-3, were each identified in four environments, explaining 6.1–9.1 % and 6.5–8.6 % of the observed PVs, respectively. The QTL qFM-C14-3 was also identified in one environment in our previous report [18]. Four QTLs on c25, qFM-C25-1, qFM-C25-2, qFM-C25-4, and qFM-C25-5, were identified in 4, 5, 3, and 4 environments, explaining 7.3–10.3 %, 5.2–9.9 %, 6.3–8.5 %, and 6.2–10.5 % of the observed PVs, respectively. The QTL qFM-C25-4 was also previously identified in four environments [18]. In total, 15 QTLs, including eight stable ones, were also identified and confirmed through the physical map analysis.

QTL clusters and meta-analysis

QTL clustering is a common phenomenon in plants and is also observed in cotton [32, 43, 44]. We identified 30 clusters on 11 chromosomes, including c3, c4, c6, c7, c10, c12, c13, c14, c16, c21, and c25. Most of the stable QTLs fall within these cluster regions. Six clusters containing QTLs for all five fiber traits were identified on c7, c14, and c25. Among these, the cluster on c7, c7-cluster-1, contained five QTLs tightly linked to markers PGML00802 and NAU2627, explaining 5.9–26.7 % of the observed PV. Two QTL clusters on c25, c25-cluster-2 and c25-cluster-4, contained nine and five QTLs, respectively, tightly linked to markers TMK19, BNL3806b, PGML00463b, SWU19198, and NBRI1529, explaining 5.5–17.0 % and 6.2–14.5 % of the observed PVs. Three clusters on c14, c14-cluster-2, c14-cluster-3, and c14-cluster-4, each contained five QTLs tightly linked to markers SWU14535, PGML00989, NAU3393, SWU14507, CSHES150, BNL3099, and COT99, explaining 5.7–15 %, 5.4–11.8 %, and 6.3–10.0 % of the observed PVs, respectively. The details of each cluster are summarized in Additional file 4. In the meta-analysis, a total of 38 meta-cluster regions were identified on 11 chromosomes, including c4, c5, c7, c12, c13, c14, c15, c16, c20, c23, and c25 (Additional file 5). The results showed that some clusters in the 0–153 × sGK9708 genetic map that lay very close together were grouped into the same 20-cM meta-cluster region on the consensus map and formed part of the same meta-cluster (Additional files 6 and 7). Twenty-nine QTLs were projected onto consensus chromosome 4 (Cons.c4), resulting in two QTL meta-clusters.
C4-m-cluster-1 had 14 QTLs, while C4-m-cluster-2 had seven QTLs (Fig. 2). Fifty-three QTLs were projected onto Cons.c7, yielding three QTL clusters: C7-m-cluster-1, C7-m-cluster-2, and C7-m-cluster-3 contained 12, 21, and 6 QTLs, respectively (Fig. 2). Seventy-six QTLs were projected onto Cons.c14, resulting in four meta-clusters: C14-m-cluster-1, C14-m-cluster-2, C14-m-cluster-3, and C14-m-cluster-4 contained 5, 16, 8, and 15 QTLs, respectively (Fig. 2). Sixty-eight QTLs were projected onto Cons.c25, resulting in four QTL clusters: C25-m-cluster-1, C25-m-cluster-2, C25-m-cluster-3, and C25-m-cluster-4 contained 18, 15, 21, and 6 QTLs, respectively. The details of the remaining QTLs are summarized in Additional file 5. The cluster on Cons.c4, C4-m-cluster-1, in the 45–65 cM interval, was situated between markers DPL0196 and NAU3093 (40,406,319–59,166,290 bp). C4-m-cluster-2, in the 73–93 cM interval, was located between markers DPL0451 and CIR218 (60,349,199–62,668,683 bp). The cluster on Cons.c7, C7-m-cluster-1, in the 20–36 cM interval, was localized between markers NAU5303 and NAU3918 (3,545,485–8,213,231 bp). C7-m-cluster-2, in the 40–58 cM interval, was located between markers BNL1597 and NAU2186 (9,280,354–15,534,308 bp). C7-m-cluster-3, in the 60–72 cM interval, was situated between markers NAU1085 and CIR238 (16,350,941–2,178,086 bp). The cluster on Cons.c14, C14-m-cluster-1, in the 0–15 cM interval, was localized between markers SWU14174 and SWU14188 (17,025,534–21,454,750 bp). C14-m-cluster-2, in the 20–36 cM interval, was localized between markers BNL3099 and COT099 (49,640,545–50,515,032 bp). C14-m-cluster-3, in the 38–54 cM interval, lay between markers NAU3393 and PGML0989 (11,844,310–17,029,745 bp), and C14-m-cluster-4, in the 58–78 cM interval, lay between markers DPL0354 and BNL3033 (62,556,024–70,746,352 bp).

Figure 2: Results of the meta-analyses with BioMercator 4.2. QTLs belonging to the same cluster region have the same color. The vertical extent of each QTL represents its confidence interval. Consensus chromosome 4 (Cons.c4) has two clusters, Cons.c7 has three, and Cons.c14 has four

Genetic map

The identification of stable QTLs for agronomically important traits and the construction of a high-resolution map are essential for MAS. Several intraspecific genetic maps have been reported; however, they contain gaps that limit their applicability to generating a high-density genetic map. Major obstacles to the construction of a high-resolution map from intraspecific crosses include the low rate of polymorphism within G. hirsutum and the presence of fixed homozygous genetic blocks [23]. Additional markers are therefore needed to cover these gaps in the genetic map. In the present study, an updated version of the 190-marker genetic map from our previous report [18] is described. We added 586 markers, including 386 novel SWU primers (41 % of the total number of markers). Of these 793 markers, 524 mapped to the Dt sub-genome and 269 mapped to the At sub-genome. In our previous report, chromosomes c4, c7, c13, c14, c18, and c25 were identified as important and rich in QTLs for fiber quality traits [18]. Most of the new markers that we successfully added to the map are located on these chromosomes, enabling us to dissect these QTLs into clusters at a higher resolution and to identify some important stable QTLs for specific superior traits.
In the current map, 20 chromosomes harbored more than one linkage group, reflecting the relatively low rate of polymorphism in intraspecific crosses, observed at 2.9 % in the present study. This relatively low rate of polymorphism suggests that the genetic distance between the two parents is very narrow and underscores the need for a saturated intraspecific map. Our next goal is therefore to develop new SSR and SNP primers to facilitate the construction of a saturated genetic map.

Segregation distortion

Among the 793 mapped primers, 361 showed distortion from the normal Mendelian ratio, which is 1:1 in the case of RILs. Severe distortion was also reported by Sun et al. [18] and commonly occurs in RIL populations developed from an introgressed-line parent. The high ratio of segregation distortion in our population may be attributed to parent 0–153, which is an introgressed line. Tang et al. [33] reported similar results (41.8 %) in their RIL population with the introgressed parental line 7235. Segregation distortion can be influenced by various factors, including genetic factors such as genetic drift [45] and the environment. However, it does not significantly affect the estimation of QTL position and effect [46]. The broad sense heritability estimates of the fiber quality traits were high for FL, FM, FS, and FU, indicating that the QTLs identified in this population are reliable and useful for MAS in cotton breeding.

Distribution of QTLs between the At and Dt sub-genomes

The distribution of QTLs was not uniform between the At and Dt sub-genomes. Among the 165 identified QTLs, 58 (35 % of the total) were identified in the At sub-genome, whereas 107 (65 % of the total) were identified in the Dt sub-genome. Previous comparative meta-analyses conducted by Rong et al. [32], Lacape et al. [43], and Said et al. [36] indicated that a higher number of QTLs for fiber traits in cotton reside on Dt sub-genome chromosomes and that gene expression among homologous pairs is not uniform [44, 47]. Yu et al. [48] also observed 35 % more QTLs in the Dt sub-genome in an interspecific backcross inbred line population. In the present study, a higher number of loci were mapped to the Dt sub-genome. This might be due to the presence of more SSR markers developed from the D genome sequence [26], although this phenomenon has also been previously described by Yu et al. [49] in their BC1 population. However, we also observed that some At sub-genome chromosomes have more loci than their homologous counterparts in the Dt sub-genome. This unequal distribution of loci indicates the presence of active regions with higher recombination frequencies in the upland cotton genome [4]. Similarly, the QTLs on the two homologous sub-genomes were also not homogeneous. Most importantly, homology was observed between the homologous pairs c6–c25 and c7–c16, which harbored QTL clusters, in agreement with the findings of previous reports [23, 43]. Comparison of the tetraploid cotton genome with its ancestors shows that only the A genome (G. arboreum) produces spinnable fibers, whereas the D genome (G. raimondii) lacks this characteristic. After polyploidization, transposable elements tend to be more active in the Dt sub-genome than in the At sub-genome. Furthermore, the Dt sub-genome also has a higher mutation rate than the At sub-genome [28].
These findings might also explain our observation that the Dt sub-genome harbored more QTLs than the At sub-genome. However, additional markers for the At sub-genome may improve the assessment of each sub-genome's contribution to fiber quality traits.

Consistency with previously reported fiber QTLs

It is very difficult to compare QTLs reported in different populations, although this is necessary to fully understand the behavior of complex traits, particularly in changing environments. In the present study, 325 markers were designated as novel SSRs (Additional file 8). Some QTL regions lacked common markers, and we were therefore unable to compare them with the findings of previous reports. However, some stable QTLs with common markers were identified and used in our meta-analyses. We identified 38 cluster regions. When a meta-cluster contained stable QTLs from our RIL population as well as QTLs identified in a recent meta-analysis report [35], it was considered the same cluster. We thereby also confirmed the previous meta-analysis report [35], which in turn allowed us to declare a true stable QTL in such a consensus genomic region. For example, Lacape et al. [12], Shen et al. [5, 6], and Sun et al. [18] reported QTLs for fiber strength and length linked to primers BNL3806, TMK19, and BNL1440 on c25. We identified two clusters tightly linked to these primers: four QTLs for the fiber quality traits FE, FL, FM, and FS were closely linked to primers BNL3806 and TMK19, and four QTLs for the fiber quality traits FE, FL, FS, and FU were tightly linked to BNL1440. These QTLs lay in two meta-cluster regions, C25-cluster-1 (0–20 cM) and C25-cluster-2 (25–45 cM). Our results confirm the findings of Said et al. [36] and establish that these QTLs are indeed stable. We also verified their physical positions in the genome sequence of G. hirsutum; QTL analysis on the basis of the physical map confirmed that these loci are closely linked to the fiber quality traits. However, additional studies confirming the presence of putative genes in this region are warranted. Meta-clusters that harbored QTLs from our RIL population and from the latest QTL studies, but not QTLs identified by Said et al. [36], were regarded as new meta-clusters in the present study. Of the 38 meta-clusters, 31 clusters with 314 QTLs were considered similar to those of the previous report [36]. In addition, we identified seven novel cluster regions with 55 QTLs for fiber quality traits. The cluster on Cons.c4, C4-m-cluster-1, which contained 14 QTLs covering all five fiber quality traits (FE, FS, FL, FU, and FM), was considered novel. Three stable QTLs identified in our RIL population, qFE-C4-3, qFM-C4-2, and qFL-C4-2, and one stable QTL identified by Tang et al. [33], qFS04.1, were also detected in this cluster region. The cluster on c7, C7-m-cluster-3, which contained six QTLs for three fiber traits (FL, FS, and FU), was considered a novel cluster; one stable QTL identified in our RIL population, qFS-C7-2, and one QTL identified by Tang et al. [33], qFU07.1, were also confined to this cluster region. On Cons.c14, C14-m-cluster-2 and C14-m-cluster-3 were each identified as novel clusters. C14-m-cluster-2 contained 16 QTLs, including six stable QTLs for five fiber quality traits identified in our RIL population.
C14-m-cluster-3 contained three stable QTLs that were identified in our RIL population and one stable QTL, qFS14.1, that was identified earlier by Tang et al. [33]. On c15 and c20, C15-m-cluster-4 and C20-m-cluster-3, respectively, were considered novel clusters (Additional file 5). On c25, C25-m-cluster-4, which contained six QTLs for fiber quality traits, was considered a novel cluster. Fine mapping of c25 was also performed and is discussed separately [50]. QTLs detected in different environments are stable QTLs [51] that may be utilized in MAS, and RIL populations are useful for the detection of stable QTLs in multiple environments [52]. We identified 165 QTLs, of which 30 QTL clusters were identified in an intraspecific RIL population in 11 environments. Meta-analysis results revealed that 90 fiber QTLs in the RIL population were in agreement with the findings of previous reports. We identified seven novel cluster regions that contained 55 fiber QTLs, including 33 QTLs from the RIL population. QTL clusters on c4, c7, c14 and c25 were identified as stable across multiple environments and populations. Therefore, these clusters are considered important for cotton breeders and can be utilized in MAS to improve fiber quality.
Mapping population
A segregating population consisting of 196 F6:8 RIL individuals was derived from a cross between two upland cotton strains, 0–153 and sGK9708. Strain sGK9708 is insect resistant with moderate fiber quality and high yield potential, whereas strain 0–153 has excellent fiber quality with low yield. The cross was made in 2001 and recombinant inbred lines were developed as detailed by Sun et al. [18]. From 2007 to 2013, multi-environmental evaluations were conducted in six different locations throughout China with two replications in each environment (Table 1). Sun et al. [18] reported four environments from 2007 to 2008. We added seven more environments at three additional locations to the total phenotypic data set (Table 1). These evaluation procedures were also described earlier by Zhang et al. [50]. Fiber samples were collected from each line to investigate fiber quality traits. Thirty normally opened bolls were collected from each plot. Fiber quality traits were measured using an HVI-100 instrument (Uster Technologies, Switzerland) at the Cotton Fiber Quality Inspection and Testing Center of the Ministry of Agriculture, Anyang, China. The fiber quality traits included FE, FL, FM, FS and FU. The observed phenotypic data were analyzed using the software SPSS 20.0 (SPSS, Chicago, IL, USA). For ANOVA, we used the SAS statistical software (version 8.1; SAS Institute, Cary, NC). Broad-sense heritability was calculated with the following equation: $$ H^2_B = \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_{G \times E}/n_e + \sigma^2_E/(n_e n_r)} $$ where σ²G is the genotypic variance, σ²G×E is the genotype × environment interaction variance, σ²E is the error variance, and n_e and n_r are the numbers of environments and replications, respectively. Young leaves were collected from each line and stored at −80 °C. Genomic DNA from the parents and the 196 RILs was extracted using a modified CTAB method as described by Paterson et al. [53].
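As a numerical illustration of the heritability formula above, the following sketch computes H²B from hypothetical variance components (the component values are assumed for illustration only and are not from this study; the environment and replication counts match the 11 environments and two replications described above):

```python
def broad_sense_heritability(var_g, var_gxe, var_e, n_env, n_rep):
    """Entry-mean broad-sense heritability:
    H2_B = var_G / (var_G + var_GxE / n_e + var_E / (n_e * n_r))."""
    return var_g / (var_g + var_gxe / n_env + var_e / (n_env * n_rep))

# Hypothetical variance components from an ANOVA (illustrative only):
h2 = broad_sense_heritability(var_g=4.0, var_gxe=1.1, var_e=2.2,
                              n_env=11, n_rep=2)
print(f"H2_B = {h2:.2f}")  # -> 0.95, i.e. a high heritability
```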
PCR amplification was performed in a total reaction volume of 10 μL containing 6.15 μL ddH2O, 1 μL 10× buffer (with 1.5 mM Mg²⁺), 0.5 μL dNTPs (10 mM), 0.5 μL of each primer, 0.15 μL of Taq polymerase (500 U) and 1.2 μL of genomic DNA (30 ng/μL). The PCR amplification conditions comprised an initial denaturation at 95 °C for 3 min, followed by 30 cycles of denaturation at 94 °C for 1 min, annealing at 57 °C for 30 s and extension at 72 °C for 60 s, followed by a final elongation at 72 °C for 5 min; samples were then held at 4 °C until analysis. PCR products were electrophoresed on an 8 % non-denaturing polyacrylamide gel, and silver staining was used for visualization of the bands.
SSR analyses
A total of 28,891 primer pairs, including 12,560 SWU primers (based on the D genome sequence), were used to detect polymorphisms between the two parents. In total, 851 polymorphic primers were selected and used to genotype the 196 recombinant inbred lines. All loci were named according to their respective primer names. Where a single primer pair generated multiple loci that showed a segregation pattern different from that of the main band, a suffix of a/b/c was appended to the primer name to differentiate the loci by increasing molecular size. The details of the primers used in the present study are listed in Additional file 7. The SWU primers were synthesized by the Beijing Genomics Institute (Beijing, China), whereas all other primers were synthesized by Invitrogen Co. Ltd. (Shanghai, China) and Bio Asia (Beijing, China).
Construction of the genetic map and QTL analyses
A linkage map was constructed using JoinMap 4.0 [54] with a logarithm of odds (LOD) threshold of >7 and a maximal distance of 50 cM. Recombination frequencies were converted to map distances using the Kosambi map function [55]. For some groups that contained mixed markers belonging to different chromosomes, a higher LOD score of >9 was used to separate them into small groups. Linkage groups were assigned to their respective chromosomes based on previous reports [18–20, 33, 56, 57] and on marker mapping information from the CottonGen database (http://www.cottongen.org/). Small groups that mapped to the same chromosome were recalculated to combine them into one group; a minimum LOD score of 6 was used to combine these groups. In the case of c20 and c23, an LOD score of 5 was used to combine small linkage groups into one. The G. hirsutum FASTA sequence was downloaded from http://www.cottongen.org/ and used to check the co-linearity of loci between the linkage map and the G. hirsutum physical map.
QTL analyses and meta-analysis
Windows QTL Cartographer 2.5 [57] was used for QTL mapping. The composite interval mapping method [42] was applied at a walking speed of 1 cM with a 1000-permutation test. QTLs for the same trait across different environments were declared the same when their confidence intervals overlapped. A QTL identified in at least three environments was declared stable. We also used a physical map in which loci were arranged according to their positions on the G. hirsutum genome, and QTL analysis was performed using the composite interval mapping method as described above. Meta-analysis was performed with BioMercator 4.2 [34] as described elsewhere [36]. A previous meta-QTL analysis established a QTL database [35] consisting of 2,274 QTLs, which included 437 highly consistent QTLs for fiber quality traits from 58 QTL reports on upland cotton [35]. We downloaded its QTL information, including QTL names and confidence intervals (CIs), from www.cottonqtldb.org.
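For illustration, the Kosambi conversion mentioned above can be written as a small function (a minimal sketch; JoinMap performs this conversion internally):

```python
import math

def kosambi_cm(r):
    """Convert a recombination frequency r (0 <= r < 0.5) into a map
    distance in centimorgans with the Kosambi mapping function:
    d = 25 * ln((1 + 2r) / (1 - 2r)) cM."""
    if not 0 <= r < 0.5:
        raise ValueError("recombination frequency must lie in [0, 0.5)")
    return 25.0 * math.log((1 + 2 * r) / (1 - 2 * r))

for r in (0.01, 0.10, 0.25, 0.40):
    print(f"r = {r:.2f}  ->  {kosambi_cm(r):6.2f} cM")
# r = 0.10 gives 10.14 cM; distances grow faster than r as linkage weakens.
```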
We used the high-density consensus map [58] as a reference to project our QTLs and performed chromosome-wise meta-analyses. A total of 850 fiber QTLs from six QTL reports were thereby assembled, comprising 165 fiber QTLs from our RIL population, 50 fiber QTLs from the F2, F2:3 and RIL populations of the same parents [18], and 635 fiber QTLs from the previous literature [33, 35, 37, 38]. For the meta-analyses, two separate input files were prepared: a map file and a QTL file. The map file contained the distances between markers on each chromosome, and the QTL file contained 12 columns, where each row represented a single QTL in a given environment, i.e., QTL name, trait name, trait ontology, experiment place, year, chromosome name, linkage group name, LOD score, observed PV value (R2), most likely position of the QTL, CI start position and CI end position. First, both files were loaded into the software and checked for map connectivity. Then the QTLs were projected onto the consensus map and meta-analyses were performed for each trait. Four models were thus generated, each with an Akaike information criterion (AIC) value. The model with the lowest AIC value was selected and used for the identification of the mQTL position, whereas QTL clusters were determined manually. QTLs within a region of 20 cM on the consensus map were considered part of the same cluster, as defined earlier by Said et al. [36].
Abbreviations
AIC: Akaike information criterion; FE: fiber elongation; FL: fiber length; FM: fiber micronaire; FS: fiber strength; FU: fiber uniformity; H²B: broad-sense heritability; LOD: logarithm of odds; MAS: marker-assisted selection; PV: phenotypic variance; QTL: quantitative trait loci; SDRs: segregation distortion regions; SSR: simple sequence repeats
References
Alkuddsi YA, Rao MG, Patil SS, Joshi M, Gowda TH. Heterosis studies and per se performance of intra hirsutum hybrids (G. hirsutum x G. hirsutum) for kapas yield and its components in cotton. Cotton Genomics Genet. 2013;4(6):73–92. doi:10.5376/cgg.2013.04.0006. Chen ZJ, Scheffler BE, Dennis E, Triplett BA, Zhang T, Guo W, Chen X, Stelly DM, Rabinowicz PD, Town CD et al. Toward sequencing cotton (Gossypium) genomes. Plant Physiol. 2007;145(4):1303–10. doi:10.1104/pp.107.107672. Page JT, Huynh MD, Liechty ZS, Grupp K, Stelly D, Hulse AM, Ashrafi H, Van Deynze A, Wendel JF, Udall JA. Insights into the evolution of cotton diploids and polyploids from whole-genome re-sequencing. G3 (Bethesda). 2013;3:1809–18. doi:10.1534/g3.113.007229. Ulloa M, Saha S, Jenkins JN, Meredith WR, McCarty JC, Stelly DM. Chromosomal assignment of RFLP linkage groups harboring important QTLs on an intraspecific cotton (Gossypium hirsutum L.) joinmap. J Hered. 2005;96(2):132–44. Shen XL, Guo WZ, Lu QX, Zhu XF, Yuan YL, Zhang TZ. Genetic mapping of quantitative trait loci for fiber quality and yield trait by RIL approach in Upland cotton. Euphytica. 2007;155(3):371–80. Shen XL, Guo WZ, Zhu XF, Yuan YL, Yu JZ, Kohel RJ, Zhang TZ. Molecular mapping of QTLs for fiber qualities in three diverse lines in Upland cotton using SSR markers. Mol Breeding. 2005;15(2):169–81. Shi Y, Li W, Li A, Ge R, Zhang B, Li J, Liu G, Liu A, Shang H, Gong J. Constructing a high-density linkage map for Gossypium hirsutum x Gossypium barbadense and identifying QTLs for lint percentage. J Integr Plant Biol. 2015;57(5):450–67. doi:10.1111/jipb.12288. Reinisch AJ, Dong JM, Brubaker CL, Stelly DM, Wendel JF, Paterson AH. A detailed RFLP map of cotton, Gossypium hirsutum x Gossypium barbadense: chromosome organization and evolution in a disomic polyploid genome. Genetics. 1994;138(3):829–47.
Jiang C, Wright RJ, El-Zik KM, Paterson AH. Polyploid formation created unique avenues for response to selection in Gossypium (cotton). Proc Natl Acad Sci U S A. 1998;95(8):4419–24. Kohel RJ, Yu J, Park YH, Lazo GR. Molecular mapping and characterization of traits controlling fiber quality in cotton. Euphytica. 2001;121(2):163–72. Mei M, Syed NH, Gao W, Thaxton PM, Smith CW, Stelly DM, Chen ZJ. Genetic mapping and QTL analysis of fiber-related traits in cotton (Gossypium). Theor Appl Genet. 2004;108(2):280–91. Lacape JM, Nguyen TB, Courtois B, Belot JL, Giband M, Gourlot JP, Gawryziak G, Roques S, Hau B. QTL analysis of cotton fiber quality using multiple Gossypium hirsutum x Gossypium barbadense backcross generations. Crop Sci. 2005;45(1):123–40. Frelichowski JE, Palmer MB, Main D, Tomkins JP, Cantrell RG, Stelly DM, Yu J, Kohel RJ, Ulloa M. Cotton genome mapping with new microsatellites from Acala 'Maxxa' BAC-ends. Mol Genet Genomics. 2006;275(5):479–91. Ulloa M, Wang C, Hutmacher RB, Wright SD, Davis RM, Saski CA, Roberts PA. Mapping Fusarium wilt race 1 resistance genes in cotton by inheritance, QTL and sequencing composition. Mol Genet Genomics. 2011;286(1):21–36. Shappley ZW, Jenkins JN, Meredith WR, McCarty JC. An RFLP linkage map of Upland cotton, Gossypium hirsutum L. Theor Appl Genet. 1998;97(5–6):756–61. Zhang ZS, Xiao YH, Luo M, Li XB, Luo XY, Hou L, Li DM, Pei Y. Construction of a genetic linkage map and QTL analysis of fiber-related traits in upland cotton (Gossypium hirsutum L.). Euphytica. 2005;144(1–2):91–9. Wang BH, Guo WZ, Zhu XF, Wu YT, Huang NT, Zhang TZ. QTL mapping of fiber quality in an elite hybrid derived-RIL population of upland cotton. Euphytica. 2006;152(3):367–78. Sun FD, Zhang JH, Wang SF, Gong WK, Shi YZ, Liu AY, Li JW, Gong JW, Shang HH, Yuan YL. QTL mapping for fiber quality traits across multiple generations and environments in upland cotton. Mol Breeding. 2012;30(1):569–82. Zhang K, Zhang J, Ma J, Tang SY, Liu DJ, Teng ZH, Liu DX, Zhang ZS. Genetic mapping and quantitative trait locus analysis of fiber quality traits using a three-parent composite population in upland cotton (Gossypium hirsutum L.). Mol Breeding. 2012;29(2):335–48. Liang QZ, Hu C, Hua H, Li ZH, Hua JP. Construction of a linkage map and QTL mapping for fiber quality traits in upland cotton (Gossypium hirsutum L.). Chinese Sci Bull. 2013;58(26):3233–43. Lin ZX, Zhang YX, Zhang XL, Guo XP. A high-density integrative linkage map for Gossypium hirsutum. Euphytica. 2009;166(1):35–45. Shang L, Liang Q, Wang Y, Wang X, Wang K, Abduweli A, Ma L, Cai S, Hua J. Identification of stable QTLs controlling fiber traits properties in multi-environment using recombinant inbred lines in Upland cotton (Gossypium hirsutum L.). Euphytica. 2015;205(3):877–88. Fang DD, Jenkins JN, Deng DD, McCarty JC, Li P, Wu J. Quantitative trait loci analysis of fiber quality traits using a random-mated recombinant inbred population in Upland cotton (Gossypium hirsutum L.). BMC Genomics. 2014;15:397. doi:10.1186/1471-2164-15-397. Qin H, Guo W, Zhang YM, Zhang T. QTL mapping of yield and fiber traits based on a four-way cross population in Gossypium hirsutum L. Theor Appl Genet. 2008;117(6):883–94. Wang KB, Wang ZW, Li FG, Ye WW, Wang JY, Song GL, Yue Z, Cong L, Shang HH, Zhu SL et al. The draft genome of a diploid cotton Gossypium raimondii. Nat Genet. 2012;44(10):1098–103. doi:10.1038/ng.2371. Paterson AH, Wendel JF, Gundlach H, Guo H, Jenkins J, Jin DC, Llewellyn D, Showmaker KC, Shu SQ, Udall J et al. 
Repeated polyploidization of Gossypium genomes and the evolution of spinnable cotton fibres. Nature. 2012;492(7429):423–7. doi:10.1038/nature11798. Li F, Fan G, Wang K, Sun F, Yuan Y, Song G, Li Q, Ma Z, Lu C, Zou C et al. Genome sequence of the cultivated cotton Gossypium arboreum. Nat Genet. 2014;46:567–72. doi:10.1038/ng.2987. Li F, Fan G, Lu C, Xiao G, Zou C, Kohel RJ, Ma Z, Shang H, Ma X, Wu J et al. Genome sequence of cultivated Upland cotton (Gossypium hirsutum TM-1) provides insights into genome evolution. Nat Biotech. 2015;33:524–30. doi:10.1038/nbt.3208. Zhang T, Hu Y, Jiang W, Fang L, Guan X, Chen J, Zhang J, Saski CA, Scheffler BE, Stelly DM et al. Sequencing of allotetraploid cotton (Gossypium hirsutum L. acc. TM-1) provides a resource for fiber improvement. Nat Biotech. 2015;33:531–7. doi:10.1038/nbt.3207. Jiang C, Wright RJ, Woo SS, DelMonte TA, Paterson AH. QTL analysis of leaf morphology in tetraploid Gossypium (cotton). Theor Appl Genet. 2000;100(3–4):409–18. Paterson AH, Saranga Y, Menz M, Jiang CX, Wright RJ. QTL analysis of genotype x environment interactions affecting cotton fiber quality. Theor Appl Genet. 2003;106(3):384–96. doi:10.1007/s00122-002-1025-y. Rong J, Feltus EA, Waghmare VN, Pierce GJ, Chee PW, Draye X, Saranga Y, Wright RJ, Wilkins TA, May OL et al. Meta-analysis of polyploid cotton QTL shows unequal contributions of subgenomes to a complex network of genes and gene clusters implicated in lint fiber development. Genetics. 2007;176(4):2577–88. Tang SY, Teng ZH, Zhai TF, Fang XM, Liu F, Liu DJ, Zhang J, Liu DX, Wang SF, Zhang K et al. Construction of genetic map and QTL analysis of fiber quality traits for Upland cotton (Gossypium hirsutum L.). Euphytica. 2015;201(2):195–213. Arcade A, Labourdette A, Falque M, Mangin B, Chardon F, Charcosset A, Joets J. BioMercator: integrating genetic maps and QTL towards discovery of candidate genes. Bioinformatics. 2004;20(14):2324–6. Said JI, Knapka JA, Song M, Zhang J. Cotton QTLdb: a cotton QTL database for QTL analysis, visualization, and comparison between Gossypium hirsutum and G. hirsutum × G. barbadense populations. Mol Genet Genomics. 2015;290(4):1615–25. Said JI, Song M, Wang H, Lin Z, Zhang X, Fang DD, Zhang J. A comparative meta-analysis of QTL between intraspecific Gossypium hirsutum and interspecific G. hirsutum × G. barbadense populations. Mol Genet Genomics. 2014;290(3):1003–25. Tan ZY, Fang XM, Tang SY, Zhang J, Liu DJ, Teng ZH, Li L, Ni HJ, Zheng FM, Liu DX et al. Genetic map and QTL controlling fiber quality traits in upland cotton (Gossypium hirsutum L.). Euphytica. 2015;203(3):615–28. doi:10.1007/s10681-014-1288-9. Shao QS, Zhang FJ, Tang SY, Liu Y, Fang XM, Liu DX, Liu DJ, Zhang J, Teng ZH, Paterson AH et al. Identifying QTL for fiber quality traits with three upland cotton (Gossypium hirsutum L.) populations. Euphytica. 2014;198:43–58. Hallauer AR, Carena MJ, Miranda Filho JB. Quantitative Genetics in Maize Breeding. New York: Springer; 2010. p. 664. Rong JK, Abbey C, Bowers JE, Brubaker CL, Chang C, Chee PW, Delmonte TA, Ding XL, Garza JJ, Marler BS et al. A 3347-locus genetic recombination map of sequence-tagged sites reveals features of genome organization, transmission and evolution of cotton (Gossypium). Genetics. 2004;166(1):389–417. Xian-Liang S, Xue-Zhen S, Tian-Zhen Z. Segregation distortion and its effect on genetic mapping in plants. Chin J Agric Biotechnol. 2006;3(03):163–9. Zeng ZB. Precision mapping of quantitative trait loci. Genetics. 1994;136(4):1457–68. 
Lacape JM, Llewellyn D, Jacobs J, Arioli T, Becker D, Calhoun S, Al-Ghazi Y, Liu S, Palai O, Georges S et al. Meta-analysis of cotton fiber quality QTLs across diverse environments in a Gossypium hirsutum x G. barbadense RIL population. BMC Plant Biol. 2010;10:132. Said JI, Lin Z, Zhang X, Song M, Zhang J. A comprehensive meta QTL analysis for fiber quality, yield, yield related and morphological traits, drought tolerance, and disease resistance in tetraploid cotton. BMC Genomics. 2013;14:776. Zhang ZS, Hu MC, Zhang J, Liu DJ, Zheng J, Zhang K, Wang W, Wan Q. Construction of a comprehensive PCR-based marker linkage map and QTL mapping for fiber quality traits in upland cotton (Gossypium hirsutum L.). Mol Breeding. 2009;24(1):49–61. Zhang L, Wang S, Li H, Deng Q, Zheng A, Li S, Li P, Li Z, Wang J. Effects of missing marker and segregation distortion on QTL mapping in F2 populations. Theor Appl Genet. 2010;121(6):1071–82. Flagel L, Udall J, Nettleton D, Wendel J. Duplicate gene expression in allopolyploid Gossypium reveals two temporally distinct phases of expression evolution. BMC Biol. 2008;6:16. Yu J, Zhang K, Li S, Yu S, Zhai H, Wu M, Li X, Fan S, Song M, Yang D et al. Mapping quantitative trait loci for lint yield and fiber quality across environments in a Gossypium hirsutum x Gossypium barbadense backcross inbred line population. Theor Appl Genet. 2013;126(1):275–87. Yu Y, Yuan D, Liang S, Li X, Wang X, Lin Z, Zhang X. Genome structure of cotton revealed by a genome-wide SSR genetic map constructed from a BC1 population between Gossypium hirsutum and G. barbadense. BMC Genomics. 2011;12(1):1–14. Zhang Z, Li J, Muhammad J, Cai J, Jia F, Shi Y, Gong J, Shang H, Liu A, Chen T, Ge Q, Palanga KK, Lu Q, Deng X, Tan Y, Li W, Sun L, Gong W, Yuan Y. High resolution consensus mapping of quantitative trait loci for fiber strength, length and micronaire on chromosome 25 of the upland cotton (Gossypium hirsutum L.). PLoS ONE. 2015;10(8):e0135430. Su CF, Lu WG, Zhao TJ, Gai JY. Verification and fine-mapping of QTLs conferring days to flowering in soybean using residual heterozygous lines. Chinese Sci Bull. 2010;55(6):499–508. Ning ZY, Chen H, Mei HX, Zhang TZ. Molecular tagging of QTLs for fiber quality and yield in the upland cotton cultivar Acala-Prema. Euphytica. 2014;195(1):143–56. Paterson AH, Brubaker CL, Wendel JF. A rapid method for extraction of cotton (Gossypium spp.) genomic DNA suitable for RFLP or PCR analysis. Plant Mol Biol Rep. 1993;11(2):122–7. van Ooijen JW. JoinMap 4.0: Software for the Calculation of Genetic Linkage Maps in Experimental Populations. Wageningen: Kyazma B.V.; 2006. Kosambi DD. The estimation of map distance from recombination values. Ann Eugen. 1944;12:172–5. Liu D, Liu F, Shan X, Zhang J, Tang S, Fang X, Liu X, Wang W, Tan Z, Teng Z et al. Construction of a high-density genetic map and lint percentage and cottonseed nutrient trait QTL identification in upland cotton (Gossypium hirsutum L.). Mol Genet Genomics. 2015;290(5):1683–700. Wang S, Basten CJ, Zeng ZB. Windows QTL Cartographer 2.5. Raleigh: Department of Statistics, North Carolina State University; 2012. http://statgen.ncsu.edu/qtlcart/WQTLCart.htm. Blenda A, Fang DD, Rami JF, Garsmeur O, Luo F, Lacape JM. A high density consensus genetic map of tetraploid cotton that integrates multiple component maps through molecular marker redundancy check. PLoS ONE. 2012;7(9):e45739.
The National Natural Science Foundation of China (31371668, 31471538), the National High Technology Research and Development Program of China (2012AA101108), the National Agricultural Science and Technology Innovation Project for CAAS, and the Henan Province Foundation with Cutting-edge Technology Research Projects (142300413202) supported this study. The funding agencies played no role in the study design, data collection and analysis, decision to publish, or the preparation of the manuscript.
State Key Laboratory of Cotton Biology, Key Laboratory of Biological and Genetic Breeding of Cotton, The Ministry of Agriculture, Institute of Cotton Research, Chinese Academy of Agricultural Sciences, Anyang, 455000, Henan, China: Muhammad Jamshed, Fei Jia, Juwu Gong, Koffi Kibalou Palanga, Yuzhen Shi, Junwen Li, Haihong Shang, Aiying Liu, Tingting Chen, Zhen Zhang, Juan Cai, Qun Ge, Xiaoying Deng, Yunna Tan, Harun or Rashid, Zareen Sarfraz, Wankui Gong & Youlu Yuan. College of Agronomy, Xinjiang Agricultural University, Key Laboratory of Agro-Biotechnology, Urumqi, 830052, Xinjiang, China: Juwu Gong. College of Bioscience and Biotechnology, Hunan Agricultural University, Changsha, 410128, Hunan, China: Zhi Liu. Anyang College of Technology, Anyang, 455000, Henan, China: Quanwei Lu. Department of Materials Science and Engineering, College of Engineering, Peking University, Beijing, 100871, China: Murtaza Hassan.
Correspondence to Wankui Gong or Youlu Yuan.
YLY and WKG conceived and designed the experiments. MJ performed the D genome-based SSR analysis. JF, JG and KKP performed the analysis using the remaining SSR primers. MJ drafted the manuscript. YLY and WKG contributed to the final editing of the manuscript. YZS, JWG, HHS, AYL, TTC, ZZ, JC, QG, ZL, QWL, XYD, YNT, HR, ZS, and MH collected field data from six experimental areas during different years. All authors contributed to the interpretation of the results and approved the final manuscript.
Additional files: Genetic linkage map of an intraspecific RIL population (ZIP 7480 kb). Fiber QTLs identified in an intraspecific RIL population and their positions (XLSX 39 kb). Position and marker details of identified QTLs for fiber quality traits in an intraspecific RIL (XLSX 37 kb). Details of QTL clusters on the 0–153 genetic map of a RIL (XLSX 29 kb). Summary of meta-clusters on a consensus map (XLSX 10 kb). Identified QTLs for fiber traits in a RIL and comparison with the findings of previous reports (XLSX 46 kb). Meta-analysis results of the remaining chromosomes (DOCX 672 kb). Details of the primers used in the study (XLSX 10 kb).
Jamshed, M., Jia, F., Gong, J. et al. Identification of stable quantitative trait loci (QTLs) for fiber quality traits across multiple environments in Gossypium hirsutum recombinant inbred line population.
BMC Genomics 17, 197 (2016). doi:10.1186/s12864-016-2560-2. Received: 30 August 2015. Accepted: 29 February 2016.
Keywords: Recombinant inbred line; Upland cotton; Multiple environments; SSR markers; Meta-QTL analyses; Stable QTLs
Dynamic graph cut based segmentation of mammogram
S. Pitchumani Angayarkanni, Nadira Banu Kamal & Ranjit Jeba Thangaiya
SpringerPlus volume 4, Article number: 591 (2015)
This work presents a dynamic graph cut based Otsu's method to segment the masses in mammogram images. A major concern that threatens human life is cancer. Breast cancer is the most common type of disease among women in India and abroad. Breast cancer increases the mortality rate in India, especially among women, since it is considered to be the second leading cause of cancer death. Mammography is the best method for diagnosing early-stage cancer. Computer-aided diagnosis, however, often lacks accuracy and is time consuming. The main approach that makes the detection of cancerous masses accurate is the segmentation process. This paper presents a dynamic graph cut based approach for the effective segmentation of the region of interest (ROI). The sensitivity, specificity, positive prediction value and negative prediction value of the proposed algorithm are determined and compared with those of existing algorithms. Both qualitative and quantitative methods are used to assess the accuracy of the proposed system. The sensitivity, specificity, positive prediction value and negative prediction value of the proposed algorithm amount to 98.88, 98.89, 93 and 97.5 %, respectively, which is very high compared with the existing algorithms.
Statistics from the population-based cancer registry clearly show that the incidence of breast cancer is rapidly rising, amounting to a significant percentage of all cancers in women. Breast cancer is the commonest cancer in urban areas in India and accounts for about 25–33 % of all cancers in women. Over 50 % of breast cancer patients in India present in stages 3 and 4 and therefore face poor survival prospects (Hassanien and Ali 2011). The survival rate can be increased only through early diagnosis. Image processing techniques together with data mining are used for the extraction and analysis of the ROI. Tumors can be classified into three categories: normal, benign and malignant. A normal tumor is a mass of tissue which exists at the expense of healthy tissue. A malignant tumor has no distinct border; such tumors tend to grow rapidly, increasing the pressure within the breast cells, and can spread beyond the point from which they originate. Thus they grow faster than benign tumors and cause serious health problems if left unnoticed. Benign tumors are composed of harmless cells and have clearly defined borders. They can be completely removed and are unlikely to recur. Even after appropriate segmentation, classifying tumors in mammogram images as malignant, benign or normal remains a difficult task, owing to the complexity and variation of tumor tissue characteristics such as shape, size, grey-level intensity and location. Effective segmentation techniques result in accurate classification of such cancerous masses. A database of 1,528 mammograms, originating from the Mammographic Image Analysis Society (MIAS) database, the Digital Database for Screening Mammography (DDSM, University of South Florida), the LLNL/UCSF database (Lawrence Livermore National Laboratories (LLNL), University of California at San Francisco) and the Nijmegen digital mammogram database, was used for the study.
Image preprocessing and enhancement
The main objective behind the preprocessing step is to enlarge the intensity difference between objects and background.
Preprocessing improves the suitability of an image for visual inspection. The proposed approach improves the image data by suppressing unwanted distortions and enhancing the important image features. This produces reliable representations of breast tissue structures. The fuzzy transformation function for computing the fuzzy plane value P is defined by the following parameters:
α = min, γ = max/2, β1 = (α + γ)/2, β2 = (max + γ)/2
The histogram equalization of the gray levels in the original image can thus be characterized using five parameters: (α, β1, γ, β2, max). The aim is to decrease the gray levels below β1 and above β2. Intensity levels between β1 and γ, and between γ and β2, are stretched in opposite directions towards the mean γ (Fig. 1).
Fig. 1: Histogram of the input image.
Step 1: Fuzzification. The following fuzzy rules are used for contrast enhancement, where ui = f(x, y) is the ith pixel intensity:
Rule-1: If α ≤ ui < β1 then P = 2((ui − α)/(γ − α))²
Rule-2: If β1 ≤ ui < γ then P = 1 − 2((ui − γ)/(γ − α))²
Rule-3: If γ ≤ ui < β2 then P = 1 − 2((ui − γ)/(max − γ))²
Rule-4: If β2 ≤ ui < max then P = 2((ui − γ)/(max − γ))²
Step 2: Fuzzy modification.
Step 3: Defuzzification.
The quality of the preprocessed image is checked with parameters such as the peak signal-to-noise ratio (PSNR), noise standard deviation (NSD), mean square error (MSE) and equivalent number of looks (ENL).
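A minimal sketch of the fuzzification step, implementing the four rules exactly as stated above (the parameter choices α = min, γ = max/2, β1 = (α + γ)/2 and β2 = (max + γ)/2 follow the text; applying the function pixel-wise to a grayscale image is assumed):

```python
def fuzzy_membership(u, lo, hi):
    """Fuzzy plane value P for a pixel intensity u in [lo, hi]."""
    alpha, gamma = lo, hi / 2.0
    beta1, beta2 = (alpha + gamma) / 2.0, (hi + gamma) / 2.0
    if alpha <= u < beta1:                        # Rule-1
        return 2 * ((u - alpha) / (gamma - alpha)) ** 2
    if beta1 <= u < gamma:                        # Rule-2
        return 1 - 2 * ((u - gamma) / (gamma - alpha)) ** 2
    if gamma <= u < beta2:                        # Rule-3
        return 1 - 2 * ((u - gamma) / (hi - gamma)) ** 2
    return 2 * ((u - gamma) / (hi - gamma)) ** 2  # Rule-4

# For an 8-bit image the pieces join continuously at beta1, gamma, beta2;
# note that Rule-4 as stated in the text rises above 1 near max.
for u in (0, 64, 127, 128, 191, 254):
    print(u, round(fuzzy_membership(u, 0, 255), 3))
```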
Image segmentation and ROI extraction
The region of interest, i.e. the tumor region, is segmented using the graph cut method. The main purpose of using this method for segmentation is that it segments the mammogram into different mammographic densities. It is useful for risk assessment and for the quantitative evaluation of density changes. Apart from this advantage, it produces a contour (closed region) or a convex hull which is used for analyzing the morphological and novel features of the segmented region. This technique results in an efficient formulation of attributes which helps in classifying the ROI as benign, malignant or normal. Graph cuts have been used in recent years for interactive image segmentation (Hassanien and Badr 2003). The core idea of graph cuts is to map an image onto a network graph, construct an energy function on the labeling, and then minimize this energy with dynamic optimization techniques. This study proposes a new segmentation method using iterated graph cuts based on multi-scale smoothing. The multi-scale method can segment mammographic images in a stepwise process from global to local segmentation by iterating graph cuts. The modified graph cut approach of K. Santle Camilus (Camilus et al. 2010) is implemented in this work. The steps involved in graph cut segmentation are:
1. Form a graph.
2. Sort the graph edges.
3. Merge regions based on a threshold.
From the mammogram image a graph G = (V, E) is constructed such that V represents the pixel values of the 3 × 3 image and E represents the edges defined between neighboring pixels. The weight of an edge W(Vi, Vj) is a measure of the dissimilarity between the pixels Vi and Vj, and is taken to be the Euclidean distance between the two pixels Vi and Vj (Ertas et al. 2001; Shah et al. 2011; Masek et al. 2001; Thamaraichelvi and Yamuna 2013; Jayadevappa et al. 2009; Benfield et al. 2007; Elnakib et al. 2011):
$$ W(V_i, V_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}, \quad V_i = (x_i, y_i), \; V_j = (x_j, y_j) $$
The edges are sorted in ascending order of their weights such that W(e_1) ≤ W(e_2) ≤ ⋯ ≤ W(e_n). Each edge e_i is then picked in sorted order; an edge between two groups of pixels determines whether the two groups are merged into a single group. Initially, each vertex is considered a group, and two groups that satisfy the merge criterion are merged. In this way, the different groups of pixels representing different regions or objects are obtained. Determining the merge criterion: when the pixels of one group have intensity values similar to those of another group, the calculated IRM between these groups should intuitively be small. Whether the IRM is small enough to merge the two regions is tested by comparing it with a dynamic threshold. Hence, the merge criterion for two regions R_1 and R_2 is defined as:
$$ \text{Merge}(R_1, R_2) \quad \text{if } \; \text{IRM}(R_1, R_2) \le DT(R_1, R_2) $$
Figure 2 illustrates the weight calculation applied to the input image, Fig. 3 shows how the graph cut method is applied to a 3 × 3 image, Fig. 4 shows the stage-by-stage output of the proposed method, and the segmented region is shown in Fig. 5.
Fig. 2: Weight calculation for the 3 × 3 matrix.
Fig. 3: Graph cut approach.
Fig. 4: a Input image, b ROI, c segmented boundaries, d edge, e pectoral muscle identification (red), f ground truth (white).
Fig. 5: Segmented image.
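The three steps above can be condensed into a short sketch. This is a simplified illustration, not the authors' implementation: the IRM measure and the dynamic threshold DT of Camilus et al. are replaced here by a plain intensity-difference weight and a fixed threshold tau.

```python
import numpy as np

def segment_by_merging(img, tau):
    """Kruskal-style region merging on a 4-neighbour pixel graph:
    build edges, sort them by weight, and merge the two regions an
    edge connects whenever the weight is at most tau."""
    h, w = img.shape
    parent = list(range(h * w))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = []                        # step 1: form the graph
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((abs(int(img[y, x]) - int(img[y, x + 1])), i, i + 1))
            if y + 1 < h:
                edges.append((abs(int(img[y, x]) - int(img[y + 1, x])), i, i + w))

    for wgt, a, b in sorted(edges):   # step 2: sort the edges
        ra, rb = find(a), find(b)
        if ra != rb and wgt <= tau:   # step 3: merge criterion
            parent[rb] = ra

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

# toy 3 x 3 image: the bright block separates from the dark background
img = np.array([[10, 12, 11], [11, 90, 95], [12, 93, 97]], dtype=np.uint8)
print(segment_by_merging(img, tau=10))
```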
Performance measures of the proposed mathematical approach were estimated at each stage. Table 1 shows a high PSNR value, which indicates that the image is highly enhanced (Camilus et al. 2010).
Table 1: PSNR tabulation.
Table 2 below presents the comparison between the two approaches using quantitative measures to determine the overall classification accuracy (Zhang et al. 2012; Annamalai et al. 2009; Ramaswamy and Rose 2009; Peng et al. 2010; Artan et al. 2012).
Table 2: Segmentation technique comparison.
Segmentation accuracy is depicted in Table 3.
Table 3: Segmentation accuracy metrics.
Table 4 shows that the proposed method is computationally efficient compared with the other existing techniques.
Table 4: Computational efficiency of the proposed method.
Metrics for evaluating the segmentation technique include the following. The region-based criteria mutually compare the machine-segmented regions with the correct ground truth regions. Let A(I, J) denote the machine-segmented region and B(I, J) the ground truth region; region overlap acceptance is controlled by the threshold k = 0.75. The local refinement error is then
$$ E(A, B, k) = \frac{\lvert R(A, k) \setminus R(B, k) \rvert}{\lvert R(A, k) \rvert} $$
Edgel matching: overlay the original with the segmented image and compute the correspondence via min-cost assignment on a bipartite graph. The F-measure value is shown in Fig. 6.
Fig. 6: F-measure.
The proposed mathematical approach yields a high level of accuracy within a minimum period of time, which confirms the efficiency of the algorithm. The GUI-based CAD system was developed using Scilab and R2. The segmentation speed amounts to 6 ms using graph cut based Otsu's thresholding. The main goal of classifying the tumors as benign, malignant or normal is achieved with high accuracy compared to other techniques because of the accurate segmentation technique employed. The proposed technique is computationally efficient, as specified in the tabulation above. Furthermore, the asymptotic complexity of the algorithm is O(log n).
Angayarkanni SP, Kamal NB, Thavavel V (2012) Automatic detection and classification of cancerous masses in mammogram. In: Third international conference on computing communication and networking technologies Annamalai M, Guo D, Susan M, Steiner J (2009) An oracle white paper: oracle database 11 g DICOM medical image support Artan Y, Haider MA, Langer DL, van der Kwast TH, Evans AJ, Yang Y et al (2012) A boosted Bayesian multiresolution classifier for prostate cancer detection from digitized needle biopsies. IEEE Trans Biomed Eng 59:1205–1218 Benfield MC, Grosjean P, Culverhouse PF, Irigoien X, Sieracki ME, Lopez-Urrutia A et al (2007) RAPID: research on automated plankton identification. Oceanography 20:172–187 Bojar K, Nieniewski M (2008) Mathematical morphology (MM) features for classification of cancerous masses in mammograms, information technologies in biomedicine. Adv Soft Comput 47:129–138. Springer Camilus KS, Govindan VK, Sathidevi PS (2010) Computer-aided identification of the pectoral muscle in digitized mammograms. J Digit Imaging 23(5):562–580 Elnakib A, Gimel G, Suri JJ, El-baz A, Gimel'farb G (2011) Medical image segmentation: a brief survey. In: El-Baz AS, Acharya UR, Laine AF, Suri JS (eds) Medical image segmentation. Springer, New York, pp 1–39 Erickson BJ, Bartholmai B (2002) Computer-aided detection and diagnosis at the start of the third millennium. J Digit Imaging 15:59–68 Ertas G, Gulcur HO, Aribal E, Semiz A (2001) Feature extraction from mammographic mass shapes and development of a mammogram database, engineering in medicine and biology society. In: Proceedings of the 23rd annual international conference of the IEEE, vol 3, pp 2752–2755 Hassanien AE, Ali JMH (2011) Enhanced rough sets rule reduction algorithm for classification digital mammography. J Intell Syst 13:151–171 Hassanien AE, Badr A (2003) A comparative study on digital mammography enhancement algorithms based on fuzzy theory. Sci Inform Control 12:21–31 Jayadevappa D, Srinivas Kumar S, Murty DS (2009) A hybrid segmentation model based on watershed and gradient vector flow for the detection of brain tumor. Int J Signal Process Image Process Pattern Recognit 2(3):29–42 Masek M, Chandrasekhar R, Desilva CJS, Attikiouzel Y (2001) Spatially based application of the minimum cross-entropy thresholding algorithm to segment the pectoral muscle in mammograms. In: The seventh Australian and New Zealand intelligent information systems conference, Nov.
18–21, pp 101–106 Mu T, Nandi AK, Rangayyan RM (2008) Classification of breast masses using selected shape, edge-sharpness, and texture features with linear and kernel-based classifiers. J Digit Imaging 21:153–169 Peng B, Wang Y, Yang X (2010) A multiscale morphological approach to local contrast enhancement for ultrasound images. In: Proceedings of the 2010 international conference on computational and information sciences, ICCIS'10, pp 1142–1145. IEEE Computer Society, Washington, DC Ramaswamy S, Rose K (2009) Towards optimal indexing for relevance feedback in large image databases. IEEE Trans Image Process 18(12):2780–2789. doi:10.1109/TIP.2009.2028929 Shah H, Ghazali R, Nawi NM (2011) Using artificial bee colony algorithm for MLP training on earthquake time series data prediction. J Comput 3(6):135–142 Thamaraichelvi B, Yamuna GY (2013) A novel efficient kernelized fuzzy C-means with additive bias field for brain image segmentation. In: Proceedings of the international conference on communication and signal processing, pp 68–72 Zakeri FS, Behnam H, Ahmadinejad N (2012) Classification of benign and malignant breast masses based on shape and texture features in sonography. J Med Images Syst 36:1621–1627 Zhang Z, Tan T, Huang K, Wang Y (2012) Three-dimensional deformable-model-based localization and recognition of road vehicles. IEEE Trans Image Process 21(1):1–13
A mathematical model for effective detection and segmentation of cancerous masses has been proposed. All authors read and approved the final manuscript.
Compliance with ethical guidelines. Competing interests: The authors declare that they have no competing interests.
Department of Computer Science, Lady Doak College, Madurai, Tamil Nadu, India: S. Pitchumani Angayarkanni. Department of M.C.A., TBAK College, Kilakarai, Ramnad, Tamil Nadu, India: Nadira Banu Kamal. Department of M.C.A., Karunya University, Coimbatore, Tamil Nadu, India: Ranjit Jeba Thangaiya.
Correspondence to S. Pitchumani Angayarkanni.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Angayarkanni, S.P., Kamal, N.B. & Thangaiya, R.J. Dynamic graph cut based segmentation of mammogram. SpringerPlus 4, 591 (2015). https://doi.org/10.1186/s40064-015-1180-7
Keywords: Fuzzification; Graph cut; Otsu's method; ROC
Oxidation state of O in O2
The oxidation number of fluorine is always −1. I'm assuming a superscript, making it O^2−, because I can't see the other way around ever being true. The oxidation number of oxygen is −2 and there are two oxygen atoms, so the total oxidation number for the oxygen in CO2 is −4. The sum of oxidation states in a molecule or polyatomic ion adds up to the charge. Don't forget that there are 2 chromium atoms present. Water molecules can dissociate into surface hydroxyl groups through a self-catalyzed process under ambient conditions. Severe haze episodes typically occur with concurrent high relative humidity; here, the vital role of water in promoting the oxidation of SO2 by O2 on carbonaceous soot surfaces was identified at the atomic level by first-principles calculations. Potassium, being an alkali metal, i.e. belonging to the first group of the periodic table, is a very metallic element. Oxygen can take multiple oxidation states. What is the oxidation number of the chlorine atom in ClO−? For a single ion, the oxidation state is the charge of the ion. The cerium oxidation state in ceria-doped Rh/Al2O3 catalysts during the methane steam reforming (MSR) reaction was determined by in situ X-ray absorption spectroscopy and online mass spectrometry at 773 K. The catalysts were characterized by electron microscopy and X-ray diffraction. Oxidation-reduction reaction, oxidation states: the idea of assigning an oxidation state to each of the atoms in a molecule evolved from the electron-pair concept of the chemical bond. Atoms within a molecule are held together by the force of attraction that the nuclei of two or more of them exert on electrons in the space between them. The important rules for this problem are: the oxidation number of H is +1, but it is −1 when combined with less electronegative elements. This particular compound is sodium peroxide. You're right that usually oxygen has a charge of −2, but in this case there is no way that each Na can have an oxidation state of +2. Because oxidation reactions are characterized by an increase in the number of oxygen atoms in the product compared to the reactant (hence the word "oxidation"), we know that going from O3 to O2 is NOT an oxidation reaction, and it is therefore a reduction reaction. Now, 1/2 O2 is another way of saying just plain O, and the same rules apply here: O has a total charge of zero, and therefore the oxidation number of O would have to be zero as well. What is the oxidation state of chromium in the dichromate ion, Cr2O7^2−?
This fits with the charge of the peroxide anion (2 × −1 = −2), and as BaO2 is a neutral compound, the sum of all oxidation numbers is 0. What is the oxidation number of the oxygen atoms in O2^2−? The oxidation number of Ba is +II, and the oxidation number of each of the oxygens in the peroxide anion is −I. So you then work backwards: deciding that it is Na+, you have +2 from the sodium, and oxygen must have an average oxidation number of −1 per oxygen atom. Explain the following, giving an appropriate reason in each case: (i) O2 and F2 both stabilize higher oxidation states of metals, but O2 exceeds F2 in doing so; (ii) the structures of xenon fluorides cannot be explained by the valence bond approach. What is the oxidation number of the manganese atom in MnO2? What is the oxidation state of Cr in K2[Cr(CN)2(O2)(O2)NH3]? What is the oxidation state of oxygen in hydrogen peroxide? Since oxygen is more electronegative than hydrogen, the oxidation number of hydrogen in H2O2 will be +1. Calculate the oxidation number of Br in Br3O8 and of S in S4O6^2−. In which of the following is the oxidation number of carbon −1? The correct sequence of the oxidation states of the underlined elements in Na2[Fe(CN)5NO], K2TaF7, Mg2P2O7, Na2S4O6 and N3H is asked as an exercise, as is the oxidation number of the underlined element in the compound [Fe(NO)(H2O)5]SO4. Assertion: O3 can act as an oxidising agent as well as a reducing agent, but SO2 can act only as an oxidant. If oxygen combines with an element that is more electronegative than itself, it will possess a positive oxidation state; that is the reason why oxygen has a +2 oxidation state in OF2. For O2F2 the oxidation number of oxygen is +1: let the oxidation state of O in O2F2 be x; then 2x + 2(−1) = 0, hence 2x − 2 = 0 and x = +1. For the dichromate ion Cr2O7^2−, with n the oxidation state of chromium: 2n + 7(−2) = −2, so n = +6.
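The dichromate arithmetic above is an instance of one bookkeeping rule: the oxidation states in a species sum to its net charge. A minimal sketch of that rule as a solver for one unknown element:

```python
def solve_oxidation_state(n_unknown, known, net_charge):
    """Solve n_unknown * x + sum(state * count) = net_charge for x.
    `known` lists (oxidation state, atom count) pairs for the other
    elements in the species."""
    return (net_charge - sum(s * c for s, c in known)) / n_unknown

# Cr in dichromate, Cr2O7^2-:  2x + 7*(-2) = -2  ->  x = +6
print(solve_oxidation_state(2, [(-2, 7)], -2))   # 6.0
# O in the peroxide ion O2^2-: 2x = -2           ->  x = -1
print(solve_oxidation_state(2, [], -2))          # -1.0
# O in O2F2 (F is -1):         2x + 2*(-1) = 0   ->  x = +1
print(solve_oxidation_state(2, [(-1, 2)], 0))    # 1.0
```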
The oxidation states of rhodium and cerium during MSR are related. Fundamental research into the Li–O2 battery system has gone into high gear, gaining momentum because of its very high theoretical specific energy; much progress has been made toward understanding the discharge mechanism, but the mechanism of the oxygen evolution reaction (OER) on charge (i.e., oxidation) remains less understood. The O atoms each have an oxidation state of −2. The oxidation number of H is +1, but it is −1 when combined with less electronegative elements. You assign oxidation numbers to the elements in a compound by using the rules for oxidation numbers. Reduction is the gain of electrons. The oxidation state of carbon in carbon dioxide is +IV: oxygen takes its standard oxidation state of −II, so C takes its maximum oxidation state; C = +4 and O = −2. The peroxide ion (O2^2−) is about the only place where oxygen has an oxidation state of anything other than −2; there is also the superoxide ion (O2)−, in which oxygen has an oxidation state of −1/2. Thus, the nickel atom has an oxidation state of +4, since the compound is neutral. The oxidation number of any atom in its elemental form is 0: in O2 the atoms are identical, so electronegativity forces the electrons to be shared symmetrically; there is no gain or loss, and the oxidation number of O is 0, since the sum for the neutral molecule has to be zero. For a monatomic species such as Fe^3+, the oxidation number comes from its electrical charge, +3; the sum of oxidation numbers in a neutral compound is 0, and the sum of the oxidation numbers in a monatomic ion is equal to the overall charge of that ion. For the oxidation states of Cu and Cl in CuCl2, you usually want an oxidation state for each atom. What are the oxidation states of C in elemental C, CO, and carbon suboxide, C3O2? Find the oxidation number of S in the SO4^2− ion, and of the oxygen atom in ClO−. For the decomposition reaction, on the right K is +1, Cl is −1 and O2 is 0; Cl goes from +3 on the left to −1 on the right, a gain of 4 e− for each Cl, while O goes from −4 in total on the left to 0 on the right, a loss of 4 e−. So Cl is reduced and O is oxidized. 2 SO2(g) + O2(g) ⇌ 2 SO3(g), ΔH = −197 kJ mol−1. Which statements are correct? 1. Increasing the pressure increases the equilibrium yield of SO3. 2. Increasing the temperature lowers the value of the equilibrium constant Kp. 3. The presence of a vanadium(V) oxide catalyst increases the equilibrium yield of SO3. Although thermodynamically the oxidation of H2O should occur more easily, it does not; the reason is so-called kinetic inhibition (overpotential). Galvanic cells: 2 H2O + 2 Na+ + 2 Cl− → H2(g) + Cl2(g) + 2 NaOH. Estimating the required voltage: E = E°(Cl2/Cl−) − E°(H2O/H2) = 1.36 V − (−0.83 V) = 2.19 V.
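The voltage estimate above can be reproduced as a back-of-envelope sketch (the standard reduction potentials are the textbook values used in the estimate; a real cell needs extra overpotential on top of this thermodynamic minimum, as noted above):

```python
# Minimum decomposition voltage for the chlor-alkali electrolysis above.
E_anode_couple = +1.36    # Cl2/Cl-: Cl- is oxidized to Cl2 at the anode
E_cathode_couple = -0.83  # H2O/H2: water is reduced to H2 at the cathode

required_voltage = E_anode_couple - E_cathode_couple
print(f"at least ~{required_voltage:.2f} V must be applied")  # ~2.19 V
```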
Fe2O3 has 3 O atoms, for a total oxidation number of −6; the total oxidation state of the products must equal the total of the reactants, and the total oxidation number of Fe2O3 is 0; this means the oxidation number for each Fe must be +3, since 2 × (+3) balances the −6. This cover image, based on article number 2000127 by Xiaojun Hu and co-workers, shows the competitive oxidation of Fe-C melts by O2 and CO2. There are four reactions on the surface of Fe-C melts, and the isotope gases 18O2 and 13CO2 are used to distinguish their specific roles. The oxidation number of any atom in its elemental form is 0.
Oxidation state in metals: many compounds with luster and electrical conductivity maintain a simple stoichiometric formula, such as the golden TiO, blue-black RuO2 or coppery ReO3, all of obvious oxidation state. Ultimately, however, the assignment of the free metallic electrons to one of the bonded atoms has its limits and leads to unusual oxidation states.
Oberwolfach Reports
Volume 5, Issue 4, 2008, pp. 2763–2814. DOI: 10.4171/OWR/2008/49. Published online: 2009-09-30.
Von Neumann Algebras and Ergodic Theory of Group Actions
Dietmar Bisch (Vanderbilt University, Nashville, USA), Damien Gaboriau (École Normale Supérieure de Lyon, France), Vaughan F. R. Jones (Vanderbilt University, Nashville, United States) and Sorin Popa (University of California Los Angeles, United States)
The workshop Von Neumann Algebras and Ergodic Theory of Group Actions was organized by Dietmar Bisch (Vanderbilt University, Nashville), Damien Gaboriau (ENS Lyon), Vaughan Jones (UC Berkeley) and Sorin Popa (UC Los Angeles). It was held in Oberwolfach from October 26 to November 1, 2008. This workshop was the first Oberwolfach meeting on von Neumann algebras and orbit equivalence ergodic theory. The organizers took special care to invite many young mathematicians, and more than half of the 28 talks were given by them. The meeting was very well attended by over 40 participants, leading senior researchers and junior mathematicians in the field alike. Participants came from about a dozen different countries including Belgium, Canada, Denmark, France, Germany, Great Britain, Japan, Poland, Switzerland and the USA. The first day of the workshop featured beautiful introductory talks on orbit equivalence and von Neumann algebras (Gaboriau), Popa's deformation/rigidity techniques and applications to rigidity in II_1 factors (Vaes), subfactors and planar algebras (Bisch), random matrices, free probability and subfactors (Shlyakhtenko), subfactor lattices and conformal field theory (Xu), and an open problem session (Popa). There were many excellent lectures during the subsequent days of the conference, and many new results were presented, some for the first time during this meeting. A few of the highlights of the workshop were Vaes' report on a new cocycle superrigidity result for non-singular actions of lattices in SL(n, R) on R^n and on other homogeneous spaces (joint with Popa), Ioana's result showing that every sub-equivalence relation of the equivalence relation arising from the standard SL(2, Z)-action on the 2-torus T^2 is either hyperfinite or has relative property (T), and Epstein's report on her result that every countable, non-amenable group admits continuum many non-orbit-equivalent, free, measure-preserving, ergodic actions on a standard probability space. Other talks discussed new results on fundamental groups of II_1 factors, L^2-rigidity in von Neumann algebras, II_1 factors with at most one Cartan subalgebra, subfactors from Hadamard matrices, a new construction of subfactors from a planar algebra, and new results on topological rigidity and the Atiyah conjecture. Many interactions and stimulating discussions took place at this workshop, which is of course exactly what the organizers had intended.
The organizers would like to thank the Mathematisches Forschungsinstitut Oberwolfach for providing the splendid environment for holding this conference. Special thanks go to the very helpful and competent staff of the institute.
Bisch Dietmar, Gaboriau Damien, Jones Vaughan, Popa Sorin: Von Neumann Algebras and Ergodic Theory of Group Actions. Oberwolfach Rep. 5 (2008), 2763-2814. doi: 10.4171/OWR/2008/49
Analysing Gordon's trade-off by adapting Thurow's approach of pure public good to the German energy sector
Holger Schlör, Wolfgang Fischer and Jürgen-Friedrich Hake
Published: 5 December 2016
We analyse Gordon's trade-off by adapting Thurow's approach of pure public good, using the example of the German energy sector, which is in a transition process towards a low-carbon sustainable energy system (Energiewende). The income distribution and the energy expenditures of households are interpreted as public goods. Their distribution is measured with the Atkinson index, which determines how the quality of life, as measured in income and energy expenditures, is distributed among society. We use the disaggregated consumption and income for 39.409 million German households. Our socio-economic analysis focuses on six household types. Our analysis shows that among German households, energy expenditures are more equally distributed than private consumption in general and than income. The rather (but by far not completely) equal distribution of energy expenditures confirms Smil's finding that energy is the universal currency (Sen, On Economic Inequality, 1973) for people's welfare, and can be seen as an indicator of the basic needs of households irrespective of household income. Nevertheless, low-income households have to spend a higher share of their income on energy to avoid energy poverty. Further price increases could lead to an unequal distribution and rising energy poverty. The socio-economic conditions of society and its energy sector have to be addressed in transition processes. Energy poverty constitutes an infringement of the sustainability concept. If society does not take distributional effects into account, the transition process itself could be jeopardized.
Keywords: Gordon's trade-off; Atkinson index; Sustainable energy system
Sustainable development is a process in which society and political decision makers have to balance ecological, economic, and social targets. Equal rights and equality in terms of "equivalent living conditions" (Article 74 of German constitutional law) are key elements of the social pillar of sustainability.
Gordon's trade-off
Modern societies are confronted with Gordon's trade-off [14], that is to say, their democratic constitutions guarantee all citizens the same political rights and obligations [27]. However, this democratic guarantee of equality is contrasted with economic inequality as the result of economic market forces, which produce unequal income, consumption opportunities, and life prospects [14, 29]. Individuals have the same political rights, but their social participation opportunities correlate not only with these rights but also with their individual success in economic processes [7, 14]. Individuals are affected by two institutions, economic market processes and the constitution, which grant different positions in society according to their specific institutional rules. The constitutions of democratic systems grant their citizens rights without any preconditions, whereas their position within the economic market system is based on their success in this system [27]. Economic institutions can "generate substantial disparities among citizens in living standards and material welfare [14]."
The political institutions of the government have to manage, on the one hand, a socio-economic democratic system that guarantees the same rights to each individual without any preconditions and, on the other hand, an economic system in which individual success is based mainly on individual performance. Society and its government have to find a way to balance the trade-off between these two conflicting principles to avoid political tensions between social groups and households: "At some points along the way, society confronts choices that offer somewhat more equality at the expense of efficiency or somewhat more efficiency at the expense of equality. In the idiom of the economist, a trade-off emerges between equality and efficiency" [14]. Political projects such as the German Energiewende can be implemented more easily if social justice, i.e. the distribution of the material welfare of society, is taken into account [26]. Hence, we can summarize that Gordon's trade-off is the result of the relations between two competing institutions (the democratic system and the economic market system). This competition is confirmed by Stiglitz, who illustrates that the conflicts arising from the trade-off are not the "result of the forces of nature, of abstract forces. [They are] the result of government policies that shape and direct the forces of technology and markets and broader societal forces" [36]. In other words, Gordon's trade-off can be shaped politically by the institutions of society, and it has to be analysed so that this management process avoids mismanagement on the basis of flawed data. The need for such an analysis is also stressed by Acemoglu and Robinson [1], who argue "that economic analysis needs to identify, theoretically and empirically, conditions under which politics and economics run into conflict, and then evaluate policy proposals taking this conflict and the potential backlashes it creates into account" [2]. These conflicts could endanger policy conceptions such as the German energy transition [13]. Our analysis tries to reveal societal obstacles in the socio-economic conditions of society which have to be addressed in transition processes, and it will show the necessity of a political discourse concerning Gordon's trade-off, because transition processes are not only technical problems but increasingly also socio-economic problems that have to be solved. No one in society can escape from these unsolved problems. Hence, we will analyse Gordon's trade-off in the context of Thurow's theory of public goods.

Thurow—distribution of public goods

Private and public goods

The idea of public goods was developed in 1954 by Samuelson in his paper "The Pure Theory of Public Expenditure" [23]. He explains the defining characteristic of a public good: "that each individual's consumption of such a good leads to no subtraction from any other individual's consumption of that good" [23]. Public goods "can be enjoyed by everyone and from which no one can be excluded" [24]. Hence, we can classify the private and public goods consumed by households [10] and needed for their well-being [17] into four major categories (Table 1).

Table 1: Classification of goods — pure private good, impure public good, club good, pure public good. Source: D. Brümmerhoff [10]

In the case of private goods, the use of such a good by one consumer excludes other consumers from consuming it (e.g. food). In contrast, a dike is a pure public good, because everyone behind it is protected. A club good [11, 28] refers to, for instance, the use of a gym: if the monthly fee is paid, everyone in the gym may use the equipment. A congested road is an impure public good—no one can be excluded from the use of the road, but there will be rivalry in using the road in the case of congestion [17].

Thurow's public good approach

The distribution of income was already interpreted as a pure public good by Thurow in 1971 [39], because every individual is confronted with the same distribution of income. No individual can be excluded from the advantages and disadvantages of a given distribution of income, and there is also non-rivalry in the consumption of these advantages and disadvantages [37, 39, 40]. Every individual is confronted with the same distribution of income because, as Joseph Stiglitz explains: "Widely unequal societies do not function efficiently and their economies are neither stable nor sustainable … there comes a point when inequality spirals into economic dysfunction for the whole society" [37]. Everyone needs a functioning society to sustain their social position [37]. That is to say, the distribution of income is a pure public good [39] which sustains the functioning of society: it works like a dike that stabilizes the socio-economic system. We enlarge Thurow's public good approach by interpreting not only the income distribution of German households but also the distribution of their energy expenses as a public good, because the participation of all households in the energy system is an important factor in the success of any country's economy. The energy system is a dike for the socio-economic system, which needs a competitive infrastructure. We therefore also interpret the performance of the energy system as a public good for society, because no individual can be excluded from the advantages or disadvantages of the energy system, and there is also non-rivalry in the consumption of these advantages or disadvantages. Hence, we expand Thurow's idea of a pure public good by including household energy consumption as a parameter for the quality of the German energy system. In the following, the distribution of the two public goods—income and energy system—is analysed with the Atkinson index on the basis of the German household expenditure survey (EVS) database. The index is based on social theories [5] and regards society as "a cooperative project for the mutual" benefit of all members of society [5]. The Atkinson index is a normative distribution measure. It is based on a social welfare function which implies diminishing marginal utility of income [5, 15] and assumes additive social welfare, i.e. the sum of the individual utilities of the members of society. This concept rests on utilitarian individual philosophy [15], in which the welfare of the other members of society is not part of the individual utility function [5]: each individual simply maximizes his own utility and does not care about the other individuals. The welfare of the individual is measured independently of the income of other individuals [5, 15].
Hence, the level of possible energy consumption is based on the net income, and energy consumption is part of the social welfare function (SWF), as the following definition shows:

$$ \mathrm{SWF}=\sum_{i=1}^{n}U\big(Y(\mathrm{PC}(\mathrm{EC}))_i\big), $$

where $Y$ = income, $U$ = utility level, $n$ = number of households, $\mathrm{EC}$ = energy consumption, and $\mathrm{PC}$ = total private consumption.

In our theoretical approach (utilitarianism), an "outside observer" has to compare the individual members of society with each other. His instrument is the Atkinson index [15]. The Atkinson index calculates how society can assess the distribution of individual income and consumption expenditures between the different income classes of the social groups.³ The index defines maximum inequality as 1 and maximum equality as 0 [26], and it fulfils six mathematical axioms, thus allowing it to measure inequality [26]. The Atkinson index has a specific feature for calculating distribution, namely the epsilon parameter ε [3, 4]. The epsilon parameter of Eq. (1) "defines how sensitively the Atkinson index should interpret inequalities" [25]. Its value ranges from zero to infinity. If society does not give any consideration to the distribution of income, then the value is zero (low inequality aversion). If society cares only about the lowest income group, then the value moves towards infinity (high inequality aversion).⁴ "The larger epsilon is, the more strongly the Atkinson index reacts to inequalities" [27]. Epsilon can therefore represent the inequality aversion of society and can be interpreted as the mathematical parameter of Gordon's trade-off:

$$ \text{Gordon's trade-off}=\frac{\text{social equity}}{\text{economic efficiency}}=\text{inequality aversion}=\text{epsilon parameter } \varepsilon \text{ of the Atkinson index}. $$

With the determination of the epsilon parameter, Gordon's trade-off becomes measurable by the Atkinson index. Epsilon relates two institutions to each other: it captures the societal trade-off between social equality based on a democratic constitution and market economic efficiency. Researchers, social stakeholders, or legislators can define the social meaning of inequality for socio-economic development and can define Gordon's trade-off via the epsilon parameter. In a political discourse, society can develop a social view of its own understanding of how individuals treat and see each other in society, which can also be expressed in the tax system. Epsilon confronts a society with its self-assessment as a just, fair society but also as an efficient market economy [25, 27]. We use the Atkinson index to determine the distributional effect of gross income, net income, private consumption, and energy expenditures [3]. The value of the Atkinson index is Thurow's public good: it defines the distribution of income and energy expenditure and the shape of the dike which prevents economic and social distortions of the socio-economic system.
For our analysis, we use the modified Atkinson index ($\mathrm{AIX}_{\mathrm{type}}$) to analyse the inequality of these issues:

$$ \mathrm{AIX}_{\mathrm{type}}=1-\left[\sum_{i=1}^{n}\left(\frac{X_{i,\mathrm{type}}}{\overline{X}_{\mathrm{type}}}\right)^{1-\varepsilon}f_{i,\mathrm{type}}\right]^{\frac{1}{1-\varepsilon}},\qquad X=Y^{G},\,Y^{N},\,\mathrm{PC},\,E,\,EK,\,EW,\quad\text{for }\varepsilon\neq 1, $$

$$ \mathrm{AIX}_{\mathrm{type}}=1-\exp\left[\sum_{i=1}^{n}f_{i,\mathrm{type}}\,\log_{e}\frac{X_{i,\mathrm{type}}}{\overline{X}_{\mathrm{type}}}\right],\qquad X=Y^{G},\,Y^{N},\,\mathrm{PC},\,E,\,EK,\,EW,\quad\text{for }\varepsilon=1. $$

Here $Y^{G}_{i,\mathrm{type}}$ represents the gross income of individuals, $Y^{N}_{i,\mathrm{type}}$ the net income of individuals, $\mathrm{PC}_{i,\mathrm{type}}$ the consumption expenditure, $E_{i,\mathrm{type}}$ the energy consumption expenditure, $EW_{i,\mathrm{type}}$ the residential energy consumption expenditure, and $EK_{i,\mathrm{type}}$ the car energy consumption expenditure in the $i$th income range ($n$ is the number of income classes) of the household type (singles, singles with child(ren), couples, couples without child(ren), couples with child(ren)); $f_{i,\mathrm{type}}$ is the proportion of the population of the particular household type with income in the $i$th income range; $\overline{X}_{\mathrm{type}}$ is the mean household value of the six income and expenditure issues ($Y^{G}$, $Y^{N}$, $\mathrm{PC}$, $E$, $EK$, $EW$) for the household type; and the epsilon parameter ($\varepsilon$) is the same for all groups.
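To make the modified index concrete, the following is a minimal Python sketch of the two formulas above. It assumes the class values $X_{i,\mathrm{type}}$ and population shares $f_{i,\mathrm{type}}$ come from a survey table; the numbers below are invented placeholders, not EVS values, and the share-weighted mean of the class values is used as a stand-in for the mean household value $\overline{X}_{\mathrm{type}}$.

```python
import numpy as np

def atkinson_index(x, f, eps):
    """Modified Atkinson index AIX_type for class values x weighted by
    population shares f; handles both the eps != 1 and eps == 1 cases."""
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    f = f / f.sum()                      # normalise the population shares
    mean = np.sum(f * x)                 # stand-in for the mean household value
    if np.isclose(eps, 1.0):             # limiting case epsilon = 1
        return 1.0 - np.exp(np.sum(f * np.log(x / mean)))
    ede = np.sum(f * (x / mean) ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - ede

# Illustrative class means (e.g. net income per income class) and shares
income = [800, 1100, 1400, 1750, 2300, 3100, 4300, 6000, 9000]
shares = [0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.15, 0.10, 0.05]

for eps in (1.0, 1.5, 2.0, 2.5):
    print(f"epsilon = {eps}: AIX = {atkinson_index(income, shares, eps):.3f}")
```

The output stays between 0 (maximum equality) and 1 (maximum inequality), and larger ε makes the index react more strongly to the bottom of the distribution, which is exactly the role Gordon's trade-off assigns to the epsilon parameter.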
Database—German household expenditure survey data

The German household expenditure survey (EVS) provides data sets on German economic life and the consumer behaviour of private households [34]. Every 5 years, the Federal Statistical Office questions a selection of German households (0.2% of all German households) about their income, expenditures, assets, consumer goods, and residential situation. The 2008 survey was the tenth, following surveys in 1962/63, 1969, 1973, 1978, 1983, 1988, 1993, 1998, and 2003 [16, 35]. The EVS for 2008 was published in 2011 [31, 32]; the EVS for 2013 had not yet been published in 2015. The EVS data sets provide an overview of the social conditions and socio-economic development of the population in Germany. They are important not only for German social politics but also for all other socio-economic fields of politics [33]. Private households are the central object of investigation in the framework of the EVS. Our analysis focuses on the following household types:

Single households
Single households with child(ren)
Couples without child(ren)
Couples with child(ren)
Other households⁵

In our model, we consider all 39.409 million households which took part in the EVS survey, of which 15.537 million (30.1%) are single households, 1.339 million (2.6%) are single households with child(ren), and 17.381 million (33.7%) are couples living in one household; 11.441 million of the couple households have no children (22.2%) and 5.940 million have child(ren) (11.5%). We also consider the 5.152 million other households ("sonstige Haushalte"). Table 2 shows how the German households are distributed among the social groups and the nine income classes we analyse.

Table 2: Distribution of German households among the different household types and income groups, 2008 (number of households in 100; proportion of each social group in % of total households). Source: Schlör et al. 2015 [31, 32]

The table shows the distribution of the households over the nine income classes. The largest single group of all households (25.8%) is in the income class € 2600–€ 3600, whereas within the single households, the income class € 900–€ 1300 has the largest relative proportion (22%). Within the single households with child(ren), the largest relative grouping (26.1%) is the income class € 1500–€ 2000, while couples have their biggest share (25.1%) in the income class € 2600–€ 3600, as do couples without children (24.9%). Couples with child(ren) have their biggest share (28.4%) in the income group € 3600–€ 5000. Nearly one third of the other households (29.3%) belong to the highest income group (€ 5000–€ 18,000). Our paper measures the distribution of the public goods (income distribution and energy system) with the Atkinson index [3, 4]. In the first step, we analyse the first part of Gordon's trade-off, i.e. the success of the household groups in the economic process as measured by the income and consumption expenditures of the different household types.

Real distribution

Disposable income of private households according to their social position

Our analysis is focused on five household types (single households, single households with child(ren), couples, couples without child(ren), couples with child(ren)), which are part of the group of all households. We analyse the real distribution of income, of consumption, and of energy expenses. In the first step, we analyse the dispersion of income [12, 18–21, 38], consumption, and energy use. We define dispersion as the ratio of the income, consumption, or energy expenditures of the highest income group to those of the average household of the social group, as the short sketch below illustrates.
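Since this dispersion measure recurs in all of the following tables, a one-line sketch may help fix the definition; the values used are made-up placeholders, not EVS numbers.

```python
def dispersion(top_group_value: float, group_average: float) -> float:
    """Ratio of the value (income, consumption, or energy expenditure) of the
    highest income group to that of the average household of the social group."""
    return top_group_value / group_average

# Purely illustrative: gross income of the top income class vs. the group average
print(f"gross income dispersion: {dispersion(9000.0, 4500.0):.2f}")  # -> 2.00
```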
Couples without children achieved the highest monthly gross income in the top income group in 2008 (€ 9222), followed by the other households (€ 9152) and all couples (€ 9136). Singles and couples with child(ren) achieved nearly the same level of gross income (€ 9083, € 9037), whereas the gross income of singles with children in the highest income group is significantly lower (€ 7990). The dispersion of the gross income varies significantly between the household types, and we can identify three major groups: the highest dispersion is found in the two single household groups (4.14, 3.43); the second group consists of all couples and couples without children (1.97, 2.18); and the income dispersion reaches its lowest value in the groups of couples with child(ren) and other households (1.66, 1.67) (Table 3).

Table 3: Gross income of private households in Germany 2008 according to their household type (net income groups in €). Gross income dispersion: ratio of the gross income of the highest income group to the gross income of the average household of the social group. Source: own calculation based on German Federal Statistical Office, 2011; italic numbers are own estimations.

Monthly net income

The monthly net income of private households also varies strongly with the social status of the main income recipient, as Table 4 shows.

Table 4: Net income of private households in Germany 2008 according to their household type (income groups in €). Net income dispersion: ratio of the net income of the highest available income group to the net income of the average household of the social group. Source: own calculation based on German Federal Statistical Office, 2011; / = no declaration, the number of cases is too small; italic numbers are own estimations.

Couples with children achieved the highest average monthly net income in 2008 (€ 4191), followed by couples (€ 3662), couples without child(ren) (€ 3387), and singles with and without child(ren) (€ 1943, € 1726). The dispersion of the net income varies significantly between the household types. Once again, the first group contains the single households, where the dispersion decreases from 4.09 to 3.3. The second group contains couples and couples without child(ren) (1.9, 2.1); they have a significantly lower dispersion than the single households. The income dispersion reaches its lowest value in the group containing couples with child(ren) and other households (1.6). The comparison of net and gross income shows that the German income tax system reduces the dispersion in this particular household type.

Expenditure of private households according to their social position

Monthly private consumption

Expenditure for private consumption also varies between the different household types, as Table 5 shows. The single households spend an average of € 1418 per month, singles with child(ren) € 1740, couples € 2757, couples without child(ren) € 2622, couples with child(ren) € 3017, and other households € 3142. The consumption expenditures increase with rising income without reaching a saturation point. The consumption dispersion is significantly lower than the income dispersion.

Table 5: Private consumption of private households in Germany 2008 according to their household type. Consumption dispersion: ratio of the consumption of the highest available income group to the consumption of the average household of the social group. Source: own calculation based on German Federal Statistical Office, 2011; / = no declaration, the number of cases is too small.

The consumption dispersion of singles (2.35) and singles with child(ren) (2.12) is the highest of all households analysed, followed by the couple households (1.53, 1.62, 1.38) and other households (1.46). Their dispersion is much lower, and they have more similar consumption patterns than the single households. In the following, we analyse the energy expenditures of the households.

Monthly energy consumption

The expenditures for energy consumption of the households will be analysed in more detail to obtain a picture of the real distribution of energy consumption in Germany. This includes car energy, residential energy, and total energy expenditures, as summarized in Table 6.
Table 6: Energy consumption (car, residential, and total) of private households in Germany 2008 according to their social position (car, residential, and total energy expenditures in €, with the corresponding dispersions; each dispersion is the ratio of the respective energy expenditures of the highest income group to those of the average household of the social group). Sources: German Federal Statistical Office, 2010 (car and residential energy); own calculation based on German Federal Statistical Office, 2011 (total energy).

Energy expenses for cars

Energy expenses for cars include expenses for fuel and lubricants in the six social groups. The single households without and with children spend nearly the same amount (€ 50 and € 67, respectively) on car energy, whereas the couples without child(ren) spend on average € 111, and the couples with child(ren) and all couples spend € 150 and € 124, respectively. The other households have on average the highest expenditures on car energy: € 160. With rising income, expenses for car energy increase continuously without reaching a saturation point. The dispersion of energy expenditure between the household types is significantly lower than for income and overall consumption; in the case of car energy expenditure, it ranges from 1.18 to 1.94.

Residential energy expenditure

With respect to expenses for residential energy, all three couple household types have nearly the same expenditures (€ 165, € 163, € 169). The single households with child(ren) (€ 119) have insignificantly higher residential energy expenditures than all single households (€ 93). The other households have the highest expenditures for residential energy, with an average of € 201. With rising income, expenses for residential energy increase continuously, reaching a saturation point before the highest income group only in the case of singles with child(ren); in the other household types, the residential energy expenditure increases without reaching a saturation point. Generally, the dispersion in the case of residential energy is lower than that of car energy: all household types show a dispersion between 1.17 and 1.65.

Total energy expenditure

When we sum up the car and residential energy expenditures to calculate the total energy expenditures, we see that couples with child(ren) (€ 319) have nearly the highest energy expenditures, followed by the other two couple household types (€ 274, € 289), whereas the two single household types have lower energy expenditures (€ 143, € 186). The other households have the highest energy expenditures: € 361. With rising income, the total energy expenses increase and reach a saturation point before the highest income group only in the household type singles with child(ren); in the other household types, the total energy expenditures increase without reaching a saturation point before the highest income group. Hence, the dispersion varies between households.
The couple households show relatively low total energy dispersions (1.18, 1.28, 1.33), the single households slightly higher ones (1.55, 1.75), whereas the other households have a dispersion similar to the couple households (1.32). In the following, we also present the distribution of expenditures for another basic good: food and beverages. The comparison between food and energy enables us to classify the energy distribution results. The expenditures for food and beverages differ among the households, but the dispersion of food expenditures is the lowest of all analysed types of consumption and income (Table 7).

Table 7: Food and beverage expenditures of private households in Germany 2008 according to their household type. Food consumption dispersion: ratio of the food consumption of the highest income group to the food consumption of the average household of the social group. Source: own calculation based on German Federal Statistical Office, 2011; italic numbers are own estimations.

The single households spend on average € 182 on food and beverages. These expenditures reach their saturation point at € 222 per month in the highest income class. The food consumption of singles with children is on average about € 100 higher, at € 281 per month, and reaches its saturation point in the income group € 3600–€ 5000 (€ 366), before the top income group, which consumes less (€ 357). The social group of couple households spends on average € 400 a month on food and beverages, and this consumption reaches its highest value in the highest income group with € 486. Couples without children (€ 360, € 432) consume less than all couples, both on average and in the top group. Food and beverage consumption of couples with children amounts to € 478 a month on average and rises to € 547 in the top income group. The social group of other households has the highest monthly food consumption, with € 483 on average and € 603 in the top group. The food consumption dispersion of other households (1.25) and single parents (1.27) is the highest of all households analysed, followed by the couple households (1.2, 1.2, 1.14). Couples with children have food consumption patterns that are more similar than those of the other households. Our analysis shows how the household types' heterogeneous levels of success in the economic system may be measured in income and consumption expenditures.

Normative distribution

In the following, we examine how the real distribution of expenses and income is perceived by the households against the background of differing levels of inequality aversion within society, i.e. how society assesses the distribution of income and expenditures against its normative perception of inequality. In our analysis, the epsilon parameter of the Atkinson index ranges from 1 to 2.5, where ε = 1 and 1.5 represent a low inequality aversion and ε = 2 and 2.5 a high inequality aversion of German society. In the case of the single households, the net income (0.149–0.299) is more equally distributed than the gross income (0.176–0.356). This illustrates the effectiveness of the German tax system in reducing some of the inequality of the German economic market system.
The consumption patterns of the singles (0.066–0.149) are distributed more equally between the households than the two income types. In the case of energy consumption, the expenditures on residential energy (0.023–0.053) are nearly equally distributed between the households. On the other hand, the expenditures for car energy are more unequally distributed in this household group than the gross income (0.165–0.388). Residential energy expenditures are of central importance for the households irrespective of their income, whereas individual mobility (cars) is not necessarily required by all households; for the single households, the public transport system is an alternative. This explains why in the single households the car energy values of the Atkinson index are higher than the residential energy values. Table 8 shows that food is the most equally distributed item (0.006–0.018) of the analysed data sample. As expected, food is the main basic good for single households.

Table 8: Atkinson index 2008 of single households, by Atkinson epsilon (net income, gross income, private consumption, residential energy, car energy (a), food (b)). Source: own calculations 2016. (a) Car energy = fuel and lubricants. (b) Food, beverages (non-alcoholic and alcoholic), and tobacco.

Singles with child(ren)

As in the group of all single households, the net income of single households with child(ren) is more equally distributed than the gross income. The data confirm that the German tax system evens out the inequalities of the economic market system to some extent. The gross income of single households with child(ren) is more equally distributed (0.125–0.258) than that of the group of all single households, and the same holds for the net income. We can also see that the distribution of private consumption (0.056–0.121) and of all energy expenditures (0.038–0.087) is more equal in this household type than that of car energy expenditures (0.106–0.262). Table 9 illustrates that in this social group, too, food consumption is the most equally distributed consumption item.

Table 9: Atkinson index 2008 of single households with child(ren). Source: own calculations 2016.

All couples

In the couple group, the gross income (0.138–0.323) is again more unequally distributed than the net income (0.118–0.277) due to the German tax system (Table 10).

Table 10: Atkinson index 2008 of all couple households. Source: own calculations 2016.

This is also valid for the consumption patterns (0.05–0.124) and energy expenditures (0.025–0.067). The residential energy expenditures (0.025–0.034) are again the most equally distributed item in this household group. The results also show that car energy expenditures (0.047–0.139) are more unequally distributed than residential energy expenditures but more equally distributed than in the case of single households. Food consumption is distributed in the same way in the couple households (0.011–0.038) as in the single households with children.

Couples without child(ren)

In the case of the gross and net income, we see again that, because of the tax system, the net income (0.124–0.355) is more equally distributed than the gross income (0.150–0.355). We can assert that residential energy (0.017–0.041) is again the most equally distributed good. Private consumption (0.053–0.128) is distributed in a manner similar to car energy (0.057–0.148), and a little more unequally than total energy expenditures. The food consumption of the couple households without children (0.008–0.020) is more equally distributed than that of all couples.
Table 11 also documents the basic-need character of food consumption, because food is the most equally distributed good of these households.

Table 11: Atkinson index 2008 of couples without child(ren). Source: own calculations 2016.

Couples with child(ren)

The effect of the German tax system as an instrument to reduce income inequality can also be confirmed by the analysis of the gross (0.104–0.267) and net income (0.091–0.227) of couples with children (Table 12).

Table 12: Atkinson index 2008 of couples with child(ren). Source: own calculations 2016.

Private consumption in this household group is relatively equally distributed. The results show that car energy expenditures are also equally distributed, in clear contrast to the single households, where car energy expenditures are distributed very unequally. We can conclude from this that car energy expenditures are not necessarily an essential good for single households, but for the couples, especially for those with children, they are indispensable. In the households of couples with children, food consumption is also very equally distributed, and the Atkinson index (0.007–0.018) is a good indicator of that.

Other households

The final household type in our analysis is the group of other households. This household group also confirms the effect of the German tax system, which reduces income inequality between the members of that household type (from 0.176–0.337 to 0.149–0.326). Table 13 shows that the inequality assessed by the modified Atkinson index increases with rising epsilon, irrespective of which issue is analysed. The energy expenditures (0.048–0.133) of this group are more equally distributed than overall private consumption (0.065–0.167), and the residential energy expenditures (0.024–0.069) are more equally distributed than the car energy expenditures (0.030–0.129). Food consumption is more unequally distributed in the group of other households than in the other household groups; the values of the Atkinson index (0.025–0.065) are close to those of residential energy. The other households group, which includes, for example, parents-in-law, children over 18, and groups sharing an apartment, is more heterogeneous than the single and couple households, which explains the higher Atkinson index.

Table 13: Atkinson index 2008 of other households. Source: own calculations 2016.

We can therefore summarize that the household group of couples with child(ren) is the most homogeneous group and that their net income is more equally distributed than their gross income. Private consumption is more equally distributed than both income types, and energy services are distributed almost equally between the households. The single households, by contrast, are the most heterogeneous household group and show a more differentiated distribution picture than the couple households. In both single household types, the German tax system significantly reduces the inequality between households: in the case of epsilon 2.5—representing a high inequality aversion—it reduces the Atkinson index of single households from 0.356 to 0.299. But also in the single households, private consumption is more equally distributed than income, and energy expenditures are still the most equally distributed expenditure type (0.055–0.125). What is striking in this group is the fact that car energy expenditures are the most unevenly distributed expenditure type. We have seen that energy expenditures are more equally distributed than private consumption and the income types.
The nearly equal distribution of energy expenditures confirms Smil's assumption that energy is the universal currency [30] for people's welfare and can be seen as an indicator of the basic needs of the households, whereby "basic" means something different in different countries: for Germany, basic needs mean an energy consumption which allows social participation. These basic energy needs are to a large extent, but not completely, independent of people's income situation. This means that the lower income groups have to spend a much higher percentage of their income on energy services than the higher income groups (Table 14). Households with a net income lower than € 900 are divided into two major groups. The singles in this income group spend between 11.9 and 13.6% of their income on energy services; they spend 3 to 4 percentage points more than the average household in this social group and nearly 10 percentage points more than the highest income group.

Table 14: Energy consumption of private households in relation to net income, Germany 2008, according to their social position (in %). Source: own calculation based on German Federal Statistical Office, 2011; italic numbers are own estimations (limited data basis in this income group).

However, we get a different picture in the social group of couple households: the couple households of the income group below € 900 spend more than 25% of their net income on energy services. Rising energy prices would affect these households directly. In this case, they would have to rearrange the expenditures in their household budgets: they would have to reduce other expenditures to maintain their use of energy services at its current level; otherwise, they would lose access to modern energy services, which are "crucial to human well-being and to a country's economic development", as the IEA has stated. There is a danger that these households will be confronted with energy poverty, which can be defined as a "condition wherein a household is unable to access energy services" [8] at its accustomed level, and so there is a growing need for energy governance. Energy poverty constitutes an infringement of the sustainability concept: environmental, economic, and social targets have to be balanced in the transition to a low-carbon economy. Our analysis reveals that energy poverty and the socio-economic conditions of society and its energy sector have to be addressed in transition processes to a sustainable society and have to be at the centre of any energy transition process and its political discourse. The analysis of Gordon's trade-off shows that transition processes such as the German Energiewende are not only technical problems but increasingly also socio-economic problems that have to be solved by energy governance [6], and because of Thurow's public good approach, no one in society can escape from the unsolved problems of Gordon's trade-off. The analysis using the Atkinson index can reveal deeper insights into the self-perception of society and its conception of justice and equality, which are central pillars of a sustainable society. The epsilon parameter thereby enables us to parameterize this perception and conception when measuring the distribution of consumption and income. Such an analysis is necessary, because every economic and political reform has distributional effects.
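To illustrate how such energy shares, and a simple energy-poverty flag, can be computed from a survey table, here is a minimal sketch; the income and expenditure values are invented placeholders, and the "more than twice the median share" rule is only one commonly used indicator, not the definition employed in this paper.

```python
import numpy as np

# Hypothetical monthly net income and total energy spending per income class (EUR)
net_income = np.array([750, 1100, 1400, 1750, 2300, 3100, 4300, 6000, 9000])
energy_exp = np.array([190, 210, 225, 240, 255, 270, 290, 310, 340])

share = energy_exp / net_income * 100      # energy share of net income, in %
threshold = 2 * np.median(share)           # "twice the median share" indicator

for inc, s in zip(net_income, share):
    flag = "<- at risk of energy poverty" if s > threshold else ""
    print(f"net income {inc:>5} EUR: energy share {s:5.1f}% {flag}")
```

Even with these made-up numbers, the regressive pattern of Table 14 appears: the share falls steadily with income, and only the lowest income class crosses the flag threshold.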
If politicians do not consider these effects (energy poverty), they can endanger the entire reform of the energy sector (Energiewende), because people will turn away from the goals of the reform [1, 9]. Acceptance of reforms such as the German Energiewende will thus decline. The transformation of current energy systems into sustainable systems is on the agenda of all European countries (EU climate policy). Such a transformation could (and probably will) also lead to rising electricity prices, placing an above-average strain on the lowest income groups. Moreover, this regressive effect will appear in all categories of expenditure if prices increase, no matter whether this is caused by political decisions or market forces. Our index can also be applied to other countries with respect to energy and other household expenditures, if the respective national statistical office provides the necessary household survey data for the analysis. Our index can then provide decision makers and institutions with information on how (un)equally the costs of transformation processes are distributed between the different income groups. We used energy in our analysis because it is one of the basic needs, and the energy sector is at the centre of the German transformation process, the Energiewende. Energy poverty caused by the Energiewende—as a synonym for a lack of societal participation in the transformation process, at least in highly developed countries—can endanger the whole transformation process. Political strategies to strengthen participation should therefore focus on the regressive effect of high energy prices. Decision makers and political institutions can decide in a public discourse which categories of expenditure should be analysed and which are more important and relevant, to justify political interventions to reduce the inequality caused by rising prices. The index could also deliver information about the differences in income distribution in EU countries. For this analysis, we need reliable and comparable statistical data for the whole of Europe. However, in our view, two important political obstacles are looming: firstly, it is difficult enough to find common political ground in domestic policy between the different political actors and interest groups in order to distribute the costs of national transformation policies; secondly, this challenge is raised to a completely different level if wealth is to be redistributed between EU states (Euro crisis, Greek debt crisis) to a much larger extent than is the case today (EU Regional Fund, Structural Fund, etc.). To summarize, our concept has both a detection function (revealing the implicit preferences) and potentially also an orientation function (defining explicit societal preferences with respect to the degree of homogeneity of a society).

Footnotes

1. Kermit Gordon (1916–1976) was Director of the United States Bureau of the Budget (now the Office of Management and Budget) from December 28, 1962 to June 1, 1965, during the administration of Lyndon Johnson, and later president of the Brookings Institution. He oversaw the creation of the first budgets for Johnson's Great Society domestic agenda. Gordon was a member of the Council of Economic Advisers, 1961–1962.

2. For our analysis, we take up the definition of an institution offered by Rawls. Institutions in Rawls's sense are the constitution, economic and social conditions, freedom of thought, freedom of conscience, economic markets with competition, and private property [22].
3. Nicholas Barr shows that the Gini coefficient has two disadvantages for measuring inequality which are avoided by the Atkinson index [5]. The Gini coefficient is not an unambiguous measure because, as Hauser and Barr have shown, different distributions can lead to the same Gini coefficient [5, 15]. Hence, we decided to use the Atkinson index to estimate the distributional effects of increasing energy prices [27].

4. This analytical view is based on Rawls' theory of justice, where inequality is determined by the "position of the least advantaged members of society. Where epsilon lies between these extremes depends on the importance attached to redistribution towards the bottom" [3].

5. Other households include, e.g. parents-in-law, children over 18, and groups sharing an apartment.

Authors' contributions: HS initiated the research idea of analysing Gordon's trade-off and developed the Atkinson model based on EVS data. HS and WF designed and organized all the research for this study. WF reviewed the theory of public goods. JFH had a leading role in the literature review and the analysis of the real distribution of the EVS data. All the authors contributed to the conclusion and the outlook of the study. All authors read and approved the final manuscript.

Affiliation: Forschungszentrum Jülich, Institute of Energy and Climate Research, IEK-STE: Systems Analysis and Technology Evaluation, Jülich, Germany

References

[1] Acemoglu D, Robinson J (2012) Why nations fail: the origins of power, prosperity, and poverty. Crown Business, New York
[2] Acemoglu D, Robinson JA (2013) Economics versus politics: pitfalls of policy advice. J Econ Perspect 27(2):173-192. doi: 10.1257/jep.27.2.173
[3] Atkinson AB (1983) The economics of inequality. Clarendon, Oxford
[4] Atkinson AB (1970) On the measurement of inequality. J Econ Theory 2:244-263
[5] Barr N (1993) The economics of the welfare state. Stanford University Press, Stanford
[6] Bazilian M, Nakhooda S, Van de Graaf T (2014) Energy governance and poverty. Energy Research & Social Science 1:217-225
[7] Bell D (1996 [1976]) The cultural contradictions of capitalism. Basic Books, New York
[8] Bouzarovski S, Petrova S, Sarlamanov R (2012) Energy poverty policies in the EU: a critical perspective. Energy Policy 49:76-82
[9] Braunberger G (2013) Ökonomen verstehen zu wenig von Politik (und unterschätzen Verteilungsthemen). In: Das Fazit-Wirtschaftsblog. FAZ, Frankfurt/M
[10] Brümmerhoff D (2007) Finanzwissenschaft. Oldenbourg Verlag, Munich
[11] Buchanan JM (1965) An economic theory of clubs. Economica 32:1-14
[12] Edmond C, Veldkamp L (2009) Income dispersion and counter-cyclical markups. J Monet Econ 56:791-804
[13] German Federal Ministry of Economics and Technology (BMWi) (2012) Germany's new energy policy. BMWi, Berlin
[14] Gordon K (1975) Foreword. In: Okun AM, Equality and efficiency: the big tradeoff. The Brookings Institution, Washington, DC
[15] Hauser R (1996) Zur Messung individueller Wohlfahrt und ihrer Verteilung. In: Chlumsky J, Wiegert R (eds) Wohlfahrtsmessung - Aufgabe der Statistik im gesellschaftlichen Wandel. Statistisches Bundesamt, Wiesbaden, pp 13-38
[16] Jung S (2001) Privater Verbrauch in Deutschland. DUV, Wiesbaden
[17] Kaul I, Grunberg I, Stern MA (1999) Global public goods. UNDP, New York
[18] Metwally MM, Jensen RC (1973) A note on the measurement of regional income dispersion. Econ Dev Cult Chang 22:135-136
[19] Mulas-Granados C, Sanz I (2008) The dispersion of technology and income in Europe: evolution and mutual relationship across regions. Res Policy 37:836-848
[20] Park J (2006) Dispersion of human capital and economic growth. J Macroecon 28:520-539
[21] Ramos HM, Sordo MA (2003) Dispersion measures and dispersive orderings. Statistics & Probability Letters 61:123-131
[22] Rawls J (1971) A theory of justice. Harvard University Press, Cambridge
[23] Samuelson PA (1954) The pure theory of public expenditure. Rev Econ Stat 36:387-389
[24] Samuelson PA, Nordhaus WD (2010) Economics. McGraw-Hill Education (Asia), New York
[25] Schlör H, Fischer W, Hake J-F (2012) Measuring social welfare, energy and inequality in Germany. Appl Energy 97:135-142
[26] Schlör H, Fischer W, Hake J-F (2012) Social welfare, income, consumption, energy, and the inequality aversion of society - a case study from Germany. J Eur Econ 11:356-377
[27] Schlör H, Fischer W, Hake J-F (2013) Sustainable development, justice and the Atkinson index: measuring the distributional effects of the German energy transition. Appl Energy 112:1493-1499
[28] Scotchmer S (2008) Clubs. In: Durlauf SN, Blume LE (eds) The new Palgrave dictionary of economics. Palgrave Macmillan, Basingstoke
[29] Sen A (1973) On economic inequality. Norton, New York
[30] Smil V (1994) Energy in world history. Westview, Boulder
[31] Statistisches Bundesamt (2011) Einkommens- und Verbrauchsstichprobe - Einkommensverteilung in Deutschland 2008. Wirtschaftsrechnungen, Wiesbaden
[32] Statistisches Bundesamt (2011) Einkommens- und Verbrauchsstichprobe - Einnahmen und Ausgaben privater Haushalte 2008. Wirtschaftsrechnungen, Wiesbaden
[33] Statistisches Bundesamt (2013) Wirtschaftsrechnungen. Einkommens- und Verbrauchsstichprobe - Aufgabe, Methode und Durchführung. Fachserie 15, Heft 7
[34] Statistisches Bundesamt (2005) Einkommens- und Verbrauchsstichprobe - Aufgabe, Methode und Durchführung der EVS. Fachserie 15, Heft 7
[35] Statistisches Bundesamt (2005) Einkommens- und Verbrauchsstichprobe - Einnahmen und Ausgaben privater Haushalte 2003. Fachserie 15, Reihe 1
[36] Stiglitz J (2012) The price of inequality. Norton, London
[37] Stiglitz JE (2012) The 1 percent's problem. Vanity Fair, London
[38] Theil H, Fiebig DG (1986) The measurement of income and price dispersion in cross-country demand analysis. Econ Lett 22:391-393
[39] Thurow LC (1971) The income distribution as a pure public good. Q J Econ 85:327-336
[40] Wilkinson R, Pickett K (2010) The spirit level: why equality is better for everyone. Penguin, London
Show that $\operatorname{tr}(\sqrt{\sqrt{A}\,B\sqrt{A}})\leq 1$, where both $A$ and $B$ are positive semidefinite with $\operatorname{tr}(A)=\operatorname{tr}(B)=1$.

Asked by Jonas Rickard (Linear Algebra, Matrices)

Answer:

First we show that
\[ \operatorname{tr}(\sqrt{\sqrt{A} B \sqrt{A}})=\operatorname{tr}(\sqrt{AB}). \]
Note that (for invertible $A$; the general positive semidefinite case follows by continuity) $\sqrt{A} B \sqrt{A}$ and $AB$ are similar matrices, since $AB = \sqrt{A} (\sqrt{A} B \sqrt{A}) \sqrt{A}^{-1}$. Hence
\[\sqrt{AB}=\sqrt{A}\, \sqrt{\sqrt{A} B \sqrt{A}}\, \sqrt{A}^{-1}.\]
By similarity, $\sqrt{AB}$ has the same eigenvalues as $\sqrt{\sqrt{A} B \sqrt{A}}$, and hence the same trace. Thus it is enough to show that $\operatorname{tr}(\sqrt{AB})\leq 1$.

For a matrix $C$, $|C|$ is defined as $|C|=(C^*C)^{1/2}$. By the polar decomposition, there is a unitary matrix $U$ such that $C=U|C|$ and consequently $U^*C=|C|$. Hence for $C=B^{1/2}A^{1/2}$ we have
$$(A^{1/2}BA^{1/2})^{1/2}=|B^{1/2}A^{1/2}|,$$
and thus
$$U^*B^{1/2}A^{1/2}=|B^{1/2}A^{1/2}|$$
for a unitary matrix $U$. Let $\{e_n\}_{n=1}^d$ be an orthonormal basis and $f_n=Ue_n$. Then $\{f_n\}_{n=1}^d$ is an orthonormal basis and
$$ \operatorname{tr} |B^{1/2}A^{1/2}|=\operatorname{tr} [U^*B^{1/2}A^{1/2}]=\sum_{n=1}^d \langle U^*B^{1/2}A^{1/2}e_n,e_n\rangle = \sum_{n=1}^d \langle A^{1/2}e_n,B^{1/2}Ue_n\rangle = \sum_{n=1}^d \langle A^{1/2}e_n,B^{1/2}f_n\rangle.$$
By applying the Cauchy-Schwarz inequality twice we get
$$\operatorname{tr} |B^{1/2}A^{1/2}|\le \sum_{n=1}^d \| A^{1/2}e_n\|\,\|B^{1/2}f_n\| \le\left (\sum_{n=1}^d\| A^{1/2}e_n\|^2\right )^{1/2} \left (\sum_{n=1}^d\| B^{1/2}f_n\|^2\right )^{1/2} =(\operatorname{tr}A)^{1/2}(\operatorname{tr}B)^{1/2}= 1.$$

Answered by Savionf. The answer is accepted.
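As a quick numerical sanity check of the inequality (the quantity $\operatorname{tr}\sqrt{\sqrt{A}B\sqrt{A}}$ is, incidentally, what quantum information theory calls the fidelity of two density matrices), here is a sketch that draws random unit-trace positive semidefinite matrices and evaluates the trace, using scipy's sqrtm for the matrix square roots:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def random_density(d):
    """Random positive semidefinite matrix with trace 1 (a density matrix)."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    M = G @ G.conj().T                 # Hermitian positive semidefinite
    return M / np.trace(M).real       # normalise the trace to 1

d = 4
for _ in range(5):
    A, B = random_density(d), random_density(d)
    sA = sqrtm(A)
    val = np.trace(sqrtm(sA @ B @ sA)).real
    assert val <= 1 + 1e-9            # the inequality, up to round-off
    print(f"tr sqrt(sqrt(A) B sqrt(A)) = {val:.6f}")
```

Equality is attained when $A=B$, so random draws should print values strictly below 1.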
The Critique of Knud Jahnke and a New Meteor Exposure Age Analysis

Last year, a critique by Knud Jahnke appeared on the astro-ph preprint arXiv; in it, my meteorite-based reconstruction of the cosmic ray flux was heavily criticized. Below, I elaborate on why this criticism is invalid. I also describe a better statistical analysis, one which unequivocally demonstrates that a 143 Myr periodicity does indeed exist in the meteoritic data. If you landed on this page by accident (i.e., it appears somewhat out of context), I suggest first reading my description of the spiral arm → cosmic ray → climate link, and of the cosmic ray flux signature in the iron meteorites.

General Remarks on Jahnke's critique

The manuscript by Jahnke (which was not accepted for publication by A&A) is an attempt to repeat my previous analyses (e.g., the PRL and New Astronomy papers linked from here). Although Jahnke raises a few interesting aspects, his analysis suffers from several acute problems, because of which he obtains his negative result, namely, that there is no statistically significant periodicity in the data and no evidence for cosmic ray flux variability. By far the most notable problem is that Jahnke's analysis does not consider the measurement errors. In his analysis, poorly dated meteorites were given the same weight as those with better exposure age determinations. As I show below, this has a grave effect on the signal-to-noise ratio (S/N) and consequently on the statistical significance of any result he obtains. I begin by summarizing the few benign points raised by Jahnke. Then I describe at length the main faults in his analysis, and follow by carrying out a more suitable statistical analysis based on the Rayleigh spectrum method, showing that a periodic signal of 145 Myr is present in the data at a statistically significant confidence level, even if one adopts the more stringent grouping chosen by Jahnke. In the appendix, I describe at length why the statistical tool I employ here is better than the one used by Jahnke (at least for the type of signals we are looking for in the data), or in fact by myself in the previous analyses. Moreover, the calculation described in the appendix quantifies the S/N degradation brought about by considering poorly dated meteorites, as Jahnke did. This quantitatively shows why his analysis could not have recovered any signal at a statistically significant level. Simply said, Jahnke's analysis literally introduces more noise than it does signal, and its null result is basically meaningless.

Figure 1: An iron meteorite. A large sample of iron meteorites can be used to reconstruct past cosmic ray flux variations; the reconstructed signal reveals the 145 Myr periodicity discussed below. This particular specimen is part of the Sikhote-Alin meteorite that fell over Siberia in the middle of the 20th century; it broke off its parent body about 300 million years ago.

Detailed Comments

Some benign remarks

Since I am not a meteoriticist, I have no way of judging whether all iron meteorites of a single classification group (according to the old or the recent classification) originated from the same parent body, nor whether they broke off in the same event. Thus, it is better in this respect to be more conservative.
That is, as Jahnke points out, once recent claims were published that several iron groups should be grouped together, whether they are correct or not, we should take the more conservative point of view to reduce the possibility that clustering is the result of single events producing multiple meteorites. Nevertheless, from the fact that we have some cases where the exposure ages of many meteorites span a long time range (a few $10^8$ yr), it is clear that at least some meteorites of the same iron group classification do not originate from the same meteoritic break-up event. As Jahnke points out, the K-S analysis does indeed have a bias, with a higher sensitivity in the center, such that choosing a phase-insensitive statistic is not a bad idea at all. However, as I show below, the analysis done by Jahnke is actually a degradation of the analysis previously done by myself, for other reasons. Moreover, the insinuation that I tuned the phase of the K-S to artificially get a better significance has no place: the phase used in the K-S is the phase of the data obtained if the zero-point for the folding is today. That is, there was no tuning in this respect, and this should have been checked by Jahnke before making such insinuations.

Critical Points

The main difference between Jahnke's analysis and my previous analysis is not any of those mentioned by Jahnke. Instead, the main difference is that Jahnke considered meteorites which have very poor error determinations. As I show in the appendix, this considerably degrades the signal-to-noise ratio (S/N). This can be easily seen: if one adds a meteorite which is not expected to contribute any signal (because its phase in the periodicity is effectively unknown due to the error), then the noise is increased without increasing the signal, thereby decreasing the S/N. This in turn degrades the statistical significance of any positive result. For example, it degrades a signal significant at the 99% level to the notably less significant 90% level. In my previous analysis, I simply discarded meteorites with a quoted error of 100 Myr or more (which actually corresponds to about 70 Myr or more, since Vosage & Feldman overestimated their errors, as can be seen once the potassium ages are compared with other exposure ages). In the present analysis, I simply weigh the meteorites according to their expected contribution to the signal. Although the hierarchical clustering method I employed in the previous analysis is not ideal, it has one major advantage over the "up" and "down" methods devised by Jahnke: Jahnke's method introduces a systematic error in the age of the clustered meteorites. This can be seen by comparing the meteorite "clusters" which were assigned different ages according to the direction of the clustering. In all these instances of a different assigned age, the "up" clustering gave an exposure age which was typically 40 Myr higher than in the "down" case. Clearly, it is unacceptable to have 1/6 of the meteorites carry such a large systematic error. That is to say, the hierarchical method is not ideal, but the modification suggested by Jahnke is even worse, since the systematic errors will degrade the signal. Instead of degrading the hierarchical method as Jahnke did, I introduce below a method which does not suffer from the above problems arising when clustering the meteorites, though it does consider the possibility that meteorites of the same iron group could arise from the same break-up event. The method also has the advantage that it can straightforwardly be extended to an analysis of the error distribution, independently showing that the errors include the same 145 Myr cycle.
The method also has the advantage that it can straightforwardly be extended to analyze the error distribution, and thereby independently show that the errors carry the same 145 Myr cycle.

Re-analysis Using the Rayleigh Periodogram

In his analysis, Jahnke introduced several modifications (some legitimate, such as being more conservative with the meteoritic grouping, and some illegitimate, such as carelessly including poorly dated meteorites) which reduced the S/N. It is therefore worthwhile to ask whether another statistical method exists which is better suited to answer the question of whether the clustering is real. The method described in this section is based on the Rayleigh periodogram. It has various advantages and disadvantages relative to the K-S test and its Kuiper-statistic cousin. I begin by describing the method. I then extend it to the analysis of the distribution of errors, and describe its statistically significant results for a 145 Myr signal in the meteoritic data. In the appendix, I compare the method to the Kuiper statistic and show that, at least for the type of signals we expect to see in the data, it is a much stronger tool.

The Rayleigh Analysis

The Rayleigh analysis (RA) is a statistical tool for establishing periodic deviations from the uniform occurrence of discrete events. As shown in the appendix, the RA method is better suited for this type of analysis than the Kolmogorov-Smirnov test or its Kuiper variant, because the statistical significance obtained for sinusoidal deviations, which are the kind we are searching for here, is notably higher than that obtained with the K-S or Kuiper statistics. This is not to say that the RA is better in general. There are many cases where the K-S and Kuiper statistics are better suited; for example, they are more appropriate when analyzing deviations from a given nonuniform distribution or when searching for non-periodic deviations.

The essence of the RA method is finding statistical deviations from a 2D random walk generated by the set of random events. For comparison, the K-S and Kuiper statistics rely on deviations between the observed cumulative distribution of the random events and that of the given distribution; in essence, they are 1D random walks with some constraints. More specifically, for each period $p$ tested by the RA method, the discrete events are assigned phases corresponding to their occurrence within the period. A 2D random walk can then be constructed from the phases, where a step is taken in the direction $\cos(\phi_i)\,\hat{\bf e}_1 + \sin(\phi_i)\,\hat{\bf e}_2$, where $\hat{\bf e}_1$ and $\hat{\bf e}_2$ are the two directions of the random walk and the phase is $\phi_i = 2\pi t_i / p$. This walk essentially addresses the question of whether the phases are uniformly distributed or concentrated around a preferred phase. If, for example, the events are assigned constant weights, the sum of the walk can be described by the vector
$$ {\bf R}(p) = \frac{1}{\sqrt{N}} \left( \sum_i \cos\left(\frac{2\pi t_i}{p}\right) \hat{\bf e}_1 + \sum_i \sin\left(\frac{2\pi t_i}{p}\right) \hat{\bf e}_2 \right). $$
If the events exhibit no periodicity $p$, then the walk should be a random walk. The vector sum ${\bf R}(p)$ should be 0 on average, and the probability that the power $P_R \equiv R(p)^2$, also called the Rayleigh power, will be larger than a value $a$ is given by (e.g., Bai, ApJ, 397:584, 1992):
$$ \mathrm{Prob}\left( P_R > a \right) = \exp(-a). $$
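A minimal numerical sketch of this statistic may help fix the notation. The function name and the toy event times below are mine, not from the original analysis; the uniform case produces power of order unity, as the exp(−a) tail implies, while the periodic case produces a large power at the true period.

```python
import numpy as np

def rayleigh_power(times, p):
    """Unweighted Rayleigh power P_R = |R(p)|^2 for event times folded over period p."""
    phi = 2 * np.pi * np.asarray(times, dtype=float) / p
    return (np.sum(np.cos(phi))**2 + np.sum(np.sin(phi))**2) / len(phi)

rng = np.random.default_rng(0)
uniform = rng.uniform(0, 1000, 50)                            # no periodicity
periodic = 145 * np.arange(1, 7)[:, None] + rng.normal(0, 15, (6, 8))
print(rayleigh_power(uniform, 145.0))                         # O(1): Prob(P_R > a) = exp(-a)
print(rayleigh_power(periodic.ravel(), 145.0))                # large power at the 145 Myr period
```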
If a signal is present in the data, the phases will align when the data points are folded over the period present in the data, and a large $P_R$ will be obtained. Note that this probability is for a given frequency. If we are scanning a range of frequencies, then we should increase the probability according to the effective number of independent frequencies. For example, with a 1/1000 Myr⁻¹ resolution (because the data span about 1000 Myr), we have ~6 independent frequencies between 100 and 250 Myr (the range taken by Jahnke), each of which could randomly yield a large value.

The analysis is complicated by two important factors, irrespective of the actual method employed (whether K-S, Kuiper or Rayleigh). First, the meteorites do not all have the same weight. Some meteorites, for example, are poorly dated. If a meteorite has an error larger than half the period, then its phase in the Rayleigh analysis will be essentially random. Clearly, such "measurements" will only add noise but no real signal, thereby decreasing the S/N (as happened in Jahnke's analysis). Second, meteorites of the same iron group classification and similar ages are most likely products of the same parent object, which crumbled in the same event (or related events). This implies that not all clustering should be attributed to cosmic ray flux variations. We shall now discuss both.

Effect of meteoritic age error

If a meteorite is poorly dated, it is less likely to appear at the right phase to contribute to the signal (irrespective of the analysis). In the limit of a large error, it contributes no signal but increases the noise. It is therefore wise not to attribute the same weight to all meteorites. In the Rayleigh analysis, a Gaussian distribution for the phases implies that a meteorite is only expected to contribute with a reduced weight of $w_i = \exp\left(-2\pi^2\sigma^2/p^2\right)$. This is obtained by integrating the sine and cosine functions against a Gaussian distribution. Thus, a more appropriate form for the power $P_R(p)$ is:
$$ P_R(p) = \frac{1}{\sum_i w_i} \left[ \left( \sum_i \cos\left(\frac{2\pi t_i}{p}\right) w_i \right)^2 + \left( \sum_i \sin\left(\frac{2\pi t_i}{p}\right) w_i \right)^2 \right]. $$
That is, poorly dated meteorites contribute very little, as each is assigned a weight corresponding to its actual contribution towards a signal.

Note that the errors quoted by Vosage et al. are larger than the actual errors. This can be seen when the potassium age determinations are compared with other, independent exposure age methods (such as those using ¹⁰Be). Once this is done using the data of Lavielle et al. (EPSL, 170:93, 1999), while remembering the systematic correction necessary for methods employing short-lived isotopes, one finds that the typical discrepancy between the potassium age determinations and the other methods is about 70% of the quoted error of the Vosage et al. data (the error contribution from the other methods is expected to be small, according to their quoted errors).

Meteoritic multiplicity

As mentioned numerous times before, some of the clustering could be due to the break-up of a single parent body in one or a few related events. To avoid this problem, my first analysis suggested cleaning up the data by merging together meteorites of the same iron group classification. As pointed out by Jahnke, this does not come without its problems.
Moreover, the cleanup method suggested by Jahnke has an even worse problem, as it introduces an unacceptable systematic error in the ages. The Rayleigh periodogram offers a straightforward extension which introduces neither the systematic errors nor the ambiguity of deciding to which cluster a meteorite should be assigned. Instead of clustering the meteorites together, they can be analyzed separately, with the following two modifications. First, since meteorites with the same iron group classification and a small age separation (e.g., < 100 Myr) are likely to be related, they should be considered with a total weight of one meteorite (that is, their individual weights can be reduced by a factor equal to the number of members in the group). In some cases, many meteorites of the same classification span a longer range. For example, there are 18 IIIAB group meteorites in a 290 Myr range. Clearly, they do not all arise from the same event, nor are they 18 independent break-up events. Thus, it is reasonable to reduce the weight of each meteorite by a factor of 18/(290 Myr/100 Myr). With this modification, we neither inflate the statistical weight of a single break-up cluster nor introduce systematic errors by accidentally merging the wrong meteorites. However, the statistical analysis becomes more complicated and we cannot use the above equation for Prob(P_R > a); we use a Monte Carlo simulation instead.

The modified Rayleigh periodogram and the related errorgram

Following the above points, I carried out a modified Rayleigh analysis with variable weights (assigning a weight according to the error, and then limiting the total weight of each cluster, based on the recent, more stringent iron classification). To estimate the statistical significance, I ran a Monte Carlo simulation. Meteoritic break-up events were randomly chosen between 100 and 900 Myr, and then smeared with a 200 Myr Gaussian distribution to avoid edge effects. Each meteorite was assigned an error from those actually measured. A group of meteorites was simulated as a number of break-up events determined by the width of the group's age distribution, calculated as below; the age of each meteorite in a break-up event is the event age plus a random error realized from the assigned error of the given meteorite. The number of events a group comprises is calculated as above: total range/100 Myr. If this number is smaller than 1, a single event is taken. If it is larger than one, the number of events is taken as the truncated integer with a probability of one minus the fractional part, and as the next higher integer otherwise. (In the example mentioned above, the 2.9 events were realized as 3 events 90% of the time, and 2 events 10% of the time.)

The Rayleigh periodogram and the variable errors permit yet another analysis. Because the errors are not fixed, they too can independently contain a clustering signal. We define an error vector ${\bf E} = E_1 \hat{\bf e}_1 + E_2 \hat{\bf e}_2$ with:
$$ E_1(p) = \frac{1}{\sqrt{N}} \left( \sum_i \sigma_i \cos\left(\frac{2\pi t_i}{p}\right) - \frac{1}{N} \sum_i \sigma_i \sum_i \cos\left(\frac{2\pi t_i}{p}\right) \right), $$
$$ E_2(p) = \frac{1}{\sqrt{N}} \left( \sum_i \sigma_i \sin\left(\frac{2\pi t_i}{p}\right) - \frac{1}{N} \sum_i \sigma_i \sum_i \sin\left(\frac{2\pi t_i}{p}\right) \right), $$
where $\sigma_i$ is the error on measurement $t_i$. Its power can be defined as $P_E = E_1^2 + E_2^2$.
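The weighted power and the error vector just defined are easy to implement. Here is a sketch under my own naming and with toy data; the real analysis additionally caps the per-group weights, which is omitted here.

```python
import numpy as np

def weighted_power_and_errorgram(t, sigma, p):
    """Weighted Rayleigh power P_R(p) with w_i = exp(-2 pi^2 sigma_i^2 / p^2),
    plus the error vector E(p) and its power P_E(p), as defined above."""
    t, sigma = np.asarray(t, float), np.asarray(sigma, float)
    N = len(t)
    phi = 2 * np.pi * t / p
    c, s = np.cos(phi), np.sin(phi)
    w = np.exp(-2 * np.pi**2 * sigma**2 / p**2)      # poorly dated points get weight ~0
    P_R = (np.sum(w * c)**2 + np.sum(w * s)**2) / np.sum(w)
    E1 = (np.sum(sigma * c) - np.sum(sigma) * np.sum(c) / N) / np.sqrt(N)
    E2 = (np.sum(sigma * s) - np.sum(sigma) * np.sum(s) / N) / np.sqrt(N)
    return P_R, np.array([E1, E2]), E1**2 + E2**2

# toy exposure ages (Myr): clusters every ~145 Myr; the last point is poorly dated
ages = [140, 150, 290, 300, 435, 445, 580, 720]
errs = [20, 25, 30, 20, 25, 30, 20, 200]
print(weighted_power_and_errorgram(ages, errs, 145.0))
```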
Clearly, if there is no real clustering in the data, then the errors $\sigma_i$ are not expected to be correlated with the $t_i$, in which case $\sum_i \sigma_i \cos() \rightarrow (\sum_i \sigma_i)(\sum_i \cos())/N$, so that $E_1 \rightarrow 0$ and similarly $E_2 \rightarrow 0$, even if there is a notably large ${\bf R}$. In other words, if a large ${\bf R}$ is obtained as a statistical fluke, there is no a priori reason for a large ${\bf E}$ to arise, since there is no reason for the fluke to arrange the errors as well. However, if the signal in $P_R(p)$ is real and not coincidental, we expect a large $P_E(p)$ as well. This is because we expect the ages with larger errors to exhibit less clustering around the phase of maximum ${\bf R}$, since it is easier for poorly dated meteorites to stray from the preferred phase. We therefore expect ${\bf E}$ to point in the direction opposite to ${\bf R}$. This implies that a large ${\bf E}$ can be used as an independent indicator that the data contain a real correlation. Moreover, for consistency, we can also calculate the angle cosine $\mu$ between the vectors, which should be ~ -1 if the signal is real:
$$ \mu = \frac{{\bf E} \cdot {\bf R}}{|{\bf E}|\,|{\bf R}|}, $$
since the error size and the clustering are expected to be inversely correlated for a real signal.

Results of the Rayleigh analysis

I now proceed to perform the Rayleigh analysis using the stricter grouping suggested by Jahnke. The Rayleigh power spectrum $P_R(p)$, the independent error spectrum $P_E(p)$ and the angle between the ${\bf R}$ and ${\bf E}$ vectors were calculated as a function of frequency. The results are depicted in the figure.

Figure 2: Power spectra. From bottom to top: the two independent power spectra, the Rayleigh power $P_R$ and the independent error spectrum $P_E$. Except for the coincidence of the spectra at a frequency of f ≈ 7/1000 Myr⁻¹, the two power spectra do not correlate. Note also that the probability of getting a peak with a given amplitude falls exponentially with the amplitude. The middle panel depicts the combined spectrum, showing a remarkable peak at f ≈ 7/1000 Myr⁻¹. The top panel depicts the angle between ${\bf R}$ and ${\bf E}$. For a real signal, not only do we expect large $P_R$ and $P_E$, but this angle is expected to be of order 180°, and indeed it is.

The first point evident from the figure is that $P_R(f)$ has a prominent peak at f ≈ 7/1000 Myr⁻¹. A rough estimate of the probability of obtaining such a peak between 1/250 Myr⁻¹ and 1/100 Myr⁻¹ (the range used by Jahnke) is 6 × exp(−5.5) ≈ 2.5% (6 being the number of independent frequencies, assuming the meteoritic ages span about 1000 Myr). A Monte Carlo simulation (as described above) gives a more reliable estimate, which is actually 6%. That is, there is a 6% chance that a random set of meteorites (even one with internal multiplicities from break-ups) would yield as large a $P_R$ as observed. The results of the appendix also explain why the analysis performed by Jahnke could not yield any signal. Degrading the 6% statistical significance any further, by letting poorly dated meteorites deteriorate the S/N or by using the Kuiper statistic with its lower statistical efficiency (relative to the Rayleigh analysis, for this type of signal), is evidently more than sufficient to remove any trace of the 145 Myr periodicity. But this is not all. Besides $P_R$ there is also the independent $P_E$ signal. The Monte Carlo simulation yields a probability of 3.7% of obtaining the observed signal in the error distribution.
The combined probability obtained in the Monte Carlo simulation is 0.2% (since this is roughly the product of the probabilities of the two spectra, it indicates that the signals are indeed independent). Last, for consistency, we see that the angle between ${\bf R}$ and ${\bf E}$ is around 180°; that is, the vectors point in opposite directions, as predicted. Clearly, the 145 Myr signal is present in the data at high statistical significance.

We can refine the test and ask what the probability is that a meteoritic exposure age signal will be present in the data which agrees with the periodicity found in the ice-age epochs on Earth, that is, with a period of 145 ± 7 Myr. The probabilities for this to occur are 1.0%, 0.6% and 0.06% for the Rayleigh spectrum, the error spectrum and the combined statistic, respectively. Clearly, it would be sheer coincidence for the meteoritic exposure ages to happen to (a) exhibit a periodicity in their exposure ages, (b) independently agree in their error distribution, (c) agree in the phase between the errors and the clustering, and (d) happen to agree with the climatic periodicity! Note also that this does not even take into account the fact that the phase of the 145 Myr period in the meteoritic data agrees with the phase of the ice-age epochs (which would contribute yet another factor of 5 to the (im)probability estimate).

I have shown above that Jahnke's analysis is critically flawed by its inclusion of poorly dated meteorites. As I demonstrate in the appendix, this can easily reduce a 1% significant peak to the 10% level. This is the main reason why the 145 Myr peak disappeared from Jahnke's analysis, not the fact that he was more stringent in the clustering he used. As for the two new clustering methods introduced by him, they give rise to unacceptable systematic errors and should therefore not be used. In the new method described above, based on the Rayleigh analysis, no re-clustering is assumed, only a reduction of the meteoritic weights. This has the advantage of introducing no systematic error, and it lacks the ambiguity present when clustering a large group of meteorites spread over a long period. Moreover, the Rayleigh analysis appears to be a better method for the type of signals we are testing for (see also Leahy et al., ApJ, 272:256, 1983, who compare the Rayleigh analysis to epoch folding and reach similar conclusions). Once we use a method which is expected to yield a better S/N, we recover the 145 Myr periodicity at a high confidence level, even if we restrict ourselves to the much more stringent data set defined by Jahnke.

It is also worth noting that the periodicity found in the meteoritic exposure ages is not unrelated to other signals, which consistently show the same period and phase. In particular, the cosmic ray flux is predicted, using purely astronomical data, to vary with roughly the same period (135 ± 25 Myr) and phase. Thus, the fact that a signal is present should not be surprising at all. On the contrary, it would have been a great surprise if no signal had been observed in the meteorites! Since cosmic rays are suspected of being a climate driver (first suggested by Ney, already in 1959!), it is also no surprise that various sedimentary and, independently, geochemical reconstructions of the terrestrial climate reveal climate variations with the same period and phase as those seen in the exposure ages of meteorites.
Appendix: Comparison between the Rayleigh analysis and the Kuiper statistic for finding a periodic signal

Above, I presented a statistical analysis based on the Rayleigh periodogram. The question arises: why is this analysis better than the Kuiper analysis (the modified K-S analysis used by Jahnke)? The answer is that in general it does not have to be, but it certainly is for the type of signals we are looking for. In particular, the K-S and Kuiper analyses are better suited for finding deviations from a nonuniform distribution, or non-periodic deviations.

The aim of the statistical analyses introduced, either by Jahnke or by myself, is to estimate the probability with which one can rule out the null hypothesis that the meteorites are distributed homogeneously. To see the type of statistical significances the methods can yield, we will look at a simple yet realistic distribution for the ages, and calculate the significance with which the above null hypothesis can be ruled out. Let's assume that the signal in the data has a probability distribution function of
$$ P(t) = \frac{1}{\Delta T}\left[1 + \alpha \sin\left(\frac{2\pi t}{p_0}\right)\right], $$
where ΔT ~ 900 Myr is the total interval over which we have meteorites. If we have N measurements with the above distribution, the normalized Rayleigh amplitude will be:
$$ {\bf R}(p) = \frac{1}{\sqrt{N}} \left( \sum_i^N \cos\left(\frac{2\pi t_i}{p}\right) \hat{\bf e}_1 + \sum_i^N \sin\left(\frac{2\pi t_i}{p}\right) \hat{\bf e}_2 \right). $$
We are interested in the expected signal in the large-N limit. Thus, we can approximate the sum with an integral, $\sum_i \rightarrow N \int P(t)\,dt$:
$$ {\bf R}(p) = \frac{\sqrt{N}}{\Delta T} \int \left[1 + \alpha \sin\left(\frac{2\pi t}{p_0}\right)\right] \left( \cos\left(\frac{2\pi t}{p}\right) \hat{\bf e}_1 + \sin\left(\frac{2\pi t}{p}\right) \hat{\bf e}_2 \right) dt. $$
We are also interested in the peak amplitude, obtained when p = p₀. To first approximation, we have $\overline{\sin} \approx 0$ and $\overline{\sin\cos} \approx 0$, while $\overline{\sin^2} \approx 1/2$. Hence, we find:
$$ {\bf R}(p_0) = \frac{\sqrt{N}}{\Delta T}\,\Delta T\,\frac{\alpha}{2}\,\hat{\bf e}_2 = \frac{\sqrt{N}\,\alpha}{2}\,\hat{\bf e}_2. $$
The power is then:
$$ P_R = R^2(p_0) = \frac{N\alpha^2}{4}. $$
This should be compared with the null hypothesis. If the events are random, then we expect $\overline{R} = 0$, and the probability that $R^2$ will be larger than a value a is given by
$$ \mathrm{Prob}\left( P_R > a \right) = \exp(-a). $$
Thus, the probability that random events will produce a signal as significant as that of a real sinusoidal signal is:
$$ \mathrm{Prob}\left( P_R > a_{\mathrm{signal}} \right) = \exp\left(-\frac{N\alpha^2}{4}\right). $$
If we aim at a 1% probability, we find that the number of events we need is roughly $N_{\min} \approx 18/\alpha^2$ (i.e., 18 measurements for α = 1, or 72 for α = 1/2).

In the case of the Kuiper statistic, we first need to calculate the largest displacement between the cumulative distribution and the homogeneous distribution.
Given the above probability distribution function, folded between 0 and p₀, the cumulative function normalized to p₀ = 1 is:
$$ \mathrm{Prob}(t < \tau) = \tau - \frac{\alpha}{2\pi}\left[\cos\left(2\pi\tau + \phi_0\right) - \cos\left(\phi_0\right)\right], $$
while for the homogeneous case it is:
$$ \mathrm{Prob}(t < \tau) = \tau. $$
The maximum distance above plus the maximum distance below the homogeneous distribution is independent of $\phi_0$ and gives:
$$ V = \frac{\alpha}{\pi}. $$
According to Numerical Recipes, the probability of getting a fluctuation this large is given by:
$$ \mathrm{Prob}(>V) = Q_{KP}\left(\left[\sqrt{N} + 0.155 + 0.24/\sqrt{N}\right]V\right), $$
where
$$ Q_{KP}(\lambda) \equiv 2\sum_{j=1}^{\infty}(4j^2\lambda^2 - 1)\exp(-2j^2\lambda^2). $$
With this function, one can calculate the N required to reach a given significance. If we aim at the 1% goal with α ≈ 1, we require about 37 measurements. For α ≈ 1/2, we require 153 points. Thus, for this distribution function, the Rayleigh analysis requires half as many points to reach the same statistical significance. Conversely, for the same number of points, the Kuiper statistic yields a less significant result for the type of probability distribution function we are interested in. For example, if α ≈ 0.7, 40 points are required to reach a 1% significance with the Rayleigh statistic; the same 40 points would yield only a 28% significance (which is insignificant!) with the Kuiper statistic.

Appendix B: Degradation by poorly dated meteorites

The above results also explain why adding noisy points heavily degrades the statistical significance. Suppose we double the number of points by adding noisy points that have no correlation with the signal. In such a case, we double N but decrease α by a factor of 2. Since the significance is a function of Nα², the significance deteriorates. For example, if we have about 40 points with α ≈ 0.7, the significance deteriorates from 1% to 10%. It is therefore important not to include in the analysis points with poor error determinations.
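The appendix numbers are easy to reproduce. Below is my own sketch, with the Q_KP series truncated at a finite number of terms (an assumption that is harmless here since the terms decay extremely fast):

```python
import numpy as np

def q_kp(lam, terms=100):
    """Kuiper tail probability Q_KP(lambda), truncated series."""
    j = np.arange(1, terms + 1)
    return 2 * np.sum((4 * j**2 * lam**2 - 1) * np.exp(-2 * j**2 * lam**2))

def kuiper_prob(N, alpha):
    V = alpha / np.pi
    return q_kp((np.sqrt(N) + 0.155 + 0.24 / np.sqrt(N)) * V)

def rayleigh_prob(N, alpha):
    return np.exp(-N * alpha**2 / 4)

for alpha, N in [(1.0, 18), (1.0, 37), (0.7, 40), (0.35, 80)]:
    print(alpha, N, rayleigh_prob(N, alpha), kuiper_prob(N, alpha))
# The Rayleigh statistic reaches ~1% with roughly half the points Kuiper needs,
# and doubling N while halving alpha (Appendix B) degrades exp(-N*alpha^2/4)
# from ~1% to ~10%.
```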
A Quantized Johnson-Lindenstrauss Lemma: The Finding of Buffon's Needle

Abstract: In 1733, Georges-Louis Leclerc, Comte de Buffon, in France, laid the ground for geometric probability theory by posing an enlightening problem: what is the probability that a needle thrown randomly on a floor made of equispaced parallel strips lies across two of them? In this work, we show that the solution to this problem, and its generalization to \(N\) dimensions, allows us to discover a quantized form of the Johnson-Lindenstrauss (JL) Lemma, i.e., one that combines a linear dimensionality reduction procedure with a uniform quantization of precision \(\delta>0\). In particular, given a finite set \(\mathcal S \subset \mathbb R^N\) of \(S\) points and a distortion level \(\epsilon>0\), as soon as \(M > M_0 = O(\epsilon^{-2} \log S)\), we can (randomly) construct a mapping from \((\mathcal S, \ell_2)\) to \((\delta\mathbb Z^M, \ell_1)\) that approximately preserves the pairwise distances between the points of \(\mathcal S\). Interestingly, compared to the common JL Lemma, the mapping is quasi-isometric and we observe both an additive and a multiplicative distortion on the embedded distances. These two distortions, however, decay as \(O(\sqrt{\log S}/M)\) as \(M\) increases. Moreover, for coarse quantization, i.e., for high \(\delta\) compared to the set radius, the distortion is mainly additive, while for small \(\delta\) we tend to a Lipschitz isometric embedding. Finally, we prove the existence of a "nearly" quasi-isometric embedding of \((\mathcal S, \ell_2)\) into \((\delta\mathbb Z^M, \ell_2)\). This one involves a non-linear distortion of the \(\ell_2\)-distance in \(\mathcal S\) that vanishes for distant points in this set. Noticeably, the additive distortion in this case decays more slowly, as \(O(\sqrt[4]{\log S}/M)\).

Keywords: compressed sensing, Johnson-Lindenstrauss lemma, nonlinear dimensionality reduction, quantization, random projections
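As an illustration of the kind of mapping the abstract describes, here is a minimal sketch of a dithered, uniformly quantized Gaussian random projection into \(\delta\mathbb Z^M\). The floor-plus-dither construction, the function name and the parameter values are my assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def quantized_jl(X, M, delta, rng):
    """Dithered, uniformly quantized Gaussian random projection into delta*Z^M."""
    N = X.shape[1]
    Phi = rng.normal(size=(M, N))            # i.i.d. Gaussian sensing matrix (assumed)
    xi = rng.uniform(0, delta, size=M)       # uniform dither, one per output coordinate
    return delta * np.floor((X @ Phi.T + xi) / delta)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 512))               # S = 20 points in R^512
Y = quantized_jl(X, M=2048, delta=0.5, rng=rng)

# Up to the quantization distortion, the normalized l1 distance in the image
# estimates the l2 distance in R^N, since E|g| = sqrt(2/pi) for g ~ N(0,1).
i, j = 3, 7
d_l2 = np.linalg.norm(X[i] - X[j])
d_l1 = np.sqrt(np.pi / 2) * np.mean(np.abs(Y[i] - Y[j]))
print(d_l2, d_l1)                            # close, up to the predicted distortions
```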
The energy expended by a parakeet as a function of its velocity \(v\) (in \(\mathrm{km/h}\)) is:
$$E(v)=\frac{1}{v}\left[0.074(v-35)^{2}+22\right]$$
a. What velocity minimizes energy expenditure?
b. Read an article on how mathematical methods can be used to study animal behavior, and write a paragraph on whether you think such methods are valid. You may wish to begin with the reference cited in this exercise.
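Part (a) reduces to a one-line calculus computation. A short sketch of my own (not from the exercise source) verifies it numerically:

```python
import numpy as np

# E(v) = (1/v) * (0.074*(v-35)**2 + 22)
#      = 0.074*v - 5.18 + 112.65/v   after expanding,
# so E'(v) = 0.074 - 112.65/v**2 = 0  =>  v = sqrt(112.65 / 0.074).
v_star = np.sqrt((0.074 * 35**2 + 22) / 0.074)
print(v_star)                       # ~39.0 km/h

# quick numerical check on a grid
v = np.linspace(1, 100, 100_000)
E = (0.074 * (v - 35)**2 + 22) / v
print(v[np.argmin(E)])              # agrees with the closed form
```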
Volume 3 (2018): Issue 3 (August 2018)
Journal of Data and Information Science
Delayed recognition: recent developments and a proposal to study this phenomenon as a fuzzy concept
Ronald Rousseau
Published online: 15 Aug 2018 | Page range: 1-13 | Received: 12 Jun 2018 | Accepted: 10 Jul 2018
DOI: https://doi.org/10.2478/jdis-2018-0011
© 2018 Ronald Rousseau, published by Sciendo. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.

A publication suffering from delayed recognition is a publication that received very little attention shortly after publication, but received recognition later. Stephen Cole proposed to use citations as a proxy for recognition (Cole, 1970). Although recognition can be given in many ways (receiving tenure is another important way in which scientists are recognized for their achievements), collecting received citations is the most practiced way to operationalize the notion of delayed recognition. This contribution is not meant as a review of the topic; instead we concentrate on a few recent developments. Yet, among the many papers written by colleagues on delayed recognition we single out for mention: (Bornmann et al., 2018; Burrell, 2005; Du & Wu, 2016; El Aichouchi & Gorry, 2018; Garfield, 1980; Glänzel et al., 2003; Ke et al., 2015; Li & Ye, 2012; van Raan, 2004, 2015, 2017). In this short paper we will discuss three aspects: naming of the phenomenon, recent methods based on a cumulative citation curve, and a re-interpretation of delayed recognition as a fuzzy concept.

Naming of the concept

The concept of delayed recognition in relation to persons or articles has also been described as premature discovery, suffering from Mendel's syndrome, being a late bloomer, or being ahead of one's time. Mendel's work on the rules of heredity is often considered the prototype case. Yet Mendel's work was not totally unknown before the 20th century, as mentioned by Garfield (1970), with reference to Zirkle (1964).

In an article published in 2004, Ton van Raan proposed the name "sleeping beauty" for an article suffering delayed recognition (van Raan, 2004). This catchy term took on immediately: on June 3, 2018 van Raan's article had already received 176 citations in the Web of Science (WoS). When the value, importance or usefulness of such a "sleeping beauty" is finally recognized in another article, denoted here as P, serving as a wake-up call for the scientific community (leading to general recognition of the "sleeping beauty"), article P is referred to as the Prince, continuing the metaphor of the story of the Sleeping Beauty. The act of "awakening" the sleeping beauty is then sometimes referred to as "the kiss." Sugimoto and Mostafa (2018) recalled that, in the context of sleeping beauties, Braun et al. (2010, p. 198) discussed the "ideal couple" and further sexualized the metaphor by discussing male and female dominance and "absolute superiority": a measurement of the relative citations achieved by the prince and the sleeping beauty. Finally, they introduced the notion of chastity of sleeping beauties, in terms of the number of articles that awoke the dormant article, and mentioned the possible unfaithful behavior of princes. Clearly a form of sexualization of citation trajectories has been, and still is, going on.
It is clear that these types of metaphors, continuing with "brave girls" for articles which are immediately recognized (Ye & Bornmann, 2018), have the tendency to become more and more gender-loaded. For this reason Sugimoto and Mostafa (2018) wrote an editorial decrying this "clear violation of sociocultural norms". They made a plea to future authors that the use of any such terms, despite connections to historical roots in the literature, should be avoided. As a consequence they stated that JASIST's author guidelines will be adapted to make this policy explicit and clear. In reaction, Hu et al. (2018) proposed the gender-neutral terms "hibernator" and "awakener" to replace the terms "sleeping beauty" and "prince". It is, of course, an open question whether any metaphor is really useful.

A new approach to determine articles with delayed recognition based on a cumulative citation curve

The approach proposed by Ke et al. (2015)

Although being a sleeping beauty sounds like a yes/no situation, it is clear that delayed recognition is not a clear-cut phenomenon, and a sleeping beauty in the eyes of one person may not be one in the eyes of a colleague. A similar observation holds in relation to the citation database used for collecting citations. To solve this problem Ke et al. (2015) turned delayed recognition into a time-dependent continuous phenomenon by defining a beauty coefficient at time T, denoted as B(T). In the next section we return to the fact that these authors turned a yes/no phenomenon into a continuous one. Now we focus on the practical way in which they did this.

Let c(t) denote the yearly citation curve of an article, i.e., c(t) is the number of citations received in year t. The publication year is year t = 0 and t takes values between 0 and T. Let $c_m > 0$ be the maximum yearly number of citations received by this article, obtained in year $t_m$, with $0 < t_m \leq T$. The line connecting $(0, c(0))$ and the peak $(t_m, c_m) = (t_m, c(t_m))$, referred to as the recognition line and denoted $y(t)$, has equation:
$$y(t)=\frac{c_{m}-c(0)}{t_{m}}t+c(0) \tag{1}$$
The beauty coefficient at time T is then defined as:
$$B(T)=\sum_{t=0}^{t_{m}}\frac{\frac{c_{m}-c(0)}{t_{m}}t+c(0)-c(t)}{\max\{1,c(t)\}} \tag{2}$$
The numerator of a term in B(T) is equal to the (signed) difference between the recognition line and the citation value. As the denominator of this term is equal to the number of citations (unless this number is zero, in which case the denominator is 1), each term in the sum determining B(T) is a relative value. If c(t) has a concave trajectory then B(T) is negative. If c(t) is approximately linear then B(T) is (close to) zero. If c(t) is convex then B(T) is positive. If each term in the sum determining B(T) is non-negative, then the following properties hold. All else staying the same, B(T) increases when $c_m$ increases. All else staying the same, B(T) decreases when c(t), with t fixed and different from 0 or $t_m$, increases, as the numerator decreases and the denominator increases.
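Equations (1) and (2) translate directly into code. A minimal sketch (the function name, the zero-return convention for a peak in the publication year, and the toy citation histories are mine):

```python
import numpy as np

def beauty_coefficient(c):
    """B of Ke et al. (2015) from yearly citations c[0..T], with c[0] the publication year."""
    c = np.asarray(c, dtype=float)
    t_m = int(np.argmax(c))                        # year of the citation peak
    if t_m == 0:
        return 0.0                                 # peak in year 0: no sleeping period (my convention)
    t = np.arange(t_m + 1)
    line = (c[t_m] - c[0]) / t_m * t + c[0]        # recognition line, eq. (1)
    return float(np.sum((line - c[:t_m + 1]) / np.maximum(1.0, c[:t_m + 1])))

print(beauty_coefficient([0, 0, 0, 0, 1, 2, 10, 40]))  # convex history: large positive B
print(beauty_coefficient([0, 10, 20, 30, 40]))         # linear growth: B = 0
```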
Using cumulative citation curves

In recent papers, Du and Wu (2017, 2018) note some disadvantages of the definition proposed by Ke et al. (2015), the most important one being the high importance given to the peak. They claim that the determination of the B-value works well for publications that have large numbers of citations every year after discovery, but that for publications with fewer citations it may lead to unwanted results. They, moreover, consider the role of the denominator in the original definition as just a way to avoid division by zero. For these reasons these authors propose a different approach, based not on the citation curve c(t) but on the cumulative citation curve $C(t)=\sum_{n=0}^{t}c(n)$. Using a variation on the Du and Wu approach based on the cumulative citation curve, we will propose a description of delayed recognition as a fuzzy phenomenon.

Delayed recognition as a fuzzy phenomenon

We now propose a framework to study delayed recognition of an article at a given moment in time, say T. More precisely, we consider the question: does this article suffer delayed recognition, or has it in the past (while now it perhaps behaves like a normal article, already receiving a declining number of citations)? Studying this question we consider three aspects: "delayed," "recognition" and fuzzy membership.

When it comes to the "delayed" part, one must wait a certain period before one may say that there is a delay. In this study we wait at least ten years (see further for details), but further investigations are needed to study the influence of this starting time. Does it matter if one starts investigations 10 years after publication, or is 15 or 20 years better?

Next we come to the "recognition" part. We propose to concentrate on the 1% most cited publications of the same publication year as the publication under investigation. A further choice must be made whether to include all publication types in this 1%, or only normal articles (or normal articles and reviews). We think that all choices here are valid, i.e., have some scientific value, but the choice must be stated clearly.

Finally we come to the most difficult part: constructing a framework that yields a fuzzy membership value. This value, between zero and one, must in a meaningful way express to which extent an article can be said to belong to the fuzzy set of publications with delayed recognition. This membership function, as calculated at time T, is denoted as DR(T). If an article is not "recognized", i.e., it does not belong to the 1% most cited, it is not ahead of its time and its DR(T) value is set equal to 0 for any T.

Our approach is based on ideas from Ke et al. (2015) and Du and Wu (2018). To the best of our knowledge Ke et al. (2015) were the first to state that suffering delayed recognition is not a yes/no situation. They introduced a parameter-free measure that quantifies the extent to which a specific paper can be considered to suffer delayed recognition. Papers with citations growing linearly in time have B = 0; B is non-positive for papers whose citation trajectory is a concave function of time, and positive for papers with a convex citation curve. Du and Wu (2017, 2018) proposed a similar measure, but based on the cumulative citation curve. We will calculate a partial membership function, denoted as K(t), for each time t between 10 and T.
The final DR(T) value is then equal to:
$$DR(T)=\max_{10 \leq t \leq T}\{0, K(t)\} \tag{3}$$
The use of the maximum function in formula (3) prevents the DR coefficient from diminishing over time, which would go against the definition of the concept of delayed recognition: once an article is accepted to have suffered delayed recognition, this cannot be undone.

When determining K(t) for given t, we define C(n) as the cumulative number of received citations at the beginning of year n (where year 0 is the publication year), hence C(0) = 0; c(1) = C(1) denotes the number of citations received during the publication year. If C(t) = 0 then K(t) is set equal to 0. If C(t) ≠ 0, we consider the line y(n) connecting the origin (0, 0) with the point (t, C(t)). This line, which we call the recognition line at time t, has equation
$$y(n)=\frac{C(t)}{t}n \tag{4}$$
Now we calculate the sum, over each n with 0 ≤ n ≤ t, of the differences between the line y(n) and the cumulative citation curve C(n). This sum is denoted as S(t):
$$S(t)=\sum_{n=0}^{t}\left(\frac{C(t)}{t}n-C(n)\right) \tag{5}$$
The largest possible value of S(t) occurs when all C(n) are zero except C(t), which happens if the publication receives its first citation in year t. We are not interested in that year as such, but just use this value as a reference. For this case, $S(t)=\sum_{n=0}^{t-1}\frac{C(t)}{t}n=\frac{C(t)}{t}\frac{(t-1)t}{2}=\frac{(t-1)C(t)}{2}$. Finally K(t) is defined as the ratio of the observed S(t) value over the largest possible one:
$$K(t)=\frac{2}{(t-1)C(t)}S(t) \tag{6}$$
leading to a value between -1 and +1. K(t) is negative if C(n) is always situated above the line y(n), and certainly positive when C(n) is always situated under this line. Yet K(t) may also be positive when parts of C(n) lie above the recognition line y(n). We note that if an article receives its first citation in year 10 and is "recognized", then, based on equation (3), its DR(T) value is equal to one for all T ≥ 10.

Theoretical examples

If the cumulative citation curve is everywhere concave, then K(t) is always negative and DR(T) = 0 for every T. Similarly, if c(n) is constant, c(n) = a > 0, then C(n) = a·n and the recognition line has equation y(n) = a·n. Clearly, also here DR(T) = 0, agreeing with the fact that there is no delayed recognition.

If citations grow linearly in time, then c(n) = b·n (b > 0), C(n) = b·n(n+1)/2, C(T) = b·T(T+1)/2, and hence $y(n) = \frac{b(T+1)}{2}n$. Consequently, DR(T) = 1/3 (the calculation is included in the appendix). This result differs from the one obtained using Ke et al.'s B: their B-value is zero, although citations grow with time, indicating a delay in recognition. We further remark that this value for linear growth can be used as a benchmark when comparing with other citation curves. Recall that linear growth in citations corresponds to quadratic growth in cumulative citations, as illustrated in Figure 1 for b = 0.5.

Figure 1: Quadratic cumulative growth corresponding to linear growth.
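A compact sketch of equations (3)-(6) follows (the function names and the toy linear-growth data are mine); it also reproduces the 1/3 benchmark just mentioned:

```python
import numpy as np

def K(C, t):
    """Partial membership K(t), eq. (6), from cumulative citations C[0..T] with C[0] = 0."""
    if C[t] == 0:
        return 0.0
    n = np.arange(t + 1)
    S = np.sum(C[t] / t * n - C[:t + 1])      # eq. (5)
    return 2.0 * S / ((t - 1) * C[t])

def DR(C, T):
    """DR(T) = max over 10 <= t <= T of max(0, K(t)), eq. (3)."""
    return max(0.0, max(K(C, t) for t in range(10, T + 1)))

# Benchmark: linear growth in citations, c(n) = b*n, so C(n) = b*n*(n+1)/2.
b, T = 0.5, 40
n = np.arange(T + 1)
C = b * n * (n + 1) / 2
print(K(C, T))   # 1/3 exactly, for every t, as derived in the paper's appendix
print(DR(C, T))  # hence DR(T) = 1/3
```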
Some real-world examples

In this contribution we provide three examples, leaving more investigations to further research.

As a real-world example we begin with Romans' article (Romans, 1986), an article studied by van Raan (2004). This article got its first citation in 1995 (n = 10), followed by 11 more in 1996. Since then it kept on receiving citations, with a peak of 32 citations in 1999. The WoS includes 520,862 publications of article type published in 1986. Among these, the article ranked 5,209 received 229 citations. As Romans' article received 374 citations, it belongs to the top 1% most cited (data collected on June 5, 2018). Figure 2 shows the cumulative citation curve, the recognition line for the year 2017, when K(2017) is 0.241, and the recognition line in the year 1996. Its DR value is equal to 1.0, obtained in the year 1996, which is the first year for which we perform a calculation. Hence DR(T) = 1 for all T ≥ 10. We recall that these calculations are performed at the beginning of the year: n = 0 corresponds to the publication year, and citations received during the publication year are associated with the year n = 1.

Figure 2: Cumulative citation curve of Romans (1986) and two recognition lines.

Next we consider Leakey et al. (1964). This article has been studied as a sleeping beauty in Tobias (1996). The WoS contains 127,018 publications of article type published in 1964. Among these, the article ranked 1,271 received 242 citations. As Leakey et al. (1964) received 348 citations, it belongs to the top 1% most cited (data collected on June 5, 2018). Its DR value is 0.225, obtained in the latest year studied, namely 2017. This value is smaller than the benchmark value of 0.333 obtained for linear growth. Its lowest K(n) value is -0.084, obtained for n = 39 (the year 2003). Note that this value was obtained several years after Tobias (1996) had declared this article to be a sleeping beauty! Figure 3 shows the cumulative citation curve, its final recognition line, and the situation in the year 2003, when the recognition line was situated under the citation curve.

Figure 3: Cumulative citation curve of Leakey et al. (1964) and two recognition lines.

Table 1. K-values for Leakey et al. (1964).

Year t  K(t)     Year t  K(t)     Year t  K(t)     Year t  K(t)
1974    0.014    1985   -0.019    1996   -0.032    2007   -0.007
1975    0.020    1986   -0.055    1997   -0.032    2008    0.027
1977   -0.006    1988   -0.011    1999   -0.059    2010    0.060
1978   -0.013    1989    0.013    2000   -0.057    2011    0.085
1982    0.023    1993    0.005    2004   -0.041    2015    0.164

This leads us to question Tobias' paper (1996). What did he claim? It is important to know that Tobias was actually a co-author of the Leakey et al. (1964) paper. In his paper from 1996 he described how their findings were not accepted by their colleagues, but how step by step the original objections against their findings and the corresponding theory fell away until, in his words, by 1984 their findings were accepted. This happened twenty years after their publication, and hence these findings were, rightly, described as a premature discovery. Honesty forces us to add that even today the exact position of Homo habilis in the development of the genus Homo has not been convincingly determined. The citation curve does not show any sign of this observation. We think this illustrates the very important fact that using citations is just an operationalization, and experts may, rightly, have other opinions. We note that this article and Romans' are also under-cited influential articles and hence citation chimeras in the sense of Hu and Rousseau (2018). This term refers to the fact that these articles are exceptional in terms of received citations and in terms of second-generation citations.
Finally, we consider one of our own articles, namely Otte and Rousseau (2002). Again, we first check whether it belongs to the top 1% most-cited articles. The WoS contains 813,472 publications of article type published in 2002. Among these, the article ranked 8,135 received 280 citations. As Otte and Rousseau (2002) received 368 citations, it belongs to the top 1% most cited (data collected on June 5, 2018). The K(t)-values first decline somewhat before they start increasing. Only in the latest year is the maximum reached: DR(2017) = 0.523. The cumulative citation curve and the recognition line for 2017 are shown in Figure 4.

Figure 4: Cumulative citation curve of Otte and Rousseau (2002).

Table 2. K-values for Otte and Rousseau (2002).

Year t  K(t)
2011    0.515

We reviewed recent developments related to the study of delayed recognition, leading to the idea of considering delayed recognition as a fuzzy concept, and we proposed a method to obtain fuzzy membership values. One of the requirements for suffering delayed recognition is that the article must belong to the 1% most-cited ones. This means that at most 1% of the articles under consideration have a non-zero fuzzy membership value, and probably much less than 1%. The value 0.333 for linear growth in citations can be considered a benchmark for comparisons. Besides proper hibernators (sleeping beauties), which have a long period with no or few citations, articles suffering delayed recognition may have a convex cumulative citation curve, such as in the case of linear growth in citations. Examples of both types are shown in this contribution: Romans (1986) is a proper hibernator, while Leakey et al. (1964) and Otte and Rousseau (2002) are examples of the second type.

We made the important observation that using citations to study delayed recognition is just a convenient operationalization of the concept, and that experts may agree on delayed recognition long before this is shown by citations, as illustrated by the case of Leakey et al. (1964). This leads to the question: how good (adequate) is citation analysis for detecting premature discoveries?

As this contribution is just a feasibility study, many questions are left unanswered, such as: What are typical values for membership functions? Wouldn't it be better to use normalized citation scores instead of the absolute ones used here? If so, how to normalize: with respect to the database, with respect to the field, or both (Bornmann et al., 2018)? Can this framework, by focusing on negative values and years immediately after the publication year, also be used for characterizing early recognition (flash-in-the-pan)? If so, how? These questions are left as topics for further research. Finally we mention the obvious limitation: like all citation studies, this one too is database dependent.

References

Bornmann, L., Ye, A.Y., & Ye, F.Y. (2018). Identifying "hot papers" and papers with "delayed recognition" in large-scale datasets by using dynamically normalized citation impact scores. Scientometrics. https://doi.org/10.1007/s11192-018-2772-0
Braun, T., Glänzel, W., & Schubert, A. (2010). On Sleeping Beauties, Princes and other tales of citation distributions... Research Evaluation, 19(3), 195–202.

Burrell, Q.L. (2005). Are "Sleeping Beauties" to be expected? Scientometrics, 65(3), 381–389.

Cole, S. (1970). Professional standing and the reception of scientific discoveries. American Journal of Sociology, 76, 286–306.

Du, J., & Wu, Y. (2016). A bibliometric framework for identifying "Princes" who wake up the "Sleeping Beauty" in challenge-type scientific discoveries. Journal of Data and Information Science, 1(1), 50–68.

Du, J., & Wu, Y. (2017). A parameter-free index for identifying under-cited sleeping beauties in science. In Proceedings of ISSI 2017, the 16th International Conference on Scientometrics and Informetrics (pp. 148–157). Wuhan University, China.

Du, J., & Wu, Y. (2018). A parameter-free index for identifying under-cited sleeping beauties in science. Scientometrics. https://doi.org/10.1007/s11192-018-2780-0

El Aichouchi, A., & Gorry, P. (2018). Paul Hagenmüller's contribution to solid state chemistry: A scientometric analysis. Journal of Solid State Chemistry, 262(June), 156–163.

Garfield, E. (1970). Would Mendel's work have been ignored if the Science Citation Index was available 100 years ago? Current Contents. Reprinted in: Essays of an Information Scientist, Vol. 1, 1962–1973 (pp. 69–70). Philadelphia: ISI, 1973.

Garfield, E. (1980). Premature discovery or delayed recognition - why? Current Contents, 21. Reprinted in: Essays of an Information Scientist, Vol. 4, 1979–1980 (pp. 488–493). Philadelphia: ISI, 1980.
Glänzel, W., Schlemmer, B., & Thijs, B. (2003). Better late than never? On the chance to become highly cited only beyond the standard bibliometric time horizon. Scientometrics, 58(3), 571–586.

Hu, X.J., Zhang, Y.N., Hu, X.Y., & Rousseau, R. (2018). Hibernators, their awakeners and the roles of subsequent authoritative citers. Malaysian Journal of Library & Information Science, 23(1), 103–113.

Hu, X.J., & Rousseau, R. (2018). Do citation chimeras exist? The case of under-cited influential articles suffering delayed recognition. Journal of the Association for Information Science and Technology (submitted).

Ke, Q., Ferrara, E., Radicchi, F., & Flammini, A. (2015). Defining and identifying sleeping beauties in science. Proceedings of the National Academy of Sciences of the United States of America, 112(24), 7426–7431.

Leakey, L.S.B., Tobias, P.V., & Napier, J.R. (1964). A new species of the genus Homo from Olduvai Gorge. Nature, 202(4927), 7–9.

Li, J., & Ye, F.Y. (2012). The phenomenon of all-elements-sleeping-beauties in scientific literature. Scientometrics, 92(3), 795–799.

Otte, E., & Rousseau, R. (2002). Social network analysis: A powerful strategy, also for the information sciences. Journal of Information Science, 28(6), 441–453.

Romans, L.J. (1986). Massive N = 2a supergravity in ten dimensions. Physics Letters B, 169(4), 374–380.

Sugimoto, C.R., & Mostafa, J. (2018). A note of concern and context: On careful use of terminologies. Journal of the Association for Information Science and Technology, 69(3), 347–348.

Tobias, P.V. (1996). Premature discoveries in science with especial reference to "Australopithecus" and "Homo Habilis". Proceedings of the American Philosophical Society, 140(1), 49–64.
van Raan, A.F.J. (2004). Sleeping Beauties in science. Scientometrics, 59(3), 467–472.

van Raan, A.F.J. (2015). Dormitory of physical and engineering sciences: Sleeping Beauties may be sleeping innovations. PLoS ONE, 10(10), e0139786.

van Raan, A.F.J. (2017). Sleeping beauties cited in patents: Is there also a dormitory of inventions? Scientometrics, 110(3), 1123–1156.

Ye, F.Y., & Bornmann, L. (2018). "Smart girls" versus "sleeping beauties" in the sciences: The identification of instant and delayed recognition by using the citation angle. Journal of the American Society for Information Science and Technology, 69(3), 359–367.

Zirkle, C. (1964). Some oddities in the delayed discovery of Mendelism. Journal of Heredity, 55(2), 65–72.
What exactly is a "balanced modulator"?

While studying for his U.S. General class license, my son wondered about the "balanced modulator" referred to by a few questions in the FCC pool. Quoting KB6NU's General Class study guide to give a sense of the two questions:

Filters are also used in amateur radio transmitters. A filter is used to process signals from the balanced modulator and send them to the mixer in a single-sideband phone transmitter. (G7C01)

A balanced modulator is the circuit used to combine signals from the carrier oscillator and speech amplifier and send the result to the filter in a typical single-sideband phone transmitter. (G7C02)

Coming from the SDR world, I'm much more familiar with "mixers" being used for frequency conversion, and I tend to think of SSB from that perspective (i.e. raw baseband that's simply been upconverted). Other test pool questions want me to conceive of SSB as generated by an AM signal that's somehow had its carrier [sharply!?] filtered out afterward. Neither of those sounds like this. In short, the term "balanced modulator" is unknown to me (despite working through the same exam pool a few years ago ;-] ) and my initial web searches on a small screen didn't turn up very much information either.

What are the core principle(s) of a balanced modulator?
What is the significance of them being "balanced"?
In broad strokes how would one design/build one in practice?
Are they still used in commercial SSB transceivers and/or modern kit radio circuitry?

modes ssb mixer
Jim MacKenzie VE5EV · natevw - AF7TB

Noting another reference to "balanced modulator" that I found while cleaning up browser tabs: hamwhisperer.com/2011/01/… ; this page seems to put the "filtered AM" concept to the modulator component but I'm not sure if that's just for teaching/memorization aid vs. a fully accurate technical explanation. – natevw - AF7TB Sep 6 '18 at 23:36

To put it simply, a balanced modulator is a mixer which has two inputs and two outputs. The outputs are (input1 + input2) and (input1 - input2). So, if the inputs are 100 kHz and 1 kHz, the outputs are 101 kHz and 99 kHz.

For transmitting: If the inputs are 1 MHz and (voice), the outputs are the lower sideband of the voice and the upper sideband of the voice, centered on 1 MHz. Effectively a "double sideband, suppressed carrier" signal of voice modulated on a 1 MHz (suppressed) carrier. One of the sidebands can be filtered out (if you want to transmit USB or LSB) when it's transmitted.

For receiving: Working the other way, you mix the received single sideband with a 1 MHz signal, to generate enough of the original voice to be intelligible (since the lower and upper sidebands are mirrors of each other, you only need one to rebuild the original signal, it's just slightly trickier to tune by ear).

Scott Earle

Partial answer, because I don't know the actual electronics theory, but one which I hope will help make progress towards a complete answer:

A balanced modulator is a mixer with a particular feature. Basic analog mixer designs tend to include the carrier in the output, whereas a balanced modulator is one which is designed to "suppress" the carrier.
Hence, if you feed an audio signal and an RF carrier into a mixer that is not "balanced", you will get AM output (possibly with an unstable carrier power level), whereas a balanced modulator will give you DSB-SC. If you're generating SSB by filtering the output of a mixer fed by the audio signal, this means that you only need to filter out the opposite sideband rather than also the carrier: twice the frequency difference, and hence less demanding on the filter.

Kevin Reid AG6YO♦

Ah, yes! While studying for the GROL late at night I came across a similar mention in Gordon West WB6NOA's book. Appreciate the memory jog AND the explanation/affirmation that a balanced modulator and balanced mixer are indeed the same. – natevw - AF7TB Sep 6 '18 at 23:32

Specifically, per this, a balanced modulator feeds two AM modulators with opposite phases of the AF signal (and the same LO) and takes their difference. The carrier outputs of the two modulators cancel, but the sidebands reinforce. – hobbs - KC2G Sep 7 '18 at 0:51

Ah! @hobbs-N2EON that page has a great explanation and the "balanced" name now makes complete sense. Great simple concept too… a summary of that page would make a great answer if anyone wants to take a swing at it. – natevw - AF7TB Sep 7 '18 at 4:37

Yes! Any old diode has a non-linear response and will work as a mixer. But a ring of four diodes, balanced just right, can do a four-quadrant multiplication of the two input signals, which suppresses the carrier and leaves only the sidebands. – tomnexus Sep 8 '18 at 4:40

Folks, it looks like you're putting together a better answer in these comments. Consider writing that answer instead. – Kevin Reid AG6YO♦ Sep 8 '18 at 4:41

A balanced modulator is a kind of mixer. Specifically, it's one that works in all four "quadrants", that is, all combinations of the two input voltages being positive or negative. Along with that come certain other expectations about the device. We could draw a graph of the two input voltages $v_1$ and $v_2$, and it would look like this: Any possible input to the mixer at an instant is a point somewhere on this graph. If both voltages are zero, that's the point in the middle. As $v_1$ increases, we go up. As $v_2$ increases, we go right. Let's consider simple amplitude modulation where $v_1$ is the baseband signal and $v_2$ is the carrier, at 50%, 100%, and 150% modulation. On the left for each case is the ideal mixer output, and on the right a shaded region showing the region in which the inputs will fall on the graph under these conditions. At 50% modulation, the baseband signal is always significantly more than zero, so only the upper parts of the first and second quadrant are encountered. At 100% modulation, the baseband signal touches zero but is never negative, so the first and second quadrant are used all the way down to the origin. At 150% modulation, the baseband signal goes negative, which means an inversion of the output as operation enters the 3rd and 4th quadrants. A mixer which can operate in all four quadrants, producing the inverted signal as would be expected of an "ideal" mixer where $v_\text{out} = v_1 \times v_2$, is a balanced modulator. Some mixers can't produce $v_\text{out} = v_1 \times v_2$ in all four quadrants. They may clip the output rather than inverting it when crossing into the 3rd and 4th quadrants: Such mixers are not balanced modulators.
Consider: the difference between 50% and 100% modulation in AM is that the latter has a decreased carrier power in relation to the sidebands. As the modulation increases beyond 100%, the sideband power continues to increase and the carrier power decreases. When the modulating signal is "balanced", meaning it spends as much time being positive as it does negative, the carrier power reaches zero and you're left with just the sidebands. For generating AM this is no good, but for SSB it's great since there's no need to somehow remove the carrier. Only the unwanted sideband needs to be removed, and that can be done with a sharp filter, or by combining with a second modulator but at a 90 degree phase shift.
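To make the carrier-suppression idea concrete, here is a small simulation sketch (an illustration added here, not taken from the answers above; the tone and carrier frequencies are arbitrary choices). It compares the spectrum of ordinary AM against the four-quadrant product an ideal balanced modulator would produce:

```python
import numpy as np

fs = 1_000_000                              # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)              # 20 ms of signal
carrier = np.cos(2 * np.pi * 100_000 * t)   # 100 kHz carrier
voice = np.cos(2 * np.pi * 1_000 * t)       # 1 kHz "voice" tone

am = (1 + 0.5 * voice) * carrier   # ordinary AM at 50% modulation
dsb = voice * carrier              # balanced modulator: four-quadrant product

def strong_components_khz(signal):
    """Frequencies (kHz) of spectral components above 10% of the peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return sorted(set(np.round(freqs[spectrum > 0.1 * spectrum.max()] / 1000, 1)))

print("AM:    ", strong_components_khz(am))    # [99.0, 100.0, 101.0]: carrier present
print("DSB-SC:", strong_components_khz(dsb))   # [99.0, 101.0]: carrier suppressed
```

The only difference between the two lines is the "+1" offset in the AM case; removing it is exactly what the balancing does, leaving only the two sidebands.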
Relative validity of a semi-quantitative food frequency questionnaire for Singaporean toddlers aged 15–36 months

Cameron Allan1, Ummi Hani Abdul Kader2, Jowynn Yu Ying Ang1, Leilani Muhardi2,3 & Smita Nambiar2

There is presently no simple tool for use in large epidemiological studies to understand the food and nutrient intakes of Asian toddlers. This study aimed to assess the relative validity of a semi-quantitative food frequency questionnaire (sqFFQ) developed for multi-ethnic Singaporean toddlers aged 15–36 months. Ninety-one parents completed the sqFFQ and a 2-day weighed food record as the reference method. Intakes of energy and 25 nutrients were determined for each method and compared using Pearson correlations corrected for attenuation, Bland-Altman plots, and weighted kappa according to quartiles; sqFFQ calibration was performed using multivariable linear regression. Deattenuated correlations for energy and all nutrients were acceptable (r ≥ 0.30, p < 0.001). The sqFFQ was highly reproducible, but significantly overestimated intake of energy and all nutrients except vitamin A. Bland-Altman plots showed wide limits of agreement for energy and all nutrients. Weighted kappa ranged from 0.12 (slight) to 0.53 (moderate). After calibration, deattenuated correlations improved for energy and 10/25 nutrients, with no change or a slight decline for the remainder, including one falling to r = 0.27. Limits of agreement narrowed for energy and all nutrients and, except for DHA and vitamin A, median intakes were not significantly different, enabling population estimates of absolute intakes. Weighted kappa improved overall; energy and 16 nutrients now had moderate agreement (0.41–0.60), while 9 nutrients had fair agreement (0.21–0.40). The Singaporean toddler semi-quantitative food frequency questionnaire is suitable for ranking nutrient intakes of Singaporean toddlers in larger epidemiological studies. However, for population estimates of absolute nutrient intakes, it is recommended that a subsample within a cohort complete weighed food records for calibration purposes. This study was registered retrospectively on clinicaltrials.gov on 3rd May 2017 (identifier code: NCT03138330).

Toddlerhood is a critical period during the lifecycle. This phase, defined here as ages 12–36 months, is marked by rapid growth, maturation of organs and increasing levels of physical activity [1]. Relative to their body size, toddlers have high nutritional requirements [1]. Any deficiencies or excesses in macro- and micronutrients that occur during this critical period can have lasting negative consequences later in life. Conditions such as iron deficiency and obesity are prevalent in developed and developing countries, and can often exist in parallel [2]. In addition to this, toddlers are establishing healthy eating habits as they transition from an infant diet to the family diet [3]. Therefore, insights into the food and nutrient intakes of toddlers are extremely important. Dietary data collection can be integrated into clinical and epidemiological studies to understand the food and nutrient intake patterns of a population. Such information can help with the development of dietary guidelines and can be used to assess whether children are meeting recommendations and whether any diet-disease relationships exist. Depending on study objectives, there are several different methods for collecting dietary information.
These methods are similar in adults and children; however, with the exception of nutritional biomarkers, dietary information is obtained from a proxy (parent or guardian), especially if the child is under ten years of age [4]. The food record (FR) and FFQ are two examples of dietary assessment methods commonly used in epidemiological studies involving children [4]. The FR collects information on current food intake and is used to estimate nutrient intakes [5]. Participants keep a diary of all foods and beverages consumed in a day, along with quantities that are estimated or weighed (WFR). Food records can be burdensome on participants due to the level of detail required and multiple days of recording. This can be especially challenging when toddlers are involved, as they may not eat the same foods as the rest of the family and different carers may be involved at various mealtimes, thus resulting in inconsistent reporting. For these reasons, the FR is one of the more tedious and expensive nutritional tools to implement and analyse [5, 6].

The FFQ differs significantly from the FR, as it retrospectively gathers information on habitual food intake [7]. The FFQ consists of a finite list of foods consumed by a particular population, and participants indicate how often they consume these foods. Intake can also be crudely quantified [7]. The tool is inexpensive to administer and simple to complete, and its analysis is more straightforward. This makes it a useful tool in large population studies where the intention is to rank individuals according to their intakes and then seek associations between diet and disease [7]. Some limitations of the FFQ include: overestimation of nutrient intakes at the individual level; reliance on the user's memory to recall past intake; its use being restricted to a specific population; the need for regular updating; and the need for validation [5, 7].

There are a limited number of FFQs available for toddlers. Studies involving the development and validation of FFQs have been conducted in North America [8,9,10,11,12,13,14,15], Europe [16,17,18,19,20,21,22,23], and Australia and New Zealand [24,25,26]. The age ranges included in these studies were 1.5–4 years. Even fewer FFQs are available for children in Asia [27, 28], of which none involve children less than three years of age. Therefore, presently, there is no simple tool for use in large epidemiological studies to understand the food and nutrient intakes of Asian toddlers. To address this gap, a multi-ethnic sqFFQ for Singaporean toddlers was recently developed [29], but it has yet to be validated. The purpose of the present study was to validate this new tool for use among Singaporean toddlers aged 15–36 months. The most common reference method for the multi-nutrient validation of a sqFFQ designed for young children is the FR [30, 31]. The Singaporean toddler sqFFQ was assessed for its ability to rank and estimate nutrient intakes relative to the WFR for energy and 25 nutrients that are important for growth and development during this critical period.

Sampling, recruitment and participants

As studies involving the validation of nutritional tools among toddlers are limited in Asia, sample sizes used in other similar studies were used as a guide. Additionally, Cade and colleagues (2002) suggested that at least 50–100 subjects are required for each demographic group, particularly if Bland-Altman analyses and correlations are used; increasing the sample size beyond this would not strengthen correlations [31].
As the sqFFQ was designed as one questionnaire for a multiethnic sample (all races have access to many different types of foods and cuisines), a convenience sample of approximately 100 subjects and their primary caregivers was consecutively recruited over twelve months (December 2015 to November 2016). For inclusion into the study, toddlers had to be healthy, 15–36 months of age and of Chinese, Malay or Indian ethnicity (the predominant ethnic groups in Singapore) [32]. Children with any acute or chronic illnesses that affected food intake were excluded, as were children with one or both parents who did not meet the ethnicity criteria. This was to avoid over-representation of a minority group (3.2% of the population in 2016) [32], and also because the food list in the sqFFQ was based on food consumption of the three main races. Recruitment was from 15 months of age because the sqFFQ asks about habitual food intake over the last three months and only information from 12 months onwards was of interest.

Children were recruited via day-care centres. A convenience sample of 74 centres (35 government-based and 39 private) across the island was selected. Only 16 centres agreed to participate (reasons for non-participation included being too busy with administrative duties, being committed to other studies, principals feeling they were not at liberty to authorise the study (headquarters had to be involved), or simply a lack of interest). These 16 centres provided approximately 260 children who met our inclusion criteria, of whom only 46 responded to the invitation letter (we are uncertain if invitation letters were distributed to all eligible children). To increase the speed of recruitment, the snowball technique was introduced, so current participants could refer others and research staff could ask colleagues and their friends to spread the word about the study. Sixty-six parents expressed interest via this method (total n = 112). Once the caregiver returned the participation form, the study research assistant arranged a face-to-face meeting to fully explain the purpose of the study and how to fill in a series of questionnaires, and to obtain signed consent. Participants were given two weeks to complete the study components and received a $75 (Singapore dollars) shopping voucher for their participation. Participants filled out a series of questionnaires in order to meet several study objectives. The questionnaires relevant to the objective of this study are described below.

Initial questionnaire

The initial questionnaire was completed during the first face-to-face meeting. This questionnaire aimed to capture information on each parent's weight, height, education level and combined household income. Parents self-reported their child's birth weight and length and current weight and length/height. Parents could use the most recent child weight and height/length measure noted in the child's health book (if it was recorded in the last 2 weeks); however, they were encouraged to have the child measured at a local clinic during the study period. Study staff did not do the measurements because it would have required the child to be present at the initial meeting, which added another level of complexity to the recruitment. As anthropometric data were not crucial to the validation analyses and were mainly collected to describe the population, self-reported data were deemed sufficient.

Weighed food record

Parents were asked to record food intake for two non-consecutive days (one weekend day, one weekday), as previously recommended [33,34,35].
Full instructions were given verbally during the initial meeting, and detailed written instructions accompanied the WFR templates. Parents indicated the day, date, time of meals, meal occasion, description of all the foods offered, portion consumed, and the place it was consumed. Extra pages were included for recipes and supplements used that day. Emphasis was placed on the level of detail required when describing the food types, recipes, cooking methods (including the addition of salt, seasonings, fats and oils) and brands. If the child was breastfed, mothers were asked to record the minutes the child latched on. Each participant was given digital kitchen scales which could register weights of 1 g to 5000 g (unnamed; model SF-2012), and they were shown how to tare the weight of plates/cups/bowls before weighing the food and weighing leftovers. In the instance where a meal was eaten away from the home and the scales could not be used, parents were asked to describe portions in relation to standard cup and spoon measures, or the standard bowl measure used in the sqFFQ (parents were shown what these were at the initial meeting). For meals consumed while the child attended day care, the research assistant obtained details from the facility, as meals are supplied by the facility. If another carer oversaw a mealtime, they were asked to fill out the details in the diary. All the WFRs were reviewed in person or via phone call. At the end of the review, the parent was asked if the child's intake was usual, more than usual or less than usual. If, after review, the record was still deemed to be of poor quality, it was excluded from analyses.

Semi-quantitative food frequency questionnaire

The sqFFQ was an original design, with food lists and portion sizes developed in a previous study [29]. Briefly, the sqFFQ food list was derived by interviewing 30 mothers (ten from each ethnic group mentioned above) in a focus group setting. The mothers were asked about the child's habitual intake and were also asked to complete 3-day food diaries. Over 500 different foods, typical portion sizes and utensils were reported from the interviews and diaries. It was decided that one food list would be used for all three races. This was because Singapore is a multicultural society and all races could easily access any type of food and cuisine. The final sqFFQ consisted of 99 items, including single and composite items, as well as items where foods of a similar type and nutrient profile were grouped together (for example, in the vegetables section, vegetables were grouped as bulbs, tubers, root, stem, fruit and seeds, with examples provided for each; certain items in other food groups were separated based on their fat, sugar and fibre content). These 99 items were then divided across 11 food groups: breads and cereals, vegetables, fruits, legumes and nuts, meat/poultry/fish and alternatives, dairy and alternatives, snacks, fast foods, beverages (other than dairy and alternatives), salty and sweet seasonings including fats and oils used in cooking, and supplements. Within each group, an open question was included where participants could add other foods to the list. Portion sizes commonly used for toddlers were listed next to each item. Frequency responses started at "Never" and increased across 10 categories to a maximum of ">6 times per day". Verbal and detailed written instructions were given, including illustrations showing the portion sizes referred to in the food lists and dimensions of common utensils.
An appendix was included with photographs and descriptions of approximately 50 different foods listed in the sqFFQ to further guide parents. Reproducibility was assessed by asking participants to complete a second sqFFQ. As there were no guidelines indicating whether the full sample was needed for reproducibility or whether a proportion of the sample was sufficient, 20% of the sample was asked to complete a second sqFFQ one to two weeks later [27, 34, 36]. The completed questionnaires were reviewed, particularly if the portions, when totalled, exceeded what was recommended for this age group.

Nutrient analysis

Energy and 25 nutrients were of interest in this study: protein, total fat, saturated fat (SFA), monounsaturated fat (MUFA), polyunsaturated fat (PUFA), docosahexaenoic acid (DHA), total carbohydrate (CHO), total sugars, fibre; vitamins: A, thiamine (B1), riboflavin (B2), niacin (B3), dietary folate equivalents (DFE), cobalamin (B12), C, E; minerals: calcium (Ca), iron (Fe), iodine (I), magnesium (Mg), phosphorus (P), potassium (K), sodium (Na) and zinc (Zn). For the WFR, nutrient values were determined with the FoodWorks 8 Professional software package (Xyris Software Pty Ltd., Australia). This software linked several national databases available in Australia and allowed new foods to be added (38 generic food items and 19 follow-on and young child formulas were added). For foods specific to Singapore, the Singaporean Health Promotion Board nutrient database [37] was used to create and add new foods into the system (27 items in total). The Composition of Foods Integrated dataset by McCance and Widdowson (revised version) was also consulted [38]. When new foods or recipes were created and information on all nutrients was not available, efforts were made to match them as closely as possible to an existing food in the database, based on ingredients and nutrient values. The software allowed each nutrient to be over-written with a new value, making it possible to "borrow" the missing nutrient value from a food already in the system. Where brands were given on a WFR, information was obtained from package labelling or company websites. (These were the main sources of information for formulas and supplements.) Breastfeeding was assumed to provide approximately 10 g breastmilk per minute. Per breast, feeds were capped at 10 min, since milk flow after this length of time was considered too slow to contribute nutritionally. Feeds shorter than 2 min were excluded for the same reason. If the next feeding session commenced within 30 min of the start of the previous feed, the duration was added to the first feed and capped at 10 min per breast [39,40,41]. For the sqFFQ, a reference spreadsheet was developed that included all the nutrient values for the portion specified for each item, using FoodWorks 8. The mean of up to five foods per single item was used to estimate nutrient values. For items which were a group of similar foods, for example, rice-based dishes or small flower fruits, up to five variations of each food in the group were averaged. Each frequency category was converted to a single number of serves per day. For example, 1–3 times a month was averaged to 2 serves/30.4 days = 0.065 serves per day. These were then multiplied by the portion of food to obtain nutrients per day for a particular food. The sum of all the foods in the list was the total intake (a brief computational sketch of this conversion is shown below). As a high proportion of children consumed vitamin and mineral supplements, each type was added to the food list as a new food.
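To illustrate the conversion just described, the short sketch below turns ticked frequency categories into average daily serves and sums nutrient intakes per day. The category mapping and the two example foods are hypothetical stand-ins, not values from the study's actual reference spreadsheet:

```python
# Illustrative sketch of the sqFFQ daily-nutrient calculation described above.
# The frequency mapping and food values below are hypothetical examples,
# not the study's actual reference spreadsheet.

FREQ_TO_SERVES_PER_DAY = {
    "never": 0.0,
    "1-3 times a month": 2 / 30.4,   # midpoint of 1-3 over an average month
    "once a week": 1 / 7,
    "2-4 times a week": 3 / 7,
    "once a day": 1.0,
    "2-3 times a day": 2.5,
}

# Nutrients per specified portion (hypothetical values per sqFFQ portion)
FOOD_NUTRIENTS = {
    "white rice (1/2 bowl)": {"energy_kj": 400, "protein_g": 2.0},
    "fish (1 tbsp)":         {"energy_kj": 120, "protein_g": 4.5},
}

def daily_intake(responses):
    """Sum nutrients per day across all ticked sqFFQ items."""
    totals = {}
    for food, category in responses.items():
        serves = FREQ_TO_SERVES_PER_DAY[category]
        for nutrient, per_portion in FOOD_NUTRIENTS[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + serves * per_portion
    return totals

example = {"white rice (1/2 bowl)": "2-3 times a day",
           "fish (1 tbsp)": "1-3 times a month"}
print(daily_intake(example))  # roughly {'energy_kj': 1008, 'protein_g': 5.3}
```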
Additionally, four new foods were created as they could not fit into an existing category. These were muesli bar, breastmilk, dried seafood, and dried seaweed. Microsoft Office Excel 2010 (USA) was used to determine nutrient intakes per day, which were then exported to a statistical package for analyses. Nutrients were not adjusted for energy intake. This was deemed unnecessary as the assessment of nutritional intake was not an aim of the present study.

Analyses were performed on IBM SPSS Statistics for Windows, version 23 (IBM Corp., Armonk, N.Y., USA). Anthropometry Z-scores were determined using the World Health Organisation (WHO) AnthroCalc v3.2.2. Data were checked for normality using the Kolmogorov–Smirnov test and visual checks of histograms. As 50% of data were skewed, a number of convenient Box-Cox transformations (cube, square, square root, cube root, natural log, inverse cube root, inverse square root, inverse, inverse square, inverse cube) were performed in an attempt to reduce the skewness to within a range of −0.5 to 0.5. The cube root values were within this defined range and were used in Bland-Altman analyses, correlations and selected multivariable regression models, while raw values were used for the other analyses described below. As there was no set method for validating the sqFFQ, a number of techniques documented in the literature were used in a series of steps [42]. A p-value of less than 0.05 was considered statistically significant.

Correlations between methods

Firstly, linear associations between the two methods were explored using Pearson correlations. Additionally, deattenuated Pearson correlations were used to account for variation in the diet, with the formula:

$$ r_{\text{deattenuated}} = \frac{r_{xy}}{\sqrt{r_{xx} \times r_{yy}}} $$

where $r_{xy}$ was the correlation between the mean of the 2-day WFR and the first (main) sqFFQ, $r_{xx}$ was the correlation between Day 1 and Day 2 of the WFRs, and $r_{yy}$ was the correlation between the first and repeat sqFFQs [43]. Correlations of 0.30–0.49 were considered acceptable and 0.50–0.70 good [44]. Only nutrients with deattenuated correlations ≥0.30 were included in subsequent analyses.

Reproducibility of the sqFFQ

Reproducibility of the sqFFQ was assessed using intra-class correlation (model: two-way mixed; type: absolute agreement; alpha = 0.05).

Agreement between methods

The Wilcoxon signed rank test was used to assess differences in median nutrient intakes by each method. Agreement was then assessed using weighted kappa (κw). Quadratic weights were used to grade the disagreement between the methods (if the sqFFQ and WFR ranked the nutrient into the same quartile = 0 points; adjacent quartile = 1 point; a 2-quartile difference = 4 points; extreme quartiles = 9 points). This test is generally thought to be a more robust measure than a simple percent agreement calculation, since κw takes into account the possibility of the agreement occurring by chance [45]. κw was calculated using an online tool developed by Lowry (1998), because SPSS did not have a κw calculator [46]. Agreements of < 0 indicated poor agreement, 0–0.20 slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect [45]. Bland-Altman plots were constructed to assess the differences between the methods for each nutrient.
Limits of agreement (LOA) were calculated as:

$$ \begin{aligned} \text{Upper limit} &= \text{mean of the difference} + (1.96 \times \text{standard deviation}) \\ \text{Lower limit} &= \text{mean of the difference} - (1.96 \times \text{standard deviation}) \end{aligned} $$

(where difference refers to sqFFQ minus WFR for each nutrient), therefore indicating the range in which approximately 95% of data fall. Lastly, using linear regression analyses, the difference between the methods was regressed against their mean to check for proportional bias [47].

Calibration of the sqFFQ

In the instance where there would be considerable under- or overestimation of nutrient intake as measured by the sqFFQ, the last step in the validation process was to calibrate the sqFFQ nutrient values against the WFR values, so that the sqFFQ produced similar estimates to the WFR [48]. Multivariable linear regression analyses were used to determine the coefficients needed to derive new calibrated sqFFQ values, using a linear equation (independent variables: original sqFFQ nutrient values, age and/or sex; dependent variable: WFR nutrient values). To ensure the assumptions for multivariable linear regression were met (that is, residuals were normally distributed), certain dependent-variable nutrients were entered into the model as cube root values. The final value calculated using the linear equation was back-transformed by cubing. Following this, correlations, comparison of medians, percentile ranks, κw statistics and Bland-Altman plots were repeated with the new data to demonstrate improvement in the performance of the sqFFQ.

Of the 46 parents who were recruited at the day-care centres, 39 consented and 33 completed the study (completion rate: 12.6%). All 66 parents who were invited via the modified method expressed interest; 65 consented and 62 completed the study (completion rate: 94%). As most of the dropouts occurred towards the end of the study period, they were not replaced. Of the 95 subjects who completed all components of the study, data from 91 participants were included in the following analyses (four subjects had poor quality WFRs). Five of the caregivers who completed the study were fathers and the rest were mothers. The sample was predominantly Chinese, which was reflective of the Singaporean population in 2016 (74.3% Chinese, 13.3% Malay, 9.1% Indian) [32]. The study had slightly more boys than girls, with a median age of 20 months. Table 1 describes other characteristics of the sample.

Table 1 Sample characteristics (n = 91)

Pearson correlations between methods were lowest for all fats and vitamins A and E. However, correction for attenuation brought all values up to or above the cut-off of 0.30. The reproducibility of the sqFFQ was high (Table 2). All nutrient values determined by the sqFFQ were significantly higher than those from the WFR (p < 0.001), except for vitamin A, where the difference did not reach significance (Table 3). Table 4 displays the agreement between the methods when intake was ranked into quartiles. κw values ranged from 0.12 (MUFA) to 0.53 (calcium). Moderate agreement (0.41–0.60) was found for 8 nutrients; energy and 13 nutrients had fair agreement (0.21–0.40), while 4 nutrients had slight agreement (0–0.20).
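To make the statistical steps above concrete, the following minimal sketch (using simulated data, not the study's) computes a deattenuated correlation, Bland-Altman limits of agreement, a quadratic-weighted κw over quartiles, and a simple cube-root calibration; note that the study's calibration additionally included age and/or sex as covariates, which this single-predictor sketch omits:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
true_intake = rng.gamma(5.0, 100.0, size=91)      # simulated "usual" intakes
wfr = true_intake + rng.normal(0, 40, 91)         # mean of the 2-day WFR
ffq = 1.3 * true_intake + rng.normal(0, 80, 91)   # sqFFQ, overestimating

# Deattenuated correlation: r_xy / sqrt(r_xx * r_yy)
r_xy = pearsonr(ffq, wfr)[0]
r_xx, r_yy = 0.7, 0.9     # e.g. Day 1 vs Day 2 WFR; first vs repeat sqFFQ
r_deattenuated = r_xy / np.sqrt(r_xx * r_yy)

# Bland-Altman limits of agreement (difference = sqFFQ minus WFR)
diff = ffq - wfr
loa = (diff.mean() - 1.96 * diff.std(ddof=1),
       diff.mean() + 1.96 * diff.std(ddof=1))

def quartile_rank(x):
    """Assign each value to a quartile 0..3 via the 25th/50th/75th percentiles."""
    return np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x)

def weighted_kappa(a, b, k=4):
    """Quadratic-weighted kappa: disagreement weights 0, 1, 4, 9."""
    observed = np.zeros((k, k))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    weights = (np.arange(k)[:, None] - np.arange(k)[None, :]) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

kw = weighted_kappa(quartile_rank(ffq), quartile_rank(wfr))

# Calibration: regress WFR on sqFFQ on the cube-root scale, back-transform by cubing
slope, intercept = np.polyfit(np.cbrt(ffq), np.cbrt(wfr), 1)
ffq_calibrated = (intercept + slope * np.cbrt(ffq)) ** 3

print(f"r = {r_xy:.2f}, deattenuated r = {r_deattenuated:.2f}, kappa_w = {kw:.2f}")
print(f"LOA = ({loa[0]:.0f}, {loa[1]:.0f}); "
      f"median diff after calibration = {np.median(ffq_calibrated - wfr):.0f}")
```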
Table 2 Correlations between sqFFQ and WFR (n = 91); correlations for reproducibility of sqFFQ (n = 20)
Table 3 Median, 25th and 75th percentiles for each nutrient measured by the sqFFQ and WFR (n = 91)a
Table 4 Weighted kappa (κw) statistics indicating level of agreement between the sqFFQ and WFR (n = 91)

Figure 1 illustrates the Bland-Altman plot for energy. The LOAs were wide, indicating large variability in the way the tools measured energy intake. The position of the midline indicated that the sqFFQ overestimated energy intake. This pattern was observed for all nutrients. Linear regression analyses revealed significant proportional bias for energy, SFA, DHA, sugars, fibre and vitamins A, B12 and E. This included both positive and negative trends with increasing intake.

Fig. 1: Bland-Altman plot for energy before calibration. Indicates mean difference (mid line) and limits of agreement (+2 standard deviations and −2 standard deviations); values on the axes are kilojoules transformed to the cube root

Due to the significant differences observed between the nutrient values obtained from the sqFFQ compared to the WFR, it was necessary to perform the calibration step. Table 5 provides the coefficients used to calibrate the sqFFQ. After calibration, Pearson correlations for energy and 18/25 nutrients improved, and ranged from 0.21 (SFA) to 0.63 (calcium). For the remaining 7 nutrients, correlations remained unchanged or saw slight decreases. After calibration, deattenuated correlations improved for energy and 10/25 nutrients, with no change or a slight decline for the remainder. Deattenuated correlations for two nutrients could not be computed due to negative correlations resulting from the reproducibility analysis of the sqFFQ. One nutrient fell slightly below the acceptable range (r = 0.27).

Table 5 Coefficients for the calibration of the sqFFQ, as determined by multivariable linear regression analysis

Calibration also had varied effects on reproducibility. With the exception of total carbohydrates, reproducibility weakened for the other nutrients, and SFA had a negative correlation (Table 2). Calibration improved the ability of the sqFFQ to rank nutrient intake similarly to the WFR (Table 4). κw improved for energy and 64% of nutrients; energy and 16 nutrients had moderate agreement (0.41–0.60), and 9 nutrients had fair agreement (0.21–0.40). Median intakes after calibration for all nutrients were very similar between the methods, with only phosphorus remaining significantly different (Table 3). For all nutrients, Bland-Altman analyses showed mean differences between the methods were now close to zero, with narrower LOAs. Proportional bias was still present for all nutrients, as illustrated visually in Fig. 2. However, overall, the magnitude was reduced and was influenced mainly by extremes of intake.

Fig. 2: Bland-Altman plot for energy after calibration. Indicates mean difference (mid line) and limits of agreement (+2 standard deviations and −2 standard deviations); values on the axes are kilojoules transformed to the cube root

This study aimed to validate a recently developed sqFFQ in its ability to rank and estimate nutrient intakes relative to the WFR in multi-ethnic Asian toddlers. Results indicated that, overall, the sqFFQ overestimated intakes of all nutrients when compared to the WFR. This finding was consistent with the literature and most likely attributable to the format of the sqFFQ [26, 27, 48, 49].
With the traditional format of a sqFFQ, not only did parents have to think retrospectively about their child's habitual intake, but they also had to consider frequency of intake based on the portion size presented next to each food. For example, if the portion of food that their child consumed was smaller or larger than what was specified in the sqFFQ, then the frequency also had to be adjusted accordingly. This procedure had to be repeated for nearly 100 foods, which can be fatiguing for parents (the questionnaire took between 30 and 45 min to complete). Typically, parents tended to overestimate intake of foods belonging to the breads and cereals, fruit, and meat/poultry/fish food groups. In this sample of children, traditional main meals usually consisted of composite dishes of multiple grains, meat and/or vegetables, and mixed fruit. In the sqFFQ, each type of rice/rice dish had a portion of ½ a bowl, while each meat item had a portion of 1 tablespoon. So, for example, if a child typically consumed ½ a bowl of rice consisting of equal amounts of two grains (brown and white rice) and ½ a tablespoon each of two meat items (pork and fish), 2–3 times a day, parents tended to place a tick in the 2–3 times a day column for each of the four items. This essentially doubled the child's intake. Ideally, a lower frequency should have been selected to accommodate a smaller portion. This instruction was explained to parents during the initial face-to-face meeting and provided in writing in the questionnaire. These were the kinds of responses that were flagged for review, and upon further explanation, parents often changed their response to a lower frequency to accommodate the specified portion. However, these types of instructions can be difficult to understand, and this is a flaw of the sqFFQ design. One approach which may reduce this kind of error is to have participants choose a serving size on the FFQ, as well as a frequency, for each item. This would force participants to consider serving size, and it may minimise the need for participants to translate the child's normal serving size and frequency into the set serving size and corresponding frequency on the FFQ. While answering two questions for each item that the child consumes may seem to increase the workload on the participant, this may in fact be easier than the present requirement. Another approach would be to make the questionnaire interviewer-administered. This would allow a dialogue between the researcher and the participant, so issues could be addressed immediately rather than up to two weeks after completion. This approach may not be feasible in large epidemiological studies where thousands of participants are involved. In this instance, a subsample of participants (dependent on study budget) should be interviewed for quality control purposes. Digitalisation of the sqFFQ may also be an option, so that participants can access the questionnaire via an application on their smartphones or computers [50, 51]. This would provide participants with a more interesting and interactive experience. For example, chatbots could be used to clarify participants' queries, flag unusual responses and prompt the user to think about their selection. Additionally, digital tools could potentially reduce the burden on researchers if data entry can be replaced with nutrient intakes that are instantly calculated by the application, so that unusual nutrient intakes can be promptly identified and followed up. Such technology will no doubt have its limits.
The data would need to be reviewed for quality, as there is still reliance on the participants' memory. Unusual responses cannot be eliminated and could potentially increase. How food portions are scaled could also be misleading [51]. The technology may not be accessible in all communities. Lastly, the time and cost invested in these technologies need to be considered, as such technology is better suited to studies with long follow-up and frequent assessment time points.

In these analyses, it was found that total fat, SFA, MUFA, and vitamins A and E had the weakest correlations and/or agreements between methods. Again, this was consistent with the literature, particularly for fats [26, 27, 48, 49]. There may be several reasons for this. Firstly, the addition of fats and oils is not typically measured during cooking. Secondly, these items are also not the main ingredient in recipes and could be subconsciously left out of recording. Lastly, such ingredients may not feature prominently in food prepared for this age group, or food may be cooked in bulk and frozen into small portions, making it very difficult to estimate how much was in each portion. Any of these reasons could create a significant discrepancy with the sqFFQ. PUFA and DHA, on the other hand, were the only fatty acids to demonstrate much stronger correlations and agreements. It could be that follow-on and young child formulas (formulas for children above the age of one year) were the main contributors of DHA, and intake of this item was captured similarly by both methods. Likewise, calcium had high correlations and agreements, even before any adjustments.

The initial impression of the results indicated that the sqFFQ in its current form may not be suitable for ranking intake of all nutrients, as total fat, SFA, MUFA and vitamin A only had slight agreement. However, it must be noted that the tools are, in fact, measuring different aspects of the diet: habitual versus actual intake. So, some weaker agreements should be expected relative to the WFR. Perhaps, if the WFR were completed for more days over a longer period of time, it would be more representative of present usual intakes, thus increasing the agreement between the methods. However, this could also reduce the quality of data from FRs, as it is difficult to keep accurate food diaries for many days. Overall, the sqFFQ in its current form is suitable for ranking nutrient intakes.

The sqFFQ in its current form should not be used to estimate population intakes, as it would result in substantial overestimation. This finding was consistent with the literature, regardless of participant age [16, 48]. If nutrient intakes need to be estimated, then this study demonstrated that the calibration step was effective in: bringing the sqFFQ values closer in range to the WFR values; strengthening agreements for at least two-thirds of nutrients; and bringing the mean difference between the tools close to zero for all nutrients. Vitamin A was the only nutrient in the original sqFFQ whose median was not statistically different to the WFR median. Calibration resulted in this median difference becoming significant; however, it did improve correlations and agreements for this nutrient. Iodine was negatively impacted as its agreement declined, although it remained within the category of "fair" agreement. Lastly, calibration had varied effects on reproducibility. With the exception of total carbohydrates, reproducibility weakened but remained acceptable, and SFA had a negative correlation.
We speculate that the weaker and negative correlations were an artefact of the calibration methods aimed at improving agreement between the sqFFQ and WFR. The adjustments via regression equations reduced the range of individual intakes, resulting in poorer apparent reproducibility of the sqFFQ for most nutrients. It must be noted, however, that reproducibility after calibration was only included here for demonstrative purposes. In an actual study, the reproducibility of a new tool only needs to be tested initially. If the calibration step is used to bring values closer to the reference method, repeating the ICC is not necessary.

The ability to calibrate the sqFFQ values to bring them closer to the reference method is a particularly important finding for large studies aiming to estimate nutrient intakes. It is well understood that FRs are expensive to implement in studies. Therefore, one option would be to administer the sqFFQ to the whole study group and then select a representative subsample (based on total sample size, age of subjects and study budget) to complete WFRs for internal calibration. This way, nutrient intakes can be estimated for a large sample without significantly increasing costs and analysis time, compared to having the full sample complete WFRs. Alternatively, if a study had similar participants to the children in this study, then the coefficients provided in Table 5 can be used as a method of external calibration [52].

There are a few limitations to this study. Firstly, the use of non-probability sampling could have resulted in a non-representative sample. However, we were fortunate that the race distribution, parental education level and household income were all reflective of the current Singaporean population [32]. Secondly, weighed food records were kept for two days in this study, which was the minimum number of days reported in validation studies [34]. While two days was sufficient to capture micronutrient intakes in this age group, ideally, up to five days of dietary data would have more accurately accounted for the day-to-day variation in macronutrient intake and the three-month time frame of the sqFFQ [31, 53]. Based on feedback from our pilot study, where participants found keeping the 2-day WFR most challenging, reducing the burden on participants for our main study (to ensure retention and high quality data) was the primary reason for selecting the minimum two days for the reference method. Presently, there are no recommendations as to whether the full sample is needed to assess reproducibility of the sqFFQ or whether a proportion of the sample is sufficient [34]. Due to the problems faced with recruitment in both the pilot and the current study, 20% of the sample was randomly selected to complete the second sqFFQ. This was based on recent studies conducted in children and adolescents, where reproducibility was tested in 10–25% of the sample [27, 36]. While this approach resulted in high correlations (also an effect of the short timeframe between the questionnaires), the findings lacked power. It is therefore recommended that future studies endeavour to have the full sample complete the repeat questionnaire. Lastly, the sqFFQ asked parents to report on habitual intake over the last three months. In hindsight, this would have been very difficult to estimate; we speculate that some parents may not have even considered this instruction at all.
Given how much a toddler's diet can change over three months (due to developmental progression and to inconsistencies related to illness or experimentation with new foods, for example), a two-week retrospective time frame may be more realistic for parents and produce more accurate results [16]. To the best of our knowledge, this is the first time that a toddler-specific sqFFQ, developed for a multiethnic Asian population, has been validated against a WFR for an extensive range of nutrients. It is also one of the few FFQ validation studies using a range of methods in a systematic way, and it therefore provides a model for conducting future toddler FFQ validation studies outside of Singapore. This tool will be useful in large epidemiological studies to determine dietary patterns, frequency of consumption of particular foods or food groups, or to rank nutrient intakes to study diet-disease relationships. It is recommended that the sqFFQ be interviewer-administered and cover only a two-week retrospective period, to minimise overestimation. While the tool in its current form is not suitable for estimating nutrient intakes of a population, including WFRs in a representative subsample within a study for calibration purposes can overcome this. This allows for more accurate estimation of nutrient intakes in large nutrition studies, without dramatically increasing the time and cost associated with implementing and analysing FRs.

FFQ: Food frequency questionnaire; LOA: Limits of agreement; sqFFQ: Semi-quantitative food frequency questionnaire; WFR: Weighed food record; κw: Weighted kappa

Brown JE, Isaacs JS, Krinke UB, Lectenberg E, Murtaugh MA, Sharbaugh C, Splett PL, Stang J, Woolridge NH. Nutrition through the life cycle. 5th ed. Stamford: Cengage Learning; 2014.
Aigner E, Feldman A, Datz C. Obesity as an emerging risk factor for iron deficiency. Nutrients. 2014;6:3587–600.
Schwartz C, Scholtens PAMJ, Lalanne A, Weenen H, Nicklaus S. Development of healthy eating habits early in life. Review of recent evidence and selected guidelines. Appetite. 2011;57:796–807.
Olukotun O, Seal N. A systematic review of dietary assessment tools for children age 11 years and younger. Infant Child Adolescent Nutr. 2015;7:139–47.
Ortiz-Andrellucchi A, Henríquez-Sánchez P, Sánchez-Villegas A, Peña-Quintana L, Mendez M, Serra-Majem L. Dietary assessment methods for micronutrient intake in infants, children and adolescents: a systematic review. Br J Nutr. 2009;102(Suppl 1):S87–117.
Dietary assessment primer: Food record. NIH National Cancer Institute. https://dietassessmentprimer.cancer.gov/profiles/record/index.html. Accessed 13 Mar 2017.
Dietary assessment primer: Food frequency questionnaire. NIH National Cancer Institute. https://dietassessmentprimer.cancer.gov/profiles/questionnaire/index.html. Accessed 13 Mar 2017.
Blum RE, Wei EK, Rockett HR, Langeliers JD, Leppert J, Gardner JD, Colditz GA. Validation of a food frequency questionnaire in Native American and Caucasian children 1 to 5 years of age. Matern Child Health J. 1999;3:167–72.
D'Ambrosio A, Tiessen A, Simpson JR. Development of a food frequency questionnaire: for toddlers of low-German-speaking Mennonites from Mexico. Can J Dietetic Pract Res. 2012;73:40–4.
Iannotti RJ, Zuckerman AE, Blyer EM, O'Brien RW, Finn J, Spillman DM. Comparison of dietary intake methods with young children. Psychol Rep. 1994;74:883–9.
Klohe DM, Clarke KK, George GC, Milani TJ, Hanss-Nuss H, Freeland-Graves J. Relative validity and reliability of a food frequency questionnaire for a triethnic population of 1-year-old to 3-year-old children from low-income families. J Am Dietetic Assoc. 2005;105:727–34.
Koleilat M, Whaley SE. Reliability and validity of food frequency questions to assess beverage and food group intakes among low-income 2- to 4-year-old children. J Acad Nutr Dietetics. 2016;116:931–9.
Parrish LA, Marshall JA, Krebs NF, Rewers M, Norris JM. Validation of a food frequency questionnaire in preschool children. Epidemiology. 2003;14:213–7.
Rankin SJ, Levy SM, Warren JJ, Gilmore JE, Broffitt B. Relative validity of an FFQ for assessing dietary fluoride intakes of infants and young children living in Iowa. Public Health Nutr. 2011;14:1229–36.
Williams PL, Innis SM. Food frequency questionnaire for assessing infant iron nutrition. Can J Dietetic Pract Res. 2005;66:176–82.
Andersen LF, Lande B, Trygg K, Hay G. Validation of a semi-quantitative food-frequency questionnaire used among 2-year-old Norwegian children. Public Health Nutr. 2004;7:757–64.
Andersen LF, Tomten H, Haggarty P, Løvø A, Hustvedt BE. Validation of energy intake estimated from a food frequency questionnaire: a doubly labelled water study. Eur J Clin Nutr. 2003;57:279–84.
Escobar PC, Lerma JC, Marín DH, Aliaga ED, Simó EM, Miquel BP, Koninckx CR. Development and validation of two food frequency questionnaires to assess gluten intake in children up to 36 months of age. Nutr Hosp. 2015;32:2080–90.
Huybrechts I, De Backer G, De Bacquer D, Maes L, De Henauw S. Relative validity and reproducibility of a food-frequency questionnaire for estimating food intakes among Flemish preschoolers. Int J Environ Res Public Health. 2009;6:382–99.
Huybrechts I, De Bacquer D, Matthys C, De Backer G, De Henauw S. Validity and reproducibility of a semi-quantitative food-frequency questionnaire for estimating calcium intake in Belgian preschool children. Br J Nutr. 2006;95:802–16.
Marriott LD, Inskip HM, Borland SE, Godfrey KM, Law CM, Robinson SM. What do babies eat? Evaluation of a food frequency questionnaire to assess the diets of infants aged 12 months. Public Health Nutr. 2009;12:967–72.
Sochacka-Tatara E, Pac A. Relative validity of a semi-quantitative FFQ in 3-year-old Polish children. Public Health Nutr. 2014;17:1738–44.
Vereecken C, Covents M, Maes L. Comparison of a food frequency questionnaire with an online dietary assessment tool for assessing preschool children's dietary intake. J Hum Nutr Dietetics. 2010;23:502–10.
Collins CE, Burrows TL, Truby H, Morgan PJ, Wright IMR, Davies PSW, Callister R. Comparison of energy intake in toddlers assessed by food frequency questionnaire and total energy expenditure measured by the doubly labeled water method. J Acad Nutr Dietetics. 2013;113:459–63.
Flood VM, Wen LM, Hardy LL, Rissel C, Simpson JM, Baur LA. Reliability and validity of a short FFQ for assessing the dietary habits of 2-5-year-old children, Sydney, Australia. Public Health Nutr. 2014;17:498–509.
Watson EO, Heath ALM, Taylor RW, Mills VC, Barris AC, Skidmore PML. Relative validity and reproducibility of an FFQ to determine nutrient intakes of New Zealand toddlers aged 12-24 months. Public Health Nutr. 2015;18:3265–71.
Fatihah F, Ng BK, Hazwanie H, Norimah AK, Shanita SN, Ruzita AT, Poh BK. Development and validation of a food frequency questionnaire for dietary intake assessment among multi-ethnic primary school-aged children. Singap Med J. 2015;56:687–94.
Kobayashi T, Kamimura M, Imai S, Toji C, Okamoto N, Fukui M, Date C. Reproducibility and validity of the food frequency questionnaire for estimating habitual dietary intake in children and adolescents. Nutr J. 2011;10:27. https://doi.org/10.1186/1475-2891-10-27.
Mubarik F, Bhaskaran K, Kho S, Vereijken C, Nambiar S, Eussen S, Muhardi L. Development of food lists as a first step to develop a food frequency questionnaire for toddlers in a multi-ethnic population. Nutr Dietetics. 2017;74:11–7.
Burrows TL, Martin RJ, Collins CE. A systematic review of the validity of dietary assessment methods in children when compared with the method of doubly labeled water. J Am Dietetic Assoc. 2010;110:1501–10.
Cade J, Thompson R, Burley V, Warm D. Development, validation and utilisation of food-frequency questionnaires - a review. Public Health Nutr. 2002;5:567–87.
Census: Population trends. Department of Statistics Singapore. 2016. https://www.singstat.gov.sg/. Accessed 20 Dec 2016.
Barikmo L, Torheim LE, Hatloy A, Oshaug A. Development and validation of a quantitative frequency questionnaire to use for assessment of dietary intake in a rural population in Mali. Eur J Clin Nutr. 1998;52:S67.
Cade JE, Burley VJ, Warm DL, Thompson RL, Margetts BM. Food-frequency questionnaires: a review of their design, validation and utilisation. Nutr Res Rev. 2004;17:5–22.
European Food Safety Authority. General principles for the collection of national food consumption data in the view of a pan-European dietary survey. EFSA J. 2009;7:1435.
Araújo MC, Ferreira DM, Pereira RA. Reprodutibilidade de questionário semiquantitativo de freqüência alimentar elaborado para adolescentes da Região Metropolitana do Rio de Janeiro, Brasil (English translation used). Cadernos de Saúde Pública. 2008;24:2775–86.
Energy and nutrient composition of food. Singapore Health Promotion Board. 2011. http://focos.hpb.gov.sg/eservices/ENCF/. Accessed 1 Jul 2016.
McCance and Widdowson's Composition of foods integrated dataset (CoFID). Public Health England. 2015. https://www.gov.uk/government/publications/composition-of-foods-integrated-dataset-cofid. Accessed 16 Apr 2016.
Emmett P. Assessing diet in longitudinal birth cohort studies. Paediatr Perinat Epidemiol. 2009;23:154–73.
Kent JC, Mitoulas L, Cox DB, Owens RA, Hartmann PE. Breast volume and milk production during extended lactation in women. Exp Physiol. 1999;84:435–47.
Kent JC, Mitoulas LR, Cregan MD, Ramsay DT, Doherty DA, Hartmann PE. Volume and frequency of breastfeedings and fat content of breast milk throughout the day. Pediatrics. 2006;117:e387–95.
Lombard MJ, Steyn NP, Charlton KE, Senekal M. Application and interpretation of multiple statistical tests to evaluate validity of dietary intake assessment methods. Nutr J. 2015;14.
Murphy KR, Davidshofer CO. Psychological testing: principles and applications. 1st ed. Englewood Cliffs: Prentice Hall; 1998.
Willett WC, Reynolds RD, Cottrell-Hoehner S, Sampson L, Browne ML. Validation of a semi-quantitative food frequency questionnaire: comparison with a 1-year diet record. J Am Dietetic Assoc. 1987;87:43–7.
Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
Vassarstats. Website for statistical computation: Vassar College; 1998. http://vassarstats.net/kappa.html. Accessed 8 Feb 2017.
Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1(8476):307–10.
Araujo MC, Yokoo EM, Pereira RA. Validation and calibration of a semiquantitative food frequency questionnaire designed for adolescents. J Am Dietetic Assoc. 2010;110:1170–7.
Andersen LF, Lande B, Arsky GH, Trygg K. Validation of a semi-quantitative food-frequency questionnaire used among 12-month-old Norwegian infants. Eur J Clin Nutr. 2003;57:881–8.
Franco RZ, Alawadhi B, Fallaize R, Lovegrove JA, Hwang F. A web-based graphical food frequency assessment system: design, development and usability metrics. JMIR Human Factors. 2017;4:e13.
Schneider BC, Motta JVS, Muniz LC, Bielemann RM, Madruga SW, Orlandi SP, Gigante DP, Assunção MCF. Desenho de um questionário de frequência alimentar digital autoaplicado para avaliar o consumo alimentar de adolescentes e adultos jovens: coortes de nascimentos de Pelotas, Rio Grande do Sul. Revista Brasileira de Epidemiologia. 2016;19:419–32.
Diet Assessment Primer: Instrument Profiles. NIH National Cancer Institute. https://dietassessmentprimer.cancer.gov/. Accessed 13 Mar 2017.
Lanigan JA, Wells JCK, Lawson MS, Lucas A. Number of days needed to assess energy and nutrient intake in infants and young children between 6 months and 2 years of age. Eur J Clin Nutr. 2004;58:745–50.

The authors would like to thank Steven Ting, Senior Statistician at Danone Nutricia Research, Singapore, for his guidance on the statistical methodology and interpretation. All aspects of this work (design of the study; collection, analysis, and interpretation of data; and writing the manuscript) were funded by Danone Nutricia Research, Singapore. The study was conducted internally with the assistance of university students on internships. Interns were supported with a standard wage as per policy. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

School of Exercise & Nutrition Sciences, Queensland University of Technology, Victoria Park Road, Kelvin Grove, QLD 4059, Australia: Cameron Allan & Jowynn Yu Ying Ang
Early Life Nutrition, Danone Nutricia Research, 30 Biopolis St, Matrix Building #05-01B, Singapore, 138671, Singapore: Ummi Hani Abdul Kader, Leilani Muhardi & Smita Nambiar
Healthcare Nutrition Science, Danone Nutricia Sari Husada, 15th Fl, Cyber 2 Building, Jl. HR Rasuna Said no. 13, Jakarta, Indonesia: Leilani Muhardi

CA was involved in data collection and analyses, wrote the first draft of the manuscript and reviewed the final version of the manuscript; UHAK conducted the research and contributed to data analysis; JAYY contributed to data analysis; LM was responsible for the development of the FFQ and reviewed the manuscript; SN was the intern supervisor of UHAK, CA and JAYY and was responsible for study design (including development of the sqFFQ), contributed to all data analysis and wrote the final version of the manuscript. All authors read and approved the final manuscript.

Correspondence to Leilani Muhardi. This study was approved by Parkway Independent Ethical Board (PIEC/2015/023). The study was explained to participants in a face-to-face meeting, and signed consent was obtained from caregivers.

Allan, C., Abdul Kader, U.H., Ang, J.Y.Y. et al. Relative validity of a semi-quantitative food frequency questionnaire for Singaporean toddlers aged 15–36 months. BMC Nutr 4, 42 (2018). https://doi.org/10.1186/s40795-018-0252-9

Nutrient intake
Providing trusted data for industrial wireless sensor networks

Shuyan Yu & Jinyuan He

The deployment of wireless sensor networks, or WSNs, in industrial domains has attracted much attention over the past few years. An increasing number of applications have been developed, such as condition monitoring in the railway industry. Nevertheless, compared with traditional WSNs, the industrial environment is harsher, noisier, and more complex, which places higher demands on network security, especially in terms of data trustworthiness, and which further deters the practical integration of WSNs in industrial applications. The main contribution of this research is to partially address these security issues by providing trusted data for industrial WSNs. To this end, a negative binomial distribution-based trust scheme, combined with D–S belief theory and a noise filter method, is proposed and designed for industrial WSNs. In this paper, we first discuss trust theory in WSNs and the disadvantages of traditional trust schemes for industrial applications, then analyze and evaluate the proposed method, and finally compare the performance of our method with some classic trust schemes. Simulation tests on the temperature readings of a factory workshop show that the proposed method can improve data trustworthiness, reliability, and robustness in the trust evaluation process under industrial environments and ensure the security of the network.

Industrial wireless sensor networks, or IWSNs, have received more and more attention in recent years [1]. IWSNs consist of a number of small sensors and several base stations or data sinks [2], and are mainly used for collecting and transmitting data from field devices. With limited computing abilities and storage capacities, these battery-powered small sensor devices, shown in Fig. 1 [3], are usually equipped with a sensing unit and a signal transmission unit. The basic operations of such networks are periodic sensing, data gathering, and data transmission by individual sensor nodes to the data sink via intermediate nodes. Sensor nodes in IWSNs are resource-constrained devices, since their processing capabilities, power supply, memory capacity, and bandwidth are all under stringent constraints. But due to their low cost and high scalability, partially with the help of cloud computing [4, 5], IWSNs have been used in a wide variety of real industrial applications, ranging from nuclear plant facility management, energy supply and demand management, and industrial process control to condition monitoring in the railway industry, as presented in Fig. 2 [3]. According to [3], sensor devices are attached to the object being monitored, such as tracks, bridges, or train mechanics, with one or more sensors mounted on a sensor board; the sensor nodes communicate with the base station using a wireless transmission protocol; the base station collates data and transmits them to the control center server, possibly through satellites or GPRS; alternatively, the sensor nodes may communicate directly with the server rather than via the base station, or the user may access the data directly via the base station.
Fig. 1 Components of a sensor node

Fig. 2 IWSN application of condition monitoring in the railway industry

The environment of IWSNs is extremely complex, with strict requirements on speed and reliability [6], and IWSN device nodes are often deployed in unattended or even hostile industrial areas; therefore, data trustworthiness and security must be taken into consideration when the networks are being designed. Further, the lack of physical security makes sensor nodes easy for intruders to compromise, and compromised nodes can later be used to attack the whole network. If compromised nodes or unreliable data sources cannot be identified in time, secret information may be revealed and the whole network could fall under the control of the adversaries [7]. Besides, individual nodes are not always honest in their interactions with others and may not provide trusted or reliable information to their peers. The existence of unreliable sources in the network will thus degrade the accuracy as well as the system performance [8], which in turn threatens the full functioning of IWSNs. For example, a body sensor network can remotely monitor the vital signs and activities of a patient, but untrusted data might lead to a wrong therapy or even the death of the patient. In order to provide data accuracy and security for the network, trust theory [9–16] has gradually been studied by researchers. By evaluating and storing the trust values of sensor nodes in WSNs, one can compute how much the data from those nodes can be trusted when they perform a certain job such as packet delivery or routing response. In this research, we propose a trust scheme to provide trusted data for IWSNs, which uses the negative binomial distribution as the trust computation model. Considering the noise present in industrial environments, a noise filter method is designed and combined with the proposed scheme. The organization of this study is as follows. Section 2 discusses trust theory in WSNs and the disadvantages of traditional trust schemes for industrial applications, Section 3 analyzes and evaluates the proposed method, Section 4 shows the simulation tests, and Section 5 concludes this work.

Traditional security solutions such as cryptography and intrusion detection have been successfully applied in computer networks, but these methods are not as effective when dealing with internal compromised nodes. The reason is that compromised nodes still have access to the cryptographic keys that are used to secure the communication links within the network. Additionally, a compromised node pretending to be an authorized one cannot be detected by cryptographic primitives alone; compromised nodes can thus appear legitimate from a cryptographic standpoint while undertaking malicious actions. To deal with the untrusted sources that malicious nodes represent, evaluating and storing the trust values of WSN nodes makes it possible to know how much those nodes can be trusted when they undertake a certain task. Trust schemes, where trust is usually defined as a node's belief in the reliability of another node's behaviors or actions, have been studied and proposed as an alternative to traditional security solutions. A typical trust scheme architecture is presented in Fig. 3 [15], and trust properties are shown in Fig. 4 [16].
Fig. 3 A typical trust scheme architecture

Fig. 4 Trust properties

Among trust schemes, statistics-based models such as Bayesian models [17–24] have received wide attention, and many trust models based on them have been proposed by researchers in the past several years. The basis of a trust mechanism is that trust is calculated either directly from the historical behaviors of the participating nodes or indirectly from the recommendations of other nodes. Bayesian theory fundamentally conforms to this procedure of trust evaluation: a Bayesian trust system attempts to discover behavior patterns through historical actions [13]. In the Bayesian approach, one first assigns a prior probability to an event, then combines the prior with the binomial likelihood, and finally updates the probability by posterior inference according to the relevant evidence.

In the reputation-based framework for high integrity sensor networks (RFSN) [10], a representative application of a binomial distribution-based trust scheme in WSNs, each sensor holds trust metrics representing the past behavior of other nodes in order to predict their future behavior. RFSN uses a completely decentralized method and can run on each sensor node. Nodes in RFSN only interact with other nodes within their wireless communication range; therefore, they only maintain the trust of nodes within the neighborhood. In RFSN, a transaction is defined as two nodes exchanging information or participating in a collaborative process. Based on the trust metrics built for other nodes by the behavior monitoring mechanism, a sensor node can treat them as cooperative or non-cooperative and evaluate their trustworthiness. In practice, trust in RFSN is defined as the probability that a node will cooperate. In [10], let Θ represent the probability that a certain node will cooperate; a prior distribution that denotes the probability that a node will cooperate with another one is defined by

$$ P(\Theta)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\Theta^{\alpha-1}(1-\Theta)^{\beta-1} $$

where 0≤Θ≤1, α≥0, and β≥0. Θ can be used as the success probability in Bernoulli observations. For example, let T∈[0,1] be node i's rating for node j in one transaction; then

$$ P(T|\Theta)=\Theta^{T}(1-\Theta)^{1-T} $$

When the transaction is complete, the posterior of Θ is defined by

$$ \begin{aligned} P(\Theta |T)=&\frac{P(T|\Theta)P(\Theta)}{\int P(T|\Theta)P(\Theta)d\Theta}\\ &\sim \text{beta}(\alpha+T,\beta+1-T) \end{aligned} $$

Then, the mathematical expectation of Θ is defined by

$$ E(\Theta)=\frac{\alpha+T}{\alpha+T+\beta+1-T} $$

In Eq. (4) or Eq. (5), E(Θ) can be regarded as the trust value of a sensor node in the practical application of WSNs. The higher the value of E(Θ), the more trusted the sensor node becomes. After n transactions, the mathematical expectation of Θ is defined by

$$ E(\Theta)=\frac{\alpha+nT}{\alpha+\beta+n} $$

and the two trust parameters become

$$ \alpha=\alpha+nT,\quad \beta=\beta+n\times(1-T) $$

In Eqs. (4), (5), and (6), the trust parameters α and β can be interpreted as the observed number of positive outcomes and the observed number of negative outcomes, respectively, regarding a certain transaction.
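To make the update rule concrete, the following minimal Python sketch maintains the trust parameters of Eq. (6) and evaluates the expectation of Eq. (5). It is our illustration of the formulas above rather than code from RFSN itself, and the class and method names are hypothetical.

class BetaTrust:
    # Beta-distribution trust record that one node keeps about a neighbor.
    def __init__(self, alpha=1.0, beta=1.0):
        # alpha and beta count positive and negative outcomes;
        # (1, 1) is the non-informative prior, i.e., an initial trust of 0.5.
        self.alpha = alpha
        self.beta = beta

    def update(self, n, t):
        # Eq. (6): record n transactions with average rating t in [0, 1].
        self.alpha += n * t
        self.beta += n * (1.0 - t)

    def trust(self):
        # Eq. (5): expected cooperation probability E(Theta).
        return self.alpha / (self.alpha + self.beta)

record = BetaTrust()
record.update(5, 1.0)  # five positive outcomes
record.update(4, 0.0)  # four negative outcomes
print(record.trust())  # 6/11, roughly 0.55

Starting from the prior (1, 1), each rated transaction simply shifts probability mass toward or away from cooperation, which is why the scheme is cheap enough for resource-constrained nodes.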
For example, in a packet delivery transaction, node i asks node j to transmit its data packets; if, after 9 requests from node i, node j has successfully transmitted 5 packets and failed to transmit 4, then the trust parameters about node j, observed and recorded by node i, are updated as α = α + 5 and β = β + 4.

The proposed scheme

Trust computation model

The proposed scheme uses the negative binomial distribution as the trust computation model, which is presented as follows. In a sequence of independent Bernoulli trials with success probability ρ, let Z denote the number of failures until the rth success. The random variable Z is called a negative binomial random variable with parameters r and ρ. Its probability mass function is defined by

$$ P(Z=s|r,\rho)={r+s-1 \choose s}\rho^{r}(1-\rho)^{s} $$

for s=0,1,2,… and 0<ρ<1. This is the probability of observing s failures before the rth success. In Eq. (7), ρ is the Bernoulli success probability, and its conjugate prior is the beta distribution. Then, the posterior of ρ is defined by

$$ \begin{aligned} P(\rho|Z)=&\frac{P(Z|\rho)P(\rho)}{\int P(Z|\rho)P(\rho)d\rho}\\ =&\frac{\Gamma(\alpha+\beta+r+s)}{\Gamma(\alpha+r)\Gamma(\beta+s)}\rho^{\alpha+r-1}(1-\rho)^{\beta+s-1} \end{aligned} $$

This indicates that the posterior P(ρ∣Z) has a beta distribution with parameters α+r and β+s. Then, the expectation of ρ is defined by

$$ E(\rho)=\frac{\alpha+r}{\alpha+\beta+r+s} $$

Traditional trust schemes are not well suited to IWSNs. For example, in Eq. (4), the trust parameters α and β can only grow by 1 after each transaction, so neighboring nodes have to keep observing the observed node continuously in order to compute and record its trust values. In some IWSN applications, nodes do not need to keep tracking a certain event; rather, they can be put into a sleep state for some time when there are no sensing tasks to follow, so that their energy can be saved. In this case, Eq. (4) is not applicable. In contrast, in Eq. (9), the increments of the trust parameters can be set to a certain batch size according to the characteristics of the specific IWSN, and after r+s transactions, neighboring nodes compute the trust of that node once.

In addition, as shown in Fig. 5, trust from third parties should be added as indirect references. This indirect method can be mapped onto D–S belief theory [25]. Assume j obtains the trust of i through h, and let (\alpha^{h}_{i},\beta^{h}_{i}) denote such indirect trust. Node j has past trust records about i and h, represented by (\alpha_{i},\beta_{i}) and (\alpha_{h},\beta_{h}) respectively. After combining with the indirect reputation, the trust parameters are defined by

$$ \alpha^{'}_{i}=\alpha_{i}+\frac{2\alpha_{h}\alpha^{h}_{i}}{(\beta_{h}+2)+\left(\alpha^{h}_{i}+\beta^{h}_{i}+2\right)+2\alpha_{h}} $$

$$ \beta^{'}_{i}=\beta_{i}+\frac{2\alpha_{h}\beta^{h}_{i}}{(\beta_{h}+2)+\left(\alpha^{h}_{i}+\beta^{h}_{i}+2\right)+2\alpha_{h}} $$

where \alpha^{'}_{i} and \beta^{'}_{i} are the combined trust parameters about node i.

Fig. 5 Direct trust and indirect trust in sensor networks

Noise filter method

Due to the harshness of the industrial environment, noise caused by radio interference and temporary node errors often occurs during the deployment of IWSNs. In this case, the device nodes are unable to record all the actual observations, whether positive or negative, and the recorded trust parameters α and β become only lower bounds on the numbers of observed successes and failures in the real situation.
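Setting the noise issue aside for a moment, the batch update of Eq. (9) and the combination rule of Eqs. (10) and (11) can be sketched in a few lines of Python. This is again our own illustration of the formulas, with hypothetical function names, not an implementation taken from the paper.

def nb_trust(alpha, beta, r, s):
    # Eq. (9): expected trust after a batch of r successes and s failures,
    # starting from the beta prior with parameters (alpha, beta).
    return (alpha + r) / (alpha + beta + r + s)

def combine_indirect(alpha_i, beta_i, alpha_h, beta_h, alpha_hi, beta_hi):
    # Eqs. (10) and (11): fold the witness h's report (alpha_hi, beta_hi)
    # about node i into j's own record (alpha_i, beta_i), weighted by
    # j's trust record (alpha_h, beta_h) about h.
    denom = (beta_h + 2.0) + (alpha_hi + beta_hi + 2.0) + 2.0 * alpha_h
    alpha_comb = alpha_i + 2.0 * alpha_h * alpha_hi / denom
    beta_comb = beta_i + 2.0 * alpha_h * beta_hi / denom
    return alpha_comb, beta_comb

# j's direct record about i, j's record about h, and h's report about i:
alpha_c, beta_c = combine_indirect(3.0, 1.0, 9.0, 1.0, 8.0, 2.0)
print(nb_trust(alpha_c, beta_c, r=5, s=1))  # trust after one more batch

Note that a witness h with a poor record (small alpha_h) contributes little to the combined parameters, which is exactly the discounting behavior Eqs. (10) and (11) are designed to provide.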
Given that such noise directly biases the recorded parameters, a noise filter is important for the trust computation of IWSNs. Based on the mean-value theorem for definite integrals, the noise filter method designed for IWSNs is presented below. Let ξ denote the probability of an event and f(ξ) the corresponding probability density function. According to the mean-value theorem for definite integrals, the mean value of f(ξ) over [0, 1] is \frac{\int^{1}_{0}f(\xi)d\xi}{1-0}=1. Because f(ξ) has a mean value of 1, the area where f(ξ) exceeds 1 equals the area where it falls below 1, so the integral of |f(ξ)−1| counts each deviation twice; hence a factor of 1/2 is applied. Combined with Eqs. (9), (10), and (11), the noise filter, denoted NF, is defined by

$$ NF=\frac{1}{2}\int_{0}^{1}\left|\frac{\Gamma\left(\alpha^{'}+\beta^{'}+r+s\right)}{\Gamma(\alpha^{'}+r)\Gamma(\beta^{'}+s)}\rho^{\alpha^{'}+r-1}(1-\rho)^{\beta^{'}+s-1}-1\right|d\rho $$

Then, the expectation of ρ with the noise filter is defined by

$$ E(\rho_{NF})=NF \times\frac{\alpha^{'}+r}{\alpha^{'}+\beta^{'}+r+s} $$

Note that NF is 0 for a completely flat posterior and approaches 1 as the posterior concentrates; Eq. (13) thus damps the trust value when the supporting evidence is thin or noisy.
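Eq. (12) has no simple closed form, but it is a one-dimensional integral of a beta density and is straightforward to evaluate numerically. The following sketch is our illustration, assuming SciPy is available and reading Eq. (12) as half the integrated absolute deviation of the posterior density from 1; it is not the authors' implementation.

from scipy import integrate
from scipy.stats import beta as beta_dist

def noise_filter(alpha_c, beta_c, r, s):
    # Eq. (12): half the integrated absolute deviation of the posterior
    # beta(alpha' + r, beta' + s) density from the uniform density.
    pdf = beta_dist(alpha_c + r, beta_c + s).pdf
    val, _ = integrate.quad(lambda rho: abs(pdf(rho) - 1.0), 0.0, 1.0)
    return 0.5 * val

def filtered_trust(alpha_c, beta_c, r, s):
    # Eq. (13): the raw expectation of rho scaled by the noise filter NF.
    nf = noise_filter(alpha_c, beta_c, r, s)
    return nf * (alpha_c + r) / (alpha_c + beta_c + r + s)

print(filtered_trust(1.0, 1.0, r=2, s=1))    # flat posterior: strong damping
print(filtered_trust(1.0, 1.0, r=80, s=20))  # concentrated posterior: weaker damping

In this reading, NF acts as a confidence weight: a trust value backed by little or noisy evidence is pulled toward 0, while a well-supported value passes through with far less attenuation.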
To test the performance of the proposed trust scheme, NS-2 is used for the simulation, and RFSN, a classic trust scheme that utilizes binomial distribution-based trust, is selected for comparison. A cluster-based sensor network is formed to monitor the temperature readings of a factory workshop. The tests in this section consist of three different scenarios, and the parameters are set as follows. One hundred twenty sensor nodes are randomly deployed in a rectangular region, and a base station is located in the center. The sensor nodes are divided into 8 clusters with 15 nodes in each cluster. The temperature reading of each normal sensor node is within (25, 30). It is assumed that there are compromised nodes in the network and that their data readings are within (40, 45). The NIC of each sensor node is assumed to be in promiscuous mode so that it can overhear the data packets from its nearby neighbors. To simulate the noise, 5% of the temperature readings from normal sensor nodes are not within (25, 30). In each simulation, the base station launches 2000 queries to collect temperature readings from the monitored region.

In the first scenario, changes in the proportion of compromised nodes are considered to test the convergence time of the trust schemes; the shorter the convergence time, the better. We can see in Fig. 6 that, as the proportion of compromised nodes goes up, the convergence time of both trust schemes increases. For a given proportion, the convergence time of the negative binomial (NB) scheme is shorter; e.g., when the proportion reaches 50%, the convergence times are about 585 and 650 for the NB scheme and RFSN, respectively. This indicates that the NB scheme needs less time to detect the malicious nodes and that data from the NB scheme can be more trusted.

Fig. 6 Convergence time to detect compromised nodes

In the second scenario, the success rate of service request attempts that are launched by compromised nodes and answered by normal nodes is compared between the two trust schemes. The lower the rate, the more reliable the scheme. In Fig. 7, as the running time increases, the success rate of the compromised nodes drops gradually in both schemes and comes close to zero at about 1000 s for the NB scheme and 1200 s for RFSN. The rate drops faster under the NB scheme, which indicates that it is more effective in checking compromised nodes and that data under the NB scheme are more reliable.

Fig. 7 Success rate of compromised nodes

In the third scenario, the trust robustness of both schemes is tested in Figs. 8 and 9 as the number of compromised nodes in the network increases gradually.

Fig. 8 Average trust with a 5% increase of compromised nodes every 100 queries in RFSN

Fig. 9 Average trust with a 5% increase of compromised nodes every 100 queries in NB

In Figs. 8 and 9, compromised nodes increase at a rate of 5% every 100 queries. Up to about the 800th query (equivalent to about 40% compromised nodes in the network), trust in both schemes can effectively detect the compromised nodes and minimize their influence on the network. From the 800th query onward, the average trust value of legitimate nodes begins to fall and the average trust value of compromised nodes begins to rise. This trend continues until about the 1500th query in Fig. 8 and the 1550th query in Fig. 9 (equivalent to about 75% and 77.5% compromised nodes in the network, respectively), when the average trust values of the two kinds of nodes reach the same point. After that, the average trust value of compromised nodes exceeds that of the legitimate nodes, which leaves the whole network compromised and unreliable. Figures 8 and 9 also indicate that the NB scheme is more robust under the attack of compromised nodes; they further show that, under both schemes, once the compromised nodes far outnumber the legitimate nodes, the network becomes vulnerable and easy to attack.

Although cryptographic primitives can provide the capability to tackle attacks from external networks, they cannot address the problems caused by internal compromised devices, which result in untrusted data in the network. In this article, a trust scheme with a noise filter is proposed to provide trusted data for IWSNs, and the simulations show that the proposed method can improve data trustworthiness, reliability, and robustness under industrial environments. In our future work, we will continue to study noise filter algorithms and refine the trust granularity so as to further improve data trustworthiness for IWSNs. Additionally, the behavior monitoring algorithm among sensor nodes will be studied to enhance the observation accuracy.

D–S: Dempster–Shafer
GPRS: General packet radio service
IWSN: Industrial wireless sensor network
NB: Negative binomial
NIC: Network interface card
RFSN: Reputation-based framework for high integrity sensor networks
WSN: Wireless sensor network

1. B.C. Villaverde, S. Rea, D. Pesch, InRout - A QoS aware route selection algorithm for industrial wireless sensor networks. Ad Hoc Netw. 10(3), 458–478 (2012).
2. D. Ioannis, Interactive multimedia installation art development using recycled input and sensing devices. Int. J. Arts Technol. 9(2), 108–125 (2016).
3. V.J. Hodge, S. O'Keefe, M. Weeks, A. Moulds, Wireless sensor networks for condition monitoring in the railway industry: a survey. IEEE Trans. Intell. Transp. Syst. 16(3), 1088–1106 (2015).
4. Z. Xia, Y. Zhu, X. Sun, Z. Qin, K. Ren, Towards privacy-preserving content-based image retrieval in cloud computing. IEEE Trans. Cloud Comput. 6(1), 276–286 (2018).
5. Z. Xia, N.N. Xiong, A.V. Vasilakos, X. Sun, EPCBIR: an efficient and privacy-preserving content-based image retrieval scheme in cloud computing. Inf. Sci. 387, 195–204 (2017).
6. C. Pei, Y. Xiao, W. Liang, X. Han, Trade-off of security and performance of lightweight block ciphers in industrial wireless sensor networks. EURASIP J. Wirel. Commun. Netw. 2018(1), 117–135 (2018).
7. R. Feng, X. Xu, X. Zhou, J. Wan, A trust evaluation algorithm for wireless sensor networks based on node behaviors and D-S evidence theory. Sensors 11(2), 1345–1360 (2011).
8. W. Li, S. Saruwatari, M. Bandai, T. Watanabe, Discussions on trade-offs in data aggregation in wireless sensor networks. Comput. Syst. Sci. Eng. 29(1), 51–63 (2014).
9. S. Seo, J.W. Kim, J.D. Kim, J.M. Chung, Reconfiguration time and complexity minimized trust-based clustering scheme for MANETs. EURASIP J. Wirel. Commun. Netw. 2017(1), 155–121 (2017).
10. S. Ganeriwal, M.B. Srivastava, Reputation based framework for high integrity sensor networks. ACM Trans. Sens. Netw. 4(3), 15–37 (2008).
11. Y. Wang, M. Zhang, W. Shu, An emerging intelligent optimization algorithm based on trust sensing model for wireless sensor networks. EURASIP J. Wirel. Commun. Netw. 2018(1), 145–154 (2018).
12. G. Amudha, P. Narayanasamy, Distributed location and trust based replica detection in wireless sensor networks. Wirel. Pers. Commun. 102(4), 3303–3321 (2018).
13. F. Wang, F. Wang, B. Huang, L.T. Yang, SONR: a reliable reputation system of self-organized network. J. Netw. Comput. Appl. 35(3), 914–926 (2012).
14. T. Zhang, L. Yan, Y. Yang, Trust evaluation method for clustered wireless sensor networks based on cloud model. Wirel. Netw. 24(3), 777–797 (2018).
15. J. Duan, D. Yang, S. Zhang, J. Zhao, M. Gidlund, A trust management scheme for industrial wireless sensor networks, in The 39th Annual Conference of the IEEE Industrial Electronics Society (IECON 2013) (Vienna, 2013), pp. 5576–5581.
16. O. Khalid, S.U. Khan, S.A. Madani, K. Hayat, M.I. Khan, Comparative study of trust and reputation systems for wireless sensor networks. Secur. Commun. Netw. 6(6), 669–688 (2013).
17. M.A.A. Kappel, D.C. Knupp, R.P. Domingos, I.N. Bastos, Analysis of hydrogen permeation in metals by means of a new anomalous diffusion model and Bayesian inference. Comput. Mater. Continua 49-50(1), 13–29 (2015).
18. G. Jayaprakash, M.P. Muthuraj, Prediction of compressive strength of various SCC mixes using relevance vector machine. Comput. Mater. Continua 54(1), 83–102 (2018).
19. G. D'Angelo, S. Rampone, F. Palmieri, Developing a trust model for pervasive computing based on Apriori association rules learning and Bayesian classification. Soft Comput. 21(21), 6297–6315 (2017).
20. S. Yoon, Y. Yu, Extended virtual in-situ calibration method in building systems using Bayesian inference. Autom. Constr. 73, 20–30 (2017).
21. W. Meng, W. Li, Y. Xiang, K.K.R. Choo, A Bayesian inference-based detection mechanism to defend medical smartphone networks against insider attacks. J. Netw. Comput. Appl. 78, 162–169 (2017).
22. E.T. Chancey, J.P. Bliss, Y. Yamani, H. Hah, Trust and the compliance-reliance paradigm: the effects of risk, error bias, and reliability on trust and dependence. Hum. Factors 59(3), 333–345 (2017).
23. V.S. Janani, M.S.K. Manikandan, Efficient trust management with Bayesian-Evidence theorem to secure public key infrastructure-based mobile ad hoc networks. EURASIP J. Wirel. Commun. Netw. 2018(1), 25–52 (2018).
24. V.S. Janani, M.S.K. Manikandan, Mobility aware clustering scheme with Bayesian-Evidence trust management for public key infrastructure in ad hoc networks. Wirel. Pers. Commun. 99(1), 371–401 (2018).
25. Z. Jiao, H. Gong, Y. Wang, A D–S evidence theory-based relay protection system hidden failures detection method in smart grid. IEEE Trans. Smart Grid 9(3), 2118–2126 (2018).

The authors gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation. This work is partially supported by Zhejiang Academy of Education Planning and Research under grant no. 2017SCG384.
The datasets in the simulation tests are assumed to be the temperature readings of a factory workshop, and the data readings are randomly generated within each value interval; interested researchers can therefore generate their own random data within the same three value intervals as presented in our simulation tests.

College of Management and Information, Zhejiang Post and Telecommunication College, Shaoxing, China
Shuyan Yu

Institute of Sustainable Industrial and Liveable Cities, Victoria University, Victoria, Australia
Jinyuan He

SY conducted the experiments and wrote the first draft of the paper. JH helped to revise and polish the paper. Both authors read and approved the final manuscript.

Correspondence to Shuyan Yu.

Yu, S., He, J. Providing trusted data for industrial wireless sensor networks. J Wireless Com Network 2018, 289 (2018). doi:10.1186/s13638-018-1307-y

Trusted data
Industrial wireless sensor networks
Negative binomial distribution
Calibration of the Logarithmic-Periodic Dipole Antenna (LPDA) Radio Stations at the Pierre Auger Observatory using an Octocopter (1702.01392) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. 
Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello June 13, 2018 astro-ph.IM, astro-ph.HE An in-situ calibration of a logarithmic periodic dipole antenna with a frequency coverage of 30 MHz to 80 MHz is performed. Such antennas are part of a radio station system used for detection of cosmic ray induced air showers at the Engineering Radio Array of the Pierre Auger Observatory, the so-called Auger Engineering Radio Array (AERA). The directional and frequency characteristics of the broadband antenna are investigated using a remotely piloted aircraft (RPA) carrying a small transmitting antenna. The antenna sensitivity is described by the vector effective length relating the measured voltage with the electric-field components perpendicular to the incoming signal direction. The horizontal and meridional components are determined with an overall uncertainty of 7.4^{+0.9}_{-0.3} % and 10.3^{+2.8}_{-1.7} % respectively. The measurement is used to correct a simulated response of the frequency and directional response of the antenna. In addition, the influence of the ground conductivity and permittivity on the antenna response is simulated. Both have a negligible influence given the ground conditions measured at the detector site. The overall uncertainties of the vector effective length components result in an uncertainty of 8.8^{+2.1}_{-1.3} % in the square root of the energy fluence for incoming signal directions with zenith angles smaller than 60{\deg}. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory (1612.07155) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. 
Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. 
Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Feb. 26, 2018 astro-ph.HE We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above $5 \cdot 10^{18}$ eV, i.e.~the region of the all-particle spectrum above the so-called "ankle" feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results. Indication of anisotropy in arrival directions of ultra-high-energy cosmic rays through comparison to the flux pattern of extragalactic gamma-ray sources (1801.06160) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. 
Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L.A.S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. 
Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Feb. 6, 2018 astro-ph.CO, astro-ph.HE A new analysis of the dataset from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of ultra-high-energy cosmic rays on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources. The data consist of 5514 events above 20 EeV with zenith angles up to 80 deg recorded before 2017 April 30. Sky models have been created for two distinct populations of extragalactic gamma-ray emitters: active galactic nuclei from the second catalog of hard Fermi-LAT sources (2FHL) and starburst galaxies from a sample that was examined with Fermi-LAT. Flux-limited samples, which include all types of galaxies from the Swift-BAT and 2MASS surveys, have been investigated for comparison. The sky model of cosmic-ray density constructed using each catalog has two free parameters, the fraction of events correlating with astrophysical objects and an angular scale characterizing the clustering of cosmic rays around extragalactic sources. A maximum-likelihood ratio test is used to evaluate the best values of these parameters and to quantify the strength of each model by contrast with isotropy. It is found that the starburst model fits the data better than the hypothesis of isotropy with a statistical significance of 4.0 sigma, the highest value of the test statistic being for energies above 39 EeV. The three alternative models are favored against isotropy with 2.7-3.2 sigma significance. The origin of the indicated deviation from isotropy is examined and prospects for more sensitive future studies are discussed. Probing the evolution of the EAS muon content in the atmosphere with KASCADE-Grande (1801.05513) KASCADE-Grande Collaboration: W.D. Apel, J.C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, D. Fuhrmann, A. Gherghel-Lascu, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, H.J. Mathes, H.J. Mayer, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski Jan. 17, 2018 astro-ph.HE The evolution of the muon content of very high energy air showers (EAS) in the atmosphere is investigated with data of the KASCADE-Grande observatory. For this purpose, the muon attenuation length in the atmosphere is obtained to $\Lambda_\mu = 1256 \, \pm 85 \, ^{+229}_{-232}(\mbox{syst})\, \mbox{g/cm}^2$ from the experimental data for shower energies between $10^{16.3}$ and $10^{17.0} \, \mbox{eV}$. Comparison of this quantity with predictions of the high-energy hadronic interaction models QGSJET-II-02, SIBYLL 2.1, QGSJET-II-04 and EPOS-LHC reveals that the attenuation of the muon content of measured EAS in the atmosphere is lower than predicted. Deviations are, however, less significant with the post-LHC models. 
The presence of such deviations seems to be related to a difference between the simulated and the measured zenith angle evolutions of the lateral muon density distributions of EAS, which also causes a discrepancy between the measured absorption lengths of the density of shower muons and the predicted ones at large distances from the EAS core. The studied deficiencies show that all four considered hadronic interaction models fail to describe consistently the zenith angle evolution of the muon content of EAS in the aforesaid energy regime. Inferences on Mass Composition and Tests of Hadronic Interactions from 0.3 to 100 EeV using the water-Cherenkov Detectors of the Pierre Auger Observatory (1710.07249) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. 
Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 19, 2017 astro-ph.HE We present a new method for probing the hadronic interaction models at ultra-high energy and extracting details about mass composition. This is done using the time profiles of the signals recorded with the water-Cherenkov detectors of the Pierre Auger Observatory. The profiles arise from a mix of the muon and electromagnetic components of air-showers. Using the risetimes of the recorded signals we define a new parameter, which we use to compare our observations with predictions from simulations. We find, firstly, inconsistencies between our data and predictions over a greater energy range and with substantially more events than in previous studies. Secondly, by calibrating the new parameter with fluorescence measurements from observations made at the Auger Observatory, we can infer the depth of shower maximum for a sample of over 81,000 events extending from 0.3 EeV to over 100 EeV. Above 30 EeV, the sample is nearly fourteen times larger than currently available from fluorescence measurements and extending the covered energy range by half a decade. 
The energy dependence of the average depth of shower maximum is compared to simulations and interpreted in terms of the mean of the logarithmic mass. We find good agreement with previous work and extend the measurement of the mean depth of shower maximum to greater energies than before, reducing significantly the statistical uncertainty associated with the inferences about mass composition. Search for High-energy Neutrinos from Binary Neutron Star Merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory (1710.05839) ANTARES, IceCube, Pierre Auger, LIGO Scientific, Virgo Collaborations: A. Albert, M. Andre, M. Anghinolfi, M. Ardid, J.-J. Aubert, J. Aublin, T. Avgitas, B. Baret, J. Barrios-Marti, S. Basa, B. Belhorma, V. Bertin, S. Biagi, R. Bormuth, S. Bourret, M.C. Bouwhuis, H. Branzacs, R. Bruijn, J. Brunner, J. Busto, A. Capone, L. Caramete, J. Carr, S. Celli, R. Cherkaoui El Moursli, T. Chiarusi, M. Circella, J.A.B. Coelho, A. Coleiro, R. Coniglione, H. Costantini, P. Coyle, A. Creusot, A. F. Diaz, A. Deschamps, G. De Bonis, C. Distefano, I. Di Palma, A. Domi, C. Donzaud, D. Dornic, D. Drouhin, T. Eberl, I. El Bojaddaini, N. El Khayati, D. Elsasser, A. Enzenhofer, A. Ettahiri, F. Fassi, I. Felis, L.A. Fusco, P. Gay, V. Giordano, H. Glotin, T. Gregoire, R. Gracia Ruiz, K. Graf, S. Hallmann, H. van Haren, A.J. Heijboer, Y. Hello, J.J. Hernandez-Rey, J. Hossl, J. Hofestadt, G. Illuminati, C.W. James, M. de Jong, M. Jongen, M. Kadler, O. Kalekin, U. Katz, D. Kiessling, A. Kouchner, M. Kreter, I. Kreykenbohm, V. Kulikovskiy, C. Lachaud, R. Lahmann, D. Lef`evre, E. Leonora, M. Lotze, S. Loucatos, M. Marcelin, A. Margiotta, A. Marinelli, J.A. Martinez-Mora, R. Mele, K. Melis, T. Michael, P. Migliozzi, A. Moussa, S. Navas, E. Nezri, M. Organokov, G.E. Puavualacs, C. Pellegrino, C. Perrina, P. Piattelli, V. Popa, T. Pradier, L. Quinn, C. Racca, G. Riccobene, A. Sanchez-Losa, M. Salda na, I. Salvadori, D. F. E. Samtleben, M. Sanguineti, P. Sapienza, F. Schussler, C. Sieger, M. Spurio, Th. Stolarczyk, M. Taiuti, Y. Tayalati, A. Trovato, D. Turpin, C. Tonnis, B. Vallage, V. Van Elewyck, F. Versari, D. Vivolo, A. Vizzoca, J. Wilms, J.D. Zornoza, J. Zu niga, M. G. Aartsen, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, I. Al Samarai, D. Altmann, K. Andeen, T. Anderson, I. Ansseau, G. Anton, C. Arguelles, J. Auffenberg, S. Axani, H. Bagherpour, X. Bai, J. P. Barron, S. W. Barwick, V. Baum, R. Bay, J. J. Beatty, J. Becker Tjus, K.-H. Becker, S. BenZvi, D. Berley, E. Bernardini, D. Z. Besson, G. Binder, D. Bindig, E. Blaufuss, S. Blot, C. Bohm, M. Borner, F. Bos, D. Bose, S. Boser, O. Botner, E. Bourbeau, J. Bourbeau, F. Bradascio, J. Braun, L. Brayeur, M. Brenzke, H.-P. Bretz, S. Bron, J. Brostean-Kaiser, A. Burgman, T. Carver, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, G. H. Collin, J. M. Conrad, D. F. Cowen, R. Cross, M. Day, J. P. A. M. de Andre, C. De Clercq, J. J. DeLaunay, H. Dembinski, S. De Ridder, P. Desiati, K. D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J. C. Diaz-Velez, V. di Lorenzo, H. Dujmovic, J. P. Dumm, M. Dunkman, E. Dvorak, B. Eberhardt, T. Ehrhardt, B. Eichmann, P. Eller, P. A. Evenson, S. Fahey, A. R. Fazely, J. Felde, K. Filimonov, C. Finley, S. Flis, A. Franckowiak, E. Friedman, T. Fuchs, T. K. Gaisser, J. Gallagher, L. Gerhardt, K. Ghorbani, W. Giang, T. Glauch, T. Glusenkamp, A. Goldschmidt, J. G. Gonzalez, D. Grant, Z. Griffith, C. Haack, A. Hallgren, F. Halzen, K. Hanson, D. Hebecker, D. 
Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G. C. Hill, K. D. Hoffman, R. Hoffmann, B. Hokanson-Fasig, K. Hoshina, F. Huang, M. Huber, K. Hultqvist, M. Hunnefeld, S. In, A. Ishihara, E. Jacobi, G. S. Japaridze, M. Jeong, K. Jero, B. J. P. Jones, P. Kalaczynski, W. Kang, A. Kappes, T. Karg, A. Karle, U. Katz, M. Kauer, A. Keivani, J. L. Kelley, A. Kheirandish, J. Kim, M. Kim, T. Kintscher, J. Kiryluk, T. Kittler, S. R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, L. Kopke, C. Kopper, S. Kopper, J. P. Koschinsky, D. J. Koskinen, M. Kowalski, K. Krings, M. Kroll, G. Kruckl, J. Kunnen, S. Kunwar, N. Kurahashi, T. Kuwabara, A. Kyriacou, M. Labare, J. L. Lanfranchi, M. J. Larson, F. Lauber, M. Lesiak-Bzdak, M. Leuermann, Q. R. Liu, L. Lu, J. Lunemann, W. Luszczak, J. Madsen, G. Maggi, K. B. M. Mahn, S. Mancina, R. Maruyama, K. Mase, R. Maunu, F. McNally, K. Meagher, M. Medici, M. Meier, T. Menne, G. Merino, T. Meures, S. Miarecki, J. Micallef, G. Momente, T. Montaruli, R. W. Moore, M. Moulai, R. Nahnhauer, P. Nakarmi, U. Naumann, G. Neer, H. Niederhausen, S. C. Nowicki, D. R. Nygren, A. Obertacke Pollmann, A. Olivas, A. O'Murchadha, T. Palczewski, H. Pandya, D. V. Pankova, P. Peiffer, J. A. Pepper, C. Perez de los Heros, D. Pieloth, E. Pinat, M. Plum, D. Pranav, P. B. Price, G. T. Przybylski, C. Raab, L. Radel, M. Rameez, K. Rawlins, I. C. Rea, R. Reimann, B. Relethford, M. Relich, E. Resconi, W. Rhode, M. Richman, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, D. Rysewyk, T. Salzer, S. E. Sanchez Herrera, A. Sandrock, J. Sandroos, M. Santander, S. Sarkar, S. Sarkar, K. Satalecka, P. Schlunder, T. Schmidt, A. Schneider, S. Schoenen, S. Schoneberg, L. Schumacher, D. Seckel, S. Seunarine, J. Soedingrekso, D. Soldin, M. Song, G. M. Spiczak, C. Spiering, J. Stachurska, M. Stamatikos, T. Stanev, A. Stasik, J. Stettner, A. Steuer, T. Stezelberger, R. G. Stokstad, A. Stossl, N. L. Strotjohann, T. Stuttard, G. W. Sullivan, M. Sutherland, I. Taboada, J. Tatar, F. Tenholt, S. Ter-Antonyan, A. Terliuk, G. Tevsic, S. Tilav, P. A. Toale, M. N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, C. F. Tung, A. Turcati, C. F. Turley, B. Ty, E. Unger, M. Usner, J. Vandenbroucke, W. Van Driessche, N. van Eijndhoven, S. Vanheule, J. van Santen, M. Vehring, E. Vogel, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, F. D. Wandler, N. Wandkowsky, A. Waza, C. Weaver, M. J. Weiss, C. Wendt, J. Werthebach, S. Westerhoff, B. J. Whelan, K. Wiebe, C. H. Wiebusch, L. Wille, D. R. Williams, L. Wills, M. Wolf, J. Wood, T. R. Wood, E. Woolsey, K. Woschnagg, D. L. Xu, X. W. Xu, Y. Xu, J. P. Yanez, G. Yodh, S. Yoshida, T. Yuan, M. Zoll, A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, J.M. Albury, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Mu niz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Bohavcova, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceicc ao, G. Consolati, F. 
Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin (deceased, August 2016), S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, J.A. Day, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Diaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Feldbusch, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. Garcia, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Glas, C. Glaser, G. Golup, M. Gomez Berisso, P.F. Gomez Vitale, N. Gonzalez, A. Gorgi, M. Gottowik, A.F. Grillo (deceased, February 2017), T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, V.M. Harvey, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Horandel, P. Horvath, M. Hrabovsky, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kaapa, K.H. Kampert, B. Keilhauer, N. Kemmerich, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. Lopez, A. Lopez Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martinez Bravo, J.J. Masias Meza, H.J. Mathes, S. Mathys, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafa, A.L. Muller, G. Muller, M.A. Muller, S. Muller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L.A.S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sanchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovanek, F.G. Schroder, S. Schroder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J.F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Duran, T. Sudholz, T. Suomijarvi, A.D.
Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tome, G. Torralba Elipe, P. Travnicek, M. Trini, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdes Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cardenas, R.A. Vazquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedenski, L. Wiencke, H. Wilczynski, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello, B. P. Abbott, R. Abbott, T. D. Abbott, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. X. Adhikari, V. B. Adya, C. Affeldt, M. Afrough, B. Agarwal, M. Agathos, K. Agatsuma, N. Aggarwal, O. D. Aguiar, L. Aiello, A. Ain, P. Ajith, B. Allen, G. Allen, A. Allocca, P. A. Altin, A. Amato, A. Ananyeva, S. B. Anderson, W. G. Anderson, S. V. Angelova, S. Antier, S. Appert, K. Arai, M. C. Araya, J. S. Areeda, N. Arnaud, K. G. Arun, S. Ascenzi, G. Ashton, M. Ast, S. M. Aston, P. Astone, D. V. Atallah, P. Aufmuth, C. Aulbert, K. AultONeal, C. Austin, A. Avila-Alvarez, S. Babak, P. Bacon, M. K. M. Bader, S. Bae, P. T. Baker, F. Baldaccini, G. Ballardin, S. W. Ballmer, S. Banagiri, J. C. Barayoga, S. E. Barclay, B. C. Barish, D. Barker, K. Barkett, F. Barone, B. Barr, L. Barsotti, M. Barsuglia, D. Barta, J. Bartlett, I. Bartos, R. Bassiri, A. Basti, J. C. Batch, M. Bawaj, J. C. Bayley, M. Bazzan, B. Becsy, C. Beer, M. Bejger, I. Belahcene, A. S. Bell, B. K. Berger, G. Bergmann, J. J. Bero, C. P. L. Berry, D. Bersanetti, A. Bertolini, J. Betzwieser, S. Bhagwat, R. Bhandare, I. A. Bilenko, G. Billingsley, C. R. Billman, J. Birch, R. Birney, O. Birnholtz, S. Biscans, S. Biscoveanu, A. Bisht, M. Bitossi, C. Biwer, M. A. Bizouard, J. K. Blackburn, J. Blackman, C. D. Blair, D. G. Blair, R. M. Blair, S. Bloemen, O. Bock, N. Bode, M. Boer, G. Bogaert, A. Bohe, F. Bondu, E. Bonilla, R. Bonnand, B. A. Boom, R. Bork, V. Boschi, S. Bose, K. Bossie, Y. Bouffanais, A. Bozzi, C. Bradaschia, P. R. Brady, M. Branchesi, J. E. Brau, T. Briant, A. Brillet, M. Brinkmann, V. Brisson, P. Brockill, J. E. Broida, A. F. Brooks, D. A. Brown, D. D. Brown, S. Brunett, C. C. Buchanan, A. Buikema, T. Bulik, H. J. Bulten, A. Buonanno, D. Buskulic, C. Buy, R. L. Byer, M. Cabero, L. Cadonati, G. Cagnoli, C. Cahillane, J. Calderon Bustillo, T. A. Callister, E. Calloni, J. B. Camp, M. Canepa, P. Canizares, K. C. Cannon, H. Cao, J. Cao, C. D. Capano, E. Capocasa, F. Carbognani, S. Caride, M. F. Carney, J. Casanueva Diaz, C. Casentini, S. Caudill, M. Cavaglià, F. Cavalier, R. Cavalieri, G. Cella, C. B. Cepeda, P. Cerda-Duran, G. Cerretani, E. Cesarini, S. J. Chamberlin, M. Chan, S. Chao, P. Charlton, E. Chase, E. Chassande-Mottin, D. Chatterjee, B. D. Cheeseboro, H. Y. Chen, X. Chen, Y. Chen, H.-P. Cheng, H. Chia, A. Chincarini, A. Chiummo, T. Chmiel, H. S. Cho, M. Cho, J. H. Chow, N. Christensen, Q. Chu, A. J. K. Chua, S. Chua, A. K. W. Chung, S. Chung, G. Ciani, R. Ciolfi, C. E. Cirelli, A. Cirone, F. Clara, J. A. Clark, P. Clearwater, F. Cleva, C. Cocchieri, E. Coccia, P.-F. Cohadon, D. Cohen, A. Colla, C. G. Collette, L. R. Cominsky, M. Constancio Jr., L. Conti, S. J. Cooper, P. Corban, T. R. Corbitt, I. Cordero-Carrion, K. R. Corley, N. Cornish, A. Corsi, S. Cortese, C. A. Costa, M. W. Coughlin, S. B.
Coughlin, J.-P. Coulon, S. T. Countryman, P. Couvares, P. B. Covas, E. E. Cowan, D. M. Coward, M. J. Cowart, D. C. Coyne, R. Coyne, J. D. E. Creighton, T. D. Creighton, J. Cripe, S. G. Crowder, T. J. Cullen, A. Cumming, L. Cunningham, E. Cuoco, T. Dal Canton, G. Dalya, S. L. Danilishin, S. D'Antonio, K. Danzmann, A. Dasgupta, C. F. Da Silva Costa, V. Dattilo, I. Dave, M. Davier, D. Davis, E. J. Daw, B. Day, S. De, D. DeBra, J. Degallaix, M. De Laurentis, S. Deleglise, W. Del Pozzo, N. Demos, T. Denker, T. Dent, R. De Pietri, V. Dergachev, R. De Rosa, R. T. DeRosa, C. De Rossi, R. DeSalvo, O. de Varona, J. Devenson, S. Dhurandhar, M. C. Diaz, L. Di Fiore, M. Di Giovanni, T. Di Girolamo, A. Di Lieto, S. Di Pace, I. Di Palma, F. Di Renzo, Z. Doctor, V. Dolique, F. Donovan, K. L. Dooley, S. Doravari, I. Dorrington, R. Douglas, M. Dovale Alvarez, T. P. Downes, M. Drago, C. Dreissigacker, J. C. Driggers, Z. Du, M. Ducrot, P. Dupej, S. E. Dwyer, T. B. Edo, M. C. Edwards, A. Effler, H.-B. Eggenstein, P. Ehrens, J. Eichholz, S. S. Eikenberry, R. A. Eisenstein, R. C. Essick, D. Estevez, Z. B. Etienne, T. Etzel, M. Evans, T. M. Evans, M. Factourovich, V. Fafone, H. Fair, S. Fairhurst, X. Fan, S. Farinon, B. Farr, W. M. Farr, E. J. Fauchon-Jones, M. Favata, M. Fays, C. Fee, H. Fehrmann, J. Feicht, M. M. Fejer, A. Fernandez-Galiana, I. Ferrante, E. C. Ferreira, F. Ferrini, F. Fidecaro, D. Finstad, I. Fiori, D. Fiorucci, M. Fishbach, R. P. Fisher, M. Fitz-Axen, R. Flaminio, M. Fletcher, H. Fong, J. A. Font, P. W. F. Forsyth, S. S. Forsyth, J.-D. Fournier, S. Frasca, F. Frasconi, Z. Frei, A. Freise, R. Frey, V. Frey, E. M. Fries, P. Fritschel, V. V. Frolov, P. Fulda, M. Fyffe, H. Gabbard, B. U. Gadre, S. M. Gaebel, J. R. Gair, L. Gammaitoni, M. R. Ganija, S. G. Gaonkar, C. Garcia-Quiros, F. Garufi, B. Gateley, S. Gaudio, G. Gaur, V. Gayathri, N. Gehrels (deceased, February 2017), G. Gemme, E. Genin, A. Gennai, D. George, J. George, L. Gergely, V. Germain, S. Ghonge, Abhirup Ghosh, Archisman Ghosh, S. Ghosh, J. A. Giaime, K. D. Giardina, A. Giazotto, K. Gill, L. Glover, E. Goetz, R. Goetz, S. Gomes, B. Goncharov, G. Gonzalez, J. M. Gonzalez Castro, A. Gopakumar, M. L. Gorodetsky, S. E. Gossan, M. Gosselin, R. Gouaty, A. Grado, C. Graef, M. Granata, A. Grant, S. Gras, C. Gray, G. Greco, A. C. Green, E. M. Gretarsson, P. Groot, H. Grote, S. Grunewald, P. Gruning, G. M. Guidi, X. Guo, A. Gupta, M. K. Gupta, K. E. Gushwa, E. K. Gustafson, R. Gustafson, O. Halim, B. R. Hall, E. D. Hall, E. Z. Hamilton, G. Hammond, M. Haney, M. M. Hanke, J. Hanks, C. Hanna, M. D. Hannam, O. A. Hannuksela, J. Hanson, T. Hardwick, J. Harms, G. M. Harry, I. W. Harry, M. J. Hart, C.-J. Haster, K. Haughian, J. Healy, A. Heidmann, M. C. Heintze, H. Heitmann, P. Hello, G. Hemming, M. Hendry, I. S. Heng, J. Hennig, A. W. Heptonstall, M. Heurs, S. Hild, T. Hinderer, D. Hoak, D. Hofman, K. Holt, D. E. Holz, P. Hopkins, C. Horst, J. Hough, E. A. Houston, E. J. Howell, A. Hreibi, Y. M. Hu, E. A. Huerta, D. Huet, B. Hughey, S. Husa, S. H. Huttner, T. Huynh-Dinh, N. Indik, R. Inta, G. Intini, H. N. Isa, J.-M. Isac, M. Isi, B. R. Iyer, K. Izumi, T. Jacqmin, K. Jani, P. Jaranowski, S. Jawahar, F. Jimenez-Forteza, W. W. Johnson, D. I. Jones, R. Jones, R. J. G. Jonker, L. Ju, J. Junker, C. V. Kalaghatgi, V. Kalogera, B. Kamai, S. Kandhasamy, G. Kang, J. B. Kanner, S. J. Kapadia, S. Karki, K. S. Karvinen, M. Kasprzack, M. Katolik, E. Katsavounidis, W. Katzman, S. Kaufer, K. Kawabe, F. Kefelian, D. Keitel, A. J. Kemball, R.
Kennedy, C. Kent, J. S. Key, F. Y. Khalili, I. Khan, S. Khan, Z. Khan, E. A. Khazanov, N. Kijbunchoo, Chunglee Kim, J. C. Kim, K. Kim, W. Kim, W. S. Kim, Y.-M. Kim, S. J. Kimbrell, E. J. King, P. J. King, M. Kinley-Hanlon, R. Kirchhoff, J. S. Kissel, L. Kleybolte, S. Klimenko, T. D. Knowles, P. Koch, S. M. Koehlenbeck, S. Koley, V. Kondrashov, A. Kontos, M. Korobko, W. Z. Korth, I. Kowalska, D. B. Kozak, C. Kramer, V. Kringel, B. Krishnan, A. Krolak, G. Kuehn, P. Kumar, R. Kumar, S. Kumar, L. Kuo, A. Kutynia, S. Kwang, B. D. Lackey, K. H. Lai, M. Landry, R. N. Lang, J. Lange, B. Lantz, R. K. Lanza, A. Lartaux-Vollard, P. D. Lasky, M. Laxen, A. Lazzarini, C. Lazzaro, P. Leaci, S. Leavey, C. H. Lee, H. K. Lee, H. M. Lee, H. W. Lee, K. Lee, J. Lehmann, A. Lenon, M. Leonardi, N. Leroy, N. Letendre, Y. Levin, T. G. F. Li, S. D. Linker, T. B. Littenberg, J. Liu, R. K. L. Lo, N. A. Lockerbie, L. T. London, J. E. Lord, M. Lorenzini, V. Loriette, M. Lormand, G. Losurdo, J. D. Lough, C. O. Lousto, G. Lovelace, H. Luck, D. Lumaca, A. P. Lundgren, R. Lynch, Y. Ma, R. Macas, S. Macfoy, B. Machenschalk, M. MacInnis, D. M. Macleod, I. Magaña Hernandez, F. Magaña-Sandoval, L. Magaña Zertuche, R. M. Magee, E. Majorana, I. Maksimovic, N. Man, V. Mandic, V. Mangano, G. L. Mansell, M. Manske, M. Mantovani, F. Marchesoni, F. Marion, S. Marka, Z. Marka, C. Markakis, A. S. Markosyan, A. Markowitz, E. Maros, A. Marquina, F. Martelli, L. Martellini, I. W. Martin, R. M. Martin, D. V. Martynov, K. Mason, E. Massera, A. Masserot, T. J. Massinger, M. Masso-Reid, S. Mastrogiovanni, A. Matas, F. Matichard, L. Matone, N. Mavalvala, N. Mazumder, R. McCarthy, D. E. McClelland, S. McCormick, L. McCuller, S. C. McGuire, G. McIntyre, J. McIver, D. J. McManus, L. McNeill, T. McRae, S. T. McWilliams, D. Meacher, G. D. Meadors, M. Mehmet, J. Meidam, E. Mejuto-Villa, A. Melatos, G. Mendell, R. A. Mercer, E. L. Merilh, M. Merzougui, S. Meshkov, C. Messenger, C. Messick, R. Metzdorff, P. M. Meyers, H. Miao, C. Michel, H. Middleton, E. E. Mikhailov, L. Milano, A. L. Miller, B. B. Miller, J. Miller, M. Millhouse, M. C. Milovich-Goff, O. Minazzoli, Y. Minenkov, J. Ming, C. Mishra, S. Mitra, V. P. Mitrofanov, G. Mitselmakher, R. Mittleman, D. Moffa, A. Moggi, K. Mogushi, M. Mohan, S. R. P. Mohapatra, M. Montani, C. J. Moore, D. Moraru, G. Moreno, S. R. Morriss, B. Mours, C. M. Mow-Lowry, G. Mueller, A. W. Muir, Arunava Mukherjee, D. Mukherjee, S. Mukherjee, N. Mukund, A. Mullavey, J. Munch, E. A. Muñiz, M. Muratore, P. G. Murray, K. Napier, I. Nardecchia, L. Naticchioni, R. K. Nayak, J. Neilson, G. Nelemans, T. J. N. Nelson, M. Nery, A. Neunzert, L. Nevin, J. M. Newport, G. Newton (deceased, December 2016), K. K. Y. Ng, T. T. Nguyen, D. Nichols, A. B. Nielsen, S. Nissanke, A. Nitz, A. Noack, F. Nocera, D. Nolting, C. North, L. K. Nuttall, J. Oberling, G. D. O'Dea, G. H. Ogin, J. J. Oh, S. H. Oh, F. Ohme, M. A. Okada, M. Oliver, P. Oppermann, Richard J. Oram, B. O'Reilly, R. Ormiston, L. F. Ortega, R. O'Shaughnessy, S. Ossokine, D. J. Ottaway, H. Overmier, B. J. Owen, A. E. Pace, J. Page, M. A. Page, A. Pai, S. A. Pai, J. R. Palamos, O. Palashov, C. Palomba, A. Pal-Singh, Howard Pan, Huang-Wei Pan, B. Pang, P. T. H. Pang, C. Pankow, F. Pannarale, B. C. Pant, F. Paoletti, A. Paoli, M. A. Papa, A. Parida, W. Parker, D. Pascucci, A. Pasqualetti, R. Passaquieti, D. Passuello, M. Patil, B. Patricelli, B. L. Pearlstone, M. Pedraza, R. Pedurand, L. Pekowsky, A. Pele, S. Penn, C. J. Perez, A. Perreca, L. M. Perri, H. P.
Pfeiffer, M. Phelps, O. J. Piccinni, M. Pichot, F. Piergiovanni, V. Pierro, G. Pillant, L. Pinard, I. M. Pinto, M. Pirello, M. Pitkin, M. Poe, R. Poggiani, P. Popolizio, E. K. Porter, A. Post, J. Powell, J. Prasad, J. W. W. Pratt, G. Pratten, V. Predoi, T. Prestegard, M. Prijatelj, M. Principe, S. Privitera, G. A. Prodi, L. G. Prokhorov, O. Puncken, M. Punturo, P. Puppo, M. Purrer, H. Qi, V. Quetschke, E. A. Quintero, R. Quitzow-James, F. J. Raab, D. S. Rabeling, H. Radkins, P. Raffai, S. Raja, C. Rajan, B. Rajbhandari, M. Rakhmanov, K. E. Ramirez, A. Ramos-Buades, P. Rapagnani, V. Raymond, M. Razzano, J. Read, T. Regimbau, L. Rei, S. Reid, D. H. Reitze, W. Ren, S. D. Reyes, F. Ricci, P. M. Ricker, S. Rieger, K. Riles, M. Rizzo, N. A. Robertson, R. Robie, F. Robinet, A. Rocchi, L. Rolland, J. G. Rollins, V. J. Roma, R. Romano, C. L. Romel, J. H. Romie, D. Rosinska, M. P. Ross, S. Rowan, A. Rudiger, P. Ruggi, G. Rutins, K. Ryan, S. Sachdev, T. Sadecki, L. Sadeghian, M. Sakellariadou, L. Salconi, M. Saleem, F. Salemi, A. Samajdar, L. Sammut, L. M. Sampson, E. J. Sanchez, L. E. Sanchez, N. Sanchis-Gual, V. Sandberg, J. R. Sanders, B. Sassolas, B. S. Sathyaprakash, P. R. Saulson, O. Sauter, R. L. Savage, A. Sawadsky, P. Schale, M. Scheel, J. Scheuer, J. Schmidt, P. Schmidt, R. Schnabel, R. M. S. Schofield, A. Schonbeck, E. Schreiber, D. Schuette, B. W. Schulte, B. F. Schutz, S. G. Schwalbe, J. Scott, S. M. Scott, E. Seidel, D. Sellers, A. S. Sengupta, D. Sentenac, V. Sequino, A. Sergeev, D. A. Shaddock, T. J. Shaffer, A. A. Shah, M. S. Shahriar, M. B. Shaner, L. Shao, B. Shapiro, P. Shawhan, A. Sheperd, D. H. Shoemaker, D. M. Shoemaker, K. Siellez, X. Siemens, M. Sieniawska, D. Sigg, A. D. Silva, L. P. Singer, A. Singh, A. Singhal, A. M. Sintes, B. J. J. Slagmolen, B. Smith, J. R. Smith, R. J. E. Smith, S. Somala, E. J. Son, J. A. Sonnenberg, B. Sorazu, F. Sorrentino, T. Souradeep, A. P. Spencer, A. K. Srivastava, K. Staats, A. Staley, M. Steinke, J. Steinlechner, S. Steinlechner, D. Steinmeyer, S. P. Stevenson, R. Stone, D. J. Stops, K. A. Strain, G. Stratta, S. E. Strigin, A. Strunk, R. Sturani, A. L. Stuver, T. Z. Summerscales, L. Sun, S. Sunil, J. Suresh, P. J. Sutton, B. L. Swinkels, M. J. Szczepanczyk, M. Tacca, S. C. Tait, C. Talbot, D. Talukder, D. B. Tanner, M. Tapai, A. Taracchini, J. D. Tasson, J. A. Taylor, R. Taylor, S. V. Tewari, T. Theeg, F. Thies, E. G. Thomas, M. Thomas, P. Thomas, K. A. Thorne, E. Thrane, S. Tiwari, V. Tiwari, K. V. Tokmakov, K. Toland, M. Tonelli, Z. Tornasi, A. Torres-Forne, C. I. Torrie, D. Toyra, F. Travasso, G. Traylor, J. Trinastic, M. C. Tringali, L. Trozzo, K. W. Tsang, M. Tse, R. Tso, L. Tsukada, D. Tsuna, D. Tuyenbayev, K. Ueno, D. Ugolini, C. S. Unnikrishnan, A. L. Urban, S. A. Usman, H. Vahlbruch, G. Vajente, G. Valdes, N. van Bakel, M. van Beuzekom, J. F. J. van den Brand, C. Van Den Broeck, D. C. Vander-Hyde, L. van der Schaaf, J. V. van Heijningen, A. A. van Veggel, M. Vardaro, V. Varma, S. Vass, M. Vasuth, A. Vecchio, G. Vedovato, J. Veitch, P. J. Veitch, K. Venkateswara, G. Venugopalan, D. Verkindt, F. Vetrano, A. Vicere, A. D. Viets, S. Vinciguerra, D. J. Vine, J.-Y. Vinet, S. Vitale, T. Vo, H. Vocca, C. Vorvick, S. P. Vyatchanin, A. R. Wade, L. E. Wade, M. Wade, R. Walet, M. Walker, L. Wallace, S. Walsh, G. Wang, H. Wang, J. Z. Wang, W. H. Wang, Y. F. Wang, R. L. Ward, J. Warner, M. Was, J. Watchi, B. Weaver, L.-W. Wei, M. Weinert, A. J. Weinstein, R. Weiss, L. Wen, E. K. Wessel, P. Wessels, J. Westerweck, T. Westphal, K. Wette, J. T. 
Whelan, B. F. Whiting, C. Whittle, D. Wilken, D. Williams, R. D. Williams, A. R. Williamson, J. L. Willis, B. Willke, M. H. Wimmer, W. Winkler, C. C. Wipf, H. Wittel, G. Woan, J. Woehler, J. Wofford, K. W. K. Wong, J. Worden, J. L. Wright, D. S. Wu, D. M. Wysocki, S. Xiao, H. Yamamoto, C. C. Yancey, L. Yang, M. J. Yap, M. Yazback, Hang Yu, Haocun Yu, M. Yvert, A. Zadrożny, M. Zanolin, T. Zelenova, J.-P. Zendri, M. Zevin, L. Zhang, M. Zhang, T. Zhang, Y.-H. Zhang, C. Zhao, M. Zhou, Z. Zhou, S. J. Zhu, X. J. Zhu, M. E. Zucker, J. Zweizig The Advanced LIGO and Advanced Virgo observatories recently discovered gravitational waves from a binary neutron star inspiral. A short gamma-ray burst (GRB) that followed the merger of this binary was also recorded by the Fermi Gamma-ray Burst Monitor (Fermi-GBM) and the Anticoincidence Shield for the Spectrometer for the International Gamma-Ray Astrophysics Laboratory (INTEGRAL), indicating particle acceleration by the source. The precise location of the event was determined by optical detections of emission following the merger. We searched for high-energy neutrinos from the merger in the GeV--EeV energy range using the ANTARES, IceCube, and Pierre Auger Observatories. No neutrinos directionally coincident with the source were detected within $\pm500$ s around the merger time. Additionally, no MeV neutrino burst signal was detected coincident with the merger. We further carried out an extended search in the direction of the source for high-energy neutrinos within the 14-day period following the merger, but found no evidence of emission. We used these results to probe dissipation mechanisms in relativistic outflows driven by the binary neutron star merger. The non-detection is consistent with model predictions of short GRBs observed at a large off-axis angle.
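The searches summarized above are, at their core, temporal coincidence selections around the merger time. The sketch below illustrates the two windows quoted in the abstract, the $\pm500$ s prompt window and the 14-day follow-up, for a hypothetical candidate event time; a real analysis would additionally require directional coincidence with the source localization.

```python
from datetime import datetime, timedelta, timezone

# GW170817 merger time in UTC (2017-08-17 12:41:04).
T_MERGER = datetime(2017, 8, 17, 12, 41, 4, tzinfo=timezone.utc)

def in_prompt_window(t_event, half_width_s=500.0):
    """True if an event time falls inside the +/-500 s prompt window."""
    return abs((t_event - T_MERGER).total_seconds()) <= half_width_s

def in_extended_window(t_event, days=14):
    """True if an event time falls inside the 14-day follow-up window."""
    return T_MERGER <= t_event <= T_MERGER + timedelta(days=days)

# Hypothetical reconstructed neutrino-candidate time:
t_candidate = datetime(2017, 8, 17, 12, 45, 0, tzinfo=timezone.utc)
print(in_prompt_window(t_candidate), in_extended_window(t_candidate))
```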
Muon Counting using Silicon Photomultipliers in the AMIGA detector of the Pierre Auger Observatory (1703.06193) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D.
Veberič, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 4, 2017 physics.ins-det, astro-ph.IM AMIGA (Auger Muons and Infill for the Ground Array) is an upgrade of the Pierre Auger Observatory designed to extend its energy range of detection and to directly measure the muon content of cosmic-ray air showers. The array will be formed by an infill of surface water-Cherenkov detectors associated with buried scintillation counters employed for muon counting. Each counter is composed of three scintillation modules, with a 10 m$^2$ detection area per module. In this paper, a new generation of detectors, replacing the current multi-pixel photomultiplier tube (PMT) with silicon photomultipliers (SiPMs), is proposed. The selection of the new device and the design of its front-end electronics are explained. A method to calibrate the counting system, which ensures the performance of the detector, is detailed; it has the advantage that it can be carried out remotely, at the locations where the detectors are deployed. A counting efficiency of 98% at the highest tested overvoltage, combined with a low probability of accidental counting ($\sim$2%), shows a promising performance for this new system.
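For a sense of scale for the quoted accidental-counting probability, the sketch below computes the Poisson probability of at least one dark count falling inside a counting gate. The dark-count rate and gate length are illustrative assumptions, not AMIGA specifications; they are chosen only to show how a probability of order 2% can arise.

```python
import math

def accidental_probability(dark_rate_hz, gate_ns):
    """Poisson probability of at least one accidental count in a
    counting gate of length gate_ns, given a dark-count rate."""
    mu = dark_rate_hz * gate_ns * 1e-9  # expected counts per gate
    return 1.0 - math.exp(-mu)

# Illustrative numbers only: an effective above-threshold dark-count
# rate of 0.5 MHz with a 40 ns gate gives roughly a 2% accidental
# probability, the order quoted in the abstract above.
print(accidental_probability(dark_rate_hz=0.5e6, gate_ns=40.0))
```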
Spectral Calibration of the Fluorescence Telescopes of the Pierre Auger Observatory (1709.01537) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory.
We used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. At each step, the light emerging from the monochromator had a FWHM of approximately 2 nm. Different sets of telescopes in the observatory have different optical components; the eight measured telescopes comprise two of each of the four combinations of components present in the observatory. We made an end-to-end measurement of the response for the different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report the changes in physics measurables due to the updated calibration, which are generally small. The Pierre Auger Observatory: Contributions to the 35th International Cosmic Ray Conference (ICRC 2017) (1708.06592) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D.
Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 2, 2017 astro-ph.CO, astro-ph.IM, astro-ph.HE Contributions of the Pierre Auger Collaboration to the 35th International Cosmic Ray Conference (ICRC 2017), 12-20 July 2017, Bexco, Busan, Korea. Observation of a Large-scale Anisotropy in the Arrival Directions of Cosmic Rays above $8 \times 10^{18}$ eV (1709.07321) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. 
Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. 
Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Sept. 21, 2017 astro-ph.HE Cosmic rays are atomic nuclei arriving from outer space that reach the highest energies observed in nature. Clues to their origin come from studying the distribution of their arrival directions. Using $3 \times 10^4$ cosmic rays above $8 \times 10^{18}$ electron volts, recorded with the Pierre Auger Observatory from a total exposure of 76,800 square kilometers steradian year, we report an anisotropy in the arrival directions. The anisotropy, detected at more than the 5.2$\sigma$ level of significance, can be described by a dipole with an amplitude of $6.5_{-0.9}^{+1.3}$% towards right ascension $\alpha_{d} = 100 \pm 10$ degrees and declination $\delta_{d} = -24_{-13}^{+12}$ degrees. That direction indicates an extragalactic origin for these ultra-high energy particles.
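A dipole of this kind is commonly quantified with a first-harmonic (Rayleigh) analysis of the right-ascension distribution, as sketched below on a toy isotropic sample. The published measurement goes further, weighting events for the non-uniform exposure and combining the azimuth distribution to reconstruct the full three-dimensional dipole, so this sketch illustrates only the right-ascension component.

```python
import numpy as np

def first_harmonic(ra_deg):
    """First-harmonic (Rayleigh) analysis in right ascension.
    Returns the amplitude r, the phase in degrees, and the chance
    probability of obtaining a larger amplitude from isotropy."""
    ra = np.radians(np.asarray(ra_deg, dtype=float))
    n = ra.size
    a = 2.0 / n * np.cos(ra).sum()
    b = 2.0 / n * np.sin(ra).sum()
    r = np.hypot(a, b)
    phase_deg = np.degrees(np.arctan2(b, a)) % 360.0
    p_chance = np.exp(-n * r * r / 4.0)
    return r, phase_deg, p_chance

# Toy isotropic sample of 30000 right ascensions; a real analysis
# would also convert r into a dipole amplitude using the mean cosine
# of the event declinations.
rng = np.random.default_rng(0)
print(first_harmonic(rng.uniform(0.0, 360.0, 30000)))
```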
Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory (1611.06812) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlín, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H.
Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello June 20, 2017 astro-ph.HE We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to $80^\circ$ and energies in excess of 4 EeV ($4 \times 10^{18}$ eV). This search is conducted by measuring the angular power spectrum and by performing a needlet wavelet analysis in two independent energy ranges. The two analyses are complementary: the angular power spectrum achieves a better performance in identifying large-scale patterns, while the needlet wavelet analysis, with the parameters used in this work, is more efficient at detecting smaller-scale anisotropies and can potentially provide directional information on any observed anisotropy. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication of a dipole moment is found, while no other deviation from isotropy is observed for moments beyond the dipole. The corresponding $p$-values, obtained after accounting for searches blindly performed at several angular scales, are $1.3 \times 10^{-5}$ for the angular power spectrum and $2.5 \times 10^{-3}$ for the needlet analysis. While these results are consistent with previous reports based on the same data set, they extend those works through a thorough scan of the angular scales. Search for photons with energies above 10$^{18}$ eV using the hybrid detector of the Pierre Auger Observatory (1612.01517) April 7, 2017 hep-ex, astro-ph.HE A search for ultra-high energy photons with energies above 1 EeV is performed using nine years of data collected by the Pierre Auger Observatory in hybrid operation mode. An unprecedented separation power between photon and hadron primaries is achieved by combining measurements of the longitudinal air-shower development with the particle content at the ground, measured by the fluorescence and surface detectors, respectively. Only three photon candidates at energies of 1-2 EeV are found, which is compatible with the expected hadron-induced background. Upper limits on the integral flux of ultra-high energy photons of 0.027, 0.009, 0.008, 0.008 and 0.007 km$^{-2}$ sr$^{-1}$ yr$^{-1}$ are derived at 95% C.L. for energy thresholds of 1, 2, 3, 5 and 10 EeV. These limits bound the fractions of photons in the all-particle integral flux below 0.1%, 0.15%, 0.33%, 0.85% and 2.7%. For the first time the photon fraction at EeV energies is constrained at the sub-percent level. The improved limits are below the flux of diffuse photons predicted by some astrophysical scenarios for cosmogenic photon production. The new results rule out the early top-down models, in which ultra-high energy cosmic rays are produced by, e.g., the decay of super-massive particles, and challenge the most recent super-heavy dark matter models.
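To first order, integral flux limits of this kind are an upper bound on the number of signal events divided by the energy-dependent photon exposure. The sketch below reproduces the order of magnitude of the quoted limit above 1 EeV under a hypothetical exposure value; the actual analysis derives the event bound from the observed candidate counts and the expected hadronic background.

```python
def integral_flux_limit(n_signal_up, exposure_km2_sr_yr):
    """Upper limit on an integral photon flux: the upper bound on the
    number of signal events divided by the photon exposure."""
    return n_signal_up / exposure_km2_sr_yr

# Illustrative only: a 95% CL bound of about 3 signal events with a
# hypothetical photon exposure of 110 km^2 sr yr above 1 EeV gives
# ~0.027 km^-2 sr^-1 yr^-1, the order of the quoted limit.
print(integral_flux_limit(n_signal_up=3.0, exposure_km2_sr_yr=110.0))
```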
A targeted search for point sources of EeV photons with the Pierre Auger Observatory (1612.04155) March 21, 2017 hep-ph, astro-ph.HE Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The search finds no evidence for photon emission at the candidate sources; combined $p$-values are reported for each class. Particle and energy flux upper limits are given for selected candidate sources. These limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region. The effect of the atmospheric refractive index on the radio signal of extensive air showers (1701.07338) A. Corstanje, A. Bonardi, S. Buitink, H. Falcke, J.R. Hörandel, P. Mitra, K. Mulrey, A. Nelles, J.P. Rachen, L. Rossetto, P. Schellart, O. Scholten, S. ter Veen, S. Thoudam, G. Trinh, T. Winchen Jan. 25, 2017 astro-ph.IM, astro-ph.HE For the interpretation of measurements of radio emission from extensive air showers, an important systematic uncertainty arises from natural variations of the atmospheric refractive index $n$. At a given altitude, the refractivity $N=10^6\, (n-1)$ can have relative variations on the order of $10 \%$ depending on temperature, humidity, and air pressure. Typical corrections to be applied to $N$ are about $4\%$. Using CoREAS simulations of radio emission from air showers, we have evaluated the effect of varying $N$ on measurements of the depth of shower maximum $X_{\rm max}$. For an observation band of 30 to 80 MHz, a difference of $4 \%$ in refractivity gives rise to a systematic error in the inferred $X_{\rm max}$ between 3.5 and 11 $\mathrm{g/cm^2}$, for proton showers with zenith angles ranging from 15 to 50 degrees. At higher frequencies, from 120 to 250 MHz, the offset ranges from 10 to 22 $\mathrm{g/cm^2}$. These offsets were found to be proportional to the geometric distance to $X_{\rm max}$. We have compared the results to a simple model based on the Cherenkov angle. For the 120 to 250 MHz band, the model is in qualitative agreement with the simulations. In typical circumstances, we find a slight decrease in $X_{\rm max}$ compared to the default refractivity treatment in CoREAS. While this is within commonly treated systematic uncertainties, accounting for it explicitly improves the accuracy of $X_{\rm max}$ measurements. Ultrahigh-energy neutrino follow-up of Gravitational Wave events GW150914 and GW151226 with the Pierre Auger Observatory (1608.07378) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J.
Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. 
Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello On September 14, 2015 the Advanced LIGO detectors observed their first gravitational-wave (GW) transient GW150914. This was followed by a second GW event observed on December 26, 2015. Both events were inferred to have arisen from the merger of black holes in binary systems. Such a system may emit neutrinos if there are magnetic fields and disk debris remaining from the formation of the two black holes. With the surface detector array of the Pierre Auger Observatory we can search for neutrinos with energy above 100 PeV from point-like sources across the sky with equatorial declination from about -65 deg. to +60 deg., and in particular from a fraction of the 90% confidence-level (CL) inferred positions in the sky of GW150914 and GW151226. A targeted search for highly-inclined extensive air showers, produced either by interactions of downward-going neutrinos of all flavors in the atmosphere or by the decays of tau leptons originating from tau-neutrino interactions in the Earth's crust (Earth-skimming neutrinos), yielded no candidates in the Auger data collected within $\pm 500$ s around or 1 day after the coordinated universal time (UTC) of GW150914 and GW151226, as well as in the same search periods relative to the UTC time of the GW candidate event LVT151012. From the non-observation we constrain the amount of energy radiated in ultrahigh-energy neutrinos from such remarkable events. Evidence for a mixed mass composition at the `ankle' in the cosmic-ray spectrum (1609.08567) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. 
del Peral, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollant, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. 
Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, P. Younk, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Nov. 22, 2016 astro-ph.HE We report a first measurement for ultra-high energy cosmic rays of the correlation between the depth of shower maximum and the signal in the water Cherenkov stations of air-showers registered simultaneously by the fluorescence and the surface detectors of the Pierre Auger Observatory. Such a correlation measurement is a unique feature of a hybrid air-shower observatory with sensitivity to both the electromagnetic and muonic components. It allows an accurate determination of the spread of primary masses in the cosmic-ray flux. Up till now, constraints on the spread of primary masses have been dominated by systematic uncertainties. The present correlation measurement is not affected by systematics in the measurement of the depth of shower maximum or the signal in the water Cherenkov stations. The analysis relies on general characteristics of air showers and is thus robust also with respect to uncertainties in hadronic event generators. The observed correlation in the energy range around the `ankle' at $\lg(E/{\rm eV})=18.5-19.0$ differs significantly from expectations for pure primary cosmic-ray compositions. A light composition made up of proton and helium only is equally inconsistent with observations. The data are explained well by a mixed composition including nuclei with mass $A > 4$. Scenarios such as the proton dip model, with almost pure compositions, are thus disfavoured as the sole explanation of the ultrahigh-energy cosmic-ray flux at Earth. Cosmic-ray energy spectrum and composition up to the ankle - the case for a second Galactic component (1605.03111) S. Thoudam, J.P. Rachen, A. van Vliet, A. Achterberg, S. Buitink, H. Falcke, J.R. Hörandel We have carried out a detailed study to understand the observed energy spectrum and composition of cosmic rays with energies up to ~10^18 eV. Our study shows that a single Galactic component with subsequent energy cut-offs in the individual spectra of different elements, optimised to explain the observed spectra below ~10^14 eV and the knee in the all-particle spectrum, cannot explain the observed all-particle spectrum above ~2x10^16 eV. We discuss two approaches for a second component of Galactic cosmic rays -- re-acceleration at a Galactic wind termination shock, and supernova explosions of Wolf-Rayet stars, and show that the latter scenario can explain almost all observed features in the all-particle spectrum and the composition up to ~10^18 eV, when combined with a canonical extra-galactic spectrum expected from strong radio galaxies or a source population with similar cosmological evolution. In this two-component Galactic model, the knee at ~ 3x10^15 eV and the second knee at ~10^17 eV in the all-particle spectrum are due to the cut-offs in the first and second components, respectively. 
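A minimal numerical sketch of this kind of multi-component construction is given below; all normalizations, spectral indices, and cut-off rigidities are illustrative assumptions chosen only to reproduce the qualitative knee / second-knee structure, not the fitted values of the study.

```python
import numpy as np

# Illustrative all-particle spectrum: each component is a power law with a
# charge-dependent exponential cut-off, J_i(E) = k * E**-gamma * exp(-E/(Z*R)).
# Every number below is an assumption for illustration only.

E = np.logspace(13, 19, 200)  # primary energy in eV

def component(E, k, gamma, Z, R_cut):
    """Power law with an exponential cut-off at rigidity R_cut (in volts)."""
    return k * E**(-gamma) * np.exp(-E / (Z * R_cut))

# First Galactic component (regular supernova remnants); its proton cut-off
# produces the knee near ~3e15 eV:
J1 = component(E, 1e10, 2.7, Z=1, R_cut=3e15) + component(E, 4e9, 2.7, Z=2, R_cut=3e15)
# Second Galactic component (e.g. Wolf-Rayet supernovae), He/CNO dominated;
# its helium cut-off produces the second knee near ~1e17 eV:
J2 = component(E, 2e8, 2.6, Z=2, R_cut=5e16) + component(E, 1e8, 2.6, Z=7, R_cut=5e16)
# Extra-galactic component, a simple hard power law for illustration:
J3 = 1e4 * E**(-2.5)

J_total = J1 + J2 + J3  # knee and second knee emerge from the two cut-offs
```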
We also discuss several variations of the extra-galactic component, from a minimal contribution to scenarios with a significant component below the ankle (at ~4x10^18 eV), and find that extra-galactic contributions in excess of regular source evolution are neither indicated nor in conflict with the existing data. Our main result is that the second Galactic component predicts a composition of Galactic cosmic rays at and above the second knee that largely consists of helium or a mixture of helium and CNO nuclei, with a weak or essentially vanishing iron fraction, in contrast to most common assumptions. This prediction is in agreement with new measurements from LOFAR and the Pierre Auger Observatory which indicate a strong light component and a rather low iron fraction between ~10^17 and 10^18 eV. Testing Hadronic Interactions at Ultrahigh Energies with Air Showers Measured by the Pierre Auger Observatory (1610.08509) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. 
Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, I.M. Pepe, L. A. S. Pereira, L. Perrone, E. Petermann, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, F. Strafella, A. Stutz, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, D. Yelos, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 31, 2016 hep-ph, hep-ex, astro-ph.HE Ultrahigh energy cosmic ray air showers probe particle physics at energies beyond the reach of accelerators. Here we introduce a new method to test hadronic interaction models without relying on the absolute energy calibration, and apply it to events with primary energy 6-16 EeV (E_CM = 110-170 TeV), whose longitudinal development and lateral distribution were simultaneously measured by the Pierre Auger Observatory. 
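The logic of such a calibration-free test can be illustrated with a two-parameter rescaling ansatz, sketched below: the simulated ground signal is split into an electromagnetic and a hadronic part, scaled by an energy factor R_E and a hadronic factor R_had that are fitted to the observed signals. The decomposition, the exponent beta, and all numbers are assumptions for illustration rather than the analysis of the paper.

```python
import numpy as np

# Sketch: fit an energy-rescaling factor (R_E) and a hadronic-rescaling factor
# (R_had) so that a simulated ground signal matches an observed one. The split
# into electromagnetic (S_em) and hadronic (S_had) parts and the exponent beta
# are assumptions for illustration.

beta = 0.9  # assumed growth of the hadronic signal with energy

def rescaled_signal(S_em, S_had, R_E, R_had):
    # S(R_E, R_had) = R_E * S_em + R_had * R_E**beta * S_had
    return R_E * S_em + R_had * R_E**beta * S_had

# Toy "observed" signals for a handful of showers (arbitrary units):
S_em, S_had = np.array([30.0, 45.0, 60.0]), np.array([12.0, 18.0, 24.0])
S_obs = np.array([49.0, 73.0, 98.0])

# Brute-force grid search for the best (R_E, R_had):
grid = np.linspace(0.8, 2.0, 121)
best = min(((np.sum((rescaled_signal(S_em, S_had, rE, rH) - S_obs)**2), rE, rH)
            for rE in grid for rH in grid))
print("best R_E = %.2f, best R_had = %.2f" % (best[1], best[2]))
```

In this picture, a best-fit R_had above unity corresponds to the kind of hadronic excess quoted next.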
The average hadronic shower is $1.33 \pm 0.16$ ($1.61 \pm 0.21$) times larger than predicted using the leading LHC-tuned models EPOS-LHC (QGSJetII-04), with a corresponding excess of muons. A comparison of the cosmic-ray energy scales of Tunka-133 and KASCADE-Grande via their radio extensions Tunka-Rex and LOPES (1610.08343) W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, P.A. Bezyazeekov, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, N.M. Budnev, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, O. Fedorov, B. Fuchs, H. Gemmeke, O. A. Gress, C. Grupen, A. Haungs, D. Heck, R. Hiller, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, Y. Kazarina, M. Kleifges, E.E. Korosteleva, D. Kostunin, O. Krömer, J. Kuijpers, L.A. Kuzmichev, K. Link, N. Lubsandorzhiev, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, R.R. Mirgazov, R. Monkhoev, C. Morello, J. Oehlschläger, E.A. Osipova, A. Pakhorukov, N. Palmieri, L. Pankov, T. Pierog, V.V. Prosin, J. Rautenberg, H. Rebel, M. Roth, G.I. Rubtsov, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, R. Wischnewski, J. Wochele, J. Zabierowski, A. Zagorodnikov, J.A. Zensus Oct. 27, 2016 astro-ph.IM, astro-ph.HE The radio technique is a promising method for detection of cosmic-ray air showers of energies around $100\,$PeV and higher with an array of radio antennas. Since the amplitude of the radio signal can be measured absolutely and increases with the shower energy, radio measurements can be used to determine the air-shower energy on an absolute scale. We show that calibrated measurements of radio detectors operated in coincidence with host experiments measuring air showers based on other techniques can be used for comparing the energy scales of these host experiments. Using two approaches, first via direct amplitude measurements, and second via comparison of measurements with air shower simulations, we compare the energy scales of the air-shower experiments Tunka-133 and KASCADE-Grande, using their radio extensions, Tunka-Rex and LOPES, respectively. Due to the consistent amplitude calibration for Tunka-Rex and LOPES achieved by using the same reference source, this comparison reaches an accuracy of approximately $10\,\%$ - limited by some shortcomings of LOPES, which was a prototype experiment for the digital radio technique for air showers. In particular we show that the energy scales of cosmic-ray measurements by the independently calibrated experiments KASCADE-Grande and Tunka-133 are consistent with each other on this level. Search for ultrarelativistic magnetic monopoles with the Pierre Auger Observatory (1609.04451) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, R.J. Barreira Luz, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A.
Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, M. Lauscher, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, G. Müller, M.A. Muller, S. Müller, I. Naranjo, L. Nellen, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C.A. Sarmiento, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, G. 
Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, F. Strafella, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, D. Torres Machado, M. Torri, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 3, 2016 astro-ph.HE We present a search for ultra-relativistic magnetic monopoles with the Pierre Auger Observatory. Such particles, possibly a relic of phase transitions in the early universe, would deposit a large amount of energy along their path through the atmosphere, comparable to that of ultrahigh-energy cosmic rays (UHECRs). The air shower profile of a magnetic monopole can be effectively distinguished by the fluorescence detector from that of standard UHECRs. No candidate was found in the data collected between 2004 and 2012, with an expected background of less than 0.1 event from UHECRs. The corresponding 90% confidence level (C.L.) upper limits on the flux of ultra-relativistic magnetic monopoles range from $10^{-19}$ (cm$^{2}$ sr s)$^{-1}$ for a Lorentz factor $\gamma=10^9$ to $2.5 \times10^{-21}$ (cm$^{2}$ sr s)$^{-1}$ for $\gamma=10^{12}$. These results - the first obtained with a UHECR detector - improve previously published limits by up to an order of magnitude. Prototype muon detectors for the AMIGA component of the Pierre Auger Observatory (1605.01625) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. 
dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. 
Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello May 12, 2016 hep-ex, physics.ins-det AMIGA (Auger Muons and Infill for the Ground Array) is an upgrade of the Pierre Auger Observatory to extend its range of detection and to directly measure the muon content of the particle showers. It consists of an infill of surface water-Cherenkov detectors accompanied by buried scintillator detectors used for muon counting. The main objectives of the AMIGA engineering array, referred to as the Unitary Cell, are to identify and resolve all engineering issues as well as to understand the muon-number counting uncertainties related to the design of the detector. The mechanical design, fabrication and deployment processes of the muon counters of the Unitary Cell are described in this document. These muon counter modules comprise sealed PVC casings containing plastic scintillation bars, wavelength-shifter optical fibers, 64-pixel photomultiplier tubes, and acquisition electronics. The modules are buried approximately 2.25 m below ground level in order to minimize contamination from electromagnetic shower particles. The mechanical setup, which allows access to the electronics for maintenance, is also described in addition to tests of the modules' response and integrity. The completed Unitary Cell has measured a number of air showers, of which a first analysis of a sample event is included here. Azimuthal asymmetry in the risetime of the surface detector signals of the Pierre Auger Observatory (1604.00978) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, M. Ambrosio, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, N. Dhital, C.
Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, M. Lauscher, P. Lautridou, P. Lebrun, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, D. Mockler, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, I.M. Pepe, L. A. S. Pereira, L. Perrone, E. Petermann, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, F. Strafella, A. Stutz, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. 
Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, A. Valbuena-Delgado, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, D. Yelos, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello April 13, 2016 astro-ph.HE The azimuthal asymmetry in the risetime of signals in Auger surface detector stations is a source of information on shower development. The azimuthal asymmetry is due to a combination of the longitudinal evolution of the shower and geometrical effects related to the angles of incidence of the particles into the detectors. The magnitude of the effect depends upon the zenith angle and state of development of the shower and thus provides a novel observable, $(\sec \theta)_\mathrm{max}$, sensitive to the mass composition of cosmic rays above $3 \times 10^{18}$ eV. By comparing measurements with predictions from shower simulations, we find for both of our adopted models of hadronic physics (QGSJETII-04 and EPOS-LHC) an indication that the mean cosmic-ray mass increases slowly with energy, as has been inferred from other studies. However, the mass estimates are dependent on the shower model and on the range of distance from the shower core selected. Thus the method has uncovered further deficiencies in our understanding of shower modelling that must be resolved before the mass composition can be inferred from $(\sec \theta)_\mathrm{max}$. Timing calibration and spectral cleaning of LOFAR time series data (1603.08354) A. Corstanje, S. Buitink, J.E. Enriquez, H. Falcke, J.R. Hörandel, M. Krause, A. Nelles, J.P. Rachen, P. Schellart, O. Scholten, S. ter Veen, S. Thoudam, T.N.G. Trinh March 28, 2016 astro-ph.IM We describe a method for spectral cleaning and timing calibration of short voltage time series data from individual radio interferometer receivers. It makes use of the phase differences in Fast Fourier Transform (FFT) spectra across antenna pairs. For strong, localized terrestrial sources these are stable over time, while being approximately uniform-random for a sum over many sources or for noise. Using only milliseconds-long datasets, the method finds the strongest interfering transmitters, a first-order solution for relative timing calibrations, and faulty data channels. No knowledge of gain response or quiescent noise levels of the receivers is required. With relatively small data volumes, this approach is suitable for use in an online system monitoring setup for interferometric arrays. We have applied the method to our cosmic-ray data collection, a collection of measurements of short pulses from extensive air showers, recorded by the LOFAR radio telescope. Per air shower, we have collected 2 ms of raw time series data for each receiver. The spectral cleaning has a calculated optimal sensitivity corresponding to a power signal-to-noise ratio of 0.08 (or -11 dB) in a spectral window of 25 kHz, for 2 ms of data in 48 antennas. 
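The key step of this phase-based cleaning can be sketched as follows: for every frequency bin, phase differences between an antenna pair are compared across consecutive time blocks, and bins whose phase difference is stable are flagged as transmitters. The block length, threshold, and toy signal below are assumptions for illustration, not the LOFAR pipeline.

```python
import numpy as np

# Sketch: flag narrowband RFI lines from the stability of FFT phase differences
# between an antenna pair. Names, block length, and threshold are assumptions.

def rfi_mask(trace_a, trace_b, block_len=1024, threshold=0.8):
    """Return a boolean mask of frequency bins dominated by a stable transmitter."""
    n_blocks = len(trace_a) // block_len
    phases = []
    for i in range(n_blocks):
        sl = slice(i * block_len, (i + 1) * block_len)
        fa, fb = np.fft.rfft(trace_a[sl]), np.fft.rfft(trace_b[sl])
        phases.append(np.angle(fa) - np.angle(fb))  # phase difference per bin
    phases = np.array(phases)
    # Average the unit phasors over blocks: |mean| ~ 1 for a stable phase
    # difference (transmitter), ~ 1/sqrt(n_blocks) for random phases (noise).
    stability = np.abs(np.mean(np.exp(1j * phases), axis=0))
    return stability > threshold

rng = np.random.default_rng(1)
noise_a, noise_b = rng.normal(size=32768), rng.normal(size=32768)
t = np.arange(32768)
carrier = np.sin(2 * np.pi * 0.0625 * t)  # a "transmitter" line at bin 64
mask = rfi_mask(noise_a + carrier, noise_b + 0.8 * carrier)
print("flagged bins:", np.flatnonzero(mask))
```

Under these assumptions the weakest detectable line shrinks roughly as the square root of the number of averaged blocks, which is the kind of scaling behind the quoted sensitivity.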
This is well sufficient for our application. Timing calibration across individual antenna pairs has been performed at 0.4 ns precision; for calibration of signal clocks across stations of 48 antennas the precision is 0.1 ns. Monitoring differences in timing calibration per antenna pair over the course of the period 2011 to 2015 shows a precision of 0.08 ns, which is useful for monitoring and correcting drifts in signal path synchronizations. A cross-check method for timing calibration is presented, using a pulse transmitter carried by a drone flying over the array. Timing precision is similar, 0.3 ns. Nanosecond-level time synchronization of autonomous radio detector stations for extensive air showers (1512.02216) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Eser, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, A. Lang, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. 
Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello Feb. 15, 2016 hep-ex, physics.ins-det To exploit the full potential of radio measurements of cosmic-ray air showers at MHz frequencies, a detector timing synchronization within 1 ns is needed. Large distributed radio detector arrays such as the Auger Engineering Radio Array (AERA) rely on timing via the Global Positioning System (GPS) for the synchronization of individual detector station clocks. Unfortunately, GPS timing is expected to have an accuracy no better than about 5 ns. In practice, in particular in AERA, the GPS clocks exhibit drifts on the order of tens of ns. 
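A correction of the kind described next, based on a beacon transmitter, amounts to tracking the phase of a fixed-frequency sine wave in each station and converting phase drift into clock drift. A minimal sketch follows; the sampling rate and beacon frequency are illustrative assumptions.

```python
import numpy as np

# Sketch: recover a relative station clock offset from the phase of a beacon
# sine wave. Sampling rate and beacon frequency are illustrative assumptions.

fs = 200e6           # sampling rate (Hz), assumed
f_beacon = 58.887e6  # beacon frequency (Hz), illustrative

def beacon_phase(trace):
    """Phase of the beacon line in a recorded trace, via a DFT at f_beacon."""
    t = np.arange(len(trace)) / fs
    return np.angle(np.sum(trace * np.exp(-2j * np.pi * f_beacon * t)))

def clock_offset(trace_ref, trace_station):
    """Relative clock offset in seconds from the beacon phase difference.
    Note: unambiguous only within one beacon period (~17 ns here)."""
    dphi = beacon_phase(trace_station) - beacon_phase(trace_ref)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return dphi / (2 * np.pi * f_beacon)

t = np.arange(2048) / fs
ref = np.sin(2 * np.pi * f_beacon * t)
drifted = np.sin(2 * np.pi * f_beacon * (t + 5e-9))  # clock running 5 ns ahead
print(f"recovered offset: {clock_offset(ref, drifted) * 1e9:.2f} ns")
```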
We developed a technique to correct for the GPS drifts, and an independent method is used for cross-checks that indeed we reach nanosecond-scale timing accuracy by this correction. First, we operate a "beacon transmitter" which emits defined sine waves detected by AERA antennas recorded within the physics data. The relative phasing of these sine waves can be used to correct for GPS clock drifts. In addition to this, we observe radio pulses emitted by commercial airplanes, the position of which we determine in real time from Automatic Dependent Surveillance Broadcasts intercepted with a software-defined radio. From the known source location and the measured arrival times of the pulses we determine relative timing offsets between radio detector stations. We demonstrate with a combined analysis that the two methods give a consistent timing calibration with an accuracy of 2 ns or better. Consequently, the beacon method alone can be used in the future to continuously determine and correct for GPS clock drifts in each individual event measured by AERA. Search for correlations between the arrival directions of IceCube neutrino events and ultrahigh-energy cosmic rays detected by the Pierre Auger Observatory and the Telescope Array (1511.09408) The IceCube Collaboration: M. G. Aartsen, K. Abraham, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, D. Altmann, T. Anderson, I. Ansseau, M. Archinger, C. Arguelles, T. C. Arlen, J. Auffenberg, X. Bai, S. W. Barwick, V. Baum, R. Bay, J. J. Beatty, J. Becker Tjus, K.-H. Becker, E. Beiser, P. Berghaus, D. Berley, E. Bernardini, A. Bernhard, D. Z. Besson, G. Binder, D. Bindig, M. Bissok, E. Blaufuss, J. Blumenthal, D. J. Boersma, C. Bohm, M. Börner, F. Bos, D. Bose, S. Böser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, N. Buzinsky, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, D. F. Cowen, A. H. Cruz Silva, J. Daughhetee, J. C. Davis, M. Day, J. P. A. M. de André, C. De Clercq, E. del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K. D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J. C. Díaz-Vélez, V. di Lorenzo, J. P. Dumm, M. Dunkman, B. Eberhardt, T. Ehrhardt, B. Eichmann, S. Euler, P. A. Evenson, S. Fahey, A. R. Fazely, J. Feintzeig, J. Felde, K. Filimonov, C. Finley, T. Fischer-Wasels, S. Flis, C.-C. Fösig, T. Fuchs, T. K. Gaisser, R. Gaior, J. Gallagher, L. Gerhardt, K. Ghorbani, D. Gier, L. Gladstone, M. Glagla, T. Glüsenkamp, A. Goldschmidt, G. Golup, J. G. Gonzalez, D. Góra, D. Grant, Z. Griffith, A. Groß, C. Ha, C. Haack, A. Haj Ismail, A. Hallgren, F. Halzen, E. Hansen, B. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G. C. Hill, K. D. Hoffman, R. Hoffmann, K. Holzapfel, A. Homeier, K. Hoshina, F. Huang, M. Huber, W. Huelsnitz, P. O. Hulth, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G. S. Japaridze, M. Jeong, K. Jero, M. Jurkovic, A. Kappes, T. Karg, A. Karle, M. Kauer, A. Keivani, J. L. Kelley, J. Kemp, A. Kheirandish, J. Kiryluk, J. Kläs, S. R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, R. Konietz, L. Köpke, C. Kopper, S. Kopper, D. J. Koskinen, M. Kowalski, K. Krings, G. Kroll, M. Kroll, G. Krückl, J. Kunnen, N. Kurahashi, T. Kuwabara, M. Labare, J. L. Lanfranchi, M. J. Larson, M. Lesiak-Bzdak, M. Leuermann, J. Leuner, L. Lu, J. Lünemann, J. Madsen, G. Maggi, K. B. M. Mahn, M. Mandelartz, R. Maruyama, K. Mase, H. S. Matis, R. Maunu, F. McNally, K. Meagher, M. Medici, A. Meli, T. Menne, G. Merino, T. Meures, S. 
Miarecki, E. Middell, L. Mohrmann, T. Montaruli, R. Morse, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S. C. Nowicki, D. R. Nygren, A. Obertacke Pollmann, A. Olivas, A. Omairat, A. O'Murchadha, T. Palczewski, H. Pandya, D. V. Pankova, L. Paul, J. A. Pepper, C. Pérez de los Heros, C. Pfendner, D. Pieloth, E. Pinat, J. Posselt, P. B. Price, G. T. Przybylski, M. Quinnan, C. Raab, L. Rädel, M. Rameez, K. Rawlins, R. Reimann, M. Relich, E. Resconi, W. Rhode, M. Richman, S. Richter, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, L. Sabbatini, H.-G. Sander, A. Sandrock, J. Sandroos, S. Sarkar, K. Schatto, M. Schimp, T. Schmidt, S. Schoenen, S. Schöneberg, A. Schönwald, L. Schulte, L. Schumacher, D. Seckel, S. Seunarine, D. Soldin, M. Song, G. M. Spiczak, C. Spiering, M. Stahlberg, M. Stamatikos, T. Stanev, A. Stasik, A. Steuer, T. Stezelberger, R. G. Stokstad, A. Stößl, R. Ström, N. L. Strotjohann, G. W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, J. Tatar, S. Ter-Antonyan, A. Terliuk, G. Tešić, S. Tilav, P. A. Toale, M. N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, A. Turcati, E. Unger, M. Usner, S. Vallecorsa, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, J. van Santen, J. Veenkamp, M. Vehring, M. Voge, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, Ch. Weaver, C. Wendt, S. Westerhoff, B. J. Whelan, K. Wiebe, C. H. Wiebusch, L. Wille, D. R. Williams, H. Wissing, M. Wolf, T. R. Wood, K. Woschnagg, D. L. Xu, X. W. Xu, Y. Xu, J. P. Yanez, G. Yodh, S. Yoshida, M. Zoll. The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J.C. Chirinos Diaz, J. Chudoba, R.W. Clay, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, R. Dallier, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. Garcia-Gamez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. 
Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, Q. Hasankiadeh, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, I. Naranjo, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, H. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, J. Peña-Rodriguez, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, F. Strafella, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. 
Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello. The Telescope Array Collaboration: R.U. Abbasi, M. Abe, T. Abu-Zayyad, M. Allen, R. Azuma, E. Barcikowski, J.W. Belz, D.R. Bergman, S.A. Blake, R. Cady, M.J. Chae, B.G. Cheon, J. Chiba, M. Chikawa, W.R. Cho, T. Fujii, M. Fukushima, T. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. Ito, D. Ivanov, C.C.H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H.B. Kim, J.H. Kim, J.H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y.J. Kwon, J. Lan, S.I. Lim, J.P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J.N. Matthews, M. Minamino, Y. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, T. Nonaka, A. Nozato, S. Ogio, J. Ogura, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I.H. Park, M.S. Pshirkov, D.C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, L.M. Scott, P.D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B.K. Shin, H.S. Shin, J.D. Smith, P. Sokolsky, R.W. Springer, B.T. Stokes, S.R. Stratton, T.A. Stroman, T. Suzawa, M. Takamura, M. Takeda, R. Takeishi, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S.B. Thomas, G.B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, T. Wong, R. Yamane, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel. This paper presents the results of different searches for correlations between very high-energy neutrino candidates detected by IceCube and the highest-energy cosmic rays measured by the Pierre Auger Observatory and the Telescope Array. We first consider samples of cascade neutrino events and of high-energy neutrino-induced muon tracks, which provided evidence for a neutrino flux of astrophysical origin, and study their cross-correlation with the ultrahigh-energy cosmic ray (UHECR) samples as a function of angular separation. We also study their possible directional correlations using a likelihood method stacking the neutrino arrival directions and adopting different assumptions on the size of the UHECR magnetic deflections. Finally, we perform another likelihood analysis stacking the UHECR directions and using a sample of through-going muon tracks optimized for neutrino point-source searches with sub-degree angular resolution. No indications of correlations at discovery level are obtained for any of the searches performed. The smallest of the p-values comes from the search for correlation between UHECRs and IceCube high-energy cascades, a result that should continue to be monitored.
CommonCrawl
Differences in global gene expression in muscle tissue of Nellore cattle with divergent meat tenderness Larissa Fernanda Simielli Fonseca1, Daniele Fernanda Jovino Gimenez1, Danielly Beraldo dos Santos Silva1, Roger Barthelson2, Fernando Baldi1, Jesus Aparecido Ferro1 & Lucia Galvão Albuquerque1 Meat tenderness is the consumer's most preferred sensory attribute. This trait is affected by a number of factors, including genotype, age, animal sex, and pre- and post-slaughter management. In view of the high percentage of Zebu genes in the Brazilian cattle population, mainly Nellore cattle, the improvement of meat tenderness is important since the increasing proportion of Zebu genes in the population reduces meat tenderness. However, the measurement of this trait is difficult since it can only be made after animal slaughter. New technologies such as RNA-Seq have been used to increase our understanding of the genetic processes regulating quantitative trait phenotypes. The objective of this study was to identify differentially expressed genes related to meat tenderness in Nellore cattle in order to elucidate the genetic factors associated with meat quality. Samples were collected 24 h postmortem and the meat was not aged. We found 40 differentially expressed genes related to meat tenderness, 17 with known functions. Fourteen genes were up-regulated and 3 were down-regulated in the tender meat group. Genes related to ubiquitin metabolism, transport of molecules such as calcium and oxygen, acid-base balance, collagen production, actin, myosin, and fat were identified. The PCP4L1 (Purkinje cell protein 4 like 1) and BoLA-DQB (major histocompatibility complex, class II, DQ beta) genes were validated by qRT-PCR. The results showed relative expression values similar to those obtained by RNA-Seq, with the same direction of expression (i.e., the two techniques revealed higher expression of PCP4L1 in tender meat samples and of BoLA-DQB in tough meat samples). This study revealed the differential expression of genes and functions in Nellore cattle muscle tissue, which may contain potential biomarkers involved in meat tenderness. Meat quality traits in Brazilian animal breeding programs have not been fully explored because of the late expression of these attributes and the complex evaluation that can only be made after slaughter. Furthermore, on the domestic market, producers are generally not paid for meat quality, a fact that diminishes interest in improving meat quality traits and has hindered their inclusion in traditional selection objectives. In contrast, on international markets, meat tenderness is one of the most valued traits [39], a fact that highlights the importance of improving this trait, since Brazil is one of the world's largest beef exporters. Meat tenderness is the preferred sensory attribute of consumers [7]. According to Scollan et al. [46], the European food industry has sought to improve this trait to gain market share over other types of food. In Brazil, about 80% of the cattle herd consists of Zebu animals or their crossbreeds [1]. In this respect, the improvement of meat tenderness becomes important since Ferguson et al. [17] have shown that the higher the proportion of Zebu genes in a population, mainly Nellore cattle, the less tender the meat. Meat tenderness can only be measured after slaughter, which makes selection for this trait more complex. Thus, alternative tools are useful to include meat tenderness in animal breeding programs [9].
Recently developed large-scale RNA sequencing (RNA-Seq) technologies have been useful in understanding the genetic and physiological processes that regulate the phenotype of quantitative traits [34]. RNA-Seq permits analysis of the transcriptional profiles of cells, tissues or organs in a certain situation and the discovery of known and unknown genes involved in a given cellular process [57]. This new technique can be used to identify novel potential molecular markers that permit more accurate and early genetic predictions [51], with a consequent reduction in the generation interval that would contribute to the improvement of difficult-to-measure traits such as meat tenderness. RNA-Seq has been widely used in recent studies to investigate differentially expressed genes related to meat tenderness in different species. For example, genes related to the degradation of filamins, lipogenesis and collagen synthesis have been identified in a study on meat tenderness in broiler chickens [40]. Gonçalves [20] found genes related to metabolic pathways involved in apoptosis, calcium transport, proteolysis and ribosome synthesis in castrated Nellore cattle, classified as extreme for tenderness based on estimated breeding values for shear force measured after 14 days of aging. Bongiorni et al. [8], who studied gene expression in longissimus dorsi muscle of animals of two Italian beef breeds (Maremmana and Chianina) representing the extremes for meat tenderness, detected differentially expressed genes related to growth and sodium-potassium pumps, among others. Despite the above-mentioned publications, studies investigating differentially expressed genes related to meat tenderness in cattle are rare. In this respect, a better understanding and identification of the transcripts and biological processes associated with this complex and economically important trait will make it possible to highlight genes that could contain potential biomarkers involved in meat tenderness. The objective of this study was to identify genes differentially expressed in muscle tissue (longissimus dorsi) of Nellore cattle with divergent meat tenderness using RNA-Seq in order to obtain data that increase our understanding of the genetic and metabolic mechanisms underlying this trait. RNA sequencing, alignment, and assembly of the transcripts The TopHat2 program identified a total of 942 million reads (2 × 100 bp) and the sequencing coverage was 63X (coverage for all transcripts of all samples). An average of almost 24 million reads was obtained per sample and 88.3% of the reads were mapped. For the tender meat group, an average of 24,928,506 reads (89%) were mapped, while for the tough meat group, an average of 22,170,021 reads (89%) were mapped (Additional file 1: Table S1). We found transcripts for 28,059 genes and 103,309 potential new isoforms. To evaluate the quality of sequencing, the expression profiles of the Glucuronidase Beta (GUSB), erythrocyte hydroxymethylbilane synthase (HMBS), Hypoxanthine Phosphoribosyltransferase 1 (HPRT1), phosphoglycerate kinase 1 (PGK1) and TATA-Box Binding Protein (TBP) genes were analyzed; these genes exhibited a similar expression profile in the tender and tough meat groups (Additional file 2: Figure S1). A box plot (Additional file 3: Figure S2) containing the transformed FPKM values (log10) for each group and the plot of the principal component analysis (PCA) (Additional file 4: Figure S3) were constructed using the cummeRbund package.
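The paper does not reproduce its plotting code, but the steps described here are standard cummeRbund operations. The following R sketch illustrates them; the Cuffdiff output directory name, the group labels and the sign convention of the fold change are assumptions for illustration, not taken from the study.

library(cummeRbund)   # Bioconductor package for exploring Cufflinks/Cuffdiff output

cuff <- readCufflinks(dir = "cuffdiff_out")   # hypothetical Cuffdiff output folder

csBoxplot(genes(cuff))                        # log10(FPKM) per group (cf. Additional file 3: Figure S2)
PCAplot(genes(cuff), x = "PC1", y = "PC2")    # sample clustering (cf. Additional file 4: Figure S3)

de   <- diffData(genes(cuff))                 # per-gene test results from Cuffdiff
sig  <- subset(de, q_value < 0.05)            # differentially expressed genes (q < 0.05)
up   <- subset(sig, log2_fold_change > 0)     # direction of regulation depends on which
down <- subset(sig, log2_fold_change < 0)     # group Cuffdiff treated as the reference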
As can be seen in the box plot, the distribution of quartiles was consistent between groups, indicating high quality of the data. In addition, the medians were similar in the two groups and close to −1, indicating that the level of sequencing coverage permitted the identification of low-expressed genes [11, 51]. PCA showed the formation of different groups (tender and tough meat), indicating differences in the expression of genes between the tender and tough meat groups. Analysis of differentially expressed genes Analysis of differential expression in the tender and tough meat groups revealed 40 differentially expressed genes (q-value <0.05) (Table 1). Seventeen of these genes have a known function. The sign of the log2 fold change was used to partition the DE genes into up- and down-regulated groups. In this analysis, 35 genes were found to be up-regulated and 5 were down-regulated in relation to the tough meat group. Among the genes with known function, 14 were up-regulated and 3 were down-regulated. Table 1 Differentially expressed genes detected in the samples divergent for meat tenderness Combined functional annotation using all differentially expressed (up- and down-regulated) genes for meat tenderness was performed with the DAVID v6.7 database using Bos taurus as a reference. This analysis permitted the identification of seven functional groups (annotation clusters; Additional file 5: Table S2). These genes were classified according to their function: cell fraction (GO:0000267), cell junction (GO:0030054), intrinsic component of membrane (GO:0031224), regulation of cell communication (GO:0010646), catalytic activity (GO:0003824), organelles (GO:0043226), and binding (GO:0005488), among others. Using the ClueGO plug-in, the differentially expressed transcripts HMOX1, AT2, CLDN19, CLEC4G, CLEC12A, PNP and SYP were found to be inter-related through biological processes (cell communication, regulation of response to stimuli), molecular function (binding proteins), or cell components (integral membrane component) (Fig. 1). The HMOX1 and PNP genes were expressed more in the tough meat group, while the other five genes were expressed more in the tender meat group. The proteins encoded by these transcripts are involved in the transport of molecules such as sodium, potassium, calcium, and oxygen [15, 29, 36, 43]. Enrichment analysis of the HMOX1, CLDN19, CLEC4G, CLEC12A, PNP and SYP genes using the ClueGO plug-in of the Cytoscape program. Note the interrelationships between these genes, which are related to the transport of molecules Using the same programs, the DMGDH gene (dimethylglycine dehydrogenase) was identified as a member of the "glycine, serine and threonine metabolism" pathway (Fig. 2). Glycine makes up about one-third of the helical polypeptide chains of collagen [30]. On the other hand, according to Bailey [2], collagen is degraded by serine proteases, with serine also being part of the glycine metabolic pathway, and by cysteine proteases, whose metabolic pathway ("cysteine and methionine metabolism") is associated with the DMGDH pathway. In the present study, the transcript of this gene was expressed more in tender meat. Enrichment analysis of the DMGDH gene with the ClueGO plug-in. The yellow circles highlight the biological processes and the serine and glycine metabolic pathways in which this gene is involved Figure 3 illustrates the interrelationships between the TCF7L1, EXOSC2, DMGDH and ASAH1 transcripts obtained by enrichment analysis.
This analysis shows that the main link between these genes is the cell component called "intracellular membrane-bound organelle". This category refers to structures found inside the cell such as the nucleus and mitochondria [10]. Gene expression analysis in Angus cattle also showed a relationship between meat tenderness and this cell component category [59]. The genes identified in this study are related to actin-myosin assembly, collagen synthesis, lipid accumulation, and the serine and glycine metabolic pathways [2, 22, 30, 38]. Enrichment analysis of the TCF7L1, EXOSC2, DMGDH and ASAH1 genes with the ClueGO plug-in Validation of differentially expressed genes The relative expression values (log2) of the transcripts were similar for the two techniques used, RNA-Seq and qRT-PCR, with values of 2.12 and 2.03 (standard deviation = 0.89) for PCP4L1 and of −0.84 and −0.644 (standard deviation = 0.44) for BoLA-DQB, respectively (Fig. 4). Similar to the RNA-Seq analysis, higher expression of the PCP4L1 and BoLA-DQB genes was observed in the tender and tough meat groups, respectively. Thus, these transcripts showed similar patterns of mRNA abundance in the RNA-Seq and qRT-PCR analyses, with the same direction of expression (i.e., up-regulated and down-regulated, respectively, in relation to the tender meat group). Comparison of the relative expression values of two differentially expressed transcripts obtained by RNA-Seq and qRT-PCR A higher proportion of Zebu genes in cattle herds considerably reduces meat tenderness when compared to taurine breeds. In Brazil, the herd consists mostly of Zebu cattle, mainly Nellore, so improving meat tenderness is very important: for the beef export market, in which Brazil plays an important role, tenderness is paramount in determining the value of the product. Gene expression studies have been used as a tool to identify candidate genes and metabolic pathways related to traits of economic interest. In the present study, the USP32 (ubiquitin specific peptidase 32) transcript was expressed more in tender meat. Members of the ubiquitin-proteasome system are important during the transformation of muscle into meat. These proteins are involved in proteolysis, causing the degradation of myofibrillar proteins in muscle cells [47]. In a genome-wide association study (GWAS) of Nellore cattle using different meat tenderness measures, Tizioto et al. [52] identified genes of the USP family, including USP32. Another study on cattle also associated genes of the USP family with meat tenderness. In Wagyu cattle, the USP2 gene was strongly associated with meat tenderness [12], and gene expression analysis in Nellore cattle showed that the USP2 gene was expressed more in tender meat samples [20]. The functional categories cell junction, regulation of cell communication and intrinsic component of membrane are related to the binding, communication and transport of molecules between cells [10]. Among the transcripts related to these categories, CTNNB1 (catenin - cadherin-associated protein beta 1), which was expressed more in tender meat, is involved in the same metabolic pathway as actin and myosin. Actin and myosin are the proteins found in thin and thick myofilaments, respectively, which form the myofibril that is responsible for muscle contraction. These proteins are the most abundant in the mechanism of muscle contraction, accounting for 52 to 56% of all muscle proteins [48].
Each actin filament binds to the plasma membrane of the cell through a structure called the focal contact. This structure consists of binding proteins and a transmembrane protein that are products of the "focal adhesion" pathway, to which the CTNNB1 and TCF7L1 (transcription factor 7 like 1) genes belong. On the outer side of the cell, in the extracellular matrix, the transmembrane protein binds to a collagen fiber [14, 23]. According to Bailey [2], a direct association exists between collagen content and the toughening of meat. However, in the present study, the CTNNB1 and TCF7L1 transcripts were expressed more in tender meat. The SYP (synaptophysin) transcript, which was expressed more in tender meat, encodes an integral membrane protein found in small synaptic vesicles. In a study on rats, Rubenstein et al. [44] showed that the phosphorylation of synaptophysin is calcium dependent. The authors observed a four-fold increase in serine phosphorylation of synaptophysin in the presence of the calmodulin-calcium complex. According to Bailey et al. [2], serine proteases are responsible for the degradation of collagen, which, in turn, directly influences meat tenderness. In addition, calcium is essential for muscle contraction by acting as a catalyst of enzymatic proteolytic activity, which is directly related to the process of meat tenderization [37]. The AT2 transcript, which encodes angiotensin II, was expressed more in tender meat. This protein is involved in vasoconstriction and regulates the secretion of aldosterone, which, in turn, stimulates the reabsorption of sodium by the kidneys. In this respect, after slaughter and during bleeding, angiotensin is activated to restore blood pressure. The result of these stimuli is the depolarization of the cell membrane, altering the distribution of sodium and potassium, in addition to permitting the flow of calcium ions [43]. In a study on crossbred cattle (Luxi-Simmental), Zhong-Liang et al. [60] observed a decline in shear force after the injection of angiotensin II into the carcass for 7 days after slaughter. Bongiorni et al. [8], studying gene expression in longissimus dorsi muscle of the Italian Maremmana and Chianina breeds, also found the differential expression of genes to be related to sodium and potassium flow. The functional category "catalytic activity" is related to increases in the velocity of a biochemical reaction at physiological temperatures [10]. Some reactions that occur during the postmortem period depend on calcium and cellular pH, which decrease in the first 24 h after slaughter [25]. A member of this functional category is ASAH1 (N-acylsphingosine amidohydrolase (acid ceramidase) 1), which belongs to a family of hydrolases that catalyze the synthesis and degradation of ceramide into sphingolipid and free fatty acid and are acid pH dependent [32]. A genetic deficiency in ASAH1 that reduces its catalytic activity causes a lysosomal sphingolipid storage disorder characterized by the accumulation of lipids in cells and tissues throughout the organism [38]. ASAH1 also belongs to the "sphingolipid signaling pathway" and "sphingolipid metabolism" categories, in which serine is also involved, with serine proteases degrading collagen [2]. Thus, ASAH1, which was expressed more in tender meat, may be related to the process of meat tenderization. Another member of the "catalytic activity" category is HMOX1 (heme oxygenase 1), which was expressed more in tough meat.
This gene encodes a protein involved in the metabolism of porphyrins, molecules whose catalytic activity is activated by iron [35]. Porphyrins are precursors of hemes, the main components of hemoglobin, myoglobin and cytochromes, which are responsible for the transport of oxygen and electrons in tissues [36]. The C-type lectin (CLEC) family comprises calcium-dependent carbohydrate-binding protein domains that are involved in cell-cell adhesion [15]. In the present study, the CLEC4G and CLEC12A transcripts were expressed more in tender meat. A GWAS in Nellore cattle demonstrated an association of the CLEC12A gene with different meat tenderness measures [52]. The IQCG transcript (IQ motif containing G), which was expressed more in tender meat, encodes a protein that functions as a binding site for different proteins, including myosin light chains and calmodulins. Calmodulin phosphorylates myosin, a process that permits the sliding of fibers and muscle contraction. In this case, calcium present in the reaction binds to calmodulin, attached to the IQ motif, and stimulates the ATPase activity of myosin [42]. According to Duston [16], in addition to factors such as collagen content, the structure and state of contraction of myofibrils (which mainly consist of myosin and actin) directly affect meat tenderness. The protein encoded by the PNP transcript (purine nucleoside phosphorylase), which was expressed more in tough meat, plays a role in nicotinate and nicotinamide metabolism. Nicotinate (niacin or vitamin B3) is a precursor of the NAD+ and NADP+ coenzymes, which are essential for the production of ATP in the cell [28]. Numerous structural changes and biochemical events occur in the first 24 h after slaughter of the animal, which are responsible for the conversion of muscle into meat [25]. In the early postmortem stages, ATP levels are maintained constant by the conversion of ADP plus phosphocreatine into ATP, and the oxygen supply ceases because of the cessation of blood circulation. At this stage, slow production of lactate is observed and the onset of rigor mortis occurs (slow phase). The decrease in phosphocreatine levels characterizes the rapid phase, which consists of a rapid decline in available ATP, which is used as an energy reserve after the consumption of glycogen and other carbohydrates and is therefore hydrolyzed again to ADP. The scarcity of ATP during this phase is accompanied by the release of calcium ions into the myofibrillar space, which causes muscle shortening with a direct influence on meat tenderness [5]. Another event that occurs during this phase is the anaerobic conversion of glycogen into glucose, producing lactate and reducing the pH of the medium. In addition, the transport of sodium and potassium across the cell membrane, which uses the energy released by the hydrolysis of ATP into ADP, is impaired because it occurs against the concentration gradient. The protons generated during the hydrolysis of ATP into ADP cause a significant decline in intracellular pH [3]. According to Goll et al. [13], this drop in pH directly influences the final tenderness of meat, especially during the process of aging. According to Koohmaraie [26], calcium is responsible for the activation of calpains and calpastatins (calcium-dependent cysteine proteases), and calpain I has been shown to be the main enzyme responsible for postmortem tenderization of meat by degrading cytoskeletal proteins that confer the structural integrity of the myofibrillar matrix.
Nevertheless, in the present study, the calpain and calpastatin genes were not differentially expressed between the tender and tough meat groups. This finding might be explained by the fact that the amount of calpastatin in cells is higher 24 h after slaughter [43], whereas in this study the samples were collected immediately after cleaning the carcasses. Other GWAS and gene expression studies of muscle tissue in Nellore cattle also found no relationship between meat tenderness and calpain or calpastatin [20, 52]. The EXOSC2 transcript, which encodes exosome component 2, was expressed more in tender meat. According to Jong et al. [22], this gene is related to collagen activity in humans. This finding could indicate a relationship between this gene and collagen activity in bovines, since there is a direct association between collagen content and the toughening of meat [2]. The ZKSCAN2 transcript (zinc finger with KRAB and SCAN domains 2), which was expressed more in tender meat, is vertebrate specific and encodes zinc finger proteins that dimerize through an N-terminal SCAN domain (a dimerization motif). The function of this gene is not well known, but zinc finger proteins have been associated with the regulation of growth factor transcription and lipid metabolism [45]. In cattle, the major histocompatibility complex class II gene is called BoLA-DQB (bovine leukocyte antigen) [24]. In the present study, the BoLA-DQB transcript was expressed more in tough meat. We found no studies investigating the association of this gene with meat tenderness. However, this gene has been associated with growth traits in Holstein and beef cattle (Angus, Charolais, Hereford, Limousin, Simmental) [4, 49], and, according to Koohmaraie et al. [27], animals with higher growth rates have more palatable and more tender meat. When we compared this study with a GWAS study for meat tenderness using the same Nellore population, we did not find common genes, but there were some shared functions related to phosphorylation and catalytic activity [33]. These functions are related to oxygen and calcium transport and collagen degradation, important processes for the toughening of meat, especially after slaughter. In a GWAS study using another Nellore cattle population, Tizioto et al. [52] identified regions that influence tenderness at three different time points (24 h and 7 and 14 days after slaughter). Some of the genes reported by these authors were also identified in the present study, such as CLDN19, CLEC12A and USP32. In addition to these genes, the authors reported an association between genes belonging to the families of the BoLA-DQB, CTNNB1, EXOSC2 and IQCG transcripts and meat tenderness. Global gene expression analysis in animals phenotypically divergent for meat tenderness identified genes related to ubiquitin metabolism, transport of molecules such as calcium and oxygen, acid-base balance, collagen synthesis, actin and myosin, and fat accumulation. These results contribute to the understanding of the molecular mechanisms involved in the meat tenderization process at the time of slaughter and to the development of strategies to select animals with more tender meat. Animals and sample collection Meat samples were collected from 132 intact male (non-castrated) Nellore animals belonging to the same contemporary group (i.e., animals that remained together from birth to slaughter). The animals were from the Capivara Farm, which participates in the Qualitas Nelore Breeding Program.
All animals were finished in feedlots for approximately 90 days and slaughtered at an average age of 731 ± 81 days on the same day and under the same conditions. The slaughter occurred in a commercial plant, under the usual process of the Brazilian beef industry: the animals are slaughtered and the half-carcasses are refrigerated for 24 h. After that, the carcasses are deboned, frozen and commercialized. All samples were frozen and none of them was aged. For RNA, muscle tissue (longissimus dorsi) samples were collected immediately after slaughter and stored in 15-mL Falcon tubes containing 5 mL RNA holder (BioAgency, São Paulo, SP, Brazil) at −80 °C until the time of total RNA extraction. Additionally, for shear force measurements, a longissimus muscle sample was removed during deboning, after 24 h in a cold chamber, between the 12th and 13th rib of each left half-carcass. Transcriptome studies capture the genes expressed in a specific cell at a specific time, i.e., they show which genes are being expressed at the moment of sample collection. We therefore chose to study gene expression related to tenderness using the phenotype measured closest to the time of sample collection for RNA extraction, that is, at 24 h postmortem. Analysis of shear force Longissimus dorsi samples measuring 2.54 cm in thickness were obtained for analysis of tenderness. The standardized procedure proposed by Wheeler et al. [58] was used for shear force determination in a mechanical Salter Warner-Bratzler Shear Force device. The samples analyzed were not submitted to any type of aging process. From this analysis (n = 132), 40 samples derived from animals extreme for meat tenderness (20 with tender meat and 20 with tough meat) were selected. The Student t-test implemented in the R environment [41] was applied to verify differences between the tender and tough meat groups (Table 2). Table 2 Number of animals (N), mean, standard error, minimum and maximum of meat tenderness measured by shear force (kgf/cm2) Total RNA was extracted from the samples obtained from the extreme animals selected (n = 40). Muscle tissue (longissimus dorsi) samples that had been collected immediately after slaughter and stored in 15-mL Falcon tubes containing 5 mL RNA holder (BioAgency, São Paulo, SP, Brazil) at −80 °C were used to extract total RNA. An average of 50 mg of the muscle tissue previously stored in RNA holder (BioAgency, São Paulo, SP, Brazil) was used for extraction with the RNeasy Lipid Tissue Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's recommendations. The purity of the extracted RNA was determined by reading absorbance in a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific, Santa Clara, CA, USA, 2007). The quality of the total RNA extracted was evaluated in an Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA, 2009), and its concentration and contamination with genomic DNA were measured in a Qubit® 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA, 2010). Sequencing (RNA-Seq) was performed on an Illumina HiSeq 2500 System. Messenger RNA was obtained from the total RNA extracted, and libraries containing 200 bp fragments were constructed and pooled for multiplexed sequencing. The reads obtained were 2 × 100 bp paired-end reads. Sequence processing and alignment The sequence data generated with the Illumina HiSeq 2500 System were converted into FastQ format using the Casava software (https://support.illumina.com/downloads/casava_18_changes.html).
The computational analyses were performed on the CyVerse platform [19]. First, sequenced fragments (reads) of low quality were trimmed using the Sickle program (github.com/najoshi/sickle). The TopHat2 v2.0.9 program [54] was then used to map the fragments and to align them with the bovine reference genome (UMD3.1) available in the NCBI database (http://www.ncbi.nlm.nih.gov/genome/?term=bos+taurus). For each library, a .bam file containing the reads aligned to the reference genome was generated. Assembly and quantification of the transcripts The Cufflinks2 v2.1.1 program [55] was used to assemble the aligned reads of each sample and to estimate the number of transcripts, expressed as fragments per kilobase of transcript per million reads mapped (FPKM). The Cufflinks2 results per sample were merged into a single file with the Cuffmerge2 v2.1.1 program, which was used as a reference in the differential gene expression analysis. Differential gene expression analysis Using the Cuffdiff2 v2.1.1 program [53, 55], the sequence alignment files generated (.bam) were divided into two contrasting groups according to meat tenderness. The FPKM values of each transcript were calculated for each sample. The Cuffdiff2 program uses a t-test for the calculation of p-values. False discovery rates (FDR) were controlled by the Benjamini-Hochberg procedure considering a q-value of less than 5%. The cummeRbund package [55], implemented in the R environment [41], was used for exploration and visualization of the data obtained and to generate the PCA and box plot graphics. Annotation of differentially expressed genes The Database for Annotation, Visualization, and Integrated Discovery (DAVID) v6.7, which consists of an integrated system of biological databases and analytical tools designed to systematically extract the biological meaning from a large list of genes and/or proteins [21], was used to annotate and interpret the lists of differentially expressed genes. The Functional Annotation Tool, which determines the most relevant Gene Ontology (GO) terms for each list of genes, was used for this purpose. The Functional Annotation Clustering algorithm was applied to generate annotations of functional groups. DAVID pathway mapping was used to identify metabolic pathways in which the differentially expressed genes are involved. The ClueGO plug-in of the Cytoscape program was used to visualize non-redundant biological terms for genes in functionally grouped networks [6]. Real-time quantitative PCR (qRT-PCR) was used to validate the differential expression of the genes identified by RNA-Seq analysis. All 40 RNA samples used in the RNA-Seq analyses were used to validate the data by qRT-PCR. Two differentially expressed genes were chosen randomly for this purpose: bovine leukocyte antigen (BoLA-DQB) and Purkinje cell protein 4-like 1 (PCP4L1). In addition to these genes, three reference genes were chosen and quantified by qRT-PCR, as proposed by Vandesompele et al. [56], to normalize the data. The RNA-Seq technique detected no differences in the expression of the beta-glucuronidase (GUSB), hypoxanthine phosphoribosyltransferase 1 (HPRT1) and TATA box binding protein (TBP) genes between the groups studied, and these genes were therefore chosen as housekeeping genes and were tested by qRT-PCR. The method (conditions and equipment) described by Fonseca et al.
[18] was used for validation of the differentially expressed genes by qRT-PCR: one μg of total RNA was used to synthesize the first complementary DNA (cDNA) strand using the SuperScript III First-Strand Synthesis SuperMix for qRT-PCR (Invitrogen). To design the primers (Table 3), the Primer Express 3.0 software (Applied Biosystems, 2004) was used and the GenBank database (http://www.ncbi.nlm.nih.gov) was accessed to obtain the mRNA nucleotide sequences. Primer specificity was tested with the NCBI BLAST algorithm (https://blast.ncbi.nlm.nih.gov/Blast.cgi). The geNorm (https://genorm.cmgg.be/) and Expression Suite v1.0 (Applied Biosystems, Foster, CA, USA, 2012) software packages were used to test the expression stability of the housekeeping genes. Table 3 Sequence of the forward (F) and reverse (R) primers used in the qRT-PCR assays All qRT-PCR reactions were run on a 7500 Real-Time PCR System (Applied Biosystems, 2009). For these reactions we used 0.1 μg cDNA, 1X SYBR Green Master Mix, and forward and reverse primers. Primer concentrations were determined by titration: 600 nM forward and reverse primers (600/600) for BoLA-DQB and GUSB; 300 nM forward and reverse primers (300/300) for HPRT1; and 100 nM forward and reverse primers (100/100) for PCP4L1 and TBP. The analyses were performed in triplicate. For each gene (target and housekeeping), we included a negative and a positive control in every reaction. Serial dilutions of cDNA (1:5) were used to build a standard curve and to calculate the qRT-PCR efficiency for each gene. Only PCR primers showing an efficiency of 90–110% were used [31]. The amplification conditions were: 40 cycles at 50 °C for 2 min, 95 °C for 10 min, and 60 °C for 1 min. Dissociation analyses were performed to monitor reaction specificity. For the housekeeping genes, the geometric means of Ct values were calculated [56]. For the analysis of relative expression, a mixed linear model was fitted [50]: $$ Y_{gikr} = T_{ig} + D_{ik} + e_{gikr} $$ where $Y_{gikr}$ is the Ct obtained from the thermocycler software for gene $g$, in the $r$th well of the plate (referring to the technical replicate), in a sample obtained from animal $k$ of treatment $i$ (low or high meat tenderness group); $T_{ig}$ is the effect of animal group $i$ (low or high meat tenderness group) on the expression of gene $g$; $D_{ik}$ is a random sample-specific effect which captures differences between samples that are shared by genes, particularly those affecting RNA concentration, such as different extraction and amplification efficiencies; and $e_{gikr}$ is a residual effect.
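The efficiency screening and the mixed model described above can be reproduced along the following lines in R. This is a minimal sketch only: the data frames and column names (curve_data, ct, amount, gene, group, animal, Ct) are hypothetical, and the article does not state which package was used to fit the model of Steibel et al. [50]; lme4 is one common choice.

# qRT-PCR amplification efficiency from the serial-dilution standard curve:
# E = 10^(-1/slope) - 1, with the slope taken from Ct ~ log10(input amount)
std <- lm(Ct ~ log10(amount), data = curve_data)  # one dilution series per gene
eff <- 10^(-1 / coef(std)[2]) - 1                 # keep primers with 90-110%

# Mixed linear model Y_gikr = T_ig + D_ik + e_gikr in lme4 syntax:
library(lme4)
fit <- lmer(Ct ~ 0 + gene:group + (1 | animal), data = ct)

# Tenderness contrast for one gene; a lower Ct means higher expression, so a
# negative tender-minus-tough difference indicates up-regulation in tender meat
fx <- fixef(fit)
fx["genePCP4L1:grouptender"] - fx["genePCP4L1:grouptough"]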
Abbreviations: ADP: Adenosine Diphosphate; ASAH1: N-Acylsphingosine Amidohydrolase (Acid Ceramidase) 1; AT2: Angiotensin II Receptor Type 2; ATP: Adenosine Triphosphate; BoLA-DQB: Major Histocompatibility Complex, Class II, DQ Beta; cDNA: Complementary DNA; CLDN19: Claudin 19; CLEC: C-Type Lectin Family; CLEC12A: C-Type Lectin Domain Family 12 Member A; CLEC4G: C-Type Lectin Domain Family 4 Member G; Ct: Threshold Cycle; CTNNB1: Catenin - Cadherin-Associated Protein Beta 1; DMGDH: Dimethylglycine Dehydrogenase; EXOSC2: Exosome Component 2; FDR: False Discovery Rate; FPKM: Fragments Per Kilobase Of Transcript Per Million Reads Mapped; GUSB: Glucuronidase Beta; GWAS: Genome-Wide Association Study; HMBS: Erythrocyte Hydroxymethylbilane Synthase; HMOX1: Heme Oxygenase 1; HPRT1: Hypoxanthine Phosphoribosyltransferase 1; IQCG: IQ Motif Containing G; NAD+: Nicotinamide Adenine Dinucleotide; NADP+: Nicotinamide Adenine Dinucleotide Phosphate; nM: nanoMolar; PCA: Principal Component Analysis; PCP4L1: Purkinje Cell Protein 4 Like 1; PGK1: Phosphoglycerate Kinase 1; PNP: Purine Nucleoside Phosphorylase; qRT-PCR: Quantitative Real-Time Polymerase Chain Reaction; RNA-Seq: RNA Sequencing; SYP: Synaptophysin; TBP: TATA-Box Binding Protein; TCF7L1: Transcription Factor 7 Like 1; USP: Ubiquitin Specific Peptidase Family; USP2: Ubiquitin Specific Peptidase 2; USP32: Ubiquitin Specific Peptidase 32; ZKSCAN2: Zinc Finger With KRAB And SCAN Domains 2. References ABIEC, Associação Brasileira das Indústrias Exportadoras de Carne, 2016. http://www.abiec.com.br/. Accessed 8 Feb 2016. Bailey AJ, Paul RG, Knott L. Mechanisms of maturation and aging of collagen. Mech Ageing Dev. 1998;106:1–56. Bate-Smith EC, Bendall JR. Factors determining the time course of rigor mortis. J Physiol. 1949;110:47–65. Batra TR, Lee AJ, Gavora JS, Stear MJ. Class I alleles of the bovine major histocompatibility system and their association with economic traits. J Dairy Sci. 1989;72:2115–2124. Bendall JR. Postmortem changes in muscle. In: Bourne GH, editor. The structure and function of muscle, vol. 2. New York: Academic Press; 1973. p. 244–309. Bindea G, Mlecnik B, Hackl H, Charoentong P, Tosolini M. ClueGO: a Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks. Bioinformatics. 2009;25(8):1091–3. Boleman SJ, Boleman SL, Miller RK, Taylor JF, Cross HR, Wheeler TL, Koohmaraie M, Shackelford SD, Miller MF, West RL, Johnson DD, Savell JW. Consumer evaluation of beef of known categories of tenderness. J Anim Sci. 1997;75:1521–4. Bongiorni S, Gruber CEM, Bueno S, Chillemi G, Ferr F, Failla S, Moioli B, Valentini A. Transcriptomic investigation of meat tenderness in two Italian cattle breeds. Anim Genet. 2016. doi:10.1111/age.12418. Campo MM, Sañudo C, Panea B, Alberti P, Santolaria P. Breed type and aging time effects on sensory characteristics of beef strip loin steaks. Meat Sci. 1999;51:383–91. Carbon S, Ireland A, Mungall CJ, Shu S, Marshall B, Lewis S. AmiGO: online access to ontology and annotation data. Bioinformatics. 2009;25:288–9. Chapple RH, Tizioto PC, Wells KD. Characterization of the rat developmental liver transcriptome. Physiol Genomics. 2013;45:301–11. CRC: Cooperative Research Centre for Beef Genetic Technologies. Annual Report of the CRC for Beef Genetic Technologies. High quality beef for global consumers. Armidale, Australia, 2008. Goll DE, Thompson VF, Li H. The calpain system. Physiol Rev. 2003;83:731–801. De Robertis E. Bases da Biologia Celular e Molecular. Rio de Janeiro: Editora Guanabara Koogan; 2010. Drickamer K. C-Type lectin-like domains. Curr Opin Struct Biol. 1999;5:585–90. Duston TR, Hostetler RL, Carpenter ZL.
Effect of collagen levels and sarcomere shortening on muscle tenderness. J Food Sci. 1976;41:863–6. Ferguson DM, et al. Effect of electrical stimulation on protease activity and tenderness of M. longissimus from cattle with different proportions of Bos indicus content. Meat Sci. 2000;55:265–72. Fonseca LFS, Gimenez DF, Mercadante ME, Bonilha SF, Ferro JA, Baldi F, Souza FRP, Albuquerque LG. Expression of genes related to mitochondrial function in Nellore cattle divergently ranked on residual feed intake. Mol Biol Rep. 2015;42:559–65. Goff SA, Vaughn M, Mckay S. The iPlant collaborative: cyberinfrastructure for plant biology. Front Plant Sci. 2011;2:34. doi:10.3389/fpls.2011.00034. Gonçalves TM. Differential expression of genes related with meat tenderness in Nellore cattle [dissertation]. ESALQ, USP; 2015. p. 97. http://www.teses.usp.br/teses/disponiveis/11/11139/tde-12052015-165345/pt-br.php. Accessed 16 Mar 2016. Huang W, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009;4:44–57. Jong OG, Balkom BWM, Gremmels H, Verhaar MC. Exosomes from hypoxic endothelial cells have increased collagen crosslinking activity through up-regulation of lysyl oxidase-like 2. J Cell Mol Med. 2015;XX:1–9. Junqueira LCU, Carneiro J. Biologia Celular e Molecular. 8th ed. Guanabara; 2005. Klein J, Bontrop RE, Dawkins RL, Erlich HA, Gyllensten UB, Heise ER, Jones PP, Parham P, Wakeland EK, Watkins DI. Nomenclature for the major histocompatibility complexes of different species: a proposal. Immunogenetics. 1990;4:217–9. Koohmaraie M. The biological basis of meat tenderness and potential genetic approaches for its control and prediction. Proc Recip Meat Conf. 1995;48:69–75. Koohmaraie M. Biochemical factors regulating the toughening and tenderization process of meat. Meat Sci. 1996;43:S193–201. Koohmaraie M, Kent MP, Shackelford SD, Veiseth E, Wheeler TL. Meat tenderness and muscle growth: is there any relationship? Meat Sci. 2002;62:345–52. LAMP: Library of Apicomplexan Metabolic Pathways. Nicotinate and nicotinamide metabolism. http://www.llamp.net/?q=Nicotinate%20metabolism. Accessed 29 Dec 2015. Lee NP, Tong MK, Leung PP. Kidney claudin-19: localization in distal tubules and collecting ducts and dysregulation in polycystic renal disease. FEBS Lett. 2006;580:923–31. Lehninger AL, Nelson LD, Cox MM. Princípios de bioquímica. 3rd ed. São Paulo: SARVIER; 2002. p. 1009. Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2(−Delta Delta C(T)) method. Methods. 2001;25:402–8. Lucki NC, Sewer MB. Genistein stimulates MCF-7 breast cancer cell growth by inducing acid ceramidase (ASAH1) gene expression. JBC. 2011;286:19399–409. Magalhães AFB, de Camargo GMF, Junior FGA, Gordo DGM, Tonussi RL, et al. Genome-wide association study of meat quality traits in Nellore cattle. PLoS One. 2016;11(6):e0157845. doi:10.1371/journal.pone.0157845. Malone JH, Oliver B. Microarrays, deep sequencing and the true measure of the transcriptome. BMC Biol. 2011;9:34. Manso CMCP, Neri CR, Vidoto EA, Sacco HC, Ciuffi KJ, Iwamoto LS, Iamamoto Y, Nascimento OR, Serra OA. Characterization of iron(III) porphyrin-hydroxo complexes in organic media through UV-Vis and EPR spectroscopies. J Inorg Biochem. 1999;73:85–93. Otterbein LE, Soares MP, Yamashita K, Bach FH. Heme oxygenase-1: unleashing the protective properties of heme. Trends Immunol. 2003;24:449–55.
Ouali A, Gagaoua M, Boudida Y, Becila S, Boudjellal A, Herrera-Mendez CH, Sentandreu MA. Biomarkers of meat tenderness: present knowledge and perspectives in regard to our current understanding of the mechanisms involved. Meat Sci. 2013;95:854–70. Park JH, Schuchman EH. Acid ceramidase and human disease. Biochim Biophys Acta. 2006;1758:2133–8. Paz CCP, Luchiari Filho A. Melhoramento genético e diferenças de raças com relação à qualidade da carne bovina. Pecuária de corte. 2000;101:58–63. Piorkowska K, Żukowski K, Nowak J, Połtowicz K, Ropka-Molik K, Gurgul A. Genome-wide RNA-Seq analysis of breast muscles of two broiler chicken groups differing in shear force. Anim Genet. 2015;47(1):68–80. R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015. https://www.R-project.org/. Rhoads AR, Friedberg F. Sequence motifs for calmodulin recognition. FASEB J. 1997;11:331–40. Rubensam JM. Transformações Post Mortem e Qualidade da Carne Suína. 1ª Conferência Internacional Virtual sobre Qualidade de Carne Suína. 2000. http://www.cnpsa.embrapa.br/sgc/sgc_publicacoes/anais00cv_jane_pt.pdf. Accessed 30 Dec 2015. Rubenstein JL, Greengard P, Czernik AJ. Calcium-dependent serine phosphorylation of synaptophysin. Synapse. 1993;13:161–72. Sander TL, Stringer KF, Maki JL, Szauter P, Stone JR, Collins T. The SCAN domain defines a large family of zinc finger transcription factors. Gene. 2003;310:29–38. Scollan N, Hocquette J, Nuernberg K, Dannenberger D, Ian R, Moloney A. Innovations in beef production systems that enhance the nutritional and health value of beef lipids and their relationship with meat quality. Meat Sci. 2006;74:17–33. Sekikawa M, Seno K, Mikami M. Degradation of ubiquitin in beef during storage. Meat Sci. 1998;48:201–4. Sgarbieri VC. Proteínas em alimentos protéicos. São Paulo: Varela; 1996. p. 517. Stear MJ, Pokorny TS, Muggli NE, Stone RT. The relationships of birth weight, preweaning gain and postweaning gain with the bovine major histocompatibility system. J Anim Sci. 1989;67:641–9. Steibel JP, Poletto R, Coussens PM, Rosa JMG. A powerful and flexible linear mixed model framework for the analysis of relative quantification RT-PCR data. Genomics. 2009;94:146–52. Tizioto PC, Coutinho LL, Decker JE, Schnabel RD, Rosa KO, Oliveira PSN, Souza MM, Mourão GB, Tullio RR, Chaves AS, Lannad PD, Zerlotini-Neto A, Mudadu MA, Taylor JF, Regitano LCA. Global liver gene expression differences in Nelore steers with divergent residual feed intake phenotypes. BMC Genomics. 2015;16:216. Tizioto PC, Decker JE, Taylor JF, Schnabel RD, Mudadu MA. Genome scan for meat quality traits in Nelore beef cattle. Physiol Genomics. 2013;45:1012–20. Trapnell C, Hendrickson DG, Sauvageau M, Goff L, Rinn JL, Pachter L. Differential analysis of gene regulation at transcript resolution with RNA-seq. Nat Biotechnol. 2013;31:46–53. Trapnell C, Pachter L, Salzberg SL. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics. 2009;25:1105–11. Trapnell C, Roberts A, Goff L, Pertea G, Kim D, Kelley DR. Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks. Nat Protoc. 2012;7(3):562–78. Vandesompele J, De Preter K, Pattyn F, Poppe B, Van Roy N, De Paepe A, Speleman F. Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome Biol. 2002;3(7):RESEARCH0034. Wang Z, Gerstein M, Snyder M.
RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2008 (advance online publication). Wheeler TL, Koohmaraie M, Shackelford SD. Standardized Warner-Bratzler shear force procedures for meat tenderness measurement. Clay Center: Roman L. Hruska U. S. MARC. USDA, 1995. Zhao C, Tian F, Yu Y, Liu G, Zan L, Scott M, Song J. miRNA-dysregulation associated with tenderness variation induced by acute stress in Angus cattle. J Anim Sci Biotechnol. 2012;3:12. Zhong-liang HU, Xing JL, Xiao-feng YE. Effects of Angiotensin II on Beef Quality. Acta Agric Jiangxi. 2009:11. We thank the Qualitas Nelore breeding program company for providing the tissue samples and database used in this study. The RNA sequencing was funded by the project "Genomic tools for the genetic improvement of traits of direct economic importance in Nelore cattle". It was also financed by the São Paulo Research Foundation – FAPESP (FAPESP grant #2009/16118-5). The LFSF scholarship was funded by the São Paulo Research Foundation – FAPESP (FAPESP grant #2013/09190-7). The dataset utilized in this study belongs to the Qualitas Nelore breeding program company and could be made available on request. The authors do not have authorization to share the data. Faculty of Agricultural and Veterinary Sciences, São Paulo State University, FCAV/UNESP, Jaboticabal, São Paulo, Brazil: Larissa Fernanda Simielli Fonseca, Daniele Fernanda Jovino Gimenez, Danielly Beraldo dos Santos Silva, Fernando Baldi, Jesus Aparecido Ferro & Lucia Galvão Albuquerque. CyVerse, University of Arizona, Tucson, USA: Roger Barthelson. LFSF, DFJG, LGA, JAF and FB conceived and designed the experiment; LFSF and DFJG performed the experiments; LFSF, LGA, DBSS and RB analyzed and interpreted the results; LFSF, LGA, JAF and RB drafted and revised the manuscript. All authors read and approved the final version of the manuscript. Correspondence to Larissa Fernanda Simielli Fonseca. All experimental procedures were approved by the Ethics Committee of the Faculty of Agrarian and Veterinary Sciences of Sao Paulo State, Jaboticabal, São Paulo (protocol number 18,340/16). The animals were provided by the Qualitas Nelore breeding program company and were slaughtered in commercial slaughterhouses. These slaughterhouses have animal welfare departments staffed by professionals trained by WAP (World Animal Protection) to ensure that the animals are killed humanely using a captive bolt pistol for stunning. Additional file 1: Table S1. Sample number (N), classification of the sample, shear force (kgf/cm2), number of transcripts aligned in pairs (N reads), and percentage of transcripts aligned in pairs (% reads). (DOCX 16 kb) Additional file 2: Figure S1. Expression profile of reference genes in the experimental groups (tender and tough meat). (TIFF 176 kb) Additional file 3: Figure S2. Box plot of expression values (log10 FPKM) obtained for the groups studied (tender and tough meat). (TIFF 162 kb) Additional file 4: Figure S3. Principal component analysis (PCA) of the transcripts found in the tender (red) and tough (blue) meat groups. (TIFF 215 kb) Additional file 5: Table S2. Enriched GO terms obtained with the DAVID software for differentially expressed genes. (XLSX 14 kb) Fonseca, L.F.S., Gimenez, D.F.J., dos Santos Silva, D.B. et al.
Differences in global gene expression in muscle tissue of Nellore cattle with divergent meat tenderness. BMC Genomics 18, 945 (2017). doi:10.1186/s12864-017-4323-0
CommonCrawl
Navier-Stokes equations in Riemannian geometry The Navier-Stokes equations can be written on a Riemannian manifold as: $$\dot{u}+\nabla_u u+ \Delta u=(df)^* $$ $$d^* u=0$$ where $\nabla$ is the Levi-Civita connection, $u$ is a vector field, $\Delta$ is the Laplacian, $df$ is the differential of $f$, $(df)^*$ is the dual of $df$ via the metric, and $d^*u$ is the divergence of $u$. The problem is due to Antoine Balan. Do you have references? dg.differential-geometry ap.analysis-of-pdes reference-request Have you looked at the work of Marsden and Weinstein? – Deane Yang Moreover, googling "Navier-Stokes Riemannian manifold" produces a lot of hits. Do you think it would be possible to extend the results of Arnol'd to the Navier-Stokes equation? For instance, considering the Navier-Stokes equation as a small perturbation of the Euler equations, just as it was done by Cruzeiro et al., but from a stochastic point of view. From the point of view of extending the results from flat space to compact manifolds, there is no difficulty at all: arxiv.org/abs/0901.4412 – timur The answer and comments about Arnold and Marsden papers are a little off side. They concern the equation of inviscid fluids, called Euler equation. This differs from Navier-Stokes by the highest-order derivatives $\Delta u$. This changes completely the functional analysis background. Also, Euler equation has a geometrical interpretation (geodesics on the group of measure-preserving diffeomorphisms), whereas Navier-Stokes has not. I am not aware of references for Navier-Stokes on manifolds. However, I don't think that this is a real problem. What has been important so far for Navier-Stokes is the space dimension and the embedding theorems we have between functional spaces like Sobolev, Besov and others. For instance, the Cauchy problem must be globally well-posed on every compact surface, and locally well-posed on $3$-manifolds. Denis Serre
agtagt For what it's worth, the Navier-Stokes equation on manifolds is also mentioned in this recent paper http://arxiv.org/pdf/1107.2698, see (1.16) there, in connection with another flow for vector fields that the authors define. YangMillsYangMills I would write the Navier-Stokes equations on a Riemannian manifold $(\mathcal M,g)$ in a slightly different way. The unknown is still a time-dependent vector field $v$, to which you can associate a one-form $u$, defined in the charts by $$ \langle u(x), T\rangle_{T_x^*(\mathcal M), T_x(\mathcal M)}=g(v(x),T), \quad \text{$u=gv$ for short.} $$ Then the equation is $$ \partial_t u+\mathcal L_v(u)+\nu d^* du=dq,\quad \text{div} v=0,$$ where $\mathcal L_v$ is the Lie derivative with respect to $v$. Note that, if $\Omega_0$ is an orientation of $\mathcal M$, we define the divergence of $v$ by the formula $ \mathcal L_v(\Omega_0)=(\text{div} v)\Omega_0. $ We may define now the vorticity $\omega$ as $du$ and get the equations $$ \partial_t \omega+\mathcal L_v(\omega)+\nu dd^* \omega=0,\quad \text{div} v=0, \omega=d(gv).$$ BazinBazin Not the answer you're looking for? Browse other questions tagged dg.differential-geometry ap.analysis-of-pdes reference-request or ask your own question. When can a connection Induce a Riemannian metric for which it is the Levi-Civita connection? "Nash Style" Embedding Theorem for Connections Convergence of solutions to Navier-Stokes to Euler's equation for viscosity $\to$ zero Principal bundles and Subriemannian Geometry 1-parameter group of a vector field A solution to the Navier-Stokes equation that is defined for on $[0,T]$ with $T$ large is global? Leray projector in $L^{\infty}$ and negative order Besov spaces for the Navier-Stokes equations
CommonCrawl
Multiplicity theorems for resonant and superlinear nonhomogeneous elliptic equations Nikolaos S. Papageorgiou, Vicenţiu D. Rădulescu DOI: http://dx.doi.org/10.12775/TMNA.2016.048 Subject Classification: Primary 35J20; Secondary 35J60, 35J92, 58E05. Keywords: Resonance; multiple solution; superlinear reaction; nodal solutions; critical groups. We consider nonlinear elliptic equations driven by the sum of a $p$-Laplacian ($p>2$) and a Laplacian. We consider two distinct cases. In the first one, the reaction $f(z,\cdot)$ is $(p-1)$-linear near $\pm\infty$ and resonant with respect to a nonprincipal variational eigenvalue of $(-\Delta_{p},W_{0}^{1,p}(\Omega))$. We prove a multiplicity theorem producing three nontrivial solutions. In the second case, the reaction $f(z,\cdot)$ is $(p-1)$-superlinear but does not satisfy the Ambrosetti-Rabinowitz condition. We prove two multiplicity theorems. In the first main result we produce six nontrivial solutions, all with sign information, and in the second theorem we have five nontrivial solutions. Our approach uses variational methods combined with Morse theory, truncation methods, and comparison techniques.
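For orientation, the equations in question presumably take the following Dirichlet form (a schematic statement inferred from the abstract, not quoted from the paper): $$ -\Delta_{p}u(z)-\Delta u(z)=f(z,u(z)) \text{ in } \Omega, \qquad u=0 \text{ on } \partial\Omega, \qquad 2<p<\infty, $$ where $\Delta_{p}u=\operatorname{div}(|\nabla u|^{p-2}\nabla u)$ denotes the $p$-Laplacian and $\Omega\subseteq\mathbb{R}^{N}$ is a bounded domain.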
Theoretical Biology and Medical Modelling
Effects of pathogen dependency in a multi-pathogen infectious disease system including population level heterogeneity – a simulation study
Abhishek Bakuli (ORCID: orcid.org/0000-0001-5123-1974), Frank Klawonn, André Karch & Rafael Mikolajczyk
Theoretical Biology and Medical Modelling volume 14, Article number: 26 (2017)
Increased computational resources have made individual based models popular for modelling epidemics. They have the advantage of incorporating heterogeneous features, including realistic population structures (like e.g. households). Existing stochastic simulation studies of epidemics, however, have been developed mainly for single pathogen scenarios, although different pathogens might directly or indirectly (e.g. via contact reductions) affect each other's spread. The goal of this work was to simulate a stochastic agent based system incorporating the effect of multiple pathogens, accounting for the household based transmission process and the dependency among pathogens. With the help of simulations from such a system, we observed the behaviour of the epidemics in different scenarios. The scenarios included different household size distributions, dependency versus independency of pathogens, and also the degree of dependency, expressed through household isolation during the symptomatic phase of individuals. Generalized additive models were used to model the association between the epidemiological parameters of interest and the variation in the parameter values from the simulation data. All the simulations and statistical analyses were performed using R 3.4.0. We demonstrated the importance of considering pathogen dependency using two pathogens, showing the difference between modelling them as independent and as dependent. Additionally, for the general scenario with more pathogens, the assumption of dependency among pathogens and the household size distribution in the population cohort were found to be effective in containing the epidemic process. Populations with larger household sizes reached the epidemic peak faster than societies with smaller household sizes, but dependencies among pathogens did not affect this outcome significantly. Larger households had more infections in all population cohort examples considered in our simulations. An increase in the household isolation coefficient for pathogen dependency could also contain the epidemic process. The presence of multiple pathogens and their interaction can impact the behaviour of an epidemic across cohorts with different household size distributions. Future household cohort studies identifying multiple pathogens will provide useful data to verify the interaction processes in such an infectious disease system. Respiratory infections are the most common type of infections contributing to loss of productive time due to acute conditions [1]. Households play an important role in the transmission process of respiratory infective agents, since they serve as confined structures with close proximity of contacts among the individuals belonging to them [2]. Approximately a third of influenza like infection transmissions occur within households [3,4,5].
Studies on modelling epidemic spread in populations distributed into household clusters of varying sizes have been conducted to investigate possible control measures against epidemic outbreaks, where larger households were associated with more infection transmissions [6,7,8,9,10]. Individual level stochastic models, also known as agent based models, are highly flexible constructs to study complex phenomena by simulating the behaviour of multiple agents (individuals or grouped entities) simultaneously. FluTE [11] and FRED [12] are examples of such agent based models that incorporate the community structure to study the progression of influenza like infections in the population [13,14,15]. Epidemic studies to date have mostly focused on the effect of a single pathogen in determining the population behaviour and spread of infections. Seasonal epidemics of respiratory infections are a common phenomenon during the winter months annually, with several emergent and dominant pathogens circulating in the society. Additionally, there is always the possibility of antigenic drift, i.e. mutations of viruses impacting the protective effect of immunity against further infections [16]. Thus there is a need to study the epidemic reality of several pathogens co-existing in the community, with differential seasonality patterns, as well as differential severity and transmissibility characteristics. The idea of dynamic interaction between pathogens, or ecological interference, has been studied for diseases with differential seasonality in the case of measles and whooping cough [17] and for the impact of vaccination for pandemic influenza [18]. The study of the infection process with multiple interacting pathogens has been lacking in the agent based models developed in the past. Infection from one pathogen, along with an intervention strategy like household isolation, can have an impact not only on the individual's exposure to the specific pathogen but also on the exposure to other pathogens, which can eventually impact the parallel epidemic processes of co-existing pathogens. This involves cross immunity caused by an infectious pathogen, and changes in the contact structure among individuals within and between households. In addition to this, if there are two pathogens with exactly the same characteristics, they create a competition within the scope of the epidemic process. Additional factors like household structure and the presence of an immunized proportion of individuals can impact the course of the epidemic, since they can potentially accelerate or decelerate the transmission of infections in the population [8,9,10]. Moreover, they are also directly related to the household isolation strategy, since they impact the within household transmission. The aim of our study is to investigate how multi-pathogen interaction impacts the epidemic process, when compared to scenarios with only a single pathogen, if different household structures and the proportion of already immune individuals are taken into account. Agent-based modelling of disease transmission We use an agent-based approach with the basic structure of an SEIR (Susceptible, Exposed, Infectious, and Recovered) model. During the exposed state we assume that individuals are asymptomatic and do not impact the transmission process. After a period of being asymptomatic, individuals enter the infectious phase, where they are symptomatic and can transmit infections.
The assumption that household isolation during the infectious phase nullifies an individual's risk of external infection from other pathogens is what induces the interaction between pathogens in the multi-pathogen setting. The degree of this reduction of the external transmissibility depends on pathogen characteristics. The single pathogen case is shown in Table 1. \(TP_1(t)\) has two components: transmission of the pathogen resulting from contacts in the society (\(P_{external}(p,t)\)) and from contacts within households (\(P_{family}(p,t)\)) for a given pathogen p. Transmission in the society depends on the baseline infectivity of the pathogen (v), the proportion of infectious individuals in the society scaled by a pathogen specific factor z, and a seasonality parameter (s(t)). Table 1 The transition probability matrix for a single pathogen with the SEIR states. \( {P}_{external}\left(p,t\right)=v\ s(t)\ z\ \left(\frac{I\left(t-1\right)}{N}+{P}_0\right) \), with \( N=S(t)+E(t)+I(t)+R(t) \) and \( \frac{I\left(t-1\right)}{N}+{P}_0\le 1 \), where \( s(t) = A\left(\sin\left(\omega(t - t_0)\right) + 1\right)/2,\ \omega = 2\pi/365 \) (if the simulation starts on October 1st then \(t = 0\), \(t_0 = 0\)). \(P_0\) indicates the external influx of infection, and A indicates the amplitude of the seasonality function (for example the effect of outside temperature, Table 2). The parameters are calibrated so that \(P_{external}(p,t)\) never exceeds one. Table 2 Description of the symbols used in the mathematical formulation of the transition probabilities for describing the agent based model. Transmission in the family depends on the pathogen characteristics: the baseline infectivity v, a factor c for the closeness of contacts within the family, and the number of infectious persons in the same household \(I_h(t)\): \( {P}_{family}\left(p,t\right)=1-{\left(1-v\ c\right)}^{I_h\left(t-1\right)} \) (description provided in Table 2). \(Z(t) \in \{Susceptible, Exposed, Infectious, Recovered\}\ \forall\ t \ge 1\), where Z is an individual in the study. \( TP_1 = Probability(Z(t+1) = Exposed \mid Z(t) = Susceptible) = 1 - (1 - P_{external}(p,t))\,(1 - P_{family}(p,t)) \) (Tables 1 and 2). The probability \(TP_1\) describes the transition from being Susceptible to becoming Exposed. The above formulation includes the specific scenario where there is no possibility of a family based transmission, which is always the case for a single member household. Let LP (>0) and IP (>0) (description in Table 2) be the average latency period and infectious period, respectively, for a given pathogen p. \(TP_2\) describes the transition probability of an Exposed individual becoming Infectious for the pathogen it is already exposed to, and \(TP_3\) the transition probability for an Infectious individual to obtain immunity, i.e. become Recovered for that pathogen for the remaining time in the study period. In this paper, we assume that LP and IP are independent constructs. \( TP_2 = Probability(Z(t+1) = Infectious \mid Z(t) = Exposed) \) (Table 1): $$ \min \left(\frac{1}{LP},1\right)=q={TP}_2 $$ \( X(i,p) \sim Geometric(q) \): after time X(i,p), that is, at time X(i,p)+1, the i-th individual becomes Infectious, counted from the time it became Exposed for pathogen p (description in Table 2).
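To make the mechanics concrete, here is a minimal R sketch of the daily susceptible-to-exposed probability \(TP_1\) built from the formulas above. All function names and parameter values are illustrative assumptions of ours, not taken from the paper's code.

```r
# Seasonality s(t): amplitude A, phase t0, period 365 days
seasonality <- function(t, A = 1, t0 = 0) {
  omega <- 2 * pi / 365
  A * (sin(omega * (t - t0)) + 1) / 2
}

# Transmission pressure from society, capped at 1 as in the calibration
p_external <- function(t, v, z, I_prev, N, P0 = 1e-4, A = 1, t0 = 0) {
  min(1, v * seasonality(t, A, t0) * z * (I_prev / N + P0))
}

# Escape-based within-household transmission over I_h infectious members
p_family <- function(v, c, I_h_prev) {
  1 - (1 - v * c)^I_h_prev
}

# TP1: probability a susceptible individual becomes exposed on day t
tp1 <- function(t, v, z, c, I_prev, N, I_h_prev) {
  1 - (1 - p_external(t, v, z, I_prev, N)) * (1 - p_family(v, c, I_h_prev))
}

# Example: day 30, 50 infectious of 10000, one infectious household member
tp1(t = 30, v = 0.02, z = 5, c = 3, I_prev = 50, N = 10000, I_h_prev = 1)
```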
\( TP_3 = Probability(Z(t+1) = Recovered \mid Z(t) = Infectious) \) (Table 1): $$ \min \left(\frac{1}{IP},1\right)=r={TP}_3 $$ \( Y(i,p) \sim Geometric(r) \): after time Y(i,p), that is, at time Y(i,p)+1, the i-th individual acquires immunity for the remaining study period, counted from the time it became Infectious with pathogen p (description in Table 2). We also assume that X(i,p) and Y(i,p) are independent and geometrically distributed. When there are multiple pathogens present in the society (p and p′ in our case are two different exemplary pathogens), we introduce an additional state, Susceptible+, in the agent based model. On acquiring symptoms of infection with pathogen p (i.e. entering the state Infectious), there is a check whether the individual is susceptible to another pathogen p′, i.e. whether Susceptible(p′) is True or False. If True, then the individual at Susceptible(p′) moves to Susceptible+(p′) instantaneously. Once an individual at Infectious(p) moves to Recovered(p), we check once again whether the individual is still susceptible to p′, i.e. whether Susceptible+(p′) is True or False. If True, then Susceptible+(p′) moves to Susceptible(p′) instantaneously (Fig. 1). A person at the state Susceptible+ is potentially at risk only through the household mode of infection transmission and can become Exposed. Following this, the steps for the exposed individual are the same as described for one pathogen. In case the individual reaches the state Recovered for p while it is still at Susceptible+ for some p′, it becomes Susceptible once again for p′. The described process is represented as a Markov chain (Fig. 2). We vary the degree of household isolation using a parameter λ, which takes values between zero and one, to indicate differences in the risk of acquiring an infection from outside the household. Fig. 1 Graphical illustration of the Susceptible, Exposed, Infectious, and Recovered states of the agent based model, with some assumptions described. The time lines for the latency period and infectious period are also indicated through the dashed lines for an i-th individual in the population for pathogens p and p′, where p′ ≠ p. The dependency assumption induces the Susceptible+ state. The black arrows represent the direction of influence, whereas the coloured arrows represent the transitions. The part above the dotted line indicates the states when only one pathogen is present in society, or when the pathogens function independently in the system. The part below the dotted line is introduced when more than one pathogen is present in society and the pathogens interfere in the joint behaviour. *When an individual is Infectious for pathogen p and is still susceptible to another pathogen p′, it instantaneously moves to the state Susceptible+ for pathogen p′. **Once the individual is in the Recovered state for pathogen p and is still at state Susceptible+ for pathogen p′, it switches back to the Susceptible state instantaneously. Fig. 2 Markov chain describing the dependency process among pathogens. **Once the individual is in the Recovered state for pathogen p and is still at the Susceptible+ state for pathogen p′, it switches back to the Susceptible state instantaneously. \( Probability(Z(t+1) = Susceptible^{+} \mid Z(t) = Susceptible) = 1 \) for pathogen \(p'\) when Z(t) = Infectious for pathogen p and p ≠ p′ (Fig. 1).
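The instantaneous switches between Susceptible and Susceptible+ can be written compactly. The following R sketch is our own reading of Figs. 1 and 2, with invented names; it is not the authors' implementation, and it follows the text in flipping parked pathogens back on recovery.

```r
# One individual's per-pathogen states as a named character vector.
# When the individual turns Infectious for p, every pathogen it is still
# Susceptible to is parked in "SusceptiblePlus" (household-only risk).
on_becomes_infectious <- function(state, p) {
  state[p] <- "Infectious"
  idx <- names(state) != p & state == "Susceptible"
  state[idx] <- "SusceptiblePlus"
  state
}

# On recovery from p, parked pathogens become fully Susceptible again
# (assuming no simultaneous infectious episode for a third pathogen).
on_recovers <- function(state, p) {
  state[p] <- "Recovered"
  state[state == "SusceptiblePlus"] <- "Susceptible"
  state
}

state <- c(p6 = "Susceptible", p10 = "Susceptible")
state <- on_becomes_infectious(state, "p6")  # p10 -> SusceptiblePlus
state <- on_recovers(state, "p6")            # p10 -> Susceptible again
```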
The description of \(P_{family}\) remains unchanged. \(TPS^{+}\) describes the probability that an individual at the state Susceptible+ remains at Susceptible+. This is the situation where the individual does not get exposed to pathogen p′ but remains symptomatic for p. It depends on the pathogen characteristics of both p and p′: $$ {TPS}^{+}= Probability\left(Z(t+1)={Susceptible}^{+} \mid Z(t)={Susceptible}^{+}\right) = \left(1-P_{family}(p',t)\right)\left(1-(1-\lambda)P_{external}(p',t)\right)\left(1-TP_3(p)\right) $$ (Fig. 2), i.e. the probability of escaping exposure to p′ both within the household and, attenuated by the isolation coefficient λ, outside it, times the probability of remaining infectious for p. Population structure We have considered three population structures with different properties (Germany, India, one-person structure). The data on the household size distribution in Germany was taken from DESTATIS (Statistisches Bundesamt, Wiesbaden 2015 report) and that of India from the census reports of 2011 [19, 20]. We have also considered a hypothetical population of one person households as the most extreme scenario. The frequencies for each household size are given in Table 3. We distributed 10,000 individuals into each population scenario. Table 3 The household size distributions for the different populations considered to describe the epidemic outcomes from simulations using the agent based model. Pathogen characteristics We studied a general multi-pathogen setting with n (n = 10) pathogens with characteristics chosen to reflect potential real life situations, as described in Table 4. The baseline infectivity v was calibrated to achieve a maximum incidence rate of approximately 10% in person weeks for respiratory infections during the peak winter season (https://grippeweb.rki.de). Two broad types of pathogens were considered, the influenza type and the common cold type. Influenza type pathogens have typically been reported with shorter latent periods (period with asymptomatic infection; 1–4 days for influenza, 1–6 days for common cold) but longer infectious periods (period with symptomatic infection; 5–9 days for influenza, 2–3 days for common cold), while the opposite pattern has been reported for common cold type pathogens [21,22,23]. Table 4 Pathogen characteristics. This table gives the input parameters for the simulation of the agent based model with ten pathogens. I indicates influenza type, while C indicates common cold type of pathogen.
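As an illustration of the population setup, a household-structured cohort of roughly 10,000 individuals can be generated in a few lines of R. The size probabilities below are rough stand-ins for a distribution like the German one in Table 3 (which we do not reproduce exactly here), and all names are ours.

```r
set.seed(1)

# Draw household sizes until the target population size is reached.
build_population <- function(target_n = 10000,
                             sizes = 1:5,
                             probs = c(0.41, 0.34, 0.12, 0.09, 0.04)) {
  hh_sizes <- integer(0)
  while (sum(hh_sizes) < target_n) {
    hh_sizes <- c(hh_sizes, sample(sizes, 1, prob = probs))
  }
  data.frame(id        = seq_len(sum(hh_sizes)),
             household = rep(seq_along(hh_sizes), hh_sizes))
}

pop <- build_population()
table(tabulate(pop$household))  # realized household size distribution
```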
The comparisons were done for the scenarios of pathogen dependency: 1) assuming all pathogens existed independently (λ = 0%), and 2) assuming the pathogens worked together and influenced each other (λ = 100%) (indicated by Pathogen Dependency: Yes or No); and for household size distributions of different countries (Country – Germany, India or Hypothetical). At the start of the simulation a few people were infectious for every pathogen, denoted I(1), to kick start the infection process, while the number of people already immune at the start was represented as R(1). In addition to the above, we assumed that for every pathogen there would be a small chance that an individual could acquire an infection from outside the system, described as the external influx of infection. We set this value to one in ten thousand at each observational time point (day) in the epidemic process. Besides this, the maximum number of days spent as infectious was censored at 55 days. Each of the scenario combinations was replicated 100 times for a study period of 150 days in the peak season for respiratory illness. In our base case scenario with the German population and pathogen dependency (λ = 100%), we use the same temporal trend (seasonality) for all the pathogens considered. However, to assess the effect of differential seasonality, we introduce a different temporal trend for pathogen 10 by modifying the value of \(t_0\) by +45 days and −45 days. This shifts the peak of the epidemic for this pathogen and impacts the overall epidemic process when multiple pathogens are present. Also for this scenario we evaluated the effect of changing λ from 0% to 100% in steps of 10%, which allows us to infer the importance of the pathogen dependency assumption introduced through household isolation. We measure the following epidemiological parameters of interest: 1) the height of the epidemic peak (peak prevalence), 2) the time taken to reach the peak of the epidemic, 3) the incidence proportion (attack rate) of infections in the study period, and 4) the incidence proportion stratified by household size for the different populations in consideration, all measured through our simulations as described above. Summary statistics are presented for all these outcomes. We observe the peak prevalence and the incidence proportion for pathogens 6 and 10, both individually and jointly. We are interested in the hypothesis that jointly modelling pathogens creates a competition, and hence that we would observe lower values of the peak prevalence and incidence proportion compared to observing them individually. The observations are compared using the non-parametric Mann–Whitney–Wilcoxon test for evaluating the difference when observing joint epidemics. The parametric version with the paired t-test gives similar results; however, since the Mann–Whitney–Wilcoxon test requires no normal distribution assumptions, its values are reported [24]. We also use a simple linear regression model [25] on the outcomes described above and show the confidence intervals for the slope across the different outcomes to indicate the impact of pathogen dependency for each level of the country variable (used to describe the different household size distributions) in the scenario with 10 pathogens.
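The outcome measures and the comparisons described here fit in a few lines of R; the data objects below (vectors of per-run outcomes and the run-level data frame) are hypothetical names of ours, not the authors' code.

```r
# Outcome measures from one simulated run: 'prev' is the daily prevalence
# series and 'new_cases' the daily new infections; N is the cohort size.
peak_prevalence <- function(prev) max(prev)
time_to_peak    <- function(prev) which.max(prev)
incidence_prop  <- function(new_cases, N) sum(new_cases) / N

# Paired one-sided test: sum of the independently simulated pathogens (S1)
# vs. the jointly simulated dependent system (S2), over 100 replicates.
wilcox.test(s1_peak_p6 + s1_peak_p10, s2_peak_joint,
            paired = TRUE, alternative = "greater")

# Slope of an outcome on the dependency indicator (0 = independent,
# 1 = dependent), with its confidence interval, per country stratum.
confint(lm(peak_prev ~ dependency, data = runs_10pathogen))
```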
The covariate used in this linear regression is the coefficient for the degree of household isolation (0 describing the independent scenario and 1 describing complete household isolation in the case of dependency). The confidence intervals show the variability in the slopes across the different country variables indicating different household size distributions. For studying the degree of household isolation, we use the Generalized Additive Model (GAM). GAMs are an extension of the generalized linear model (GLM) allowing for smoothing of the predictor variables. The advantage of GAMs is that they allow us to deal with highly non-linear and non-monotonic relationships between the response and the predictor variables, often driven by the observed data at hand [26, 27]. GAMs are also used in this work to model the dependency of the incidence proportion of infections stratified by household size, where a non-linear relationship is observed. In our simulations, pathogen 6 demonstrated behaviour similar to a pandemic influenza epidemic; hence it was the most severe pathogen in our list. Pathogen 10 was the second most severe among the pathogens present. The differences in the household size distribution described through the country variable are demonstrated in Table 5. Smaller household sizes were associated with less severe epidemics, demonstrated through smaller values of the peak prevalence as well as lower incidence proportions. The epidemic process was also slower, indicated by the delayed median time in reaching the peak prevalence of infections in the study period. The epidemic almost never occurred (low values of incidence and prevalence and high variability in the time to reach the peak prevalence) in the hypothetical population cohort, where within household infection transmissions were completely absent (Fig. 3). Simulating pathogen 6 and pathogen 10 jointly was associated with household isolation during the symptomatic phase of an episode, and this brought competition between the two pathogens into the epidemic process. We tested the hypothesis that incidence proportion and peak prevalence were higher in individual simulations of a pathogen as opposed to the joint interaction of the two pathogens in a system. The difference could be observed only where the epidemic occurred (i.e. not in the hypothetical population cohort). Furthermore, we also evaluated the hypothesis that the sum of the independent peak prevalence and incidence proportion from the two pathogens was greater than the joint overall peak prevalence and incidence proportion in the two pathogen system. Here too the difference was observed, except in the hypothetical cohort (Table 5). In the two pathogen system the time taken to reach the peak prevalence was dominated by the pandemic pathogen (pathogen 6). However, no difference in this duration was observed between the two pathogen system and the separately simulated individual pathogen systems. When two identical pathogens were considered, the results of the comparison were similar. However, when the two pathogens were more aggressive (characteristics of pathogen 6), the difference in the peak prevalence of infections between modelling them jointly and considering them individually was significantly higher than in the scenario where the two pathogens had moderate characteristics (characteristics of pathogen 10). Table 5 Summary and comparison of the two pathogen system (S2) vs. the one pathogen system (S1). The pathogen is indicated in the parenthesis.
S1(P6 + P10) indicates the sum of the individual values from the pathogens simulated independently, whereas S2(P6 + P10) indicates the system where household isolation introduces pathogen dependency and the pathogens function jointly. The outcomes of peak prevalence and incidence proportion (during the 150 day period), along with their 95% confidence intervals (based on Monte-Carlo simulations), are shown in the summary section. The comparison section displays the non-parametric p values (based on the Mann-Whitney-Wilcoxon test) obtained when comparing the pathogen systems over the simulation runs. Fig. 3 Difference across the country locations, indicating the different household size distributions and the coefficient of household reduction during the symptomatic phase of the infection. The slope is obtained from the linear model to indicate the change caused by the most extreme difference in the coefficient, i.e. between the dependent scenario (all pathogens interacting with dependency) and the independent scenario (all pathogens working independently). This is also visible in Table 6. The outcomes of interest presented are 3.1, the peak prevalence during the observed epidemic, and 3.2, the incidence of infections during the 150 day period of interest. (Numbers above 1 indicate that the cumulative probability of infections during the study period was above 100%.) Since the multi pathogen scenario is the more probable model, we consider 10 pathogens as described in Table 4. These include pathogen 6 and pathogen 10, which have been described before. We now vary the coefficient of the household reduction process between the extremes of 0 and 1, indicating pathogens functioning independently in the population and pathogens interacting within the population, respectively. Looking across the different country variables for varying household size distributions, we observed that here too societies with larger household sizes had an accelerated and more severe epidemic (Fig. 3 (3.1, 3.2) and Additional file 1). We also looked at the difference introduced by the extreme values of household isolation during the infectious phase using the slope of the linear regression model. The summary of the slopes indicated differences wherever the epidemic took place (Table 6). For the cohorts with the German and hypothetical household size distributions we could not observe any significant decrease in the speed of the epidemic, in contrast to the cohort with the Indian household size distribution, where an accelerated epidemic was observed with an increased coefficient of household isolation. Table 6 Comparison of slopes across the different country locations. This indicates the observed difference in the outcomes of the epidemics due to the differences in the coefficient of household isolation (the extreme scenarios of complete dependency versus pathogens functioning independently) and the household size distribution in the country location used, as shown in Fig. 3 (3.1, 3.2). We analysed the impact of changing the coefficient of household isolation during the infectious period by varying it from 0 to 100% in steps of 10% for the cohort with the German household size distribution; a sketch of this sweep is given below. The time taken to reach the epidemic peak remained unchanged with the variation of the household isolation coefficient. However, there was a decrease in the epidemic peak and the incidence proportion of infections with an increase in the household isolation coefficient.
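The sweep over isolation levels and the subsequent GAM fit might look as follows in R; simulate_epidemic is a placeholder for the full agent based simulator (not shown), all object names are our own, and the mgcv package is assumed for the GAM [26, 27].

```r
library(mgcv)

lambdas <- seq(0, 1, by = 0.1)   # household isolation coefficient, 0% to 100%
runs <- do.call(rbind, lapply(lambdas, function(l) {
  data.frame(lambda    = l,
             peak_prev = replicate(100, simulate_epidemic(lambda = l)$peak_prev))
}))

# Smooth, possibly non-monotonic dependence of the epidemic peak on lambda
fit <- gam(peak_prev ~ s(lambda, k = 5), data = runs)
summary(fit)  # significance of the smooth term, as reported in the text
```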
This decrease results from reduced contacts with society during the infectious phase, making individuals less vulnerable to new infections during this period. The results are represented in Fig. 4 (4.1 and 4.2), and the smoothed coefficients for the household isolation coefficient in the GAM regression were highly significant for the outcomes of incidence proportion and peak prevalence of infections (both p below 0.0001). However, the effect was not significant for the outcome of the time taken to reach the peak prevalence. Fitting a simple linear regression model to the same scenario also gave similar results, and the slope was always negative (except for the outcome of time to reach the incidence peak). However, the fit through the median points was better with the GAM model. Fig. 4 Epidemic outcomes with varying degree of household isolation. We observe a decrease in peak prevalence (4.1) and incidence of infections (numbers above 1 indicate that the cumulative probability of infections during the study period was above 100%) (4.2) with the increase in the degree of household isolation during the infectious phase. Additionally, we looked at the proportion of individuals who were at home at a given time point, together with the distribution of the proportion of people who were infectious for one or more pathogens at a given time point. We assessed these proportions in simulations for the German household size distribution, with immune individuals in the population and 10 pathogens interacting with each other during the epidemic process (Fig. 5). Our calculations showed that the majority of the cases where a person was symptomatic and remained at home were due to one pathogen, with a proportion of 0.9959 (0.99, 1.00) (median with 5th and 95th percentile values in brackets). For two pathogens at a time, the proportion was 0.003 (0.00, 0.01), whereas for three pathogens at a time the median proportion was already zero. This can also be seen in Fig. 5, where the proportion of individuals infectious for two or more pathogens at a time point is very close to the zero line. For the corresponding scenario with the household size distribution of India, we obtained similar results. The proportion of household stays due to a single pathogen infection was dominant, 0.9963 (0.96, 1.00). There were also some rare cases of being simultaneously infected by three or four pathogens. Fig. 5 Simulation results showing the average population proportion from 100 simulated epidemics during the epidemic period that is under household isolation for being symptomatic for infections. The black and the red lines indicate how the proportion of people acquiring infections evolves during the course of the epidemic and then recovers with time. The red line shows that at most a tenth of the population remains at home on average during the epidemic period. The blue line almost covers the red line, indicating that the majority of infection episodes are caused by one pathogen. The pink and the grey lines are close to zero at all time points, indicating how unlikely it is for an individual to be infected with more than one pathogen at a time. We also analysed the incidence proportion stratified by household size in the different population distributions. We saw an increase in the incidence proportion of infections with increasing household size (Fig. 6).
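The summary format used above, the median with 5th and 95th percentiles over the simulated runs, is one line of R; the vector name is a hypothetical one of ours.

```r
# Median with 5th and 95th percentiles, as quoted in the text
quantile(prop_isolated_one_pathogen, probs = c(0.5, 0.05, 0.95))
```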
For this relationship too, we used a GAM to represent the nonlinear dependence of the incidence proportion on household size. For the hypothetical population cohort no such relationship could be estimated, because it contained only one member households. The incidences for one member households also differed across the population distributions, with higher incidences in populations with larger household sizes. Fig. 6 Incidence of infections stratified by household size (numbers above 1 indicate that the cumulative probability of infections during the study period was above 100%). Finally, we observed the impact of shifting the temporal trend for pathogen 10 (representing the characteristics of the RSV virus) as opposed to pathogen 6 (representing the characteristics of the pandemic influenza virus) and the remaining 8 viruses in the 10 pathogen system. The trend for pathogen 10 was shifted by using different values for \(t_0\), namely +45 and −45 days. We performed these simulations only for the German population type with dependency among pathogens (complete household isolation during the infectious phase). In comparison to the base case scenario, there was a decrease in the peak prevalence as well as the incidence proportion due to the temporal shift in the trend for pathogen 10. The decrease was significantly larger for the shift where the peak for pathogen 10 was delayed by 45 days, as opposed to the peak being brought forward by 45 days. The time taken to reach the epidemic peak remained unchanged (Fig. 7 (7.1 and 7.2)). Also, the decrease in the incidence proportion between the scenario where the peak for pathogen 10 was brought forward by 45 days and the base case was comparatively smaller than the corresponding decrease in peak prevalence. Fig. 7 Epidemic outcomes for the base case scenario (all pathogens temporally aligned in their seasonality) in comparison to the scenarios where pathogen 10 has a shifted temporal trend. The shifting reduces the intensity of the epidemic. The reduction is larger when there is a delayed peak in the epidemic for pathogen 10 as opposed to an earlier peak. (Incidence numbers above 1 indicate that the cumulative probability of infections during the study period was above 100%.) We have proposed an agent based model to study the behaviour of epidemics under the influence of multiple pathogens working simultaneously in the population. With the presence of two pathogens in such a system, without the influence of any other effect, we could demonstrate how the interference of the pathogens in the infection process played a role in containing the epidemic process (a lower number of infected individuals as well as a lower daily incidence proportion). The interference among pathogens was introduced through the assumption of household isolation during the period of being symptomatically infectious, during which the individual was shielded from the risk of acquiring infections from outside the household. To our knowledge, this is the first study of the behaviour of an epidemic process incorporating the influence of multiple pathogens in an agent based model. We further presented a more general scenario with 10 pathogens, including the impact of recovered individuals being present in the population at the start of the epidemic process.
Our simulations were performed to study the impact of dependency among pathogens as opposed to pathogens functioning independently (the two extreme levels of the coefficient of household isolation during the infectious period), and of the household size distributions of different populations (three different populations with varying household size distributions). The population system reached a stable state at the end of the simulation period, confirming that the epidemic had almost died out in 150 days (approximately 5 months during the winter season). The dependencies among pathogens were important determinants in containing the epidemic process. Additionally, the household size distributions did produce significant differences in the peak of the epidemic (peak prevalence) and the incidence proportion in the study period of interest. For common respiratory infections like influenza and common cold, household size can be an important factor determining their spread, as seen for influenza or influenza like illness based hospitalizations: the population structure difference has accounted for a third of the observed variation [28]. In our simulations we observed that the household size distribution influences the speed of the epidemic. Populations with larger household sizes reached the peak of the epidemic much faster than those with smaller household sizes. Looking at the incidence of infections across household sizes, we could see that larger households were associated with more infections due to intra household infection spread, consistent with the assumed random mixing within the household. Looking across the different pathogens, we observed that the infectious period is also important in shaping the severity of the epidemic. Pathogens 6 and 10, as considered in the simulation, have almost identical characteristics except for the duration of the infectious phase, but this resulted in a different severity of the epidemic. Also, in the multipathogen scenario, the epidemic characteristics are dominated by pathogen 10. Shifting the temporality to introduce a peak 45 days earlier for pathogen 10 allows for more infections in the multipathogen system than delaying the peak. Our simulation study does come with limitations. There are common challenges associated with agent based models, especially in statistical methods for hypothesis testing in combination with determining the appropriate number of simulation runs [29]. In addition to these standard challenges, our assumptions are largely simplistic in nature: we assume random mixing within the household, and the population is treated as an assortment of homogeneous agents. The increase in contacts with increasing household size may not necessarily take place. Secondly, we induce a form of isolation in the transmission process of infections, but we do not account for the specific severity of the infections, beyond the duration of being symptomatic and infectious. The severity of the pathogen can directly influence the duration of isolation. Even in our sensitivity analysis, we assume this parameter to be the same for all the pathogens. Additionally, we also assume the same transmissibility characteristics for all the pathogens. These are strong assumptions that have been made to realize the system in a simple way. However, this model can easily be extended to observe more complex realizations of a realistic system.
Through our agent based model formulation, we could demonstrate the importance of considering multi-pathogen interactions in controlling the spread of infections during an epidemic process. Household size and dependency among pathogens are important factors in determining the outcome of the epidemic. Future prospective studies in household cohorts looking at pathogen identification and coinfections can provide quantitative measures for specific characteristics of the multi-pathogen system. This kind of data can also be used to test the validity of the assumptions made in simulation models. Adams PF, Hendershot GE, Marano MA. Current estimates from the National Health Interview Survey, 1996. Vital Health Stat 10. 1999;1–203. Carrat F, Sahler C, Rogez S, Leruez-Ville M, Freymuth F, Le Gales C, et al. Influenza burden of illness: estimates from a national prospective survey of household contacts in France. Arch Intern Med. 2002;162:1842–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/12196082. Viboud C, Boëlle PY, Cauchemez S, Lavenu A, Valleron AJ, Flahault A, et al. Risk factors of influenza transmission in households. Br J Gen Pract. 2004;54:684–9. Cauchemez S, Carrat F, Viboud C, Valleron AJ, Boëlle PY. A Bayesian MCMC approach to study transmission of influenza: application to household longitudinal data. Stat Med. 2004;23:3469–87. Klick B, Leung GM, Cowling BJ. Optimal design of studies of influenza transmission in households. I: case-ascertained studies. Epidemiol Infect. 2012;140:106–114. Baker RD, Stevens RH. A random-effects model for analysis of infectious disease final-state data. Biometrics. 1995;51:956–968. Ball F, Mollison D, Scalia-Tomba G. Epidemics with two levels of mixing. Ann Appl Probab. 1997;7:46–89. Becker NG, Dietz K. The effect of household distribution on transmission and control of highly infectious diseases. Math Biosci. 1995;127:207–219. Becker NG, Starczak DN. Optimal vaccination strategies for a community of households. Math Biosci. 1997;139:117–32. Shaban N, Andersson M, Svensson A, Britton T. Household epidemics: modelling effects of early stage vaccination. Biom J. 2009;51:408–419. Chao DL, Halloran ME, Obenchain VJ, Longini IM. FluTE, a publicly available stochastic influenza epidemic simulation model. PLoS Comput Biol. 2010;6:e1000656. Grefenstette JJ, Brown ST, Rosenfeld R, DePasse J, Stone NT, Cooley PC, et al. FRED (a Framework for Reconstructing Epidemic Dynamics): an open-source software system for modeling infectious diseases and control strategies using census-based populations. BMC Public Health. 2013;13:940. Lukens S, DePasse J, Rosenfeld R, Ghedin E, Mochan E, Brown ST, et al. A large-scale immuno-epidemiological simulation of influenza A epidemics. BMC Public Health. 2014;14:1019. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4194421/ Auchincloss AH, Diez Roux AV. A new tool for epidemiology: The usefulness of dynamic-agent models in understanding place effects on health. Am J Epidemiol. 2008;168:1–8. Available from: https://academic.oup.com/aje/article/168/1/1/123870 Marshall BDL, Galea S. Formalizing the role of agent-based modeling in causal inference and epidemiology. Am J Epidemiol. 2015;181:92–9. Beauté J, Snacken R, Adlhoch C. Annual epidemiological report: Respiratory tract infections [Internet]. Eur Cent Dis Prev Control. 2014;1–26.
Available from: https://ecdc.europa.eu/sites/portal/files/media/en/publications/Publications/Respiratory-tract-infections-annual-epidemiologicalreport-report-2014.pdf Rohani P, Green CJ, Mantilla-Beniers NB, Grenfell BT. Ecological interference between fatal diseases. Nature. 2003;422:885–8. Mercer GN, Barry SI, Kelly H. Modelling the effect of seasonal influenza vaccination on the risk of pandemic influenza infection. BMC Public Health. 2011;11 Suppl 1:S11. Available from: https://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-11-S1-S11 DESTATIS Statistisches Bundesamt. Households and families [Internet]. 2015. Available from: https://www.destatis.de/EN/FactsFigures/SocietyState/Population/HouseholdsFamilies/HouseholdsFamilies.html Government of India, Ministry of Home Affairs. Census Report. 2011. Available from: http://www.censusindia.gov.in/2011census/hh-series/hh01.html. Centers for Disease Control and Prevention. CDC-Seasonal Influenza (Flu) [Internet]. 2014. Available from: http://www.cdc.gov/flu/ Carrat F, Vergu E, Ferguson NM, Lemaitre M, Cauchemez S, Leach S, et al. Time lines of infection and disease in human influenza: A review of volunteer challenge studies. Am J Epidemiol. 2008;775–85. Lessler J, Reich NG, Brookmeyer R, Perl TM, Nelson KE, Cummings DA. Incubation periods of acute respiratory viral infections: a systematic review. Lancet Infect Dis. 2009;291–300. Fay MP, Proschan MA. Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Stat Surv. 2010;4:1–39. Available from: http://projecteuclid.org/euclid.ssu/1266847666 Weisberg S. Applied linear regression [Internet]. 2005. Available from: http://books.google.co.uk/books?hl=en&lr=&id=xd0tNdFOOjcC Guisan A, Edwards Jr TC, Hastie T. Generalized linear and generalized additive models in studies of species distributions: setting the scene. Ecol Modell. 2002;157:89–100. Available from: http://www.sciencedirect.com/science/article/pii/S0304380002002041 Crawley MJ. Generalized Additive Models. In: The R Book. 2012;666–80. Available from: https://doi.org/10.1002/9781118448908.ch18 Kumar S, Piper K, Galloway DD, Hadler JL, Grefenstette JJ. Is population structure sufficient to generate area-level inequalities in influenza rates? An examination using agent-based models. BMC Public Health. 2015;15:947. Available from: http://www.biomedcentral.com/1471-2458/15/947 Lee JS, Filatova T, Ligmann-Zielinska A, Hassani-Mahmooei B, Stonedahl F, Lorscheid I, et al. The complexities of agent-based modeling output analysis. JASSS. 2015;18(4):4. doi:10.18564/jasss.2897. We thank all the group members of the Epidemiological and Statistical Methods research group at HZI, Braunschweig for their valuable feedback and discussions. Internal funding of the Helmholtz Centre for Infection Research. The R code for the simulations supporting the conclusions of this article is available from the corresponding author upon request.
Helmholtz Centre for Infection Research, Research Group Biostatistics, Braunschweig, Germany: Abhishek Bakuli & Frank Klawonn. PhD Programme "Epidemiology", Braunschweig-Hannover, Germany: Abhishek Bakuli & André Karch. Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbuettel, Germany: Frank Klawonn. Helmholtz Centre for Infection Research, Department of Epidemiology, Braunschweig, Germany: André Karch & Rafael Mikolajczyk. Hannover Medical School, Hannover, Germany: Rafael Mikolajczyk. Institute for Medical Epidemiology, Biometry, and Informatics (IMEBI), Medical Faculty of the Martin Luther University Halle-Wittenberg, Halle (Saale), Germany: Rafael Mikolajczyk. RM and FK conceived the idea of the simulation study. All authors contributed to the theoretical development of the model. AB programmed the simulations and statistical analyses. AB and FK made contributions to the statistical analyses and simulations. AB drafted the manuscript. All authors contributed to the interpretation of the data, writing, and revising of the manuscript and approved the final manuscript. Correspondence to Rafael Mikolajczyk. Additional file 1: Time taken to reach the peak prevalence varies according to the household size distribution in the cohort. Populations with larger households on average experienced the epidemics at an accelerated rate compared to populations with smaller households on average. (PNG 24 kb) Bakuli, A., Klawonn, F., Karch, A. et al. Effects of pathogen dependency in a multi-pathogen infectious disease system including population level heterogeneity – a simulation study. Theor Biol Med Model 14, 26 (2017). doi:10.1186/s12976-017-0072-7. Received: 22 September 2017. Keywords: Pathogen dependency; Multi-pathogen.
A flexible multivariable model for Phytoplankton growth
Mohammad A. Tabatabai, Wayne M. Eby, Sejong Bae and Karan P. Singh
Mathematical Biosciences & Engineering, 2013, 10(3): 913-923. doi: 10.3934/mbe.2013.10.913
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, United States; School of Medicine, University of Alabama at Birmingham, Birmingham AL 35294, United States
Received May 2012; Revised January 2013; Published April 2013
We introduce a new multivariable model to be used to study the growth dynamics of phytoplankton as a function of both time and the concentration of nutrients. This model is applied to a set of experimental data which describes the rate of growth as a function of these two variables. The form of the model allows easy extension to additional variables. Thus, the model can be used to analyze experimental data regarding the effects of various factors on phytoplankton growth rate. Such a model will also be useful in the analysis of the role of the concentration of various nutrients or trace elements, temperature, and light intensity, or other important explanatory variables, or combinations of such variables, in analyzing phytoplankton growth dynamics.
Keywords: multivariable model, nutrient concentration, growth models, biovolume, phytoplankton.
Mathematics Subject Classification: 91B62, 62P1.
Citation: Mohammad A. Tabatabai, Wayne M. Eby, Sejong Bae, Karan P. Singh. A flexible multivariable model for Phytoplankton growth. Mathematical Biosciences & Engineering, 2013, 10 (3): 913-923. doi: 10.3934/mbe.2013.10.913
CommonCrawl
Annotations by user malcolmjmr (329 matching annotations)

Design of Search User Interfaces (Ch 1) | Search User Interfaces | Marti Hearst | Cambridge University Press 2009 (searchuserinterfaces.com)
malcolmjmr, 13 Aug 2020
Quoted: "Research shows that people are highly likely to revisit information they have viewed in the past and to re-issue queries that they have written in the past (Jones et al., 2002, Milic-Frayling et al., 2004). In one large study, 40% of people's search results clicks were on pages that they had clicked on before over the course of a year, with 71% of these using the identical query string as before (Teevan et al., 2006a). In a survey associated with this study, 17% of interviewees reported "not being able to return to a page I once visited" as one of the "biggest problems in using the web." Therefore, allowing search over recently viewed information can improve a user's productivity (Dumais et al., 2003). Web browsers, as opposed to search engines, can provide much of this functionality. For example, the Chrome Web browser supports information revisiting by showing a grid of thumbnail images representing a user's most frequently visited web pages, and the drop-down menu from many browser Web address bars shows recently visited pages. Search engines themselves can provide query history, as well as history of previously selected pages if the user agrees to having that information recorded. The PubMed bioscience journal service shows recently issued queries and visited documents in a simple history display (see Figure 1.6). Similarly, many shopping Web sites show recently viewed items in graphical form. Thumbnail images have also been experimented with in search results listings, both for reminding searchers of previously visited pages and for suggesting information about the hit, such as its genre."
Tags: problem search history
searchuserinterfaces.com/book/sui_ch1_design.html

Cosmology for a Different Computer Universe: Nelson: JoDI (global.asc.upenn.edu)
malcolmjmr, 31 Jul 2020
Quoted: "1.4 Psychological resistances"
Note: This is the mote
global.asc.upenn.edu/fileLibrary/PDFs/26_nelson_reading2.pdf

The Decade of Deleveraging Didn't Quite Turn Out That Way (bloomberg.com)
Quoted: "Big companies have enjoyed big profits, fattened by widening margins as wages stagnate. That's allowed them to sustain a huge debt load. But drilling down shows that credit quality, as viewed by ratings companies, has tumbled. According to S&P Global Ratings, the companies rated BBB+, BBB, or BBB- (the three lowest investment grades before they would hit "junk" status and face much higher interest payments) now outnumber all of the companies with some level of A-rated debt. It looks as though companies are "gaming" the ratings companies, borrowing as much as they can get away with."
Note: Precisely what happened with consumers' credit scores as a function of their borrowing habits. You see a large number of consumers at the 600 threshold. These arbitrary cutoffs create problematic tipping points.
bloomberg.com/graphics/2019-decade-of-debt/

Using Spaced Repetition and Active Recall with Books to Hack Your Brain (blog.readwise.io)
Quoted: "Spaced repetition is a technique for spacing out reviews of previously learned material according to an algorithm designed to optimize your limited time for review. Each time you review a piece of information, you supply feedback to that algorithm, which estimates the optimal time to show you that information again. Michael Nielsen provides a good summary of how this works."
blog.readwise.io/hack-your-brain-with-spaced-repetition-and-active-recall/
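The annotation above names the feedback loop but not an update rule. As a rough illustration (an SM-2-style rule of the kind Anki-like tools use; this is an assumption, not necessarily what Readwise or any particular app runs), the whole scheduler fits in a few lines:

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # current gap before the next review
    ease: float = 2.5           # growth factor adjusted by feedback

def review(card: Card, grade: int) -> float:
    """Apply one review's feedback (grade 0..5) and return days until the next review.

    Toy SM-2-style rule: a failed recall resets the interval, a successful
    one grows it geometrically, and the ease factor drifts with the grade.
    """
    if grade < 3:
        card.interval_days = 1.0         # forgot: show it again soon
    else:
        card.interval_days *= card.ease  # remembered: space it out further
    card.ease = max(1.3, card.ease + 0.1 - (5 - grade) * 0.08)
    return card.interval_days
```

Grading one card perfectly on successive reviews yields gaps of roughly 2.5, 6.5, and 17.6 days, the widening spacing the post describes.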
How to Actually Use What You Read with Readwise: Part 2 (blog.readwise.io)
Quoted: "You'll also want to review your original reaction to those passages. You can capture these reactions, of course, by taking notes."
Note: Note taking requires a little more effort than I would expend during the initial capture process. There's a wide variance in the thoroughness and time that one takes to write a note. As a consequence, note taking, when done most thoroughly, may disrupt the flow of reading. What's more, you do not know ahead of time the relative significance of each passage, and much of the note-taking effort can go to waste if too much focus is paid to those passages that are relatively less substantial.
Quoted: "Mortimer Adler, the author of the classic manual on reading How To Read a Book"
Quoted: "An annotated copy"
Quoted: "With reading, this means highlighting especially salient passages."
Note: Yes, we need to highlight salient passages, but more importantly we need to begin our capture of important information, as Mortimer Adler suggests, by "coming to terms" with the author.
blog.readwise.io/reading-workflow-part-2/

How to Actually Use What You Read with Readwise (blog.readwise.io)
Quoted: "All it takes is a swipe of the finger."
Note: Would a press not be better?

Progressive Summarization: A Practical Technique for Designing Discoverable Notes (fortelabs.co)
Quoted: "There's a natural tension between the two, compression and context."
Note: This is a false dichotomy and tradeoff. You can compress information based on its context.
fortelabs.co/blog/progressive-summarization-a-practical-technique-for-designing-discoverable-notes

Augmenting Long-term Memory (augmentingcognition.com)
malcolmjmr, 26 Jun 2020
Quoted: "My somewhat pious belief was that if people focused more on remembering the basics, and worried less about the "difficult" high-level issues, they'd find the high-level issues took care of themselves. But while I held this as a strong conviction about other people, I never realized it also applied to me. And I had no idea at all how strongly it applied to me. Using Anki to read papers in new fields disabused me of this illusion. I found it almost unsettling how much easier Anki made learning such subjects. I now believe memory of the basics is often the single largest barrier to understanding. If you have a system such as Anki for overcoming that barrier, then you will find it much, much easier to read into new fields."
augmentingcognition.com/ltm.html

Smart Investments: Central Park North condos with incredible front-row views (cityrealty.com)
malcolmjmr, 21 May 2020
Quoted: "The northern end of the park has typically seen less affluent neighbors and significantly less attention, but Central Park Conservancy is about to change that. Earlier this fall, the non-profit group announced a $150 million renovation that would improve the parkland, add a new boardwalk along the man-made lake known as Harlem Meer, and build a new recreation facility to replace the Lasker pool and skating rink, both of which date back to the 1960's. (Side note: The Trump Organization has the concession to run the skating rink through 2021, by which time there may be someone else in the White House.) Construction is set to begin in 2021, and completion is estimated for 2024."
Tags: Central Park North
cityrealty.com/nyc/market-insight/features/future-nyc/smart-investments-central-park-north-condos-incredible-front-row-views/36261
Dithering and Open Versus Free (stratechery.com)
Quoted: ""paying for the regular delivery of well-defined value" — are so important. I defined every part of that phrase: Paying: A subscription is an ongoing commitment to the production of content, not a one-off payment for one piece of content that catches the eye. Regular Delivery: A subscriber does not need to depend on the random discovery of content; said content can be delivered to the subscriber directly, whether that be email, a bookmark, or an app. Well-defined Value: A subscriber needs to know what they are paying for, and it needs to be worth it."
Quoted: "It is very important to clearly define what a subscription means. First, it's not a donation: it is asking a customer to pay money for a product. What, then, is the product? It is not, in fact, any one article (a point that is missed by the misguided focus on micro-transactions). Rather, a subscriber is paying for the regular delivery of well-defined value. The importance of this distinction stems directly from the economics involved: the marginal cost of any one Stratechery article is $0. After all, it is simply text on a screen, a few bits flipped in a costless arrangement. It makes about as much sense to sell those bit-flipping configurations as it does to sell, say, an MP3, costlessly copied. So you need to sell something different. In the case of MP3s, what the music industry finally learned — after years of kicking and screaming about how terribly unfair it was that people "stole" their music, which didn't actually make sense because digital goods are non-rivalrous — is that they should sell convenience. If streaming music is free on a marginal cost basis, why not deliver all of the music to all of the customers for a monthly fee? This is the same idea behind nearly every large consumer-facing web service: Netflix, YouTube, Facebook, Google, etc. are all predicated on the idea that content is free to deliver, and consumers should have access to as much as possible. Of course how they monetize that convenience differs: Netflix has subscriptions, while Google, YouTube, and Facebook deliver ads (the latter two also leverage the fact that content is free to create). None of them, though, sell discrete digital goods. It just doesn't make sense."
stratechery.com/2020/dithering-and-the-open-web/

Remember Significantly More of What You Read With Readwise (blog.readwise.io)
Quoted: "Cloze deletion is, of course, just a fancy way of saying fill in the blank. This might sound trivial, but the simple act forces you to consider the surrounding context and search your mind for an answer. This, in turn, is scientifically proven to form stronger memories enabling you to remember profoundly more of what you've read."
Tags: memory search
blog.readwise.io/remember-more-of-what-you-read-with-readwise/
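The mechanics are easy to show in code. A minimal sketch of turning a highlight into a fill-in-the-blank prompt (a hypothetical helper, not Readwise's implementation):

```python
def make_cloze(passage: str, hidden: str) -> str:
    """Blank out one phrase of a passage to force active recall."""
    if hidden not in passage:
        raise ValueError("phrase not found in passage")
    return passage.replace(hidden, "_____", 1)

highlight = "Cloze deletion is, of course, just a fancy way of saying fill in the blank."
print(make_cloze(highlight, "fill in the blank"))
# -> Cloze deletion is, of course, just a fancy way of saying _____.
```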
Stratechery by Ben Thompson (stratechery.com)
Quoted: "The music industry, meanwhile, has, at least relative to newspapers, come out of the shift to the Internet in relatively good shape; while piracy drove the music labels into the arms of Apple, which unbundled the album into the song, streaming has rewarded the integration of back catalogs and new music with bundle economics: more and more users are willing to pay $10/month for access to everything, significantly increasing the average revenue per customer. The result is an industry that looks remarkably similar to the pre-Internet era: Notice how little power Spotify and Apple Music have; neither has a sufficient user base to attract suppliers (artists) based on pure economics, in part because they don't have access to back catalogs. Unlike newspapers, music labels built an integration that transcends distribution."
Tags: music-industry
stratechery.com/2018/the-bill-gates-line/

Web annotation tool Hypothesis hits a milestone (nature.com)
malcolmjmr, 30 Apr 2020
Quoted: "The team behind Hypothesis, an open-source software tool that allows people to annotate web pages, announced in March that its users had collectively posted more than 5 million comments across the scholarly web since the tool was launched in 2011. That's up from about 220,000 total comments in 2015 (see 'Comment counts'). The company has grown from 26,000 registered users to 215,000 over the same period."
Tags: hypothesis users
nature.com/articles/d41586-019-01427-9

Apple shutters Advanced Technology Group (web.archive.org)
malcolmjmr, 07 Jan 2020
Quoted: ""Apple research transferred more stuff into product than any other lab I can think of, including Hewlett-Packard and IBM," the source said, but Jobs wasn't aware enough of the role ARL played in developing current Apple technology before deciding to cut the group's funding, he noted."
Tags: apple jobs problem
web.archive.org/web/20160310073538/http://www.cnet.com/news/apple-shutters-advanced-technology-group/

History of artificial intelligence - Wikipedia (en.wikipedia.org)
malcolmjmr, 13 Dec 2019
Quoted: "Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it's impossible, but, as power increases, eventually it could become easy.[79] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS).[80] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS."
Tags: ai history constraint costs computers
en.wikipedia.org/wiki/History_of_artificial_intelligence

Loper OS » You have made your bedrock, now lie in it. (loper-os.org)
Quoted: "Imagine that every car maker save for Toyota insisted on using the infamous East German Trabant as a standard of quality - yet blindly imitated random elements of Toyota's visual design. How long would it take for the whiners to appear on the scene and start making noises about monopolistic tyranny? How long would it take for Toyota to start living up to these accusations in earnest? And why should it not do so? What is to be gained from corporate sainthood? From a refusal to fleece eagerly willing suckers for all they're worth? Idle threats of defection by outraged iPhone developers [4] are laughable nonsense simply because - in the two categories listed - Apple has no competition. Every commercial product which competes directly with an Apple product (particularly the iPhone) gives me (and many others) the distinct impression that "where it is original, it is not good, and where it is good, it is not original.""
Tags: market for lemons apple
loper-os.org/
Augmenting Human Intellect: A Conceptual Framework - 1962 (AUGMENT,3906,) - Doug Engelbart Institute (dougengelbart.org)
Quoted: "He then showed you how he could make a few strokes on the keyset to designate the type of link he wanted established, and pick the two symbol structures that were to be linked by means of the light pen. He said that most links possessed a direction, i.e., they were like an arrow pointing from one substructure to another, so that in setting up a link he must specify the two substructures in a given order."
Tags: links structure

Quoted: ""Most of the structuring forms I'll show you stem from the simple capability of being able to establish arbitrary linkages between different substructures, and of directing the computer subsequently to display a set of linked substructures with any relative positioning we might designate among the different substructures. You can designate as many different kinds of links as you wish, so that you can specify different display or manipulative treatment for the different types.""
Tags: links networks structure symbol

Quoted: ""You usually think of an argument as a serial sequence of steps of reason, beginning with known facts, assumptions, etc., and progressing toward a conclusion. Well, we do have to think through these steps serially, and we usually do list the steps serially when we write them out because that is pretty much the way our papers and books have to present them—they are pretty limiting in the symbol structuring they enable us to use. Have you ever seen a 'scrambled-text' programmed instruction book? That is an interesting example of a deviation from straight serial presentation of steps. "Conceptually speaking, however, an argument is not a serial affair. It is sequential, I grant you, because some statements have to follow others, but this doesn't imply that its nature is necessarily serial. We usually string Statement B after Statement A, with Statements C, D, E, F, and so on following in that order—this is a serial structuring of our symbols. Perhaps each statement logically followed from all those which preceded it on the serial list, and if so, then the conceptual structuring would also be serial in nature, and it would be nicely matched for us by the symbol structuring. "But a more typical case might find A to be an independent statement, B dependent upon A, C and D independent, E depending upon D and B, E dependent upon C, and F dependent upon A, D, and E. See, sequential but not serial? A conceptual network but not a conceptual chain. The old paper and pencil methods of manipulating symbols just weren't very adaptable to making and using symbol structures to match the ways we make and use conceptual structures. With the new symbol-manipulating methods here, we have terrific flexibility for matching the two, and boy, it really pays off in the way you can tie into your work. This makes you recall dimly the generalizations you had heard previously about process structuring limiting symbol structuring, symbol structuring limiting concept structuring, and concept structuring limiting mental structuring."
Tags: structure constraint non linear networks symbol
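Engelbart's example is concrete enough to write down. A minimal sketch (a hypothetical representation, not Engelbart's own notation) of that "sequential but not serial" argument as a directed graph, with the links inverted so they can also be followed backwards:

```python
# Each statement maps to the statements it depends on, per the quote:
# "A independent, B dependent upon A, C and D independent, E depending
#  upon D and B, E dependent upon C, and F dependent upon A, D, and E."
depends_on = {
    "A": [],
    "B": ["A"],
    "C": [],
    "D": [],
    "E": ["B", "C", "D"],
    "F": ["A", "D", "E"],
}

# Invert the links so each statement also knows what it supports,
# the two-way traversal the card-trail entry just below asks for.
supports = {stmt: [] for stmt in depends_on}
for stmt, deps in depends_on.items():
    for dep in deps:
        supports[dep].append(stmt)

print(supports["A"])  # ['B', 'F']: a conceptual network, not a chain
```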
Quoted: "Suppose that one wants to link Card B to Card A, to make a trail from A to B."
Note: We should also be able to go from B to A.

Quoted: "One need arose quite commonly as trains of thought would develop on a growing series of note cards. There was no convenient way to link these cards together so that the train of thought could later be recalled by extracting the ordered series of notecards. An associative-trail scheme similar to that outlined by Bush for his Memex could conceivably be implemented with these cards to meet this need and add a valuable new symbol-structuring process to the system."
Tags: memex associative index

Quoted: "Note, too, the implications extending from Bush's mention of one user duplicating a trail (a portion of his structure) and giving it to a friend who can put it into his Memex and integrate it into his own trail (structure)."
Tags: shared search view

Quoted: "An example of this general sort of thing was given by Bush where he points out that the file index can be called to view at the push of a button, which implicitly provides greater capability to work within more sophisticated and complex indexing systems"
Tags: index memex

Quoted: "The associative trails whose establishment and use within the files he describes at some length provide a beautiful example of a new capability in symbol structuring that derives from new artifact-process capability, and that provides new ways to develop and portray concept structures. Any file is a symbol structure whose purpose is to represent a variety of concepts and concept structures in a way that makes them maximally available and useful to the needs of the human's mental-structure development—within the limits imposed by the capability of the artifacts and human for jointly executing processes of symbol-structure manipulation."
Tags: memex opportunity

Quoted: "As we are currently using it, the term includes the organization, study, modification, and execution of processes and process structures. Whereas concept structuring and symbol structuring together represent the language component of our augmentation means, process structuring represents the methodology component (plus a little more, actually). There has been enough previous discussion of process structures that we need not describe the notion here, beyond perhaps an example or two. The individual processes (or actions) of my hands and fingers have to be cooperatively organized if the typewriter is to do my bidding. My successive actions throughout my working day are meant to cooperate toward a certain over-all professional goal."
Tags: process structure description

Quoted: "With a computer manipulating our symbols and generating their portrayals to us on a display, we no longer need think of our looking at the symbol structure which is stored—as we think of looking at the symbol structures stored in notebooks, memos, and books. What the computer actually stores need be none of our concern, assuming that it can portray symbol structures to us that are consistent with the form in which we think our information is structured."
Note: Separation of model and view.
Tags: opportunity human computers interaction view generation

Quoted: "But another kind of view might be obtained by extracting and ordering all statements in the local text that bear upon consideration A of the argument—or by replacing all occurrences of specified esoteric words by one's own definitions."
Tags: augmentation features
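Both views described in the last quote, extracting the statements that bear on one consideration and substituting one's own definitions for esoteric words, are simple transformations over a stored symbol structure. A rough sketch; the function names and sample data are invented for illustration:

```python
def extract_view(statements, consideration):
    """Keep, in order, only the statements tagged as bearing on one consideration."""
    return [text for text, bears_on in statements if consideration in bears_on]

def substitute_view(text, definitions):
    """Replace specified esoteric words with the reader's own definitions."""
    for word, meaning in definitions.items():
        text = text.replace(word, meaning)
    return text

statements = [
    ("First statement, bearing on A.", {"A"}),
    ("Second statement, bearing on B.", {"B"}),
    ("Third statement, bearing on both.", {"A", "B"}),
]
print(extract_view(statements, "A"))  # first and third statements, in order
print(substitute_view("The memex stores trails.", {"memex": "associative file machine"}))
```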
Quoted: "A natural language provides its user with a ready-made structure of concepts that establishes a basic mental structure, and that allows relatively flexible, general-purpose concept structuring. Our concept of language as one of the basic means for augmenting the human intellect embraces all of the concept structuring which the human may make use of."
Tags: language intelligence augmentation

Quoted: "Before we pursue further direct discussion of the H-LAM/T system, let us examine some background material. Consider the following historical progression in the development of our intellectual capabilities:

(1) Concept Manipulation—Humans rose above the lower forms of life by evolving the biological capability for developing abstractions and concepts. They could manipulate these concepts within their minds to a certain extent, and think about situations in the abstract. Their mental capabilities allowed them to develop general concepts from specific instances, predict specific instances from general concepts, associate concepts, remember them, etc. We speak here of concepts in their raw, unverbalized form. For example, a person letting a door swing shut behind him suddenly visualizes the person who follows him carrying a cup of hot coffee and some sticky pastries. Of all the aspects of the pending event, the spilling of the coffee and the squashing of the pastry somehow are abstracted immediately, and associated with a concept of personal responsibility and a dislike for these consequences. But a solution comes to mind immediately as an image of a quick stop and an arm stab back toward the door, with motion and timing that could prevent the collision, and the solution is accepted and enacted. With only non-symbolic concept manipulation, we could probably build primitive shelter, evolve strategies of war and hunt, play games, and make practical jokes. But further powers of intellectual effectiveness are implicit in this stage of biological evolution (the same stage we are in today).

(2) Symbol Manipulation—Humans made another great step forward when they learned to represent particular concepts in their minds with specific symbols. Here we temporarily disregard communicative speech and writing, and consider only the direct value to the individual of being able to do his heavy thinking by mentally manipulating symbols instead of the more unwieldy concepts which they represent. Consider, for instance, the mental difficulty involved in herding twenty-seven sheep if, instead of remembering one cardinal number and occasionally counting, we had to remember what each sheep looked like, so that if the flock seemed too small we could visualize each one and check whether or not it was there.

(3) Manual, External, Symbol Manipulation—Another significant step toward harnessing the biologically evolved mental capabilities in pursuit of comprehension and problem solutions came with the development of the means for externalizing some of the symbol-manipulation activity, particularly in graphical representation. This supplemented the individual's memory and ability to visualize. (We are not concerned here with the value derived from human cooperation made possible by speech and writing, both forms of external symbol manipulation. We speak of the manual means of making graphical representations of symbols—a stick and sand, pencil and paper and eraser, straight edge or compass, and so on.) It is principally this kind of means for external symbol manipulation that has been associated with the evolution of the individual's present way of doing his concept manipulation (thinking)."
Tags: symbol manipulation description
Quoted: "It has been jokingly suggested several times during the course of this study that what we are seeking is an "intelligence amplifier." (The term is attributed originally to W. Ross Ashby [2,3].) At first this term was rejected on the grounds that in our view one's only hope was to make a better match between existing human intelligence and the problems to be tackled, rather than in making man more intelligent. But deriving the concepts brought out in the preceding section has shown us that indeed this term does seem applicable to our objective. Accepting the term "intelligence amplification" does not imply any attempt to increase native human intelligence. The term "intelligence amplification" seems applicable to our goal of augmenting the human intellect in that the entity to be produced will exhibit more of what can be called intelligence than an unaided human could; we will have amplified the intelligence of the human by organizing his intellectual capabilities into higher levels of synergistic structuring. What possesses the amplified intelligence is the resulting H-LAM/T system, in which the LAM/T augmentation means represent the amplifier of the human's intelligence. In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of "artificial intelligence" has been going on for centuries."
Tags: definition intelligence amplification associative memex
dougengelbart.org/content/view/138/

nytimes.com
Quoted: "His answer is that our creative minds are being strengthened rather than atrophied by the ability to interact easily with the Web and Wikipedia. "Not only has transactive memory not hurt us," he writes, "it's allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.""
Note: This is where I disagree with Thompson. The potential for IA is there, but we have retrogressed with the advent of the web.
Tags: web memory

Quoted: "Socrates and his prediction that writing would destroy the Greek tradition of dialectic. Socrates' primary concern was that people would write things down instead of remembering them. "This discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories," Plato quotes him as saying. "They will trust to the external written characters and not remember of themselves.""
Note: The dialectic process is important, particularly in the context of human-to-computer communication and synthesis. Here Socrates articulates the importance of memory to this process and how writing undermines it. If there is an asymmetry between the mind of the writer and the reader, the written work provides a method of diffusing information from one mind to another. This balance of the mind is true of human-to-computer interaction as well. We need to expand our memory capacity if we are to expand the reasoning capacity of computers. But instead we are using computers to substitute for our memories. We neglect memory, so we can't reason; humans and computers alike.
Tags: memory socrates
Quoted: "This is not a new idea. It is based on the vision expounded by Vannevar Bush in his 1945 essay "As We May Think," which conjured up a "memex" machine that would remember and connect information for us mere mortals. The concept was refined in the early 1960s by the Internet pioneer J. C. R. Licklider, who wrote a paper titled "Man-Computer Symbiosis," and the computer designer Douglas Engelbart, who wrote "Augmenting Human Intellect." They often found themselves in opposition to their colleagues, like Marvin Minsky and John McCarthy, who stressed the goal of pursuing artificial intelligence machines that left humans out of the loop."
Note: Seymour Papert had an approach that provides a nice synthesis between these two camps, by leveraging early childhood development to provide insights on the creation of AI.
Tags: AI intelligence amplification

Quoted: "Thompson's point is that "artificial intelligence" — defined as machines that can think on their own just like or better than humans — is not yet (and may never be) as powerful as "intelligence amplification," the symbiotic smarts that occur when human cognition is augmented by a close interaction with computers."
Note: Intelligence amplification over artificial intelligence. In reality you can't get to AI until you've mastered IA.
Tags: AI human augmentation intelligence amplification

Quoted: "Like a centaur, the hybrid would have the strength of each of its components: the processing power of a large logic circuit and the intuition of a human brain's wetware. The result: human-machine teams, even when they didn't include the best grandmasters or most powerful computers, consistently beat teams composed solely of human grandmasters or superfast machines."
Note: This is what is most needed: the spark of intuition coupled with the indefatigable pursuit of its implications. We handle the former and computers the latter.
Tags: centaurs machine human synthesis
nytimes.com/2013/11/03/books/review/smarter-than-you-think-by-clive-thompson.html

History of Apple Inc. - Wikipedia (en.wikipedia.org)
Quoted: "During 1995, a decision was made to (officially) start licensing the Mac OS and Macintosh ROMs to 3rd party manufacturers who started producing Macintosh "clones". This was done in order to achieve deeper market penetration and extra revenue for the company. This decision led to Apple having over a 10% market share until 1997 when Steve Jobs was re-hired as interim CEO to replace Gil Amelio. Jobs promptly found a loophole in the licensing contracts Apple had with the clone manufacturers and terminated the Macintosh OS licensing program, ending the Macintosh clone era. The result of this action was that Macintosh computer market share quickly fell from 10% to around 3%."
Tags: steve jobs apple problem markets constraint
en.wikipedia.org/wiki/History_of_Apple_Inc.

If Lisp is So Great (paulgraham.com)
malcolmjmr, 15 Nov 2019
Quoted: "In languages, as in so many things, there's not much correlation between popularity and quality. Why does John Grisham (King of Torts sales rank, 44) outsell Jane Austen (Pride and Prejudice sales rank, 6191)? Would even Grisham claim that it's because he's a better writer?"
paulgraham.com/iflisp.html

The Python Paradox (paulgraham.com)
Quoted: "Which makes them exactly the kind of programmers companies should want to hire. Hence what, for lack of a better name, I'll call the Python paradox: if a company chooses to write its software in a comparatively esoteric language, they'll be able to hire better programmers, because they'll attract only those who cared enough to learn it.
And for programmers the paradox is even more pronounced: the language to learn, if you want to get a good job, is a language that people don't learn merely to get a job."
Tags: selection developer heuristics
paulgraham.com/pypar.html

Let the Other 95% of Great Programmers In (paulgraham.com)
Quoted: "It would be great if more Americans were trained as programmers, but no amount of training can flip a ratio as overwhelming as 95 to 5. Especially since programmers are being trained in other countries too. Barring some cataclysm, it will always be true that most great programmers are born outside the US. It will always be true that most people who are great at anything are born outside the US."
Note: No amount of training in the current development paradigm can flip this ratio, but if we were to make dev tools simpler and ubiquitous then it just might.
paulgraham.com/95.html

Noble savage - Wikipedia (en.wikipedia.org)
Quoted: "In his Discourse on the Origins of Inequality, Rousseau, anticipating the language of Darwin, states that as the animal-like human species increased there arose a "formidable struggle for existence" between it and other species for food.[34] It was then, under the pressure of necessity, that le caractère spécifique de l'espèce humaine—the specific quality that distinguished man from the beasts—emerged—intelligence, a power, meager at first but yet capable of an "almost unlimited development". Rousseau calls this power the faculté de se perfectionner—perfectibility.[35] Man invented tools, discovered fire, and in short, began to emerge from the state of nature. Yet at this stage, men also began to compare himself to others: "It is easy to see. ... that all our labors are directed upon two objects only, namely, for oneself, the commodities of life, and consideration on the part of others.""
Tags: rousseau evolution
en.wikipedia.org/wiki/Noble_savage

The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1974 (nobelprize.org)
Quoted: "This brings me to the crucial issue. Unlike the position that exists in the physical sciences, in economics and other disciplines that deal with essentially complex phenomena, the aspects of the events to be accounted for about which we can get quantitative data are necessarily limited and may not include the important ones. While in the physical sciences it is generally assumed, probably with good reason, that any important factor which determines the observed events will itself be directly observable and measurable, in the study of such complex phenomena as the market, which depend on the actions of many individuals, all the circumstances which will determine the outcome of a process, for reasons which I shall explain later, will hardly ever be fully known or measurable. And while in the physical sciences the investigator will be able to measure what, on the basis of a prima facie theory, he thinks important, in the social sciences often that is treated as important which happens to be accessible to measurement. This is sometimes carried to the point where it is demanded that our theories must be formulated in such terms that they refer only to measurable magnitudes."
Tags: measurement problem hayek

Quoted: "The particular occasion of this lecture, combined with the chief practical problem which economists have to face today, have made the choice of its topic almost inevitable.
On the one hand the still recent establishment of the Nobel Memorial Prize in Economic Science marks a significant step in the process by which, in the opinion of the general public, economics has been conceded some of the dignity and prestige of the physical sciences. On the other hand, the economists are at this moment called upon to say how to extricate the free world from the serious threat of accelerating inflation which, it must be admitted, has been brought about by policies which the majority of economists recommended and even urged governments to pursue. We have indeed at the moment little cause for pride: as a profession we have made a mess of things."
Tags: hayek quote economics problem

Quoted: "It seems to me that this failure of the economists to guide policy more successfully is closely connected with their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences – an attempt which in our field may lead to outright error. It is an approach which has come to be described as the "scientistic" attitude – an attitude which, as I defined it some thirty years ago, "is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.""
Tags: hayek quote science philosophy economics
nobelprize.org/prizes/economic-sciences/1974/hayek/lecture/

AudiusWhitepaper.pdf (whitepaper.audius.co)
Quoted: "Early in the life of the Audius network, the Audius DAO will control governance. During this bootstrapping phase, the Audius DAO will also have the ability to intervene in catastrophic circumstances to fix critical issues in the Audius blockchain code, such as issues enabling fraud or resulting in unintended loss of Audius or Loud tokens."
Tags: centralization governance audius

Quoted: "There will be two groups created at the time of main network launch: Audius DAO (Decentralized Autonomous Organization) and Artist Advisory DAO."
Tags: types users audius

Quoted: "To make governance more accessible to users, voting can be delegated by anyone to other users or groups of users, such that if a user places no vote on a specific proposal, their designated delegate's vote will be used in place of their own."
Tags: proxy voting audius

Quoted: "These user classes are not mutually exclusive. Therefore, if a user has earnings and/or holdings that fall into multiple classes, their vote can be counted in multiple classes."
Tags: complexity voting types users audius

Quoted: "To submit a proposal, a user must bond a set number of Audius tokens (denoted BGP) in the governance system, which remain bonded for the duration of their proposal. Before a proposal's effective date, the original submitter can also choose to withdraw the proposal if they so choose, returning their bonded tokens. This bond is required as an anti-spam measure and to ensure that proposers have a sufficient stake in the Audius protocol to make changes to it. At the proposal's resolution (successful, failed, or withdrawn), the bond is returned to the proposal submitter."
Tags: solution sybil attacks bonds audius transaction costs

Quoted: "Proposals also include a block count at which point they go into effect; this effectiveness date must be at least 1 week in the future at time of proposal submission to give users ample time to review and vote on the proposal."
Tags: voting audius time

Quoted: "Participation in governance creates value in Audius, and should be rewarded"
Note: Voting should not be rewarded. Apathy should be penalized.
Tags: voting audius
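Read together, the bonding and effective-date rules quoted above describe a small state machine: tokens are bonded at submission, the submitter may withdraw before the effective date, and the bond is returned at resolution either way. A toy model of that lifecycle (my reading of the quoted text, not the actual Audius contracts; the block timing is an assumption):

```python
BLOCKS_PER_WEEK = 40_320  # assumption: ~15-second blocks, 7 * 24 * 3600 / 15

class Proposal:
    def __init__(self, submitter, bond, current_block, effective_block):
        # The effectiveness date must be at least one week out at submission.
        if effective_block < current_block + BLOCKS_PER_WEEK:
            raise ValueError("effective date must be at least 1 week in the future")
        self.submitter = submitter
        self.bond = bond  # anti-spam stake, held for the proposal's duration
        self.effective_block = effective_block
        self.state = "open"

    def withdraw(self, caller, current_block):
        # Only the original submitter, and only before the effective date.
        if caller != self.submitter or current_block >= self.effective_block:
            raise PermissionError("cannot withdraw")
        self.state = "withdrawn"
        return self.bond  # bond returned to the submitter

    def resolve(self, passed):
        # Successful or failed, the bond is returned at resolution.
        self.state = "passed" if passed else "failed"
        return self.bond
```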
Quoted: "[A] copy of these guidelines will be included in a contract on the network, and updates to these guidelines flow through the Audius governance protocol. A full fee and bond schedule for arbitration will be published closer to the time of the Audius main network launch, and these fees and bonds can be modified in the Audius governance protocol."
Note: Should be done already...
Tags: problem audius

Quoted: "On a recurring basis, subscription listens would be tallied and payouts would be made to artists by a transparent, auditable subscription system running on the Audius blockchain."
Note: What are the mechanisms of the system?
whitepaper.audius.co/AudiusWhitepaper.pdf

History of artificial intelligence - Wikipedia (en.wikipedia.org)
Quoted: "In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[167] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[168] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[169] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[170] There were many other explanations and for each there was a corresponding research program underway."
Tags: ai approach problem

Quoted: "The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight."
Tags: substitution personal computers lisp machine funding

Quoted: "Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts"
Tags: problem ai approach

Quoted: "The neats: logic and symbolic reasoning. Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[100] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[101] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[102] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A.
Simon that would lead to Soar and their unified theories of cognition.[103] Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[104] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do.[105] The scruffies: frames and scripts. Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[106] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[107] In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[108] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames."
Tags: ai history approaches

Macintosh - Wikipedia (en.wikipedia.org)
Quoted: "In 1988 Apple sued Microsoft and Hewlett-Packard on the grounds that they infringed Apple's copyrighted GUI, citing (among other things) the use of rectangular, overlapping, and resizable windows. After four years, the case was decided against Apple, as were later appeals. Apple's actions were criticized by some in the software community, including the Free Software Foundation (FSF), who felt Apple was trying to monopolize on GUIs in general, and boycotted GNU software for the Macintosh platform for seven years."
Tags: apple gui history problem
en.wikipedia.org/wiki/Macintosh

Lisp machine - Wikipedia (en.wikipedia.org)
Quoted: "Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho,[7] which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and were hired mostly by Xerox. So, Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed their own Lisp machines which were designed to run InterLisp (and later Common Lisp). The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system."
Tags: ai xerox parc history

Quoted: "In 1979, Russell Noftsker, being convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology. In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers. The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle."
Tags: ai history problem xerox parc
en.wikipedia.org/wiki/Lisp_machine

malcolmjmr, 25 Oct 2019
Quoted: "We see a number of specific challenges faced by creators and listeners today:
1. There is little to no transparency around the origins of creator payouts (e.g. number of plays, location, original gross payment before fees)
2. Incomplete rights ownership data often prevents content creators from getting paid; instead, earnings accumulate in digital service providers (DSPs) and rights societies
3. There are layers of middlemen and significant time delay involved in payments to creators
4. Publishing rights are complicated and opaque, with no incentives for the industry to make rights data public and accurate
5. Remixes, covers, and other derivative content are largely censored due to rights management issues
6. Licensing issues prevent DSPs and content from being accessible worldwide"

John Maynard Keynes: "Newton, the Man" (www-groups.dcs.st-and.ac.uk)
Quoted: "I do not see him in this light. I do not think that any one who has pored over the contents of that box which he packed up when he finally left Cambridge in 1696 and which, though partly dispersed, have come down to us, can see him like that. Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago. Isaac Newton, a posthumous child born with no father on Christmas Day, 1642, was the last wonderchild to whom the Magi could do sincere and appropriate homage."
Tags: newton keynes quote
www-groups.dcs.st-and.ac.uk/history/Extras/Keynes_Newton.html

China's Labor Market Is Changing, but It Isn't Because of a Trade War (wsj.com)
malcolmjmr, 24 Sep 2019
Quoted: "One widely circulated report this summer—which appears to have caught Mr. Trump's attention—estimates that China shed five million industrial jobs, 1.9 million of them directly because of U.S. tariffs, between the beginning of the trade conflict and the end of May this year."
Tags: labor markets china tariffs contraction

Quoted: "That isn't insubstantial. But it is still small compared with China's urban labor force of 570 million.
It also represents a slower pace than the 23 million manufacturing jobs shed in China between 2015 and 2017, according to the report, published by China International Capital Corp., an investment bank with Chinese state ownership."
Tags: labor markets china contraction manufacturing
wsj.com/articles/chinas-labor-market-is-changing-but-it-isnt-because-of-a-trade-war-11569317401

Fed Adds $105 Billion to Financial System in Two Transactions (wsj.com)
Quoted: "The Fed offered $30 billion of reserves maturing Oct. 8, receiving $62 billion in bids from banks offering collateral in the form of Treasury and mortgage securities. Banks bid for $32 billion more than the amount offered by the Fed. In a second offering, the Fed added $75 billion in overnight reserves, with banks bidding for $80.2 billion, or $5.2 billion more than was available."
Tags: fed money markets
wsj.com/articles/fed-adds-to-financial-system-in-two-transactions-this-month-11569329248

United Auto Workers Calls for Strike at GM's U.S. Factories (wsj.com)
Quoted: "A prolonged walkout can quickly take a financial toll on car companies because they book revenue only when a vehicle is shipped to a dealership. An assembly-plant shutdown can cost an auto maker an estimated $1.3 million every hour, according to the Center for Automotive Research in Ann Arbor, Mich."
Tags: costs labor strike

Quoted: "The GM strike would surpass in size the work stoppage by more than 30,000 employees at Stop & Shop groceries in New England earlier this year. But it would be far smaller than one involving 73,000 GM workers in 2007, when the company's workforce was much larger."
Tags: labor strike
wsj.com/articles/united-auto-workers-union-to-strike-gms-u-s-factories-11568560131

Saudi Arabia Aims to Restore a Third of Lost Oil Output Monday (wsj.com)
Quoted: "But sustained Saudi outage of several million daily barrels would rattle markets, because of the lack of other players big enough to step in and provide enough supply to cover the shortfall longer term. Even if Saudi officials were successful in restoring all or most of the lost production, the attack demonstrates a new vulnerability to supply lines across the oil-rich Gulf. Tankers have been paying sharply higher insurance premiums, while shipping rates have soared in the region after a series of maritime attacks on oil-laden vessels, which the U.S. has blamed on Iran."
Tags: oil production costs insurance premium
wsj.com/articles/saudi-arabia-aims-to-restore-a-third-of-lost-oil-output-by-monday-11568568391

ECB Launches Major Stimulus Package, Cuts Key Rate (wsj.com)
Quoted: "Reflecting those divisions, officials decided not to enlarge significantly the pool of assets the bank can buy—though it did expand the kinds of corporate and mortgage bonds it can purchase. Without changing rules that prohibit the bank from buying more than a third of any government's debt, Mr. Ducrozet estimated that the ECB can continue its bond purchases for only 9-12 months."
wsj.com/articles/ecb-launches-major-stimulus-package-cuts-key-rate-11568289016

Henry Charles Carey - Wikipedia (en.wikipedia.org)
Quoted: "The Executive [Lincoln] is frequently compelled to affix his signature to bills of the highest importance, much of which he regards as wholly at war with the national interests."
Tags: henry carey quote government problem example
en.wikipedia.org/wiki/Henry_Charles_Carey

USA Track & Field - Two-time Team USATF coach Bob Larsen selected as 2019 Legend Coach (usatf.org)
Quoted: "The USATF Legend Coach Award is in its sixth year and is selected by the USATF Coaches Advisory Committee.
The inaugural award was presented to Hall of Fame Tigerbelle Coach Ed Temple in 2014, followed by Dr. Joe Vigil in 2015, Tom Tellez in 2016, Clyde Hart in 2017 and Brooks Johnson last year."
Tags: track awards
usatf.org/News/Two-time-Team-USATF-coach-Bob-Larsen-selected-as-2.aspx

Apple pioneer Bill Fernandez on AR, VR and the design of the future (cnet.com)
Quoted: ""But in moving towards flat design we are losing much of the wisdom that was embedded in the old 3D style of UI, for example: a user must be able to glance at a screen and know what is an interactive element (e.g., a button or link) and what is not (e.g., a label or motto); a user must be able to tell at a glance what an interactive element does (does it initiate a process, link to another page, download a document, etc.?); the UI should be explorable, discoverable and self-explanatory. But many apps and websites, in the interest of a clean, spartan visual appearance, leave important UI controls hidden until the mouse hovers over just the right area or the app is in just the right state. This leaves the user in the dark, often frustrated and disempowered.""
cnet.com/news/apple-pioneer-bill-fernandez-on-ar-vr-and-the-design-of-the-future/

Why some experts want the US to adopt a VAT and other tax lessons from around the world (pri.org)
Quoted: ""Democrats think it's not progressive enough because it doesn't put extra burdens on higher-income people, like an income tax does," Hines says. "And Republicans worry that it's too easy for the government to raise money with one.""
Tags: taxes problem politics vat
pri.org/stories/2017-04-26/why-some-experts-want-us-adopt-vat-and-other-tax-lessons-around-world

The Best Graphics Cards for VR in 2019 (pcmag.com)
Quoted: "No available HMDs support VirtualLink at this writing, nor are we aware of any, but it's something to keep in mind if you're waffling between a GeForce RTX card and a last-generation GeForce GTX or a Radeon card for VR. Nothing is certain, but it's possible a future headset may debut with this as the optional or mandatory interface."
Tags: problem vr opportunity
pcmag.com/roundup/360538/the-best-graphics-cards-for-vr

Inflated Bond Ratings Helped Spur the Financial Crisis. They're Back. (wsj.com)
Quoted: "When the resort refinanced its debt in 2017 in a $469 million deal, bankers picked DBRS as one of two firms to rate the debt. DBRS had just loosened its standards for such "single-asset" commercial-mortgage deals. DBRS issued grades as much as three rungs higher on comparable slices rated by Morningstar in 2014."
Tags: ratings dispersion

Quoted: "Investor reliance on credit ratings has gone from "high to higher," says Swedish economist Bo Becker, who co-wrote a study finding that in the $4.4 trillion U.S. bond-mutual-fund industry, 94% of rules governing investments made direct or indirect references to ratings in 2017, versus 90% in 2010."
Tags: bonds ratings
wsj.com/articles/inflated-bond-ratings-helped-spur-the-financial-crisis-theyre-back-11565194951

Germany's Longest Bond Goes Negative for First Time (wsj.com)
Quoted: "Negative rates in theory mean the German government can borrow money from investors and get paid for doing so. But Berlin runs a budget surplus and has no desire to increase spending as other slower-growing European countries would like it to do.
Olaf Scholz, Germany's finance minister, has said recently the government doesn't need to act as if it is in a crisis"
Tags: germany fiscal policy constraint
wsj.com/articles/germanys-longest-bond-goes-negative-for-first-time-11564751881

How pension funds are reacting to negative bond yields (ft.com)
Quoted: "At the same time, US companies are deleveraging, which has shrunk the supply of new corporate debt, leading to a dearth of investment-grade issuance. Net supply from municipal borrowers, another vital source of new issuance, has also turned negative so there is not enough available for pension funds and insurers to buy."
Tags: quality bonds scarcity

Quoted: ""Pension funds can't match their liabilities with where rates are today so they have to hope that equity markets will continue to rally," he says."
Tags: markets funds bonds problem
ft.com/content/e27c430f-30bc-3cf6-9917-962ca2eee807

Quoted: "The Apple of Steve Jobs needed HyperCard-like products like the Monsanto Company needs a $100 home genetic-engineering set."
Tags: steve jobs hypercard quote

Quoted: "The Lisp Machine (which could just as easily have been, say, a Smalltalk machine) was a computing environment with a coherent, logical design, where the "turtles go all the way down." An environment which enabled stopping, examining the state of, editing, and resuming a running program, including the kernel. An environment which could actually be fully understood by an experienced developer. One where nearly all source code was not only available but usefully so, at all times, in real time. An environment to which we owe so many of the innovations we take for granted. It is easy for us now to say that such power could not have existed, or is unnecessary. Yet our favorite digital toys (and who knows what other artifacts of civilization) only exist because it was once possible to buy a computer designed specifically for exploring complex ideas. Certainly no such beast exists today – but that is not what saddens me most. Rather, it is the fact that so few are aware that anything has been lost."
Tags: lisp smalltalk users complexity knowledge sensemaking

Quoted: "The reason for this is that HyperCard is an echo of a different world. One where the distinction between the "use" and "programming" of a computer has been weakened and awaits near-total erasure. A world where the personal computer is a mind-amplifier, and not merely an expensive video telephone. A world in which Apple's walled garden aesthetic has no place. What you may not know is that Steve Jobs killed far greater things than HyperCard. He was almost certainly behind the death of SK8. And the Lisp Machine version of the Newton. And we may never learn what else. And Mr. Jobs had a perfectly logical reason to prune the Apple tree thus. He returned the company to its original vision: the personal computer as a consumer appliance, a black box enforcing a very traditional relationship between the vendor and the purchaser. Jobs supposedly claimed that he intended his personal computer to be a "bicycle for the mind." But what he really sold us was a (fairly comfortable) train for the mind. A train which goes only where rails have been laid down, like any train, and can travel elsewhere only after rivers of sweat pour forth from armies of laborers. (Preferably in Cupertino.) The Apple of Steve Jobs needed HyperCard-like products like the Monsanto Company needs a $100 home genetic-engineering set. The Apple of today, lacking Steve Jobs — probably needs a stake through the heart."
hypercard steve jobs apple problem innovators dilemma

Brewster Kahle - Wikipedia
Kahle has been critical of Google's book digitization, especially of Google's exclusivity in restricting other search engines' digital access to the books they archive. In a 2011 talk Kahle described Google's 'snippet' feature as a means of tip-toeing around copyright issues, and expressed his frustration with the lack of a decent loaning system for digital materials. He said the digital transition has moved from local control to central control, non-profit to for-profit, diverse to homogeneous, and from "ruled by law" to "ruled by contract". Kahle stated that even public-domain material published before 1923, and not bound by copyright law, is still bound by Google's contracts and requires permission to be distributed or copied. Kahle reasoned that this trend has emerged for a number of reasons: distribution of information favoring centralization, the economic cost of digitizing books, the issue of library staff without the technical knowledge to build these services, and the decision of the administrators to outsource information services. example google publishing contract books problem en.wikipedia.org/wiki/Brewster_Kahle

HyperCard - Wikipedia
It is this combination of features that also makes HyperCard a powerful hypermedia system. Users can build backgrounds to suit the needs of some system, say a rolodex, and use simple HyperTalk commands to provide buttons to move from place to place within the stack, or provide the same navigation system within the data elements of the UI, like text fields. Using these features, it is easy to build linked systems similar to hypertext links on the Web.[5] Unlike the Web, programming, placement, and browsing were all the same tool. Similar systems have been created for HTML but traditional Web services are considerably more heavyweight. hypercard benefits web comparison en.wikipedia.org/wiki/HyperCard

Great man theory - Wikipedia
Such are great historical men—whose own particular aims involve those large issues which are the will of the World-Spirit. quote hegel great man theory en.wikipedia.org/wiki/Great_man_theory

www.quora.com
What's arguably the single most amazing thing that computers have made possible? - Quora
One way to look at this is that when a new powerful medium of expression comes along that was not enough in our genes to be part of traditional cultures, it is something we need to learn how to get fluent with and use. Without the special learning, the new media will be mostly used to automate the old forms of thought. This will also have effects, especially if the new media is more efficient at what the old did: this can result in gluts, that act like legal drugs (as indeed are the industrial revolution's ability to create sugar and fat); it can also overproduce stories, news, status, and new ways for oral discourse. surplus problem media alan kay
To understand what has happened, we only need to look at the history of writing and printing to note two very different consequences (a) the first, a vast change over the last 450 years in how the physical and social worlds are dealt with via the inventions of modern science and governance, and (b) that most people who read at all still mostly read fiction, self-help and religion books, and cookbooks, etc.* (all topics that would be familiar to any cave-person).
alan kay quote books learning problem quora.com/Whats-arguably-the-single-most-amazing-thing-that-computers-have-made-possible

Service design - Wikipedia
A practical example of service design thinking can be found at the Myyrmanni shopping mall in Vantaa, Finland. The management attempted to improve the customer flow to the second floor as there were queues at the landscape lifts and the KONE steel car lifts were ignored. To improve customer flow to the second floor of the mall (2010) Kone Lifts implemented their 'People Flow' Service Design Thinking by turning the Elevators into a Hall of Fame for the 'Incredibles' comic strip characters. Making their Elevators more attractive to the public solved the people flow problem. This case of service design thinking by Kone Elevator Company is used in literature as an example of extending products into services. service design flow example strategy en.wikipedia.org/wiki/Service_design

Human–computer information retrieval - Wikipedia
In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s. It took half a century for these disciplines to discern their complementarity! information retrieval human computers interaction en.wikipedia.org/wiki/Human–computer_information_retrieval

www.edge.org
I do not actually know of a real findability index, but tools in the field of information retrieval could be applied to develop one. One of the unsolved problems in the field is how to help the searcher to determine if the information simply is not available. search problem
Although some have written about information overload, data smog, and the like, my view has always been the more information online, the better, so long as good search tools are available. Sometimes this information is found by directed search using a web search engine, sometimes by serendipity by following links, and sometimes by asking hundreds of people in our social network or hundreds of thousands of people on a question answering website such as Answers.com, Quora, or Yahoo Answers. edge.org/response-detail/10653

Unfortunately, misguided views about usability still cause significant damage in today's world. In the 2000 U.S. elections, poor ballot design led thousands of voters in Palm Beach, Florida to vote for the wrong candidate, thus turning the tide of the entire presidential election. At the time, some observers made the ignorant claim that voters who could not understand the Palm Beach butterfly ballot were not bright enough to vote. I wonder if people who made such claims have never made the frustrating "mistake" of trying to pull open a door that requires pushing. Usability experts see this kind of problem as an error in the design of the door, rather than a problem with the person trying to leave the room. example user experience design problem
The web, in yet another example of its leveling effect, allows nearly everyone to see nearly every interface. Thus designers can learn rapidly from what others have done, and users can see if one web site's experience is substandard compared to others. user experience web development information diffusion

Commune - Wikipedia
At the start of the 1970s, The New Communes author Ron E.
Roberts classified communes as a subclass of a larger category of Utopias.[5] He listed three main characteristics. Communes of this period tended to develop their own characteristics of theory though, so while many strived for variously expressed forms of egalitarianism, Roberts' list should never be read as typical. Roberts' three listed items were: first, egalitarianism – that communes specifically rejected hierarchy or graduations of social status as being necessary to social order. Second, human scale – that members of some communes saw the scale of society as it was then organized as being too industrialized (or factory sized) and therefore unsympathetic to human dimensions. And third, that communes were consciously anti-bureaucratic. en.wikipedia.org/wiki/Commune

Theory of the firm - Wikipedia, the free encyclopedia
Another prominent conclusion is that joint asset ownership is suboptimal if investments are in human capital. Does that have to be the case? human capital firm en.wikipedia.org/wiki/Theory_of_the_firm

cbie.gitbook.io
Other examples of complex adaptive systems are: stock markets: Many traders make decisions on the information known to them and their individual expectations about future movements of the market. They may start selling when they see the prices are going down (because other traders are selling). Such herding behavior can lead to high volatility on stock markets. immune systems: Immune systems consist of various mechanisms, including a large population of lymphocytes that detect and destroy pathogens and other intruders in the body. The immune system needs to be able to detect new pathogens for the host to survive and therefore needs to be able to adapt. brains: The neural system in the brain consists of many neurons that are exchanging information. The interactions of many neurons make it possible for me to write this sentence and ponder the meaning of life. ecosystems: Ecosystems consist of many species that interact by eating other species, distributing nutrients, and pollinating plants. Ecosystems can be seen as complex food webs that are able to cope with changes in the number of certain species, and adapt – to a certain extent – to changes in climate. human societies: When you buy this new iPhone that is manufactured in China, with materials derived from African soils, and with software developed by programmers from India, you need to realize that those actions are made by autonomous organizations, firms and individuals. These many individual actions are guided by rules and agreements we have developed, but there is no ruler who can control these interactions. example complexity

Path Formation
Paved paths are not always the most desirable routes going from point A to point B. This may lead pedestrians to take short-cuts. Initially pedestrians walk over green grass. Subsequent people tend to use the stamped grass path instead of the pristine grass, and after many pedestrians an unpaved path is formed without any top-down design. path dependence definition cbie.gitbook.io/introduction-to-agent-based-modeling/concepts_and_tools/complex_adaptive_systems/emergence

www.asindexing.org
History of Information Retrieval | American Society for Indexing
However, indexes in the modern sense, giving exact locations of names and subjects in a book, were not compiled in antiquity, and only very few seem to have been made before the age of printing. There are several reasons for this.
First, as long as books were written in the form of scrolls, there were neither page nor leaf numbers nor line counts (as we have them now for classical texts). Also, even had there been such numerical indicators, it would have been impractical to append an index giving exact references, because in order for a reader to consult the index, the scroll would have to be unrolled to the very end and then to be rolled back to the relevant page. (Whoever has had to read a book available only on microfilm, the modern successor of the papyrus scroll, will have experienced how difficult and inconvenient it is to go from the index to the text.) Second, even though popular works were written in many copies (sometimes up to several hundreds), no two of them would be exactly the same, so that an index could at best have been made to chapters or paragraphs, but not to exact pages. Yet such a division of texts was rarely done (the one we have now for classical texts is mostly the work of medieval and Renaissance scholars). Only the invention of printing around 1450 made it possible to produce identical copies of books in large numbers, so that soon afterwards the first indexes began to be compiled, especially those to books of reference, such as herbals. (pages 164-166)

Index entries were not always alphabetized by considering every letter in a word from beginning to end, as people are wont to do today. Most early indexes were arranged only by the first letter of the first word, the rest being left in no particular order at all. Gradually, alphabetization advanced to an arrangement by the first syllable, that is, the first two or three letters, the rest of an entry still being left unordered. Only very few indexes compiled in the 16th and early 17th centuries had fully alphabetized entries, but by the 18th century full alphabetization became the rule... (p. 136)

(For more information on the subject of indexes, please see Professor Wellisch's Indexing from A to Z, which contains an account of an indexer being punished by having his ears lopped off, a history of narrative indexing, an essay on the zen of indexing, and much more. Please, if you quote from this page, CREDIT THE AUTHOR. Thanks.)

Indexes go way back beyond the 17th century. The Gerardes Herbal from the 1590s had several fascinating indexes according to Hilary Calvert. Barbara Cohen writes that the alphabetical listing in the earliest ones only went as far as the first letter of the entry... no one thought at first to index each entry in either letter-by-letter or word-by-word order. Maja-Lisa writes that Peter Heylyn's 1652 Cosmographie in Four Bookes includes a series of tables at the end. They are alphabetical indexes and he prefaces them with "Short Tables may not seeme proportionalble to so long a Work, expecially in an Age wherein there are so many that pretend to learning, who study more the Index then they do the Book." index history

Pliny the Elder (died 79 A.D.) wrote a massive work called The Natural History in 37 Books. It was a kind of encyclopedia that comprised information on a wide range of subjects. In order to make it a bit more user friendly, the entire first book of the work is nothing more than a gigantic table of contents in which he lists, book by book, the various subjects discussed. He even appended to each list of items for each book his list of Greek and Roman authors used in compiling the information for that book.
He indicates in the very end of his preface to the entire work that this practice was first employed in Latin literature by Valerius Soranus, who lived during the last part of the second century B.C. and the first part of the first century B.C. Pliny's statement that Soranus was the first in Latin literature to do this indicates that it must have already been practiced by Greek writers. example index history asindexing.org/about-indexing/history-of-information-retrieval/

Vaclav Smil - Wikipedia
Smil notes that as of 2018, coal, oil, and natural gas still supply 90% of the world's primary energy. Despite decades of growth of renewable energy, the world uses more fossil fuels in 2018 than in 2000, by percentage. oil energy supply en.wikipedia.org/wiki/Vaclav_Smil

William Stanley Jevons - Wikipedia
Jevons received public recognition for his work on The Coal Question (1865), in which he called attention to the gradual exhaustion of Britain's coal supplies and also put forth the view that increases in energy production efficiency lead to more, not less, consumption.[5]:7f, 161f This view is known today as the Jevons paradox, named after him. Due to this particular work, Jevons is regarded today as the first economist of some standing to develop an 'ecological' perspective on the economy. jevons en.wikipedia.org/wiki/William_Stanley_Jevons

AI winter - Wikipedia
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse. artificial intelligence funding problem en.wikipedia.org/wiki/AI_winter

alo.mit.edu
Adaptive markets: financial evolution at the speed of thought by Andrew Lo
Volatility and leverage are co-determined and are pro-cyclical; that is, together, they amplify the impact of shocks. The mechanism, to be specific, is that declining volatility reduces the cost of taking on more leverage and furthers a buildup of risk. The lesson: Risk managers must resist the temptation to sell volatility when it is low and falling. The AMH implicitly embraces modeling such behavior with heterogeneous agents that use heuristics. risk amh evolution markets volatility management alo.mit.edu/wp-content/uploads/2017/01/Berner2018_Article_AdaptiveMarketsFinancialEvolut.pdf

Ron Sun - Wikipedia
Throughout the past two decades, he has been conducting research in the fields of psychology of learning and hybrid neural network (in particular, applying these models to research on human skill acquisition). Specifically, he has worked on the integrated effect of "top-down" and "bottom-up" learning in human skill acquisition,[1][2] in a variety of task domains, for example, navigation tasks,[3] reasoning tasks, and implicit learning tasks.[4] This inclusion of bottom-up learning processes has been revolutionary in cognitive psychology, because most previous models of learning had focused exclusively on top-down learning (whereas human learning clearly happens in both directions).
This research has culminated with the development of an integrated cognitive architecture that can be used to provide a qualitative and quantitative explanation of empirical psychological learning data. The model, CLARION, is a hybrid neural network that can be used to simulate problem solving and social interactions as well. More importantly, CLARION was the first psychological model that proposed an explanation for the "bottom-up learning" mechanisms present in human skill acquisition: His numerous papers on the subject have brought attention to this neglected area in cognitive psychology. neural networks psychology learning en.wikipedia.org/wiki/Ron_Sun

worrydream.com
Learnable Programming
Bob Barton [said] "The basic principle of recursive design is to make the parts have the same power as the whole." For the first time I thought of the whole as the entire computer, and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers... Why not thousands of them, each simulating a useful structure? recursion design computers quote alan kay worrydream.com/LearnableProgramming/

www.investopedia.com
The 2007-08 Financial Crisis in Review
To keep recession away, the Federal Reserve lowered the Federal funds rate 11 times - from 6.5% in May 2000 to 1.75% in December 2001 - creating a flood of liquidity in the economy. Cheap money, once out of the bottle, always looks to be taken for a ride. It found easy prey in restless bankers—and even more restless borrowers who had no income, no job and no assets. These subprime borrowers wanted to realize their life's dream of acquiring a home. For them, holding the hands of a willing banker was a new ray of hope. More home loans, more home buyers, more appreciation in home prices. It wasn't long before things started to move just as the cheap money wanted them to. interest rate change contributions financial crisis investopedia.com/articles/economics/09/financial-crisis-review.asp

www.itweb.co.za
Eight steps to success when designing document-centric workflows in financial institutions
Virtually all BPMs have utilities for creating simple, data-gathering forms. And in many types of workflows, these simple forms may be adequate. However, in any workflow that includes complex document assembly (such as loan origination workflows), BPM forms are not likely to get the job done. Automating the assembly of complex documents requires ultra-sophisticated data-gathering forms, which can only be designed and created after the documents themselves have been automated. Put another way, you won't know which questions need to be asked to generate the document(s) until you've merged variables and business logic into the documents themselves. The variables you merge into the document serve as question fields in the data gathering forms. And here's the key point - since you have to use the document assembly platform to create interviews that are sophisticated enough to gather data for your complex documents, you might as well use the document assembly platform to generate all data-gathering forms in all of your workflows.
data acquisition document benefits itweb.co.za/content/XGxwQDM1bKRqlPVo

Benjamin Franklin - Wikipedia
malcolmjmr 22 Mar 2019
collate new term
disquisition new term
en.wikipedia.org/wiki/Benjamin_Franklin

Ken Jennings - Wikipedia
malcolmjmr 27 Feb 2019
In a 2011 Reddit IAmA, Jennings recalled how in 2004 the Democratic politicians Chuck Schumer and Harry Reid unsuccessfully asked Jennings to run for the United States Senate from Utah. Jennings commented, "That was when I realized the Democratic Party was f@#$ed in '04."[19] example politics democrats problem en.wikipedia.org/wiki/Ken_Jennings

Write Like You Talk
You don't need complex sentences to express complex ideas. When specialists in some abstruse topic talk to one another about ideas in their field, they don't use sentences any more complex than they do when talking about what to have for lunch. They use different words, certainly. But even those they use no more than necessary. And in my experience, the harder the subject, the more informally experts speak. Partly, I think, because they have less to prove, and partly because the harder the ideas you're talking about, the less you can afford to let language get in the way. writing process quote
It seems to be hard for most people to write in spoken language. So perhaps the best solution is to write your first draft the way you usually would, then afterward look at each sentence and ask "Is this the way I'd say this if I were talking to a friend?" If it isn't, imagine what you would say, and use that instead. After a while this filter will start to operate as you write. When you write something you wouldn't say, you'll hear the clank as it hits the page. Before I publish a new essay, I read it out loud and fix everything that doesn't sound like conversation. I even fix bits that are phonetically awkward; I don't know if that's necessary, but it doesn't cost much. If you simply manage to write in spoken language, you'll be ahead of 95% of writers. And it's so easy to do: just don't let a sentence through unless it's the way you'd say it to a friend. conclusion quote writing process paulgraham.com/talk.html

www.ccn.com
Facebook Discussed Launching Cryptocurrency: Report
"They're actively, actively recruiting," said Cheddar's Alex Heath. "They're also trying to scoop up crypto start-ups that are at the white-paper level, which means they don't really even have a product yet." ccn.com/facebook-aggressively-hiring-blockchain-devs-discussed-launching-cryptocurrency-report/

No Silver Lining: A Precious Metal Gets 'Hit From Two Sides'
Portfolio flows into emerging markets slowed sharply in August to $2.2 billion from $13.7 billion in July, the Institute of International Finance said. emerging markets capital flows wsj.com/articles/no-silver-lining-a-precious-metal-gets-hit-from-two-sides-1536148803

Asian Junk Bonds Can't Catch a Break
The selloff partly reflects a broader malaise in emerging markets. U.S. interest rate increases and a stronger dollar have lured cash back to America, often at the expense of developing economies. Some countries have come under additional pressure because of U.S. tariffs or sanctions, while economic turmoil in Turkey and Argentina has further fueled investors' concerns. emerging markets debt wsj.com/articles/asian-junk-bonds-cant-catch-a-break-1536128566

fortune.com
The NYSE's Owner Wants to Bring Bitcoin to Your 401(k). Are Crypto Credit Cards Next?
Bakkt will provide access to a new Bitcoin trading platform on the ICE Futures U.S. exchange. And it will also offer full warehousing services, a business that ICE doesn't have. "Bakkt's revenue will come from two sources," says Loeffler, "the trading fees on the ICE Futures U.S. exchange, and warehouse fees paid by the customers that buy Bitcoin and store with Bakkt." businessmodel revenue bakkt

Bakkt plans to offer a full package combining a major CFTC-regulated exchange with CFTC-regulated clearing and custody, pending the approval from the commission and other regulators. still pending regulatory approval bakkt regulations

At a recent meeting with the couple in the plush Bond Room at the NYSE, Sprecher stressed that Loeffler has been a collaborator in charting ICE's next big move. "Kelly and I brainstormed for five years to find a strategy for digital currencies," says Sprecher. bakkt is 5 years in the making

Cracking the 401(k) and IRA market for cryptocurrency would be a huge win for Bakkt. But the startup's plans raise the prospect of an even more ambitious goal: Using Bitcoin to streamline and disrupt the world of retail payments by moving consumers from swiping credit cards to scanning their Bitcoin apps. The market opportunity is gigantic: Consumers worldwide are paying lofty credit card or online-shopping fees on $25 trillion a year in annual purchases. Allowing money from 401(k)s and IRAs would allow for a huge influx of passive capital. The retail component would actually cause selling pressure, as was seen in 2015 when more and more retailers started accepting bitcoin. fortune.com/longform/nyse-owner-bitcoin-exchange-startup/

Gold exchange-traded product - Wikipedia
The idea of a gold exchange-traded fund was first conceptualized by Benchmark Asset Management Company Private Ltd in India when they filed a proposal with the SEBI in May 2002. However it did not receive regulatory approval at first and was only launched later in March 2007. Took 5 years to get approval for a gold ETF in India. en.wikipedia.org/wiki/Gold_exchange-traded_product

Exchange-traded fund - Wikipedia
However, most ETCs implement a futures trading strategy, which may produce quite different results from owning the commodity. However, generally commodity ETFs are index funds tracking non-security indices. Because they do not invest in securities, commodity ETFs are not regulated as investment companies under the Investment Company Act of 1940 in the United States, although their public offering is subject to SEC review and they need an SEC no-action letter under the Securities Exchange Act of 1934. They may, however, be subject to regulation by the Commodity Futures Trading Commission. Commodity ETFs are regulated by the CFTC but need a no-action letter from the SEC to be approved. The idea of a Gold ETF was first officially conceptualised by Benchmark Asset Management Company Private Ltd in India when they filed a proposal with the SEBI in May 2002.[32] The first gold exchange-traded fund was Gold Bullion Securities launched on the ASX in 2003, and the first silver exchange-traded fund was iShares Silver Trust launched on the NYSE in 2006.
As of November 2010 a commodity ETF, namely SPDR Gold Shares, was the second-largest ETF by market capitalization.[33] In 8 years the gold ETF became the second largest by market cap. en.wikipedia.org/wiki/Exchange-traded_fund

www.6sqft.com
City to develop 2,400 new affordable housing units in East Harlem | 6sqft
Mayor de Blasio and his administration have made progress in meeting their goal of building 200,000 affordable units over the span of a decade, as 21,963 new units were added in 2016, the most in 27 years. However, there continues to be a shortage in East Harlem. Out of the nearly 20,000 affordable units the city brought to all five boroughs, just 249 units have been built in East Harlem, according to a new report by the Department of Housing and Preservation Development (HPD). To better accommodate these residents, the city plans on expediting the construction of 2,400 units of affordable housing over the next few years, as DNA Info reported. Affordable Housing NYC data 6sqft.com/city-to-develop-2400-new-affordable-housing-units-in-east-harlem/

www.learndatasci.com
Python for Finance, Part 2: Intro to Quantitative Trading Strategies
However, price time-series have some drawbacks. Prices are usually only positive, which makes it harder to use models and approaches which require or produce negative numbers. In addition, price time-series are usually non-stationary, that is their statistical properties are less stable over time. learndatasci.com/tutorials/python-finance-part-2-intro-quantitative-trading-strategies/

nbviewer.jupyter.org
Notebook on nbviewer
Denote N as the number of instances of evidence we possess. As we gather an infinite amount of evidence, say as N → ∞, our Bayesian results (often) align with frequentist results. Hence for large N, statistical inference is more or less objective. On the other hand, for small N, inference is much more unstable: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we preserve the uncertainty that reflects the instability of statistical inference of a small N dataset. The law of large numbers helps to get to the frequentist result, but the Bayesian perspective reflects the instability of inferential statistics when the number of observed inferences is small. nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb

medium.com
Fat protocols aren't new: What blockchain can learn from p2p file sharing
A core tenet of the Y Combinator playbook for startups is to talk to your users. If you're interested in building a third party app on top of a fat protocol, the lesson might be to also talk to competing apps' users to figure out what needs aren't being served. In a similar vein, protocol developers should talk to app developers and learn what they think end users want. This isn't happening nearly enough, which is why protocols don't provide tech components for viable end user apps. medium.com/@jbackus/fat-protocols-arent-new-42d2c538db41

youngry.com
Why Shark Tank's Mr. Wonderful Loves Royalty Based Funding – YOUNGRY™
The reason Mr. Wonderful loves royalty based funding is because it is a big win for both businesses and investors. Investors see a return on helping businesses succeed.
Experienced investors will even offer guidance to help business owners avoid the pitfalls that many entrepreneurs stumble into. On the business side, entrepreneurs get the financing they need without debt or sacrificing ownership of their companies in any way. Additionally, since repayment of royalty based financing is structured around revenue, there is no rigid payment schedule. Royalty based funding provides financing and flexibility, which gives businesses the freedom to reach their potential, while simultaneously providing healthy returns to investors. royalties funding benefits youngry.com/why-shark-tanks-mr-wonderful-loves-royalty-based-funding/

9 Reasons Why SaaS Subscription Pricing is the 💣💥
Here are the definitions to make sure we're on the same page: Subscription model — a periodic (monthly, yearly, or seasonal) payment to gain access to products or services. Transactional model — you pay as you use the products and services. medium.com/we-are-builders/9-reasons-why-saas-subscription-pricing-is-the-c0490cf78a36

s3.eu-west-2.amazonaws.com
An+Investor's+Take+on+Cryptoassets+v6.pdf
Second, recall that the impetus for moving from proof-of-work to proof-of-stake is to reduce the amount of computational resource and energy required to maintain the network by a couple orders of magnitude. That's good for scalability and potential adoption, but also means a commensurate reduction in the PQ of the network. The impetus is for the reduction of technical debt and the increased efficiency of network resource provision. The computational resources used for mining and not for processing transactions get repurposed to increase the amount of transactions that can be processed. This assumption that Q is constant is bizarre. If just looking at transaction throughput, the goal is to be able to process several hundred thousand transactions per second if not millions. P and Q are clearly inversely correlated.

Is that added value enough to offset its inefficiency compared to the incumbent centralised Twitter? Would Token Twitter offer compellingly higher utility compared to centralised Twitter, including enough surplus utility to offset the cost of operating the consensus mechanism? I'm not so sure. Considering the majority of the cost of operating Twitter comes from human capital, marketing, legal and accounting and not from IT, which continues to fall on a per unit basis while the aforementioned continue to increase, yes. If the assumption is that legal and accounting is no longer needed and developers and other employees are overpaid relative to an entirely crowdsourced labor force, then you might see the redundancy costs in IT operations offset by cost reduction in other operational expenses.

The combined effect of low and falling PQ and potentially very high V is that the utility value of utility cryptoassets at equilibrium should in fact be relatively low. The utility value of utility crypto assets is not entirely a function of the cost of network resources. These assets also provide influence, which can't be financially measured, as it captures for the participants the expected future value of the network and what having influence over its direction can afford that particular participant. Also I would think that PQ is artificially high right now because of how inefficient blockchains are, but as P falls due to further scalability, Q should increase not only to offset declines in P but overcompensate the declining P as more services can be built on this infrastructure.
A premium will be placed on the network effects of a protocol that has a successful application that other applications will want to interoperate with for its data and microservices (e.g. identity, account, financial etc). s3.eu-west-2.amazonaws.com/john-pfeffer/An+Investor's+Take+on+Cryptoassets+v6.pdf

www.sidehustlenation.com
Mechanical Turk Review: How I Made $21,000 a Quarter at a Time
Additionally, there is work available in most countries for people living outside the US, but only workers in the US and India can withdraw cash. Workers from other countries can only redeem their earnings through Amazon gift cards. mturk problem payments sidehustlenation.com/amazon-mechanical-turk-review/

Amazon Mechanical Turk - Wikipedia
Requesters pay Amazon a 20% commission on the price of successfully completed jobs. margin mturk en.wikipedia.org/wiki/Amazon_Mechanical_Turk

There have always been far more users/consumers than suppliers, which means that in a world where transactions are costly, owning the supplier relationship provides significantly more leverage. transaction theory aggregation theory

The value chain for any given consumer market is divided into three parts: suppliers, distributors, and consumers/users. The best way to make outsize profits in any of these markets is to either gain a horizontal monopoly in one of the three parts or to integrate two of the parts such that you have a competitive advantage in delivering a vertical solution. In the pre-Internet era the latter depended on controlling distribution. value chain aggregation theory transaction theory

Cryptoasset Valuations – Chris Burniske – Medium
Since you use a cryptoasset once, and then it's in someone else's hands, this discounting methodology is not accumulative over each year the way it is with a DCF. Why does a token have to be used once and exchange hands? A token can be taken out of circulation. medium.com/@cburniske/cryptoasset-valuations-ac83479ffca7

www.coindesk.com
The Blockchain Token Velocity Problem - CoinDesk
Basically, all token pitches include a line that goes something like this: "There is a fixed supply of tokens. As demand for the token increases, so must the price." This logic fails to take into account the velocity problem. problem token economics velocity coindesk.com/blockchain-token-velocity-problem/

On Value, Velocity and Monetary Theory – BlockChannel – Medium
This, of course, leaves us none the wiser as to how to model velocity, as the equation of exchange is nothing more than an identity. MV=PQ just says that the money flow of expenditures is equal to the market value of what those expenditures buy, which is true by definition. The left and right sides are two ways of saying the same thing; it's a form of double-entry accounting where each transaction is simultaneously recorded on both sides of the equation. Whether an effect should be recorded in M, V, P, or Q is, ultimately, arbitrary. To transform the identity into a tool with predictive potency, we need to make a series of assumptions about each of the variables. For example, monetarists assume M is determined exogenously, V is constant, and Q is independent of M and use the equation to demonstrate how increases in the money supply increase P (i.e. cause inflation). equation of exchange assumptions

The first practical problem with velocity is that it's frequently employed as a catch-all to make the two sides of the equation of exchange balance.
It often simply captures the error in our estimation of the other variables in the model. token economics equation of exchange problem

The core thesis of current valuation frameworks is that utility value can be derived by (a) forecasting demand for the underlying resource that a network provisions (the network's 'GDP') and (b) dividing this figure by the monetary base available for its fulfillment to obtain per-unit utility value. Present values can be derived from future expected utility values using conventional discounting. The theoretical framework that nearly all these valuation models employ is the equation of exchange, MV=PQ (see the worked sketch at the end of this section). token economics valuations equation of exchange medium.com/blockchannel/on-value-velocity-and-monetary-theory-a-new-approach-to-cryptoasset-valuations-32c9b22e3b6f

Mechanism design - Wikipedia
Mechanism design studies solution concepts for a class of private-information games. Leonid Hurwicz explains that 'in a design problem, the goal function is the main "given", while the mechanism is the unknown. Therefore, the design problem is the "inverse" of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism.'[1] So, two distinguishing features of these games are: that a game "designer" chooses the game structure rather than inheriting one; and that the designer is interested in the game's outcome. Advantages over traditional game theory for token economics: a game "designer" chooses the game structure rather than inheriting one, and the designer is interested in the game's outcome. mechanism design en.wikipedia.org/wiki/Mechanism_design

ipfs.io
Placeholder Thesis Summary.pdf
1. Thesis: Open Standards, Market Cycles and Investment Returns
Information technology evolves in multi-decade cycles of expansion, consolidation and decentralization. Open standards reduce production costs, which bring down prices for consumers and increase the potential size of the market. New entrants, realizing that costs are now low, competition is scarce and the potential reward is high, attempt to disrupt incumbents with more efficient and scalable business models. The market consolidates around the platforms of the companies that realize and implement these business models first. Demand then builds for a low cost, open source alternative to the incumbent platforms. We favor spreading price and risk by building up and averaging out of positions over time rather than speculating on speculation. A committed capital structure with significant capital reserves for staged follow-ons gives us the flexibility to build up our investments independent of market sentiment. We are shielded from having to dump assets on the market to honor redemption requests, avoiding the dreaded "death spiral" which can plague more liquid fund structures. placeholder capital investment strategy

We fund the development of decentralized information networks coordinated by a scarce cryptoasset – or token – native to the protocol. Our thesis is that decentralization and standardization at the data layer of the internet is collapsing the production costs of information networks, eliminating data monopolies and creating a new wave of innovation. investment strategy placeholder capital

Crypto provides a new mechanism for organizing human activity on a global basis using programmable financial incentives.
It's an opportunity to design information networks which can achieve unprecedented levels of scale by decentralizing the infrastructure, open sourcing the data, and distributing value more broadly. What we've discovered is the native business model of networks – which, as it turns out, encompass the entire economy. opportunity token economics

Most of the use cases today involve compensating machine work (transaction processing, file storage, etc.) with tokens: the building blocks of decentralized applications. But the greatest long-term opportunity is in networks where tokens are earned by end-users themselves. investment opportunity token economics

We've also realized how inefficient the joint-stock equity industry model is at accounting for and distributing the real value created by online networks. The value of a share of stock is necessarily a function of profits; the price of Twitter's stock only reflects Twitter Inc's ability to monetize the data – and not the actual worth of the service. Tokens solve this inefficiency by deriving financial value directly from user demand as opposed to "taxing" by extracting profits.
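The equation-of-exchange snippets above (MV = PQ, the velocity problem, and the "network GDP divided by monetary base" valuation recipe) reduce to one line of arithmetic. The sketch below, in Python, is purely illustrative: every input is an assumed number, not data about any real network, and the function name is mine, not from any of the cited posts.

    # Minimal sketch of the MV = PQ valuation recipe described above.
    # Solving the identity for the per-token price level: value = PQ / (M * V).

    def utility_value_per_token(network_gdp_usd, monetary_base_tokens, velocity):
        """network_gdp_usd: PQ, annual economic activity the network provisions (USD)
        monetary_base_tokens: M, tokens available to service that activity
        velocity: V, times each token turns over per year"""
        return network_gdp_usd / (monetary_base_tokens * velocity)

    # Assumed inputs: $1B/year of network 'GDP', 100M tokens outstanding.
    print(utility_value_per_token(1e9, 100e6, 20))  # high velocity -> $0.50 per token
    print(utility_value_per_token(1e9, 100e6, 5))   # low velocity  -> $2.00 per token

Holding PQ fixed, quartering V quadruples the per-token value, which is exactly the velocity problem the CoinDesk and BlockChannel pieces flag: the pitch "fixed supply, rising demand, rising price" silently assumes velocity stays put.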
The Alpha Constant from Relativistic Groups [PDF]
Gustavo R. Gonzalez-Martin
Abstract: The value of the alpha constant, known to be equal to an algebraic expression in terms of pi and entire numbers related to certain group volumes, is derived from the relativistic structure group of a geometric unified theory, its subgroups and corresponding symmetric space quotients.

On the factor alpha in Peyre's constant [PDF]
Ulrich Derenthal, Andreas-Stephan Elsenhans, Jörg Jahnel
Mathematics, 2012
Abstract: For an arbitrary del Pezzo surface S, we compute alpha(S), which is the volume of a certain polytope in the dual of the effective cone of S, using Magma and Polymake. The constant alpha(S) appears in Peyre's conjecture for the leading term in the asymptotic formula for the number of rational points of bounded height on S over number fields.

Comment on the cosmological constant and a gravitational alpha [PDF]
Ronald J. Adler
Abstract: We call attention to a simple analogy between atomic physics and cosmology. Both have two characteristic length scales. In atomic physics the lengths are the Compton wavelength of the electron and the Bohr radius; the ratio of these two lengths is the fine structure constant, $\alpha=7.30\times10^{-3}$. In cosmology we take the lengths to be the Planck length and the de Sitter radius divided by $\sqrt 3$; the ratio of these two lengths is about $\alpha_g=1.91\times10^{-61}$, which we suggest should be called the gravitational fine structure constant. There is also a basic energy ratio in atomic physics, the ratio of the hydrogen atom binding energy to the electron rest energy, which is equal to ${\alpha^2}/2$. The analogous energy ratio in cosmology is the ratio of the dark energy density (described in terms of the cosmological constant) to the Planck energy density, which is equal to $(1/8\pi)\alpha_g^2$. The long-standing problem of the nature of the dark energy and its small density is obviously equivalent to understanding the extraordinarily small value of $\alpha_g$. We further emphasize that our observational knowledge of dark energy, which is consistent with the cosmological constant interpretation, is entirely on the cosmological scale, so we know essentially nothing about the nature of dark energy on a smaller and presumably more fundamental scale.

Calibration issues in estimating variability of the fine structure constant (alpha) with cosmic time [PDF]
Miriam Centurión, Paolo Molaro, Sergei Levshakov
Abstract: Laser Comb Wavelength calibration shows that the ThAr one is locally unreliable with possible deviations of up to 100 m/s within one order range, while delivering an overall 1 m/s accuracy (Wilken et al 2009). Such deviation corresponds to delta alpha/alpha ~ 7E-6 for a FeII-MgII pair. Comparison of line shifts among the 5 FeII lines, with almost identical sensitivity to fine structure constant changes, offers a clean way to directly test the presence of possible local wavelength calibration errors of whatever origin. We analyzed 5 absorption systems, with zabs ranging from 1.15 to 2.19 towards 3 bright QSOs. The results show that while some lines are aligned within 20 m/s, others reveal large deviations reaching 200 m/s or higher and corresponding to a delta alpha/alpha > 1E-5 level. The origin of these deviations is not clearly identified but could be related to the adaptation of wavelength calibration to CCD manufacturing irregularities.
These results suggest that drawing conclusions from delta alpha/alpha analysis based on one or only a few lines must be done with extreme care.

The running fine structure constant alpha(E) via the Adler function
Jegerlehner, F.
High Energy Physics - Phenomenology, 2008, DOI: 10.1016/j.nuclphysbps.2008.09.010
Abstract: We present an up-to-date analysis for a precise determination of the effective fine structure constant and discuss the prospects for future improvements. We advocate to use a determination monitored by the Adler function which allows us to exploit perturbative QCD in an optimal well controlled way. Together with a long term program of hadronic cross section measurements at energies up to a few GeV, a determination of alpha(M_Z) at a precision comparable to the one of the Z mass M_Z should be feasible. Presently alpha(E) at E>1 GeV is the least precisely known of the fundamental parameters of the SM. Since, in spite of substantial progress due to new BaBar exclusive data, the region 1.4 to 2.4 GeV remains the most problematic one, a major step in the reduction of the uncertainties is expected from VEPP-2000 and from a possible "high-energy" option DAFNE-2 at Frascati. The up-to-date evaluation reads Delta alpha^{(5)}_{had}(M_Z^2) = 0.027515 +/- 0.000149 or alpha^{-1}(M_Z) = 128.957 +/- 0.020.

Does the fine structure constant vary? A third quasar absorption sample consistent with varying alpha [PDF]
J. K. Webb, M. T. Murphy, V. V. Flambaum, S. J. Curran
Physics, 2002, DOI: 10.1023/A:1022518515530
Abstract: We report preliminary results from a third sample of quasar absorption line spectra from the Keck telescope which has been studied to search for any possible variation of the fine structure constant, alpha. This third sample, which is larger than the sum of the two previously published samples, shows the same effect, and also gives, as do the previous two samples, a significant result. The combined sample yields a highly significant effect, da/a = (alpha_z - alpha_0)/alpha_0 = -0.57 +/- 0.10 x 10^{-5}, averaged over the redshift range 0.2 < z < 3.7. We include a brief discussion of small-scale kinematic structure in quasar absorbing clouds. However, kinematics are unlikely to impact significantly on the averaged non-zero da/a above, and we have so far been unable to identify any systematic effect which can explain it. New measurements of quasar spectra obtained using independent instrumentation and telescopes are required to properly check the Keck results.

The alpha-dependence of transition frequencies for some ions of Ti, Mn, Na, C, and O, and the search for variation of the fine structure constant [PDF]
J. C. Berengut, V. A. Dzuba, V. V. Flambaum, M. V. Marchenko
Physics, 2004, DOI: 10.1103/PhysRevA.70.064101
Abstract: We use the relativistic Hartree-Fock method, many-body perturbation theory and configuration-interaction method to calculate the dependence of atomic transition frequencies on the fine structure constant, alpha. The results of these calculations will be used in the search for variation of the fine structure constant in quasar absorption spectra.
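For readers skimming the numbers in the Webb et al. abstract above: da/a is just a fractional deviation, (alpha_z − alpha_0)/alpha_0, and the quoted −0.57 ± 0.10 × 10⁻⁵ is significant because the central value sits several error bars from zero. A minimal sketch in Python; the laboratory alpha_0 is the standard CODATA figure, everything else comes from the abstract:

    # Fractional variation of the fine structure constant, da/a.
    ALPHA_0 = 7.2973525693e-3  # laboratory (z = 0) fine structure constant

    def fractional_variation(alpha_z, alpha_0=ALPHA_0):
        return (alpha_z - alpha_0) / alpha_0

    # Webb et al.: da/a = -0.57e-5 +/- 0.10e-5 over 0.2 < z < 3.7
    da_over_a, sigma = -0.57e-5, 0.10e-5
    print(abs(da_over_a) / sigma)  # ~5.7 standard deviations from zero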
$\Lambda\alpha$DM: Observational constraints on unified dark matter with constant speed of sound
Balbi, Amedeo; Bruni, Marco; Quercellini, Claudia
High Energy Physics - Phenomenology, 2007, DOI: 10.1103/PhysRevD.76.103519
Abstract: We consider the hypothesis that dark energy and dark matter are the two faces of a single dark component, a unified dark matter (UDM) that we assume can be modeled by the affine equation of state (EoS) $P = p_0 + \alpha \rho$, resulting in an effective cosmological constant $\rho_\Lambda = -p_0/(1+\alpha)$. The affine EoS arises from the simple assumption that the speed of sound is constant; it may be seen as an approximation to an unknown barotropic EoS $P=P(\rho)$, and may as well represent the tracking solution for the dynamics of a scalar field with appropriate potential. Furthermore, in principle the affine EoS allows the UDM to be phantom. We constrain the parameters of the model, $\alpha$ and $\Omega_\Lambda$, using data from a suite of different cosmological observations, and perform a comparison with the standard $\Lambda$CDM model, containing both cold dark matter and a cosmological constant. First considering a flat cosmology, we find that the UDM model with affine EoS fits the joint observations very well, better than $\Lambda$CDM, with best fit values $\alpha=0.01 \pm 0.02$ and $\Omega_\Lambda=0.70 \pm 0.04$ (95% confidence intervals). The standard model (best fit $\Omega_\Lambda=0.71\pm 0.04$), having one less parameter, is preferred by a Bayesian model comparison. However, the affine EoS is at least as good as the standard model if a flat curvature is not assumed as a prior for $\Lambda$CDM. For the latter, the best fit values are $\Omega_K=-0.02^{+0.01}_{-0.02}$ and $\Omega_\Lambda=0.71 \pm 0.04$, i.e. a closed model is preferred. A phantom UDM with affine EoS is ruled out well beyond $3\sigma$.

Probing The Cosmological Constant Through The Alcock-Paczynski Test Based on The Lyman-Alpha Forest [PDF]
Wen-Ching Lin, Michael Norman
Abstract: In recent years, the possibility of measuring the cosmological constant $\Omega_\Lambda$ through the application of the Alcock-Paczynski test to the Lyman Alpha (Ly$\alpha$) forest has been suggested (McDonald et al. 1999; Hui et al. 1999). Despite the theoretical uncertainties due to a few other cosmological parameters, some of the greatest difficulties we encounter concern the huge uncertainties due to cosmic variance and noise. In this paper, we propose a maximum likelihood estimation (MLE) method to deal with cosmic variance and noise using synthetic spectra of quasistellar objects (QSOs) from our cosmological hydrodynamic simulations. We demonstrate that the MLE method can overcome the cosmic variance problem. Applying the MLE method, we find that we have more than 90% probability to determine $\Omega_\Lambda$ within 20% error and approximately 66% probability to determine $\Omega_\Lambda$ within 10% error by using 30 pairs of QSO spectra when other cosmological parameters are assumed. Another important source of error is from noise in the flux spectra, and we have modeled the corresponding effect by studying artificial spectra with different kinds of noise added. We discover that the noise distribution does not have a significant effect on the final cross-correlation functions as long as the signal-to-noise ratio (S/N) is fixed. Finally, a preliminary test and discussion about the sensitivities to other cosmological parameters are included in this paper as well.
Deducing the asymptotic normalization constant of the 2+ subthreshold state in 16O from 12C + alpha elastic scattering [PDF]
Jean-Marc Sparenberg
Physics, 2004, DOI: 10.1016/j.nuclphysa.2004.04.077
Abstract: R-matrix analyses of the 12C + alpha elastic-scattering phase shifts deduced from a recent high-precision measurement of the differential cross sections are performed. The l=0 phase shifts constrain the R-matrix radius a around 5.85 fm, while the l=2 phase shifts lead to a strong constraint neither on a nor on the asymptotic normalization constant C of the 2+ subthreshold state (except for a loose upper limit). This contradicts previous R-matrix analyses of the 12C + alpha elastic scattering and explains the incompatibility between values of C obtained in these analyses.
Numerical modeling and characterization of a peculiar flow-like landslide
Mattia Ceccatelli (ORCID: orcid.org/0000-0002-7512-9971), Giovanni Gigli, Luca Lombardi, Massimiliano Nocentini & Teresa Salvatici
Geoenvironmental Disasters, volume 4, Article number: 23 (2017)

On March 25th, 2015, a rapid landslide occurred upstream of the village of Gessi-Mazzalasino, in the municipality of Scandiano, affecting two buildings. Rapid landslides, due to their high velocity and mobility, can affect large areas and cause extensive damage. Considering the often unpredictable kinematics of landslides, the post-failure behavior has been studied by many authors to predict the landslide runout phase for hazard assessment. With the aim of characterizing the Gessi-Mazzalasino landslide, field surveys were integrated with the results of laboratory tests. The geometric characteristics (thickness, area and volume) and kinematic aspects of the landslide were estimated by using a laser scanning survey and geomorphological data. To model the landslide and obtain its rheological parameters, a back analysis of the event was performed by means of a depth-averaged 3D numerical code called DAN3D. The results of the back analysis of the landslide propagation were validated with field surveys and velocity estimations along selected sections of the landslide. Finally, potential areas prone to failure or reactivation were identified, and a new simulation was performed that considered the back-calculated rheological parameters.

Rapid landslides are one of the most dangerous natural hazards and are one of the most frequent natural disasters in the world. Therefore, prediction of post-failure motion is an essential component of hazard assessment when a potential source of a mobile landslide is located. To assess the risk affecting the area, both numerical and empirical methods have been proposed, in order to predict the runout phase of the phenomenon. For the numerical modeling of the landslide, carried out with the DAN3D code, the best results were obtained by using a Voellmy rheological model, with a constant turbulence parameter (ξ) of 250 m/s² and a friction parameter (μ) comprised between 0.15 and 0.19. The rheological parameters obtained through dynamic back analyses were used to evaluate the propagation phase and the deposition areas of new potential landslides that could affect the same area as the 25th March 2015 event. The predicted runout length obtained by the DAN3D software was compared to runout lengths predicted by the Corominas (Can Geotech J 33:260–271, 1996), (Nat Hazards 19:47–77) and (UNICIV Report R-416, School of Civil & Environmental Engineering, UNSW, Sydney, Australia, 2003) empirical relations. All the data confirm that the impact area of possible future events will be smaller than the 2015 event, probably due to the safety measures established after the landslide.

Rapid landslides such as debris flows, debris avalanches, rock avalanches and flow slides are instability phenomena that affect superficial deposits as a consequence of intense and prolonged rainfall events. Rapid landslides are one of the most dangerous and frequent natural hazards in the world and can cause significant damage to goods and people in their path. Guzzetti (2000) showed that more than 80% of the deaths and injuries due to landslides in Italy were related to fast-moving failures, including debris flows, rockfalls, rockslides, and soil slips.
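The Voellmy rheology named in the abstract combines a frictional term and a velocity-squared "turbulence" term; in the form commonly associated with DAN-type models (Hungr, 1995) the basal resisting stress is τ = σμ + ρgv²/ξ. The Python sketch below is a minimal illustration of how the two back-calculated parameters enter that expression, not the DAN3D implementation; the bulk density is an assumed value, since it is not reported in this excerpt.

    # Voellmy basal flow resistance, tau = sigma*mu + rho*g*v^2/xi (Hungr, 1995).
    RHO = 1800.0  # bulk density, kg/m^3 (assumed for illustration)
    G = 9.81      # gravitational acceleration, m/s^2

    def voellmy_resistance(normal_stress_pa, velocity_ms, mu=0.17, xi=250.0):
        """mu: friction parameter (back analysis range: 0.15-0.19, dimensionless)
        xi: turbulence parameter (back analysis: 250 m/s^2)"""
        frictional = normal_stress_pa * mu          # velocity-independent term
        turbulent = RHO * G * velocity_ms**2 / xi   # grows as v^2, limits flow speed
        return frictional + turbulent

    # Illustrative: ~1 m flow depth (sigma ~ rho*g*h) moving at 5 m/s
    sigma = RHO * G * 1.0
    print(voellmy_resistance(sigma, 5.0))  # ~4.8 kPa of basal resistance

The v² term is what lets a single parameter pair reproduce both the observed runout distance and plausible flow velocities in a back analysis.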
At approximately 07:00 PM on March 25th, 2015, a rapid landslide was triggered upstream of the village of Gessi-Mazzalasino, in the municipality of Scandiano (Emilia Romagna region), in north-central Italy (44°34′45″N, 10°39′18″E, Fig. 1), due to days of heavy and persistent rainfall.

Fig. 1: a) Topographic map and b) geological map of the study area (from Servizio Geologico Sismico e dei Suoli, Emilia Romagna region, 2011)

The landslide was triggered at approximately 220 m a.s.l. and reached the village, causing slight damage to two buildings that were evacuated. One of the main challenges regarding the analysis of rapid landslides is that they are affected by different mechanisms during the failure and post-failure stages (Bandara et al., 2016). Many studies can be found in the literature that are focused on the analysis of landslides using experimental and mathematical methods. Empirical formulas derived from the statistical data of past landslides can provide valuable information (Hsü, 1975; Corominas, 1996), but these formulas are generally approximations, and the obtained information is usually limited to specific contexts. To better understand the effects of landslides, numerical modeling is a particularly useful tool that is capable of capturing the entire landslide process in both space and time. In this paper, we conduct a combined analysis, via both empirical and numerical approaches, to characterize the 2015 phenomenon and to obtain additional information for a risk assessment of the area. A back analysis of the post-failure behavior was conducted with the DAN3D code (McDougall & Hungr, 2004; Hungr & McDougall, 2009), using a trial and error procedure, to obtain the rheological parameters of the March 25th, 2015 phenomenon. DAN3D was developed for the simulation of extremely rapid landslides, even in complex topographies (McDougall & Hungr 2004, Salvatici et al. 2017). Since this code is also capable of simulating material motion and its corresponding rheological changes (Hungr, 1995), DAN3D was used to study the 2015 event. The simulation results were validated by means of runout lengths and flow velocities derived from empirical runout prediction methods. Finally, a forecast analysis was carried out to evaluate the characteristics of potential landslides that could occur in the area in the future, using the rheological parameters obtained by the back analysis of the 2015 event and the post-event digital elevation model (DEM) of the area.

Study area and landslide description
The study area is located in the municipality of Scandiano, in the Emilia Romagna region. The landslide affected the western slope of the Tresinaro Valley, above the village of Gessi-Mazzalasino. The area is geologically characterized by units of the External Liguride domain and the Neogene-Quaternary succession of the Northern Apennines. The External Liguride domain consists of thin calcareous turbiditic formations known as the Palombini Shales and Varicolored Shales, while the Neogene-Quaternary succession in this area is represented by the Gessoso-Solfifera Formation, the alluvial units of the Ravenna Subsynthem, and the Modena Unit (Amorosi, 1999). Several active and inactive landslide deposits are also located in the area; the 2015 landslide originated from one of these deposits (Fig. 1).

Landslide description
The landslide source area is located on the slope upstream of the village of Gessi-Mazzalasino (Fig. 2) at an elevation between 230 and 215 m a.s.l.
The triggering event was likely the heavy rainfall that occurred a few days before the event. The Cà de Caroli weather station, located 1 km northeast of the study area, recorded more than 100 mm of rainfall over a 10-day period (Fig. 3), with a peak rainfall intensity of approximately 60 mm on March 25th (Fig. 4); for comparison, the annual average rainfall at this station is approximately 750 mm.
Aerial photograph of the Gessi-Mazzalasino landslide (photo by G. Bertolini)
Daily rainfall intensity from 13/03/2015 to 30/03/2015
Hourly rainfall intensity on 25/03/2015
The source area is approximately 1700 m² and has an irregular shape. The average slope of the source area is approximately 17–20°, but the slope increases up to 30–35° in the triggering area. The initial failure was triggered at approximately 220 m a.s.l., approximately 20 m below the ridge. This altitude difference may reflect an increase in the pore pressure due to the hydraulic head, which may represent an additional instability factor for the slope in addition to the heavy rainfall. The flow-like landslide, after initially spreading in a flat area in the middle sector of the slope, moved through an existing impluvium and reached the inhabited area at the foot of the hill, 450 m below the source area. The thickness of the deposits ranges from a few decimeters up to 2–3 m in the most significant accumulation areas. Field evidence showed that the mass movement started as a sliding mass of the surface layers at the upper part of the slope and evolved into a rapid mud flow at the top of the impluvium, into which the flow was channeled, probably due to the addition and mixing of surface water during the mass movement. The source volume, approximately 10,000 m³, and the planimetric area of the landslide were estimated from the field observations and aerial images that were collected after the event.
Geotechnical characterization
Two soil samples were collected from the landslide deposits immediately after the event to perform a geotechnical characterization of the materials and recreate the initial flow conditions. The first soil sample was collected in the deposit area at the beginning of the channelized section, and the second sample was collected at the landslide toe (Fig. 5). These samples were subjected to the following laboratory tests: index property testing, Atterberg limits testing, grain size analysis, and direct shear testing.
Location of the soil samples (in red) and the geotechnical field investigation (in cyan); point 3: permeability measurement; point 4: permeability measurement and BST
In addition, three geotechnical in situ tests were carried out with the aim of collecting further information about the soil in its natural condition. Specifically, two permeability measurements were carried out with a compact constant-head permeameter (Amoozemeter), and one measurement of shear strength was carried out with a borehole shear test (BST). The in situ tests were carried out both outside of the landslide area (test number 3) and within the landslide deposit (test number 4). The BST tests were performed on soils in unsaturated conditions; at an equivalent depth, matric suction values $(u_a - u_w)$ were measured with tensiometers. The BST results were interpreted using the Fredlund et al. (1978) shear strength equation for unsaturated soils.
$$ \tau = c' + \left(\sigma - u_a\right)\tan\varphi' + \left(u_a - u_w\right)\tan\varphi_b $$ where τ is the shear strength, c′ is the effective cohesion, σ is the total normal stress, $u_a$ is the pore air pressure, φ′ is the effective friction angle, $u_w$ is the pore water pressure, and $\varphi_b$ is the angle expressing the rate of increase in strength related to matric suction. The BST test results show that the internal friction angle equals 33.8°. The procedure used for measuring $k_s$ in the field is called the constant-head well permeameter technique (Philip, 1985), and it is carried out in a borehole. This procedure allowed us to measure the amount of water flowing through the soil in a given time interval under soil-saturated conditions. The saturated permeability of the soil is evaluated with the Glover solution: $$ k_s = \frac{Q\left[\sinh^{-1}\left(h/r\right) - \left(\frac{r^2}{h^2}+1\right)^{1/2} + r/h\right]}{2\pi h^2} $$ where Q is the steady-state rate of water flow from the permeameter into the soil, sinh⁻¹ is the inverse hyperbolic sine function, h is the depth of water in the borehole, and r is the radius of the borehole. The measured saturated hydraulic conductivity ranges from 2.19 × 10⁻⁶ m/s to 5.19 × 10⁻⁷ m/s, corresponding to test points 3 and 4, respectively. Samples 1 and 2 are primarily unsorted silty soils (Fig. 6) and are classified as a clay of low plasticity (CL) and a silt (ML), respectively, following the Unified Soil Classification System (USCS; Wagner, 1957). The samples have plasticity index (IP) values ranging from 8 to 11.
Granulometric curves for the two soil samples (black line for sample 1 and red line for sample 2)
Direct shear tests were performed on reconstituted samples, using normal stresses between 40 and 80 kPa, determined from the in situ characteristics. The internal friction angle ranges from 29.1° to 30.1°, while the cohesion (c′) is very low. The results of the laboratory and in situ tests are shown in Table 1.
Table 1 Geotechnical parameters obtained from laboratory and in situ tests
In Fig. 7, the matrix compositions of the two soil samples from the 2015 landslide are compared with the compositions of earth flows, debris flows and mud flows from several areas of the world (Hungr et al., 2001).
Ternary plot of the two soil samples with textural classification (from Hungr et al., 2001)
Hungr et al. (2001) distinguished the different materials involved in flow-like landslides on the basis of several geotechnical material properties. Both samples from the 2015 landslide deposit had low plasticity indices (IP = 11 for sample 1 and IP = 8 for sample 2), and the liquidity index (IL) values were approximately 0.6 and 0.5, respectively. As shown in Fig. 7, the matrix compositions of the 2015 landslide samples fall in the textural field of earth flows and mud flows, while debris flows typically contain less than 30% silt and finer particles. A comparison of colloidal indices does not allow a clear distinction to be made between these two classes. Earth flows have clay contents ranging from 10% to 70%, averaging approximately 35%, while debris and mud flows are usually not plastic or are only weakly plastic. However, some mud flows derived from volcanic sources may have clay contents greater than 10% and plasticity indices of more than 10 (Jordan, 1994).
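The two field-test relations above translate directly into code. The sketch below is a minimal illustration, not part of the original study; the function names and sample values are ours, and SI-consistent units are assumed (stresses in kPa, lengths in m, flow rate in m³/s).

```python
import math

def fredlund_shear_strength(c_eff, sigma, u_a, u_w, phi_eff_deg, phi_b_deg):
    """Unsaturated shear strength (Fredlund et al., 1978), in kPa.

    c_eff: effective cohesion (kPa); sigma: total normal stress (kPa);
    u_a, u_w: pore air and pore water pressures (kPa); angles in degrees.
    """
    phi_eff = math.radians(phi_eff_deg)
    phi_b = math.radians(phi_b_deg)
    return c_eff + (sigma - u_a) * math.tan(phi_eff) + (u_a - u_w) * math.tan(phi_b)

def glover_ks(Q, h, r):
    """Saturated hydraulic conductivity k_s (m/s) from a constant-head
    well permeameter test, using the Glover solution quoted above.

    Q: steady-state flow rate into the soil (m^3/s);
    h: depth of water in the borehole (m); r: borehole radius (m).
    """
    bracket = math.asinh(h / r) - math.sqrt((r / h) ** 2 + 1) + r / h
    return Q * bracket / (2 * math.pi * h ** 2)

# Illustrative values only (not the measured field data): a 3 cm radius
# borehole with 15 cm of ponded water and a steady inflow of 2e-7 m^3/s
# gives k_s of about 2e-6 m/s, the magnitude reported in the text.
print(glover_ks(Q=2e-7, h=0.15, r=0.03))
```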
Therefore, the distinction between "mud" and "earth" should not be based solely on grain size distribution but should instead be derived from the context of each landslide class. Specifically, earth flows and mud flows may involve material of similar texture but are significantly different in other ways; in particular, the velocity of movement during an earth flow differs from that of a mud flow.
Velocity analysis
There are many equations in the literature for estimating the velocity of the frontal part of flow-like landslides (Hungr et al., 1984). These relations provide a useful parameter to validate the back analysis results (Salvatici et al., 2017; Nocentini et al., 2015). In this work, the flow velocity was estimated in the channelized section of the landslide, along the cross sections shown in Fig. 8, by using two methods: the superelevation of the debris surface at the channel bends (Johnson & Rodine, 1984) and the Poiseuille equation (Hungr et al., 1984).
a) Location of the cross sections for flow velocity estimation and b) section profiles
The Johnson & Rodine (1984) relation is based on the difference in the splash heights on the inside and outside of the bends in the flow path (Nocentini et al., 2015). The superelevation of the debris wave around the channel bends tends to be higher than that on the opposite side due to the centrifugal force (Fig. 9).
Empirical formula parameters for flow velocity estimation: a) splash heights on the inside and outside of the bends of the flow path (cross section 2); b) radius of curvature; and c) slope angle
Thus, in cross sections 1 and 2, the velocity can be calculated by using the following equation: $$ v = \sqrt{gR\cos\delta\tan\beta} $$ where β is the angle between the line connecting the tops of the debris waves at both sides of the section and a horizontal line, δ is the slope angle of the flow path, R is the radius of curvature and g is the gravitational acceleration. The radius of curvature of the channel was obtained by graphical processing of a 1:5000 topographic map. The Hungr et al. (1984) relation, which is based on the Poiseuille equation, can be used to evaluate the flow velocity in the straight sections (cross sections 3 and 4). This equation relates the velocity to the geometric characteristics of the path, the unit weight of the flow mass, and the viscosity of the flow mass: $$ v = \frac{\gamma\sin\delta\, H^2}{l\,\nu} $$ where γ is the unit weight of the material, obtained by laboratory testing, δ is the slope angle of the flow path, H is the flow depth, l is a constant based on the cross-sectional shape of the channel (3 for a broad channel and 8 for a semicircular channel) and ν is the dynamic viscosity of the flow (assumed to be 3, as indicated by Hungr et al., 1984). The results of the estimated velocity and the geometric parameters of the path are summarized in Tables 2 and 3.
Table 2 Velocity values of the flow obtained according to the Johnson & Rodine (1984) formula
Table 3 Velocity values of the flow obtained according to the Hungr et al. (1984) formula
Laser scanning survey
New high-resolution surveying techniques, such as terrestrial laser scanning, quickly provide detailed 3D terrain models that can be employed in runout analyses (Gigli et al., 2014).
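As a worked illustration of the two velocity estimates, the sketch below implements both relations. It is our own minimal rendering with illustrative input values (the paper's actual section geometries are in Tables 2 and 3); units follow the text, so with γ in kN/m³ and ν in kPa·s the second relation returns m/s.

```python
import math

def superelevation_velocity(R, delta_deg, beta_deg, g=9.81):
    """Johnson & Rodine (1984) bend velocity: v = sqrt(g R cos(delta) tan(beta)).

    R: radius of curvature of the bend (m); delta: slope angle of the
    flow path (deg); beta: tilt of the line joining the debris-wave tops (deg).
    """
    delta, beta = math.radians(delta_deg), math.radians(beta_deg)
    return math.sqrt(g * R * math.cos(delta) * math.tan(beta))

def straight_reach_velocity(gamma, delta_deg, H, l, nu):
    """Hungr et al. (1984) velocity in a straight reach:
    v = gamma sin(delta) H^2 / (l nu).

    gamma: unit weight (kN/m^3); H: flow depth (m); l: 3 for a broad
    channel, 8 for a semicircular one; nu: dynamic viscosity (kPa*s).
    """
    return gamma * math.sin(math.radians(delta_deg)) * H ** 2 / (l * nu)

# Illustrative numbers only: a 30 m bend radius on a 15 degree slope with
# a 10 degree superelevation tilt, and a 1 m deep flow in a broad channel.
v_bend = superelevation_velocity(R=30.0, delta_deg=15.0, beta_deg=10.0)
v_straight = straight_reach_velocity(gamma=20.0, delta_deg=15.0, H=1.0, l=3, nu=3.0)
```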
A laser scanning investigation was performed during two field surveys, on April 1st, 2015, and April 16th, 2015, by means of a long-range 3D terrestrial laser imaging sensor (RIEGL LMS-Z420i device), which is able to determine the position of up to 12,000 points per second, with a maximum angular resolution of 0.008° and an accuracy of ±10 mm at a maximum distance of 800 m. To completely cover the intervention areas and avoid shadow areas, four scans were captured from different positions (Fig. 10).
Laser scanner point cloud from four scan positions
Several cylindrical laser reflectors were placed on the hill slopes, and their coordinates were determined by performing a GPS survey. These tie points were later used to align the point clouds. This process is required for correctly georeferencing the point cloud in a chosen reference system and for merging two or more scans of the same object acquired from different points of view. On April 22nd, 2015, another GPS survey was carried out to reconstruct the exact geometry of the landslide body, define the source area and identify trenches that could develop into the edges of potential detachment areas. The data obtained from the laser scanning surveys were processed to obtain a high-resolution DEM of the area (Fig. 11).
DEM of the area
Some DEM sectors outside of the landslide were not acquired due to the presence of buildings and dense vegetation, particularly near the toe portion; therefore, it was necessary to integrate the model with an existing 1:5000 topographic map. During the data processing, the safety works on the landslide started. These works, in an initial phase, included the construction of an earthfill dam (Fig. 12) to prevent the excessive expansion of future landslides and to channel the flow of those potential landslides towards the existing channel. One last GPS survey, on September 9th, 2015, was carried out to detect the geometry and location of the earthfill dam, which was then incorporated into the digital terrain model to capture the modified shape of the slope and the post-landslide conditions.
Earthfill dam, built during safety operations in the landslide area
Runout simulation methods
Methods to predict landslide runout were grouped into two categories by Rickenmann (1999): the first group includes empirical methods that are based on statistical analyses of past events (Iverson, 1997; Corominas, 1996; Hunter & Fell, 2003), and the second group includes analytical methods that account for conservation of momentum and energy to simulate the propagation of the flow using 2D or 3D models (Hungr, 1995; McDougall & Hungr, 2004; Hungr & McDougall, 2009). In this work, the runout distance obtained by the DAN3D code was compared with that from empirical methods.
DAN3D numerical model
DAN3D is a 3D numerical model that uses a continuum Lagrangian approach to integrate the depth-averaged Saint-Venant equations. The mass conservation equation governs the model: $$ \frac{\partial h}{\partial t} + h\left(\frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y}\right) = \frac{\partial b}{\partial t} $$ where h is the flow depth, b is the bed-normal erosion-entrainment depth, $v_x$ and $v_y$ are the local flow velocities, and t is time. DAN3D employs a simple semi-empirical approach based on the concept of an "equivalent fluid", as defined by Hungr (1995). In this method, the landslide is considered as one material governed by simple rheological relations.
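DAN3D itself solves this balance with a meshless, SPH-like Lagrangian scheme; as a much simpler illustration of the same bookkeeping, the sketch below advances a 1D depth profile h(x, t) on a fixed grid with an explicit upwind step and no entrainment (∂b/∂t = 0). It is a toy model under our own assumptions, not the DAN3D algorithm.

```python
import numpy as np

def step_depth(h, v, dx, dt):
    """One explicit upwind step of 1D depth conservation,
    dh/dt + d(h v)/dx = 0, assuming v >= 0 everywhere.

    h: flow depth per cell (m); v: velocity per cell (m/s).
    """
    flux = h * v                                  # discharge per unit width
    dh = np.empty_like(h)
    dh[1:] = -(flux[1:] - flux[:-1]) / dx * dt    # upwind difference
    dh[0] = -flux[0] / dx * dt                    # nothing enters upstream
    return np.maximum(h + dh, 0.0)

# Illustrative run: a 20 m long, 1 m thick slab translating at 5 m/s.
x = np.arange(0.0, 100.0, 1.0)
h = np.where((x > 10.0) & (x < 30.0), 1.0, 0.0)
v = np.full_like(h, 5.0)
for _ in range(100):          # 10 s of motion; CFL = v*dt/dx = 0.5
    h = step_depth(h, v, dx=1.0, dt=0.1)
```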
Therefore, an internal frictional rheology is considered, as well as a basal rheology that depends on one or two parameters (depending on the chosen rheological model), which are established with a calibration procedure based on back analysis. The model requires three input files that describe the topography (path file), the source area (source file), and the number of materials used with their rheologies and erosion parameters (erosion file). With the aim of simulating different types of fast landslides, DAN3D can implement the following rheological relations: frictional, plastic, turbulent, Bingham and Voellmy. The selection of the rheological model to use during the dynamic modeling of a landslide is related to the expected event type and depends on the rheological characteristics of the landslide material. The post-failure processes that are triggered during the movement of rapid landslides are extremely complex, and the direct measurement of either the parameters of the involved materials or the characteristics of the landslide is impossible. Therefore, the best rheological model is determined by performing a back analysis of the investigated case or by referring to similar phenomena.
Empirical relations
There are several empirical methods that relate the geometric parameters of landslides to the runout distance; in this case study, three methods, presented by Corominas (1996), Rickenmann (1999) and Hunter & Fell (2003), were used to correlate the angle of reach (Fahrböschung) with the volume: $$ \log\frac{H}{L} = B\log V + A $$ $$ L = 1.9\,V^{0.16}H^{0.83} $$ $$ \frac{H}{L} = 0.69\tan\alpha_2 + 0.086 $$ where A and B are coefficients that depend on the landslide type and $\alpha_2$ is the slope inclination. The Fahrböschung was defined by Heim (1932) as the inclination of the line connecting the crest of the landslide source with the toe of the deposits and can be evaluated as the ratio between the elevation difference of the highest and lowest points of the flow (H) and the corresponding horizontal distance (L): $$ \tan\alpha = \frac{H}{L} $$ These methods may be applied in preliminary hazard assessments, and they may be compared with the dynamic analysis.
Findings and Results
Back analysis
The calibration procedure is based on a trial and error back analysis of the Gessi-Mazzalasino landslide, performed to identify the most suitable rheological parameters for describing the flow motion. The path file used for the back analysis was the DEM of the slope, obtained from a 1:5000 topographic map of the area. The source file was defined by the triggering area obtained from the field survey. Since no significant erosion was observed during the field surveys, the erosion file was neglected. The numerous case studies analyzed with the DAN3D code have shown that the Voellmy rheological model is particularly suitable for describing this type of phenomenon and that, accordingly, it should be used for modeling the Gessi-Mazzalasino landslide (Nocentini et al., 2015; Gigli et al., 2014). This model, introduced by Voellmy (1955) for snow avalanches, contains a friction term and a turbulence term: $$ \tau_{zx} = -\left(f\sigma_z + \frac{\rho g v_x^2}{\xi}\right) $$ where f is the frictional coefficient; ξ is a turbulence parameter representing all possible sources of velocity-dependent resistance in landslide dynamics; and ρ, g and v are the density, gravitational acceleration and velocity, respectively. For the determination of the rheological model parameters, a back analysis was performed based on the study of the deposits from the March 2015 collapse.
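A minimal sketch of the three empirical relations and of the Voellmy resistance term follows, written under our own naming and unit conventions; the coefficients A and B are left as inputs because the paper does not report which values were adopted.

```python
import math

def corominas_runout(H, V, A, B):
    """Corominas (1996): log10(H/L) = B log10(V) + A, solved for L (m).
    H: fall height (m); V: volume (m^3); A, B depend on landslide type."""
    return H / 10 ** (B * math.log10(V) + A)

def rickenmann_runout(H, V):
    """Rickenmann (1999): L = 1.9 V**0.16 H**0.83 (m)."""
    return 1.9 * V ** 0.16 * H ** 0.83

def hunter_fell_ratio(alpha2_deg):
    """Hunter & Fell (2003): predicted H/L = 0.69 tan(alpha2) + 0.086."""
    return 0.69 * math.tan(math.radians(alpha2_deg)) + 0.086

def voellmy_resistance(f, sigma_z, rho, v, xi, g=9.81):
    """Magnitude of the Voellmy basal resistance (Pa):
    tau = f sigma_z + rho g v^2 / xi.

    f: friction coefficient; sigma_z: bed-normal stress (Pa);
    rho: density (kg/m^3); v: velocity (m/s); xi: turbulence term (m/s^2).
    """
    return f * sigma_z + rho * g * v ** 2 / xi

# Illustrative check at the event scale: roughly 90 m of relief and
# V = 10,000 m^3 give a Rickenmann (1999) runout of a few hundred meters,
# the same order as the value reported in the Results below.
print(rickenmann_runout(H=90.0, V=1.0e4))
```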
Two main phases of motion were simulated, according to the field evidence, assuming that the mass movement started as a sliding mass involving the surface layers of the upper part of the slope and evolved into a rapid mud flow as it entered the impluvium, where the flow was channelized. Two basal shear resistances were chosen according to the landslide dynamics; one material rheology was used for the source area between 226 and 173 m a.s.l., and another was assumed for the channelized area between 173 and 131 m a.s.l. The Voellmy resistance parameters were adjusted by trial and error to achieve the best simulation match in terms of velocity, thickness of deposits and runout distance. The best match between the actual and simulated material distributions was obtained using a frictional coefficient of f = 0.19 for the upper material and a frictional coefficient of f = 0.15 for the lower material, while the turbulence term of ξ = 250 m/s² remained constant throughout the event (Fig. 13).
Comparison between the maximum runout distances obtained from the empirical and numerical methods with the field data (in red)
Additionally, DAN3D requires the internal friction angle and the unit weight of the material as input parameters. On the basis of the performed laboratory tests, we used an average internal friction angle of 30° and a unit weight of 20 kN/m³. The simulation results show that the landslide reaches the flat area located in the middle sector of the slope, where most of the mobilized material was deposited in a layer up to 2 m thick, and then moves through the impluvium until it reaches the inhabited area.
Runout distances
As shown in Fig. 13, there is good agreement between the simulation output and the field data obtained by the GPS survey (landslide path), for both the average thickness and the planar extension of the deposits, especially in the lower part of the slope. The results are summarized in Table 4 and show some differences between the methods used to evaluate the runout distance.
Table 4 Comparison between the calculated and measured runout distances
The Corominas (1996) and Rickenmann (1999) relations, which produced predicted runout distances of 311 m and 379 m, respectively, underestimate the maximum runout distance of the landslide. However, the Hunter & Fell (2003) relation, with a predicted runout distance of approximately 431 m, agrees with the modeling results and field data; therefore, it is the most suitable empirical method for describing the Gessi-Mazzalasino landslide.
Velocity calculations
By comparing the flow velocity values obtained by using the empirical equations with the numerical model results, a similar trend along the flow travel distance was observed (Salvatici et al., 2017). When the flow velocities obtained with the models were compared along the four cross sections, the DAN3D results were slightly higher than the velocities obtained by the empirical relations (Table 5).
Table 5 Comparison between the DAN3D and empirical velocity values
These differences derive from the assumptions made during the simulation phase, since it is difficult to model both the kinematic and the depositional parameters of the flow, which were the focus of our study. According to the model results, the flow reached a maximum velocity of approximately 8–12 m/s in the upper part and then slowed down as it entered the impluvium, finally stopping where the slope decreased.
These results agree with the testimonies of local residents, who estimated the flow velocity in the final meters of movement at approximately 0.2–1 m/s. Examples of velocity observations from various sources are shown in Fig. 14, together with the velocity values calculated in this paper. These velocities represent point observations or maximum values at randomly chosen locations and are not necessarily maxima for a given event.
Range of velocities for various types of flow-like landslides; the range of the estimated Gessi-Mazzalasino landslide velocity is shown in red (square symbols for resident testimonies, triangular symbols for empirical methods and circle symbols for numerical simulation)
A clear distinction can be made between extremely rapid processes such as debris flows, mud flows and debris avalanches and slow processes such as earthflows (Hungr et al., 2001).
Landslide characterization
As shown in the previous paragraphs, the distinction between "mud flows" and "earth flows" cannot be based solely on grain size distribution but must instead be derived in other ways, in particular from the velocity of movement. From the velocities obtained with the three direct methods (including resident testimonies and empirical methods), the Gessi-Mazzalasino landslide has an estimated velocity ranging between 0.2 and 3 m/s, while the velocities from the numerical simulation are higher, ranging between 4.5 and 12 m/s. These differences derive from the assumptions made during the simulation phase, resulting in an overestimation of the flow velocity. Analysis of the velocities allows a better classification of the Gessi-Mazzalasino event, which has characteristics of a landslide with behavior between those of mud flow and earthflow phenomena. The Gessi-Mazzalasino landslide material is an unsorted deposit composed of a mixture of sand, gravel and cobbles as well as varying proportions of silt and clay, and the landslide event is characterized by low plasticity and an intermediate velocity. To evaluate the characteristics of potential landslides that could occur in the area, another numerical simulation was performed with DAN3D using the rheological parameters obtained by the back analysis of the 2015 event. For this analysis, the most recent DEM obtained by the laser scanner survey was used, representing the topography of the slope after the landslide. During the analysis, a single volume of 10,000 m³ was investigated, based on the field survey and on the conservative assumption that failure occurs as a single volume. Figure 14 shows the maximum runout distances obtained from the numerical methods used in this study. The results for the potential landslide show that, due to the safety works, the landslide impact area is predicted to be smaller than that of the 2015 event (Fig. 15), with the mass stopping 70 s after the start of the simulation at the beginning of the impluvium, where the earthfill dam was constructed. A maximum thickness of approximately 2.8 m was reached in the flat area at the middle of the slope, and a maximum velocity of approximately 12 m/s was predicted.
Deposit flow thickness from the DAN3D simulation
Rapid landslides represent one of the most dangerous natural hazards and are among the most frequent natural disasters in the world. Therefore, prediction of post-failure motion is an essential component of hazard assessment when a potential source of a mobile landslide can be located.
On March 25th, 2015, at approximately 07:00 PM, a rapid landslide was triggered upstream of the village of Gessi-Mazzalasino, in the municipality of Scandiano (Emilia Romagna), and it reached the village, causing slight damage to two buildings that were evacuated for many days. To assess the risk affecting the area, different methods have been proposed to predict the runout phase of the phenomenon. A back analysis based on the path and the deposits of the 2015 event was performed in order to identify the optimal rheological model and to simulate the behavior of potential landslides. The dynamic modeling was carried out by using the DAN3D code, which estimated the extent of the impact area and mapped the distribution of landslide parameters. The predicted runout length obtained by the DAN3D software was compared to runout lengths predicted by the Corominas (1996), Rickenmann (1999) and Hunter & Fell (2003) empirical relations. There is good agreement between the simulation output and the observed field data for both the average thickness and the planar extension of the deposits, especially in the lower part of the slope. The Hunter & Fell (2003) results agree with the numerical results, but the Corominas (1996) and Rickenmann (1999) equations seem to underestimate the runout distance. To obtain more information about the 2015 landslide, the flow velocity was calculated along four cross sections by means of the superelevation of the debris surface at the channel bends (Johnson & Rodine, 1984) and the Poiseuille equation (Hungr et al., 1984) methods. We classified the Gessi-Mazzalasino landslide into an intermediate category between mud flow and earthflow phenomena on the basis of its velocity and textural composition. All the data, obtained by using a range of methods, confirm that the impact area of possible future events will be smaller than that of the 2015 event, since a potential landslide should stop upstream of the village of Gessi-Mazzalasino due to the safety works constructed after the landslide. The methodology presented in this paper could become a standard procedure in areas affected by different types of flow-like landslides, providing a complete description of the hazards.
Amorosi, A., M.L. Colalongo, F. Fusco, G. Pasini, and F. Fiorini. 1999. Glacio-eustatic control of continental-shallow marine cyclicity from late Quaternary deposits of the southeastern Po plain, northern Italy. Quat. Res 52 (1): 1–13.
Bandara, S., A. Ferrari, and L. Laloui. 2016. Modelling landslides in unsaturated slopes subjected to rainfall infiltration using material point method. Int. J. Numer. Anal. Meth. Geomech 40: 1358–1380. doi:10.1002/nag.2499.
Corominas, J. 1996. The angle of reach as a mobility index for small and large landslides. Can. Geotech. J 33: 260–271.
Fredlund, D.G., N.R. Morgenstern, and R.A. Widger. 1978. The shear strength of unsaturated soils. Can. Geotech. J 15 (3): 312–321.
Gigli, G., W. Frodella, F. Garfagnoli, et al. 2014. 3-D geomechanical rock mass characterization for the evaluation of rockslide susceptibility scenarios. Landslides 11: 131. doi:10.1007/s10346-013-0424-2.
Gigli, G., S. Morelli, S. Fornera, and N. Casagli. 2014. Terrestrial laser scanner and geomechanical surveys for the rapid evaluation of rock fall susceptibility scenarios. Landslides 11 (1): 1–14.
Guzzetti, F. 2000. Landslide fatalities and evaluation of landslide risk in Italy. Eng. Geol 58: 89–107.
Heim, A. 1932. Bergsturz und Menschenleben, 218. Zurich: Fretz und Wasmuth.
Hsü, K.J. 1975. Catastrophic debris streams (sturzstroms) generated by rockfalls. Geol. Soc. Am. Bull 86 (1): 129–140.
Hungr, O. 1995. A model for the runout analysis of rapid flow slides, debris flows and avalanches. Can. Geotech. J 32: 610–623.
Hungr, O., and S. McDougall. 2009. Two numerical models for landslide dynamic analysis. Comput. Geosci 35: 978–992. doi:10.1016/j.cageo.2007.12.003.
Hungr, O., G. Morgan, and R. Kellerhals. 1984. Quantitative analysis of debris torrent hazards for design of remedial measures. Can. Geotech. J 21: 663–677.
Hungr, O., S.G. Evans, M. Bovis, and J.N. Hutchinson. 2001. Review of the classification of landslides of the flow type. Environmental and Engineering Geoscience VII: 221–238.
Hunter, G.J., and R. Fell. 2003. The deformation behavior of embankment dams. UNICIV Report R-416, School of Civil & Environmental Engineering, UNSW, Sydney, Australia.
Iverson, R.M. 1997. The physics of debris flows. Rev. Geophys 35 (3): 245–296. doi:10.1029/97RG00426.
Johnson, A.M., and J.R. Rodine. 1984. Debris flow. In Slope Instability, ed. D. Brunsden and D.B. Prior, 257–361. Chichester: Wiley.
Jordan, R.P. 1994. Debris flows in the southern Coast Mountains, British Columbia: Dynamic behavior and physical properties, 258. Canada: Department of Geography, University of British Columbia.
McDougall, S., and O. Hungr. 2004. A model for the analysis of rapid landslide motion across three-dimensional terrain. Can. Geotech. J 41 (6): 1084–1097. doi:10.1139/T04-052.
Nocentini, M., V. Tofani, G. Gigli, F. Fidolini, and N. Casagli. 2015. Modeling debris flows in volcanic terrains for hazard mapping: the case study of Ischia Island (Italy). Landslides 12 (5): 831–846.
Philip, J.R. 1985. Approximate analysis of the borehole permeameter in unsaturated soil. Water Resour. Res 21: 1025–1033.
Rickenmann, D. 1999. Empirical relationships for debris flows. Nat. Hazards 19: 47–77.
Salvatici, T., S. Morelli, V. Pazzi, et al. 2017. Debris flow hazard assessment by means of numerical simulations: Implications for the Rotolon creek valley (northern Italy). J. Mt. Sci 14 (4). doi:10.1007/s11629-016-4197-7.
Voellmy, A. 1955. Über die Zerstörungskraft von Lawinen. Schweizerische Bauzeitung. English version "On the destructive force of avalanches", translated by R.E. Tate (1964), US Department of Agriculture Forest Service.
Wagner, A.A. 1957. The use of the Unified Soil Classification System by the Bureau of Reclamation, 125. London: Proceedings of the Fourth International Conference on Soil Mechanics and Foundation Engineering.
This work was carried out within a research contract between the municipality of Scandiano and the Department of Earth Sciences, University of Florence, entitled "Studio e Monitoraggio della frana di Gessi-Mazzalasino". The authors would like to thank the local people of the study area for their assistance during fieldwork. We would also like to thank Mr. Giovanni Cantoni for his help during the entire project and Dr. Giovanni Bertolini for providing the aerial photos of the landslide.
Department of Earth Sciences, University of Florence, Via G. La Pira, 4, 50121, Florence, Italy: Mattia Ceccatelli, Giovanni Gigli, Luca Lombardi, Massimiliano Nocentini & Teresa Salvatici
MC, LL, MN and TS contributed to the fieldwork and were responsible for collecting, integrating and interpreting the field data, as well as preparing the manuscript.
GG gave technical support and conceptual advice and contributed to the preparation of the manuscript. All authors read and approved the final manuscript. Correspondence to Mattia Ceccatelli.
Ceccatelli, M., Gigli, G., Lombardi, L. et al. 2017. Numerical modeling and characterization of a peculiar flow-like landslide. Geoenviron Disasters 4: 23. doi:10.1186/s40677-017-0087-8.
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)
Asian-Australasian Journal of Animal Sciences (AJAS) aims to publish original and cutting-edge research results and reviews on animal-related aspects of the life sciences. Emphasis is given to studies involving farm animals such as cattle, buffaloes, sheep, goats, pigs, horses and poultry, but studies with other animal species can be considered for publication if the topics are related to fundamental aspects of farm animals. Studies to improve human health using animal models are also publishable. AJAS encompasses all areas of animal production and fundamental aspects of animal sciences: breeding and genetics, reproduction and physiology, nutrition, meat and milk science, biotechnology, behavior, welfare, health, and livestock farming systems. AJAS is sub-divided into 10 sections.
- Animal Breeding and Genetics: quantitative and molecular genetics, genomics, genetic evaluation, evolution of domestic animals, and bioinformatics
- Animal Reproduction and Physiology: physiology of reproduction, development, growth, lactation and exercise, and gamete biology
- Ruminant Nutrition and Forage Utilization: rumen microbiology and function, ruminant nutrition, physiology and metabolism, and forage utilization
- Swine Nutrition and Feed Technology: swine nutrition and physiology, evaluation of feeds and feed additives, and feed processing technology
- Poultry and Laboratory Animal Nutrition: nutrition and physiology of poultry and other non-ruminant animals
- Animal Products: milk and meat science, muscle biology, product composition, food safety, food security and functional foods
African Indigenous Cattle: Unique Genetic Resources in a Rapidly Changing World
Mwai, Okeyo; Hanotte, Olivier; Kwon, Young-Jun; Cho, Seoae (p. 911, https://doi.org/10.5713/ajas.15.0002R)
At least 150 indigenous African cattle breeds have been named, but the majority of African cattle populations remain largely uncharacterized. As cattle breeds and populations in Africa adapted to various local environmental conditions, they acquired unique features. We know now that the history of African cattle was particularly complex, and while several of its episodes remain debated, there is no doubt that African cattle populations evolved dramatically over time. Today, we find a mosaic of genetically diverse populations, from the purest Bos taurus to the nearly pure Bos indicus. African cattle are now found all across the continent, with the exception of the Sahara and the river Congo basin. They are found on the Rift Valley highlands as well as below sea level in the Afar depression. These unique livestock genetic resources are in danger of disappearing rapidly following uncontrolled crossbreeding and breed replacement with exotic breeds. Breeding improvement programs for African indigenous livestock remain too few while, paradoxically, the demand for livestock products is continually increasing. Many African indigenous breeds are now endangered, and their unique adaptive traits may be lost forever.
This paper reviews the known unique characteristics of indigenous African cattle populations while describing the opportunities, the necessity and the urgency to understand and utilize these resources to respond to the needs of the people of the continent and to the benefit of African farmers.
Estimation of Genetic Parameters for Pork Belly Components in Yorkshire Pigs
Kang, H.S.; Lopez, B.M.; Kim, T.H.; Kim, H.S.; Kim, S.H.; Nam, K.C.; Seo, K.S. (p. 922, https://doi.org/10.5713/ajas.14.0678)
This study was conducted to estimate the genetic parameters for pork belly traits and muscles in Yorkshire pigs. Each pork belly was cut into nine parts perpendicular to the thoracic vertebrae (6th to 14th). Traits of belly muscles, including the deep pectoral, latissimus dorsi, cutaneous trunci, rectus abdominis, and external and internal abdominal oblique, from 382 purebred pigs were recorded and analyzed using the SAS package (9.1) and derivative-free restricted maximum likelihood methods. Heritability estimates for belly traits ranged from 0.27 to 0.49, while they were 0.12 to 0.66 for belly muscles. Moderate to high heritability estimates were noted for belly weight (0.33), belly length (0.28), and belly width (0.49). Among the belly muscles, the latissimus dorsi and deep pectoral, which are located only in the 6th to 9th vertebrae sections, were found to have heritability estimates ranging from 0.21 to 0.29 and 0.23 to 0.35, respectively. Strong heritability estimates were observed in the 7th to 13th sections of the cutaneous trunci muscle, ranging from 0.42 to 0.66. Genetic correlations of the latissimus dorsi m. with belly length were positive (0.50), while the cutaneous trunci m. with belly weight also revealed a positive relationship that ranged from 0.35 to 0.47. The estimated genetic parameters indicate that belly weight can be improved by genetic selection. Differences in the levels of heritability occurred among various parameters of Yorkshire pork belly, which should be considered when performing selection to improve pork belly quality. Moreover, these results can provide valuable information that can be used as the basis for further investigations to improve pork belly.
Multiple Linkage Disequilibrium Mapping Methods to Validate Additive Quantitative Trait Loci in Korean Native Cattle (Hanwoo)
Li, Yi; Kim, Jong-Joo (p. 926)
The efficiency of genome-wide association analysis (GWAS) depends on the power of detection for quantitative trait loci (QTL) and the precision of QTL mapping. In this study, three different strategies for GWAS were applied to detect QTL for carcass quality traits in the Korean cattle, Hanwoo: a linkage disequilibrium single locus regression method (LDRM), a combined linkage and linkage disequilibrium analysis (LDLA) and a $BayesC{\pi}$ approach. The phenotypes of 486 steers were collected for weaning weight (WWT), yearling weight (YWT), carcass weight (CWT), backfat thickness (BFT), longissimus dorsi muscle area, and marbling score (Marb). The genotypes of the steers and their sires were scored with the Illumina bovine 50K single nucleotide polymorphism (SNP) chips. For the former two GWAS methods, threshold values were set at a false discovery rate <0.01 on a chromosome-wide level, while a cut-off threshold value was set in the latter model such that the top five windows, each of which comprised 10 adjacent SNPs, were chosen with significant variation for the phenotype.
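A single-locus regression scan of the kind used in the LDRM (and in the Sapsaree study summarized below) reduces, in its simplest form, to regressing each adjusted phenotype on each SNP genotype in turn. The sketch below is our own minimal version, assuming genotypes coded as 0/1/2 allele counts and ignoring family structure and any multiple-testing correction beyond a raw p-value:

```python
import numpy as np
from scipy import stats

def single_locus_scan(residuals, genotypes):
    """Least-squares single-marker GWAS scan.

    residuals: (n,) phenotype residuals, pre-adjusted for fixed effects
    such as sex and year of birth; genotypes: (n, m) 0/1/2 allele counts.
    Returns per-SNP regression slopes and two-sided p-values.
    """
    n, m = genotypes.shape
    betas = np.zeros(m)
    pvals = np.ones(m)
    for j in range(m):
        result = stats.linregress(genotypes[:, j], residuals)
        betas[j], pvals[j] = result.slope, result.pvalue
    return betas, pvals

# Illustrative use on simulated data: 486 animals, 1,000 SNPs, one causal.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(486, 1000)).astype(float)
pheno = 0.5 * geno[:, 42] + rng.normal(size=486)   # SNP 42 is causal
betas, pvals = single_locus_scan(pheno, geno)
print(int(np.argmin(pvals)))                        # expected: 42
```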
Four major additive QTL from these three methods showed high concordance, found at 64.1 to 64.9 Mb on Bos taurus autosome (BTA) 7 for WWT, 24.3 to 25.4 Mb on BTA14 for CWT, 0.5 to 1.5 Mb on BTA6 for BFT and 26.3 to 33.4 Mb on BTA29 for BFT. Several candidate genes (i.e., glutamate receptor, ionotropic, AMPA 1 [GRIA1], family with sequence similarity 110, member B [FAM110B], and thymocyte selection-associated high mobility group box [TOX]) were identified close to these QTL. Our results suggest that the use of different linkage disequilibrium mapping approaches can provide more reliable chromosome regions to further pinpoint DNA markers or causative genes in these regions.
Whole Genome Association Study to Detect Single Nucleotide Polymorphisms for Behavior in Sapsaree Dog (Canis familiaris)
Ha, J.H.; Alama, M.; Lee, D.H.; Kim, J.J. (p. 936)
The purpose of this study was to characterize the genetic architecture of behavior patterns in Sapsaree dogs. The breed population (n = 8,256) has been constructed since 1990 over 12 generations and is managed at the Sapsaree Breeding Research Institute, Gyeongsan, Korea. Seven behavioral traits were investigated for 882 individuals. The traits were classified as a quantitative or a categorical group, and heritabilities ($h^2$) and variance components were estimated under the Animal model using the ASREML 2.0 software program. In general, the $h^2$ estimates of the traits ranged between 0.00 and 0.16. Strong genetic ($r_G$) and phenotypic ($r_P$) correlations were observed between nerve stability, affability and adaptability, i.e. 0.9 to 0.94 and 0.46 to 0.68, respectively. To detect significant single nucleotide polymorphisms (SNP) for the behavioral traits, a total of 134 and 60 samples were genotyped using the Illumina 22K CanineSNP20 and 170K CanineHD bead chips, respectively. Two datasets comprising 60 (Sap60) and 183 (Sap183) samples were analyzed, respectively, of which the latter was based on the SNPs that were embedded on both the 22K and 170K chips. To perform the genome-wide association analysis, each SNP was considered with the residuals of each phenotype, which were adjusted for sex and year of birth as fixed effects. A least squares based single marker regression analysis was followed by a stepwise regression procedure for the significant SNPs (p<0.01), to determine a best set of SNPs for each trait. A total of 41 SNPs were detected with the Sap183 samples for the behavior traits. The significant SNPs need to be verified using other samples, so as to be utilized to improve behavior traits via marker-assisted selection in the Sapsaree population.
Influence of Temperature and Humidity on Pregnancy Rate of Murrah Buffaloes under Subtropical Climate
Dash, Soumya; Chakravarty, A.K.; Sah, V.; Jamuna, V.; Behera, R.; Kashyap, N.; Deshmukh, B. (p. 943)
Heat stress has adverse effects on the fertility of dairy animals. The decline in fertility is linearly associated with an increase in the combination of temperature and humidity. The purpose of this study was to investigate the relationship between the temperature humidity index (THI) and the pregnancy rate of Murrah buffaloes in a subtropical climate. The effects of genetic and non-genetic factors, viz. sire, parity, period of calving and age group at first calving, were found to be non-significant for pregnancy rate. The effect of THI was found significant (p<0.001) on the pregnancy rate of Murrah buffaloes calved for the first time and on the overall pregnancy rate. The threshold THI affecting the pregnancy rate was identified as THI 75.
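The abstract does not state which THI formulation was used, so the sketch below adopts one common formulation (NRC, 1971) purely as an assumption, and pairs it with a rough illustrative decline of 7% in pregnancy rate per THI unit above the reported threshold of 75. The threshold and decline figures come from the text; the functional form is our own simplification.

```python
def thi(temp_c, rh_percent):
    """Temperature-humidity index. This is the NRC (1971) formulation,
    assumed here because the study does not report its THI equation."""
    return (1.8 * temp_c + 32.0) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26.0)

def illustrative_pregnancy_rate(thi_value, base=0.45, threshold=75.0, decline=0.07):
    """Rough sketch of the reported pattern: a ~0.45 pregnancy rate below
    the THI-75 threshold, falling about 7% per THI unit above it. The
    piecewise form is our own simplification, not the study's model."""
    if thi_value <= threshold:
        return base
    return max(base * (1.0 - decline) ** (thi_value - threshold), 0.0)

# Example: a hot, humid day (38 C, 60% RH) lies well above the threshold.
print(thi(38.0, 60.0))                           # about 91
print(illustrative_pregnancy_rate(thi(38.0, 60.0)))
```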
The months from October to March showed THI < 75 and were considered the non heat stress zone (NHSZ), while the months from April to September were determined to be the heat stress zone (HSZ) with $THI{\geq}75$. The lowest overall pregnancy rate (0.25) was obtained in July with THI 80.9, while the highest overall pregnancy rate (0.59) was found in November with THI 66.1. May and June were identified as the critical heat stress zone (CHSZ) within the HSZ, with a maximum decline (-7%) in pregnancy rate per unit increase in THI. The highest overall pregnancy rate was estimated as 0.45 in the NHSZ with THI values of 56.7 to 73.2. The pregnancy rate declined to 0.28 in the HSZ with THI 73.5 to 83.7. However, the lowest pregnancy rate was estimated as 0.27 in the CHSZ with THI values of 80.3 to 81.6.
Effects of Supplementation of Eucalyptus (E. Camaldulensis) Leaf Meal on Feed Intake and Rumen Fermentation Efficiency in Swamp Buffaloes
Thao, N.T.; Wanapat, M.; Kang, S.; Cherdthong, A. (p. 951)
Four rumen-fistulated swamp buffaloes were randomly assigned according to a $4{\times}4$ Latin square design to investigate the effects of Eucalyptus (E. Camaldulensis) leaf meal (ELM) supplementation as a rumen enhancer on feed intake and rumen fermentation characteristics. The dietary treatments were as follows: T1 = 0 g ELM/hd/d; T2 = 40 g ELM/hd/d; T3 = 80 g ELM/hd/d; T4 = 120 g ELM/hd/d. Experimental animals were kept in individual pens, and concentrate was offered at 0.3% BW while rice straw was fed ad libitum. The results revealed that voluntary feed intake and digestion coefficients of nutrients were similar among treatments. Ruminal pH, temperature and blood urea nitrogen concentrations were not affected by ELM supplementation; however, ELM supplementation resulted in a lower concentration of ruminal ammonia nitrogen. Total volatile fatty acids and propionate concentration increased with the increasing level of ELM (p<0.05), while the proportion of acetate decreased (p<0.05). Methane production decreased linearly (p<0.05) with the increasing level of ELM supplementation. The protozoa count and proteolytic bacteria population were reduced (p<0.05), while fungal zoospores and total viable, amylolytic, and cellulolytic bacteria were unchanged. In addition, nitrogen utilization and microbial protein synthesis tended to increase with the dietary treatments. Based on the present findings, it is suggested that ELM could modify rumen fermentation and could potentially be used as a rumen enhancer for methane mitigation and rumen fermentation efficiency.
Effects of Dietary Lycopene Supplementation on Plasma Lipid Profile, Lipid Peroxidation and Antioxidant Defense System in Feedlot Bamei Lamb
Jiang, Hongqin; Wang, Zhenzhen; Ma, Yong; Qu, Yanghua; Lu, Xiaonan; Luo, Hailing (p. 958)
Lycopene, a red non-provitamin A carotenoid mainly present in tomato and tomato byproducts, has the highest antioxidant activity among carotenoids because of its high number of conjugated double bonds. The objective of this study was to investigate the effect of lycopene supplementation in the diet on the plasma lipid profile, lipid peroxidation and antioxidant defense system in feedlot lamb. Twenty-eight Bamei male lambs (90 days old) were divided into four groups and fed a basal diet (LP0, 40:60 roughage:concentrate) or the basal diet supplemented with 50, 100, or 200 mg/kg lycopene. After 120 days of feeding, all lambs were slaughtered and sampled.
Dietary lycopene supplementation significantly reduced the levels of plasma total cholesterol (p<0.05, linearly), total triglycerides (TG, p<0.05) and low-density lipoprotein cholesterol (LDL-C, p<0.05), as well as the atherogenic index (p<0.001), whereas no change was observed in high-density lipoprotein cholesterol (p>0.05). The levels of TG (p<0.001) and LDL-C (p<0.001) decreased with the extension of the feeding time, and both showed a linear trend (p<0.01). The malondialdehyde level in plasma and liver decreased linearly with increasing lycopene inclusion levels (p<0.01). Dietary lycopene intake linearly increased the plasma antioxidant vitamin E level (p<0.001), total antioxidant capacity (T-AOC, p<0.05), and the activities of catalase (CAT, p<0.01), glutathione peroxidase (GSH-Px, p<0.05) and superoxide dismutase (SOD, p<0.05). The plasma T-AOC and the activities of GSH-Px and SOD decreased with the extension of the feeding time. In the liver, dietary lycopene inclusion showed similar antioxidant effects with respect to the activities of CAT (p<0.05, linearly) and SOD (p<0.001, linearly). Therefore, it was concluded that lycopene supplementation improved the antioxidant status of the lambs and optimized the plasma lipid profile; a dosage of 200 mg lycopene/kg feed might be desirable for growing lambs to prevent environmental stress and maintain normal physiological metabolism.
Nighttime Cooling Is an Effective Method for Improving Milk Production in Lactating Goats Exposed to Hot and Humid Environment
Sunagawa, Katsunori; Nagamine, Itsuki; Kamata, Yasuhiro; Niino, Noriko; Taniyama, Yoshihiko; Kinjo, Kazuhide; Matayoshi, Ayano (p. 966)
Heat production in ruminants follows a diurnal pattern over the course of a day, peaking 3 hours after afternoon feeding and then gradually declining to its lowest point prior to morning feeding. In order to clarify the cooling period most effective in reducing decreases in feed intake and milk production, experiments were carried out based on the diurnal rhythm of heat production and heat dissipation. In experiment 1, the effects of a hot environment on milk production were investigated. The animals were kept first in a thermoneutral environment ($20.0^{\circ}C$, 80.0%) for 12 days; they were then transitioned to a hot environment ($32^{\circ}C$, 80.0%) for 13 days before being returned to a second thermoneutral environment for a further 12 days. In experiment 2, the effectiveness of daytime cooling versus nighttime cooling for improving milk production in a hot environment was compared. While ten lactating Japanese Saanen goats (aged 2 years, weighing 41.0 kg) during early lactation were used in experiment 1, ten lactating goats (aged 2 years, weighing 47.5 kg) during mid-lactation were used in experiment 2. The animals were fed 300 g of concentrated feed and excessive amounts of crushed alfalfa hay cubes twice daily. Water was given ad libitum. The animals were milked twice daily. When exposed to the hot environment, milk yield and composition decreased significantly (p<0.05). Milk yield in the hot environment did not change with daytime cooling but tended to increase with nighttime cooling. Compared to daytime cooling, the milk component percentages under nighttime cooling were not significantly different, but the milk component yields under nighttime cooling were significantly higher (p<0.05). The results indicate that nighttime cooling is more effective than daytime cooling in reducing milk production declines in lactating goats exposed to a hot environment.
Lipid Sources with Different Fatty Acid Profile Alters the Fatty Acid Profile and Quality of Beef from Confined Nellore Steers
Fiorentini, Giovani; Lage, Josiane F.; Carvalho, Isabela P.C.; Messana, Juliana D.; Canesin, Roberta C.; Reis, Ricardo A.; Berchielli, Telma T. (p. 976)
The present study was conducted to determine the effects of lipid sources with different fatty acid profiles on the meat fatty acid profile and beef quality traits of Nellore. A total of 45 Nellore animals with an average initial body weight of $419{\pm}11kg$ (at $15{\pm}2mo$) were distributed in a completely randomized design consisting of 5 treatments and 9 replicates. The roughage feed was maize silage (600 g/kg on a dry matter [DM] basis) plus concentrate (400 g/kg on a DM basis). The dietary treatments were as follows: without fat (WF), palm oil (PO), linseed oil (LO), protected fat (PF), and soybean grains (SG). No effects of lipid sources were observed (p>0.05) on beef color, pH, water-holding capacity, or sarcomere length. Beef from cattle fed PO had greater shear-force values (p<0.05) compared to beef from cattle fed WF. Deposition of the main unsaturated fatty acids (oleic, linoleic, and linolenic) was greatest in treatments WF, SG, and LO, respectively, while the values of conjugated linoleic acid (CLA) were greater when animals were fed LO. The inclusion of LO in the diet enhances the concentration of CLA in the longissimus muscle and subcutaneous fat, besides improving the atherogenicity index and elongase activity. As such, LO can be used with the aim of improving the quality of beef from confined Nellore cattle. Conversely, the use of PO is not recommended, since it may increase the concentration of undesirable unsaturated fatty acids in muscle and subcutaneous fat, the shear force and the atherogenicity index.
Effects of Palm Kernel Expellers on Growth Performance, Nutrient Digestibility, and Blood Profiles of Weaned Pigs
Seo, J.; Kim, W.; Kim, J.; Kim, J.K.; Kim, S.C.; Jang, Y.; Jang, K.; Kim, K.; Kim, B.; Park, S.; Park, I.; Kim, M.K.; Seo, K.S.; Kim, H.B.; Kim, I.H.; Seo, S.; Song, M. (p. 987)
This experiment was conducted to investigate the effects of palm kernel expellers on the growth performance, nutrient digestibility, and blood profiles of weaned pigs. A total of 88 weaned pigs ($6.94{\pm}0.76kg$ body weight [BW]; 28 d old) were randomly allotted to 2 dietary treatments (4 pigs/pen; 11 replicates/treatment) in a randomized complete block design (sex as a block). The dietary treatments were a typical nursery diet based on corn and soybean meal (CON) and CON with 20% palm kernel expellers added (PKE). Pigs were fed for 6 wk using a 3-phase feeding program with declining diet complexity and with phases of 1, 2, and 3 wk, respectively. Blood was collected from 2 randomly selected pigs in each pen before weaning and on d 7 after weaning. Pigs were fed the respective dietary treatments containing 0.2% chromic oxide from d 29 to 35 after weaning. Fecal samples were collected from 2 randomly selected pigs in each pen daily for the last 3 days after the 4-d adjustment period. The measurements were growth performance, digestibility of dry matter, nitrogen and energy, white and red blood cell counts, packed cell volume, and incidence of diarrhea. PKE increased average daily gain (ADG) (246 vs 215 g/d; p = 0.06) and average daily feed intake (ADFI) (470 vs 343 g/d; p<0.05) and decreased the gain-to-feed ratio (G:F) (0.522 vs 0.628 g/g; p<0.05) during phase 2 compared with CON, but did not affect growth performance during phases 1 and 3.
During the overall experimental period, PKE increased ADG (383 vs 362 g/d; p = 0.05) and ADFI (549 vs 496 g/d; p<0.05) compared with CON, but did not affect G:F. However, no differences were found in the digestibility of dry matter, nitrogen, and energy between CON and PKE. PKE reduced the frequency of diarrhea (15% vs 25%; p = 0.08) for the first 2 wk after weaning compared with CON. Similarly, PKE decreased white blood cells (8.19 vs $9.56{\times}10^3/{\mu}L$; p = 0.07), red blood cells (2.92 vs $3.25{\times}10^6/{\mu}L$; p = 0.09), and packed cell volume (11.1% vs 12.6%; p = 0.06) on d 7 after weaning compared with CON. In conclusion, the addition of 20% palm kernel expellers to a nursery diet based on corn and soybean meal had no negative effects on the growth performance, nutrient digestibility, and blood profiles of weaned pigs.
Prediction of Eggshell Ultrastructure via Some Non-destructive and Destructive Measurements in Fayoumi Breed
Radwan, Lamiaa M.; Galal, A.; Shemeis, A.R. (p. 993)
Possibilities of predicting eggshell ultrastructure from direct non-destructive and destructive measurements were examined using 120 Fayoumi eggs collected from the flock at 45 weeks of age. The non-destructive measurements included the weight, length and width of the egg. The destructive measurements were breaking strength and shell thickness. The eggshell ultrastructure traits involved the total thickness of the eggshell layer, the thickness of the palisade layer, the cone layer and the total score. Prediction of the total thickness of the eggshell layer based on non-destructive measurements, individually or simultaneously, was not possible ($R^2=0.01$ to 0.16). The destructive measurements were far more accurate than the non-destructive ones in predicting the total thickness of the eggshell layer. Prediction based on breaking strength alone was more accurate ($R^2=0.85$) than that based on shell thickness alone ($R^2=0.72$). Adding shell thickness to breaking strength (the best predictor) increased the accuracy of prediction by 5%. The results obtained indicated that both non-destructive and destructive measurements were not useful in predicting the cone layer ($R^2$ did not exceed 18%). The maximum accuracy of prediction of the total score ($R^2=0.48$) was obtained from prediction based on breaking strength alone. Combining shell thickness and breaking strength into one equation did not help improve the accuracy of prediction.
The Effect of Bacillus-based Feed Additive on Growth Performance, Nutrient Digestibility, Fecal Gas Emission, and Pen Cleanup Characteristics of Growing-finishing Pigs
Upadhaya, S.D.; Kim, S.C.; Valientes, R.A.; Kim, I.H. (p. 999)
A Bacillus-based feed additive was evaluated for its efficacy on the growth performance, nutrient digestibility, fecal gas emission, and the time and amount of water consumed for cleaning the pens of growing-finishing pigs. A total of 120 growing pigs ($23.59{\pm}1.41kg$) were used in a 16-wk feeding trial. Pigs were randomly distributed into 1 of 2 treatments on the basis of body weight and sex. There were 12 replicate pens per treatment, with 5 pigs (3 barrows and 2 gilts) per pen. The dietary treatments were CON, which was the basal diet, and T1, which was CON + 62.5 ppm of a microbial feed additive that provided $1.47{\times}10^8cfu$ of Bacillus organisms per gram of supplement. During weeks 0 to 6, the average daily gain (ADG) in the T1 treatment was higher (p<0.05) than in CON, but no improvement in average daily feed intake (ADFI) or feed efficiency (G:F) was noted. During weeks 6 to 16, no difference (p>0.05) was noted in growth performance.
However, overall ADG was improved (p<0.05) and overall ADFI tended (p = 0.06) to improve in T1 compared with CON. At week 6, the coefficient of apparent total tract digestibility (CATTD) of dry matter (DM) and nitrogen (N) was increased (p<0.05) in T1 compared with CON. Fecal $NH_3$ emission was decreased (p<0.05) in T1 compared with CON at the end of the 6th and 15th weeks. The time and water consumed for washing the pens were decreased (p<0.05) in T1 compared with CON. In conclusion, supplementation with a Bacillus-based feed additive could improve the overall growth performance, increase the CATTD of DM, and decrease the fecal $NH_3$ content and the time and water consumed in washing the pens for growing-finishing pigs.
Effect of γ-Aminobutyric Acid-producing Lactobacillus Strain on Laying Performance, Egg Quality and Serum Enzyme Activity in Hy-Line Brown Hens under Heat Stress
Zhu, Y.Z.; Cheng, J.L.; Ren, M.; Yin, L.; Piao, X.S. (p. 1006)
Heat stress remains a costly issue for animal production, especially for poultry as they lack sweat glands, and alleviating heat stress is necessary for ensuring animal production in hot environments. A Lactobacillus strain that is a high ${\gamma}$-aminobutyric acid (GABA) producer was used to investigate the effect of a dietary GABA producer on the laying performance and egg quality of heat-stressed Hy-Line brown hens. Hy-Line brown hens (n = 1,164) at 280 days of age were randomly divided into 4 groups based on the amount of freeze-dried GABA producer added to the basal diet as follows: i) 0 mg/kg, ii) 25 mg/kg, iii) 50 mg/kg, and iv) 100 mg/kg. All hens were subjected to heat-stress treatment through maintaining the temperature and the relative humidity at $28.83{\pm}3.85^{\circ}C$ and 37% to 53.9%, respectively. During the experiment, the laying rate, egg weight and feed intake of the hens were recorded daily. On the 30th and 60th days after the start of the experiment, biochemical parameters, enzyme activity and immune activity in serum were measured. Egg production, average egg weight, average daily feed intake, feed conversion ratio and the percentages of speckled, soft-shell and misshapen eggs were significantly improved (p<0.05) by increasing supplementation of the dietary GABA producer. The shape index and the eggshell thickness, strength and weight increased linearly with increasing GABA-producer supplementation. The levels of calcium, phosphorus, glucose, total protein and albumin in the serum of the hens fed the GABA-producing-strain-supplemented diet were significantly higher (p<0.05) than those of the hens fed the basal diet, whereas the cholesterol level was decreased. Compared with the basal diet, GABA-producer strain supplementation increased the serum levels of glutathione peroxidase (p = 0.009) and superoxide dismutase. In conclusion, the GABA producer played an important role in alleviating heat stress; the isolated GABA-producing strain might be a potential natural and safe probiotic for improving laying performance and egg quality in heat-stressed hens.
Effects of Supplemental Beta-mannanase on Digestible Energy and Metabolizable Energy Contents of Copra Expellers and Palm Kernel Expellers Fed to Pigs
Kwon, W.B.; Kim, B.G. (p. 1014)
The purpose of this study was to determine the effect of ${\beta}$-mannanase supplementation on the digestible energy (DE) and metabolizable energy (ME) contents of copra expellers (CE) and palm kernel expellers (PKE) fed to pigs.
Six barrows with an initial body weight of 38.0 kg (standard deviation = 1.5) were randomly allotted to a $6{\times}6$ Latin square design with 6 dietary treatments and 6 periods. The six experimental diets were prepared in a $3{\times}2$ factorial treatment arrangement with 3 diets (a corn-soybean meal-based diet, a 30% CE diet, and a 30% PKE diet) and 2 concentrations of supplemental ${\beta}$-mannanase (0 or 2,400 U/kg). All diets had the same corn:soybean meal ratio of 2.88:1. The marker-to-marker procedure was used for fecal and urine collection, with 4-d adaptation and 5-d collection periods. No interactive effects between diet and ${\beta}$-mannanase were observed on the energy digestibility or the DE and ME contents of the experimental diets. However, diets containing CE or PKE had lower (p<0.05) DE and ME contents than the corn-soybean meal-based diet. The DE and ME contents in CE and PKE were not affected by supplemental ${\beta}$-mannanase. Taken together, we failed to find an effect of ${\beta}$-mannanase supplementation on energy utilization in CE and PKE fed to pigs. Effect of Different Tumbling Marination Methods and Time on the Water Status and Protein Properties of Prepared Pork Chops Gao, Tian;Li, Jiaolong;Zhang, Lin;Jiang, Yun;Yin, Maowen;Liu, Yang;Gao, Feng;Zhou, Guanghong 1020 The combined effect of tumbling marination method (vacuum continuous tumbling marination, CT; vacuum intermittent tumbling marination, IT) and effective tumbling time (4, 6, 8, and 10 h) on the water status and protein properties of prepared pork chops was investigated. The results showed that, regardless of tumbling time, the CT method significantly decreased the muscle fiber diameter (MD) and significantly increased the total moisture content, product yield, salt-soluble protein (SSP) solubility, and immobilized water component (p<0.05) compared with the IT method. As the effective tumbling time increased from 4 h to 10 h, the fat content and the MD significantly decreased (p<0.05), whereas the SSP solubility of the prepared pork chops first increased and then decreased. In addition, an interactive effect between the CT method and the effective tumbling time was observed for the chemical composition and the proportion of immobilized water (p<0.05). These results demonstrated that the CT method with 8 h of tumbling was the most beneficial for improving the muscle structure and water distribution, increasing the water-binding capacity and accelerating the marination of pork chops; it should therefore be chosen as the optimal treatment for the production of prepared pork chops. Relationships between Descriptive Sensory Attributes and Physicochemical Analysis of Broiler and Taiwan Native Chicken Breast Meat Chumngoen, Wanwisa;Tan, Fa-Jui 1028 Unique organoleptic characteristics such as rich flavors and chewy texture contribute to the higher popularity of native chicken in many Asian areas, while commercial broilers are well accepted due to their fast growth and higher meat yields. Sensory attributes of foods are often used to evaluate eating quality and serve as references during food selection. In this study, a three-phase descriptive sensory study was conducted to evaluate the sensory attributes of commercial broiler (BR) and Taiwan native chicken (TNC) breast meat, and to investigate correlations between these sensory attributes and instrumental measurements.
The results showed that for the first bite (phase 1), TNC meat had significantly higher moisture release, hardness, springiness, and cohesiveness than BR meat. After chewing for 10 to 12 bites (phase 2), TNC meat presented significantly higher chewdown hardness and meat particle size, whereas BR meat had significantly higher cohesiveness of mass. After swallowing (phase 3), TNC meat had higher chewiness and oily mouthcoat and fewer residual loose particles than BR meat. TNC meat also provided more intense chicken flavors. This study clearly demonstrates that descriptive sensory analysis provides more detailed and more objective information about the sensory attributes of meat from various chicken breeds. Additionally, sensory textural attributes vary between BR and TNC meat and are highly correlated with shear force and collagen content, which greatly influence eating quality. The poultry industry and scientists should thus be able to recognize the sensory characteristics of different chicken meats more clearly. Accordingly, based on each meat's unique sensory and physicochemical characteristics, future work might address how meat from various breeds could best satisfy consumer needs using various cooking methods. Effects of Mixing on the Aggressive Behavior of Commercially Housed Pigs Rhim, Shin-Jae;Son, Seung-Hun;Hwang, Hyun-Su;Lee, Jae-Kang;Hong, Joon-Ki 1038 In this study, we investigated the effects of mixing on the aggressive behavior of commercially housed pigs. The behavioral patterns of 36 groups of pigs (a total of 360 animals) were observed over 3 consecutive days directly after weaning ($25{\pm}1.2$ days of age), and 25 and 50 days later, with the aid of video technology. Fight latency and the total duration and frequency of fighting differed significantly among the age groups. Aggressive behaviors decreased in 75-day-old pigs compared to 25- and 50-day-old animals. Moreover, the dominance index (DI) was higher in 25-day-old and lower in 75-day-old pigs. A comparison of dominant (DI>0) and submissive (DI<0) pigs showed significant differences (p<0.05) in the major aggressive behaviors in all age groups. Dominant pigs were involved in more aggressive interactions, had longer fights, and initiated more fights than submissive pigs. Post-mixing aggressive behavior was altered by previous experience of mixing. Aggressive behavior and the DI are suitable measures for analyzing the effects of mixing on commercially housed growing pigs. Modelling Pasture-based Automatic Milking System Herds: The Impact of Large Herd on Milk Yield and Economics Islam, M.R.;Clark, C.E.F.;Garcia, S.C.;Kerrisk, K.L. 1044 The aim of this modelling study was to investigate the effect of large herd sizes (and land areas) on walking distances and milking interval (MI), and their impact on milk yield and economic penalties, when 50% of the total diet was provided from home-grown feed, either as pasture or as a grazeable complementary forage rotation (CFR), in an automatic milking system (AMS). Twelve scenarios consisting of 3 AMS herd sizes (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed 'moderate'; optimum pasture utilisation of 19.7 t DM/ha, termed 'high') and 2 rates of incorporation of a grazeable complementary forage system (CFS: 0, 30%; in the CFS, 65% of the farm is CFR and 35% of the farm is pasture) were investigated.
Walking distances, energy loss due to walking, MI, reduction in milk yield and income loss were calculated for each treatment based on information available in the literature. With moderate pasture utilisation and 0% CFR, increasing the herd size from 400 to 800 cows resulted in an increase in the total walking distance between the parlour and the paddock from 3.5 to 6.3 km. Consequently, MI increased from 15.2 to 16.4 h as herd size increased from 400 to 800 cows. High pasture utilisation (allowing for an increased stocking density) reduced the total walking distance by up to 1 km, and thus the MI by up to 0.5 h, compared to the moderate-pasture, 800-cow combination. High pasture utilisation combined with 30% of the farm in CFR reduced the total walking distance by up to 1.7 km and the MI by up to 0.8 h compared to the moderate-pasture, 800-cow combination. For moderate pasture utilisation, increasing the herd size from 400 to 800 cows resulted in a more dramatic milk yield penalty, with losses increasing from 2.6 to 5.1 kg/cow/d respectively, which incurred a loss of up to AU$1.9/cow/d. Milk yield losses were 0.61 kg for every km increase in total walking distance (voluntary return trip from parlour to paddock) and 0.25 kg for every one-hour increase in MI. High pasture utilisation combined with 30% of the farm in CFR increased milk yield by up to 1.5 kg/cow/d, thereby reducing the loss by up to $0.5/cow/d (c.f. the moderate-pasture, 800-cow scenario). Thus, it was concluded that the successful integration of a grazeable CFS with pasture has the potential to improve financial performance compared to a pasture-only, large-herd AMS. Struvite Crystallization of Anaerobic Digestive Fluid of Swine Manure Containing Highly Concentrated Nitrogen Lee, Eun Young;Oh, Min Hwan;Yang, Seung-Hak;Yoon, Tae Han 1053 In this study, the optimal operating factors for struvite crystallization for removing and recovering nitrogen and phosphorus from the anaerobic digestive fluid of swine manure containing highly concentrated nitrogen were determined. Each struvite crystallization experiment was conducted by placing 1,000 mL of digestion fluid in a 2,000 mL Erlenmeyer flask at various temperatures, pH values, and mixing speeds. Except for special circumstances, the digestion fluid was centrifuged (10,000 rpm, 10 min) and the supernatant was then used for the experiment at room temperature and 100 rpm. The optimal mole ratio of $PO_4{^{3-}}:Mg^{2+}$ was 1:1.5, and the effect of pH was similar over the range 9 to 11 when mixed for 1 hour. Under these conditions, the removal efficiencies of $NH_4{^+}-N$ and $PO_4{^{3-}}-P$ were 40% and 88.6%, respectively. X-shaped crystals were observed by light and scanning electron microscopy. In addition, the struvite crystal structure was confirmed through X-ray diffraction analysis.
Estimating health expectancy in presence of missing data: an application using HID survey Cristina Giudici, Maria Felice Arezzo & Nicolas Brouard. Statistical Methods & Applications volume 22, pages 517–534 (2013). In this article we estimate health transition probabilities using longitudinal data collected in France for the survey on handicaps, disabilities and dependencies from 1998 to 2001. Life expectancies with and without disability are estimated using a Markov-based multi-state life table approach with two non-absorbing states: able to perform all activities of daily living (ADLs) and unable or in need of help to perform one or more ADLs, and the absorbing state of death. The loss of follow-up between the two waves induces biases in the probability estimates: mortality estimates were biased upwards, and the incidences of recovery and of the onset of disability also appeared to be biased. Since individuals were not missing completely at random, we correct this bias by estimating the health status of drop-outs using a non-parametric model. After imputation, we found that at the age of 70 disability-free life expectancy decreases by 0.5 years, whereas total life expectancy increases by 1 year. The slope of the stable prevalence increases, but it remains lower than the slope of the cross-sectional prevalence. The gender differences in life expectancy did not change significantly after imputation. Globally, there is no evidence of a general reduction in ADL disability, as defined in our study. The added value of the study is the reduction of the bias induced by sample attrition. The debate on aging in Europe is currently paying considerable attention to the healthy life expectancy (HLE) of the elderly. Following the approach of the World Health Organization (WHO), health should be considered as having a dynamic nature (Footnote 1) and should be taken into consideration in the context of life, as the ability to fulfill actions or to carry out a certain role in society. This is the so-called functional approach, taken by the WHO in the elaboration of the international frame of reference on the matter. The most suitable indicator for measuring the state of health of a population is health expectancy, which measures the length of life spent in different states of health. The term is often used in a general sense for all indicators of health expressed in terms of expectancy, but the definition most frequently used in Europe is that of disability-free life expectancy (Perenboom 2003), where disability is defined as the impact of disease or injury on the functioning of individuals. In other words, a disability is the inability to accomplish tasks of daily living which someone of the same age is able to perform (Freedman 2006; Verbrugge 1989). To clarify our work, we distinguish between the model used to estimate the parameters of the so-called health process (i.e. the probability of becoming impaired, the probability of recovery, and the probabilities of dying from either the healthy or the unhealthy state) and the methods that use these parameters to estimate health expectancies. Health expectancy estimation methods. There are several methods for estimating health expectancies. Among them, the most commonly used are the Sullivan and the multi-state methods, based respectively on classical life tables and on longitudinal data.
The first method was pioneered by Wolfbein on the length of "working life" (Wolfbein 1949) and is described in detail in Sullivan (1971); it combines the prevalence of disability obtained through a cross-sectional survey with a period life table. The incidence of incapacity in the reference period is not taken into account; the prevalence observed at a given moment derives from past health transitions, and therefore depends on the history of the cohorts which make up the sample. Age-specific cross-sectional prevalences are analogous to age-specific proportions of survivors from the corresponding cohorts (Brouard 1986; Guillot 2003), in the sense that they are not subject to current mortality trends, but to delayed trends. The combination of a cross-sectional prevalence with a period life table yields the so-called Sullivan index, which is often and improperly called health expectancy. As stressed in the literature, such a health expectancy is not satisfactory for monitoring the evolution of the current health conditions of a population or for forecasting its future development. The second method, multi-state tables, was pioneered by Rogers (1975) and Willekens (1979) for migration and marital status, by Hoem and Fong (1976) for the multi-state table of working life, and by Brouard for the introduction of the period prevalence of labor participation (Brouard 1980; Cambois et al. 1999). Multi-state models are based on the analysis of the transitions between states, in competition with the probabilities of dying from each state. The information necessary for this type of analysis derives from longitudinal surveys. The result, in this case, is the so-called period (or stable) prevalence, which can be interpreted, analogously to the stationary population of a period life table, as the proportion of the disabled amongst the survivors of successive fictitious cohorts subject to the flows of entry into disability, recovery and death observed in the period under examination. Thus, the period health expectancy is the expected number of years to be spent in the healthy state by this fictitious cohort. The analogy with the period life expectancy, or simply "life expectancy", which is the expectancy of the distribution of deaths by age, is obvious. In classical life table analysis, the survivors at any age are supposed to be at the same risk of dying. When taking heterogeneity into account, the simplest model consists in considering two states (healthy vs unhealthy, able vs disabled), but assuming that the population in each state is homogeneous over time, i.e. at each age individuals face the same risks of changing their status. This corresponds to the common Markov hypothesis. Estimation of health transition probabilities. Almost all health expectancy research implicitly assumes that age-related health transitions are governed by a Markov process. Thus, the parameters of the health process are generally estimated by recovering the parameters of the embedded Markov process (Laditka and Hayward 2003). Computational issues concerning the estimation of health expectancies from longitudinal surveys were developed by Bonneuil and Brouard (1992), while Lièvre et al. (2003) provided a complete solution with standard errors. The latter authors developed the embedded Markov chain maximum likelihood procedures pioneered by Laditka and Wolf (1998).
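To make the Sullivan construction concrete, here is a minimal numerical sketch (all figures are invented, not taken from HID): person-years lived $L_x$ from a period life table are split by an observed cross-sectional prevalence of disability $\pi_x$.

```python
# A minimal sketch of the Sullivan index with invented numbers: person-years
# lived L_x from a period life table are combined with an observed
# cross-sectional prevalence of disability pi_x.
import numpy as np

# Five-year age groups 70-74, 75-79, ..., 95-99 (hypothetical values).
L = np.array([45_000, 40_000, 32_000, 22_000, 12_000, 4_000])  # person-years
pi = np.array([0.15, 0.22, 0.32, 0.45, 0.60, 0.75])            # prevalence
l70 = 10_000                                                   # survivors at exact age 70

le = L.sum() / l70                    # period life expectancy at age 70
dfle = (L * (1 - pi)).sum() / l70     # Sullivan disability-free life expectancy

print(f"LE(70) = {le:.1f} years, of which {dfle:.1f} disability-free")
```

Because $\pi_x$ reflects the disability history of earlier cohorts rather than current transition rates, the index reacts to changes in incidence only with a delay, which is exactly the limitation the multi-state approach addresses.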
They estimate parameterized transition probabilities following the Interpolation of Markov Chain approach (IMaCh) (Footnote 2). The IMaCh approach has recently been applied in several analyses dealing with health (Lièvre et al. 2007; Crimmins et al. 2009; Andrade 2010), including studies based on the French HID survey (Cambois and Lièvre 2004; Giudici and Arezzo 2009). In these studies, information on health status is given by interviews at different times, but non-random loss of follow-up between successive waves can induce biases in the statistical results. It is not uncommon for demographers to treat this problem by omitting records with missing values (listwise deletion). Unfortunately this approach has many drawbacks: even in the most favorable case, i.e. if the data are missing completely at random (MCAR), estimates suffer a loss of precision. If data are missing not at random (MNAR), estimates are also biased. Very good treatments of the issue of missing data can be found in Little and Rubin (2002), Howell (2007) and Allison (2001). Modern approaches, for example maximum likelihood via the EM algorithm and multiple imputation, impute missing values using statistical models. For more details on imputation via the EM algorithm see Schafer (1997), and on multiple imputation see Scheuren (2005) and Rubin (1987). A popular model choice for implementing imputation is multiple imputation by chained equations (MICE) (Raghunathan et al. 2002; Van Buuren and Oudshoorn 1999). With many variables, or when relations among variables are likely to be non-linear and interactions among regressors have a non-negligible effect, this method can be very laborious with no guarantee of success. Another problem is that variables often have distributions that are not easily captured by parametric models (Burgette and Reiter 2010). For all these reasons we preferred a non-parametric approach that can easily manage many different regressors, both categorical and numerical, and that naturally takes into account variable interactions and non-linear structures. In our study we estimate the probabilities of transition between different states of health for the population aged 70 and over in France during the period 1998–2001, following the multi-state table approach and using the IMaCh program. We based the analysis on the French HID survey, taking into account the loss of follow-up between the two survey waves and imputing a health state through a non-parametric model, Classification and Regression Trees (CART), first introduced by Breiman in 1984. Taking into account the heterogeneity of mortality due to health states, we compute life expectancies in different states of health and the period prevalence of disability implied by the estimated health transitions. We examine how health transitions are influenced by demographic variables, in order to estimate differences in health expectancy. The added value of the study is the reduction of the bias induced by the loss of follow-up between the two waves of the HID survey. The rest of the paper is organized as follows: Sect. 2.1 describes the HID survey and the drop-out mechanism, Sect. 2.2 describes the model, gives some relevant characteristics of the imputation procedure and outlines the method used for estimating the transition probabilities, Sect. 3 shows results and Sect. 4 concludes.
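A small simulation (ours, not from the paper) makes the point about listwise deletion concrete: dropping incomplete records is merely inefficient under MCAR but systematically biased under MNAR.

```python
# A small simulation, not from the paper: complete-case analysis is roughly
# unbiased under MCAR but biased under MNAR, where the chance of being
# missing depends on the (unobserved) value itself.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(50, 10, 100_000)        # a hypothetical health score

mcar = rng.random(y.size) < 0.3        # missingness independent of y
p_mnar = 1 / (1 + np.exp(-(y - 50) / 5))
mnar = rng.random(y.size) < p_mnar     # high values go missing more often

print("true mean:           ", round(y.mean(), 2))
print("complete-case, MCAR: ", round(y[~mcar].mean(), 2))  # close to the truth
print("complete-case, MNAR: ", round(y[~mnar].mean(), 2))  # biased downward
```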
Data and methods Our study is based on the national survey on handicaps, disabilities and dependency (HID), carried out in France by INSEE between 1998 and 2001, in collaboration with several research institutes including the Institut National d'Etudes Démographiques (INED) and the Institut National de Recherches Médicales (INSERM). The survey was carried out both in medico-social institutions and in private dwellings (Footnote 3), and aimed at describing disability and handicaps for the whole French population. Briefly, a first wave of the HID survey was carried out in late 1998; 14,611 people living in institutions were interviewed. The same persons were surveyed again in late 2000. In addition, between 300,000 and 400,000 people living in private dwellings filled out a brief questionnaire on "daily life and health" during the 1999 population census. After this filtering operation (Footnote 4), 16,924 respondents were interviewed, once in late 1999 and again in late 2001. Table 1 summarizes some relevant characteristics of the survey, while a detailed explanation can be found in Mormiche (1998). Two types of weights are available in HID: the first are representative of the total population living in France in late 1998 and late 1999, in institutional and ordinary settings respectively; the second are representative of the evolution (between the two waves) of the individuals interviewed at the baseline. For imputation we used the latter, whereas for health estimation we used the former. To carry out our analysis, we selected only the population aged 55 and over at the baseline. In the HID survey, health is measured through a functional approach: disability refers to the activities needed for independent living and personal care, and has been operationalized as the difficulty or inability to perform one of the five activities of daily living (ADL): bathing, dressing, eating, getting in/out of a bed or chair, and toileting. Three states are used in the analysis: 1, able to perform all ADLs; 2, unable or in need of help to perform one or more ADLs; and 3, deceased. Table 1 HID survey characteristics It is worth stressing that, after the second wave was completed, an in-depth analysis was performed by means of government records (vital statistics), so that information on each individual's death was recorded. This implies that if someone was not re-interviewed in the second wave, and therefore his or her health status is not known, he or she is certainly not dead. As can easily be seen from Table 1, there is a total of 2,356 people (477 in institutions and 1,879 in ordinary settings) whose health state at the second wave is missing. They could not be included in the estimation model without imputation: missing data at the second wave are automatically dropped by IMaCh, and this induces an upward bias in the probability of dying. In the following we discuss the drop-out mechanism and give some characteristics of the missing group which help evaluate the nature of this mechanism. We start with people in ordinary settings. At the second wave, INSEE decided to leave aside a large part (572 individuals aged 55 and over) of the department of Hérault in the region of Languedoc–Roussillon in the south of France. We have not found any official documentation explaining the reasons for this choice, but our guess is budget constraints. The remaining individuals were not re-interviewed either because they refused to answer or because they had changed address and were not found.
For the institutionalized individuals, the reason for not re-interviewing lies in a change of address (i.e. they moved to another institution or back to the household of origin). We were concerned that the refusal to answer or the change of address could depend on a worsening of health, and we therefore decided to proceed with imputation, using a model which controlled for some relevant variables (i.e. age and health status at baseline). Table 2 shows the distribution of ADL in the second wave conditional on those categorical covariates that we found to be relevant in the model. In the rightmost column are the p-values of two-sample proportion tests. They clearly show that the probability that an observation is missing is related to the values of some covariates, and therefore that the drop-outs are not random. Table 2 Distribution of missing values conditional on some covariates In order to reduce the bias due to the attrition, missing data for individuals known to be alive in the second wave but not interviewed were imputed through CART, as explained in detail in Sect. 2.2.1. Sample correction Let $I(ADL2w)$ be an indicator function taking value 1 if ADL at the second wave is missing and 0 otherwise. As stated in Sect. 2.1, once we had established the non-randomness of the drop-outs, we decided to impute the ADL at the second wave using a model which exploits the influence of the covariates. This simply means building a model for ADL at the second wave using only the individuals with a known health status. The dependent variable (i.e. ADL at the second wave) is binary: disabled or disability-free. Model building can be done in many different ways, for example using a logit or a probit model. We decided to use a non-parametric model for reasons that were partly disclosed in Sect. 1 and that are further discussed at the end of this paragraph. CART is a supervised classification algorithm introduced by Breiman in 1984. A supervised classification problem can be summarized as follows: for $n$ objects, characterized by a set of $k$ features $X = (X_1, X_2, \ldots, X_k)$, the class $j = 1, 2, \ldots, J$ to which each belongs is known a priori. Classes are generally indicated by the variable $Y$. The goal is to predict the class a new object belongs to, given its characteristics. A supervised classification algorithm is a mathematical rule which assigns a new object to a class $j$. A function $d(X)$, called a classifier, is built in such a way that it generates a partition of the feature space $X$ into $J$ non-overlapping subsets. CART is a binary recursive partitioning procedure capable of handling both continuous and nominal characteristics. Starting with the entire sample (the parent node), it divides it into two child nodes; each of these is then divided into two grandchildren. To split a node into two child nodes, CART always asks questions that have a "yes" or "no" answer. For example, the question "Is age $\le$ 72?" splits the tree's root, or parent node, into two branches, with "yes" cases going to the left child node and "no" cases to the right. A node is said to be final if it cannot be divided. The procedure stops when the tree reaches its maximum size. The fully grown tree is then pruned back in order to find the best final tree: the one that minimizes the so-called cost-complexity function, which takes into account both the misclassification rate of individuals and the total number of final nodes.
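A minimal sketch of this grow-and-prune workflow, using scikit-learn rather than the authors' software, follows; the data are synthetic stand-ins for the HID covariates (age and baseline ADL are the only predictors here, whereas the real model used more):

```python
# A minimal sketch, not the authors' code: grow and cost-complexity-prune a
# CART classifier, then impute drop-outs with the modal class of the
# terminal node they fall into (which is what predict() returns).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(55, 95, n)                 # age at baseline
adl_w1 = rng.integers(0, 2, n)                # 0 = able, 1 = disabled at wave 1
X = np.column_stack([age, adl_w1])
# Hypothetical truth: wave-2 disability depends on age and baseline ADL.
p = 1 / (1 + np.exp(-(0.08 * (age - 75) + 2.0 * adl_w1)))
y = (rng.random(n) < p).astype(int)

# 70/30 split, as in the paper, to estimate the misclassification rate.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

# criterion="gini" measures node heterogeneity; ccp_alpha > 0 prunes the
# full tree by penalizing the number of terminal nodes (cost-complexity).
tree = DecisionTreeClassifier(criterion="gini", ccp_alpha=0.005, random_state=0)
tree.fit(X_tr, y_tr)

print("test misclassification rate:", round(1 - tree.score(X_te, y_te), 3))
print("variable importance (age, baseline ADL):", tree.feature_importances_)

# Imputation step: route each non-respondent down the tree.
X_missing = np.column_stack([rng.integers(55, 95, 5), rng.integers(0, 2, 5)])
print("imputed ADL at wave 2:", tree.predict(X_missing))
```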
Note that the ensemble of the splitting questions forms a rule which allows any individual (including new ones) to be assigned to a specific final node. The original data have a certain level of heterogeneity: if all individuals belong to the same class, there is no heterogeneity in the data. Conversely, if individuals are uniformly distributed among the $J$ classes, heterogeneity reaches its maximum level. Heterogeneity can be measured in different ways; one of the most common measures is the Gini index, which is the one we used. Any split is made according to a variable $X_i$: the algorithm searches over the whole feature space looking for the optimal division, that is, the binary split that most reduces data heterogeneity. The impurity reduction can be measured, and it yields a ranking of the variables based on their capability to separate objects; this is called variable importance. An important issue is the capability of a tree to correctly classify a new individual. A measure of this generalization power is the misclassification rate, which is simply the proportion of misclassified individuals among all observed individuals. If the original sample is big enough, a good estimate of the true misclassification rate is obtained by randomly splitting the sample into two subsamples and using the first part of the data (normally 70% of it) to grow the tree and the second to test it. Table 3 Importance of independent variables As we briefly mentioned, we used CART for two reasons: the first is that it generally classifies more accurately than other models (Breiman et al. 1984), and the second is that it naturally takes into account interactions among variables, which we believed to be important when dealing with a task as complex as the determinants of health transitions. To confirm the first point we tried several logistic models and found that the best rate of correct classification was 77.8%, whereas CART reached 86%. Table 3 shows the variable importance in predicting the health status at the second wave: CART indicates that ADL at the baseline is by far the most important variable. Once we had estimated the model, we proceeded to impute the 2,356 people whose health status was unknown (1,879 in the ménage group and 477 in institutions). Imputation was done using the optimal splitting rules found. Since the values of the variables $X$ are known for each new individual, a unique assignment to a final node can be made, and the imputed ADL is the mode of the node. Table 4 shows the predictive ability of CART and indicates how reliable the imputation is: results are good, with a global error rate of about 19%. In order to provide an indication of state changes in the study, Table 5 shows the sample distribution by status in both waves, before and after imputation: most people began and ended disability-free; the recovery percentages change slightly after imputation, whereas the percentage of those who remained disabled increases. Table 4 CART misclassification rate on training and test samples Table 5 Distribution of people interviewed (ménage and institution) at the baseline by state at the beginning and end of the interval Transition probabilities estimation method We estimate the age-specific flows of entry into and exit from disability, and the matrix of transition probabilities between good health (coded 1), disability (coded 2) and death (coded 3), employing the IMaCh program.
The probability for an individual aged $x$, observed in state $i$ at the first wave, to find him/herself in state $j$ at the second wave is denoted by $p_{ij}^x$, and the transition probabilities are estimated through a series of $3\times 3$ matrices:
$$p_{ij}^x = \begin{pmatrix} p_{11}^x & p_{12}^x & p_{13}^x \\ p_{21}^x & p_{22}^x & p_{23}^x \\ 0 & 0 & 1 \end{pmatrix}$$
The first and second rows represent transitions for individuals who begin the interval non-disabled and ADL-disabled, respectively. The third row represents the absorbing state of death. The transition probabilities are then parameterized using the following multinomial logit model:
$$\ln \frac{p_{ij}^x}{p_{ii}^x} = \alpha_{ij} + \beta_{ij} x, \qquad i \ne j$$
The IMaCh software provides standard errors for the estimated parameters, which are then used to derive standard errors for the life expectancies implied by the transition probabilities. This is an important feature, since it allows one to assess whether results are statistically meaningful. On the basis of the estimated transition probabilities, IMaCh provides the so-called period (or stable) prevalence, which can be interpreted, analogously to the stationary population of a life table, as the proportion of the disabled amongst the survivors of successive fictitious cohorts subject to the flows of entry into disability and recovery observed in the period under examination. In other words, the stable prevalence is implied by the health transitions observed during the survey, whereas the observed prevalence synthesizes the history of disability onset, recovery and mortality of the population. Thus, the comparison between the stable and the observed prevalence allows one to form hypotheses about the future trend of the health prevalence of the cohorts under examination (Lièvre et al. 2003). Probabilities of transition For each age we estimate the probability of death within a year from each initial health status and compare the results with the 1998–2000 national age-specific mortality, as shown in Fig. 1. The total mortality rate is obtained by weighting each status-based probability of death by the proportion of people in each health status, given by the observed HID prevalence. Before CART imputation, mortality seems to be overestimated: the reason is that, since IMaCh automatically excludes individuals with missing ADL, the denominator of the mortality rate is biased downward. The bias is reduced after the imputation. Figure 2 shows the transition probabilities from the different initial states of health. As expected, the probability of dying is higher among the disabled. Regardless of the initial health state, the slope decreases after imputation, but the reduction is larger for those who were disabled at the baseline. The imputation mainly modifies the transition rates at older ages, except for recovery; in that case the intercept is reduced, and the slope does not change significantly. Fig. 1 Death rates by age for the total population, with 95% confidence intervals, compared with the annual national probability of death obtained from French vital statistics. Fig. 2 Transition probabilities by age for disabled and non-disabled individuals, with 95% confidence intervals.
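A minimal numerical sketch of this machinery (not IMaCh itself, and with invented $\alpha_{ij}$, $\beta_{ij}$ values rather than the fitted HID estimates): build the age-specific matrix from the logit parameterization above and push a fictitious cohort of 70-year-olds through it.

```python
# A minimal sketch, not IMaCh: transition matrices from the multinomial
# logit above, iterated over ages to approximate status-based life
# expectancies and the period (stable) prevalence. All parameter values
# are invented for illustration.
import numpy as np

# (alpha_ij, beta_ij) for each transition i -> j, with x = age in years.
params = {(1, 2): (-7.0, 0.055),   # onset of disability
          (1, 3): (-10.0, 0.085),  # death from the disability-free state
          (2, 1): (1.0, -0.040),   # recovery
          (2, 3): (-7.5, 0.080)}   # death from the disabled state

def transition_matrix(age):
    """P[i-1, j-1] = p_ij at this age; state 3 (death) is absorbing."""
    P = np.zeros((3, 3))
    P[2, 2] = 1.0
    for i in (1, 2):
        odds = {j: np.exp(a + b * age)
                for (i0, j), (a, b) in params.items() if i0 == i}
        P[i - 1, i - 1] = 1.0 / (1.0 + sum(odds.values()))   # p_ii
        for j, o in odds.items():
            P[i - 1, j - 1] = P[i - 1, i - 1] * o            # p_ij / p_ii = exp(.)
    return P

state = np.array([1.0, 0.0, 0.0])      # a cohort starting disability-free at 70
dfle = dle = 0.0
for age in range(70, 111):
    dfle += state[0]
    dle += state[1]
    if age % 10 == 0:
        print(f"age {age}: disability prevalence {state[1] / (state[0] + state[1]):.3f}")
    state = state @ transition_matrix(age)   # one-year step

print(f"DFLE(70) ~ {dfle:.1f} y, disabled LE ~ {dle:.1f} y (illustrative only)")
```

Iterating cohorts that start in either state in this way is how the period prevalence examined below is obtained: the two curves approach a common age profile that depends only on the current transition rates.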
Health Expectancies As shown in Fig. 3, at all ages our estimates of LE overlap almost perfectly with those based on national statistics: at age 70, our estimate after the CART correction is 15.21 years (95% CI [14.67–15.75]), compared with 15.17 years from the 1998–2000 French life table. The estimate before imputation was lower, due to the overestimation of mortality. According to our model, people aged 70 can expect to live 9.37 years in the disability-free state, given that they were in that state initially, but the expectation is reduced to 5.53 years if they were in the disabled state at age 70. The corresponding health expectancies for the disabled state are 6.10 and 8.64 years, respectively (Table 6). Fig. 3 Total life expectancies from the HID survey compared with the 1998 national life expectancy. Table 6 Life expectancies according to the initial state of health, before and after the imputation of a health state (disability-free is coded 1 and disabled is coded 2) Implied prevalence The impact of continuing the rates of disability onset, recovery and death on ADL prevalence is shown in Figs. 4 and 5: as expected, the transition probabilities from both initial states (disability-free and disabled) to a final state of disability at age x+h (with h = 12 months) converge to the so-called period, or stable, prevalence of disability. The period prevalence is obtained by simulating cohorts aged 70 years and over which experience over time the observed health transitions. As widely stressed in the literature, the comparison of the stable with the observed prevalence provides an indication of the evolution of the age-specific prevalence of disability, if the current transition rates of disability onset and recovery continue indefinitely (Lièvre et al. 2003; Jagger et al. 2003; Laditka and Laditka 2006; Manton and Land 2000; Minicuci et al. 2004; Reynolds et al. 2005; Crimmins et al. 2009). Figure 4 compares the observed and stable prevalences of disability before and after the correction. Our imputation of a health state for lost individuals modifies the slope of the curves, but the effect on the stable prevalence is stronger than the effect on the observed prevalence. Figure 5 focuses on the results after the estimation of the missing health statuses: the slope of the stable prevalence seems to be always lower than the slope of the cross-sectional prevalence, and globally there is no evidence of a general reduction in ADL. Fig. 4 Observed and stable prevalence before and after the estimation of a state of health for those who are lost between the two waves of the HID survey. Fig. 5 Observed and stable prevalence after the estimation of a state of health for those who are lost between the two waves of the HID survey, with 95% confidence intervals. Gender disparities As stressed by Giudici (2006) and Giudici and Arezzo (2009), holding all the other independent variables constant, disability is lower for men, and our analysis shows that the gender differences in expected life free of disability did not change significantly after imputation: Fig. 6 shows the transition probabilities for each sex from the different initial states of health, before and after imputation. Fig. 6 Age-specific yearly incidences of mortality for men and women before and after the imputation of a health state for lost individuals known to be alive, with 95% confidence intervals. Before imputation, the probability of death for disabled men at age 70 is close to that of women at age 78. But if men are disability-free, their probability of dying at 70 is close to that of women at the same age.
After imputation, mortality decreases for both sexes, but the gender gap at the different ages remains almost the same (Fig. 6). Globally, for both sexes the probability of dying is higher among the disabled than among the non-disabled. In both cases women show a higher incidence of disability onset and a lower incidence of recovery than men. These results are reflected in the estimates of the health expectancies and of the stable prevalence implied by the computed probabilities: Table 7 shows the gender differences in health expectancies before and after imputation. Table 7 Life expectancies for men and women according to the initial state of health, after the imputation of a health state (disability-free is coded 1 and disabled is coded 2) It is clear that in both cases the extra years lived by women (about 3.6 years at age 70) are spent in disability. The HID survey, like other surveys dealing with health, is characterized by quite a substantial loss of individuals between waves. This attrition biases the transition probability estimates and, consequently, the health expectancies in the different states of health are also biased. In this work, health is measured through a functional approach, and people are considered disabled if they are unable or in need of help to perform one or more ADLs. In order to reduce the bias due to the attrition, we assigned a state of health, through CART, to individuals known to be alive in the second wave whose state of health was unknown. The correction reduces the bias due to the overestimation of mortality and recovery on the one hand, and to the underestimation of the onset of disability on the other. According to our model, people aged 70 can expect to live 9.37 years in the disability-free state, given that they were in that state initially, but the expectation is reduced to 5.53 years if they were in the disabled state at age 70. The corresponding health expectancies for the disabled state are 6.10 and 8.64 years, respectively. Regardless of the initial state of health, people aged 70 can expect to live 15.2 years, of which 6.6 are spent in disability. The main effect of the CART imputation on the health expectancies is an increase in life expectancy of 0.62 years, resulting from an increase in disabled life expectancy of almost 1.2 years combined with a reduction in disability-free life expectancy of 0.5 years. After the imputation, the slope of the stable prevalence seems to be always lower than the slope of the cross-sectional prevalence, and globally there is no evidence of a general reduction in ADL. The gender differences in expected life free of disability did not change significantly after imputation. Nevertheless, women show a higher onset of disability and a lower recovery, and these results are reflected in the estimates of the health expectancies and the stable prevalence. Footnote 1: The social, economic and environmental consequences of illness can be summarized in the sequence: illness or disorder, impairment or invalidity, disability, handicap. According to this sequence, handicap has its origins in a disease (including accidents or other causes of moral or physical trauma) which, as a consequence, causes problems in body functions or structure such as significant deviation or loss (impairment or invalidity). Invalidity constitutes in turn a greater or lesser difficulty in performing daily activities (disability).
Every dimension of handicap is effectively defined in relation to a norm: for example, a disability consists in the reduction of the ability to carry out certain tasks in the way considered normal for a human being. Footnote 2: IMaCh is a publicly available computer program introduced by Brouard and Lièvre (2002) and mostly used for the estimation of health expectancy from longitudinal surveys. It allows one to estimate transition probabilities using a discrete-time embedded Markov chain approach. Transitions are supposed to occur at any time, and death is always an additional competing risk. See for example (Andrade 2010; Crimmins et al. 2009; Molla and Madans 2008; Yong and Saito 2012) for some interesting applications. Footnote 3: In the following we refer to the two groups as institution and ménage, respectively. Footnote 4: The brief questionnaire was administered with the intent of quantifying the disabled population and correctly sampling it. References
Allison PD (2001) Missing data. Sage Publications, Thousand Oaks
Andrade FCD (2010) Measuring the impact of diabetes on life expectancy and disability-free life expectancy among older adults in Mexico. J Gerontol Ser B: Psychol Sci Soc Sci 65B(3):381–389
Bonneuil N, Brouard N (1992) Methods of calculation of health expectancy: application to the LSOA surveys (1984–86–88). In: 5th meeting of the international network on health expectancy (REVES-5): future uses of health expectancy indices, Ottawa
Breiman L, Friedman JH, Olshen RA, Stone CJ (1984) Classification and regression trees. Wadsworth International Group, Belmont
Brouard N (1980) Espérance de vie active, reprises d'activité féminine: un modèle. Revue économique 31:1260–1287
Brouard N (1986) Structure et dynamique des populations. La pyramide des années à vivre, aspects nationaux et exemples régionaux. Espace Popul Soc 2:157–168
Brouard N, Lièvre A (2002) Computing health expectancies using IMaCh (a maximum likelihood computer program using interpolation of Markov chains), version 0.71a. INED and EUROREVES, Paris, France
Burgette LF, Reiter JP (2010) Multiple imputation for missing data via sequential regression trees. Am J Epidemiol 172(9):1070–1076
Cambois E, Robine JM, Brouard N (1999) Life expectancies applied to specific statuses. A history of the indicators and methods of calculation. Popul Engl Sel 11:7–34
Cambois E, Lièvre A (2004) Risques de perte d'autonomie et chances de récupération chez les personnes âgées de 55 ans ou plus: une évaluation à partir de l'enquête Handicaps, incapacités, dépendance. Etudes et Résultats 349:1–11, DREES, Paris
Crimmins EM, Hayward MD, Hagedorn A, Saito Y, Brouard N (2009) Change in disability-free life expectancy for Americans 70 years old and over. Demography 46(3):627–646
Freedman VA (2006) Late-life disability trends: an overview of current evidence. In: Field MJ, Jette AM, Martin L (eds) Workshop on disability in America: a new look—summary and background papers. National Academies Press, Washington. URL: http://www.nap.edu/catalog/11579.html
Giudici C (2006) Les déterminants socio-démographiques de la santé aux grands âges. Working paper, Les Lundis de l'INED, Paris
Giudici C, Arezzo MF (2009) Social inequalities in health expectancy of elderly: evidence from the HID survey. In: IUSSP-UIESP, XXXVI international population conference, Marrakech
Guillot M (2003) The cross-sectional average length of life (CAL): a cross-sectional mortality measure that reflects the experience of cohorts. Popul Stud 57(1):41–54
Hoem J, Fong M (1976) A Markov chain model of working life tables. Working paper 2, Laboratory of Actuarial Mathematics, University of Copenhagen
Howell DC (2007) The analysis of missing data. In: Outhwaite W, Turner S (eds) Handbook of social science methodology. Sage, London
Jagger C, Goyder E, Clarke M, Brouard N, Arthur A (2003) Active life expectancy in people with and without diabetes. J Publ Health Med 25:42–46
Laditka SB, Wolf D (1998) New methods for analyzing active life expectancy. J Aging Health 10(2):214–241
Laditka SB, Hayward MD (2003) The evolution of demographic methods to calculate health expectancies. In: Robine JM, Jagger C, Mathers CD, Crimmins EM, Suzman RM (eds) Determining health expectancies. Wiley, London
Laditka SB, Laditka JN (2006) Effects of diabetes on healthy life expectancy: shorter lives with more disability for both women and men. In: Yi Z, Crimmins EM, Carriere Y, Robine JM (eds) Longer life and healthy ageing. Springer, Dordrecht, pp 71–90
Lièvre A, Brouard N, Heathcote CR (2003) The estimation of health expectancies from cross-longitudinal surveys. Math Popul Stud 10:211–248
Lièvre A, Jusot F, Barnay T, Sermet C, Brouard N, Robine JM, Brieu MA, Forette F (2007) Healthy working life expectancies at age 50 in Europe: a new indicator. J Nutr Health Aging 11(6):508–514
Little RJA, Rubin DB (2002) Statistical analysis with missing data, 2nd edn. Wiley, New York
Manton KG, Land K (2000) Active life expectancy estimates for the U.S. elderly population: a multidimensional continuous mixture model of functional change applied to completed cohorts, 1982–1996. Demography 37:253–265
Minicuci N, Noale M, Pluijm SMF, Zunzunegui MV, Blumstein T, Deeg DJH, Bardage C, Jylha M (2004) Disability-free life expectancy: a cross-national comparison of six longitudinal studies on ageing. The CLESA project. Eur J Ageing 1:37–44
Molla MT, Madans JH (2008) Estimating healthy life expectancies using longitudinal survey data: methods and techniques in population health measures. National Center for Health Statistics. Vital Health Stat 2(146). URL: http://www.cdc.gov/nchs/data/series/sr02/sr02146.pdf
Mormiche P (1998) L'enquête HID de l'INSEE. Objectifs et schéma organisationnel. Courrier des Statistiques 87–88:7–18
Perenboom RJM (2003) Health expectancies in European countries. In: Robine JM, Jagger C, Mathers CD (eds) Determining health expectancies. Wiley, Chichester
Raghunathan T, Solenberger P, Van Hoewyk J (2002) A multivariate technique for multiply imputing missing values using a sequence of regression models. Surv Methodol 27(1):85–96
Reynolds SL, Saito Y, Crimmins EM (2005) The impact of obesity on active life expectancy in older American men and women. Gerontologist 45:438–444
Rogers A (1975) Introduction to multiregional mathematical demography. Wiley, New York
Rubin DB (1987) Multiple imputation for nonresponse in surveys. Wiley, Hoboken, New Jersey
Schafer JL (1997) Analysis of incomplete multivariate data. Chapman and Hall, London
Scheuren F (2005) Multiple imputation: how it began and continues. Am Stat 59:315–319
Sullivan D (1971) A single index of mortality and morbidity. HSMHA Health Rep 86(4):347–354
Van Buuren S, Oudshoorn K (1999) Flexible multivariate imputation by MICE. Leiden, Netherlands
Verbrugge LM (1989) Recent, present, and future health of American adults. Annu Rev Publ Health 10:333–361
Willekens F (1979) Computer program for increment-decrement (multistate) life table analysis: a user's manual to LIFEINDEC. Working papers of the International Institute for Applied Systems Analysis
Wolfbein S (1949) The length of working life. Popul Stud 3:286–294
Yong V, Saito Y (2012) Are there education differentials in disability and mortality transitions and active life expectancy among Japanese older adults? Findings from a 10-year prospective cohort study. J Gerontol Ser B: Psychol Sci Soc Sci 67(3):343–353
Author information: Cristina Giudici and Maria Felice Arezzo, Department of Methods and Models for Economics, Territory and Finance, Sapienza University of Rome, Via del Castro Laurenziano 9, 00161 Rome, Italy; Nicolas Brouard, Institut National d'Etudes Démographiques (INED), Boulevard Davout 133, 75020 Paris, France. Correspondence to Maria Felice Arezzo.
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Giudici, C., Arezzo, M.F. & Brouard, N. Estimating health expectancy in presence of missing data: an application using HID survey. Stat Methods Appl 22, 517–534 (2013). https://doi.org/10.1007/s10260-013-0233-8
Keywords: Classification and regression trees; Sample attrition
Analyzing the Transition to Buffeting of a 2D Airfoil using the Dynamic Mode Decomposition
Mr Sathsara Dias (Clarkson University)
The Dynamic Mode Decomposition (DMD) algorithm was first introduced in the fluid mechanics community for analyzing the behavior of nonlinear systems. DMD processes empirical data and produces approximations of the eigenvalues and eigenvectors ("DMD modes") of the linear Koopman operator that represents the nonlinear dynamics. In fluid dynamics, this approach has been used both to analyze constituent flow patterns in complex flows and to design control and sensing strategies. In this work, we focus on predicting the transition to buffeting of a 2D airfoil in a transonic regime. Buffeting is a vibration that occurs as the angle of attack increases and the interactions between the shock and flow separation induce limit-cycle oscillations. We demonstrate that this bifurcation can be predicted by tracking the eigenvalue with the greatest real part across a range of values of the parameter $\alpha$, the airfoil's angle of attack. We evaluate the performance of our approach on a synthetic Hopf-bifurcation flow and on pseudo-time simulations of a standard 2D airfoil. As the next stage of this research, an analysis of time-resolved simulations of a standard 2D airfoil is being carried out.
Primary author: Mr Sathsara Dias (Clarkson University); with Dr Marko Budišić, Dr Pat Piperni, Dr Brian Helenbrook
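A minimal sketch of the standard exact-DMD computation behind such an analysis (our illustration, not the poster's code): approximate the discrete-time eigenvalues from snapshot pairs, convert them to continuous time, and inspect the largest real part, whose crossing through zero as $\alpha$ varies would signal the Hopf-type onset of buffet.

```python
# A minimal exact-DMD sketch on synthetic data (not the poster's code).
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD: X and Xp are snapshot matrices, Xp one step ahead of X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]              # rank-r truncation
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)                 # discrete-time eigenvalues
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W    # exact DMD modes
    return eigvals, modes

# Synthetic stand-in for flow snapshots: one growing traveling wave,
# i.e. a Hopf-like eigenvalue pair 0.05 +/- 3i.
t = np.linspace(0.0, 10.0, 401)
x = np.linspace(0.0, 1.0, 64)
growth, freq = 0.05, 3.0
data = np.exp(growth * t)[None, :] * (
    np.sin(2 * np.pi * x)[:, None] * np.cos(freq * t)[None, :]
    + np.cos(2 * np.pi * x)[:, None] * np.sin(freq * t)[None, :])

dt = t[1] - t[0]
lam, _ = dmd(data[:, :-1], data[:, 1:], r=2)
omega = np.log(lam) / dt                               # continuous-time eigenvalues
print("recovered eigenvalues:", np.round(omega, 3))    # ~ 0.05 +/- 3i
print("max real part:", omega.real.max())              # > 0: growing oscillation
```

Repeating this at several values of $\alpha$ and plotting the largest real part against $\alpha$ gives the bifurcation tracking the abstract describes.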
Big Ideas Math: Modeling Real Life - Seventh Grade There are no above grade level assessment items for Grade 7. Examples of assessment items which assess grade-level standards include: Chapter 1, Quiz 2, Item 9, students use a vertical number line that shows the elevations of a submarine after certain events to determine the distance the submarine rises after diving and the original elevation of the submarine. (7.NS.1.c) Chapter 3, Test A, Item 13, students factor a linear expression in order to determine the side length of a square patio that has a perimeter of 16x + 12 feet. (7.EE.1) Chapter 3, Performance Task, Item 1, students write and simplify expressions from information provided in a diagram and a table. They describe and explain what they notice about the two expressions. (7.EE.1-2) Chapter 5, Test A, Item 6, students find the density of a substance in grams per milliliter by examining a graph. (7.RP.2.d) Course Benchmark 2, Item 30, students find the actual perimeter and area of a square using information about a scale drawing of the square. (7.G.1) Chapter 8, Alternative Assessment, Item 1, students are given a scenario about finding out how the residents in their town feel about opening a new gas station. Students describe how to conduct a survey of 200 people so that the sample is biased, and so that it is unbiased. They project how many residents out of 6,200 will support the gas station if 80 out of 200 supported it. (7.SP.1-2) The instructional materials reviewed for Big Ideas Math: Modeling Real Life Grade 7 meet expectations for spending a majority of instructional time on the major work of the grade. This includes all the clusters in 7.RP.A, 7.NS.A, 7.EE.A and 7.EE.B. The supporting domain Statistics and Probability enhances focus and coherence with the major standards/clusters of the grade, especially domains 7.NS and 7.RP. For example: In Chapter 5, Section 5.2, Solve Problems Involving Scale Drawings of Geometric Figures (7.G.1) is connected to the major work of analyzing proportional relationships (7.RP.A). Students write and solve a proportion using the scale and the ratios of the lengths of a drawing. In Chapter 7, Section 7.1, 7.SP.5 is connected to 7.RP.A as students work with probability as the ratio of desired outcomes to possible outcomes, and examine probabilities of events between 0 and 1, inclusive. For example, in Problem 4, students describe the likelihood of each event when making or missing three-point shots. In Chapter 7, Section 7.3, Compound Events connects 7.SP.8.a with 7.RP.3 as students determine probability through computation with rational numbers, representing answers as fractions and percents. For example, Problem 4 expresses the probability as 1/6 or 16 2/3%. In Chapter 7, Section 7.3, Probability of Compound Events, 7.SP.8 is connected to the major work of solving real-world problems with rational numbers involving the four operations, 7.NS.3. Students solve simple and compound probabilities using rational numbers in various forms. In Chapter 8, Section 8.1, Example 3 uses proportions to make projections in real-world modeling problems. After randomly surveying 75 students, students use the results to estimate the number of students from the total population of 1,200. Cluster 7.SP.A supports 7.RP.3.
In Chapter 8, Section 8.2, Self-Assessment, Problem 4, students apply and extend previous understandings of operations with fractions (7.NS.A) to draw inferences about a population (7.SP.A). Students find the means of three samples of the number of hours music students practice each week, and use the means to make one estimate of the mean number of practice hours. The calculations result in a rational number that, when converted to a decimal, yields a repeating decimal, which they make sense of in order to answer the question about the number of hours music students practice each week (7.NS.2). Laurie's Notes, "Preparing to Teach", describe connections between content from prior grades and lessons and the current learning. For example, in Chapter 4, Section 4: "Students should know how to graph numbers on a number line and how to solve one-variable inequalities using whole numbers. In the exploration, students will be translating inequalities from verbal statements to graphical representations and symbolic sentences." Chapter Overviews describe connections between content from prior and future grades and the current learning, and the progression of learning that will occur. For example, in Chapter 5, "Laurie's Notes: Chapter Overview" states, "The study of ratios and proportions in this chapter builds upon and connects to prior work with rates and ratios in the previous course." This supports Standard 6.RP. In Sections 5.1 and 5.2, students decide whether two quantities are in a proportional relationship using ratio tables. This supports Standard 7.RP.2.a and uses unit rates involving rational numbers. During Sections 5.3, 5.4, and 5.5, students write, solve, and graph proportions. This supports Standards 7.RP.2.a-7.RP.3: "Graphing proportional relationships enables students to see the connection between the constant of proportionality and equivalent ratios", but the term "slope" (Standards 8.EE.5-6) is not included. In Section 5.6, students work with scale drawings, which supports Standard 7.G.1. Each chapter's Progressions page contains two charts. "Through the Grades" lists the relevant portions of standards from prior and future grades (grades 6 and 8) that connect to the grade 7 standards addressed in that chapter. For example, in Chapter 4, Sections 4.1-4.2, students use algebra tiles to review the process of solving one-step equations. This is identified as revisiting work from a prior grade level in the "Chapter Exploration" and supports the grade-level work of Section 4.3, solving equations of the form px + q = r and p(x + q) = r. This supports Standard 7.EE.4.a. Each lesson presents opportunities for students to work with grade-level problems. However, "Scaffolding Instruction" notes suggest assignments for students at different levels of proficiency (emergent, proficient, advanced). These levels are not defined, nor is there any tool used to determine which students fall into which level. In the Concepts, Skills and Problem Solving section at the end of each lesson, problems are assigned based on these proficiencies; therefore, not all students have opportunities to engage with the full intent of grade-level standards.
For example: In the Teacher Edition, Chapter 6, Section 6.5, the assignments for proficient and advanced students include a reasoning task in which students determine the price of a drone that is discounted 40%, and then discounted an additional 60% a month later. This reasoning task is omitted from the assignments for emerging students. In the Teacher Edition, Chapter 9, Section 9.2, the assignments for advanced students include a critical thinking task in which students determine how increasing the radius of a circle impacts the area of the circle. This critical thinking task is omitted from the assignments for emerging and proficient students.

Each section within a chapter includes problems where the publisher states, "students encounter varying 'Depth of Knowledge' levels, reaching higher cognitive demand and promoting student discourse." In Chapter 8, Section 8.1, students examine a sample of a population for validity, which supports Standard 7.SP.1, and use a random sample to draw inferences about a population, which supports Standard 7.SP.2. In "Exploration 1," students "make conclusions about the favorite extracurricular activities of students at their school," first by identifying the population and samples of the population (DOK Level 1), and then by evaluating the differences between two samples, evaluating their conclusions for validity, and explaining their thinking (DOK Level 3). In Problem 2, students compare two samples to determine which sample is unbiased (DOK Level 2).

In Chapter 4, Section 4.6, students roll two different colored dice with negative and positive numbers on each cube. When the students roll a pair of dice, they write an inequality to represent them. Then they roll one die and multiply each side of the inequality by the number rolled. They are then asked if the original inequality is still true. Finally, they are asked to make conjectures about how to solve an inequality of the form ax < b for x when a > 0 and when a < 0. These conjectures help to develop the key ideas of the section, which is to write and solve inequalities using multiplication and division. This supports Standard 7.EE.4.b. In Chapter 6, students use a percent model to justify their answers, instead of assessing the reasonableness of answers using mental computation and estimation strategies. Mental computation and estimation are strategies specifically called for in Standard 7.EE.3.

Materials explicitly relate grade-level concepts to prior knowledge from earlier grades. At the beginning of each section in Laurie's Notes, there is a heading marked "Preparing to Teach," which includes a brief explanation of how work in prior courses relates to the work involved in that lesson. In some cases it outlines what happened in prior courses, but it is not specific about the grade or course in which this happened. For example: In Chapter 1, Section 1.1, it states that in prior courses students were introduced to integers, absolute value, and number lines. For example, "It is important that students review these foundational skills because they are necessary for adding and subtracting rational numbers." In Chapter 1, Section 1.1, students review the concept of absolute value (6.NS.7). This leads into Section 1.2, where students begin adding integers (7.NS.1.b). In Chapter 3, Section 3.3, it states that students have used the distributive property in previous courses. It adds, "They will extend their understanding to include algebraic expressions involving rational numbers.
This property is very important to algebraic work in future courses." In Chapter 3, Section 3.3, Exploration 1, students build upon their experience with the distributive property to include rational numbers. In Example 1, students apply the distributive property to simplify expressions. In Chapter 5, Section 5.2, the Preparing to Teach notes explain the connection between students' prior work with ratios (describing ratio relationships, completing tables) (6.RP.A) and the content in Section 5.2, stating, "In this lesson, they will extend their work with ratios to include fractions, making connections to their recent work with fractions." In Section 5.1, students complete ratio tables, and write and interpret ratios, but now with fractions, forming a bridge to the upcoming work of finding and using unit rates involving rational numbers (7.RP.1). In Chapter 6, Section 6.1, the Preparing to Teach notes state that students "should know how to solve simple percent problems, and how to use ratio tables, Standard 6.RP.3." The remainder of Chapter 6 "will build upon this understanding to write and solve percent proportions." (7.RP.3)

In the Resources by Chapter book, each chapter has a few questions that are named "Prerequisite Skills Practice." The intent is practice of prior knowledge. There is no mention of previous-grade knowledge or previous-lesson knowledge. In Chapter 5, Algebraic Expressions and Properties, 6.EE, Apply and extend previous understandings of arithmetic to algebraic expressions, is directly related to the Chapter 5 learning goals of "Evaluate algebraic expressions given values of their variables (Section 5.1), Write algebraic expressions and solve problems involving algebraic expressions (Section 5.2), Identify equivalent expressions and apply properties to generate equivalent expressions (Section 5.3), Identify equivalent expressions and apply properties to generate equivalent expressions (Section 5.4), and Factor numerical and algebraic expressions (Section 5.5)."

In Chapter 3, students engage simultaneously in Standards 7.NS.A and 7.EE.A as they simplify, add, subtract, factor and expand linear expressions involving positive and negative coefficients. For example, in Section 3.1, Try It, Problem 9, students simplify 2s - 9s + 8t - t. In Section 3.3, Try It, Problem 5, students use the distributive property to simplify the expression -3/2(a - 4 - 2a). In Chapter 4, students use operations with integers, Cluster 7.NS.A, to solve problems using numerical and algebraic expressions and equations, Cluster 7.EE.B. In Chapter 5, Domain 7.RP connects ratios with computations with rational numbers, 7.NS, as students explore rates and unit rates. For example, in Section 5.6, students analyze proportional relationships and use them to solve real-world problems. In Chapter 6, the problems and activities provide connections between the skills and understandings of Cluster 7.EE.B and those of Cluster 7.RP.A as students write proportions and equations to represent and solve percent problems, and write equations to solve problems involving discounts and markups. In Section 6.3, Practice, Problem 23, students write and solve an equation to determine the percent of sales tax on a model rocket costing $24 with a sales tax of $1.92. In Chapter 8, Section 8.4, students use random sampling to draw inferences about a population, connecting 7.SP.A with drawing informal comparative inferences about two populations, 7.SP.B.
In Chapter 1, Section 2, Exploration 1 (7.NS.1.d), students are taught to add integers with chips and with number lines: "Write an addition expression represented by the number line. Then find the sum." After these examples, students are asked to use conceptual strategies (number line or chips). In Chapter 3, Lesson 2, Exploration 1, students use algebra tiles to model a sum of terms equal to zero and simplify expressions. In the Concepts, Skills and Problem Solving section, students have two additional problems where they use algebra tiles to simplify expressions. (7.EE.1) Chapter 1, Section 4, "Subtracting Integers," Exploration 1 asks students to work with partners and use integer counters to find the differences and sums of several problems with two different representations. For example, "4 - 2" and "4 + (-2)"; "-3 - 1" and "-3 + (-1)" and "13 - 1". Student pairs are asked to generate a rule for subtracting integers. Students who cannot generate a rule are prompted to use a number line. After working independently, students share their rule with a partner and discuss any discrepancies. (7.NS.1) In Chapter 4, Section 1, "Solving Equations Using Addition or Subtraction," Exploration 1, students are asked, "Write the four equations modeled by the algebra tiles. Explain how you can use algebra tiles to solve each equation." (7.EE.3)

The instructional materials do not always provide students opportunities to independently demonstrate conceptual understanding throughout the grade level. The shift from conceptual understanding, most prevalent in the Exploration section, to procedural understanding occurs within the lesson. The Examples and "Concepts, Skills, and Problem Solving" sections have a focus that is primarily procedural, with limited opportunities to demonstrate conceptual understanding. For example: In Chapter 3, Section 2, only Problems 8 and 9 ask students to demonstrate conceptual understanding. By contrast, Problems 10-17 ask students to "Find the sum." Problem 10: "(n+8) + (n-12)"; Problem 16: "(6-2.7h) + (-1.3j-4)." Problems 19-26 ask students to "Find the difference." Problem 19: "(-2g+7) - (g+11)"; Problem 26: "(1-5q) - (2.5s+8) - (0.5q+6)". (7.EE.1) In Chapter 2, Section 2, Concepts, Skills & Problem Solving, the majority of the questions require procedural knowledge and do not ask students to demonstrate conceptual understanding. For example, Problems 13-28 ask students to "Find the quotient, if possible", such as Problem 16: "-18 ÷ (-3)"; and Problem 22: "-49 ÷ (-7)". (7.NS.2)

The instructional materials for Big Ideas Math: Modeling Real Life Grade 7 meet expectations that they attend to those standards that set an expectation of procedural skill. The instructional materials attend to operations with rational numbers (7.NS.A), using the properties of operations to generate equivalent expressions (7.EE.1), and solving real-life and mathematical problems using numerical and algebraic expressions (7.EE.B). For example: In Chapter 1, Lesson 5, students subtract rational numbers. Examples 1-3 provide step-by-step explanations of the procedural skill of subtracting rational numbers. In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of subtracting rational numbers. (7.NS.1) In Chapter 2, Lesson 1, students multiply rational numbers. Examples 1-3 provide step-by-step explanations of the procedural skill of multiplying rational numbers.
In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of multiplying rational numbers. (7.NS.2) In Chapter 3, Lesson 4, students factor expressions. Examples 1-3 provide step-by-step explanations of the procedural skill of factoring an expression. In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of factoring an expression. (7.EE.1) In Chapter 4, Lesson 1, students solve equations using addition and subtraction. Examples 1-3 provide step-by-step explanations of the procedural skill of solving an equation using addition and subtraction. In the Concept, Skills, and Problem Solving section, students have many opportunities to demonstrate their skill of solving an equation. (7.EE.4.a)

In each lesson there is a "Review & Refresh" section, which provides additional practice of skills previously taught. Within these sections are further opportunities to practice the procedural skills. For example: In Chapter 2, Lesson 2, there are four problems requiring multiplication of rational numbers. For example: "Problem 1: 8 x 10; Problem 2: -6(9); Problem 3: 4(7); Problem 4: -9(-8)". (7.NS.2) In Chapter 3, Lesson 4, there are three problems requiring simplifying expressions. For example: "Problem 1: 8(k-5); Problem 2: -4.5(-6+2d); Problem 3: -1/4(3g-6-5g)". (7.EE.1) In Chapter 4, Lesson 1, there are four problems asking students to factor out the coefficient of the variable term. For example: "Problem 1: 4x-20; Problem 2: -6y-18; Problem 3: -2/5w + 4/5; Problem 4: 0.75z - 6.75". (7.EE.4.a)

Chapter 5, Lesson 1, Example 3, Modeling Real Life: "You mix 1/2 cup of yellow paint for every 3/4 cup of blue paint to make 15 cups of green paint. How much yellow paint do you use?" Students are given two methods to solve the question, with both methods explained and answered. For example, "Method 1: The ratio of yellow paint to blue paint is 1/2 to 3/4. Use a ratio table to find an equivalent ratio in which the total amount of yellow paint and blue paint is 15 cups." [A completed ratio table with an annotated description of how it was filled out is included.] "Method 2: You can use the ratio of yellow paint to blue paint to find the fraction of the green paint that is made from yellow paint. You use 1/2 cup of yellow paint for every 3/4 cup of blue paint, so the fraction of the green paint that is made from yellow paint is 2/5 [included equation and solution]. So, you use 2/5 ⋅ 15 = 6 cups of yellow paint." (7.RP.1)

Chapter 1, Lesson 1, Example 3, Modeling Real Life: "A moon has an ocean underneath its icy surface. Scientists run tests above and below the surface. [Table provided] The table shows the elevations of each test. Which test is deepest? Which test is closest to the surface?" The explanation from this point provides students with step-by-step directions on how to solve the problem: "To determine which test is deepest, find the least elevation. Graph the elevations on a vertical number line. [Vertical number line provided.] The number line shows that the salinity test is deepest. The number line also shows that the atmosphere test and the ice test are closest to the surface. To determine which is closer to the surface, identify which elevation has a lesser absolute value. Atmosphere: ∣0.3∣ = 0.3 Ice: ∣−0.25∣ = 0.25 So, the salinity test is deepest and the ice test is closest to the surface."
(7.NS.1) Chapter 2, Lesson 1, Problem 17: "On a mountain, the temperature decreases by 18°F for each 5000-foot increase in elevation. At 7000 feet, the temperature is 41°F. What is the temperature at 22,000 feet? Justify your answer." (7.NS.3, multi-step, routine) Chapter 3, Lesson 4, Problem 41, Dig Deeper: "A square fire pit with a side length of s feet is bordered by 1-foot square stones as shown. [Diagram provided] a. How many stones does it take to border the fire pit with two rows of stones? Use a diagram to justify your answer." (routine) "b. You border the fire pit with n rows of stones. How many stones are in the nth row? Explain your reasoning." (non-routine) (7.EE.3) Chapter 6, Lesson 3, Problem 32, Dig Deeper: "At a restaurant, the amount of your bill before taxes and tip is $19.83. A 6% sales tax is applied to your bill, and you leave a tip equal to 19% of the original amount. Use mental math to estimate the total amount of money you pay. Explain your reasoning. (Hint: Use 10% of the original amount.)" (7.RP.3, routine)

In Chapter 4, Lesson 3, Solving Two-Step Equations, students begin with an Exploration example that uses algebra tiles to show the steps for solving an equation and the relationship to the properties of equality. These examples show the conceptual solving of an equation through models. The lesson shifts to the procedural steps of solving two-step equations with Example 1: "-3x + 5 = 2" and Example 2: "x/8 - 1/2 = -7/2". Example 3 is a procedural example of solving two-step equations by combining like terms: "3y - 8y = 25". The lesson progresses to independent application of the skill in Concepts, Skills, and Problem Solving, where students solve equations procedurally.

In Chapter 6, Lesson 1, Fractions, Decimals and Percents, students begin the lesson with an Exploration activity where they compare numbers in different forms using a variety of strategies. Example 1 presents a conceptual model of a decimal using a hundredths grid and shows how to convert a decimal to a percent. Example 2 shows students how to build procedurally on what they have learned to convert a fraction to a decimal to a percent using division. The lesson then moves to independent practice in Concepts, Skills, and Problem Solving, where students procedurally convert between decimals, percents, and fractions.

In Chapter 7, Lesson 2, Experimental and Theoretical Probability, students' learning begins with an Exploration activity in which they conduct two experiments to find relative frequencies (Flip a Quarter and Toss a Thumbtack) to understand the concept behind probability. The lesson moves on to Example 1, Finding an Experimental Probability, by utilizing the formula $$P(\text{event}) = \frac{\text{number of times the event occurs}}{\text{total number of trials}}$$ and Example 2, Finding a Theoretical Probability, by utilizing the formula $$P(\text{event}) = \frac{\text{number of favorable outcomes}}{\text{number of possible outcomes}}$$ Example 3 shows the steps for applying each formula to compare probabilities: "The bar graph shows the results of rolling a number cube 300 times. How does the experimental probability of rolling an odd number compare with the theoretical probability?" The independent practice in Concepts, Skills, and Problem Solving has students find an experimental probability and a theoretical probability based on an event.

Chapter 9, Lesson 1, Circles and Circumference, begins with Exploration 1, where students use a compass to draw circles and conceptually see the length of the diameter and circumference.
Exploration 2 continues to explore diameter and circumference through hands-on modeling. The lesson continues with three examples showing the steps of applying the formulas for finding the radius, circumference, and perimeter of a circle. The independent work of the students is within Concepts, Skills, and Problem Solving, in which students are asked to procedurally solve for the radius, diameter, circumference and perimeter.

Chapter 1, Lesson 4, Subtracting Rational Numbers, Exploration 1 (MP2), students work with a partner in answering the following questions: a. "Choose a unit fraction to represent the space between the tick marks on each number line. What expressions involving subtraction are being modeled? What are the differences?" b. "Do the rules for subtracting integers apply to all rational numbers? Explain your reasoning. You have used the commutative and associative properties to add integers. Do these properties apply in expressions involving subtraction? Explain your reasoning." MP2 is identified in the teaching notes: "The number line helps students see that the rules for subtracting rational numbers shouldn't be different from the rules for subtracting integers."

Chapter 8, Lesson 1, Samples and Populations, Example 2 (MP3), students are given the scenario, "You want to know how the residents of your town feel about adding a new landfill. Determine whether each conclusion is valid." Students are provided with information about the survey. MP3 is identified in the teaching notes: "Ask a volunteer to read part (a). Then ask whether the conclusion is valid. Students should recognize that the sample is biased because the survey was not random—you only surveyed nearby residents. Ask a volunteer to read part (b). Then ask whether the conclusion is valid. Students should recognize that the sample is random and large enough to provide accurate data, so it is an unbiased sample."

Chapter 5, Lesson 4, Writing and Solving Proportions, Example 3 (MP1), students are provided with two examples of solving proportions using cross products. MP1 is identified in the teaching notes: "As you work through the problems with students, share with them the wisdom of analyzing the problem first to decide what method makes the most sense."

The MPs are identified in the digital Student Dashboard under Student Resources, Standards for Mathematical Practice. This link takes you to the same information found in the Teacher Edition. For example: Chapter 9, Lesson 1, Circles and Circumference, Exploration 2 - Exploring Diameter and Circumference: students work with a partner and find the circumference and diameter of a circular base. They determine whether the circumference or the diameter is greater, and by how much. In "Math Practice - Calculate Accurately," students are asked, "What other methods can you use to calculate the circumference of a circle? Which methods are more accurate?" Chapter 6, Lesson 1, Fractions, Decimals, and Percents, Concepts, Skills & Problem Solving, Problem 39, "MP Problem Solving": "The table shows the portion of students in each grade that participate in School Spirit Week. Order the grades by portion of participation from least to greatest." Chapter 2, Lesson 4, Multiplying Rational Numbers, Concept Skills, & Problem Solving, Problems 10-12, "MP Reasoning": "Without multiplying, tell whether the value of the expression is positive or negative. Explain your reasoning." MP7 and MP8 are under-identified in the series; each is identified in only four of the ten chapters.
The instructional materials do not present opportunities for students to engage fully in MP1: Make Sense of Problems and Persevere in Solving Them, MP4: Model with Mathematics, and MP5: Use Appropriate Tools Strategically.

Chapter 2, Lesson 3, Laurie's Notes, Example 1: "Mathematically proficient students are able to plan a solution. Choosing between methods may help students be more efficient and accurate when writing fractions as decimals. Complete part (a) as a class. The first step is to write the mixed number as an equivalent improper fraction. Then divide the numerator by the denominator. Point out that the negative sign is simply placed in the answer after the calculations are complete. Discuss the Another Method note with students. Point out that to find an equivalent fraction with a denominator that is a power of 10, you multiply the numerator and denominator by powers of 2 or 5. This is not possible for repeating decimals. Complete part (b) as a class. Remind students to always divide the numerator by the denominator, regardless of the size of the numbers!" In Example 1, the solution is provided for students, and therefore they do not have to persevere in solving the problem.

Chapter 5, Lesson 5, Laurie's Notes, Example 3: "Ask students to explain why the graph represents a ratio relationship and to identify the unit rate. Plotting the ordered pairs confirms that x and y are proportional. 'What is the constant of proportionality?' 16. 'What is the equation of the line?' y = 16x. Students can use the equation to find the area cleaned for any amount of time." Students are analyzing a given model, not using a model to solve a problem.

Chapter 7, Lesson 3, Laurie's Notes, Example 1: "The tree diagram helps students visualize the 8 outcomes in the sample space." Students are provided with a worked-out example and do not create a tree diagram as a way to model a problem independently.

MP5: While the Dynamic Student Edition includes tools for students, the instructional materials present few opportunities for students to choose their own tools; therefore, the full meaning of MP5 is not attended to. For example: Chapter 8, Lesson 2, Laurie's Notes, Example 2: "Students can use calculators to quickly find the mean of each sample." Teachers direct students to use calculators. Chapter 7, Lesson 2, Laurie's Notes, Exploration 1: "Combine the results for each experiment. As the data are gathered and recorded, several students with calculators can summarize the results." Students are not selecting their own tool in this example.

"You Be the Teacher," found in many lessons, presents opportunities for students to critique the reasoning of others and construct arguments. Examples of where students engage in the full intent of MP3 include the following: Chapter 4, Lesson 2, Problem 28, You Be the Teacher: "Your friend solves the equation -4.2x = 21. Is your friend correct? Explain your reasoning." The student work is provided to examine. Chapter 6, Lesson 1, Problem 20, You Be the Teacher: "Your friend uses the percent proportion to answer the question below. Is your friend correct? Explain your reasoning. '40% of what number is 34?'" The student work is provided to examine.

The Student Edition labels MP3 as "MP Construct Arguments"; however, these activities do not always require students to construct arguments. In the Student Edition, "Construct Arguments" was labeled only once for students, and "Build Arguments" was labeled only once.
For example: Chapter 2, Lesson 1, Construct Arguments: students construct viable arguments by writing general rules for multiplying (i) two integers with the same sign and (ii) two integers with different signs. Students are prompted to "Construct an argument that you can use to convince a friend of the rules you wrote in Exploration 1(c)." Chapter 8, Lesson 4, Exploration 1: Build Arguments is identified in the Math Practice blue box with the following question, "How does taking multiple random samples allow you to make conclusions about two populations?"

In Chapter 1, Lesson 4, Subtracting Integers, students are shown an example of subtracting integers. In Laurie's Notes, teachers are prompted, "Ask students if it is possible to determine when the difference of two negative numbers will be positive and when the difference of two negative numbers will be negative." In Chapter 5, Lesson 2, Example 1, students find a unit rate based on given information. In Laurie's Notes, teachers are prompted, "There are several ways in which students may explain their reasoning. Take time to hear a variety of approaches." This is labeled as MP3, but there is no support for teachers to assist students in constructing a viable argument or critiquing the thoughts of others. Chapter 1, Lesson 2, Example 2: the Teacher's Guide is noted with MP3 with the following directions: "'When you add two integers with different signs, how do you know if the sum is positive or negative?' Students answered a similar question in Example 1, but now they should be using the concept of absolute value, even if they don't use the precise language. You want to hear something about the size of the number, meaning its absolute value." There is no reference to MP3 in the Student Edition in this lesson.

The materials attend to vocabulary at the beginning of each chapter in the Getting Ready section. For example, in the Getting Ready section for Chapter 3, students read, "The following vocabulary terms (like terms, linear expression, factoring an expression) are defined in this chapter. Think about what each term might mean and record your thoughts." In Laurie's Notes for the chapter, teachers are provided with the following notes regarding the vocabulary: "A. These terms represent some of the vocabulary that students will encounter in Chapter 3. Discuss the terms as a class. B. Where have students heard the term like terms outside of a math classroom? In what contexts? Students may not be able to write the actual definition, but they may write phrases associated with like terms. C. Allowing students to discuss these terms now will prepare them for understanding the terms as they are presented in the chapter. D. When students encounter a new definition, encourage them to write in their Student Journals. They will revisit these definitions during the Chapter Review."

Key vocabulary for a section is noted in a box in the margins of the student textbook, along with a list of pages where students will encounter the vocabulary. Vocabulary also appears in some of the Key Ideas boxes. For example, in Chapter 6, Lesson 4, the Key Idea box contains the definitions for percent of change, percent of increase, and percent of decrease, with an equation for how to find each. Each chapter has a review section that includes a list of vocabulary important to the unit and the page numbers where students will find the terms.
For example, in Chapter 4, Review, teachers are given the prompt: "As a review of the chapter vocabulary, have students revisit the vocabulary section in their Student Journals to fill in any missing definitions and record examples of each term." In the Student Edition, the terms and page numbers are provided, and students are asked to "Write the definition and give an example of each vocabulary term." Additionally, there is a Graphic Organizer section where students create a "Summary Triangle" for each concept.

Chapter 4, Laurie's Notes, Chapter Overview states, "Be sure to use precise language when discussing multiplying or dividing an inequality by a negative quantity. Use language such as, 'The direction of the inequality symbol must be reversed.' Simply saying 'switch the sign' is not precise." In Chapter 7, the Chapter Exploration includes a list of vocabulary words related to probability. Laurie's Notes (page T-282) guide teachers to have students use contextual clues and record notes and definitions related to the mathematical terms throughout the chapter. In Chapter 9, Section 9.4, the Laurie's Notes "Motivate" section guides teachers to play a game that will help students remember vocabulary terms and their meanings relating to triangles.

In Chapter 2, Lesson 1, Laurie's Notes remind teachers that students should say, "Negative 5 times negative 6 equals 30." Teachers are advised to respond to students saying "minus 5" by reminding them that minus represents an operation. In Chapter 8, Lesson 1, Laurie's Notes, teachers are asked to discuss the following: "Define unbiased sample and biased sample. Give a few examples of each. Then ask students to write the definitions in their own words and share an example of each type of sample. The size of a sample can have a great influence on the results. A sample that is not large enough may not be unbiased, and a sample that is too large may be too cumbersome to use. As a rule of thumb, a sample of 30 is usually large enough to provide accurate data for modest population sizes." In Chapter 7, Lesson 1, Laurie's Notes, teachers are asked to "Discuss the vocabulary words: experiment, outcomes, event, and favorable outcomes. You can relate the vocabulary to the exploration and to rolling two number cubes. 'What does it mean to perform an experiment at random?' All of the possible outcomes are equally likely. Ask students to identify the favorable outcomes for the events of choosing each color of marble. green (2), blue (1), red (1), yellow (1), purple (1). Be sure students understand that there can be more than one favorable outcome. 'What are some other examples of experiments and events? What are the favorable outcomes for these events?' Sample answer: An experiment is rolling a number cube with the numbers 1–6. An event is rolling a number greater than 4, with favorable outcomes of 5 and 6."
Numerical Study of Atrial Fibrillation Effects on Flow Distribution in Aortic Circulation

Amin Deyranlou, Josephine H. Naish, Christopher A. Miller, Alistair Revell & Amir Keshmiri (ORCID: orcid.org/0000-0003-4747-277X)

Annals of Biomedical Engineering (2020)

Atrial fibrillation (AF) is the most common type of arrhythmia, and it undermines cardiac function. Atrial fibrillation is a multi-faceted malady: it may occur as a result of other diseases, or it may trigger other problems. One of the main complications of AF is stroke, due to the possibility of clot formation inside the atrium. However, the possibility of stroke occurrence due to AF, and the location from which an embolus originates, are subjects of debate. Another hypothesis about embolus formation during AF is thrombus formation in the aorta and carotid arteries, followed by embolus detachment and transport. To investigate the latter postulate, the current work presents a parametric study to quantify the sensitivity of aortic flow to four common AF traits: lack of atrial kick, atrial remodelling, left ventricular systolic dysfunction, and high-frequency fibrillation. The simulation was carried out by coupling several in-house codes with the ANSYS-CFX module. The results reveal that the AF traits lower the flow rate at the left ventricular outflow tract, which in general lowers blood perfusion to the systemic, cerebral and coronary circulations. Consequently, the endothelial cell activation potential (ECAP) increases and the flow structure varies, both of which indicate areas predisposed to atherogenesis and thrombus formation in different regions of the ascending aorta, aortic arch and descending thoracic aorta.

Atrial fibrillation (AF) is the most common arrhythmia. It can exist in paroxysmal, persistent and long-standing persistent forms.28 AF normally occurs in adults, and the likelihood of occurrence increases with age.57 In the UK alone, around 1,180,000 AF cases were recorded between 2015 and 2016. Statistical data for the same region for the period 2004 to 2016 show that the incidence of AF tends to increase as the population becomes older.8 While AF has been considered an independent risk factor, it occurs concomitantly with other diseases like hypertension and heart failure, or can autonomously cause other types of cardiovascular diseases (CVDs) such as heart failure6 and stroke.34 Besides healthcare-related issues, patients suffering from AF incur significant treatment costs,45 since the disease necessitates long-term clinical treatment and follow-up.

Perhaps the most significant complication associated with AF is blood stasis inside the left atrium (LA) and the formation of thrombus. Embolism of the thrombus can lead to distant organ ischaemia and infarction. In particular, cerebral embolism leads to a stroke. In a longitudinal study of participants from Framingham (known as the Framingham Heart Study),62 it was concluded that patients with AF are more vulnerable to ischaemic stroke and that the condition worsens as the population becomes older. Additionally, a study by Camm et al.10 demonstrated that AF-related strokes are severe. While stroke is considered one of the main consequences of AF, a recent study by Gómez-Outes et al.19 articulated that only a small proportion of deaths in the AF population is due to ischaemic stroke; the main causes are heart failure, sudden death and myocardial infarction.
Generally, discussion of AF effects is very challenging because AF occurs in conjunction with other diseases; furthermore, the concomitant incidence of electrophysiological disorder, structural remodelling and flow changes during AF makes it a complicated disease. One practical approach to exploring the effect of individual parameters in isolation is mathematical modelling of AF. Since AF stems from disorder in the mechanical and electrical characteristics of the heart, mathematical modelling of the electrophysiology and electromechanical behaviour of the heart during AF has been the focus of a significant body of research in recent years.61 However, to explore AF effects on the haemodynamics of the cardiovascular system, lumped modelling and computational fluid dynamics (CFD) are two feasible techniques.

Using the lumped modelling approach for AF,54 two studies have been performed to explore AF effects on the cerebrovascular circulation55 and its relevance to cognitive impairment.4 Similarly, further investigations on exercise tolerance during AF5 and the efficiency of the aortic and pulmonary valves52,56 have been conducted. Recently, using a proposed multiscale approach,21 Scarsoglio et al.53 investigated AF effects on cardiovascular haemodynamics. Their findings clearly demonstrate that the arterial system cannot significantly damp AF effects, which thus remain persistent perturbations with the potential for adverse impact on the cardiovascular system.

Unlike 2D/3D CFD methods, lumped and one-dimensional approaches cannot examine local variations of flow structure and the associated haemodynamic metrics during AF. Therefore, employing a 3D CFD approach, Choi et al.12 examined different aortic morphologies in AF-related strokes. The main outcome of their study was that in cases with mild aortic arch (AoA) curvature, the possibility of stroke occurrence during AF increases up to three-fold compared with normal cardiac rhythm.

One of the primary studies of intracardiac flow during AF was undertaken by Zhang et al.66 Using an idealised model of an LA, they mainly examined the role of the left atrial appendage (LAA) during AF. They demonstrated that during AF the vortex structure changes and emptying of the LAA does not take place appropriately, which can increase the possibility of thromboembolism. Koizumi et al.35 explored AF effects on LA haemodynamics using a patient-specific model. They evaluated two main biomarkers of AF, i.e. lack of atrial kick (AK) at late diastole and high-frequency fibrillation (HFF). Their results suggested that both AF features influence blood flow and increase the possibility of blood stasis inside the LAA. In another effort, by Otani et al.,47 the effects of structural remodelling of the LA due to AF on intra-atrial flow characteristics were examined. The study confirmed a mechanistic link between LA structural remodelling and thrombosis. Masci et al.39 improved the personalised CFD simulation of intra-atrial flow during AF for risk stratification of stroke and therapy planning. Recently, Garcia-Isla et al.18 performed a sensitivity analysis on different configurations of the LAA and pulmonary veins to quantify the risk of thrombus formation during AF. In the context of AF, stroke is regularly postulated to be linked to an intra-atrial clot, but it is less commonly considered that thrombus formation due to AF may also occur in the main aortic conduits.
As the literature shows, different aspects of AF have been explored both clinically and numerically; however, less attention has been paid to the downstream impact of AF on the circulatory system using 3D patient-specific geometries. In this study, four main consequences of AF, namely lack of AK, left atrial remodelling (LAR), left ventricular systolic dysfunction (LVSD) and HFF, are examined numerically to predict flow changes in the systemic circulation. To mimic the four AF-associated defects, a lumped model of the left heart is employed, which produces the corresponding flow rate at the aortic root. Subsequently, the obtained flow rates are applied as the inflow to a patient-specific model obtained using the 4D PC-MRI modality. The present study therefore aims to investigate changes in the haemodynamic metrics of the aortic circulation, flow perfusion and the genesis of vascular anomalies, specifically atherogenesis.

Magnetic Resonance Imaging Data Acquisition

In this study, aortic anatomical and 4D flow data were acquired for a 31-year-old healthy male volunteer using a 3T Philips Achieva MRI scanner located at the NIHR Manchester Clinical Research Facility at Manchester Royal Infirmary, UK. The study was approved by the National Research Ethics Service (REC Ref 04/Q14002/11) and the subject gave informed consent. A high-resolution T2-weighted structural scan was performed under breath-hold to extract morphological information, and a free-breathing, ECG-gated, 4D phase contrast magnetic resonance imaging (PC-MRI) scan was used to extract 3D velocity data over 20 cardiac phases. Table 1 shows the parameters used for each scan; Scan 1 and Scan 2 refer to the flow and anatomy images, respectively.

Table 1: Scan parameters, MRI data.

The anatomy was reconstructed from the data of the healthy volunteer. The geometry comprises the ascending aorta (AA), AoA, descending aorta (DA), and the main branches, including the left coronary artery (LCA), right coronary artery (RCA), right subclavian artery (RSCA), right common carotid artery (RCCA), left common carotid artery (LCCA), and left subclavian artery (LSCA). To reconstruct the geometry, the SimVascular image processing toolbox (Version 19.03.09)37 and the CAD software SolidWorks 2017 (SP 2.0) were used. More details about the geometry reconstruction are provided in the Supplementary Materials.

Governing Equations

In this study the blood was considered an incompressible, homogeneous, Newtonian fluid.
Findings show that in large vessels, under healthy conditions and at shear rates above 100 s⁻¹, blood behaves like a Newtonian fluid.20,24,31 Therefore, in this study the continuity and Navier–Stokes equations were invoked:

$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_i}\left(\rho u_i\right) = 0$$

$$\frac{\partial (\rho u_j)}{\partial t} + \frac{\partial}{\partial x_k}\left(\rho u_k u_j\right) = \frac{\partial \sigma_{f,ij}}{\partial x_i} + \rho f_j$$

in which \(\rho\) is the density, equal to 1060 kg/m³ for blood, \(u_k\) denotes the fluid velocity components, \(x_i\) the coordinates, and \(f_j\) the body force per unit volume, which is taken as zero here; \(\sigma_{f,ij}\) is the stress tensor, which for a Newtonian fluid is defined as:

$$\sigma_{f,ij} = -p\,\delta_{ij} + \lambda\,\delta_{ij}\,\frac{\partial u_k}{\partial x_k} + \mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)$$

where p is the pressure and \(\delta_{ij}\) is the Kronecker delta. Furthermore, \(\mu\) is the first coefficient of viscosity (dynamic viscosity), taken as 0.0035 Pa·s for blood, and \(\lambda\) is the second coefficient of viscosity (volume viscosity), which is assumed zero for an incompressible flow. Furthermore, to calculate the changes in blood perfusion throughout the aorta and its main branches during AF, the area-averaged flow rate is integrated over a cardiac cycle through:

$$Q_{\text{total}} = \int_{0}^{t_{\text{cc}}} \left( \iint Q_{A}(t)\,\mathrm{d}A \right) \mathrm{d}t$$

Boundary Conditions (BCs)

The model consists of one inlet and seven outlets, as displayed in Fig. 1. For the inlet, a subject-specific velocity waveform extracted from the PC-MRI data was prescribed as a plug flow. In this study the aorta was assumed to be rigid, which is a reasonable compromise between accuracy, data availability and computational cost.9 Furthermore, a no-slip condition was applied at the wall. For all the outlets, a three-element Windkessel (RCR) model was used, which has been demonstrated to provide adequate 0D-3D coupling.33,40,48 Additionally, this model can compensate for the absence of wall elasticity.50 The RCR model and its corresponding parameters are defined in the Supplementary Materials.

Figure 1: A schematic of model construction, from PC-MRI data acquisition (anatomy and 4D flow data) to geometry reconstruction, and the selected boundary conditions for the inlet and outlets.

Compact Lumped Model for the Arterial Circulation

For the parametric study of AF effects, the left heart function was mimicked using the model proposed by Simaan et al.,58 which considers the LA, mitral valve (MV), left ventricle (LV) and aortic valve (AV), coupled with the aorta and systemic circulation. To study the different phases inside the LA, i.e. reservoir, conduit and booster pump, the LA compliance was modified as a time-variant parameter. The circuit produces the corresponding inlet waveform at the left ventricular outflow tract (LVOT) as the left heart parameters change, so the resultant waveform can be applied as the inlet BC at the aortic root to investigate flow distribution and perfusion during AF. More details are provided in the Supplementary Materials.

Four AF Characteristics

Since AF directly impacts the mechanical and functional characteristics of the LA and LV, it affects the flow at the LVOT.
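As a concrete illustration of the outlet treatment described above, the sketch below marches the RCR (three-element Windkessel) ODE in time with the implicit backward Euler scheme mentioned later in the numerical methods. This is a minimal sketch under stated assumptions, not the authors' code: the function name is hypothetical, and the per-outlet parameter values used in the paper are given only in its Supplementary Materials.

```python
import numpy as np

def rcr_outlet_pressure(q, dt, r_prox, r_dist, c, pc0=0.0):
    """March the RCR Windkessel ODE with implicit backward Euler.

    q      : flow-rate samples at the outlet over time [m^3/s]
    dt     : timestep [s]
    r_prox : proximal (characteristic) resistance [Pa s/m^3]
    r_dist : distal resistance [Pa s/m^3]
    c      : compliance [m^3/Pa]
    Returns the outlet pressure history p = p_c + r_prox * q,
    where p_c is the pressure across the compliance chamber.
    """
    q = np.asarray(q, dtype=float)
    pc = np.empty_like(q)
    pc_prev = pc0
    for n, qn in enumerate(q):
        # backward Euler on: c * dp_c/dt = q - p_c / r_dist
        pc[n] = (pc_prev + dt * qn / c) / (1.0 + dt / (r_dist * c))
        pc_prev = pc[n]
    return pc + r_prox * q
```

In a coupled 0D-3D run of this kind, at every timestep the 3D solver supplies the instantaneous outlet flow rate and receives the pressure computed in this way as the outlet boundary value.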
Each AF defect and the corresponding parameter used to mimic the specific abnormality are introduced as follows:

Lack of AK: the AK usually occurs at late diastole to eject the remaining blood into the LV. As the atrium loses its active contraction, the flow toward the LV reduces.2 This abnormality is reflected through the LA elastance by assuming that it remains constant during a cardiac cycle.54 The comparison was made for six constant LA elastances (ELAC) spanning several orders of magnitude, with values of 0.002, 0.02, 0.2, 2, 20 and 200 mmHg/mL, corresponding to ELAC1 to ELAC6, respectively.

LAR: this can result from chronic AF,46 owing to the genesis of a fibrotic substrate and larger LA size.36 In this study it is postulated that LAR is associated with alteration of the LA compliance. As compliance is inversely proportional to elastance, six different LA elastances (ELA) were used. The numeric values for (ELAmin1, ELAmax1) to (ELAmin6, ELAmax6) are (0.002, 0.003), (0.02, 0.03), (0.2, 0.3), (2, 3), (20, 30) and (200, 300) mmHg/mL, respectively. Note that the baseline value for the normal LA elastance was taken as (0.2, 0.3) mmHg/mL, which is of the same order as that adopted by Scarsoglio et al.54

LVSD: this is another side effect of AF, which appears as a transient or permanent change in LV function.11 To simulate this condition, it was assumed that the LV elastance (ELV) changes, and variations were therefore compared for five different maximum elastances (ELVmax) to mimic systolic dysfunction. The chosen values for ELV1max to ELV5max are 0.3, 0.5, 1, 1.5 and 2 mmHg/mL, respectively, while ELVmin was kept constant at 0.05 mmHg/mL for all cases.54,59 The normal values in this case are (ELVmin = 0.05, ELVmax = 2) mmHg/mL.

HFF: the heart rate of a patient with AF normally ranges between 100 and 175 bpm. To investigate this feature of AF, three different cases, i.e. 75 (normal case), 100 and 150 bpm, were chosen,13 while it was assumed that the diastolic volume remains constant at the different frequencies.

The baseline values of the left heart model are presented in the Supplementary Materials, and they are indicated as normal (N) in the figures. In this study the pattern of the flow waveform was assumed to remain unchanged from cycle to cycle, and beat-to-beat irregularities were ignored. Indeed, regression analyses based on the preceding (RRp) and pre-preceding (RRpp) intervals of waveforms during AF have confirmed that for RRp/RRpp = 1, the cardiac parameters reflect average values during AF.59,60

The continuity and Navier–Stokes equations were discretised numerically using ANSYS-CFX 19.0, which uses the finite volume method. The advection terms were discretised using the high-resolution scheme, which uses either 1st or 2nd order accuracy in space, depending on the local flow field, to impose the boundedness condition. Moreover, a 2nd order backward Euler scheme was invoked to discretise the time derivative. The convergence criteria for the simulation are based on the root mean square (RMS) of the residuals of the mass and momentum equations, and were set to 10⁻⁶. To implement the Windkessel model at all the outlets, the differential equations were discretised implicitly using a 1st order backward Euler scheme. Furthermore, for the inlet, a Fourier series with eight harmonics was fitted to the data obtained from 4D PC-MRI using the least squares method.
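A minimal sketch of such an inlet fit is given below, assuming the waveform is periodic with the cardiac period (0.8 s at 75 bpm) and sampled at the 20 PC-MRI phases; the function name and the synthetic samples in the usage example are illustrative assumptions, not the paper's data.

```python
import numpy as np

def fit_fourier_inlet(t, q, n_harmonics=8, period=0.8):
    """Least-squares fit of a truncated Fourier series to a sampled
    inlet waveform; returns a callable usable as a smooth inlet BC.

    t, q : sample times [s] and flow-rate samples (e.g. 20 PC-MRI phases)
    """
    w = 2.0 * np.pi / period
    # design matrix columns: [1, cos(k w t), sin(k w t)] for k = 1..n
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), q, rcond=None)

    def q_fit(tt):
        tt = np.asarray(tt, dtype=float)
        out = np.full_like(tt, coef[0])
        for k in range(1, n_harmonics + 1):
            out = out + coef[2 * k - 1] * np.cos(k * w * tt) \
                      + coef[2 * k] * np.sin(k * w * tt)
        return out

    return q_fit

# usage with synthetic samples standing in for the PC-MRI phases
t_phases = np.linspace(0.0, 0.8, 20, endpoint=False)
q_phases = np.maximum(0.0, 420.0 * np.sin(np.pi * t_phases / 0.35))  # mL/s, illustrative
q_inlet = fit_fourier_inlet(t_phases, q_phases)
```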
Finally, the set of first-order ordinary differential equations (ODEs) obtained from the left heart lumped model was solved using a fourth-order Runge–Kutta method. To obtain a converged solution that is independent of grid size, four different grid sizes were examined. The computational domain consists of tetrahedral elements, accompanied by five prism layers for proper treatment of the near-wall region. Mesh sensitivity analyses showed that the computational domain with 6.6 million elements is fine enough to capture all the flow features precisely. Furthermore, for a stable solution, a timestep size of 0.1 ms was chosen and the simulation was performed for four cardiac cycles to obtain a fully converged temporal solution. More details are provided in the Supplementary Materials.

In this section two sets of validation are presented. To show the robustness of the left heart model, Fig. 2 presents the waveforms of the flow rate, pressure, volume changes and LV function using the parameters employed in the study by Simaan et al.,58 which was validated against clinical data (Figs. 2a–2e). Figure 2a displays flow rates across the AV and MV and shows the key moments in a cardiac cycle. Figure 2b displays the pressure waveforms of the aorta, LV and LA. For a normal heartbeat of 75 bpm the model successfully predicts systolic and diastolic pressures around 113 and 75 mmHg, respectively, and a cardiac output (CO) of 5.12 L/min, which are in the physiological ranges reported for healthy individuals.23 Furthermore, the LA pressure is successfully predicted to vary between 10 and 20 mmHg, in good agreement with in-vivo measurements.44 In Fig. 2c the LV volume changes between 60 and 140 mL, while the LA volume alters between 20 and 60 mL; both are well within the ranges measured amongst adults.22,25 Figures 2d and 2e plot left ventricle pressure (LVP) vs. left ventricle volume (LVV) for afterload and preload, respectively; a linear relation for the end-systolic pressure–volume relation (ESPVR) is recovered, as noted in previous studies.58 Finally, Fig. 2f shows the coincidence of the volume changes of the LV and the aortic flow when the AV is open, which is in agreement with clinical measurements.25

Figure 2: Validation of the left heart lumped model; (a) flow rate at the LVOT, (b) pressure waveforms of the LV, LA and aorta, (c) volume changes of the LA and LV, (d) ESPVR for different afterload conditions at a constant end-diastolic volume, (e) ESPVR for different preload conditions, (f) volume changes of the LV and aortic flow rate; validation of CFD against in-vivo 4D PC-MRI data at (g) the ascending aorta, (h) the aortic arch (between the brachiocephalic artery and LCCA), and (i) the DTAO.

To evaluate the accuracy of the CFD model, the flow rates resulting from the numerical modelling were compared against the data obtained from 4D PC-MRI. To extract the in-vivo data from different cross sections, the flow data (known as phase and magnitude images) were imported into an open-source code, Segment,26 and the flow rate at each particular plane was extracted individually and collected to obtain the total value. The comparison was made for three sections, in the AA, the AoA (between the brachiocephalic trunk and LCCA) and the descending thoracic aorta outlet (DTAO), as shown in Figs. 2g–2i.
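The cycle-mean values compared in Table 2 follow directly from the definition of Q_total given earlier: the sampled waveform at each plane is integrated over one cycle and divided by the cycle duration. The short sketch below illustrates this with trapezoidal integration; the function name and the synthetic samples are assumptions for illustration, not the paper's data.

```python
import numpy as np

def cycle_mean_flow(t, q):
    """Mean flow rate over one cardiac cycle by trapezoidal
    integration of a sampled waveform q(t)."""
    return np.trapz(q, t) / (t[-1] - t[0])

# illustrative evaluation at one plane (synthetic numbers only)
t_mri = np.linspace(0.0, 0.8, 20)                              # 20 cardiac phases [s]
q_mri = np.maximum(0.0, 300.0 * np.sin(np.pi * t_mri / 0.35))  # mL/s
print(f"cycle-mean flow: {cycle_mean_flow(t_mri, q_mri):.1f} mL/s")
```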
The MRI data were taken at twenty phases of the cardiac cycle, while the numerical data were obtained under two different assumptions for the outlet BCs to test the effect of the downstream flow: constant average pressure (CAP), which neglects the peripheral arterial resistance, and the RCR model, which includes this effect. Furthermore, in Table 2 the mean flow rates of the different cases are compared. The results showed that the RCR model can successfully predict the flow waveforms and their mean values (compared with the in-vivo data), while CAP predicts the flow rate less accurately, particularly in regions distal to the aortic root.

Table 2: Average flow rate across the three defined sections.

Cardiac Metrics

In this section the main cardiac metrics, including pressure, flow rate, cardiac output (CO), stroke volume (SV) and ejection fraction (EF), are examined during the four AF defects. In Figs. 3(a.1)–3(a.3) and 3(b.1)–3(b.3), the aortic, LA and LV pressures are depicted, respectively, for different ELAC and ELA. Comparing these models, meaningful differences can be seen in the aortic and intracardiac pressures (LA and LV pressures) for values below and within the normal range; however, as the elastance increases above the normal value (ELA3), the difference becomes negligible. Figures 3(c.1)–3(c.3) show the pressure changes due to LVSD. As displayed, significant changes occur in the aortic and intracardiac pressures. Moreover, for the lower values of ELV, the peak pressure in the aorta and LV takes place later. Finally, in Figs. 3(d.1)–3(d.3) the pressures are shown at different HRs. The results show that an increase in the number of beats during AF increases the intracardiac and aortic pressures significantly and slightly changes the waveform patterns.

Figure 3: Pressure waveforms at: (a.1) aorta, (a.2) LA, and (a.3) LV, in lack of AK; (b.1) aorta, (b.2) LA, and (b.3) LV in LAR; (c.1) aorta, (c.2) LA, and (c.3) LV in LVSD; (d.1) aorta, (d.2) LA, and (d.3) LV in HFF (all the pressure waveforms are shown for a cardiac cycle).

Figure 4 shows the flow passing across the AV and MV during the different AF-related defects. In Figs. 4(a) and 4(b) the aortic and mitral flows are compared for different elastances in the absence and presence of AK, respectively. As with the pressure waveforms, the flow rate difference between the cases with and without AK is observable for the lower values of elastance (ELA1–ELA3/ELAC1–ELAC3); however, for the larger values, the difference becomes negligible.

Figure 4: Flow rates at the AV and MV; (a) lack of AK, (b) LAR, (c) LVSD, and (d) HFF.

Figure 4c displays the different flow waveforms across the MV and AV due to the changes in LV elastance. The results demonstrate that LVSD is accompanied by a flow reduction from the LA to the LV during the passive contraction, which consequently reduces the flow at the LVOT. Furthermore, as ELV decreases, the peak of the aortic flow waveform takes place later, in accordance with the pressure waveforms described earlier. In Fig. 4d the flow rates are shown for different heart rates (HRs). With increasing HR, the blood flow from the LA to the LV reduces, which decreases the aortic flow as well. Furthermore, a significant rise occurs in the peak MV flow at 150 bpm.

To explore the heart function, Fig. 5 displays the LV pressure–volume relation (LVPVR), CO, SV and EF. On LVPVR diagrams, three main characteristics of LV functionality can be detected: preload, afterload and LV contractility.17 Figure 5a.1 compares the LVPVR diagrams for different LA elastances with and without AK.
The results show that for LA elastances below ELA3, lack of AK causes a decrease in preload and afterload, which leads to SV and CO reduction, as shown in Figs. 5a.2 and 5a.3. For the larger elastances (ELA4–ELA6) the AK effect gradually disappears, and no differences can be observed in the cardiac metrics between ELA and ELAC. Figures 5b.1–5b.3 illustrate the changes in cardiac metrics during LVSD. The outcomes show that a reduction in LV systolic function is accompanied by an increase in preload and afterload, while the LV contractility diminishes drastically, causing SV, EF and CO to decrease. Figures 5c.1–5c.3 display the cardiac metrics at various HRs. The LVPVR diagram reveals that as the HR increases, the preload changes slightly, whereas the afterload undergoes more significant changes, while the contractility of the LV is not visibly affected. As a result, CO increases; however, owing to the reduction in the time during which the valves remain open, SV and EF decrease significantly.

Figure 5: LVPVR loop during (a.1) lack of AK and LAR, (b.1) LVSD, and (c.1) HFF; CO and the duration for which the AV and MV remain open during (a.2) lack of AK and LAR, (b.2) LVSD, and (c.2) HFF; changes of EF and SV per beat during (a.3) lack of AK and LAR, (b.3) LVSD, and (c.3) HFF.

AF Effects on Flow Distribution Throughout the Aortic Circulation

In order to systematically assess AF-related changes in a qualitative manner, standard haemodynamic metrics are invoked. Time-averaged wall shear stress (TAWSS), oscillatory shear index (OSI) and the TAWSS gradient (TAWSSG) are employed to consider the mean behaviour of the WSS, the occurrence of reversed flow and the local variation of the WSS, respectively.30,51 Furthermore, the ratio of OSI to TAWSS, known as the endothelial cell activation potential (ECAP),15 indicates thrombogenesis-prone regions throughout the arterial system. Previous studies have shown that for TAWSS values less than 0.36 Pa, monocytes are prone to adhere to endothelial cells, which could lead to thrombogenesis.16,63 Moreover, high OSI values indicate disturbed-flow regions at the vascular wall, where the WSS vector drastically changes its direction over the cardiac cycle. Therefore, an ECAP value around 1.4 is considered the threshold value for thrombogenesis. Moreover, to improve visualisation of the unsteady flow features, iso-surfaces of the Q-criterion are used.27

As discussed in the previous section, lack of AK, LAR, LVSD and HFF all alter the flow passing across the AV. Therefore, changes in the inflow boundary condition during AF influence the aortic haemodynamics. To this end, ECAP and TAWSSG over a cardiac cycle, and the vortex strength and velocity contours at systolic peak, are examined. To compare the haemodynamic variations of the defects with each other, two cases are presented for each anomaly and compared against the baseline values: ELA1 and ELA6 for LAR, ELV1 and ELV3 for LVSD, and 100 and 150 bpm for HFF.

The results are displayed in Fig. 6 for three different LA compliances. From ELA1 to ELA3, the flow rate at the LVOT increases, which lowers the ECAP at the AA, AoA and DA. For ELA values higher than ELA3, the LA loses its active contraction, and so the flow across the LVOT diminishes, which is accompanied by a higher ECAP.

Figure 6: Comparison of haemodynamic metrics in the aortic circulation for the normal condition and the AF-associated defects: (a) ECAP = OSI/TAWSS, and (b) TAWSSG (for each AF-related defect two cases are shown).
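For reference, the wall metrics and the vortex identifier used above can be computed directly from exported solver fields. The sketch below is a minimal illustration under stated assumptions (WSS vectors sampled uniformly in time at each wall point, velocity-gradient tensors available at the cells of interest); the function names are hypothetical, and this is not the authors' post-processing code.

```python
import numpy as np

def wss_metrics(tau, dt):
    """TAWSS, OSI and ECAP from instantaneous wall shear stress.

    tau : array of shape (nt, npts, 3) holding WSS vectors [Pa] over
          one cycle, sampled every dt seconds at npts wall points.
    """
    period = dt * (tau.shape[0] - 1)
    vec_int = np.trapz(tau, dx=dt, axis=0)                          # time integral of the WSS vector
    mag_int = np.trapz(np.linalg.norm(tau, axis=2), dx=dt, axis=0)  # integral of its magnitude
    tawss = mag_int / period                                        # mean WSS magnitude [Pa]
    osi = 0.5 * (1.0 - np.linalg.norm(vec_int, axis=1) / mag_int)   # 0 (unidirectional) to 0.5
    ecap = osi / tawss                                              # [1/Pa]
    return tawss, osi, ecap

def q_criterion(grad_u):
    """Q-criterion from velocity-gradient tensors of shape (..., 3, 3);
    Q > 0 marks vortex cores (plotted at iso-values 250 and 2500 in Fig. 7)."""
    s = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))  # strain-rate (symmetric) part
    w = 0.5 * (grad_u - np.swapaxes(grad_u, -1, -2))  # rotation (antisymmetric) part
    return 0.5 * (np.sum(w * w, axis=(-1, -2)) - np.sum(s * s, axis=(-1, -2)))

# thresholds quoted in the text (illustrative usage):
# tawss, osi, ecap = wss_metrics(tau, dt)
# prone_ecap = ecap > 1.4      # ECAP threshold for thrombogenesis
# low_wss = tawss < 0.36       # monocyte-adhesion threshold [Pa]
```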
Within the considered range, the patterns of TAWSSG do not change significantly; however, the variations become more visible for the larger vessels, including the aorta, LCCA, RCCA, LSCA and RSCA, while for the LCA and RCA this variation is less significant. Furthermore, depending on the diameter of the artery, and across bends and curvatures, TAWSSG increases. In LVSD, the decrease in LV elastance reduces the flow output, which significantly influences ECAP and TAWSSG. For an ELV of 0.3 mmHg/mL (ELV1), ECAP crosses the threshold value of 1.4, while for ELV3 it marginally exceeds the limit. The susceptible regions in ELV1 and ELV3 are therefore the aortic root, AoA and DA. Moreover, for ELV1 the ECAP grows considerably at the supra-aortic branches. TAWSSG also undergoes significant changes. Indeed, for low ELV, TAWSSG decreases since less blood flows into the aortic circulation, while it increases as ELV increases. The results demonstrate that for low values of ELV, TAWSSG decreases along with the decrease in TAWSS (contours of TAWSS and OSI are presented in the Supplementary Materials). Finally, the fourth column in Fig. 6 demonstrates ECAP and TAWSSG at high HR. The results show that at high HR, ECAP decreases while, conversely, TAWSSG increases. This increase is more pronounced at the AA and AoA. Figure 7 shows the flow structure in terms of velocity, streamlines and vortex strength. To visualise vortex strength, the Q-criterion is used (its definition is recalled at the end of this subsection), and the results are depicted for two iso-surface values of 250 and 2500. Additionally, velocity contours at two different cross-sections, in the AA and DA, are displayed, which also contain planar streamlines. All the results in Fig. 7 are illustrated at systolic peak.

(a) Vortex intensity for two iso-surface values using the Q-criterion; (b) velocity contours and in-plane streamlines at two planes across the AA and DA (shown by green arrows); the results are compared between normal aortic flow and AF-related defects.

The results are shown for LAR, LVSD and HFF and compared with the baseline values. Figure 7a shows that LAR does not affect vortex strength significantly, while it can change the velocity magnitude and vortex arrangement, specifically at the DA, as shown in Fig. 7b. In contrast with LAR, LVSD can strongly affect the aortic flow distribution. During severe systolic dysfunction (ELV1), the LV produces a very weak inflow waveform, which diminishes vortex strength and creates poor vortex core regions at the AA and a nearly uniform flow at the DA. As ELV increases (ELV3), the vortex strength increases, and the flow develops two vortices at the AA with lower intensity, while the flow is partially disturbed at the DA. Furthermore, the results for higher HRs, i.e., 100 and 150 bpm, show that they do not change the vortex strength meaningfully; however, for the highest HR (150 bpm in this study) the flow tends to form stronger vortices. It is worth mentioning that for the mid-range HR, the flow at systolic peak does not form vortices at the AA and DA.
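For reference, the Q-criterion used above follows the standard definition of Hunt et al.27: with the strain-rate tensor S and the rotation tensor Ω as the symmetric and antisymmetric parts of the velocity gradient, vortical regions are those where rotation dominates strain,

$$Q=\frac{1}{2}\left(\left\Vert \boldsymbol{\Omega}\right\Vert ^{2}-\left\Vert \mathbf{S}\right\Vert ^{2}\right)>0,\qquad \mathbf{S}=\frac{1}{2}\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{\mathsf{T}}\right),\qquad \boldsymbol{\Omega}=\frac{1}{2}\left(\nabla\mathbf{u}-\nabla\mathbf{u}^{\mathsf{T}}\right).$$

The iso-surface values of 250 and 2500 quoted above are accordingly thresholds on Q, in s⁻² if SI units are assumed (the text does not state the units explicitly).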
Blood Perfusion

In this section, changes in blood perfusion throughout the aorta and its main branches during AF are investigated. Figure 8 displays blood perfusion in the different branches of the considered aorta. The figure shows the percentage of flow through each outlet and, for each outlet, the perfusion variations are displayed for LAR, LVSD and HFF during AF.

Flow percentage through each branch during various AF anomalies. For each case, the flow percentage is evaluated based on the mean flow rate at the inlet. Each number at the top of the chart denotes the flow variation among the different cases of a particular AF defect: the first number (red) for LAR, the second number (crimson) for LVSD, the third number (blue) for HFF, and the fourth number (black) the total flow change among the different AF anomalies. (a) LCA, (b) RCA, (c) RSCA, (d) RCCA, (e) LCCA, (f) LSCA, and (g) DTAO.

The results demonstrate that among the six sets of LA elastance, ELA3 produces the largest flow rate, while deviation from ELA3, which is considered the normal elastance, leads to lower flow rates at the LVOT. Figure 8 shows the flow percentage through each branch for the various ELAs. The results illustrate that as the flow rate decreases, the general trend of the percentage flow through the coronary arteries is incremental, while it is decremental for the DTAO. Furthermore, depending on the location of the supra-aortic arteries, the flow percentage changes with LA elastance. For the LCCA and RCCA the patterns are similar to those of the coronary arteries; however, for the RSCA and LSCA no regular pattern can be observed. During LVSD, as shown in Fig. 8, the aortic flow decreases as ELV becomes smaller (i.e., as the LV contraction capability reduces). The results show that as ELVmax increases and approaches the normal LV elastance, the percentage flow through the coronary arteries decreases, while the flow at the DTAO increases. Moreover, there are significant flow variations in the RCA (15.22%) and LCA (4.82%) for different ELVmax. Flow variations in the supra-aortic arteries do not follow a regular pattern; however, the LCCA and RCCA show a descending trend as flow increases at the LVOT, while the LSCA reveals an ascending trend. In the case of HFF, by increasing the HR, the flow percentage through the coronary arteries increases (13.78% for the LCA and 11.27% for the RCA), while it decreases for the DTAO. Furthermore, the LCCA and RSCA reveal an increasing trend and, in general, the flow in the RCCA tends to decrease, while for the LSCA it increases. In summary, since the flow at the LVOT is reduced during AF, the overall perfusion decreases, while the flow percentages shift with respect to the normal condition.

In this study, the effects of four common AF attributes (lack of AK, LAR, LVSD and HFF) on aortic flow distribution were examined. The pressure at the aortic root, LV and LA, and the flow rate across the AV and MV were considered. Furthermore, other metrics including LVPVR, SV, EF, CO, ECAP, TAWSSG, and vortex intensity and structure were examined. Using different sets of elastance for the LA, the AK and LAR effects were studied. The results showed that as the compliance of the LA decreases, the intra-cardiac and aortic pressures increase. This increase is much more severe for the LA, imposing substantial stress on it. Furthermore, at lower compliance (which in reality arises from fibrogenesis) the atrium loses its active contraction attributes, in accordance with clinical reports.1 Consequently, the preload decreases for any compliance deviation from the normal value; however, the afterload falls below the normal value for a more compliant atrium, while it exceeds the normal value as the atrium becomes stiffer. Therefore, the SV and CO decrease, while EF does not change significantly. However, for the less compliant LA, once it loses the AK feature, EF declines meaningfully. Overall, the changes in LA compliance result in a flow reduction across the MV and AV.
Furthermore, considering blood perfusion, the flow percentage through the RCA and LCA increases, while it decreases at the DTAO. It is noteworthy that the slopes of increase and decrease are marginally higher for the less compliant atrium. Therefore, the reduced flow passage across the LVOT leads to an ECAP increase at some spots of the AoA and DA, which suggests a thrombogenesis hazard. Moreover, changes in the LA elastance (as a result of either lack of AK or LAR) slightly decrease the velocity magnitude and alter the vortex structure because of the flow reduction. LVSD is another common defect occurring during AF. During severe dysfunction, the aorta, LV and LA experience a significant pressure drop. However, once the AV closes and the MV opens, the LA and LV pressures rise slightly for the lower LV elastances (more severe dysfunction), implying higher stresses on the LA and LV. Therefore, as LV functionality decreases, both preload and afterload increase, while the contractility decreases significantly, which together result in a drastic reduction of CO, SV and EF. Moreover, the current findings suggest that during LVSD, EF is negatively correlated with the ratio of LV end-systolic pressure to SV (ESP/SV), while it is positively correlated with the ratio of LV end-systolic pressure to LV end-systolic volume (ESP/ESV). Similar conclusions were drawn from a beat-by-beat analysis of seven AF patients by Muntinga et al.43 (the comparisons are presented in the Supplementary Materials). Weak LV performance lowers the blood flow across the MV and AV. Notably, the main flow reduction across the MV occurs during the passive LA contraction, while the flow reduces only slightly during its active contraction. Furthermore, considering blood perfusion, a decrease in LV systolic function alters the flow percentage through the proximal and distal arteries such that it rises at the RCA and LCA, while it reduces at the DTAO. The haemodynamic metrics in severe LVSD reveal observable changes in vortex structure and patterns, reflected in a drastic decrease in TAWSS and the emergence of reversed flow (an OSI increase) in some regions; the concomitant effect is that ECAP crosses the threshold limit of 1.4. Therefore, for ELV1 the possibility of thrombogenesis increases at the AA, AoA and DA. However, as the LV recovers its normal function, ECAP decreases, reducing the thrombogenesis hazard. One of the frequent AF features is HFF, in which the HR undergoes up to a three-fold increase. Additionally, HFF is accompanied by an irregular HR across beats,14 which was neglected in this study. The current findings showed that the aortic and intra-cardiac pressures increase significantly. The obtained average LA pressure is around 20 mmHg, which has been observed among patients with persistent AF.64 Since the end-diastolic volume is assumed to remain constant, the preload and the LV contractility are unchanged, while the afterload increases at higher HR. Also, since HR irregularity is ignored (a condition that occurs during atrial flutter), CO increases, while the SV and EF reduce significantly, as concluded by Anselmino et al.3 Therefore, with increasing HR, despite negligible changes in aortic peak flow, the blood has little time to flow through the AV, which decreases the circulatory perfusion. As with the other AF-related defects, during abnormal HR the flow percentage through the RCA and LCA increases, while it decreases through the DTAO.
Considering the flow structure inside the aortic conduit, at HR = 100 bpm vortices do not form at the AA and DA during the systolic peak, whereas for HR = 150 bpm the vortices at the AA and DA are recovered with higher intensities. Overall, the decrease in ECAP during HFF reduces thrombus formation due to fatty-substance adhesion; in contrast, the critical increase in TAWSSG enhances the possibility of luminal lesions and damage to endothelial cells, which increases the thrombogenesis risk. In Table 3, the key findings of this work are presented qualitatively. The results suggest that abnormal aortic flow significantly alters cardiac metrics and haemodynamic parameters. Consequently, the overall impact is an increase in the possibility of plaque formation, specifically at the AoA and DA. This possibility was emphasised by Blackshear et al.7 at the DA. Accordingly, arterial stenosis and subsequent rupture increase the threat of stroke as a consequence of embolus movement toward the carotid arteries. To establish a mechanistic link between AF-related defects and stroke, more investigations on different aortic morphologies are recommended.

Table 3 The trend of variation of each cardiac/haemodynamic metric in various AF defects (a qualitative summary of the key findings).

The present study has some limitations that should be addressed in future studies to capture AF anomalies more precisely. The current work targeted a parametric study of isolated AF-associated defects on the aortic flow circulation. However, more precise outcomes would be obtained if AF patient-specific inflow data were available, i.e., both image-based flow rate and velocity profile. In fact, it has been shown that an image-based, subject-specific velocity profile can change the haemodynamic metrics, particularly amongst patients and at the AA and AoA.42,49,65 In this study, the RCR Windkessel model was used for all the flow outlets (a minimal sketch of the RCR relation is given at the end of this section); however, more accurate results would be obtained using a dedicated lumped model for the coronary arteries.32,38 The flow was assumed to be laminar; however, it has been shown that it tends to become turbulent, particularly at the AA and AoA.41 Another factor that might hinder the accuracy of the results is the rigid-wall assumption. The wall compliance can be included either through the interaction of the blood and the vessel or by prescribing the history of wall deformation. The former requires subject-specific constitutive data, while the latter requires different morphological states over a cycle. These two models have been employed in a number of studies; however, they have some drawbacks (such as the lack of suitable clinical data and the computational burden) that should be amended.9,29,50
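To make the outlet treatment concrete, below is a minimal sketch of the three-element (RCR) Windkessel relation referred to throughout; this is not the authors' implementation, and the function name, integration scheme and parameter handling are illustrative only:

```python
import numpy as np

def rcr_outlet_pressure(q, dt, Rp, C, Rd, p0=0.0, p_distal=0.0):
    """Forward-Euler integration of the three-element (RCR) Windkessel ODE
        C dP/dt = (1 + Rp/Rd) Q + C Rp dQ/dt - (P - p_distal)/Rd
    q  : outlet flow-rate samples over the cycle [m^3/s]
    dt : time step [s]
    Rp, Rd : proximal/distal resistances [Pa s m^-3]; C : compliance [m^3/Pa]
    """
    q = np.asarray(q, dtype=float)
    p = np.empty_like(q)
    p[0] = p0
    dqdt = np.gradient(q, dt)                 # dQ/dt from the sampled waveform
    for n in range(len(q) - 1):
        dpdt = (Rp * dqdt[n]
                + (1.0 + Rp / Rd) * q[n] / C
                - (p[n] - p_distal) / (Rd * C))
        p[n + 1] = p[n] + dt * dpdt
    return p
```

The CAP alternative discussed in the results amounts to replacing this relation with a fixed outlet pressure, which is why it cannot reproduce the waveform shaping introduced by peripheral resistance and compliance.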
Abbreviations

AA: Ascending aorta
AF: Atrial fibrillation
AK: Atrial kick
AoA: Aortic arch
BC: Boundary condition
CAP: Constant average pressure
CFD: Computational fluid dynamics
DA: Descending aorta
DTAO: Descending thoracic aorta outlet
ECAP: Endothelial cell activation potential
EF: Ejection fraction
ESPVR: End systolic pressure volume relation
HFF: High frequency fibrillation
LA: Left atrium
LAA: Left atrial appendage
LAR: Left atrial remodelling
LCA: Left coronary artery
LCCA: Left common carotid artery
LSCA: Left subclavian artery
LV: Left ventricle
LVOT: Left ventricular outflow tract
LVP: Left ventricular pressure
LVPVR: LV pressure–volume relation
LVSD: Left ventricular systolic dysfunction
LVV: Left ventricular volume
OSI: Oscillatory shear index
ODE: Ordinary differential equation
PC-MRI: Phase contrast magnetic resonance imaging
RCA: Right coronary artery
RCCA: Right common carotid artery
RSCA: Right subclavian artery
SV: Stroke volume
TAWSS: Time-averaged wall shear stress
TAWSSG: TAWSS gradient

References

1. Allessie, M., J. Ausma, and U. Schotten. Electrical, contractile and structural remodeling during atrial fibrillation. Cardiovasc. Res. 54:230–246, 2002.
2. Alpert, J. S., P. Petersen, and J. Godtfredsen. Atrial fibrillation: natural history, complications, and management. Annu. Rev. Med. 39:41–52, 1988.
3. Anselmino, M., S. Scarsoglio, C. Camporeale, A. Saglietto, F. Gaita, and L. Ridolfi. Rate control management of atrial fibrillation: may a mathematical model suggest an ideal heart rate? PLoS ONE 10:1–9, 2015.
4. Anselmino, M., S. Scarsoglio, A. Saglietto, F. Gaita, and L. Ridolfi. Transient cerebral hypoperfusion and hypertensive events during atrial fibrillation: a plausible mechanism for cognitive impairment. Sci. Rep. 6:28635, 2016.
5. Anselmino, M., S. Scarsoglio, A. Saglietto, F. Gaita, and L. Ridolfi. A computational study on the relation between resting heart rate and atrial fibrillation hemodynamics under exercise. PLoS ONE 12:1–15, 2017.
6. Anter, E., M. Jessup, and D. J. Callans. Atrial fibrillation and heart failure: treatment considerations for a dual epidemic. Circulation, 2009. https://doi.org/10.1161/CIRCULATIONAHA.108.821306.
7. Blackshear, J. L., L. A. Pearce, R. G. Hart, M. Zabalgoitia, A. Labovitz, R. W. Asinger, and J. L. Halperin. Aortic plaque in atrial fibrillation. Stroke 30:834–840, 1999.
8. British Heart Foundation. BHF CVD Statistics Compendium 2017. London: British Heart Foundation, 2017.
9. Brown, A. G., Y. Shi, A. Marzo, C. Staicu, I. Valverde, P. Beerbaum, P. V. Lawford, and D. R. Hose. Accuracy vs. computational time: translating aortic simulations to the clinic. J. Biomech. 45:516–523, 2012.
10. Camm, A. J., et al. Guidelines for the management of atrial fibrillation. Eur. Heart J. 31:2369–2429, 2010.
11. Cha, Y. M., M. M. Redfield, W. K. Shen, and B. J. Gersh. Atrial fibrillation and ventricular dysfunction: a vicious electromechanical cycle. Circulation 109:2839–2843, 2004.
12. Choi, H. W., T. Luo, J. A. Navia, and G. S. Kassab. Role of aortic geometry on stroke propensity based on simulations of patient-specific models. Sci. Rep. 7:7065, 2017.
13. Clark, D. M., V. J. Plumb, A. E. Epstein, and G. N. Kay. Hemodynamic effects of an irregular sequence of ventricular cycle lengths during atrial fibrillation. J. Am. Coll. Cardiol. 30:1039–1045, 1997.
14. Daoud, E. G., R. Weiss, M. Bahu, B. P. Knight, F. Bogun, R. Goyal, M. Harvey, S. A. Strickberger, K. C. Man, and F. Morady. Effect of an irregular ventricular rhythm on cardiac output. Am. J. Cardiol. 78:1433–1436, 1996.
15. Di Achille, P., G. Tellides, C. A. Figueroa, and J. D. Humphrey. A haemodynamic predictor of intraluminal thrombus formation in abdominal aortic aneurysms. Proc. R. Soc. A 470:20140163, 2014.
16. Doyle, B., K. Miller, A. Wittek, and P. M. F. Nielsen. Computational Biomechanics for Medicine. New York: Springer, pp. 1–122, 2014. https://doi.org/10.1007/978-1-4419-5874-7.
17. Fukuta, H., and W. C. Little. The cardiac cycle and the physiologic basis of left ventricular contraction, ejection, relaxation, and filling. Heart Fail. Clin. 4:1–11, 2008.
18. García-Isla, G., A. L. Olivares, E. Silva, M. Nuñez-Garcia, C. Butakoff, D. Sanchez-Quintana, H. G. Morales, X. Freixa, J. Noailly, T. De Potter, and O. Camara. Sensitivity analysis of geometrical parameters to study haemodynamics and thrombus formation in the left atrial appendage. Int. J. Numer. Method. Biomed. Eng. 34:1–14, 2018.
19. Gómez-Outes, A., M. L. Suárez-Gea, and J. M. García-Pinilla. Causes of death in atrial fibrillation: challenges and opportunities. Trends Cardiovasc. Med. 27:494–503, 2017.
20. Graf, C., and J. P. Barras. Rheological properties of human blood plasma – a comparison of measurements with three different viscometers. Experientia 35:224–225, 1978.
21. Guala, A., C. Camporeale, F. Tosello, C. Canuto, and L. Ridolfi. Modelling and subject-specific validation of the heart-arterial tree system. Ann. Biomed. Eng. 43:222–237, 2014.
22. Gutman, J., Y. S. Wang, D. Wahr, and N. B. Schiller. Normal left atrial function determined by 2-dimensional echocardiography. Am. J. Cardiol. 51:336–340, 1983.
23. Guyton, A. C., and J. E. Hall. Textbook of Medical Physiology. Amsterdam: Elsevier Health Sciences, 2006.
24. Haidekker, M. A., A. G. Tsai, T. Brady, H. Y. Stevens, J. A. Frangos, E. Theodorakis, and M. Intaglietta. A novel approach to blood plasma viscosity measurement using fluorescent molecular rotors. Am. J. Physiol. Circ. Physiol. 282:H1609–H1614, 2002.
25. Hammermeister, K. E., and J. R. Warbasse. The rate of change of left ventricular volume in man. Circulation 49:739–747, 2012.
26. Heiberg, E., J. Sjögren, M. Ugander, M. Carlsson, H. Engblom, and H. Arheden. Design and validation of Segment – freely available software for cardiovascular image analysis. BMC Med. Imaging 10:1–13, 2010.
27. Hunt, J. C. R., A. A. Wray, and P. Moin. Eddies, streams, and convergence zones in turbulent flows. Stud. Turb. Using Num. Simul. Databases 1:193–208, 1988.
28. Iwasaki, Y. K., K. Nishida, T. Kato, and S. Nattel. Atrial fibrillation pathophysiology: implications for management. Circulation 124:2264–2274, 2011.
29. Jin, S., J. Oshinski, and D. P. Giddens. Effects of wall motion and compliance on flow patterns in the ascending aorta. J. Biomech. Eng. 125:347–354, 2003.
30. Kabinejadian, F., M. McElroy, A. Ruiz-Soler, H. L. Leo, M. A. Slevin, L. Badimon, and A. Keshmiri. Numerical assessment of novel helical/spiral grafts with improved hemodynamics for distal graft anastomoses. PLoS ONE 11:e0165892, 2016.
31. Karimi, S., M. Dabagh, P. Vasava, M. Dadvar, B. Dabir, and P. Jalali. Effect of rheological models on the hemodynamics within human aorta: CFD study on CT image-based geometry. J. Nonnewton. Fluid Mech. 207:42–52, 2014.
32. Kim, H. J., I. E. Vignon-Clementel, C. A. Figueroa, K. E. Jansen, and C. A. Taylor. Developing computational methods for three-dimensional finite element simulations of coronary blood flow. Finite Elem. Anal. Des. 46:514–525, 2010.
33. Kim, H. J., I. E. Vignon-Clementel, C. A. Figueroa, J. F. Ladisa, K. E. Jansen, J. A. Feinstein, and C. A. Taylor. On coupling a lumped parameter heart model and a three-dimensional finite element aorta model. Ann. Biomed. Eng. 37:2153–2169, 2009.
34. Kirchhof, P., et al. 2016 ESC guidelines for the management of atrial fibrillation developed in collaboration with EACTS. Eur. Heart J. 37:2893–2962, 2016.
35. Koizumi, R., K. Funamoto, T. Hayase, Y. Kanke, M. Shibata, Y. Shiraishi, and T. Yambe. Numerical analysis of hemodynamic changes in the left atrium due to atrial fibrillation. J. Biomech. 48:472–478, 2015.
36. Kuppahally, S. S., N. Akoum, N. S. Burgon, T. J. Badger, E. G. Kholmovski, S. Vijayakumar, S. N. Rao, J. Blauer, E. N. Fish, E. V. R. DiBella, R. S. MacLeod, C. McGann, S. E. Litwin, and N. F. Marrouche. Left atrial strain and strain rate in patients with paroxysmal and persistent atrial fibrillation: relationship to left atrial structural remodeling detected by delayed-enhancement MRI. Circ. Cardiovasc. Imaging 3:231–239, 2010.
37. Lan, H., A. Updegrove, N. M. Wilson, G. D. Maher, S. C. Shadden, and A. L. Marsden. A re-engineered software interface and workflow for the open-source SimVascular cardiovascular modeling package. J. Biomech. Eng. 140:024501, 2018.
38. Mantero, S., R. Pietrabissa, and R. Fumero. The coronary bed and its role in the cardiovascular system: a review and an introductory single-branch model. J. Biomed. Eng. 14:109–116, 1992.
39. Masci, A., M. Alessandrini, D. Forti, F. Menghini, L. Dedé, C. Tommasi, A. Quarteroni, and C. Corsi. A patient-specific computational fluid dynamics model of the left atrium in atrial fibrillation: development and initial evaluation (conference paper). 10263:392–400, 2017.
40. McElroy, M., and A. Keshmiri. Impact of using conventional inlet/outlet boundary conditions on haemodynamic metrics in a subject-specific rabbit aorta. Proc. Inst. Mech. Eng. Part H 232:103–113, 2018.
41. Miyazaki, S., K. Itatani, T. Furusawa, T. Nishino, M. Sugiyama, Y. Takehara, and S. Yasukochi. Validation of numerical simulation methods in aortic arch using 4D Flow MRI. Heart Vessels 32:1032–1044, 2017.
42. Morbiducci, U., R. Ponzini, D. Gallo, C. Bignardi, and G. Rizzo. Inflow boundary conditions for image-based computational hemodynamics: impact of idealized versus measured velocity profiles in the human aorta. J. Biomech. 46:102–109, 2013.
43. Muntinga, H. J., A. T. M. Gosselink, P. K. Blanksma, P. J. De Kam, E. E. Van Der Wall, and H. J. G. M. Crijns. Left ventricular beat to beat performance in atrial fibrillation: dependence on contractility, preload, and afterload. Heart 82:575–580, 1999.
44. Nakatani, S., M. J. Garcia, M. S. Firstenberg, L. Rodriguez, R. A. Grimm, N. L. Greenberg, P. M. McCarthy, P. M. Vandervoort, and J. D. Thomas. Noninvasive assessment of left atrial maximum dP/dt by a combination of transmitral and pulmonary venous flow. J. Am. Coll. Cardiol. 34:795–801, 1999.
45. Natale, A., and J. Jalife. Atrial Fibrillation: From Bench to Bedside. New York: Springer, 2008.
46. Nattel, S., B. Burstein, and D. Dobrev. Atrial remodeling and atrial fibrillation. Circ. Arrhythmia Electrophysiol. 1:62–73, 2008.
47. Otani, T., A. Al-Issa, A. Pourmorteza, E. R. McVeigh, S. Wada, and H. Ashikaga. A computational framework for personalized blood flow analysis in the human left atrium. Ann. Biomed. Eng. 44:3284–3294, 2016.
48. Pirola, S., Z. Cheng, O. A. Jarral, D. P. O'Regan, J. R. Pepper, T. Athanasiou, and X. Y. Xu. On the choice of outlet boundary conditions for patient-specific analysis of aortic flow using computational fluid dynamics. J. Biomech. 60:15–21, 2017.
49. Pirola, S., O. A. Jarral, D. P. O'Regan, G. Asimakopoulos, J. R. Anderson, J. R. Pepper, T. Athanasiou, and X. Y. Xu. Computational study of aortic hemodynamics for patients with an abnormal aortic valve: the importance of secondary flow at the ascending aorta inlet. APL Bioeng. 2:026101, 2018.
50. Romarowski, R. M., A. Lefieux, S. Morganti, A. Veneziani, and F. Auricchio. Patient-specific CFD modelling in the thoracic aorta with PC-MRI-based boundary conditions: a least-square three-element Windkessel approach. Int. J. Numer. Method. Biomed. Eng. 34:1–21, 2018.
51. Ruiz-Soler, A., F. Kabinejadian, M. A. Slevin, P. J. Bartolo, and A. Keshmiri. Optimisation of a novel spiral-inducing bypass graft using computational fluid dynamics. Sci. Rep. 7:1–14, 2017.
52. Scarsoglio, S., C. Camporeale, A. Guala, and L. Ridolfi. Fluid dynamics of heart valves during atrial fibrillation: a lumped parameter-based approach. Comput. Methods Biomech. Biomed. Eng. 19:1060–1068, 2016.
53. Scarsoglio, S., C. Gallo, and L. Ridolfi. Effects of atrial fibrillation on the arterial fluid dynamics: a modelling perspective. Meccanica 53:3251–3267, 2018.
54. Scarsoglio, S., A. Guala, C. Camporeale, and L. Ridolfi. Impact of atrial fibrillation on the cardiovascular system through a lumped-parameter approach. Med. Biol. Eng. Comput. 52:905–920, 2014.
55. Scarsoglio, S., A. Saglietto, M. Anselmino, F. Gaita, and L. Ridolfi. Alteration of cerebrovascular haemodynamic patterns due to atrial fibrillation: an in silico investigation. J. R. Soc. Interface 14:20170180, 2017.
56. Scarsoglio, S., A. Saglietto, F. Gaita, L. Ridolfi, and M. Anselmino. Computational fluid dynamics modelling of left valvular heart diseases during atrial fibrillation. PeerJ 4:e2240, 2016.
57. Scheinman, M. M., and M. H. Crawford. Atrial fibrillation. Curr. Diagn. Treat. Cardiol. 4:e2006, 2014.
58. Simaan, M. A., A. Ferreira, S. Chen, J. F. Antaki, and D. G. Galati. A dynamical state space representation and performance analysis of a feedback-controlled rotary left ventricular assist device. IEEE Trans. Control Syst. Technol. 17:15–28, 2009.
59. Tanabe, M., K. Onishi, K. Dohi, T. Kitamura, M. Ito, T. Nobori, and T. Nakano. Assessment of left ventricular systolic function in patients with chronic atrial fibrillation and dilated cardiomyopathy using the ratio of preceding to prepreceding R-R intervals. Int. J. Cardiol. 108:197–201, 2006.
60. Thomas, J. D., Z. B. Popović, S. Zhuang, R. A. Grimm, K. A. Mowrey, T. N. Mazgalev, T. Tabata, Y. Zhang, and D. W. Wallick. Slow rate during AF improves ventricular performance by reducing sensitivity to cycle length irregularity. Am. J. Physiol. Circ. Physiol. 283:H2706–H2713, 2015.
61. Vagos, M. R. S. S., I. G. M. van Herck, J. Sundnes, H. J. Arevalo, A. G. Edwards, and J. T. Koivumäki. Computational modeling of electrophysiology and pharmacotherapy of atrial fibrillation: recent advances and future challenges. Front. Physiol. 9:1–29, 2018.
62. Wolf, P. A., R. D. Abbott, and W. B. Kannel. Atrial fibrillation as an independent risk factor for stroke: the Framingham study. Stroke 22:983–988, 1991.
63. Worthen, G. S., L. A. Smedly, M. G. Tonnesen, D. Ellis, N. F. Voelkel, J. T. Reeves, and P. M. Henson. Effects of shear stress on adhesive interaction between neutrophils and cultured endothelial cells. J. Appl. Physiol. 63:2031–2041, 1987.
64. Yoshida, K., M. Ulfarsson, H. Oral, T. Crawford, E. Good, K. Jongnarangsin, F. Bogun, F. Pelosi, J. Jalife, F. Morady, and A. Chugh. Left atrial pressure and dominant frequency of atrial fibrillation in humans. Heart Rhythm 8:181–187, 2011.
65. Youssefi, P., A. Gomez, C. Arthurs, R. Sharma, M. Jahangiri, and C. A. Figueroa. Impact of patient-specific inflow velocity profile on hemodynamics of the thoracic aorta. J. Biomech. Eng. 140:011002, 2017.
66. Zhang, L. T., and M. Gay. Characterizing left atrial appendage functions in sinus rhythm and atrial fibrillation using computational models. J. Biomech. 41:2515–2523, 2008.

Acknowledgements

Amin Deyranlou would like to acknowledge the Ph.D. scholarship (President's Doctoral Scholar) awarded by the University of Manchester. Amir Keshmiri would also like to acknowledge the pump priming fund awarded by Professor Bernard Keavney for conducting additional MRI scans.

Author information

Department of Mechanical, Aerospace and Civil Engineering (MACE), The University of Manchester, Manchester, M13 9PL, UK: Amin Deyranlou, Alistair Revell & Amir Keshmiri
Division of Cardiovascular Sciences, School of Medical Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, M13 9PL, UK: Josephine H. Naish
Division of Cardiovascular Sciences, School of Medical Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Oxford Road, Manchester, M13 9PL, UK: Christopher A. Miller
Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Southmoor Road, Wythenshawe, Manchester, M13 9PL, UK
Wellcome Centre for Cell-Matrix Research, Division of Cell-Matrix Biology & Regenerative Medicine, School of Biology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Oxford Road, Manchester, M13 9PL, UK

Correspondence to Amir Keshmiri. Associate Editor Umberto Morbiducci oversaw the review of this article.

Supplementary material 1 (DOCX 1764 kb)

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Deyranlou, A., Naish, J.H., Miller, C.A., et al. Numerical Study of Atrial Fibrillation Effects on Flow Distribution in Aortic Circulation. Ann. Biomed. Eng. (2020). https://doi.org/10.1007/s10439-020-02448-6

Keywords: 4D phase contrast magnetic resonance imaging
communications earth & environment

A chemical threshold controls nanocrystallization and degassing behaviour in basalt magmas

Alex Scarani1, Alessio Zandonà2, Fabrizio Di Fiore1, Pedro Valdivia3, Rizaldi Putra3, Nobuyoshi Miyajima3, Hansjörg Bornhöft4, Alessandro Vona1, Joachim Deubener4, Claudia Romano1 & Danilo Di Genova5

Communications Earth & Environment volume 3, Article number: 284 (2022)

An increasing number of studies are being presented demonstrating that volcanic glasses can be heterogeneous at the nanoscale. These nano-heterogeneities can develop both during viscosity measurements in the laboratory and during magma eruptions. Our multifaceted study identifies total transition metal oxide content as a crucial compositional factor governing the tendency of basalt melts and glasses towards nanolitization: under both anhydrous and hydrous conditions, an undercooled trachybasalt melt from Mt. Etna readily develops nanocrystals whose formation also hampers viscosity measurements, while a similar but FeO- and TiO2-poorer basalt melt from Stromboli proves far more stable under similar conditions. We therefore outline a procedure to reliably derive pure liquid viscosity without the effect of nanocrystals, additionally discussing how subtle compositional differences may contribute to the different eruptive styles of Mt. Etna and Stromboli.

Volcanic eruptions occur daily (about 100 times per year1), and some potentially have extreme destructive power, as they can release an enormous amount of energy in a very short time. Depending on their size and style (explosive or effusive), volcanic eruptions can alter climate both locally and globally, and cause mass extinctions, fatalities, famine and disease2,3,4,5,6,7. As such, they represent a serious threat to human activities, infrastructure and the economy. It follows that the probabilistic prediction and mitigation of volcanic risks have acquired paramount importance in modern Earth science.
The mechanistic understanding of magma fragmentation, and hence of the explosive behavior of volcanoes, indeed represents one of the three grand challenges in volcanology and eruption forecasting8. Probabilistic predictions of volcanic eruptions rely on numerical modeling of magmatic processes9,10,11,12,13. Among the parameters necessary to model eruptive scenarios, those describing magma transport, and hence rheology, are the most crucial. Magma viscosity is the central parameter that controls rheology and thus the flow behavior from the storage environment to the volcanic vent14,15,16,17,18,19. In particular, the temperature and chemistry dependence of melt viscosity is key in controlling magma transport, its decompression rate and thereby the overall eruptive style20,21,22,23,24,25,26,27. Empirical models of melt viscosity28,29,30,31 are routinely used to approximate magma viscosity under eruptive conditions. These models are based on experimental data obtained over several decades from viscometry measurements (ref. 31 and references therein). However, numerous studies have highlighted the challenges associated with accurately determining the viscosity of melts prone to partial crystallization21,22,23,32,33,34,35, which can lead to overestimating the viscosity of the liquid by up to two orders of magnitude21,32. The experiments appear particularly challenging for compositions containing iron and titanium oxides and for viscosity values between ~10⁹ and ~10¹² Pa s, where pervasive nanoscale crystallization can easily go undetected if the samples are not checked post-measurement with adequate detection limits and spatial resolution. Given the importance of low-temperature measurements for the extrapolation of both anhydrous and especially hydrous viscosity to eruptive conditions, these experimental difficulties bring into question the accuracy of our present knowledge of the pure liquid viscosity of magmas. This uncertainty necessarily reverberates into the numerical modeling of volcanic eruptions, within which magma viscosity dramatically influences the eruptive scenario. Here, we demonstrate the experimental challenges related to the correct determination of the viscosity of magmatic liquids, using a multipronged approach that includes viscosity measurements, Raman spectroscopy and transmission electron microscopy. As previously inferred32, near-Tg (the glass transition temperature) nanocrystallization of iron titanium oxides substantially increases the low-temperature viscosity of a Mt. Etna trachybasalt (Italy), while such a phenomenon has a far smaller impact (if it occurs at all) on a Stromboli high-K basalt (Italy) measured at similar conditions. We attribute this difference to small but significant differences in total iron and titanium content, which can dramatically affect the tendency of the magmas to reach oversaturation in the vicinity of Tg and (nano)crystallize. We thereby emphasize the need for a critical review and validation of the published viscosity data, providing a vademecum for the correct experimental determination of viscosity based on our previous studies32,36. However, our results have implications that are potentially further-reaching than the laboratory, since nanocrystals have been identified in various rocks formed during explosive eruptions21,32,33,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51.
In the last 4 years alone, 60 studies21,23,31,32,33,34,35,36,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112 have been published reporting or inferring the presence of nanocrystals in natural and synthetic samples (Fig. 1). Because the formation of nanocrystals can increase magma viscosity in unexpected ways21,32 and trigger vigorous nucleation of bubbles upon magma decompression and undercooling46,47,48,49,101, our study aims to stimulate a reappraisal of magma rheology and fragmentation in the possible case of syn-eruptive nanocrystal formation. We initiate this process through the provocative speculation that the peculiarly dissimilar eruptive activities of Mt. Etna and Stromboli (summarized in section "On the eruption styles of Mt. Etna and Stromboli volcanoes") may also originate from a different degree of undercooling-driven oversaturation in Fe- and Ti-oxides, controlling the tendency of their magmas to (nano)crystallize. High-pressure and high-temperature experiments using hydrous melts at eruptive conditions confirm that nanolitization occurs more pervasively in Mt. Etna basalt than in Stromboli basalt, fostering bubble nucleation. We finally discuss the relevance of our results within the realm of basaltic volcanism.

Fig. 1: Literature sources dealing with nanolites. Cumulative number of volcanologically relevant studies either explicitly focusing on nanocrystals or referring to their presence and role in magmatic products and/or in synthetic melts and glasses.

On the eruption styles of Mt. Etna and Stromboli volcanoes

The recent explosive activity of Mt. Etna is characterized by weak Strombolian eruptions and more energetic paroxysms113,114. The latter consist of lava fountains that can last from several minutes to a few hours and produce eruptive plumes more than 10 km in height115,116,117,118,119. Lava fountaining at Etna has been interpreted as generated when large volumes of volatiles rapidly exsolve from the magma during its fast ascent and decompression along the conduit120,121, producing magma fountains with exit velocities ranging between 33 and 125 m s⁻¹ 122,123,124,125. Recently, it has been suggested126 that lava fountaining is a distinct style, separate from effusive and explosive eruption styles, produced when magma ascends rapidly and fragments above the vent. Ordinary activity at Stromboli consists of persistent, rhythmic, mild Strombolian-type eruptions, driven by the bursting of large bubbles called slugs127. Ejected pyroclasts are made of degassed and highly porphyritic (phenocryst- and microlite-rich) magma stalling at shallow levels (~1 km b.s.l.)128. The mechanisms leading to Strombolian eruptions imply slow magma ascent rates (<0.01–0.1 m s⁻¹), permitting significant bubble coalescence to form gas slugs and promoting decoupled gas-melt flow27. At Stromboli, this activity is occasionally punctuated by short-lived, highly energetic explosions known as paroxysms129,130,131. These eruptions are characterized by the emission of pyroclasts to a height of a few kilometers above the craters, the production of 3–5 km high eruptive columns and, occasionally, pyroclastic flows132,133,134.
The triggering mechanism of these paroxysms is still the subject of debate135, but there is consensus on the primary role of a deep, volatile-rich, aphyric basaltic magma rapidly rising (1–3 m s⁻¹)136,137 from the lower magmatic system. The main difference between the paroxysms of Stromboli and Mt. Etna lies in their duration. While paroxysmal eruptions at Stromboli are single explosions lasting a few minutes132, sustained lava fountaining can persist at Mt. Etna for more than 1 h118,120. As such, Stromboli paroxysms have recently been defined as basaltic Vulcanian eruptions132, in which the eruption overpressure is provided by a small amount of volatile-rich magma undergoing closed-system degassing, while the shallow, crystal-rich, viscous magma acts as a weak plug138,139. In contrast, the long duration of lava fountains at Mt. Etna testifies to sustained conditions of coupling between the melt and the gas phases during closed-system degassing115,126.

Near-Tg and superliquidus viscosity measurements were performed on ETN and STR and compared to the whole-curve parameterization based on the Mauro–Yue–Ellison–Gupta–Allan (MYEGA) equation (Eq. (1), ref. 140):

$$\log_{10}\eta=\log_{10}\eta_{\infty}+\left(12-\log_{10}\eta_{\infty}\right)\frac{T_{\mathrm{g}}}{T}\exp\left[\left(\frac{m}{12-\log_{10}\eta_{\infty}}-1\right)\left(\frac{T_{\mathrm{g}}}{T}-1\right)\right]$$

where Tg is the glass transition temperature, m the fragility index and log10η∞ the viscosity at infinite temperature, which was fixed at −2.9 ± 0.3 in agreement with literature results from materials and Earth sciences31,141 (a short numerical illustration of Eq. (1) is given at the end of this subsection). Low-temperature viscosity measurements of ETN exhibited a physically unrealistic trend and an evident time-dependent increase in viscosity (Supplementary Fig. 2). We therefore used Tg and m derived elsewhere by calorimetry and Brillouin spectroscopy36. These literature values enabled a very accurate description of the high-temperature concentric-cylinder data (Fig. 2a). In the case of STR, Tg and m were fit to our viscosity data measured at both low and high temperature (Fig. 2b), retrieving m = 42.0 and Tg = 643.9 °C, in good agreement with previous studies (e.g., refs. 131,142). Low-temperature viscosity measurements performed on STR ran remarkably stable up to ~780 °C (Supplementary Fig. 2 in the attached Supplementary Data 1). Above this temperature, we observed a slight increase in viscosity over the timescale of the measurement (i.e., 20 min). We subsequently investigated the origin of the observed time-dependent increase in viscosity during micropenetration (MP) measurements by Raman spectroscopy.
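As a numerical illustration of Eq. (1) (a sketch, not code from the study; the function name and usage are ours), the following Python function returns the decadic logarithm of viscosity at a given temperature:

```python
import numpy as np

def myega_log10_viscosity(T, Tg, m, log10_eta_inf=-2.9):
    """MYEGA parameterization, Eq. (1): log10(eta [Pa s]) at temperature T [K].
    Tg [K] is defined as the temperature where log10(eta) = 12; m is the fragility index."""
    a = 12.0 - log10_eta_inf
    x = Tg / np.asarray(T, dtype=float)
    return log10_eta_inf + a * x * np.exp((m / a - 1.0) * (x - 1.0))

# With the STR parameters quoted above (Tg = 643.9 degC, m = 42.0):
Tg_K = 643.9 + 273.15
print(myega_log10_viscosity(Tg_K, Tg_K, 42.0))  # -> 12.0 at T = Tg, by construction
```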
Fig. 2: Viscosity measurements of Mt. Etna and Stromboli basalts. Viscosity measurements performed by micropenetration viscometry (MP; low-temperature range) and rotational viscometry (CC; high-temperature range) on (a) ETN and (b) STR melts, compared to the complete viscosity curves of the pure melts. The low-temperature range is shown in greater detail in panels (c) and (d), manifesting the much wider temperature interval over which STR was reliably measurable, whereas ETN invariably exhibited a time-dependent increase in viscosity and a strong deviation from the expected values (see also Supplementary Fig. 2). Red lines correspond to the MYEGA fit using Eq. (1). Error bars correspond to the standard deviations of MP measurements. Where not reported, error bars are smaller than symbols.

The Raman spectra (Fig. 3) of the ETN and STR starting materials were typical of basalt glasses143,144. Post-run samples manifested instead a gradual dwindling of the high-wavenumber envelope (>800 cm⁻¹) associated with the stretching vibrations of oxygens in the silicate network145. Additional intensity emerged at ~300 and ~660 cm⁻¹, where the main peaks of Fe-Ti-oxides such as magnetite are most typically observed32,146; these Raman bands appeared already at 640 °C in ETN and intensified at higher temperatures, while they were only visible in STR after the MP measurements performed at the highest temperatures, namely 777 and 797 °C.

Fig. 3: Post-micropenetration Raman spectra reveal nanocrystallization. Raman spectra collected after micropenetration viscometry: (a) ETN and (b) STR. Numbers in the legend indicate the temperature (°C) of the viscosity measurement. The intensity of the Raman spectra increases at ~300 and ~660 cm⁻¹ and decreases at >800 cm⁻¹ when Fe-Ti-oxides developed during the viscosity measurements, as highlighted by arrows and vertical dotted lines. This occurred for all ETN samples and for only the two STR samples measured at the highest temperatures (777 and 797 °C).

DSC upscans performed at 30 K min⁻¹ further confirmed the inferred higher susceptibility of ETN to near-Tg crystallization (Fig. 4). The endothermic glass transition of ETN (onset at 653 °C) was closely followed (after ~100 °C) by two distinct exothermic events: a first broad peak with onset at 758 °C and a second, sharper and more intense one with a maximum at 860 °C. After that, the sample melted over a broad endotherm with an offset at 1228 °C. In contrast, STR proved to be much more stable against crystallization: we observed in this sample a single broad exothermic event starting at 963 °C, i.e., ~285 °C above its glass transition (onset at 679 °C); the subsequent melting process was completed at 1251 °C. The behavior of ETN closely resembled that of synthetic melts in which the early precipitation of nanosized TiO2-bearing seeds is exploited to control the subsequent heterogeneous nucleation of aluminosilicate crystals, as in the production of conventional glass-ceramics147.

Fig. 4: DSC upscans documenting the different crystallization behavior of ETN and STR glasses. DSC upscans performed at 30 K min⁻¹ using ETN and STR samples. Labels: Tglass for the glass transition interval, Tx1 and Tx2 for exothermic crystallization events, Tm for the melting endotherm.

We subsequently subjected ETN to four heat treatments (heating and cooling rates = 10 K min⁻¹) up to 684 °C (ETN+1), 693 °C (ETN+10), 708 °C (ETN+25) and 732 °C (ETN+50), respectively, using a DSC for higher temperature precision (see also Supplementary Fig. 3 and Section 3.3). These temperatures are located only slightly above the previously determined Tg of 641 °C36, where the viscosity η = 10¹² Pa s; the total time spent above this temperature ranged between 8 and 18 min. The Raman spectra of the obtained samples (Supplementary Fig. 4) nonetheless exhibited features similar to those in Fig. 3, namely the emergence of the Raman signatures of Fe-Ti-oxides at ~300 and ~660 cm⁻¹ 32,146. We thereafter characterized these materials also by (S)TEM, to directly assess their nanostructural modification. TEM imaging confirmed that the homogeneous ETN starting material (Supplementary Fig. 5)
developed an increasingly heterogeneous nanostructure during the heat treatments (Fig. 5). Even before the identification of nanocrystals with well-defined lattice fringes in ETN+50, the amorphous silicate matrix of ETN+1, ETN+10 and ETN+25 exhibited the clear emergence of high- and low-contrast regions with a size below 10 nm, hinting at an incipient compositional reorganization of the samples before crystal nucleation. Note that such features would go undetected by X-ray diffraction and scanning electron microscopy, due to the lack of long-range order and their small size. High-angle annular dark-field (HAADF) micrographs and EDS mappings in STEM mode (Fig. 6) strengthened these observations: Fe, Al and Ti extensively clustered into a channel-like nanostructure already in sample ETN+10, subsequently giving rise to the formation of Fe-, Ti- and Al-bearing nanocrystals and a SiO2-enriched amorphous matrix in ETN+50. The local SiO2 enrichment of the matrix around nanocrystals reproduced the formation of diffusion barriers and core-shell nanostructures recently observed in synthetic and natural melts undergoing non-isochemical crystallization32,47,147,148,149,150,151,152,153,154.

Fig. 5: High-resolution TEM micrographs of nano-heterogeneities and nanocrystals in ETN samples. High-resolution TEM micrographs collected from samples: (a) ETN+1, (b) ETN+10, (c) ETN+25 and (d) ETN+50, with an inset at higher magnification detailing the presence of nanocrystals with well-visible lattice fringes. Note the different magnification levels; scale bars are provided on each panel.

Fig. 6: STEM-HAADF micrographs and EDS elemental maps, demonstrating nanoscale clustering of Fe, Ti and Al. STEM-HAADF micrographs and EDS elemental maps of samples: (a) ETN+10 and (b) ETN+50, detailing the heterogeneous distribution of Si, Al, Fe and Ti at the nanoscale. Note the different magnification levels; scale bars are provided.

A chemical threshold between Mt. Etna and Stromboli

We have shown that an anhydrous trachybasaltic melt from Mt. Etna is substantially more prone to (nano)crystallization than a similar material from Stromboli, as evident from the metastability diagrams sketched in Fig. 7 based on this work and previous literature sources47,155. These dissimilar responses to a dwell at temperatures above Tg must necessarily originate from inherent differences between the samples, i.e., their composition. A quick comparison reveals that ETN contains ~80% more TiO2 than STR (1.67 ± 0.05 wt.% and 0.92 ± 0.10 wt.%, respectively) and ~30% more FeOtot (10.05 ± 0.17 wt.% and 7.58 ± 0.25 wt.%, respectively), with only minor discrepancies in the other oxides. This higher content in transition metal oxides is likely to be responsible for the stronger tendency of Mt. Etna trachybasalt toward nanocrystallization: although these components can be assumed to be homogeneously distributed in the stable melt, their solubility in the undercooled liquid is strongly temperature-dependent and is additionally affected by changes in oxygen fugacity156,157,158,159,160,161,162. A sufficiently long dwell at medium-to-deep undercooling can therefore induce nucleation and growth of Fe-Ti-oxide crystals simply due to their oversaturation in the melt, which will ultimately be more pronounced in a FeO- and TiO2-richer melt.
This explanation is supported by fundamental studies of devitrification and crystal nucleation in aluminosilicate melts containing nucleating agents (e.g., TiO2 and ZrO2): differences of only a few wt.% typically mark the transition between unchallenging glass quenchability and extensive crystal precipitation163,164.

Fig. 7: Metastability diagrams of anhydrous basalts from Mt. Etna and Stromboli. Metastability diagrams of anhydrous basalts from (a) Mt. Etna and (b) Stromboli, providing the approximate timescales and temperatures at which heterogeneities/crystals are detected during laboratory experiments performed at ambient pressure within this work and in some relevant literature sources47,155. Only the time spent above Tg and below Tm during heating and cooling experiments is considered here, since no crystals are expected to form outside these boundaries (at least on such relatively short timescales); similarly, the effect of oxygen fugacity is disregarded. The textured fields delimited by red dashed lines signal the onset of nucleation and/or crystal growth in the homogeneous undercooled melts; we provide fictive temperature trends to approximate the role of melt relaxation in the two systems.

Nanocrystallization and magma dynamics: moving forward

Our study signals that nanocrystallization may have a non-negligible and far-reaching impact on magma flow properties, although our observations were gathered in the laboratory during DSC or micropenetration measurements, i.e., upon heating of a vitrified melt. Similar scenarios would therefore seem to occur only sporadically in nature, such as during the reheating of a solidified plug by the ascent of hot magma in a volcanic conduit. Nonetheless, it was recently demonstrated that nanocrystals can also precipitate as a result of fast magma undercooling from eruptive temperatures followed by an isothermal dwell (Fig. 7a) on the order of seconds47. These observations suggest that nanocrystallization may occur before or after fragmentation; in fact, recent detailed studies of volcanic deposits point in this direction44,45,46,104. Experimental studies and observations of natural products suggest that Fe-Ti-bearing nanolite formation can moreover favor gas-melt coupling by (i) fostering bubble nucleation47,48,49,65,110,111 and (ii) inhibiting gas bubble motion, coalescence and eventually outgassing, due to the formation of aggregates and to an increase in viscosity21,32,103. The likelihood of these processes is also supported by studies performed on microlites: it is well known165,166,167,168 that Fe-Ti-oxides are unrivaled in their ability to facilitate bubble nucleation65. Previous melt decompression experiments produced aggregates of bubbles and oxide crystals, which coated part of the outer bubble surfaces in a shell-like morphology169. It was also found85 that the transition from effusive to explosive eruptions occurs at ascent rates that overlap with those at which new Fe-Ti-oxides are nucleated in the conduit. Recently, bubble number density values in pyroclastic samples from a wide range of Plinian eruptions were consistently associated with heterogeneous nucleation on Fe-Ti-oxides101. It thus appears that eruption styles can also be controlled by the formation of Fe-Ti-oxides, acting as nucleation sites for syn-explosive vesicles in the shallow conduit, thereby triggering magma dehydration and a dramatic increase in viscosity.
So far, we have demonstrated that small differences in the anhydrous composition of basalts control the nanocrystallization of Fe-Ti-oxides. Transferring our findings to a broad range of eruptive scenarios requires the investigation of hydrous magma at eruptive temperatures and pressures. This led us to an additional series of experiments mimicking conduit conditions, where basalts can be subjected to rapid decompression and cooling during explosive eruptions126,170,171. Starting from superliquidus temperatures (≥1250 °C) and stable confining pressures (≥400 MPa) in an end-loaded piston cylinder apparatus (see Methods), hydrous basalts from Mt. Etna and Stromboli were subjected to rapid cooling down to the fictive temperature (at ~20 K s⁻¹), which also induced a rapid decompression (<100 MPa). The microstructures of the products retrieved after the experiments are shown in Fig. 8 and revealed a remarkably different response between the two magmas. At an initial pressure of 1000 MPa and water contents of 1.68 wt.% for ETN and 4.10 wt.% for STR (determined post-run by FTIR), the ETN matrix nanocrystallized intensively (Supplementary Fig. 6) and exhibited nucleation of pyroxene microlite clusters at the SEM scale (Fig. 8a), while STR was quenchable as a homogeneous glass (Fig. 8b). Therefore, although the STR melt dissolved significantly more water, and is thus expected to be less viscous than the ETN melt, it was possible to quench it as a pure glass. In a second experiment, we increased the dissolved water content to ~6 wt.% for both melts and lowered the initial pressure to 400 MPa. Here, degassing and devitrification took place in both melts, although the obtained microstructures were markedly different. The groundmass of ETN 6.30 wt.% H2O (nominal; we estimated the water content as ≥4.5 wt.% by Raman spectroscopy) was scattered with nanolites and far richer in bubbles (Fig. 8c), whose irregular shapes (Fig. 8e) suggest deformation due to the high viscosity of the residual melt; conversely, STR 5.93 wt.% H2O (determined post-run by FTIR) exhibited rounded bubbles (Fig. 8d), occasionally associated with rosettes of acicular microlites (Fig. 8f), in a crystal-free residual glassy phase, as confirmed by Raman spectroscopy (Supplementary Fig. 6). It is worth noting that the Raman spectrum of the sample ETN 6.30 wt.% H2O shows a relatively sharp shoulder at ~3650 cm⁻¹ in the water region (2700–4000 cm⁻¹). It would thus appear that the formation of nanolites depletes the melt of structurally bonded water to form quasi-crystalline domains, and thus further increases magma viscosity. This agrees with recent findings41 suggesting that the formation of isolated Fe(OH)2 clusters could occur in iron-rich Martian basalts.

Fig. 8: Backscattered electron (BSE) images of experimental groundmasses obtained from high-temperature and high-pressure experiments. Left column: BSE images of ETN samples with 1.68 wt.% H2O measured by FTIR (a) and nominal 6.30 wt.% H2O (c and e). Right column: BSE images of STR samples with measured 4.10 wt.% (b) and 5.93 wt.% H2O (d and f). The pressure used for the syntheses is reported on the figure, with temperature ranging between 1250 and 1300 °C. See Methodology for more details.

We therefore hypothesize that a different tendency toward nanolite formation and gas-melt coupling could play a key role in modulating the different eruptive styles observed during paroxysmal events at Mt. Etna and Stromboli volcanoes.
As reported above, Stromboli paroxysmal explosions are typically short-lived events driven by the eruption of a volatile-richer magma (as compared to the typical low-energy Strombolian activity) undergoing closed-system degassing. Conversely, Mt. Etna paroxysms are characterized by long-standing lava fountaining, in which the coupling of melt and bubbles is sustained for hours during magma ascent and degassing: such conditions may be favored by the nanostructuration of Etnean magmas demonstrated in this work. Indeed, scoria produced during fountain-fed activity at Mt. Etna172 provides textural evidence of marked vesicle size polydispersity (large and small size populations). Notably, while large vesicles result from deep volatile exsolution, the small vesicle population has been interpreted as due to syn-eruptive nucleation115. We furthermore speculate that Fe-Ti-oxide nanolites may prevent efficient bubble coalescence during magma ascent. This effect would hinder the achievement of the percolation threshold173 at which the system transitions from closed- to open-system degassing174, or at least limit the formation of efficient permeable pathways175, ultimately inhibiting gas-melt decoupling (i.e., outgassing). The sustained fountaining activity at Mt. Etna would therefore be maintained by the eruption of coupled gas-melt mixtures, where the increase in magma viscosity may sustain the overpressure of the expanding bubbles, which continuously accelerate and eventually fragment by inertia above the vent126,176. In contrast, the transient short-lived explosive paroxysms at Stromboli likely arise from easier bubble coalescence, also because their magma is drastically less prone to syn-eruptive nanocrystallization: consequently, the conditions for sustained lava fountaining cannot be reached. Because relatively high iron and titanium contents also appear to be a common feature of several highly explosive basaltic eruptions, such as the Masaya Triple Layer and Fontana Lapilli (Nicaragua), Etna 122 BC, Tarawera 1886 (New Zealand) and the Curacautín ignimbrite (Llaima volcano, Chile)177,178,179,180,181,182, we propose that chemical composition, and more specifically transition metal oversaturation, can play a key role in the dynamics of explosive volcanism beyond the cases studied in this work. To support these hypotheses, future investigations should be directed toward the exploration of nanoscale magma dynamics at temperatures and pressures representative of eruptive conditions. This will require, inter alia, the conception of novel experimental facilities171,183,184, at best combined with synchrotron radiation47 to allow in situ observations at timescales and with instrumental resolutions otherwise inaccessible in the laboratory.

Nanocrystallization and viscosity measurements: challenges and best practices

Our results highlight the inherent challenges of experimentally determining the low-temperature viscosity of magmatic melts, which are typically prone to phase separation, crystallization, or other compositional modifications (e.g., partial dehydration in water-bearing materials) during measurements in the vicinity of Tg. Measurements performed on a heterogeneous sample can yield viscosity values that are incorrect by up to two orders of magnitude, amplifying the uncertainties in the numerical modeling of magmatic processes and therefore deteriorating our ability to predict the style of volcanic eruptions from a probabilistic point of view.
Because sample instability is a complex function of melt composition and measuring conditions, it can only be qualitatively evaluated beforehand and must be continuously minimized using a sound experimental approach. We propose below a vademecum based on our experience in the laboratory:

1. Check the glass homogeneity before the measurements: Raman spectroscopy and TEM are valuable probes to verify the absence of amorphous heterogeneities or Fe-Ti-oxide crystals in starting materials32. Be aware that these nanostructures may be invisible to X-ray diffraction and scanning electron microscopy, due to their small size and lack of long-range order. If the glass is heterogeneous, try to remelt it at higher temperatures and/or quench it faster.

2. Always check samples after the measurements, as shown here. A quick comparison between the Raman spectrum of the starting material and those of measured samples can reveal incipient phase separation, the appearance of crystals or a partial loss of volatile components.

3. If the sample does not exhibit evident signs of crystallization during the measurements (as for the Stromboli basalt in this work), the viscosity measurements can be performed using conventional methods (such as micropenetration, beam bending, parallel plate, or fiber elongation), involving a long dwell time (tens of minutes) above Tg.

4. If the viscosity of the sample changes during isothermal measurements and/or if post-measurement analyses reveal material modifications, a more conservative approach should be preferred: one can still derive viscosity from (flash) calorimetry measurements using the rate-matching method, as described elsewhere32,77,185,186.

5. Despite the need for melt relaxation, minimize the temperature excursion above the glass transition during the measurements.

6. For particularly unstable samples (such as the Mt. Etna basalt in this work), the liquid viscosity curve can be parameterized with sufficient accuracy using Tg derived via DSC (running a measurement with matching rates at 10 K min−1) and a fragility index m obtained from Brillouin spectroscopy36 (one such parameterization is sketched below, after the summary paragraph).

Our study demonstrates that a simple dwell above the glass transition temperature, on a timescale typical for viscosity measurements, can induce the formation of nanocrystals in volcanic melts. Our results confirm that the formation of amorphous nanostructures and nanocrystals significantly increases the viscosity of volcanic melts by up to 2 log units. This phenomenon is assumed to arise from the compositional evolution of the residual melt, the formation of highly viscous amorphous nanoshells and possibly the agglomeration of nanocrystals. We therefore provide a vademecum for the correct experimental determination of melt viscosity, stressing the need to verify the nanoscale changes that may have occurred in the samples during the measurements. Our study also has implications for the eruptive dynamics of magmas. Within the compositional domain of basalts, we identify a key chemical parameter, i.e., the TiO2 and FeOtot content: while Mt. Etna trachybasalt is rich in transition metal oxides and extremely prone to nanocrystallization, Stromboli basalt is a far more stable melt. Because these magmas are involved in two different eruptive styles, we hypothesize that their different nanoscale dynamics may participate in controlling magma degassing and thus the eruptive dynamics of the two volcanoes.
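As anticipated in point 6 of the vademecum, one widely used two-parameter form for such a viscosity curve is the MYEGA equation of Mauro et al. (listed in the reference list below); whether this exact functional form was used in the present work is not stated, so the following should be read as one common choice rather than the authors' prescription. Writing η∞ for the high-temperature viscosity limit and imposing η(Tg) = 10^12 Pa s,

\[\log_{10}\eta(T)=\log_{10}\eta_{\infty}+\left(12-\log_{10}\eta_{\infty}\right)\frac{T_g}{T}\exp\left[\left(\frac{m}{12-\log_{10}\eta_{\infty}}-1\right)\left(\frac{T_g}{T}-1\right)\right],\]

so that, with $T_g$ from a rate-matched DSC measurement and $m$ from Brillouin spectroscopy, the full liquid viscosity curve can be estimated without a long dwell above the glass transition.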
Starting materials

We used a trachybasaltic lava rock from the 2001 Mt. Etna eruption, Italy187 and a high-K basaltic pumice sample erupted during the March 15, 2007 paroxysmal event of Stromboli volcano, Italy15,188. The rock samples were crushed using a jaw crusher and a ring mill and melted in a Fe-saturated Pt crucible at 1400 °C for 2 h. The melts were rapidly quenched to glasses. The chemical composition of the starting glasses is reported as oxide concentrations in Supplementary Table 1 and was determined using a JEOL JXA-8200 electron microprobe at the Bayerisches Geoinstitut (University of Bayreuth, Germany). The chemical analyses were carried out at 15 kV acceleration voltage and 5 nA beam current; a defocused 10 μm beam was used for all elements. Calibration standards were synthetic wollastonite for Ca and Si, periclase for Mg, hematite for Fe, spinel for Al, natural orthoclase for K, and albite for Na. Sodium and potassium were analyzed first to prevent alkali migration effects39. The homogeneity of the samples was tested by measuring the chemical composition at twenty different points across the surface, before and after micropenetration viscometry.

Viscometry

The high-temperature superliquidus viscosity of Mt. Etna's trachybasalt (ETN, 1225 °C < T < 1400 °C, 10^1.47 Pa s > η > 10^0.62 Pa s) was measured using a Rheotronic II Rotational Viscometer (Theta Instruments) and a concentric-cylinder geometry at the Experimental Volcanology and Petrology Laboratory (EVPLab, Roma Tre University, Italy). The apparatus is equipped with an Anton Paar Rheolab Qc viscometer head (full-scale torque of 75 mN m). Temperature was monitored using a factory-calibrated S-type thermocouple (precision of ±2 °C189) placed near the crucible walls. Accuracy in the viscometry measurements was better than 0.06 log units190. The melt was stirred at a shear rate (γ̇) of 10 s−1 with a Pt80Rh20 spindle (3.2 and 42 mm in diameter and wetted length, respectively) in air at 1 atm and 1400 °C for 5 h to ensure thermo-chemical homogenization. Subsequently, the temperature was lowered in steps of 25 °C down to 1225 °C, each time waiting until steady temperature and viscosity values were attained (~45 min). At the end of the measurement, the spindle was extracted, and the sample contained in the crucible (volume: ~15 cm3) was allowed to quench in air, with continuous water flow applied to the crucible walls, at ~120 K min−1 (ref. 191). The high-temperature superliquidus viscosity of Stromboli's basalt (STR, 1227 °C < T < 1432 °C, 10^1.92 Pa s > η > 10^0.84 Pa s) was measured using concentric-cylinder viscometry (Haake RV 20, Karlsruhe, Germany) at the Institute of Non-Metallic Materials (TU Clausthal, Germany). The torque reading of the device was calibrated at strain rates from 0.1 to 96 s−1 using the standard DGG-1192,193 and the error in viscosity was found to be ±0.02 log10 units. The post-run ETN and STR samples were drilled and crushed to obtain samples for the other analytical examinations. We subjected doubly polished glass disks (3 mm thick) of the ETN and STR samples to low-temperature micropenetration viscometry (MP) measurements. ETN viscosity was measured at 640, 691 and 728 °C, whereas STR viscosity was measured at ten different temperatures (Supplementary Table S2) ranging from 662 to 797 °C. We used a vertical dilatometer (Bähr VIS 404) at TU Clausthal. The setup consists of a SiO2 rod pushing a sapphire sphere of radius r = 0.75 mm, under a constant Ar flow.
When measuring viscosity below 700 °C, we applied a force of 3.92 N (400 g load); the force was decreased to 1.96 N (200 g load), 0.98 N (100 g load), 0.49 N (50 g load), 0.15 N (15 g load) and 0.05 N (5 g load) to measure viscosity at higher temperatures. The temperature was controlled with an S-type thermocouple (Pt-PtRh) placed ~2 mm from the sample surface. The temperature error is estimated at ±5 °C considering the accuracy of the S-type thermocouple and its distance from the sample194. We followed standard procedures195 to achieve thermal equilibration of the sample at the target measuring temperature: a heating rate of 0.17 K s−1 (10 K min−1) was imposed up to 100 °C below the desired temperature, which was then approached at a slower heating rate of 0.08 K s−1 (5 K min−1). After reaching the final dwell temperature, the samples were allowed to relax for ~60–600 s before the load was applied. The indentation depth of the sapphire sphere into the sample was measured as a function of time using a linear variable displacement transducer and the viscosity was determined according to the literature196. We estimated the measurement accuracy by measuring the viscosity of the standard glass DGG-1: the certified viscosity data192 were reproduced with a standard deviation of ±0.1 log units.

Differential scanning calorimetry

Amorphous shards of ETN and STR (mass: ~20 ± 5 mg) were subjected to controlled heat treatments in a differential scanning calorimeter (DSC 404 F3 Pegasus, Netzsch) at the EVPLab and at TU Clausthal, in PtRh20 crucibles and under an N2 5.0 atmosphere (25–80 ml min−1 flow rate). The instruments were calibrated using the melting temperatures and enthalpies of fusion of reference materials (pure metals: In, Sn, Bi, Zn, Al, Ag, and Au) up to 1610 °C. First, we performed a simple upscan at 30 K min−1 for both ETN and STR materials to gain insight into their general crystallization behavior. We then subjected ETN glass to controlled heat treatments within its glass transition interval: we heated at 10 K min−1 up to either 1, 10, 25 or 50 K above Tpeak (Supplementary Fig. 1), defined as the signal undershoot (exothermic = positive value) in the heat flow curve after the onset of the glass transition Tonset (Supplementary Fig. 1)32,77,94,197,198; we subsequently cooled the material down to room temperature at 10 K min−1 (Supplementary Fig. 2). The samples are hereafter named ETN+1, ETN+10, ETN+25 and ETN+50, respectively. These DSC upscans at 10 K min−1 incidentally confirmed the homogeneity of our starting material and the reproducibility of our experimental procedure, since we obtained from the heat flow curves an average Tonset of 644.8 ± 0.7 °C and an average Tpeak of 682.2 ± 0.5 °C over all four samples analyzed (Supplementary Table 2).

The samples were characterized before and after the DSC and micropenetration measurements using confocal Raman imaging microscopes at the Institute of Non-Metallic Materials (alpha300R, WITec GmbH). The Raman microscope is equipped with a 100× objective, a 532 nm green diode laser and a CCD detector. The integration time employed with the alpha300R microscope was 7 s (3 accumulations, 13 mW laser power). Raman spectra were invariably collected from cracked or polished samples, to exclude possible surface effects. Spectra were acquired in the range from 200 to 1300 cm−1. The Raman spectrometer was calibrated using a silicon standard.
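Returning to the micropenetration measurements above: the indentation-to-viscosity conversion cited there ("according to the literature196") is commonly written in the form given by the penetration viscometer study of Douglas et al. (which appears in the reference list); we sketch it here as an illustration of the method, not as a statement of the exact expression applied in this work:

\[\eta=\frac{0.1875\,F\,t}{r^{1/2}\,l^{3/2}},\]

where $F$ is the applied force, $t$ the indentation time, $r$ the radius of the indenting sphere and $l$ the indentation depth, valid in the shallow-indentation limit $l \ll r$.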
Transmission electron microscopy (TEM)

High-resolution TEM micrographs were collected from powdered samples using Philips CM20FEG and FEI Titan G2 80-200S/TEM microscopes. The analysis in scanning transmission electron microscopy (STEM) mode was carried out using a FEI Titan G2 80-200S/TEM (Bayerisches Geoinstitut, University of Bayreuth, Germany), operated at 200 kV and equipped with an energy-dispersive X-ray spectrometer (EDS) system consisting of four silicon drift detectors (Bruker QUANTAX EDS).

Synthesis of hydrous samples and decompression experiments

Water was dissolved in the anhydrous melts under pressure and temperature in an end-loaded piston cylinder apparatus at the Bayerisches Geoinstitut (University of Bayreuth, Germany). The anhydrous and homogeneous glasses were crushed and sieved to obtain two size fractions of <100 and 100–250 μm. The two powder fractions were mixed in a 1:1 ratio and loaded with known amounts of distilled water into Au80Pd20 capsules of 12 mm length, with inner and outer diameters of 4.6 and 5 mm, respectively. The anhydrous glass powders and water were added in a stepwise fashion to achieve a homogeneous distribution of water, and a metal piston was used to compact the glass powder at each step. The capsules were sealed by arc welding. The capsules were weighed before and after being placed in an oven at 110 °C for at least an hour to ensure that water was unable to escape. The syntheses were performed between 400 and 1000 MPa and 1250–1300 °C for 24 h using a 3/4 talc-Pyrex-Al2O3 assembly and a graphite furnace. Heating was performed at 100 K min−1, while pressure was adjusted manually to the target value through an upper 500-bar ram and a lower ram; control was switched to automatic once the target pressure and temperature values were achieved. Temperature was measured by an S-type thermocouple (Pt90Rh10) and a friction correction was applied based on calibrations199,200. The runs were terminated by turning off the electrical power, which imposed cooling at ~20 K s−1 between the experimental temperature and the fictive temperature of the melt. Although the experiments were terminated under the nominal isobaric quench mode of the apparatus, a rapid pressure drop (<100 MPa) was observed after the electrical power was turned off. As such, the hydrous melt was subjected to fast cooling and decompression. For each experiment, we recovered material from the top, middle and bottom parts of the capsule to check for sample homogeneity. The recovered samples were embedded in epoxy and polished for chemical, SEM and Raman spectroscopy analyses. The reproducibility of our results was tested by using different piston cylinder apparatuses at the Bayerisches Geoinstitut and by repeating the experiments up to six times for each apparatus.

Water content determination

The water content of the glasses was measured using Fourier-transform infrared spectroscopy (FTIR). We used a Bruker IFS 120 spectrometer connected to a Bruker IR microscope. Spectra were acquired using a tungsten light source with a Si-coated CaF2 beam-splitter and a narrow-band MCT (mercury cadmium telluride) detector. FTIR measurements were acquired between 1000 and 6000 cm−1 on doubly polished samples (~0.2–0.3 mm thickness). The analyzed spot was 100 μm in diameter with a spectral resolution of 4 cm−1. For each spectrum, 200 scans were accumulated. All hydrous glasses were measured at least three times at different spots to account for heterogeneities.
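The water content determination described next rests on the standard Lambert–Beer approach; as a sketch (with units kept consistent, and with the exact baselines and coefficients deferred to the following paragraph), the concentration of each hydrous species follows from its band area as

\[C_i=\frac{18.02\,A_i}{\rho\, d\, \varepsilon_i},\qquad C_{\mathrm{H_2O,total}}=C_{\mathrm{OH^-}}+C_{\mathrm{H_2O,mol}},\]

where $A_i$ is the integrated absorbance of the band, $\rho$ the glass density, $d$ the sample thickness and $\varepsilon_i$ the integral molar absorption coefficient of the species.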
Total water contents were derived from the peak areas of the OH− and H2O bands (~4500 and ~5200 cm−1, respectively), which correspond to combination bands of the stretching and bending modes. We used the "two Gaussians" (GG) baseline201 and the "GGpar" integral molar absorption coefficients (εOH− = 0.62; εH2Omol = 0.71) following ref. 201; the density of the glasses was calculated using ref. 131. For the ETN sample with the highest water content loaded before the synthesis (6.3 wt.%), FTIR estimation was not possible due to pervasive crystallization. Nevertheless, with Raman spectroscopy we estimated a minimum water content of 4.5 wt.% after ref. 66.

The raw data used for this study are publicly available online at https://doi.org/10.6084/m9.figshare.21466452.

Loughlin, S. C., Vye-Brown, C., Sparks, R. S. J., Brown, S. K. & Jenkins, S. Global Volcanic Hazards and Risk (Cambridge University Press, 2015). Baxter, P. J. Human impacts of volcanoes. in Volcanoes and the Environment 273–303 (Cambridge University Press, 2005). https://doi.org/10.1017/CBO9780511614767.011. McConnell, J. R. et al. Extreme climate after massive eruption of Alaska's Okmok volcano in 43 BCE and effects on the late Roman Republic and Ptolemaic Kingdom. Proc. Natl. Acad. Sci. USA. 117, 15443–15449. https://doi.org/10.1073/pnas.2002722117 (2020). Kandlbauer, J., Hopcroft, P. O., Valdes, P. J. & Sparks, R. S. J. Climate and carbon cycle response to the 1815 Tambora volcanic eruption. J. Geophys. Res. Atmos. 118, 12497–12507 (2013). Auker, M. R., Sparks, R. S. J., Siebert, L., Crosweller, H. S. & Ewert, J. A statistical analysis of the global historical volcanic fatalities record. J. Appl. Volcanol. 2, 1–24 (2013). Newhall, C. G., Self, S. & Robock, A. Anticipating future Volcanic Explosivity Index (VEI) 7 eruptions and their chilling impacts. Geosphere 14, 572–603 (2018). Oppenheimer, C. Climatic, environmental and human consequences of the largest known historic eruption: Tambora volcano (Indonesia) 1815. Prog. Phys. Geogr. 27, 230–259 (2003). Manga, M. et al. Volcanic eruptions and their repose, unrest, precursors, and timing. Volcanic Eruptions and Their Repose, Unrest, Precursors, and Timing (National Academies Press, 2017). https://doi.org/10.17226/24650. Melnik, O. & Sparks, R. S. J. Transient models of conduit flows during volcanic eruptions. in Statistics in Volcanology (eds Mader, H. M., Coles, S.G., Connor, C.B. & Connor, L. J.) 1–25 (The Geological Society of London, 2006). de' Michieli Vitturi, M., Clarke, A. B. B., Neri, A. & Voight, B. Transient effects of magma ascent dynamics along a geometrically variable dome-feeding conduit. Earth Planet. Sci. Lett. 295, 541–553 (2010). La Spina, G., de' Michieli Vitturi, M. & Clarke, A. B. Transient numerical model of magma ascent dynamics: application to the explosive eruptions at the Soufrière Hills Volcano. J. Volcanol. Geotherm. Res. 336, 118–139 (2017). Dufek, J. & Bergantz, G. W. Transient two-dimensional dynamics in the upper conduit of a rhyolitic eruption: A comparison of closure models for the granular stress. J. Volcanol. Geotherm. Res. 143, 113–132 (2005). Rosi, M. et al. Defining the pre-eruptive states of active volcanoes for improving eruption forecasting. Front. Earth Sci. 10, 1–20 (2022). Llewellin, E. W. & Manga, M. Bubble suspension rheology and implications for conduit flow. J. Volcanol. Geotherm. Res. 143, 205–217 (2005). Vona, A., Romano, C., Dingwell, D. B. & Giordano, D. The rheology of crystal-bearing basaltic magmas from Stromboli and Etna.
Geochim. Cosmochim. Acta 75, 3214–3236 (2011). Mader, H. M., Llewellin, E. W. & Mueller, S. P. The rheology of two-phase magmas: a review and analysis. J. Volcanol. Geotherm. Res. 257, 135–158 (2013). Dingwell, D. B. Volcanic dilemma: flow or blow? Science 273, 1054–1055 (1996). Lesher, C. E. & Spera, F. J. Chapter 5 – Thermodynamic and transport properties of silicate melts and magma. in The Encyclopedia of Volcanoes 2nd edn (ed Sigurdsson, H.) 113–141 (Academic Press, 2015). https://doi.org/10.1016/B978-0-12-385938-9.00005-5. Lavallée, Y. & Kendrick, J. E. Chapter 5 – A review of the physical and mechanical properties of volcanic rocks and magmas in the brittle and ductile regimes. in Forecasting and Planning for Volcanic Hazards, Risks, and Disasters. Hazards and Disasters Series, Vol. 2 (ed Papale, P.) 153–238 (Elsevier, 2021). Dingwell, D. B., Romano, C. & Hess, K.-U. The effect of water on the viscosity of a haplogranitic melt under P-T-X conditions relevant to silicic volcanism. Contrib. Mineral. Petrol. 124, 19–28 (1996). Di Genova, D. et al. A chemical tipping point governing mobilization and eruption style of rhyolitic magma. Nature 552, 235–238 (2017). Richet, P., Lejeune, A. M., Holtz, F. & Roux, J. Water and the viscosity of andesite melts. Chem. Geol. 128, 185–197 (1996). Liebske, C., Behrens, H., Holtz, F. & Lange, R. A. The influence of pressure and composition on the viscosity of andesitic melts. Geochim. Cosmochim. Acta 67, 473–485 (2003). Whittington, A. G., Richet, P. & Holtz, F. Water and the viscosity of depolymerized aluminosilicate melts. Geochim. Cosmochim. Acta 64, 3725–3736 (2000). Giordano, D. & Dingwell, D. B. Viscosity of hydrous Etna basalt: implications for Plinian-style basaltic eruptions. Bull. Volcanol. 65, 8–14 (2003). Cassidy, M., Manga, M., Cashman, K. V. & Bachmann, O. Controls on explosive-effusive volcanic eruption styles. Nat. Commun. 9, 2839 (2018). Gonnermann, H. M. & Manga, M. Dynamics of magma ascent in the volcanic conduit. in Modeling Volcanic Processes: The Physics and Mathematics of Volcanism (eds. Fagents, S. A., Gregg, T. K. P. & Lopes, R. M. C.). 55 (Cambridge University Press, 2012). Giordano, D., Russell, J. K. & Dingwell, D. B. Viscosity of magmatic liquids: a model. Earth Planet. Sci. Lett. 271, 123–134 (2008). Hui, H. & Zhang, Y. Toward a general viscosity equation for natural anhydrous and hydrous silicate melts. Geochim. Cosmochim. Acta 71, 403–416 (2007). Duan, X. Model for calculating the viscosity of natural iron-bearing silicate melts over a wide range of temperatures, pressures, oxygen fugacities, and compositions. Am. Mineral. 99, 2378–2388 (2014). Langhammer, D., Di Genova, D. & Steinle-Neumann, G. Modelling the viscosity of anhydrous and hydrous volcanic melt. Geochem. Geophys. Geosyst. 22, e2021GC009918 (2021). Di Genova, D., Zandona, A. & Deubener, J. Unravelling the effect of nano-heterogeneity on the viscosity of silicate melts: implications for glass manufacturing and volcanic eruptions. J. Non. Cryst. Solids 545, 120248 (2020). Kleest, C., Webb, S. L. & Fanara, S. Rheology of melts from the Colli Albani Volcanic District (Italy): a case study. Contrib. Mineral. Petrol. 175, 82 (2020). Giordano, D. et al. Viscosity of Palmas-type magmas of the Paraná Magmatic Province (Rio Grande do Sul State, Brazil): implications for high-temperature silicic volcanism. Chem. Geol. 560, 119981 (2021). Bouhifd, M. A., Richet, P., Besson, P., Roskosz, M. & Ingrin, J.
Redox state, microstructure and viscosity of a partially crystallized basalt melt. Earth Planet. Sci. Lett. 218, 31–44 (2004). Cassetta, M. et al. Estimating the viscosity of volcanic melts from the vibrational properties of their parental glasses. Sci. Rep. 11, 13072. https://doi.org/10.1038/s41598-021-92407-5 (2021). Barone, G. et al. Nanoscale surface modification of Mt. Etna volcanic ashes. Geochim. Cosmochim. Acta 174, 70–84 (2016). Lerner, A. H. et al. Improving the reliability of Fe- and S- XANES measurements in silicate glasses: correcting beam damage and identifying Fe-oxide nanolites in hydrous and anhydrous melt inclusions. Chem. Geol. 586, 120610 (2021). Hughes, E. C. et al. High spatial resolution analysis of the iron oxidation state in silicate glasses using the electron probe. Am. Mineral. 103 (2018). Galoisy, L. & Calas, G. The unique speciation of iron in calc-alkaline obsidians. Chem. Geol. 559, 119925. https://doi.org/10.1016/j.chemgeo.2020.119925 (2020). Larre, C. et al. Particular H2O dissolution mechanism in iron-rich melt: application to Martian basaltic melt genesis. J. Raman Spectrosc. 51, 493–507 (2020). Allabar, A., Gross, E. S. & Nowak, M. The effect of initial H2O concentration on decompression-induced phase separation and degassing of hydrous phonolitic melt. Contrib. Mineral. Petrol. 2, 1–19 (2020). Tacchetto, T. et al. Pre-nucleation geochemical heterogeneity within glassy anatectic inclusions and the role of water in glass preservation. Contrib. Mineral. Petrol. 176, 1–19 (2021). Matsumoto, K. & Geshi, N. Shallow crystallization of eruptive magma inferred from volcanic ash microtextures: a case study of the 2018 eruption of Shinmoedake volcano, Japan. Bull. Volcanol. 83, 1–14. https://doi.org/10.1007/s00445-021-01451-6 (2021). Kennedy, E., Sari, B. & Scott, M. C. Chemical and structural alterations in the amorphous structure of obsidian due to nanolites. Microsc. Microanal. 28, 1–7. https://doi.org/10.1017/s1431927621013957 (2022). Knafelc, J. et al. Havre 2012 pink pumice is evidence of a short-lived, deep-sea, magnetite nanolite-driven explosive eruption. Commun. Earth Environ. 3, 1–11 (2022). Di Genova, D. et al. In situ observation of nanolite growth in volcanic melt: a driving force for explosive eruptions. Sci. Adv. 6, 1–13 (2020). Di Genova, D., Caracciolo, A. & Kolzenburg, S. Measuring the degree of "nanotilization" of volcanic glasses: understanding syn-eruptive processes recorded in melt inclusions. Lithos 318–319, 209–218 (2018). Cáceres, F. et al. Can nanolites enhance eruption explosivity? Geology 48, 1–5 (2020). Mujin, M. & Nakamura, M. A nanolite record of eruption style transition. Geology 42, 611–614 (2014). Mujin, M., Nakamura, M. & Miyake, A. Eruption style and crystal size distributions: crystallization of groundmass nanolites in the 2011 Shinmoedake eruption. Am. Mineral 102, 2367–2380 (2017). Geissman, J. W., Newberry, N. G. & Peacor, D. R. Discrete single-domain and pseudo-single-domain titanomagnetite particles in silicic glass of an ash-flow tuff. Can. J. Earth Sci. 20, 334–338 (1983). Schlinger, C. M. & Smith, R. M. Superparamagnetism in volcanic glasses of the KBS Tuff: transmission electron microscopy and magnetic behavior. Geophys. Res. Lett. 13, 729–732 (1986). Schlinger, C. M., Smith, R. M. & Veblen, D. R. Geologic origin of magnetic volcanic glasses in the KBS tuff. Geology 14, 959–962 (1986). Schlinger, C. M., Rosenbaum, J. G. & Veblen, D. R.
Fe-oxide microcrystals in welded tuff from southern Nevada; origin of remanence carriers by precipitation in volcanic glass. Geology 16, 556–559 (1988). Schlinger, C. M., Griscom, D. L., Papaefthymiou, G. C. & Veblen, D. R. The nature of magnetic single domains in volcanic glasses of the KBS Tuff. J. Geophys. Res. 93, 9137–9156 (1988). Eick, P. M. & Schlinger, C. M. The use of magnetic susceptibility and its frequency dependence for delineation of a magnetic stratigraphy in ash-flow tuffs. Geophys. Res. Lett. 17, 783–786 (1990). Schlinger, C. M., Veblen, D. R. & Rosenbaum, J. G. Magnetism and magnetic mineralogy of ash flow tuffs from Yucca Mountain, Nevada. J. Geophys. Res. 96, 6035 (1991). Stevenson, R. J., Dingwell, D. B., Webb, S. L. & Bagdassarov, N. S. The equivalence of enthalpy and shear stress relaxation in rhyolitic obsidians and quantification of the liquid-glass transition in volcanic processes. J. Volcanol. Geotherm. Res. 68, 297–306 (1995). Sharp, T. G., Stevenson, R. J. & Dingwell, D. B. Microlites and 'nanolites' in rhyolitic glass: microstructural and chemical characterization. Bull. Volcanol. 57, 631–640 (1996). Stevenson, R. J., Dingwell, D. B., Bagdassarov, N. S. & Manley, C. R. Measurement and implication of 'effective' viscosity for rhyolite flow emplacement. Bull. Volcanol. 63, 227–237 (2001). Platz, T., Cronin, S. J., Smith, I. E. M., Turner, M. B. & Stewart, R. B. Improving the reliability of microprobe-based analyses of andesitic glasses for tephra correlation. The Holocene 17, 573–583 (2007). Seaman, S. J., Dyar, M. D. & Marinkovic, N. The effects of heterogeneity in magma water concentration on the development of flow banding and spherulites in rhyolitic lava. J. Volcanol. Geotherm. Res. 183, 157–169 (2009). Burgess, K. D. et al. Submicrometer-scale spatial heterogeneity in silicate glasses using aberration-corrected scanning transmission electron microscopy. Am. Mineral. 101, 2677–2688 (2016). Shea, T. Bubble nucleation in magmas: a dominantly heterogeneous process? J. Volcanol. Geotherm. Res. 343, 155–170 (2017). Di Genova, D. et al. Effect of iron and nanolites on Raman spectra of volcanic glasses: reassessment of existing strategies to estimate the water content. Chem. Geol. 475, 76–86 (2017). Colombier, M. et al. Textural evolution of magma during the 9.4-ka trachytic explosive eruption at Kilian Volcano, Chaîne des Puys, France. Bull. Volcanol. 79, 24 (2017). Schiavi, F. et al. Water quantification in silicate glasses by Raman spectroscopy: correcting for the effects of confocality, density and ferric iron. Chem. Geol. 483, 312–331 (2018). Ovalle, J. T. et al. Formation of massive iron deposits linked to explosive volcanic eruptions. Sci. Rep. 8, 1–11 (2018). Wilding, M. et al. Exploring the structure of glass-forming liquids using high energy X-ray diffraction, containerless methodology and molecular dynamics simulation. J. Non-Crystalline Solids X 3, 100027 (2019). Liedl, A. et al. A 3D imaging textural characterization of pyroclastic products from the 1538 AD Monte Nuovo eruption (Campi Flegrei, Italy). Lithos 340–341, 316–331 (2019). Hughes, E. C. et al. Low analytical totals in EPMA of hydrous silicate glass due to sub-surface charging: obtaining accurate volatiles by difference. Chem. Geol. 505, 48–56 (2019). Morrison, A. A. et al.
Rheological investigation of lunar highland and mare impact melt simulants. Icarus 317, 307–323 (2019). Castilla, S. C. et al. Pre-eruptive conditions and pyroclastic emplacement of the last known vulcanian eruption of Azufral Volcano, SW Colombia. J. South Am. Earth Sci. 91, 372–386 (2019). Mujin, M. & Nakamura, M. Late-stage groundmass differentiation as a record of magma stagnation, fragmentation, and rewelding. Bull. Volcanol. 82, 48 (2020). Hughes, E. C. et al. The microanalysis of iron and sulphur oxidation states in silicate glass – understanding the effects of beam damage. in IOP Conference Series: Materials Science and Engineering 891 (2020). Al-Mukadam, R., Di Genova, D., Bornhöft, H. & Deubener, J. High rate calorimetry derived viscosity of oxide melts prone to crystallization. J. Non. Cryst. Solids 536, 119992 (2020). Le Losq, C., Moretti, R., Oppenheimer, C., Baudelet, F. & Neuville, D. R. In situ XANES study of the influence of varying temperature and oxygen fugacity on iron oxidation state and coordination in a phonolitic melt. Contrib. Mineral. Petrol. 4, 1–13 (2020). Le Losq, C., Cicconi, M. R. & Neuville, D. R. Iron in silicate glasses and melts: implications for volcanological processes. ESSOAr 1–29. https://doi.org/10.1002/essoar.10503261.1 (2020). González-García, D., Giordano, D., Russell, J. K. & Dingwell, D. B. A Raman spectroscopic tool to estimate chemical composition of natural volcanic glasses. Chem. Geol. 556, 119819 (2020). Blundy, J. D. et al. Effect of redox on Fe-Mg-Mn exchange between olivine and melt and an oxybarometer for basalts. Contrib. Mineral. Petrol. 175 (2020). Giordano, D. et al. Raman spectroscopy from laboratory and proximal to remote sensing: a tool for the volcanological sciences. Remote Sens. 12, 805 (2020). Lormand, C. et al. Slow ascent of unusually hot intermediate magmas triggering Strombolian to sub-Plinian eruptions. J. Petrol. 61, egaa077 (2020). Colombier, M. et al. Rheological change and degassing during a trachytic vulcanian eruption at Kilian Volcano, Chaîne des Puys, France. Bull. Volcanol. 82, 463–463 (2020). Burgisser, A., Arbaret, L., Martel, C., Forien, M. & Colombier, M. The role of oxides in the shallow vesiculation of ascending magmas. J. Volcanol. Geotherm. Res. 406, 107072 (2020). Romano, C. et al. Modelling and physico-chemical constraints to the 4.5 ka Agnano-Monte Spina Plinian eruption (Campi Flegrei, Italy). Chem. Geol. 532, 119301 (2020). Knafelc, J., Bryan, S. E., Gust, D. & Cathey, H. E. Defining pre-eruptive conditions of the Havre 2012 submarine rhyolite eruption using crystal archives. Front. Earth Sci. 8, 310 (2020). Sahagian, D. & Carley, T. L. Explosive volcanic eruptions and spinodal decomposition: a different approach to deciphering the tiny bubble paradox. Geochem. Geophys. Geosyst. 21, 1–9. https://doi.org/10.1029/2019GC008898 (2020). Buono, G. et al. Dynamics of degassing in evolved alkaline magmas: petrological, experimental and theoretical insights. Earth Sci. Rev. 211, 103402 (2020). Samaniego, P. et al. Linking magmatic processes and magma chemistry during the post-glacial to recent explosive eruptions of Ubinas volcano (southern Peru). J. Volcanol. Geotherm. Res. 407, 107095 (2020). Giuliani, L. et al. Evolution of textures, crystal size distributions and growth rates of plagioclase, clinopyroxene and spinel crystallized at variable cooling rates from a mid-ocean ridge basaltic melt. Earth Sci. Rev. 204, 103165 (2020). Cáceres, F. et al.
From melt to crystals: the effects of cooling on Fe-Ti oxide nanolites crystallisation and melt polymerisation at oxidising conditions. Chem. Geol. 563, 120057 (2021). González-García, D. et al. Retrieving dissolved H2O content from micro-Raman spectroscopy on nanolitized silicic glasses: application to volcanic products of the Paraná Magmatic Province, Brazil. Chem. Geol. 567, 120058 (2021). Stabile, P. et al. The effect of iron and alkali on the nanocrystal-free viscosity of volcanic melts: a combined Raman spectroscopy and DSC study. Chem. Geol. 559, 119991 (2021). Nienhuis, E. T., Tuheen, M., Du, J. & McCloy, J. S. In situ pair distribution function analysis of crystallizing Fe-silicate melts. J. Mater. Sci. 56, 5637–5657 (2021). Rose-Koga, E. F. et al. Silicate melt inclusions in the new millennium: a review of recommended practices for preparation, analysis, and data presentation. Chem. Geol. 570, 120145 (2021). Rotolo, S. G. et al. Volcanological evolution of Pantelleria Island (Strait of Sicily) peralkaline volcano: a review. Comptes Rendus Géoscience—Sciences de la Planète 353, 111–132 (2021). Cabié, M., Neisius, T. & Blanc, W. Combined FIB/SEM tomography and TEM analysis to characterize high aspect ratio Mg-silicate particles inside silica-based optical fibres. Mater. Charact. 178, 111261 (2021). Liu, E. J. Magma behaving brittly. Nat. Geosci. 14, 108–181. https://doi.org/10.1038/s41561-021-00724-1 (2021). Haag, M. B. et al. Multi-proxy case study of a Neoproterozoic rhyolite flow in southernmost Brazil: Emplacement mechanisms and implications for ancient felsic lavas. J. South Am. Earth Sci. 107, 102982 (2021). Hajimirza, S., Gonnermann, H. M. & Gardner, J. E. Reconciling bubble nucleation in explosive eruptions with geospeedometers. Nat. Commun. 12, 1–8 (2021). Giuliani, L. et al. Crystal-chemical variations of spinel, clinopyroxene, and plagioclase in MORB basaltic melt induced by continuous cooling. Chem. Geol. 594, 120765 (2022). Pereira, L. et al. A feedback mechanism between crystals and bubbles in a RuO2-bearing melt. J. Non. Cryst. Solids 582, 121456 (2022). Yoshida, K., Tamura, Y. & Ono, S. Variety of the drift pumice clasts from the 2021 Fukutoku-Oka-no-Ba eruption, Japan. Isl. Arc 31, 1–17. https://doi.org/10.1111/iar.12441 (2022). Pistone, M., Formo, E., Whittington, A. G., Herbst, T. & Cottrell, E. Direct nanoscale observations of degassing-induced crystallisation in felsic magmas. Contrib. Mineral. Petrol. 177, 38 (2022). Jones, T. J., Cashman, K. V., Liu, E. J., Rust, A. C. & Scheu, B. Magma fragmentation: a perspective on emerging topics and future directions. Bull. Volcanol. 84, 45 (2022). Kurokawa, A. K., Miwa, T. & Ishibashi, H. Aging in magma rheology. Sci. Rep. 12, 10015 (2022). Okumura, S. H., Mujin, M., Tsuchiyama, A. & Miyake, A. 3D crystal size distributions of pyroxene nanolites from nano X-ray computed tomography: improved correction of crystal size distributions from CSDCorrections for magma ascent dynamics in conduits. Am. Mineral 107, 1766–1778 (2022). Jordanova, D. et al. A detailed magnetic record of Pleistocene climate and distal ash dispersal during the last 800 kyrs – The Suhia Kladenetz quarry loess-paleosol sequence near Pleven (Bulgaria). Glob. Planet. Change 214, 103840 (2022). Vigliotti, L., Bilardello, D., Winkler, A. & Del Carlo, P. Rock magnetic fingerprint of Mt Etna volcanic ash. Geophys. J. Int. 231, 749–769 (2022). Dubosq, R. et al. Bubbles and atom clusters in rock melts: a chicken and egg problem. J. Volcanol. Geotherm. Res.
428, 107574 (2022). Jones, T. J. et al. Inflated pyroclasts in proximal fallout deposits reveal abrupt transitions in eruption behaviour. Nat. Commun. 13, 1–12 (2022). Branca, S. & Del Carlo, P. Types of eruptions of Etna volcano AD 1670-2003: Implications for short-term eruptive behaviour. Bull. Volcanol. 67, 732–742 (2005). Corsaro, R. A. et al. Monitoring the December 2015 summit eruptions of Mt. Etna (Italy): Implications on eruptive dynamics. J. Volcanol. Geotherm. Res. 341, 53–69 (2017). Polacci, M., Corsaro, R. A. & Andronico, D. Coupled textural and compositional characterization of basaltic scoria: Insights into the transition from Strombolian to fire fountain activity at Mount Etna, Italy. Geology 34, 201–204 (2006). Andronico, D., Cristaldi, A., Del Carlo, P. & Taddeucci, J. Shifting styles of basaltic explosive activity during the 2002–03 eruption of Mt. Etna, Italy. J. Volcanol. Geotherm. Res. 180, 110–122 (2009). Alparone, S., Andronico, D., Lodato, L. & Sgroi, T. Relationship between tremor and volcanic activity during the Southeast Crater eruption on Mount Etna in early 2000. J. Geophys. Res. 108, 2241 (2003). Andronico, D., Cristaldi, A. & Scollo, S. The 4-5 September 2007 lava fountain at South-East Crater of Mt Etna, Italy. J. Volcanol. Geotherm. Res. 173, 325–328 (2008). Barsotti, S. et al. Quantitative assessment of volcanic ash hazards for health and infrastructure at Mt. Etna (Italy) by numerical simulation. J. Volcanol. Geotherm. Res. 192, 85–96 (2010). Calvari, S., Cannavò, F., Bonaccorso, A., Spampinato, L. & Pellegrino, A. G. Paroxysmal explosions, lava fountains and ash plumes at Etna Volcano: eruptive processes and hazard implications. Front. Earth Sci. 6, 107 (2018). Wilson, L., Parfitt, E. A. & Head, J. W. Explosive volcanic eruptions—VIII. The role of magma recycling in controlling the behaviour of Hawaiian‐style lava fountains. Geophys. J. Int. 121, 215–225 (1995). Giuffrida, M., Viccaro, M. & Ottolini, L. Ultrafast syn-eruptive degassing and ascent trigger high-energy basic eruptions. Sci. Rep. 8, 147 (2018). Calvari, S. et al. Lava effusion—a slow fuse for paroxysms at Stromboli volcano? Earth Planet. Sci. Lett. 301, 317–323 (2011). Carbone, D., Zuccarello, L., Messina, A., Scollo, S. & Rymer, H. Balancing bulk gas accumulation and gas output before and during lava fountaining episodes at Mt. Etna. Sci. Rep. 5, 1–11 (2015). Calvari, S. Multidisciplinary approach yields insight into Mt. Etna eruption. Eos (Washington, DC) 82, 653–656 (2001). La Spina, G. et al. Explosivity of basaltic lava fountains is controlled by magma rheology, ascent rate and outgassing. Earth Planet. Sci. Lett. 1, 116658 (2021). James, M. R., Lane, S. J. & Chouet, B. Gas slug ascent through changes in conduit diameter: laboratory insights into a volcano-seismic source process in low-viscosity magmas. J. Geophys. Res. 111, B05201 (2006). Chouet, B., Dawson, P. & Martini, M. Shallow-conduit dynamics at Stromboli Volcano, Italy, imaged from waveform inversions. in Fluid Motions in Volcanic Conduits: A Source of Seismic and Acoustic Signals (Geological Society of London, 2008). https://doi.org/10.1144/SP307.5. Bertagnini, A., Di Roberto, A. & Pompilio, M. Paroxysmal activity at Stromboli: lessons from the past. Bull. Volcanol. 73, 1229–1243 (2011). Barberi, F., Rosi, M. & Sodi, A. Volcanic hazard assessment at Stromboli based on review of historical data. Acta Vulcanol 3, 173–187 (1993). Misiti, V. et al. Viscosity of high-K basalt from the 5th April 2003 Stromboli paroxysmal explosion. 
Chem. Geol. 260, 278–285 (2009). Giordano, G. & De Astis, G. The summer 2019 basaltic Vulcanian eruptions (paroxysms) of Stromboli. Bull. Volcanol. 83, 1 (2021). Viccaro, M. et al. Shallow conduit dynamics fuel the unexpected paroxysms of Stromboli volcano during the summer 2019. Sci. Rep. 11, 1–15 (2021). Andronico, D. et al. Uncovering the eruptive patterns of the 2019 double paroxysm eruption crisis of Stromboli volcano. Nat. Commun. 12, 1–14. https://doi.org/10.1038/s41467-021-24420-1 (2021). Métrich, N., Bertagnini, A. & Pistolesi, M. Paroxysms at Stromboli Volcano (Italy): source, genesis and dynamics. Front. Earth Sci. 9, 1–17 (2021). Pichavant, M., Di Carlo, I., Le Gac, Y., Rotolo, S. G. & Scaillet, B. Experimental constraints on the deep magma feeding system at Stromboli Volcano, Italy. J. Petrol. 50, 601–624 (2009). Le Gall, N. & Pichavant, M. Experimental simulation of bubble nucleation and magma ascent in basaltic systems: implications for Stromboli volcano. Am. Mineral. 101, 1967–1985 (2016). Cimarelli, C., Di Traglia, F. & Taddeucci, J. Basaltic scoria textures from a zoned conduit as precursors to violent Strombolian activity. Geology 38, 439–442 (2010). Oppenheimer, J. et al. Analogue experiments on the rise of large bubbles through a solids-rich suspension: a "weak plug" model for Strombolian eruptions. Earth Planet. Sci. Lett. 531, 115931 (2020). Mauro, J. C., Yue, Y. Z., Ellison, A. J., Gupta, P. K. & Allan, D. C. Viscosity of glass-forming liquids. Proc. Natl. Acad. Sci. USA. 106, 19780–19784 (2009). Zheng, Q., Mauro, J. C., Ellison, A. J., Potuzak, M. & Yue, Y. Universality of the high-temperature viscosity limit of silicate liquids. Phys. Rev. B Condens. Matter Mater. Phys. 83, 13–15 (2011). Giordano, D. et al. The rheological evolution of alkaline Vesuvius magmas and comparison with alkaline series from the Phlegrean Fields, Etna, Stromboli and Teide. Geochim. Cosmochim. Acta 73, 6613–6630 (2009). Di Muro, A. et al. Micro-Raman determination of iron redox state in dry natural glasses: application to peralkaline rhyolites and basalts. Chem. Geol. 259, 78–88 (2009). Di Genova, D. et al. Raman spectra of Martian glass analogues: a tool to approximate their chemical composition. J. Geophys. Res. Planets 121, 740–752 (2016). McMillan, P. F. Structural studies of silicate glasses and melts-applications and limitations of Raman spectroscopy. Am. Mineral. 69, 622–644 (1984). de Faria, D. L. A., Silva, S. V. & de Oliveira, M. T. Raman microspectroscopy of some iron oxides and oxyhydroxides. J. Raman Spectrosc. 28, 873–878 (1997). Zandona, A., Groß, C. B. M., Rüdinger, B. & Deubener, J. A threshold heating rate for single-stage heat treatments in glass-ceramics containing seed formers. J. Am. Ceram. Soc. 104, 4433–4444 (2021). Bhattacharyya, S. et al. Direct evidence of Al-rich layers around nanosized ZrTiO4 in glass: putting the role of nucleation agents in perspective. Cryst. Growth Des. 10, 379–385 (2010). Kleebusch, E., Patzig, C., Höche, T. & Rüssel, C. The evidence of phase separation droplets in the crystallization process of a Li2O-Al2O3-SiO2 glass with TiO2 as nucleating agent – An X-ray diffraction and (S)TEM-study supported by EDX-analysis. Ceram. Int. 44, 2919–2926 (2018). Kleebusch, E. et al. The formation of nanocrystalline ZrO2 nuclei in a Li2O-Al2O3-SiO2 glass – a combined XANES and TEM study. Sci. Rep. 7, 1–12 (2017). Fotheringham, U., Wurth, R. & Rüssel, C. Thermal analyses to assess diffusion kinetics in the nano-sized interspaces between the growing crystals of a glass ceramics. Thermochim.
Acta 522, 144–150 (2011). Höche, T. et al. Temporal evolution of diffusion barriers surrounding ZrTiO4 nuclei in lithia aluminosilicate glass-ceramics. Cryst. Growth Des. 12, 1556–1563 (2012). Mitchell, A. L. et al. Nanoscale microstructure and chemistry of transparent gahnite glass-ceramics revealed by atom probe tomography. Scr. Mater. 203, 114110 (2021). Zandona, A., Patzig, C., Rüdinger, B., Hochrein, O. & Deubener, J. TiO2(B) nanocrystals in Ti-doped lithium aluminosilicate glasses. J. Non-Crystalline Solids X 2, 100025 (2019). Conte, A. M., Perinelli, C. & Trigila, R. Cooling kinetics experiments on different Stromboli lavas: effects on crystal morphologies and phases composition. J. Volcanol. Geotherm. Res. 155, 179–200 (2006). Ryerson, F. J. & Watson, E. B. Rutile saturation in magmas: implications for Ti-Nb-Ta depletion in island-arc basalts. Earth Planet. Sci. Lett. 86, 225–239 (1987). Ayers, J. C. et al. The solubility of titanite in silicate melt determined from growth and dissolution experiments. Contrib. Mineral. Petrol. 177, 37 (2022). Gaetani, G. A., Asimow, P. D. & Stolper, E. M. A model for rutile saturation in silicate melts with applications to eclogite partial melting in subduction zones and mantle plumes. Earth Planet. Sci. Lett. 272, 720–729 (2008). Andreeva, O. A. et al. Silicate liquid immiscibility as a result of Fenner-type crystal fractionation of Wangtian'e Tholeiitic Melts, Northeast China. Petrology 28, 357–373 (2020). Hill, R. & Roeder, P. The crystallization of spinel from basaltic liquid as a function of oxygen fugacity. J. Geol. 82, 709–729 (1974). Toplis, M. J. & Carroll, M. R. An experimental study of the influence of oxygen fugacity on Fe-Ti oxide stability, phase relations, and mineral-melt equilibria in ferro-basaltic systems. J. Petrol. 36, 1137–1170 (1995). Hrma, P. Crystallization during processing of nuclear waste glass. J. Non. Cryst. Solids 356, 3019–3025 (2010). Zandona, A. et al. Glass-forming ability and ZrO2 saturation limits in the magnesium aluminosilicate system. Ceram. Int. 48, 8433–8439 (2022). Zandona, A. et al. Glass formation and devitrification behavior of alkali (Li, Na) aluminosilicate melts containing TiO2. J. Non. Cryst. Solids 582, 121448 (2022). Hurwitz, S. & Navon, O. Bubble nucleation in rhyolitic melts: experiments at high pressure, temperature, and water content. Earth Planet. Sci. Lett. 122, 267–280 (1994). Mangan, M. T., Sisson, T. W. & Hankins, W. B. Decompression experiments identify kinetic controls on explosive silicic eruptions. Geophys. Res. Lett. 31, 1–5 (2004). Larsen, J. F. Heterogeneous bubble nucleation and disequilibrium H2O exsolution in Vesuvius K-phonolite melts. J. Volcanol. Geotherm. Res. 175, 278–288 (2008). Knipping, J. L., Webster, J. D., Simon, A. C. & Holtz, F. Accumulation of magnetite by flotation on bubbles during decompression of silicate magma. Sci. Rep. 9, 3852 (2019). Pleše, P. et al. Production and detachment of oxide crystal shells on bubble walls during experimental vesiculation of andesitic magmas. Contrib. Mineral. Petrol. 174, 21 (2019). La Spina, G., Burton, M. R., de' Michieli Vitturi, M. & Arzilli, F. Role of syn-eruptive plagioclase disequilibrium crystallization in basaltic magma ascent dynamics. Nat. Commun. 7, 13402 (2016). Arzilli, F. et al. Magma fragmentation in highly explosive basaltic eruptions induced by rapid crystallization. Nat. Geosci. 12, 1023–1028 (2019). Polacci, M. et al. The role of syn-eruptive vesiculation on explosive basaltic activity at Mt. Etna, Italy.
J. Volcanol. Geotherm. Res. 179, 265–269 (2009). Colombier, M. et al. Degassing and gas percolation in basaltic magmas. Earth Planet. Sci. Lett. 573, 117134 (2021). Namiki, A. & Manga, M. Transition between fragmentation and permeable outgassing of low viscosity magmas. J. Volcanol. Geotherm. Res. 169, 48–60 (2008). Valdivia, P., Marshall, A. A., Brand, B. D., Manga, M. & Huber, C. Mafic explosive volcanism at Llaima Volcano: 3D x-ray microtomography reconstruction of pyroclasts to constrain shallow conduit processes. Bull. Volcanol. 84, 2. https://doi.org/10.1007/s00445-021-01514-8 (2022). Mueller, S., Melnik, O., Spieler, O., Scheu, B. & Dingwell, D. B. Permeability and degassing of dome lavas undergoing rapid decompression: an experimental determination. Bull. Volcanol. 67, 526–538 (2005). Bamber, E. C. et al. Pre- and syn-eruptive conditions of a basaltic Plinian eruption at Masaya Volcano, Nicaragua: The Masaya Triple Layer (2.1 ka). J. Volcanol. Geotherm. Res. 392, 106761 (2020). Szramek, L. Mafic Plinian eruptions: is fast ascent required? J. Geophys. Res. Solid Earth 121, 7119–7136. https://doi.org/10.1002/2016JB013208 (2016). Coltelli, M., Del Carlo, P. & Vezzoli, L. Discovery of a Plinian basaltic eruption of Roman age at Etna volcano, Italy. Geology 26, 1095–1098 (1998). Costantini, L., Houghton, B. F. & Bonadonna, C. Constraints on eruption dynamics of basaltic explosive activity derived from chemical and microtextural study: the example of the Fontana Lapilli Plinian eruption, Nicaragua. J. Volcanol. Geotherm. Res. 189, 207–224 (2010). Rowe, M. C. et al. Tarawera 1886: an integrated review of volcanological and geochemical characteristics of a complex basaltic eruption. New Zeal. J. Geol. Geophys. 64, 296–319 (2021). Marshall, A. A. et al. The mafic Curacautín ignimbrite of Llaima volcano, Chile. J. Volcanol. Geotherm. Res. 421, 107418 (2022). Le Gall, N. et al. In situ quantification of crystallisation kinetics of plagioclase and clinopyroxene in basaltic magma: implications for lava flow. Earth Planet. Sci. Lett. 568, 117016 (2021). Arzilli, F. et al. Dendritic crystallization in hydrous basaltic magmas controls magma mobility within the Earth's crust. Nat. Commun. 13, 3354 (2022). Al-Mukadam, R., Götz, I. K., Stolpe, M. & Deubener, J. Viscosity of metallic glass-forming liquids based on Zr by fast-scanning calorimetry. Acta Mater. 221, 117370 (2021). Schawe, J. E. K. & Hess, K.-U. The kinetics of the glass transition of silicate glass measured by fast scanning calorimetry. Thermochim. Acta 677, 85–90 (2019). Behncke, B. & Neri, M. The July-August 2001 eruption of Mt. Etna (Sicily). Bull. Volcanol. 65, 461–476 (2003). Landi, P. et al. Magma dynamics during the 2007 Stromboli eruption (Aeolian Islands, Italy): mineralogical, geochemical and isotopic data. J. Volcanol. Geotherm. Res. 182, 255–268 (2009). Di Fiore, F., Vona, A., Kolzenburg, S., Mollo, S. & Romano, C. An extended rheological map of Pāhoehoe—'A'ā Transition. J. Geophys. Res. Solid Earth 126, 1–23 (2021). Di Fiore, F., Vona, A., Costa, A., Mollo, S. & Romano, C. Quantifying the influence of cooling and shear rate on the disequilibrium rheology of a trachybasaltic melt from Mt. Etna. Earth Planet. Sci. Lett. 594, 117725 (2022). Di Fiore, F. et al. Kinetic partitioning of major and trace cations between clinopyroxene and phonotephritic melt under convective stirring conditions: New insights into clinopyroxene sector zoning and concentric zoning. Chem. Geol. 584, 120531 (2021). Meerlender, G.
Viskositäts-Temperaturverhalten des Standardglases I der DGG. Glas. Ber. 47, 1–3 (1974). Deubener, J. et al. Viscosity, relaxation and elastic properties of photo-thermo-refractive glass. J. Non. Cryst. Solids 355, 126–131 (2009). Behrens, H. et al. Structural relaxation mechanisms in hydrous sodium borosilicate glasses. J. Non. Cryst. Solids 497, 30–39 (2018). Di Genova, D., Romano, C., Alletti, M., Misiti, V. & Scarlato, P. The effect of CO2 and H2O on Etna and Fondo Riccio (Phlegrean Fields) liquid viscosity, glass transition temperature and heat capacity. Chem. Geol. 377, 72–86 (2014). Douglas, R. W., Armstrong, W. L., Edward, J. & Hall, D. A penetration viscometer. Glas. Technol. 6, 52–55 (1965). Al-Mukadam, R., Zandona, A. & Deubener, J. Kinetic fragility of pure TeO2 glass. J. Non. Cryst. Solids 554, 1–6 (2021). Scarani, A. et al. Determination of cooling rates of glasses over four orders of magnitude. Contrib. Mineral. Petrol. 177, 1–17 (2022). Rustioni, G., Audetat, A. & Keppler, H. The composition of subduction zone fluids and the origin of the trace element enrichment in arc magmas. Contrib. Mineral. Petrol. 176, 51 (2021). Audétat, A., Miyajima, N., Wiesner, D. & Audinot, J.-N. Confirmation of slow Ti diffusion in quartz by diffusion couple experiments and evidence from natural samples. Geology 49, 963–967 (2021). Ohlhorst, S., Behrens, H. & Holtz, F. Compositional dependence of molar absorptivities of near-infrared OH- and H2O bands in rhyolitic to basaltic glasses. Chem. Geol. 174, 5–20 (2001).

A.S., F.D.F., A.V. and C.R. acknowledge the Grant of Excellence Departments, MIUR-Italy (ARTICOLO 1, COMMI 314–337, LEGGE 232/2016). D.D.G. and P.V. acknowledge funding by Deutsche Forschungsgemeinschaft (DFG) project DI 2751/2-1. A.Z. also acknowledges the DFG for funding his research through the Walter Benjamin Program, grant no. 448961237, ZA 1188/1-1. The TEM facility at Bayerisches Geoinstitut is supported by DFG grant INST 91/251-1 FUGG.

Dipartimento di Scienze, Università degli Studi Roma Tre, Largo San L. Murialdo 1, 00146, Rome, Italy: Alex Scarani, Fabrizio Di Fiore, Alessandro Vona & Claudia Romano
CNRS, CEMHTI UPR3079, University of Orléans, F-45071, Orléans, France: Alessio Zandonà
Bavarian Research Institute of Experimental Geochemistry and Geophysics (BGI), University of Bayreuth, Universitätsstraße 30, 95440, Bayreuth, Germany: Pedro Valdivia, Rizaldi Putra & Nobuyoshi Miyajima
Institute of Non-Metallic Materials, Clausthal University of Technology, Zehntnerstraße 2a, D-38678, Clausthal-Zellerfeld, Germany: Hansjörg Bornhöft & Joachim Deubener
Institute of Environmental Geology and Geoengineering (IGAG), National Research Council of Italy (CNR), Rome, Italy: Danilo Di Genova

Conceptualization: A.S., A.Z., F.D.F., A.V., D.D.G. Funding acquisition: A.Z., A.V., N.M., C.R., D.D.G. Investigation: A.S., A.Z., F.D.F., P.V., N.M., H.B., R.P. Methodology: A.S., A.Z., F.D.F., A.V., P.V., N.M., D.D.G. Resources: N.M., J.D., C.R. Supervision: D.D.G. and A.V. Visualization: A.S., A.Z., N.M., R.P. Writing—original draft: D.D.G., A.Z., A.S., A.V. Writing—review and editing: A.S., A.Z., F.D.F., P.V., N.M., H.B., A.V., J.D., C.R., D.D.G.

Correspondence to Alex Scarani or Alessio Zandonà. Communications Earth & Environment thanks the anonymous reviewers for their contribution to the peer review of this work. Primary Handling Editor: Joe Aslin.
Peer reviewer reports are available.

Scarani, A., Zandonà, A., Di Fiore, F. et al. A chemical threshold controls nanocrystallization and degassing behaviour in basalt magmas. Commun Earth Environ 3, 284 (2022). https://doi.org/10.1038/s43247-022-00615-2
5 is Prime But 7 is Not Prime in the Ring $\Z[\sqrt{2}]$

In the ring \[\Z[\sqrt{2}]=\{a+\sqrt{2}b \mid a, b \in \Z\},\] show that $5$ is a prime element but $7$ is not a prime element.

A Prime Ideal in the Ring $\Z[\sqrt{10}]$

Consider the ring \[\Z[\sqrt{10}]=\{a+b\sqrt{10} \mid a, b \in \Z\}\] and its ideal \[P=(2, \sqrt{10})=\{a+b\sqrt{10} \mid a, b \in \Z, 2|a\}.\] Show that $P$ is a prime ideal of the ring $\Z[\sqrt{10}]$.

Dimensions of Null Spaces of Similar Matrices are the Same

Suppose that $n\times n$ matrices $A$ and $B$ are similar. Then show that the nullity of $A$ is equal to the nullity of $B$. In other words, the dimension of the null space (kernel) $\calN(A)$ of $A$ is the same as the dimension of the null space $\calN(B)$ of $B$.

Group of $p$-Power Roots of 1 is Isomorphic to a Proper Quotient of Itself

Let $p$ be a prime number. Let \[G=\{z\in \C \mid z^{p^n}=1 \text{ for some positive integer } n\}\] be the group of $p$-power roots of $1$ in $\C$. Show that the map $\Psi:G\to G$ mapping $z$ to $z^p$ is a surjective homomorphism. Also deduce from this that $G$ is isomorphic to a proper quotient of $G$ itself.

If a Prime Ideal Contains No Nonzero Zero Divisors, then the Ring is an Integral Domain

Let $R$ be a commutative ring. Suppose that $P$ is a prime ideal of $R$ containing no nonzero zero divisors. Then show that the ring $R$ is an integral domain.

Use Lagrange's Theorem to Prove Fermat's Little Theorem

Use Lagrange's Theorem in the multiplicative group $(\Zmod{p})^{\times}$ to prove Fermat's Little Theorem: if $p$ is a prime number then $a^p \equiv a \pmod p$ for all $a \in \Z$.

Rotation Matrix in Space and its Determinant and Eigenvalues

For a real number $0\leq \theta \leq \pi$, we define the real $3\times 3$ matrix $A$ by \[A=\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}.\]
(a) Find the determinant of the matrix $A$.
(b) Show that $A$ is an orthogonal matrix.
(c) Find the eigenvalues of $A$.

Given Graphs of Characteristic Polynomials of Diagonalizable Matrices, Determine the Rank of Matrices

Let $A, B, C$ be $2\times 2$ diagonalizable matrices. The graphs of the characteristic polynomials of $A, B, C$ are shown below. The red graph is for $A$, the blue one for $B$, and the green one for $C$. From this information, determine the rank of the matrices $A, B,$ and $C$. [Figure: graphs of the characteristic polynomials]

Two Matrices with the Same Characteristic Polynomial. Diagonalize if Possible.

Let \[A=\begin{bmatrix} 1 & 3 & 3 \\ -3 & -5 & -3 \\ 3 & 3 & 1 \end{bmatrix} \text{ and } B=\begin{bmatrix} 2 & 4 & 3 \\ -4 & -6 & -3 \\ 3 & 3 & 1 \end{bmatrix}.\] For this problem, you may use the fact that both matrices have the same characteristic polynomial: \[p_A(\lambda)=p_B(\lambda)=-(\lambda-1)(\lambda+2)^2.\]
(a) Find all eigenvectors of $A$.
(b) Find all eigenvectors of $B$.
(c) Which matrix $A$ or $B$ is diagonalizable?
(d) Diagonalize the matrix stated in (c), i.e., find an invertible matrix $P$ and a diagonal matrix $D$ such that $A=PDP^{-1}$ or $B=PDP^{-1}$.
(Stanford University Linear Algebra Final Exam Problem)

Show that Two Fields are Equal: $\Q(\sqrt{2}, \sqrt{3})= \Q(\sqrt{2}+\sqrt{3})$

Show that the fields $\Q(\sqrt{2}+\sqrt{3})$ and $\Q(\sqrt{2}, \sqrt{3})$ are equal.

Find the Inverse Matrix of a Matrix With Fractions

Find the inverse matrix of the matrix \[A=\begin{bmatrix} \frac{2}{7} & \frac{3}{7} & \frac{6}{7} \\[6pt] \frac{6}{7} & \frac{2}{7} & -\frac{3}{7} \\[6pt] -\frac{3}{7} & \frac{6}{7} & -\frac{2}{7} \end{bmatrix}.\]

A Matrix Similar to a Diagonalizable Matrix is Also Diagonalizable

Let $A, B$ be matrices. Show that if $A$ is diagonalizable and if $B$ is similar to $A$, then $B$ is diagonalizable.
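For the last problem above, the key identity is short enough to sketch here (a hint rather than a full solution): if $A=QDQ^{-1}$ with $D$ diagonal and $B=S^{-1}AS$, then \[B=S^{-1}\left(QDQ^{-1}\right)S=\left(S^{-1}Q\right)D\left(S^{-1}Q\right)^{-1},\] so $B$ is diagonalized by the invertible matrix $S^{-1}Q$.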
If Every Nonidentity Element of a Group has Order 2, then it's an Abelian Group Let $G$ be a group. Suppose that the order of every nonidentity element of $G$ is $2$. Then show that $G$ is an abelian group.

In this post, we explain how to diagonalize a matrix if it is diagonalizable. As an example, we solve the following problem. Diagonalize the matrix \[A=\begin{bmatrix} 4 & -3 & -3 \\ 3 & -2 & -3 \\ -1 & 1 & 2 \end{bmatrix}\] by finding a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$. (Update 10/15/2017. A new example problem was added.)

Diagonalizable by an Orthogonal Matrix Implies a Symmetric Matrix Let $A$ be an $n\times n$ matrix with real number entries. Show that if $A$ is diagonalizable by an orthogonal matrix, then $A$ is a symmetric matrix.

Group Homomorphism, Conjugate, Center, and Abelian group Let $G$ be a group. We fix an element $x$ of $G$ and define a map \[ \Psi_x: G\to G\] by mapping $g\in G$ to $xgx^{-1} \in G$. Then prove the following. (a) The map $\Psi_x$ is a group homomorphism. (b) The map $\Psi_x=\id$ if and only if $x\in Z(G)$, where $Z(G)$ is the center of the group $G$. (c) The map $\Psi_y=\id$ for all $y\in G$ if and only if $G$ is an abelian group.

Group Homomorphism, Preimage, and Product of Groups Let $G, G'$ be groups and let $f:G \to G'$ be a group homomorphism. Let $H$ be a subgroup of $G$ and put $N=\ker(f)$. Then show that we have \[f^{-1}(f(H))=HN.\]

A Group Homomorphism and an Abelian Group Let $G$ be a group. Define a map $f:G \to G$ by sending each element $g \in G$ to its inverse $g^{-1} \in G$. Show that $G$ is an abelian group if and only if the map $f: G\to G$ is a group homomorphism.

Eigenvalues and their Algebraic Multiplicities of a Matrix with a Variable Determine all eigenvalues and their algebraic multiplicities of the matrix \[A=\begin{bmatrix} 1 & a & 1 \\ a & 1 & a \\ 1 & a & 1 \end{bmatrix},\] where $a$ is a real number.

Order of the Product of Two Elements in an Abelian Group Let $G$ be an abelian group with the identity element $1$. Let $a, b$ be elements of $G$ with order $m$ and $n$, respectively. If $m$ and $n$ are relatively prime, then show that the order of the element $ab$ is $mn$.
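As a quick numerical cross-check of the diagonalization example above (an editorial addition, using numpy in place of the post's hand calculation):

```python
# Verify S^{-1} A S = D for the matrix in the "how to diagonalize" post.
import numpy as np

A = np.array([[4., -3., -3.],
              [3., -2., -3.],
              [-1., 1., 2.]])

evals, S = np.linalg.eig(A)    # columns of S are eigenvectors of A
D = np.diag(evals)

# A has three linearly independent eigenvectors, so S is nonsingular
# and conjugation by S diagonalizes A.
assert np.allclose(np.linalg.inv(S) @ A @ S, D)
print(np.round(evals, 10))     # eigenvalues 2, 1, 1 (in some order)
```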
CommonCrawl
davidmoxey.uk Below is a complete list of publications in journals, books and conference proceedings. Where available, links are included to preprints or the printed article. Citation counts and other versions can be found on Google Scholar. Journal Articles · Book Chapters · Conference Papers · Dissertations

S. Xu, M. Rasouli, R. M. Kirby, D. Moxey and H. Sundar A geometrically informed algebraic multigrid preconditioned iterative approach for solving high-order finite element systems under review in SIAM J. Sci. Comput., January 2022. BibTeX Abstract @unpublished{xu-2022, title = {A geometrically informed algebraic multigrid preconditioned iterative approach for solving high-order finite element systems}, author = {Xu, S. and Rasouli, M. and Kirby, R. M. and Moxey, D. and Sundar, H.}, note = {under review in SIAM J. Sci. Comput.}, keywords = {journal} Algebraic multigrid (AMG) is conventionally applied in a black-box fashion, agnostic to the underlying geometry. In this work, we propose that using geometric information – when available – to assist with setting up the AMG hierarchy is beneficial, especially for solving linear systems resulting from high-order finite element discretizations. For geometric multigrid, it is known that using p-coarsening before h-coarsening can provide better scalability, but setting up p-coarsening is non-trivial in AMG. Our method, called geometrically informed algebraic multigrid (GIAMG), requires only minimal geometric information from the user and is able to set up a grid hierarchy that includes p-coarsening at the top grids. A major advantage of using p-coarsening with AMG – beyond the benefits known in the context of GMG – is the increased sparsification of coarse grid operators. We extensively evaluate GIAMG by testing on the 3D Helmholtz and incompressible Navier–Stokes operators, and demonstrate mesh-independent convergence and excellent parallel scalability. We also compare the performance of GIAMG with existing AMG packages, including Hypre and ML.
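To make the p-coarsening step described above concrete, here is a small illustrative sketch (not code from the paper): for a hierarchical modal basis, restriction to a lower polynomial order is just truncation of the high modes, and a coarse operator can then be formed Galerkin-style. The 1D setting, the orders and the random SPD operator are all assumptions for illustration, not GIAMG's actual setup.

```python
# Toy p-coarsening: truncate high modes (restriction), zero-pad (prolongation),
# and form the Galerkin coarse-grid operator A_c = R A P.
import numpy as np

Pf, Pc = 8, 4                     # fine and coarse polynomial orders
nf, nc = Pf + 1, Pc + 1           # number of 1D modes at each order

R = np.eye(nc, nf)                # restriction: keep the first nc modes
P = R.T                           # prolongation: zero-pad the high modes

rng = np.random.default_rng(0)
G = rng.standard_normal((nf, nf))
A = G @ G.T + nf * np.eye(nf)     # stand-in SPD "fine" operator

Ac = R @ A @ P                    # Galerkin coarse-grid operator
print(A.shape, "->", Ac.shape)    # (9, 9) -> (5, 5)
```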
J. Slaughter, D. Moxey and S. J. Sherwin Large eddy simulation of an inverted multi-element wing in ground effect under review in Flow Turbul. Combust., December 2022. BibTeX Abstract @unpublished{slaughter-2022, title = {Large eddy simulation of an inverted multi-element wing in ground effect}, author = {Slaughter, J. and Moxey, D. and Sherwin, S. J.}, note = {under review in Flow Turbul. Combust.}, Due to the proprietary nature of modern motorsport and Formula 1, current scientific literature lacks relevant studies and benchmarks that can be used to test and validate new methods. Due to the release of a free geometry - the Imperial Front Wing - we present a computational study of a multi-element aerofoil at a ride height of h/c = 0.36 and a Reynolds number of 2.2×10^5. A 0.16c slice of the Imperial Front Wing has been examined using high-order spectral/hp element methods. Time-averaged force data are presented, giving lift and drag coefficients of -8.33 and 0.17 respectively. Transient analysis of the force and surface-pressure data resulted in salient mode identification with respect to the transition mechanisms of each element. The mainplane and flap laminar separation were studied and the cross-spectral phase presented for the lower-frequency modes. At St = 40 an in-phase relationship was identified between the mainplane and flap laminar separation bubbles, whilst at St = 60 a distinct out-of-phase relationship was identified. Wake results, including wake-momentum deficit and turbulent kinetic energy plots, have been presented, showing wake meandering and subsequent breakdown due to a Kelvin-Helmholtz instability. These results, particularly the transition mechanisms, will allow for the construction of a data set to validate novel methods in this area.

J. Eichstädt, J. Peiró and D. Moxey Efficient vectorised kernels for unstructured high-order finite element fluid solvers on GPU architectures in two dimensions Comput. Phys. Commun., 284, p. 108624, 2023. 10.1016/j.cpc.2022.108624 BibTeX Abstract @article{eichstadt-2023, title = {Efficient vectorised kernels for unstructured high-order finite element fluid solvers on GPU architectures in two dimensions}, author = {Eichst\"adt, J. and Peir\'o, J. and Moxey, D.}, journal = {Comput. Phys. Commun.}, url = {https://www.sciencedirect.com/science/article/pii/S0010465522003435}, doi = {10.1016/j.cpc.2022.108624} We develop efficient kernels for elemental operators of matrix-free solvers of the Helmholtz equation, which are the core operations for incompressible Navier-Stokes solvers, for use on graphics-processing units (GPUs). Our primary concern in this work is the extension of matrix-free routines to efficiently evaluate this elliptic operator on regular and curvilinear triangular elements in a tensor-product manner. We investigate two types of efficient CUDA kernels for a range of polynomial orders and thus varying arithmetic intensities: the first maps each elemental operation to a CUDA thread for a completely vectorised kernel, whilst the second maps each element to a CUDA block for nested parallelism. Our results show that the first option is beneficial for elements with low polynomial order, whereas the second option is beneficial for elements of higher order. The crossover point between these two schemes for the hardware used in this study lies at around P=4-5, depending on element type. For both options, we highlight the importance of the layout of data structures, which necessitates the development of interleaved elemental data for vectorised kernels, and analyse the effect of selecting different memory spaces on the GPU. As the considered kernels are foremost memory-bandwidth bound, we develop kernels for curved elements that trade memory bandwidth against additional arithmetic operations, and demonstrate improved throughput in selected cases. We further compare our optimised CUDA kernels against optimised OpenACC kernels, to contrast the performance between a native and a portable programming model for GPUs.

F. F. Buscariolo, J. Hoessler, D. Moxey, A. Jassim, K. Gouder, J. Basler, Y. Murai, G. R. S. Assi and S. J. Sherwin Spectral/hp element simulation of flow past a Formula One front wing: validation against experiments J. Wind. Eng. Ind. Aerod., 221, p. 104832, 2022. 10.1016/j.jweia.2021.104832 BibTeX Abstract @article{buscariolo-2022, title = {Spectral/$hp$ element simulation of flow past a Formula One front wing: validation against experiments}, author = {Buscariolo, F. F. and Hoessler, J. and Moxey, D. and Jassim, A. and Gouder, K. and Basler, J. and Murai, Y. and Assi, G. R. S. and Sherwin, S. J.}, journal = {J. Wind. Eng. Ind.
Aerod.}, url = {https://arxiv.org/pdf/1909.06701}, doi = {10.1016/j.jweia.2021.104832} Emerging commercial and academic tools are regularly being applied to the design of road and race cars, but there currently are no well-established benchmark cases to study the aerodynamics of race car wings in ground effect. In this paper we propose a new test case, with a relatively complex geometry, supported by the availability of CAD model and experimental results. We refer to the test case as the Imperial Front Wing, originally based on the front wing and endplate design of the McLaren 17D race car. A comparison of different resolutions of a high-fidelity spectral/hp element simulation using an under-resolved DNS/implicit LES approach with fourth and fifth polynomial order is presented. The results demonstrate good correlation to both the wall-bounded streaklines obtained by oil flow visualization and experimental PIV results, correctly predicting key characteristics of the time-averaged flow structures, namely intensity, contours and locations. This study highlights the resolution requirements in capturing salient flow features arising from this type of challenging geometry, providing an interesting test case for both traditional and emerging high-fidelity simulations.

E. Laughton, V. Zala, A. Narayan, R. M. Kirby and D. Moxey Fast barycentric-based evaluation over spectral/hp elements J. Sci. Comp., 90, p. 78, 2022. 10.1007/s10915-021-01750-2 BibTeX Abstract @article{laughton-2022, title = {Fast barycentric-based evaluation over spectral/$hp$ elements}, author = {Laughton, E. and Zala, V. and Narayan, A. and Kirby, R. M. and Moxey, D.}, journal = {J. Sci. Comp.}, pages = {78}, url = {https://link.springer.com/content/pdf/10.1007/s10915-021-01750-2.pdf}, doi = {10.1007/s10915-021-01750-2} As the use of spectral/hp element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms associated with fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. With the expansion of the types of problems to which high-order methods are applied, and correspondingly the growth in types of numerical tasks accomplished through high-order methods, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many postprocessing applications such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We expand barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/hp element methods. We provide efficient algorithms for their implementations, and demonstrate their effectiveness using the spectral/hp element library Nektar++.
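The 1D building block that the paper above extends to tensor-product evaluation on 2D/3D elements is barycentric Lagrange interpolation (Berrut and Trefethen's formula). A minimal illustrative sketch, not code from Nektar++:

```python
# Barycentric Lagrange evaluation of an interpolant at an arbitrary point.
import numpy as np

def barycentric_weights(nodes):
    """w_j = 1 / prod_{k != j} (x_j - x_k)."""
    diff = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / diff.prod(axis=1)

def barycentric_eval(nodes, values, w, x):
    """Evaluate the interpolant of (nodes, values) at the point x."""
    d = x - nodes
    exact = np.isclose(d, 0.0)
    if exact.any():                 # x coincides with an interpolation node
        return values[exact][0]
    t = w / d
    return (t @ values) / t.sum()

# Interpolate f(x) = sin(pi x) on 7 Chebyshev points of [-1, 1].
nodes = np.cos(np.pi * np.arange(7) / 6)
vals = np.sin(np.pi * nodes)
w = barycentric_weights(nodes)
print(barycentric_eval(nodes, vals, w, 0.3), np.sin(np.pi * 0.3))
```

Once the weights are precomputed, each evaluation costs O(n) per direction, which is what makes the tensor-product extension to arbitrary-point evaluation in elements attractive.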
G. Mengaldo, D. Moxey, M. Turner, R. C. Moura, A. Jassim, M. Taylor, J. Peiró and S. J. Sherwin Industry-relevant implicit large-eddy simulation of a high-performance road car via spectral/hp element methods SIAM Review, (63), pp. 723–755, 2021. 10.1137/20M1345359 BibTeX Abstract @article{mengaldo-2020, title = {Industry-relevant implicit large-eddy simulation of a high-performance road car via spectral/$hp$ element methods}, author = {Mengaldo, G. and Moxey, D. and Turner, M. and Moura, R. C. and Jassim, A. and Taylor, M. and Peir\'o, J. and Sherwin, S. J.}, journal = {SIAM Review}, issue = {63}, doi = {10.1137/20M1345359}, url = {https://arxiv.org/pdf/2009.10178} We present a successful deployment of high-fidelity Large-Eddy Simulation (LES) technologies based on spectral/hp element methods to industrial flow problems, which are characterized by high Reynolds numbers and complex geometries. In particular, we describe the numerical methods, software development and steps that were required to perform the implicit LES of a real automotive car, namely the Elemental Rp1 model. To the best of the authors' knowledge, this simulation represents the first fifth-order accurate transient LES of an entire real car geometry. Moreover, this constitutes a key milestone towards considerably expanding the computational design envelope currently allowed in industry, where steady-state modelling remains the standard. To this end, a number of novel developments had to be made in order to overcome obstacles in mesh generation and solver technology to achieve this simulation, which we detail in this paper. The main objective is to present to the industrial and applied mathematics community a viable pathway to translate academic developments into industrial tools that can substantially advance the analysis and design capabilities of high-end engineering stakeholders. The novel developments and results were achieved using the academic-driven open-source framework Nektar++.

M. B. Lykkegaard, T. Dodwell and D. Moxey Accelerating uncertainty quantification of groundwater flow modelling using deep neural networks Comput. Meth. Appl. Mech. Eng., 383, p. 113895, 2021. 10.1016/j.cma.2021.113895 BibTeX Abstract @article{lykkegaard-2021, title = {Accelerating uncertainty quantification of groundwater flow modelling using deep neural networks}, author = {Lykkegaard, M. B. and Dodwell, T. and Moxey, D.}, journal = {Comput. Meth. Appl. Mech. Eng.}, doi = {10.1016/j.cma.2021.113895} This paper presents a novel algorithmic approach which fuses Markov Chain Monte Carlo (MCMC) and Machine Learning methods to accelerate the uncertainty quantification of fluid flow in a heterogeneous porous medium, such as groundwater flow. We formulate the governing mathematical model as a Bayesian inverse problem, permitting us to consider the model parameters as a random process with an underlying probability distribution. MCMC allows us to sample from this distribution given some real observations of the system, but it comes with some limitations: it can be prohibitively expensive when dealing with costly likelihood functions, subsequent samples are often highly correlated, and the standard Metropolis-Hastings algorithm suffers from the curse of dimensionality. This paper designs a Metropolis-Hastings proposal which exploits a deep neural network (DNN) approximation of the model, trained on samples from the prior parameter distribution, to significantly accelerate the Bayesian computations. The approach is developed by modifying a delayed acceptance (DA) model hierarchy, whereby, instead of merely screening proposals with a coarse model before passing them to the fine, proposals are generated by running short subchains using an inexpensive DNN approximation in conjunction with the preconditioned Crank-Nicolson (pCN) transition kernel. As a result, the proposal distribution inherits its dimension-independence from the pCN kernel and subsequent fine model proposals are less correlated. Using a simple adaptive error model, we estimate and correct for the bias of the DNN approximation with respect to the posterior distribution on-the-fly. The approach is tested on a synthetic example, using different DNNs trained on a varying number of prior samples. The results show that the cost of uncertainty quantification using our novel approach can be reduced by up to 75% compared to single-level pCN MCMC, depending on the precomputation cost and accuracy of the employed DNN.
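An illustrative sketch of the proposal strategy the abstract describes: a short pCN subchain is run on a cheap surrogate posterior, and the resulting state is screened once against the expensive model (delayed acceptance). The Gaussian log-likelihood stand-ins, the N(0, I) prior and all tuning values below are assumptions for illustration, not the paper's groundwater setup.

```python
# Delayed acceptance with pCN subchains on a surrogate (toy stand-in models).
import numpy as np

rng = np.random.default_rng(0)
d, beta, sub_steps = 4, 0.3, 10

def fine_loglik(u):            # expensive forward model (stand-in)
    return -np.sum((u - 1.0) ** 2)

def surr_loglik(u):            # cheap DNN-like approximation (stand-in)
    return -np.sum((u - 0.9) ** 2)

def pcn_step(u, loglik):
    """One Metropolis pCN step; the N(0, I) prior is built into the proposal,
    so acceptance uses only the likelihood ratio."""
    v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(d)
    return v if np.log(rng.random()) < loglik(v) - loglik(u) else u

u, samples = np.zeros(d), []
for _ in range(5000):
    v = u
    for _ in range(sub_steps):          # subchain driven by the surrogate
        v = pcn_step(v, surr_loglik)
    # Second-stage (delayed) acceptance corrects the surrogate bias with the
    # fine model; prior terms cancel since both posteriors share the prior.
    a = (fine_loglik(v) - fine_loglik(u)) - (surr_loglik(v) - surr_loglik(u))
    if np.log(rng.random()) < a:
        u = v
    samples.append(u)
print(np.mean(samples, axis=0))
```

The point of the construction is that the expensive model is evaluated once per subchain rather than once per step, while the pCN kernel keeps the proposal well-behaved as the parameter dimension grows.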
E. Laughton, G. Tabor and D. Moxey A comparison of interpolation techniques for non-conformal high-order discontinuous Galerkin methods title = {A comparison of interpolation techniques for non-conformal high-order discontinuous Galerkin methods}, author = {Laughton, E. and Tabor, G. and Moxey, D.}, url = {https://www.sciencedirect.com/science/article/pii/S0045782521001560/pdfft}, The capability to incorporate moving geometric features within models for complex simulations is a common requirement in many fields. The fluid mechanics within aeronautical applications, for example, routinely feature rotating (e.g. turbines, wheels and fan blades) or sliding components (e.g. in compressor or turbine cascade simulations). With an increasing trend towards the high-fidelity modelling of these cases, in particular combined with the use of high-order discontinuous Galerkin methods, there is therefore a requirement to understand how different numerical treatments of the interfaces between the static mesh and the sliding/rotating part impact on overall solution quality. In this article, we compare two different approaches to handle this non-conformal interface. The first is the so-called mortar approach, where flux integrals along edges are split according to the positioning of the non-conformal grid. The second is a lesser-documented point-to-point interpolation method, where the interior and exterior quantities for flux evaluations are interpolated from elements lying on the opposing side of the interface. Although the mortar approach has advantages in terms of its numerical properties, in that it preserves the local conservation properties of DG methods, in the context of complex 3D meshes it poses significant implementation difficulties which the point-to-point method handles more readily. In this article we examine the numerical properties of each method, focusing not only on observing convergence orders for smooth solutions, but also how each method performs in under-resolved simulations of linear and nonlinear hyperbolic problems, to inform the use of these methods in implicit large-eddy simulations.

Z. Yan, Y. Pan, G. Castiglioni, K. Hillewaert, J. Peiró, D. Moxey and S. J. Sherwin Nektar++: Design and implementation of an implicit spectral/hp element compressible flow solver using a Jacobian-free Newton Krylov approach Comput. Math. Appl., 81, pp. 351–372, 2021. 10.1016/j.camwa.2020.03.009 BibTeX Abstract @article{yan-2020, title = {\emph{Nektar++}: Design and implementation of an implicit spectral/hp element compressible flow solver using a Jacobian-free Newton Krylov approach}, author = {Yan, Z. and Pan, Y. and Castiglioni, G. and Hillewaert, K. and Peir\'o, J. and Moxey, D. and Sherwin, S. J.}, journal = {Comput. Math. Appl.}, doi = {10.1016/j.camwa.2020.03.009} At high Reynolds numbers the use of explicit-in-time compressible flow simulations with spectral/hp element discretisation can become significantly limited by the time step.
To alleviate this limitation we extend the capability of the spectral/hp element open-source software framework, Nektar++, to include an implicit discontinuous Galerkin compressible flow solver. The integration in time is carried out by a singly diagonally implicit Runge-Kutta method. The non-linear system arising from the implicit time integration is iteratively solved by the Jacobian-free Newton Krylov (JFNK) method. A favourable feature of the JFNK approach is its extensive use of the explicit operators available from the previous explicit-in-time implementation. The functionalities of different building blocks of the implicit solver are analyzed from the point of view of software design and placed in appropriate hierarchical levels in the C++ libraries. In the detailed implementation, the contributions of different parts of the solver to computational cost, memory consumption and programming complexity are also analyzed. A combination of analytical and numerical methods is adopted to simplify the programming complexity in forming the preconditioning matrix. The solver is verified and tested using cases such as manufactured compressible Poiseuille flow, Taylor-Green vortex, turbulent flow over a circular cylinder at Re = 3900 and shock wave boundary-layer interaction. The results show that the implicit solver can speed up the simulations while maintaining good simulation accuracy.

J. Marcon, G. Castiglioni, D. Moxey, S. J. Sherwin and J. Peiró rp-adaptation for compressible flows Int. J. Numer. Meth. Eng., 121 (23), pp. 5405–5425, 2020. 10.1002/nme.6529 BibTeX Abstract @article{marcon-2020, title = {$rp$-adaptation for compressible flows}, author = {Marcon, J. and Castiglioni, G. and Moxey, D. and Sherwin, S. J. and Peir\'o, J.}, journal = {Int. J. Numer. Meth. Eng.}, url = {https://onlinelibrary.wiley.com/doi/10.1002/nme.6529}, doi = {10.1002/nme.6529} We present an rp-adaptation strategy for the high-fidelity simulation of compressible inviscid flows with shocks. The mesh resolution in regions of flow discontinuities is increased by using a variational optimiser to r-adapt the mesh and cluster degrees of freedom there. In regions of smooth flow, we locally increase or decrease the local resolution through increasing or decreasing the polynomial order of the elements. This dual approach allows us to take advantage of the strengths of both methods for best computational performance, thereby reducing overall cost of the simulation. The adaptation workflow uses a sensor for both discontinuities and smooth regions that is cheap to calculate, but the framework is general and could be used in conjunction with other feature-based sensors or error estimators. We demonstrate this proof-of-concept using two geometries at transonic and supersonic flow regimes. The method was implemented in the open-source spectral/hp element framework Nektar++, and its dedicated high-order mesh generation tool NekMesh. The results show that the proposed rp-adaptation methodology is a reasonably cost-effective way of improving accuracy.

J. Eichstädt, M. Vymazal, D. Moxey and J. Peiró A comparison of the shared-memory parallel programming models OpenMP, OpenACC and Kokkos in the context of implicit solvers for high-order FEM title = {A comparison of the shared-memory parallel programming models OpenMP, OpenACC and Kokkos in the context of implicit solvers for high-order FEM}, author = {Eichst\"adt, J. and Vymazal, M. and Moxey, D.
and Peir\'o, J.}, doi = {10.1016/j.cpc.2020.107245}, url = {https://davidmoxey.uk/assets/pubs/2020-cpc-comparison.pdf} We consider the application of three performance-portable programming models in the context of a high-order spectral element, implicit time-stepping solver for the Navier-Stokes equations. We aim to evaluate whether the use of these models allows code developers to deliver high-performance solvers for computational fluid dynamics simulations that are capable of effectively utilising both many-core CPU and GPU architectures. Using the core elliptic solver for the Navier-Stokes equations as a benchmarking guide, we evaluate the performance of these models on a range of unstructured meshes and give guidelines for the translation of existing codebases and their data structures to these models.

D. Moxey, R. Amici and R. M. Kirby Efficient matrix-free high-order finite element evaluation for simplicial elements SIAM J. Sci. Comput., 42 (3), pp. C97–C123, 2020. 10.1137/19M1246523 BibTeX Abstract @article{moxey-2020b, title = {Efficient matrix-free high-order finite element evaluation for simplicial elements}, author = {Moxey, D. and Amici, R. and Kirby, R. M.}, journal = {SIAM J. Sci. Comput.}, pages = {C97-C123}, url = {https://davidmoxey.uk/assets/pubs/2020-vectorisation.pdf}, doi = {10.1137/19M1246523} With the gap between processor clock speeds and memory bandwidth speeds continuing to increase, the use of arithmetically intense schemes, such as high-order finite element methods, continues to be of considerable interest. In particular, the use of matrix-free formulations of finite element operators for tensor-product elements of quadrilaterals in two dimensions and hexahedra in three dimensions, in combination with single-instruction multiple-data (SIMD) instruction sets, is a well-studied topic at present for the efficient implicit solution of elliptic equations. However, a considerable limiting factor for this approach is the use of meshes comprising only quadrilaterals or hexahedra, the creation of which is still an open problem within the mesh generation community. In this article, we study the efficiency of high-order finite element operators for the Helmholtz equation with a focus on extending this approach to unstructured meshes of triangles, tetrahedra and prismatic elements using the spectral/hp element method and corresponding tensor-product bases for these element types. We show that although performance is naturally degraded when going from hexahedra to these simplicial elements, efficient implementations can still be obtained that are capable of attaining 50–70% of the peak FLOPS of processors with both AVX2 and AVX512 instruction sets.
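The matrix-free, tensor-product evaluation pattern underlying the paper above can be shown in a few lines for the simplest case, a 2D mass operator on a quadrilateral. This is an illustrative sketch with a Legendre/Gauss stand-in basis, not Nektar++'s implementation:

```python
# Sum-factorised (matrix-free) application of a 2D mass operator.
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

P, Q = 5, 6                                   # modes and quadrature points per direction
z, w = leggauss(Q)                            # 1D Gauss points and weights
B = legvander(z, P - 1)                       # B[q, p] = L_p(z_q), shape (Q, P)

def mass_apply(u):
    """Apply the elemental mass operator without forming the (P^2 x P^2) matrix."""
    u = u.reshape(P, P)
    Uq = B @ u @ B.T                          # forward transform to quadrature points
    Wq = (w[:, None] * w[None, :]) * Uq       # multiply by tensor-product weights
    return (B.T @ Wq @ B).ravel()             # backward (test-function) step

u = np.random.default_rng(1).standard_normal(P * P)
v = mass_apply(u)

# Cross-check against the assembled elemental matrix M = M1 (x) M1:
M1 = B.T @ np.diag(w) @ B                     # 1D mass matrix
assert np.allclose(v, np.kron(M1, M1) @ u)
```

The sum-factorised form costs O(P^3) per element in 2D instead of the O(P^4) of a dense matrix-vector product, and it is this kernel that the paper vectorises with SIMD across batches of elements, including the harder simplicial cases.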
D. Moxey, C. D. Cantwell, Y. Bao, A. Cassinelli, G. Castiglioni, S. Chun, E. Juda, E. Kazemi, K. Lackhove, J. Marcon, G. Mengaldo, D. Serson, M. Turner, H. Xu, J. Peiró, R. M. Kirby and S. J. Sherwin Nektar++: enhancing the capability and application of high-fidelity spectral/hp element methods @article{moxey-2020a, title = {\emph{Nektar++}: enhancing the capability and application of high-fidelity spectral/$hp$ element methods}, author = {Moxey, D. and Cantwell, C. D. and Bao, Y. and Cassinelli, A. and Castiglioni, G. and Chun, S. and Juda, E. and Kazemi, E. and Lackhove, K. and Marcon, J. and Mengaldo, G. and Serson, D. and Turner, M. and Xu, H. and Peir\'o, J. and Kirby, R. M. and Sherwin, S. J.}, Nektar++ is an open-source framework that provides a flexible, performant and scalable platform for the development of solvers for partial differential equations using the high-order spectral/hp element method. In particular, Nektar++ aims to overcome the complex implementation challenges that are often associated with high-order methods, thereby allowing them to be more readily used in a wide range of application areas. In this paper, we present the algorithmic, implementation and application developments associated with our Nektar++ version 5.0 release. We describe some of the key software and performance developments, including our strategies on parallel I/O, on in situ processing, the use of collective operations for exploiting current and emerging hardware, and interfaces to enable multi-solver coupling. Furthermore, we provide details on a newly developed Python interface that enables more rapid onboarding of new users unfamiliar with spectral/hp element methods, C++ and/or Nektar++. This release also incorporates a number of numerical method developments – in particular: the method of moving frames (MMF), which provides an additional approach for the simulation of equations on embedded curvilinear manifolds and domains; a means of handling spatially variable polynomial order; and a novel technique for quasi-3D simulations (which combine a 2D spectral element and 1D Fourier spectral method) to permit spatially-varying perturbations to the geometry in the homogeneous direction. Finally, we demonstrate the new application-level features provided in this release, namely: a facility for generating high-order curvilinear meshes called NekMesh; a novel AcousticSolver for aeroacoustic problems; our development of a 'thick' strip model for the modelling of fluid-structure interaction (FSI) problems in the context of vortex-induced vibrations (VIV). We conclude by commenting on some lessons learned and by discussing some directions for future code development and expansion.

M. Vymazal, D. Moxey, S. Sherwin, C. D. Cantwell and R. M. Kirby On weak Dirichlet boundary conditions for elliptic problems in the continuous Galerkin method J. Comput. Phys., 394, pp. 732–744, 2019. 10.1016/j.jcp.2019.05.021 BibTeX Abstract @article{vymazal-2019, title = {On weak Dirichlet boundary conditions for elliptic problems in the continuous Galerkin method}, author = {Vymazal, M. and Moxey, D. and Sherwin, S. and Cantwell, C. D. and Kirby, R. M.}, journal = {J. Comput. Phys.}, doi = {10.1016/j.jcp.2019.05.021}, url = {https://davidmoxey.uk/assets/pubs/2019-weak-bcs.pdf} We combine continuous and discontinuous Galerkin methods in the setting of a model diffusion problem. Starting from a hybrid discontinuous formulation, we replace element interiors by more general subsets of the computational domain - groups of elements that support a piecewise-polynomial continuous expansion. This step allows us to identify a new weak formulation of Dirichlet boundary condition in the continuous framework. We show that the boundary condition leads to a stable discretization with a single parameter insensitive to mesh size and polynomial order of the expansion. The robustness of the approach is demonstrated on several numerical examples.

A. Yakhot, Y. Feldman, D. Moxey, S. J. Sherwin and G. E. Karniadakis Turbulence in a localized puff in a pipe Flow Turbul. Combust., 103 (1), pp. 1–24, 2019.
10.1007/s10494-018-0002-8 BibTeX Abstract @article{yakhot-2019, title = {Turbulence in a localized puff in a pipe}, author = {Yakhot, A. and Feldman, Y. and Moxey, D. and Sherwin, S. J. and Karniadakis, G. E.}, journal = {Flow Turbul. Combust.}, url = {https://davidmoxey.uk/assets/pubs/2018-puff-turb.pdf}, doi = {10.1007/s10494-018-0002-8} We have performed direct numerical simulations of a spatio-temporally intermittent flow in a pipe for Re_m = 2250. From previous experiments and simulations of pipe flow, this value has been estimated as a threshold when the average speeds of upstream and downstream fronts of a puff are identical. We investigated the structure of an individual puff by considering three-dimensional snapshots over a long time period. To assimilate the velocity data, we applied a conditional sampling based on the location of the maximum energy of the transverse (turbulent) motion. Specifically, at each time instance, we followed a turbulent puff by a three-dimensional moving window centered at that location. We collected a snapshot-ensemble (10000 time instances, snapshots) of the velocity fields acquired over a T = 2000D/U time interval inside the moving window. The cross-plane velocity field inside the puff showed the dynamics of developing turbulence. In particular, the analysis of the cross-plane radial motion yielded the illustration of the production of turbulent kinetic energy directly from the mean flow. A snapshot-ensemble averaging over 10000 snapshots revealed azimuthally arranged large-scale (coherent) structures indicating near-wall sweep and ejection activity. The localized puff is about 15-17 pipe diameters long and the flow regime upstream of its upstream edge and downstream of its leading edge is almost laminar. In the near-wall region, despite the low Reynolds number, the turbulence statistics, in particular, the distribution of turbulence intensities, Reynolds shear stress, skewness and flatness factors, become similar to a fully-developed turbulent pipe flow in the vicinity of the puff upstream edge. In the puff core, the velocity profile becomes flat and logarithmic. It is shown that this "fully-developed turbulent flash" is very narrow, being about two pipe diameters long.

D. Moxey, S. P. Sastry and R. M. Kirby Interpolation error bounds for curvilinear finite elements and their implications on adaptive mesh refinement J. Sci. Comp., 78 (2), pp. 1045–1062, 2019. 10.1007/s10915-018-0795-6 BibTeX Abstract @article{moxey-2019, title = {Interpolation error bounds for curvilinear finite elements and their implications on adaptive mesh refinement}, author = {Moxey, D. and Sastry, S. P. and Kirby, R. M.}, pages = {1045-1062}, url = {http://dx.doi.org/10.1007/s10915-018-0795-6} There is an increasing requirement from both academia and industry for high-fidelity flow simulations that are able to accurately capture complicated and transient flow dynamics in complex geometries. Coupled with the growing availability of high-performance, highly parallel computing resources, there is therefore a demand for scalable numerical methods and corresponding software frameworks which can deliver the next-generation of complex and detailed fluid simulations to scientists and engineers in an efficient way. In this article we discuss recent and upcoming advances in the use of the spectral/hp element method for addressing these modelling challenges.
To use these methods efficiently for such applications, it is critical that computational resolution is placed in the regions of the flow where it is needed most, which is often not known a priori. We propose the use of spatially and temporally varying polynomial order, coupled with appropriate error estimators, as key requirements in permitting these methods to achieve computationally efficient high-fidelity solutions to complex flow problems in the fluid dynamics community.

M. Turner, J. Peiró and D. Moxey Curvilinear mesh generation using a variational framework Comput. Aided Design, 103, pp. 73–91, 2018. 10.1016/j.cad.2017.10.004 BibTeX Abstract @article{turner-2018, title = {Curvilinear mesh generation using a variational framework}, author = {Turner, M. and Peir\'o, J. and Moxey, D.}, journal = {Comput. Aided Design}, doi = {10.1016/j.cad.2017.10.004}, url = {http://www.sciencedirect.com/science/article/pii/S0010448517301744} We aim to tackle the challenge of generating unstructured high-order meshes of complex three-dimensional bodies, which remains a significant bottleneck in the wider adoption of high-order methods. In particular we show that by adopting a variational approach to the generation process, many of the current popular high-order generation methods can be encompassed under a single unifying framework. This allows us to compare the effectiveness of these methods and to assess the quality of the meshes they produce in a systematic fashion. We present a detailed overview of the theory and numerical implementation of the framework, and in particular we highlight how this can be effectively exploited to yield a highly-efficient parallel implementation. The effectiveness of this approach is examined by considering a number of two- and three-dimensional examples, where we show how it can be used for both mesh quality optimisation and untangling of invalid meshes.

J. Eichstädt, M. Green, M. Turner, J. Peiró and D. Moxey Accelerating high-order mesh generation with an architecture-independent programming model Comput. Phys. Commun., 229, pp. 36–53, 2018. 10.1016/j.cpc.2018.03.025 BibTeX Abstract title = {Accelerating high-order mesh generation with an architecture-independent programming model}, author = {Eichst\"adt, J. and Green, M. and Turner, M. and Peir\'o, J. and Moxey, D.}, doi = {10.1016/j.cpc.2018.03.025}, url = {https://www.sciencedirect.com/science/article/pii/S0010465518300973} Heterogeneous manycore performance-portable programming models and libraries, such as Kokkos, have been developed to facilitate portability and maintainability of high-performance computing codes and enhance their resilience to architectural changes. Here we investigate the suitability of the Kokkos programming model for optimizing the performance of the high-order mesh generator NekMesh, which has been developed to efficiently generate meshes containing millions of elements for industrial problems involving complex geometries. We describe the variational approach for a posteriori high-order mesh generation employed within NekMesh and its parallel implementation. We discuss its optimisation for modern manycore massively parallel shared-memory CPU and GPU platforms using Kokkos and demonstrate that we achieve increased performance on multicore CPUs and accelerators compared with a native Pthreads implementation. Further, we show that we achieve additional speedup and cost reduction by running on GPUs without any hardware-specific code optimisation. D.
de Grazia, D. Moxey, S. J. Sherwin, M. A. Kravtsova and A. I. Ruban DNS of a compressible boundary layer flow past an isolated three-dimensional hump in a high-speed subsonic regime Phys. Rev. Fluids, 3, p. 024101, 2018. 10.1103/PhysRevFluids.3.024101 BibTeX Abstract @article{degrazia-2016, title = {DNS of a compressible boundary layer flow past an isolated three-dimensional hump in a high-speed subsonic regime}, author = {de Grazia, D. and Moxey, D. and Sherwin, S. J. and Kravtsova, M. A. and Ruban, A. I.}, journal = {Phys. Rev. Fluids}, doi = {10.1103/PhysRevFluids.3.024101}, url = {https://davidmoxey.uk/assets/pubs/2018-prf.pdf} In this paper we study the boundary-layer separation produced in a high-speed subsonic boundary layer by a small wall roughness. Specifically, we present a direct numerical simulation (DNS) of a two-dimensional boundary-layer flow over a flat plate encountering a three-dimensional Gaussian-shaped hump. This work was motivated by the lack of DNS data of boundary-layer flows past roughness elements in a similar regime, which is typical of civil aviation. The Mach and Reynolds numbers are chosen to be relevant for aeronautical applications when considering small imperfections at the leading edge of wings. We analyze different heights of the hump: the smaller heights result in a weakly nonlinear regime, while the larger ones result in a fully nonlinear regime with an increasing laminar separation bubble arising downstream of the roughness element and the formation of a pair of streamwise counterrotating vortices which appear to support themselves.

D. Ekelschot, D. Moxey, S. J. Sherwin and J. Peiró A p-adaptation method for compressible flow problems using a goal-based error estimator Comput. Struct., 181, pp. 55–69, 2017. 10.1016/j.compstruc.2016.03.004 BibTeX Abstract @article{ekelschot-2017, title = {A $p$-adaptation method for compressible flow problems using a goal-based error estimator}, author = {Ekelschot, D. and Moxey, D. and Sherwin, S. J. and Peir\'o, J.}, journal = {Comput. Struct.}, doi = {10.1016/j.compstruc.2016.03.004}, url = {https://davidmoxey.uk/assets/pubs/2016-padapt.pdf} An accurate calculation of aerodynamic force coefficients for a given geometry is of fundamental importance for aircraft design. High-order spectral/hp element methods, which use a discontinuous Galerkin discretisation of the compressible Navier–Stokes equations, are now increasingly being used to improve the accuracy of flow simulations and thus the force coefficients. To reduce error in the calculated force coefficients whilst keeping computational cost minimal, we propose a p-adaptation method where the degree of the approximating polynomial is locally increased in the regions of the flow where low resolution is identified using a goal-based error estimator as follows. Given an objective functional such as the aerodynamic force coefficients, we use control theory to derive an adjoint problem which provides the sensitivity of the functional with respect to changes in the flow variables, and assume that these changes are represented by the local truncation error. In its final form, the goal-based error indicator represents the effect of truncation error on the objective functional, suitably weighted by the adjoint solution. Both flow governing and adjoint equations are solved by the same high-order method, where we allow the degree of the polynomial within an element to vary across the mesh.
We initially calculate a steady-state solution to the governing equations using a low polynomial order and use the goal-based error indicator to identify parts of the computational domain that require improved solution accuracy, which is achieved by increasing the approximation order. We demonstrate the cost-effectiveness of our method across a range of polynomial orders by considering a number of examples in two- and three-dimensions and in subsonic and transonic flow regimes. Reductions in both the number of degrees of freedom required to resolve the force coefficients to a given error and in the computational cost are observed when using the p-adaptive technique.

D. Moxey, C. D. Cantwell, R. M. Kirby and S. J. Sherwin Optimizing the performance of the spectral/hp element method with collective linear algebra operations Comput. Meth. Appl. Mech. Eng., 310, pp. 628–645, 2016. 10.1016/j.cma.2016.07.001 BibTeX Abstract title = {Optimizing the performance of the spectral/hp element method with collective linear algebra operations}, author = {Moxey, D. and Cantwell, C. D. and Kirby, R. M. and Sherwin, S. J.}, doi = {10.1016/j.cma.2016.07.001} As high-performance computing hardware evolves, increasing core counts mean that memory bandwidth is becoming the deciding factor in attaining peak CPU performance. Methods that make efficient use of memory and caches are therefore essential for modern hardware. High-order finite element methods, such as those implemented in the spectral/hp framework Nektar++, are particularly well-suited to this environment. Unlike low-order methods that typically utilize sparse storage, matrices representing high-order operators have greater density and richer structure. In this paper, we show how these qualities can be exploited to increase runtime performance by amalgamating the action of key operators on multiple elements into a single, memory-efficient block. We investigate different strategies for achieving optimal performance across a range of polynomial orders and element types. As these strategies all depend on external factors such as BLAS implementation and the geometry of interest, we present a technique for automatically selecting the most efficient strategy at runtime.

A. Bolis, C. D. Cantwell, D. Moxey, D. Serson and S. J. Sherwin An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies @article{bolis-2016, title = {{An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies}}, author = {Bolis, A. and Cantwell, C. D. and Moxey, D. and Serson, D. and Sherwin, S. J.}, url = {http://www.sciencedirect.com/science/article/pii/S001046551630100X} A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs to identify the most efficient parameter choices. The model is calibrated to target a specific hardware platform, after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel.
The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.

J.-E. W. Lombard, D. Moxey, S. J. Sherwin, J. F. A. Hoessler, S. Dhandapani and M. J. Taylor Implicit large-eddy simulation of a wingtip vortex AIAA J., 54 (2), pp. 506–518, 2016. 10.2514/1.J054181 BibTeX Abstract @article{lombard-2016, title = {Implicit large-eddy simulation of a wingtip vortex}, author = {Lombard, J.-E. W. and Moxey, D. and Sherwin, S. J. and Hoessler, J. F. A. and Dhandapani, S. and Taylor, M. J.}, journal = {AIAA J.}, url = {http://arxiv.org/abs/1507.06012}, doi = {10.2514/1.J054181} In this article, recent developments in numerical methods for performing a large-eddy simulation of the formation and evolution of a wingtip vortex are presented. The development of these vortices in the near wake, in combination with the large Reynolds numbers present in these cases, makes these types of test cases particularly challenging to investigate numerically. First, an overview is given of the spectral vanishing viscosity/implicit large-eddy simulation solver that is used to perform the simulations, and techniques are highlighted that have been adopted to solve various numerical issues that arise when studying such cases. To demonstrate the method's viability, results are presented from numerical simulations of flow over a NACA 0012 profile wingtip at Re_c = 1.2⋅10^6 and they are compared against experimental data, which is to date the highest Reynolds number achieved for a large-eddy simulation that has been correlated with experiments for this test case. The model in this paper correlates favorably with experiment, both for the characteristic jetting in the primary vortex and pressure distribution on the wing surface. The proposed method is of general interest for the modeling of transitioning vortex-dominated flows over complex geometries.

S. Yakovlev, D. Moxey, S. J. Sherwin and R. M. Kirby To CG or to HDG: a comparative study in 3D J. Sci. Comp., 67 (1), pp. 192–220, 2016. 10.1007/s10915-015-0076-6 BibTeX Abstract @article{yakovlev-2016, title = {{To CG or to HDG: a comparative study in 3D}}, author = {Yakovlev, S. and Moxey, D. and Sherwin, S. J. and Kirby, R. M.}, pages = {{192-220}}, url = {https://davidmoxey.uk/assets/pubs/2015-hdg.pdf}, Since the inception of discontinuous Galerkin (DG) methods for elliptic problems, there has existed a question of whether DG methods can be made more computationally efficient than continuous Galerkin (CG) methods. Fewer degrees of freedom and favourable approximation properties for elliptic problems, together with the number of optimization techniques, such as static condensation, available within the CG framework, made it challenging for DG methods to be competitive until recently. However, with the introduction of a static-condensation-amenable DG method – the hybridizable discontinuous Galerkin (HDG) method – it has become possible to perform a realistic comparison of CG and HDG methods when applied to elliptic problems. In this work, we extend upon an earlier 2D comparative study, providing numerical results and discussion of the CG and HDG method performance in three dimensions. The comparison categories covered include steady-state elliptic and time-dependent parabolic problems, various element types and serial and parallel performance. The postprocessing technique, which allows for superconvergence in the HDG case, is also discussed.
Depending on the linear system solver used and the type of the problem (steady-state vs time-dependent) in question, the HDG method either outperforms or demonstrates a comparable performance when compared with the CG method. The HDG method however falls behind performance-wise when the iterative solver is used, which indicates the need for an effective preconditioning strategy for the method.

D. Moxey, D. Ekelschot, Ü. Keskin, S. J. Sherwin and J. Peiró High-order curvilinear meshing using a thermo-elastic analogy Comput. Aided Design, 72, pp. 130–139, 2016. 10.1016/j.cad.2015.09.007 BibTeX Abstract title = {High-order curvilinear meshing using a thermo-elastic analogy}, author = {Moxey, D. and Ekelschot, D. and Keskin, {\"U}. and Sherwin, S. J. and Peir{\'o}, J.}, doi = {10.1016/j.cad.2015.09.007} With high-order methods becoming increasingly popular in both academia and industry, generating curvilinear meshes that align with the boundaries of complex geometries continues to present a significant challenge. Whereas traditional low-order methods use planar-faced elements, high-order methods introduce curvature into elements that may, if added naively, cause the element to self-intersect. Over the last few years, several curvilinear mesh generation techniques have been designed to tackle this issue, utilising mesh deformation to move the interior nodes of the mesh in order to accommodate curvature at the boundary. Many of these are based on elastic models, where the mesh is treated as a solid body and deformed according to a linear or non-linear stress tensor. However, such methods typically have no explicit control over the validity of the elements in the resulting mesh. In this article, we present an extension of this elastic formulation, whereby a thermal stress term is introduced to 'heat' or 'cool' elements as they deform. We outline a proof-of-concept implementation and show that the adoption of a thermo-elastic analogy leads to an additional degree of robustness, by considering examples in both two and three dimensions.

G. Mengaldo, D. de Grazia, D. Moxey, P. E. Vincent and S. J. Sherwin Dealiasing techniques for high-order spectral element methods on regular and irregular grids J. Comput. Phys., 299, pp. 56–81, 2015. 10.1016/j.jcp.2015.06.032 BibTeX Abstract title = {{Dealiasing techniques for high-order spectral element methods on regular and irregular grids}}, author = {Mengaldo, G. and de Grazia, D. and Moxey, D. and Vincent, P. E. and Sherwin, S. J.}, High-order methods are becoming increasingly attractive in both academia and industry, especially in the context of computational fluid dynamics. However, before they can be more widely adopted, issues such as lack of robustness in terms of numerical stability need to be addressed, particularly when treating industrial-type problems where challenging geometries and a wide range of physical scales, typically due to high Reynolds numbers, need to be taken into account. One source of instability is aliasing effects which arise from the nonlinearity of the underlying problem. In this work we detail two dealiasing strategies based on the concept of consistent integration, the first of which uses a localised approach which is useful when the nonlinearities only arise in parts of the problem and the second a more traditional approach of using a higher quadrature. The main goal of both dealiasing techniques is to improve the robustness of high order spectral element methods, thereby reducing aliasing-driven instabilities. We demonstrate how these two strategies can be effectively applied to both continuous and discontinuous discretisations, where in the latter both volumetric and interface approximations must be considered. We show the key features of each dealiasing technique applied to the scalar conservation law with numerical examples and we highlight the main differences in implementation between continuous and discontinuous spatial discretisations.
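The consistent-integration idea described above can be seen in one dimension: the square of a degree-P polynomial has degree 2P, so projecting it back onto the degree-P basis with only P+1 Gauss points commits an aliasing error that a richer quadrature removes. A minimal illustrative sketch of this effect (not the paper's spectral element setup):

```python
# Aliasing in the L2 projection of u^2, removed by over-integration.
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

P = 6
rng = np.random.default_rng(2)
coef = rng.standard_normal(P + 1)           # modal coefficients of u, degree P

def project_usq(Q):
    """L2-project u^2 (degree 2P) onto degree-P Legendre modes, Q Gauss points."""
    z, w = leggauss(Q)
    V = legvander(z, P)                     # V[q, p] = L_p(z_q)
    u = V @ coef                            # u at the quadrature points
    rhs = V.T @ (w * u**2)                  # (L_p, u^2): exact once 2Q-1 >= 3P
    mass = V.T @ (w[:, None] * V)           # already exact for Q = P+1
    return np.linalg.solve(mass, rhs)

aliased = project_usq(P + 1)                # standard quadrature: aliased
clean = project_usq(2 * P + 1)              # consistent integration: exact
print(np.linalg.norm(aliased - clean))      # the aliasing error
```

Gauss quadrature with Q points integrates polynomials of degree 2Q-1 exactly, which is where the familiar "3P/2"-type rule for quadratic nonlinearities comes from.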
C. D. Cantwell, D. Moxey, A. Comerford, A. Bolis, G. Rocco, G. Mengaldo, D. de Grazia, S. Yakovlev, J.-E. Lombard, D. Ekelschot, B. Jordi, H. Xu, Y. Mohamied, C. Eskilsson, B. Nelson, P. Vos, C. Biotto, R. M. Kirby and S. J. Sherwin Nektar++: An open-source spectral/hp element framework Comput. Phys. Commun., 192, pp. 205–219, 2015. 10.1016/j.cpc.2015.02.008 BibTeX Abstract @article{cantwell-2015, title = {Nektar++: An open-source spectral/hp element framework}, author = {Cantwell, C. D. and Moxey, D. and Comerford, A. and Bolis, A. and Rocco, G. and Mengaldo, G. and de Grazia, D. and Yakovlev, S. and Lombard, J.-E. and Ekelschot, D. and Jordi, B. and Xu, H. and Mohamied, Y. and Eskilsson, C. and Nelson, B. and Vos, P. and Biotto, C. and Kirby, R. M. and Sherwin, S. J.}, Nektar++ is an open-source software framework designed to support the development of high-performance scalable solvers for partial differential equations using the spectral/hp element method. High-order methods are gaining prominence in several engineering and biomedical applications due to their improved accuracy at reduced computational cost. However, their proliferation is often limited by implementational complexity, which makes practically embracing these methods particularly challenging. Nektar++ is an initiative to overcome this limitation by encapsulating the mathematical complexities of the underlying method within an efficient C++ framework, making the techniques more accessible to the broader scientific and industrial communities for solving a range of problems. The software supports a variety of discretisation techniques and implementation strategies, supporting methods research as well as application-focused computation, and the multi-layered structure of the framework allows the user to embrace as much or as little of the complexity as they need. The libraries capture the mathematical constructs of spectral/hp element methods, while the associated collection of pre-written PDE solvers provides out-of-the-box application-level functionality and a template for users who wish to develop solutions for addressing questions in their own scientific domains.

D. Moxey, M. D. Green, S. J. Sherwin and J. Peiró An isoparametric approach to high-order curvilinear boundary-layer meshing title = {An isoparametric approach to high-order curvilinear boundary-layer meshing}, author = {Moxey, D. and Green, M. D. and Sherwin, S. J. and Peir{\'o}, J.}, doi = {10.1016/j.cma.2014.09.019}, The generation of high-order curvilinear meshes for complex three-dimensional geometries is presently a challenging topic, particularly for meshes used in simulations at high Reynolds numbers where a thin boundary layer exists near walls and elements are highly stretched in the direction normal to flow. In this paper, we present a conceptually simple but very effective and modular method to address this issue.
We propose an isoparametric approach, whereby a mesh containing a valid coarse discretisation comprising high-order triangular prisms near walls is refined to obtain a finer prismatic or tetrahedral boundary-layer mesh. The validity of the prismatic mesh provides a suitable mapping that allows one to obtain very fine mesh resolutions across the thickness of the boundary layer. We describe the method in detail for a high-order approximation using modal basis functions, discuss the requirements for the splitting method to produce valid prismatic and tetrahedral meshes and provide a sufficient criterion of validity in both cases. By considering two complex aeronautical configurations, we demonstrate how highly stretched meshes with sufficient resolution within the laminar sublayer can be generated to enable the simulation of flows with Reynolds numbers of 10^6 and above.

E. Ferrer, D. Moxey, S. J. Sherwin and R. H. J. Willden Stability of projection methods for incompressible flows using high order pressure-velocity pairs of same degree: Continuous and Discontinuous Galerkin formulations Commun. Comp. Phys., 16 (3), pp. 817–840, 2014. 10.4208/cicp.290114.170414a BibTeX Abstract @article{ferrer-2014, title = {{Stability of projection methods for incompressible flows using high order pressure-velocity pairs of same degree: Continuous and Discontinuous Galerkin formulations}}, author = {Ferrer, E. and Moxey, D. and Sherwin, S. J. and Willden, R. H. J.}, doi = {10.4208/cicp.290114.170414a}, journal = {Commun. Comp. Phys.}, url = {https://davidmoxey.uk/assets/pubs/2014-temporal.pdf} This paper presents limits for stability of projection type schemes when using high order pressure-velocity pairs of same degree. Two high order h/p variational methods encompassing continuous and discontinuous Galerkin formulations are used to explain previously observed lower limits on the time step for projection type schemes to be stable, when h- or p-refinement strategies are considered. In addition, the analysis included in this work shows that these stability limits do not depend only on the time step but on the product of the latter and the kinematic viscosity, which is of particular importance in the study of high Reynolds number flows. We show that high order methods prove advantageous in stabilising the simulations when small time steps and low kinematic viscosities are used. Drawing upon this analysis, we demonstrate how the effects of this instability can be reduced in the discontinuous scheme by introducing a stabilisation term into the global system. Finally, we show that these lower limits are compatible with Courant-Friedrichs-Lewy (CFL) type restrictions, given that a sufficiently high polynomial order or a small enough mesh spacing is selected.

D. de Grazia, G. Mengaldo, D. Moxey, P. E. Vincent and S. J. Sherwin Connections between the discontinuous Galerkin method and high-order flux reconstruction schemes Int. J. Numer. Meth. Fl., 75 (12), pp. 860–877, 2014. 10.1002/fld.3915 BibTeX Abstract title = {{Connections between the discontinuous Galerkin method and high-order flux reconstruction schemes}}, author = {de Grazia, D. and Mengaldo, G. and Moxey, D. and Vincent, P. E. and Sherwin, S. J.}, doi = {10.1002/fld.3915}, url = {https://davidmoxey.uk/assets/pubs/2014-frdg.pdf}, journal = {Int. J. Numer. Meth.
With high-order methods becoming more widely adopted throughout the field of computational fluid dynamics, the development of new computationally efficient algorithms has increased tremendously in recent years. The flux reconstruction approach allows various well-known high order schemes to be cast within a single unifying framework. Whilst a connection between flux reconstruction and the discontinuous Galerkin method has been established elsewhere, it still remains to fully investigate the explicit connections between the many popular variants of the discontinuous Galerkin method and the flux reconstruction approach. In this work, we closely examine the connections between three nodal versions of tensor product discontinuous Galerkin spectral element approximations and two types of flux reconstruction schemes for solving systems of conservation laws on quadrilateral meshes. The different types of discontinuous Galerkin approximations arise from the choice of the solution nodes of the Lagrange basis representing the solution and from the quadrature approximation used to integrate the mass matrix and the other terms of the discretisation. By considering both a linear and nonlinear advection equation on a regular grid, we examine the mathematical properties which connect these discretisations. These arguments are further confirmed by the results of an empirical numerical study.

J. Cohen, C. D. Cantwell, N. P. C. Hong, D. Moxey, M. Illingworth, A. Turner, J. Darlington and S. J. Sherwin
Simplifying the Development, Use and Sustainability of HPC Software
J. Open Res. Soft., 2 (1), 2014. doi:10.5334/jors.az
https://davidmoxey.uk/assets/pubs/2014-jors.pdf

Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support sustainability of scientific software and help to widen access to it.

K. Avila, D. Moxey, A. de Lozar, M. Avila, D. Barkley and B. Hof
The onset of turbulence in pipe flow
Science, 333 (6039), pp. 192–196, 2011. doi:10.1126/science.1203223 (published as a research article)
https://davidmoxey.uk/assets/pubs/2011-science.pdf
Shear flows undergo a sudden transition from laminar to turbulent motion as the velocity increases, and the onset of turbulence radically changes transport efficiency and mixing properties. Even for the well-studied case of pipe flow, it has not been possible to determine at what Reynolds number the motion will be either persistently turbulent or ultimately laminar. We show that in pipes, turbulence which is transient at low Reynolds numbers becomes sustained at a distinct critical point. Through extensive experiments and computer simulations we are able to identify and characterize the processes ultimately responsible for sustaining turbulence. In contrast to the classical Landau-Ruelle-Takens view that turbulence arises from an increase in the temporal complexity of fluid motion, here spatial proliferation of chaotic domains is the decisive process and intrinsic to the nature of fluid turbulence.

D. Moxey and D. Barkley
Distinct large-scale turbulent-laminar states in transitional pipe flow
Proc. Nat. Acad. Sci., 107 (18), pp. 8091–8096, 2010. doi:10.1073/pnas.0909560107
https://davidmoxey.uk/assets/pubs/2010-pnas.pdf

When fluid flows through a channel, pipe, or duct, there are two basic forms of motion: smooth laminar motion and complex turbulent motion. The discontinuous transition between these states is a fundamental problem that has been studied for more than 100 years. What has received far less attention is the large-scale nature of the turbulent flows near transition once they are established. We have carried out extensive numerical computations in pipes of variable lengths up to 125 diameters to investigate the nature of transitional turbulence in pipe flow. We show the existence of three fundamentally different turbulent states separated by two distinct Reynolds numbers. Below Re1 ≈ 2300, turbulence takes the form of familiar equilibrium (or long-time transient) puffs that are spatially localized and keep their size independent of pipe length. At Re1 the flow makes a striking transition to a spatio-temporally intermittent flow that fills the pipe. Irregular alternation of turbulent and laminar regions is inherent and does not result from random disturbances. The fraction of turbulence increases with Re until Re2 ≈ 2600, where there is a continuous transition to a state of uniform turbulence along the pipe. We relate these observations to directed percolation and argue that Re1 marks the onset of infinite-lifetime turbulence.

D. Moxey, M. D. Green, S. J. Sherwin and J. Peiró
On the generation of curvilinear meshes through subdivision of isoparametric elements
in New Challenges in Grid Generation and Adaptivity for Scientific Computing, Springer, 2015, pp. 203–215. doi:10.1007/978-3-319-06053-8_10
https://davidmoxey.uk/assets/pubs/2014-tet.pdf
Recently, a new mesh generation technique based on the isoparametric representation of curvilinear elements has been developed in order to address the issue of generating high-order meshes with highly stretched elements. Given a valid coarse mesh comprising a prismatic boundary layer, this technique uses the shape functions that define the geometries of the elements to produce a series of subdivided elements of arbitrary height. The purpose of this article is to investigate the range of conditions under which the resulting meshes are valid, and additionally to consider the application of this method to different element types. We consider the subdivision strategies that can be achieved with this technique and apply it to the generation of meshes suitable for boundary-layer fluid problems.

J. Peiró, D. Moxey, B. Jordi, S. J. Sherwin, B. W. Nelson, R. M. Kirby and R. Haimes
High-order visualization with ElVis
in IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach, Springer, 2015, pp. 521–534. doi:10.1007/978-3-319-12886-3_24

Accurate visualization of high-order meshes and flow fields is a fundamental tool for the verification, validation, analysis and interpretation of high-order flow simulations. Standard visualization tools based on piecewise linear approximations can be used for the display of high-order fields, but their accuracy is restricted by computer memory and processing time. More often than not, the accurate visualization of complex flows using this strategy requires computational resources beyond the reach of most users. This chapter describes ElVis, a truly high-order and interactive visualization system created for the accurate and interactive visualization of scalar fields produced by high-order spectral/hp finite element simulations. We show some examples that motivate the need for such a visualization system and illustrate some of its features for the display and analysis of simulation data.

D. Moxey, M. Hazan, S. J. Sherwin and J. Peiró
Curvilinear mesh generation for boundary layer problems
in IDIHOM: Industrialization of High-Order Methods-A Top-Down Approach, Springer, 2015, pp. 41–64. doi:10.1007/978-3-319-12886-3_3

In this article, we give an overview of a new technique for unstructured curvilinear boundary layer grid generation, which uses the isoparametric mappings that define elements in an existing coarse prismatic grid to produce a refined mesh capable of resolving arbitrarily thin boundary layers. We demonstrate that the technique always produces valid grids given an initially valid coarse mesh, and additionally show how this can be extended to convert hybrid meshes to meshes containing only simplicial elements.

B. Liu, C. D. Cantwell, D. Moxey, M. Green and S. J. Sherwin
Vectorised spectral/hp element matrix-free operator for anisotropic heat transport in tokamak edge plasma
in 8th European Congress on Computational Methods in Applied Sciences and Engineering, 2022. doi:10.23967/eccomas.2022.291
https://www.scipedia.com/public/Liu_et_al_2022b

Near-Wall Turbulence in a Localized Puff in a Pipe
in Progress in Turbulence VIII (Örlü, R., Talamelli, A., Peinke, J. and Oberlack, M., eds.), 2019, pp. 15–20. doi:10.1007/978-3-030-22196-6_3
https://davidmoxey.uk/assets/pubs/2019-nearwall-turb.pdf

We have performed direct numerical simulations of a transitional flow in a pipe for Re_m = 2250, when turbulence manifests in the form of flashes (puffs). From experiments and simulations, Re_m ≈ 2250 has been estimated as a threshold when the average speeds of upstream and downstream fronts of a puff are identical (Song et al. in J Fluid Mech 813:283–304, 2017, [1]). The flow regime upstream of its trailing edge and downstream of its leading edge is almost laminar. To collect the velocity data, at each time instance, we followed a turbulent puff by a three-dimensional moving window centered at the location of the maximum energy of the transverse (turbulent) motion. In the near-wall region, despite the low Reynolds number, the turbulence statistics, in particular the distribution of turbulence intensities and Reynolds shear stress, become similar to those of a fully-developed turbulent pipe flow.

J. Eichstädt, D. Moxey and J. Peiró
Towards a performance-portable high-order implicit flow solver
in 2019 AIAA Aerospace Sciences Meeting, 2019. doi:10.2514/6.2019-1404
https://davidmoxey.uk/assets/pubs/2019-aiaa-scitech-2.pdf

J. Marcon, J. Peiró, D. Moxey, N. Bergemann, H. Bucklow and M. R. Gammon
A semi-structured approach to curvilinear mesh generation around streamlined bodies
in 2019 AIAA Aerospace Sciences Meeting, 2019. doi:10.2514/6.2019-1725
https://davidmoxey.uk/assets/pubs/2019-aiaa-scitech.pdf

We present an approach for robust high-order mesh generation specially tailored to streamlined bodies. The method is based on a semi-structured approach which combines the high quality of structured meshes in the near-field with the flexibility of unstructured meshes in the far-field. We utilise medial axis technology to robustly partition the near-field into blocks which can be meshed coarsely with a linear swept mesher. A high-order mesh of the near-field is then generated and split using an isoparametric approach which allows us to obtain highly stretched elements aligned with the flow field. Special treatment of the partition is performed on the wing root junction and the trailing edge (into the wake) to obtain an H-type mesh configuration with anisotropic hexahedra ideal for the strong shear of high Reynolds number simulations. We then proceed to discretise the far-field using traditional robust tetrahedral meshing tools.
This workflow is made possible by two sets of tools: CADfix, focused on the CAD system, the block partitioning of the near-field and the generation of a linear mesh; and NekMesh, focused on the curving of the high-order mesh and the generation of highly-stretched boundary layer elements. We demonstrate this approach on a NACA0012 wing attached to a wall and show that a gap between the wake partition and the wall can be inserted to remove the dependency of the partitioning procedure on the local geometry.

J. Marcon, M. Turner, J. Peiró, D. Moxey, C. R. Pollard, H. Bucklow and M. Gammon
High-order curvilinear hybrid mesh generation for CFD simulations

We describe a semi-structured method for the generation of high-order hybrid meshes suited for the simulation of high Reynolds number flows. This is achieved through the use of highly stretched elements in the viscous boundary layers near the wall surfaces. CADfix is used to first repair any possible defects in the CAD geometry and then generate a medial object based decomposition of the domain that wraps the wall boundaries with partitions suitable for the generation of either prismatic or hexahedral elements. The latter is a novel distinctive feature of the method that makes it possible to obtain well-shaped hexahedral meshes at corners or junctions in the boundary layer. The medial object approach allows greater control on the "thickness" of the boundary-layer mesh than is generally achievable with advancing layer techniques. CADfix subsequently generates a hybrid straight-sided mesh of prismatic and hexahedral elements in the near-field region modelling the boundary layer, and tetrahedral elements in the far-field region covering the rest of the domain. The mesh in the near-field region provides a framework that facilitates the generation, via an isoparametric technique, of layers of highly stretched elements with a distribution of points in the direction normal to the wall tailored to efficiently and accurately capture the flow in the boundary layer. The final step is the generation of a high-order mesh using NekMesh, a high-order mesh generator within the Nektar++ framework. NekMesh uses the CADfix API as a geometry engine that handles all the geometrical queries to the CAD geometry required during the high-order mesh generation process. We will describe in some detail the methodology using a simple geometry, a NACA wing tip, for illustrative purposes. Finally, we will present two examples of application to reasonably complex geometries proposed by NASA as CFD validation cases: the Common Research Model and the Rotor 67.

D. Moxey, C. D. Cantwell, G. Mengaldo, D. Serson, D. Ekelschot, J. Peiró, S. J. Sherwin and R. M. Kirby
Towards p-adaptive spectral/hp element methods for modelling industrial flows
in Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2016, 2017, pp. 63–79. doi:10.1007/978-3-319-65870-4_4
https://davidmoxey.uk/assets/pubs/2017-icosahom16.pdf

M. Turner, D. Moxey, J. Peiró, M. Gammon, C. R. Pollard and H. Bucklow
A framework for the generation of high-order curvilinear hybrid meshes for CFD simulations
in Procedia Engineering, 2017, 203, pp. 206–218. doi:10.1016/j.proeng.2017.09.808

We present a pipeline of state-of-the-art techniques for the generation of high-order meshes that contain highly stretched elements in viscous boundary layers, and are suitable for flow simulations at high Reynolds numbers. The pipeline uses CADfix to generate a medial object based decomposition of the domain, which wraps the wall boundaries with prismatic partitions. The use of the medial object allows the prism height to be larger than is generally possible with advancing layer techniques. CADfix subsequently generates a hybrid straight-sided (or linear) mesh. A high-order mesh is then generated a posteriori using NekMesh, a high-order mesh generator within the Nektar++ framework. During the high-order mesh generation process, the CAD definition of the domain is interrogated; we describe the process for integrating the CADfix API as an alternative backend geometry engine for NekMesh, and discuss some of the implementation issues encountered. Finally, we illustrate the methodology using three geometries of increasing complexity: a wing tip, a simplified landing gear and an aircraft in cruise configuration.

A variational framework for high-order mesh generation
in Procedia Engineering, 2016, 82, pp. 127–135. doi:10.1016/j.proeng.2016.11.069

The generation of sufficiently high quality unstructured high-order meshes remains a significant obstacle in the adoption of high-order methods. However, there is little consensus on which approach is the most robust, fastest and produces the 'best' meshes. In this work we aim to provide a route to investigate this question, by examining popular high-order mesh generation methods in the context of an efficient variational framework for the generation of curvilinear meshes. By considering previous works in a variational form, we are able to compare their characteristics and study their robustness. Alongside a description of the theory and practical implementation details, including an efficient multi-threading parallelisation strategy, we demonstrate the effectiveness of the framework, showing how it can be used for both mesh quality optimisation and untangling of invalid meshes.

J.-E. Lombard, D. Moxey and S. J. Sherwin
The wing-tip vortex test case
in European Congress on Computational Methods in Applied Sciences and Engineering, Crete, Greece, 2016.
https://davidmoxey.uk/assets/pubs/2016-eccomas-2.pdf

We present a spectral/hp element discretisation, using the Nektar++ code, for performing a Large Eddy Simulation (LES) of the formation and evolution of a wingtip vortex as a test case involving a 3D geometry. The development of these vortices in the near wake, in combination with the large Reynolds numbers, makes this test case particularly challenging to simulate. We consider flow over a NACA 0012 profile wingtip at a Reynolds number of 1.2 million, based on chord length, and compare the results against experimental data; this is to date the highest Reynolds number achieved for an LES that has been correlated with experiments for this test case. The jetting of the primary vortex and the pressure distribution on the wing surface in our model were successfully correlated with the experiment; however, the vortex formation over the rear wing tip shows some discrepancies, which act as a motivation for further testing of high-fidelity methods on this test case. The wingtip vortex test case is of general interest for the modelling of transitioning vortex-dominated flows over complex geometries, which is of particular relevance to applications such as high-lift configurations in aircraft, wind-turbine or propeller and automotive design.

M. Turner, D. Moxey, S. J. Sherwin and J. Peiró
Automatic generation of 3D unstructured high-order curvilinear meshes
in Proceedings of the European Congress on Computational Methods in Applied Sciences and Engineering, 2016, pp. 428–433. doi:10.7712/100016.1825.8410
https://davidmoxey.uk/assets/pubs/2016-eccomas.pdf

The generation of suitable, good quality high-order meshes is a significant obstacle in the academic and industrial uptake of high-order CFD methods. These methods have a number of favourable characteristics, such as low dispersion and dissipation and higher levels of numerical accuracy than their low-order counterparts; however, the methods are highly susceptible to inaccuracies caused by low quality meshes. These meshes require significant curvature to accurately describe the geometric surfaces, which presents a number of difficult challenges in their generation. As yet, research into the field has produced a number of interesting technologies that go some way towards achieving this goal, but are yet to provide a complete system that can systematically produce curved high-order meshes for arbitrary geometries for CFD analysis. This paper presents our efforts in that direction and introduces an open-source high-order mesh generator, NekMesh, which has been created to bring high-order meshing technologies into one coherent pipeline that aims to produce 3D high-order curvilinear meshes from CAD geometries in a robust and systematic way.

J. Cohen, C. Cantwell, D. Moxey, J. Nowell, P. Austing, X. Guo, J. Darlington and S. J. Sherwin
TemPSS: A service providing software parameter templates and profiles for scientific HPC
in IEEE eScience (Munich, Germany), 2015. doi:10.1109/eScience.2015.43
https://davidmoxey.uk/assets/pubs/2015-tempss.pdf

Generating and managing input data for large-scale scientific computations has, for many classes of application, always been a challenging process. The emergence of new hardware platforms and increasingly complex scientific models compounds this problem, as configuration data can change depending on the underlying hardware and properties of the computation. In this paper we present TemPro, a web-based service for building and managing application input files in a semantically focused manner using the concepts of software parameter templates and job profiles. Many complex, distributed applications require the expertise of more than one individual to allow an application to run efficiently on different types of hardware. TemPro supports collaborative development of application inputs through the ability to save, edit and extend job profiles that define the inputs to an application. We describe the concepts of templates and profiles and the structures that developers provide to add an application template to the service. In addition, we detail the implementation of the service and its functionality.

M. Turner, D. Moxey and J. Peiró
Automatic mesh sizing specification of complex three dimensional domains using an octree structure
in 24th International Meshing Roundtable, 2015.
https://davidmoxey.uk/assets/pubs/2015-imr24.pdf

A system for automatically specifying a distribution of mesh sizing throughout three dimensional complex domains is presented, which aims to reduce the level of user input required to generate a mesh. The primary motivation for the creation of this system is the production of suitable linear meshes that are sufficiently coarse for high-order mesh generation purposes. Resolution is automatically increased in regions of high curvature, with the system only requiring three parameters from the user to successfully generate the sizing distribution. This level of automation is achieved through the construction of an octree description of the domain, which targets the curvature of the surfaces and guides the generation of the mesh. After the construction of the octree, an ideal mesh spacing specification is calculated for each octant, based on a relation to the radii of curvature of the domain surfaces and mesh gradation criteria. The system is capable of accurately estimating the number of elements that will be produced prior to the generation process, so that the meshing parameters can be altered to coarsen the mesh before effort is wasted generating the actual mesh.

J. Cohen, D. Moxey, C. D. Cantwell, P. Austing, J. Darlington and S. J. Sherwin
Ensuring an effective user experience when managing and running scientific HPC software
in 2015 IEEE/ACM 1st International Workshop on Software Engineering for High Performance Computing in Science (SE4HPCS), 2015, pp. 56–59. doi:10.1109/SE4HPCS.2015.16
https://davidmoxey.uk/assets/pubs/2015-se4hpcs.pdf

With CPU clock speeds stagnating over the last few years, ongoing advances in computing power and capabilities are being supported through increasing multi- and many-core parallelism. The resulting cost of locally maintaining large-scale computing infrastructure, combined with the need to perform increasingly large simulations, is leading to the wider use of alternative models of accessing infrastructure, such as the use of Infrastructure-as-a-Service (IaaS) cloud platforms. The diversity of platforms and the methods of interacting with them can make using them with complex scientific HPC codes difficult for users. In this position paper, we discuss our approaches to tackling these challenges on heterogeneous resources. As an example of the application of these approaches we use Nekkloud, our web-based interface for simplifying job specification and deployment of the Nektar++ high-order finite element HPC code. We also present results from a recent Nekkloud evaluation workshop undertaken with a group of Nektar++ users.

D. Moxey, D. Ekelschot, U. Keskin, S. J. Sherwin and J. Peiró
A thermo-elastic analogy for high-order curvilinear meshing with control of mesh validity and quality
https://davidmoxey.uk/assets/pubs/2014-elasticity.pdf

In recent years, techniques for the generation of high-order curvilinear meshes have frequently adopted mesh deformation procedures to project the curvature of the surface onto the mesh, thereby introducing curvature into the interior of the domain and lessening the occurrence of self-intersecting elements. In this article, we propose an extension of this approach whereby thermal stress terms are incorporated into the state equation to provide control on the validity and quality of the mesh, thereby adding an extra degree of robustness which is lacking in current approaches.

J. Cohen, D. Moxey, C. D. Cantwell, P. Burovskiy, J. Darlington and S. J. Sherwin
Nekkloud: A software environment for high-order finite element analysis on clusters and clouds
in 2013 IEEE International Conference on Cluster Computing, 2013, pp. 1–5. doi:10.1109/cluster.2013.6702616
https://davidmoxey.uk/assets/pubs/2013-cluster.pdf
As the capabilities of computational platforms continue to grow, scientific software is becoming ever more complex in order to target these platforms effectively. When using large-scale distributed infrastructure such as clusters and clouds it can be difficult for end-users to make efficient use of these platforms. In the libhpc project we are developing a suite of tools and services to simplify job description and execution on heterogeneous infrastructure. In this paper we present Nekkloud, a web-based software environment that builds on elements of the libhpc framework, for running the Nektar++ high-order finite element code on cluster and cloud platforms. End-users submit their jobs via Nekkloud, which then handles their execution on a chosen computing platform. Nektar++ provides a set of solvers that support scientists across a range of domains, ensuring that Nekkloud has a broad range of use cases. We describe the design and development of Nekkloud, user experience and integration with both local campus infrastructure and remote cloud resources, enabling users to make better use of the resources available to them.

in WSSPE13 Workshop, Supercomputing, 2013.
https://davidmoxey.uk/assets/pubs/2013-wsspe13.pdf

Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for HPC computations, scientists require more support from computer scientists and resource providers to develop efficient code and make optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. The use of such frameworks has implications for the sustainability of scientific software. In this paper we set out our developing understanding of these challenges based on work carried out in the libhpc project.

J. Cohen, J. Darlington, B. Fuchs, D. Moxey, C. D. Cantwell, P. Burovskiy, S. J. Sherwin and N. P. C. Hong
libHPC: Software sustainability and reuse through metadata preservation
in First Workshop on Maintainable Software Practices in e-Science, 8th IEEE International Conference on eScience, 2012.
https://davidmoxey.uk/assets/pubs/2012-escience.pdf
Software development, particularly of complex scientific applications, requires a detailed understanding of the problem(s) to be solved and an ability to translate this understanding into the generic constructs of a programming language. We believe that such knowledge – information about a code's "building blocks", especially the low-level functions and procedures in which domain-specific tasks are implemented – can be very effectively leveraged to optimise code execution across platforms and operating systems. However, all too often such knowledge gets lost during the development process, which can bury the scientist's understanding in the code in a manner that makes it difficult to recover or extract later on. In this paper, we describe our work in the EPSRC-funded libHPC project to build a framework that captures and utilises this information to achieve optimised performance in dynamic, heterogeneous networked execution environments. The aim of the framework is to allow scientists to work in high-level scripting environments based on component libraries to provide descriptions of applications which can then be mapped to optimal execution configurations based on available resources. A key element in our approach is the use of "co-ordination forms" – or functional paradigms – for creating optimised execution plans from components. Our main exemplar application is an advanced finite element framework, Nektar++, and we detail ongoing work to undertake profiling and performance analysis to extract software metadata and derive optimal execution configurations, to target resources based on their hardware metadata.

D. Moxey
Spatio-temporal dynamics in pipe flow
PhD thesis, University of Warwick, 2011.
https://davidmoxey.uk/assets/pubs/2011-thesis.pdf

When fluid flows through a channel, pipe or duct, there are two basic forms of motion: smooth laminar flow and disordered turbulent motion. The transition between these two states is a fundamental and open problem which has been studied for over 125 years. What has received far less attention are the intermittent dynamics which possess qualities of both turbulent and laminar regimes. The purpose of this thesis is therefore to investigate large-scale intermittent states through extensive numerical simulations in the hope of further understanding the transition to turbulence in pipe flow.

D. Moxey
"Snakes on a plane": An introduction to the study of polymer chains using Monte Carlo methods
Master's thesis, University of Warwick, 2007.
https://davidmoxey.uk/assets/pubs/2007-project.pdf

In this report, a number of basic Monte Carlo methods for modelling polymer chains are presented (including configurational-bias Monte Carlo and the pruned-enriched Rosenbluth method, PERM). These are then used to investigate the behaviour of the collapse of polymer chains around the well-studied theta-point. Additionally, a flat-histogram version of PERM is outlined and applied to the problem of polymers both tethered to and in close proximity to an adsorbing surface.
Spatial distribution and risk factors for human cysticercosis in Colombia

Erika Galipó, Matthew A. Dixon (ORCID: orcid.org/0000-0002-1710-6237), Claudio Fronterrè, Zulma M. Cucunubá, Maria-Gloria Basáñez, Kim Stevens, Astrid Carolina Flórez Sánchez & Martin Walker
Parasites & Vectors, volume 14, Article number 590 (2021)

Abstract

Background: Cysticercosis is a zoonotic neglected tropical disease (NTD) that affects humans and pigs following the ingestion of Taenia solium eggs. Human cysticercosis poses a substantial public health burden in endemic countries. The World Health Organization (WHO) aims to target high-endemicity settings with enhanced interventions in 17 countries by 2030. Between 2008 and 2010, Colombia undertook a national baseline serosurvey of unprecedented scale, which led to an estimated seroprevalence of T. solium cysticercus antibodies among the general population of 8.6%. Here, we use contemporary geostatistical approaches to analyse this unique dataset with the aim of understanding the spatial distribution and risk factors associated with human cysticercosis in Colombia to inform how best to target intervention strategies.

Methods: We used a geostatistical model to estimate individual and household risk factors associated with seropositivity to T. solium cysticercus antibodies from 29,253 people from 133 municipalities in Colombia. We used both independent and spatially structured random effects at neighbourhood/village and municipality levels to account for potential clustering of exposure to T. solium. We present estimates of the distribution and residual correlation of seropositivity at the municipality level.

Results: High seroprevalence was identified in municipalities located in the north and south of Colombia, with spatial correlation in seropositivity estimated up to approximately 140 km. Statistically significant risk factors associated with seropositivity to T. solium cysticercus were related to age, sex, educational level, socioeconomic status, use of rainwater, consumption of partially cooked/raw pork meat and possession of dogs.

Conclusions: In Colombia, the distribution of human cysticercosis is influenced by socioeconomic considerations, education and environmental factors related to the spread of T. solium eggs. This information can be used to tailor national intervention strategies, such as targeting spatial hotspots and more highly exposed groups, including displaced people and women. Large-scale seroprevalence surveys accompanied by geospatial mapping are an essential step towards reaching the WHO's 2021–2030 NTD roadmap targets.

Background

The zoonotic tapeworm, Taenia solium, is responsible for taeniasis/cysticercosis, which is included in the World Health Organization's (WHO's) list of prioritised neglected tropical diseases (NTDs) [1]. Humans are the definitive hosts of T. solium and harbour the adult tapeworm in their bowel. Pigs are intermediate hosts, infected by larval cysts (cysticerci) following ingestion of parasite eggs and proglottids [2] in human faeces. Eggs hatch in the pig's digestive system, and the released oncospheres first penetrate the intestinal wall, entering the bloodstream, and then become encysted in striated muscle, brain, liver and subcutaneous and other tissues. Porcine cysticercosis is often asymptomatic [2, 3], although cysts in pig brain tissue can cause neurocysticercosis (NCC) and epileptic seizures [4]. Humans contract taeniasis following consumption of tissue cysts in poorly cooked pork meat.
Taeniasis is usually asymptomatic, but mild symptoms, including abdominal pain, distension, diarrhoea and nausea, may appear [2]. Humans can also be infected with T. solium eggs, typically from ingestion of food contaminated with human faecal material [5] or food washed with contaminated water [6]. Internal auto-infestation following regurgitation of proglottids in the stomach has also been suggested as an additional route of infection [2, 5, 7]. Infection with T. solium eggs causes cysticercosis, which manifests most severely when cysts migrate to the central nervous system, resulting in NCC [2]. Morbidity from NCC associated with seizures, epilepsy and other neurological sequelae is driven by the number and location of cysts or following the degeneration of viable cysts [8]. Taeniasis/cysticercosis is widely endemic globally. Taenia solium cysticercosis antibody seroprevalence, indicative of exposure, ranges from 1.8 to 31.2% in Latin America, from 12.6 to 19.2% in Asia and from 7.7 to 34.5% in Africa (as measured using an enzyme-linked immunoelectrotransfer blot (EITB) assay) [9], which highlights substantial variation in exposure to T. solium eggs across settings. NCC is responsible for the predominant disease burden associated with T. solium infection, accounting for approximately 30% of epilepsy cases in endemic countries and 3% globally [10]. In addition, this zoonosis impacts the pork meat market, with small producers experiencing economic losses due to the reduction in value of infected pork meat [4] and a market shift towards home slaughtering and selling [11]. In Colombia, taeniasis/cysticercosis poses a substantial public health problem [12], with an estimated life-time prevalence of epilepsy of 20.9 per 1000 individuals and a prevalence of neurocysticercosis (by computed tomography scan) of 13.9% [13]. The country-wide prevalence of T. solium cysticercus antibodies was estimated at 8.6% from a national serosurvey of more than 29,000 people conducted between 2008 and 2010 [14]. Despite the unprecedented scale of this epidemiological survey, and the development by the Pan American Health Organization in 2015 of a formal plan of surveillance and control in Colombia [12], there has been little implementation of systematic surveillance or intervention activities. Consequently, the epidemiology of T. solium in Colombia is unlikely to have changed substantively during the past decade since these data were generated. Thus, the dataset remains the most comprehensive and relevant country-wide cross-sectional 'snapshot' of T. solium epidemiology anywhere across the globe and a unique information resource. Here, we analyse this dataset using a contemporary geostatistical approach to understand the spatial distribution of T. solium cysticercus seropositivity in Colombia, as well as individual and household risk factors associated with exposure to the parasite. This work extends the original analysis of these data [14] by integrating the effects of individual covariates and spatial clustering at multiple hierarchical levels within a single statistical framework. We present maps of the spatial distribution of T. solium cysticercus seropositivity in Colombia, estimates of spatial correlation and demographic, socioeconomic, behavioural and other risk factors associated with exposure to this zoonotic NTD.

Methods

The data were collected by the Colombian National Health Institute (Instituto Nacional de Salud) between 2008 and 2010 with the aim of estimating T. solium human cysticercosis antibody seroprevalence and associated risk factors.
Details of the original data collection can be found in [14]. Briefly, individuals aged from 2 to 64 years, from 23 departments and Bogotá district, living in 133 municipalities with > 5000 inhabitants and a health centre, were eligible for inclusion. The small proportion of total municipalities sampled (133/1122) was due to logistical and financial constraints. A three-stage cluster random sampling approach was used, covering 23 out of Colombia's 32 departments (first administrative level unit) and Bogotá district (Additional file 1: Figure S1). The municipality constituted the primary sample unit (PSU) and was stratified according to level of urbanization, rural and urban population composition and the Unsatisfied Basic Needs Index (Índice de Necesidades Básicas Insatisfechas) [15]. Within each stratum, the secondary sample unit (SSU) was defined as a neighbourhood (urban) or village (rural) with > 10 households and selected by random sampling. Finally, 10 households in each SSU were randomly selected, and one person belonging to each household (between the age of 2 and 64 years) was selected at random from those present at the interview. Following informed consent, finger-prick blood samples were obtained from 29,360 participants, and each sample was assessed for the presence of circulating T. solium cysticercus antibodies at the National Health Institute Reference Laboratory (Laboratorio de Parasitología del Instituto Nacional de Salud) by enzyme-linked immunosorbent assay (ELISA), with a reported sensitivity of 100% and specificity of 97.5% [16]. Participants also completed a questionnaire on sociodemographic information, hygiene habits, health conditions, food consumption habits, living conditions and animal ownership and management. The questionnaire was developed by the research team in Colombia, with input from experts on cysticercosis. It was first tested in a pilot survey carried out in 216 homes in the municipality of Caqueza (Department of Cundinamarca), from 28 August to 2 September 2008, and adjusted accordingly. Teams in the field were trained on the use of the questionnaire before it was applied to the whole sample. Details on the cleaning and coding of this dataset can be found in Additional file 1: Text S1.

Model-building and analysis of residual spatial correlation

Before performing the geospatial analysis, an initial exploratory analysis was undertaken (using R version 4.0.5 [17]). Given the clustered nature of the data, a hierarchical univariate mixed-effects logistic regression model was fitted to test the association between each explanatory variable (covariate) and human seropositivity to T. solium cysticerci, with each model including two independent random effects terms to capture correlation at the municipality and neighbourhood/village (depending on urban or rural location) levels. Explanatory variables with a P-value ≤ 0.25 (a conservative cut-off to avoid missing potentially important variables), derived from a likelihood ratio test, were retained in the subsequent hierarchical multivariable mixed-effects logistic regression model.
The generic structure of all models is given by:

$$\begin{aligned} \mathbf{Y} &\sim \mathrm{Bernoulli}(\boldsymbol{\mu}), \\ \mathrm{logit}(\boldsymbol{\mu}) &= \boldsymbol{\beta}\mathbf{X} + \mathbf{Z} + \mathbf{U}, \\ \mathbf{Z} &\sim N(\mathbf{0},\, \tau), \\ \mathbf{U} &\sim N(\mathbf{0},\, \sigma), \end{aligned}$$

where \(\mathbf{Y}\) is a binary vector of observations indicating whether an individual tested positive for T. solium cysticercus antibodies, assuming a Bernoulli distribution; \(\boldsymbol{\mu}\) is a vector of probabilities of testing positive; \(\boldsymbol{\beta}\) is a vector of regression coefficients, and \(\mathbf{X}\) is the design matrix of explanatory variables; \(\mathbf{U}\) and \(\mathbf{Z}\) are vectors of independent and normally distributed random effects terms associated with municipalities and neighbourhoods/villages, respectively; and \(\sigma\) and \(\tau\) are the standard deviations of the respective random effects terms (indicative of the degree of variability at each hierarchical level). From the final fitted models, adjusted odds ratios (ORs), 95% confidence intervals (95% CIs) and P-values were obtained for each risk factor. All notations/parameters are summarised in Additional file 1: Table S1. A sub-analysis on risk factors in those individuals owning pigs (n = 3154) was also conducted (methodological details are given in Additional file 1: Text S1). Following fitting of the multivariable mixed-effects model, a variogram analysis was performed to assess the presence of residual spatial correlation [17]. Since the geographical coordinates were available only for the municipalities and not for the neighbourhoods/villages, the empirical variogram was computed only on \(\widehat{\mathbf{U}}\), the estimated random effects at the municipality level. A Monte Carlo test for the null hypothesis of spatial independence was performed based on 10,000 random permutations of \(\widehat{\mathbf{U}}\) amongst the sampled municipalities. The variograms computed on the permuted random effects represent the sampling distribution of the estimated variogram in the absence of spatial correlation. If the empirical variogram ordinates fall outside of the 95% CI obtained from the Monte Carlo test, then there is some evidence of spatial correlation at the municipality level.

Incorporating spatial structure

In the presence of spatial correlation, the independent random effects at the municipality level, \(\mathbf{U}\), were replaced with a set of spatially structured random effects, \(\mathbf{S}(\mathbf{x})\), where \(\mathbf{x}\) is a vector of the centroids of the sampled municipalities. \(\mathbf{S}(\mathbf{x})\) is a spatial Gaussian process with variance \(\sigma^{2}\) and correlation function \(\rho(u) = \exp\left(-u/\varphi\right)\), where \(u\) is the distance between a pair of municipality centroids and \(\varphi\) is a parameter that controls the rate at which the spatial correlation decays with increasing distance. Conditional on these spatially structured random effects, the observations can still be considered as independent Bernoulli random variables [18]. The spatially structured model was fitted using the integrated nested Laplace approximation (INLA) and stochastic partial differential equation (SPDE) approaches [19, 20], which implement approximate Bayesian inference in a computationally less intensive manner than alternative Markov chain Monte Carlo (MCMC) approaches.
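To make these model-building steps concrete, the following is a minimal sketch in R (the language reported for the analysis) of how the two-level mixed-effects logistic regression and the subsequent Monte Carlo variogram check might be implemented. The lme4 package is one standard choice, though the paper does not specify its exact fitting routine, and the data frame serosurvey, its column names and the coords matrix of projected municipality centroids are hypothetical placeholders rather than the study's actual variable names; the covariates shown are only a subset.

```r
# Minimal sketch, assuming a data frame 'serosurvey' with one row per
# participant and a 133 x 2 matrix 'coords' of municipality centroids (in km).
library(lme4)

# Hierarchical logistic regression:
# logit(mu) = beta*X + Z (neighbourhood/village) + U (municipality)
fit <- glmer(
  seropositive ~ age_group + sex + education + ses + water_source + owns_dogs +
    (1 | municipality) + (1 | neighbourhood),
  data = serosurvey, family = binomial(link = "logit")
)

# Estimated municipality-level random effects, U-hat
u_hat <- ranef(fit)$municipality[, "(Intercept)"]

# Empirical semivariogram of U-hat, binned by distance between centroids
semivariogram <- function(u, coords, breaks) {
  d    <- as.matrix(dist(coords))                      # pairwise distances (km)
  g    <- 0.5 * outer(u, u, function(a, b) (a - b)^2)  # semivariance terms
  bins <- cut(d[upper.tri(d)], breaks = breaks)
  tapply(g[upper.tri(g)], bins, mean, na.rm = TRUE)
}

breaks <- seq(0, 500, by = 25)
v_obs  <- semivariogram(u_hat, coords, breaks)

# Null distribution: permute U-hat across municipalities 10,000 times
v_perm <- replicate(10000, semivariogram(sample(u_hat), coords, breaks))
ci     <- apply(v_perm, 1, quantile, probs = c(0.025, 0.975), na.rm = TRUE)

# Ordinates of v_obs falling outside 'ci' suggest residual spatial correlation
```

Packages such as geoR provide ready-made variogram functions, but the hand-rolled version above makes the permutation logic of the Monte Carlo test explicit.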
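Similarly, the spatially structured variant could be fitted along the following lines with the R-INLA package using the SPDE approach. The mesh settings, prior values and all variable names below are illustrative assumptions, not the study's exact configuration; the priors actually used are described next.

```r
# Sketch of the SPDE-based geostatistical fit with R-INLA (www.r-inla.org);
# all names and numerical settings are illustrative assumptions.
library(INLA)

# Triangulated mesh over the municipality centroids (coordinates in km)
mesh <- inla.mesh.2d(loc = coords, max.edge = c(50, 200), cutoff = 10)

# Matern SPDE model with penalised complexity priors, e.g.
# P(practical range < 100 km) = 0.5 and P(sigma > 1) = 0.01
spde <- inla.spde2.pcmatern(mesh,
                            prior.range = c(100, 0.5),
                            prior.sigma = c(1, 0.01))

# Projector matrix mapping mesh nodes to each participant's municipality
A <- inla.spde.make.A(mesh, loc = coords[serosurvey$municipality_id, ])

stk <- inla.stack(
  data    = list(y = serosurvey$seropositive),
  A       = list(A, 1),
  effects = list(spatial = 1:spde$n.spde,
                 data.frame(intercept     = 1,
                            age_group     = serosurvey$age_group,
                            sex           = serosurvey$sex,
                            neighbourhood = serosurvey$neighbourhood))
)

# Independent neighbourhood effect (iid) plus spatially structured effect (SPDE)
form <- y ~ 0 + intercept + age_group + sex +
  f(neighbourhood, model = "iid") + f(spatial, model = spde)

fit_spde <- inla(form, family = "binomial",
                 data = inla.stack.data(stk),
                 control.predictor = list(A = inla.stack.A(stk)),
                 control.fixed = list(mean = 0, prec = 0.001))

# Exponentiating the fixed-effect posterior quantiles gives adjusted ORs/CrIs
round(exp(fit_spde$summary.fixed[, c("mean", "0.025quant", "0.975quant")]), 2)
```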
A flat Gaussian prior with mean and precision equal to zero was assigned to the model intercept term; other fixed effects were assigned independent vague Gaussian priors with mean zero and precision equal to 0.001. For the precision of the independent neighbourhood/village random effects, \(1/\tau\), a vague Gamma prior was used, and for the parameters \(\sigma^{2}\) and \(\varphi\) of the spatially structured random effects, we adopted penalised complexity priors [21]. Adjusted ORs and 95% credible intervals (95% CrIs) were obtained for each risk factor from the final fitted model.

Results

Study population and seroprevalence distribution

Of the 29,360 observations, 29,253 (99.6%) were kept for analysis, with 107 removed due to missing covariate values. Participants were mostly located in urban areas (77.9%), mostly aged 21–50 years (64.4%) and mostly women (68.5%); the main occupational activity was housewife/houseman (44.5%). Socioeconomic stratum 1 (the lowest of 4 socioeconomic strata, excluding displaced people) was the most frequently represented socioeconomic stratum (49.5%), and participants most frequently had a partial or complete secondary school educational level (45.8%) (Table 1). The mean seroprevalence of T. solium cysticercus antibodies was 9.6%, ranging from 0.5% in the Department of Caldas to 38.7% in the Department of Vaupés (Additional file 1: Table S2). Municipalities with the highest seroprevalence were located in the north and south of Colombia (Fig. 1), while municipalities with lower seroprevalence were concentrated in the central part of the country.

Table 1 Total number of respondents for each covariate level and total number positive for circulating Taenia solium cysticercus antibodies

Fig. 1 Seroprevalence of cysticercosis in Colombia, 2008–2010. Seroprevalence of Taenia solium cysticercus antibodies in 133 municipalities in Colombia. Departments are outlined in pale grey lines and sampled municipalities are shown in solid colours

Risk factors for human seropositivity without spatial structure

From the univariate mixed-effects logistic regression model with two random effects, food consumption in streets, washing hands after toilet usage and owning animals other than dogs and pigs (cattle, cats, birds) were excluded from further (multivariate) analysis, having a P-value > 0.25 (Additional file 1: Table S3). Consequently, 16 explanatory variables were included in the multivariable mixed-effects logistic regression with two random effects. Increasing age (as age categories), being female, owning dogs and using rainwater as a water source were significantly associated with increased odds of being seropositive for T. solium cysticercus antibodies; increasing education level, socioeconomic status and consuming partially cooked/raw pork meat once per week were significantly associated with decreased odds of being seropositive (Additional file 1: Table S4). Risk factor analysis results from the sub-analysis of those owning pigs (n = 3154) are reported in Additional file 1: Text S2 and Additional file 1: Table S5.

Geographical variation in random effects and spatial correlation

Figure 2 shows a map of the residual variation in the seroprevalence of T. solium cysticercus antibodies at the municipality level that is unexplained by the covariates in the non-spatial mixed-effects model. Figure 3 shows a variogram analysis carried out on the municipalities' estimated random effects.
The empirical variogram falls partially outside of the 95% confidence bands, suggesting the presence of spatial correlation in seroprevalence at the municipality level (unexplained by the covariates) up to approximately 120–140 km; beyond this distance, the variation between two spatial points starts to plateau. This estimate was determined more precisely by the fitted geostatistical model (see below) as 139 km.

Fig. 2 Residual variation in Taenia solium cysticercus seroprevalence at the municipality level across Colombia. The map represents the residual variation in cysticercus seroprevalence at the municipality level that is not explained by the covariates in the non-spatial mixed-effects model

Fig. 3 Estimated variogram for the mixed-effects model residuals at the municipality level (blue dots) across Colombia, including 95% confidence intervals obtained from a permutation test under the null hypothesis that there is no spatial correlation (blue-shaded area). The blue dots fall outside of the confidence bands up to approximately 120–140 km of separation, indicating spatial correlation up to this distance (confirmed by the geostatistical model)

Geostatistical model

The geostatistical model estimated a strong spatial correlation at the municipality level of up to 139 km. The ORs and 95% CrIs associated with each covariate included in the final multivariable model (which accounts for spatial correlation at the municipality level) are given in Table 2. Notably, the odds of testing positive for T. solium cysticercus antibodies were 1.29-fold (95% CrI = 1.15–1.46) greater for females than for males, and the odds of testing positive generally increased with age. For example, adults aged between 21 and 60 years were approximately twofold more likely to test positive than children in the age range 2–10 years. Lower educational levels were significantly associated with increased odds of seropositivity, with the highest estimated odds associated with no formal education. Displaced people had 2.20-fold (95% CrI = 1.15–4.28) higher odds of being seropositive than people in the highest socioeconomic stratum; there was no significant difference among other socioeconomic strata. The use of rainwater as a water source was associated with 1.6-fold (95% CrI = 1.21–2.13) higher odds of being positive compared to the use of a well or cistern, and dog owners had significantly increased odds of testing positive (OR = 1.19, 95% CrI = 1.08–1.31) compared with non-owners. Consumption of partially cooked/raw pork meat once per week was associated with significantly decreased odds of testing positive (OR = 0.59, 95% CrI = 0.36–0.90) compared to no consumption. Place of residence, occupation, frequency of washing vegetables, excreta elimination and owning animals other than dogs (including pigs) were not significantly associated with testing positive for T. solium cysticercus antibodies.

Table 2 Geostatistical multivariable logistic regression model results: odds of testing positive for Taenia solium cysticercus antibodies

Discussion

The 2008–2010 Colombian cysticercosis serosurvey generated unique and unprecedented information on exposure to T. solium cysticercosis at a national scale. The work presented here extends the original analysis of these data [14] by using contemporary geostatistical techniques to evaluate individual-level risk factors associated with seropositivity to T. solium cysticerci and, simultaneously, spatial clustering at a sub-national (municipality) scale.
The results contribute important information on factors associated with exposure to T. solium cysticerci. They also indicate that similar large-scale epidemiological surveys will be needed if hyperendemic foci of transmission are to be identified and targeted for intensified interventions in 17 endemic countries, as per the WHO's 2021–2030 NTD roadmap targets for taeniasis/cysticercosis [22]. Here, and in the original analysis of these data [14], women were more likely than men to be positive for T. solium cysticercus antibodies. This finding is consistent with the results of numerous other studies undertaken in Latin America [2, 9, 23,24,25,26,27]; by contrast, in other endemic regions, such as sub-Saharan Africa, being male is associated with an increased risk of exposure [28] and of antigen positivity [29, 30]. The mechanisms underlying these epidemiological patterns remain unclear. Different household roles associated with handling household-owned animals, food and water may be important, although many variables pertaining to these activities were accounted for in this analysis. Notwithstanding the underlying cause, women could be an important target for educational campaigns in Colombia, not just because of their apparent increased risk of exposure, but also because they are often responsible for the majority of food handling and preparation activities, which would be all the more important if they were also tapeworm carriers. The trend for increasing seropositivity with age is unsurprising given that T. solium cysticercus antibodies probably persist for several years. Seropositivity may thus be considered an indicator of lifetime prior exposure. Praet et al. [31] explored age-dependent dynamics of T. solium cysticercus antibody positivity in more depth by fitting mathematical models to similar age–seroprevalence data collected in Ecuador. Their results suggested that higher antibody seroreversion rates occur following first exposure (representing the primary humoral response), followed by a lower seroreversion rate after the boosting effect of subsequent exposures (representing the secondary humoral response), causing saturation in antibody seroprevalence with age. Hence, where transmission is relatively intense and repeated exposures are common, one might expect to see similar saturating age–seroprevalence profiles. By contrast, in lower-transmission settings, the effect of seroreversion following first exposure, and the less frequent boosting effect of subsequent exposures, may be more evident in seroprevalence profiles, possibly resulting in a decline in seropositivity in older age groups. Exposure to T. solium is known to be greater for individuals with lower educational levels, those from lower socioeconomic strata [6, 32] and those facing social marginalisation [9, 33,34,35]. Our findings are consistent with these previous reports, with the odds of displaced people testing positive being almost twofold higher than those of people in the highest socioeconomic stratum. Internal displacement in Colombia is a major issue that often involves the poorest and most disadvantaged people [36], and, if the control of T. solium is to become comprehensive, displaced people may require enhanced interventions. Health education could be one such option for control in specific populations using tools such as "The Vicious Worm" [37], as there is some evidence that health education campaigns specific to T. solium can impact transmission [38].
It is, however, likely that achieving substantial, sustained reductions in the prevalence of T. solium, or its elimination, particularly in highly endemic areas, will require a One Health approach targeting the whole T. solium system, including infections in pigs, humans and the environment [39, 40], as recently shown by intervention trials in Peru and Zambia [41, 42]. The only variable related to food and water sources or hygiene practices that was significantly associated with seropositivity to T. solium cysticercus antibodies was the use of rainwater. Individuals in households using rainwater, as opposed to water stored in wells or cisterns, had 1.6-fold higher odds of seropositivity. Waterborne cysticercosis transmission is supported in the literature, given that the eggs can survive in fresh, brackish and salt waters [32, 43,44,45] and can contaminate vegetables [45]. Other variables, such as open-field defecation or the use of unsanitary latrines [46, 47], that one might also expect to be associated with exposure to T. solium were not identified in our analysis as significant risk factors. We also found that the odds of seropositivity significantly decreased when individuals consumed partially cooked/raw pork meat once per week, an observation possibly confounded by wealth (i.e., wealthier individuals consuming more meat). One might expect that consumption of partially cooked/raw pork meat would be associated with increased odds of seropositivity, given that taeniasis (adult tapeworm) carriers are at risk of autoinfection. However, more research is needed to understand the relative contribution of this route of transmission to overall cysticercosis risk [48]. A particularly striking finding of our analysis was the association between owning dogs and significantly increased odds of test positivity. Dogs in Asia have been reported to test positive for T. solium antibodies [49, 50], potentially implicating them as alternative intermediate hosts. Transmission to humans has also been suggested to occur via the consumption of raw or undercooked canine meat [51], although this practice is thought to be extremely rare and not widely reported in Latin America. Moreover, the role of dogs as potential hosts for T. solium remains somewhat speculative. Given the coprophagic habits of dogs and their close interaction with humans, it is also possible (and perhaps more likely) that dogs act as mechanical vectors of T. solium eggs. A further striking finding is that among the 10.8% (n = 3154) of individuals owning pigs, we did not find significantly increased odds of seropositivity, only a non-significant increase among those owning fewer than 10 pigs (possibly indicative of smallholder, subsistence farmers) compared to individuals owning > 10 pigs (who may represent wealthier farmers). A further sub-analysis of pig owners (Additional file 1: Text S2) found no association between seropositivity and pig management practices (e.g. free roaming, feeding wastes, drinking free water, among others). These findings contrast with those reported in other studies in Latin America and other geographical settings, in which human cysticercosis has been associated with owning pigs [2, 33, 52]. Some farming practices, such as using waste or a water-and-concentrate mix as feed, and the lack of drainage systems, were non-significantly associated with increased seropositivity.
However, because this sub-analysis was based on a much smaller sample (n = 3154) with only 388 seropositive individuals, there was limited power to detect significant associations. In addition to exploring individual and household risk factors associated with exposure to T. solium, our geostatistical approach enabled the identification of spatial clusters where seropositivity was higher (so-called hotspots, in the north and south of Colombia) or lower (in the central and western areas of the country) than could be explained by the included covariates (Fig. 2). Hotspots where seropositivity was higher than could be explained by the covariates coincided with areas with higher seroprevalence (16‒40%) in the northern coastal area and areas bordering Venezuela (Departments of Atlántico, Magdalena, Cesar, La Guajira), in the northern-central region (Departments of Antioquia and Bolívar), in Vaupés (south-east, bordering Brazil) and in the south, in regions bordering Peru and Brazil (Department of Amazonas; Fig. 1). Neither human nor pig population density was explicitly included in the model and, therefore, these variables could help to explain some of this clustering (because of the potential for increased contamination of the environment with T. solium eggs where humans and pigs are abundant). While population densities are heterogeneous across Colombia, some of the highest human population densities are generally found in the north and north-east of the country [53], alongside the highest pig population in the Pacific (west coastal), Andean (north-east/north-west) and Caribbean regions (north), as estimated from the Gridded Livestock Database in 2007 [54]. Furthermore, it should be noted that, given the level of spatial analysis, we were only able to detect spatial variation at the municipality level. Local climatic, environmental and ecological conditions may also play a role in the observed clustering. In a recent systematic review, Jansen et al. [45] identified that Taenia spp. eggs can survive in the environment for up to 1 year in favourable conditions of high humidity, moderate temperatures (5‒25 °C) and presence of surface water. Moreover, invertebrates, including dung beetles (Ammophorus rubripes), can also act as mechanical vectors for the dispersal of Taenia spp. eggs [55, 56]. Hence, it is highly likely that local conditions, unaccounted for in our statistical model, will influence spatial patterns of exposure. Although the serosurvey data analysed here are unique in presenting a picture of exposure to T. solium cysticercosis at a national scale, geographical coverage is incomplete and the sampling approach may have introduced some biases. In particular, the selection of municipalities with > 5000 individuals and a health centre is likely to have created a bias towards sampling in more densely populated urban areas. This led to an underrepresentation of rural communities, which may typically have had less access to health care and possibly lower overall health. In addition, nine departments were excluded from sampling (due to logistical and resource constraints) and, overall, only a relatively small fraction (12%) of Colombia's municipalities (133/1122) was sampled. Women are overrepresented, likely reflecting the decision to randomize only individuals present at the interview for inclusion in the study. Also, the data were collected in 2008–2010, over a decade ago, and may therefore not precisely reflect contemporary epidemiological conditions.
Nonetheless, we believe that, in the absence of widespread national control efforts, the distribution and endemic situation of T. solium are unlikely to have changed substantively over the past decade and, therefore, the data provide a useful snapshot of endemic conditions across the country. Due to the nature of surveys, other forms of bias and reverse causation are also possible. Moreover, it cannot be excluded that some of the observed associations are confounded by unmeasured or unknown risk factors, or that the a priori decision to drop certain variables increased the model residuals by excluding possible confounders. On the other hand, the unstructured nature of some variables, and their probable collinearity with other exposures, made this choice desirable. Despite the lack of data concerning some geographical areas in Colombia, the authors still consider the study outcomes valuable and indicative of the situation of cysticercosis in the country. In addition, the information provided in the current study could be further used to build models that can spatially predict the disease seroprevalence in non-sampled areas [17], offering a cost-effective tool for decision-makers in places where direct sampling did not take place. Mapping the distribution and seroprevalence of T. solium in endemic countries is a crucial next step in realising the WHO's goals of implementing intensified control in hyperendemic areas of 17 countries by 2030 [22]. Currently, country-wide data on the transmission of T. solium, such as those analysed here for Colombia, are scarce, and thus there is a great deal of work to be done to identify hyperendemic areas in which to implement intensified interventions. Moreover, although working definitions of 'hyperendemicity' have been proposed [57], there is not yet a consensus on the definition of endemicity levels for T. solium infection. Geostatistical approaches will play an important role in identifying areas of high transmission, particularly if they can be parameterized to identify likely high-transmission areas using Geographical Information System (GIS) data that have comprehensive global coverage. Although our study focused on the identification of risk factors associated with exposure to T. solium and residual degrees of spatial clustering, similar geostatistical and machine learning approaches can be used that focus on predicting the spatial distribution of disease using GIS data [17]. Such approaches, conducted at national and global scales, will be crucial in assisting progress towards the WHO's 2030 goals [22, 58]. Taeniasis/cysticercosis is a major public health problem and an important cause of epilepsy and other neurological sequelae in many regions of the world. The WHO aims to target this zoonotic NTD with enhanced control where transmission is most intense, although epidemiological data at national and subnational scales remain scarce. The 2008–2010 baseline epidemiological survey undertaken by the Colombian government remains unprecedented in scale and geographical coverage, generating data that are unique and provide a highly valuable resource for understanding the spatial epidemiology of T. solium cysticercosis. By taking a contemporary geostatistical approach, we have highlighted key associations between human cysticercosis antibody seropositivity and individual- and household-level risk factors, while also identifying spatial hotspots of exposure, unexplained by the measured covariates.
These findings could be used to inform the design of intervention strategies in Colombia, such as targeting spatial hotspots and more highly exposed groups (such as displaced people and women), and also to illustrate how important geostatistical modelling will be as a tool to inform and support the WHO NTD roadmap in its 2021–2030 goals for taeniasis/cysticercosis. The data that support the findings of this study are available from the Instituto Nacional de Salud (Bogotá, Colombia) but restrictions apply to the availability of these data, which were used under licence for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of Instituto Nacional de Salud. World Health Organisation. Taenia solium Taeniasis/cysticercosis diagnostic tools. 2016. Report of a stakeholder meeting, Geneva, 17–18 December 2015. https://apps.who.int/iris/bitstream/handle/10665/206543/9789241510516_eng.pdf. Accessed 24 Oct 2021. García HH, Gonzalez AE, Evans CAW, Gilman RH. Taenia solium cysticercosis. Lancet. 2003;16:547–56. U.S. Centers for Disease Control and Prevention. Parasite—cysticercosis: biology. https://www.cdc.gov/parasites/cysticercosis/biology.html. Accessed 25 Oct 2021. Trevisan C, Mkupasi EM, Ngowi HA, Forkman B, Johansen MV. Severe seizures in pigs naturally infected with Taenia solium in Tanzania. Vet Parasitol. 2016;220:67–71. Garcia HH, Del Brutto OH. Taenia solium cysticercosis. Infect Dis Clin North Am. 2000;14(1):97–119. Enander RT, Ramírez Amaya A, Enander RA, Gute DM. Neurocysticercosis: risk and primary prevention strategies update. Int J Environ Health Res. 2010;20(5):329–65. Gupta DS, Goyal AK, Tandon PN, Jurel SK, Srivastava S, Dangi UR, et al. Platyhelminthes in tongue—a rare case and review. J Oral Maxillofac Surg. 2012;70(11):2605–9. Garcia HH, Rodriguez S, Friedland JS. Immunology of Taenia solium taeniasis and human cysticercosis. Parasite Immunol. 2014;36(8):388–96. Coral-Almeida M, Gabriël S, Abatih EN, Praet N, Benitez W, Dorny P. Taenia solium human cysticercosis: a systematic review of sero-epidemological data from endemic zones around the world. PLoS Negl Trop Dis. 2015;9(7):e0003919. Ndimubanzi PC, Carabin H, Budke CM, Nguyen H, Qian YJ, Rainwater E, et al. A systematic review of the frequency of neurocyticercosis with a focus on people with epilepsy. PLoS Negl Trop Dis. 2010;2(4):e870. Nkwengulila G. The financial costs associated with porcine cysticercosis and epilepsy in Iringa Rural District. Health. 2014;6(21):2959–65. https://doi.org/10.4236/health.2014.621334. Organización Panamericana de la Salud/Organización Mundial de la Salud Oficina Regional para las Américas. Informe Primera Reunión Regional sobre Control de Taenia solium en América Latina. Colombia, October 2015. https://www.paho.org/hq/dmdocuments/2016/primera-reunion-regional-control-tena-solium-americas-2015.pdf. Accessed 25 Oct 2021. Bruno E, Bartoloni A, Zammarchi L, Strohmeyer M, Bartalesi F, Bustos JA, et al. Epilepsy and neurocysticercosis in Latin America: a systematic review and meta-analysis. PLoS Negl Trop Dis. 2013;7(10):e2480. Flórez Sánchez AC, Pastrán SM, Vargas NS, Beltrán M, Enriquez Y, et al. Cysticercosis in Colombia seroprevalence study 2008–2010. Acta Neurol Colomb. 2013;29(2):73–86. National Administrative Department of Statistics (DANE). Información técnica. Censo Nacional de Población y Vivienda 2018. 2020. 
https://www.dane.gov.co/index.php/estadisticas-por-tema/demografia-y-poblacion/censo-nacional-de-poblacion-y-vivenda-2018/informacion-tecnica. Accessed 25 Oct 2021. Corredor A, López MC, Duque S, Nicholls RS. Estandarización y evaluación de ELISA en eluidos de sangre seca recolectada en papel de filtro para el diagnóstico de cisticercocis. Biomedica. 1996;16(2):131–3. Giorgi E, Fronterrè C, Macharia PM, Alegana VA, Snow RW, Diggle PJ. Model building and assessment of the impact of covariates for disease prevalence mapping in low-resource settings: to explain and to predict. J R Soc Interface. 2021;18(179):20210104. Diggle PJ, Tawn JA, Moyeed RA. Model-based geostatistics. J R Stat Soc Ser C Appl Stat. 1998;47(3):299–350. Rue H, Martino S, Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc Ser B Stat Methodol. 2009;71(2):319–92. Lindgren F, Rue H, Lindström J. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. J R Stat Soc Ser B Stat Methodol. 2011;73(4):423–98. Fuglstad GA, Simpson D, Lindgren F, Rue H. Constructing priors that penalize the complexity of Gaussian random fields. J Am Stat Assoc. 2019;114(525):445–52. World Health Organization. Ending the neglect to attain the sustainable development goals: a road map for neglected tropical diseases 2021–2030. https://www.who.int/publications/i/item/9789240010352. Accessed 25 Oct 2021. Allan JC, Velasquez-Tohom M, Garcia-Noval J, Torres-Alvarez R, Yurrita P, Fletes C, et al. Epidemiology of intestinal taeniasis in four, rural Guatemalan communities. Trop Med Parasitol. 1996;90(2):157–65. Garcia-Noval J, Allan JC, Fletes C, Moreno E, De Mata F, Torres-Alvarez R, et al. Epidemiology of Taenia solium taeniasis and cysticercosis in two rural Guatemalan communities. Am J Trop Med Hyg. 1996;55(3):282–9. Garcia HH, Araoz R, Gilman RH, Valdez J, Gonzalez AE, Gavidia C, et al. Increased prevalence of cysticercosis and taeniasis among professional fried pork vendors and the general population of a village in the Peruvian highlands. Am J Trop Med Hyg. 1998;59(6):902–5. Agudelo-Flórez P, Restrepo BN, Palacio LG. Knowledge and practices concerning taeniasis-cysticercosis in Colombian pig-breeders. Rev Salud Pública. 2009;11(2):191–9 (in Spanish). Wardrop NA, Thomas LF, Atkinson PM, de Glanville WA, Cook EAJ, Wamae CN, et al. The influence of socio-economic, behavioural and environmental factors on Taenia spp transmission in Western Kenya: Evidence from a cross-sectional survey in humans and pigs. PLoS Negl Trop Dis. 2015;10(1):e0004394. Mwanjali G, Kihamia C, Kakoko DV, Lekule F, Ngowi H, Johansen MV, et al. Prevalence and risk factors associated with human Taenia solium infections in Mbozi District, Mbeya Region, Tanzania. PLoS Negl Trop Dis. 2013;7(3):e2102. Kanobana K, Praet N, Kabwe C, Dorny P, Lukanu P, Madinga J, et al. High prevalence of Taenia solium cysticerosis in a village community of Bas-Congo, Democratic Republic of Congo. Int J Parasitol. 2011;41(10):1015–8. Carabin H, Millogo A, Cissé A, Gabriël S, Sahlu I, Dorny P, et al. Prevalence of and factors associated with human cysticercosis in 60 villages in three provinces of Burkina Faso. PLoS Negl Trop Dis. 2015;9(11):e0004248. Praet N, Speybroeck N, Rodriguez-Hidalgo R, Benitez-Ortiz W, Berkvens D, Brandt J, et al. Age-related infection and transmission patterns of human cysticercosis. Int J Parasitol. 2010;40(1):85–90. 
Nithiuthai S, Anantaphruti MT, Waikagul J, Gajadhar A. Waterborne zoonotic helminthiases. Vet Parasitol. 2004;126(1–2):167–93. Sánchez AL, Medina MT, Ljungström I. Prevalence of taeniasis and cysticercosis in a population of urban residence in Honduras. Acta Trop. 1998;69(2):141–9. Moro PL, Lopera L, Bonifacio N, Gilman RH, Silva B, Verastegui M, et al. Taenia solium infection in a rural community in the Peruvian Andes. Ann Trop Med Parasitol. 2003;97(4):373–9. Rodríguez-Morales AJ, Yepes-Echeverri MC, Acevedo-Mendoza WF, Marín-Rincón HA, Culquichicón C, Parra-Valencia E, et al. Mapping the residual incidence of taeniasis and cysticercosis in Colombia, 2009–2013, using geographical information systems: Implications for public health and travel medicine. Travel Med Infect Dis. 2018;22:51–7. Internal Displacement Monitoring Centre (IDMC). Colombia. http://www.internal-displacement.org/countries/colombia. Accessed 25 Oct 2021. Johansen MV, Trevisan C, Braae UC, Magnussen P, Ertel RL, Mejer H, et al. The vicious worm: a computer-based Taenia solium education tool. Trends Parasitol. 2014;30(8):372–4. Carabin H, Millogo A, Ngowi HA, Bauer C, Dermauw V, Koné AC, et al. Effectiveness of a community-based educational programme in reducing the cumulative incidence and prevalence of human Taenia solium cysticercosis in Burkina Faso in 2011–14 (EFECAB): a cluster-randomised controlled trial. Lancet Glob Health. 2018;6(4):e411–25. de Coster T, Van Damme I, Baauw J, Gabriël S. Recent advancements in the control of Taenia solium: a systematic review. Food Waterborne Parasitol. 2018;13:e00030. CystiTeam Group for Epidemiology and Modelling of Taenia solium Taeniasis/Cysticercosis. The World Health Organization 2030 goals for Taenia solium: Insights and perspectives from transmission dynamics modelling. Gates Open Res. 2019;3:1546. O'Neal SE, Pray IW, Vilchez P, Gamboa R, Muro C, Moyano LM, et al. Geographically targeted interventions versus mass drug administration to control Taenia solium cysticercosis. Peru Emerg Infect Dis. 2021;27(9):2389–98. Gabriël S, Mwape KE, Hobbs EC, Devleesschauwer B, Van Damme I, Zulu G, et al. Evidence for potential elimination of active Taenia solium transmission in Africa? N Engl J Med. 2020;383(4):396–7. Scandrett WB, Gajadhar AA. Recovery of putative taeniid eggs from silt in water associated with an outbreak of bovine cysticercosis. Can Vet J. 2004;45(9):758–60. Alidadi S, Oryan A. Water as a potential transmission route of infection with tapeworms. Air Water Borne Dis. 2015;4(2):e135. Jansen F, Dorny P, Gabriël S, Dermauw V, Johansen MV, Trevisan C. The survival and dispersal of Taenia eggs in the environment: what are the implications for transmission? A systematic review. Parasit Vectors. 2021;14(1):1–16. Humphries DL, Stephenson LS, Pearce EJ, The PH, Dan HT, Khanh LT. The use of human faeces for fertilizer is associated with increased intensity of hookworm infection in Vietnamese women. Trans R Soc Trop Med Hyg. 1997;91(5):518–20. Corrales LF, Izurieta R, Moe CL. Association between intestinal parasitic infections and type of sanitation system in rural El Salvador. Trop Med Int Health. 2006;11(12):1821–31. Skrip LA, Dermauw V, Dorny P, Ganaba R, Millogo A, Tarnagda Z, et al. Data-driven analyses of behavioral strategies to eliminate cysticercosis in sub-Saharan Africa. PLoS Negl Trop Dis. 2021;15(3):e0009234. Ito A, Putra MI, Subahar R, Sato MO, Okamoto M, Sako Y, et al. 
Dogs as alternative intermediate hosts of Taenia solium in Papua (Irian Jaya), Indonesia confirmed by highly specific ELISA and immunoblot using native and recombinant antigens and mitochondrial DNA analysis. J Helminthol. 2002;76(4):311–4. Ito A, Wandra T, Yamasaki H, Nakao M, Sako Y, Nakaya K, et al. Cysticercosis/taeniasis in Asia and the Pacific. Vector-Borne Zoonotic Dis. 2004;4(2):95–107. Wandra T, Swastika K, Dharmawan NS, Purba IE, Sudarmaja IM, Yoshida T, et al. The present situation and towards the prevention and control of neurocysticercosis on the tropical island, Bali, Indonesia. Parasit Vectors. 2015;8:148. Dermauw V, Carabin H, Ganaba R, Cissé A, Tarnagda Z, Gabriël S, et al. Factors associated with the 18-month cumulative incidence of seroconversion of active infection with Taenia solium cysticercosis: a cohort study among residents of 60 villages in Burkina Faso. Am J Trop Med Hyg. 2018;99(4):1018–27. Socioeconomic Data and Applications Center (SEDAC). Maps: population density grid, v3. http://sedac.ciesin.columbia.edu/data/set/gpw-v3-population-density/maps/2. Accessed 25 Oct 2021. Robinson TP, William Wint GR, Conchedda G, Van Boeckel TP, Ercoli V, Palamara E, et al. Mapping the global distribution of livestock. PLoS ONE. 2014;9(5):e96084. Gomez-Puerta LA, Garcia HH, Gonzalez AE. Experimental porcine cysticercosis using infected beetles with Taenia solium eggs. Acta Trop. 2018;183(1):92–4. Vargas-Calla A, Gomez-Puerta LA, Pajuelo MJ, Garcia HH, Gonzalez AE. Molecular detection of taeniid eggs in beetles collected in an area endemic for Taenia solium. Am J Trop Med Hyg. 2018;99(5):1198–200. Dixon MA, Winskill P, Harrison WE, Whittaker C, Schmidt V, Sarti E, et al. Force-of-infection of Taenia solium porcine cysticercosis: a modelling analysis to assess global incidence and prevalence trends. Sci Rep. 2020;10(1):17637. Dixon MA, Braae UC, Winskill P, Devleesschauwer B, Trevisan C, Van Damme I, et al. Modelling for Taenia solium control strategies beyond 2020. Bull World Health Organ. 2020;98(3):198–205. MAD, ZMC and MGB acknowledge funding from the Medical Research Council (MRC) Centre for Global Infectious Disease Analysis (reference MR/R015600/1), jointly funded by the UK MRC and the UK Foreign, Commonwealth & Development Office (FCDO), under the MRC/FCDO Concordat agreement and is also part of the European and Developing Countries Clinical Trials Partnership (EDCTP2) programme supported by the European Union. MAD has also been funded by a Medical Research Council Doctoral Training Partnership (MRC DTP) research studentship to support this work. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Zulma M. Cucunubá Present address: Departamento de Epidemiología Clínica, Pontificia Universidad Javeriana, Bogotá, Colombia Department of Epidemiological Sciences, Animal and Plant Health Agency, New Haw, Addlestone, Surrey, UK Erika Galipó Department of Pathobiology and Population Sciences and London Centre for Neglected Tropical Disease Research, Royal Veterinary College, Hatfield, UK Erika Galipó, Kim Stevens & Martin Walker Department of Infectious Disease Epidemiology and London Centre for Neglected Tropical Disease Research, School of Public Health, Imperial College London, London, UK Matthew A. Dixon, Zulma M. Cucunubá, Maria-Gloria Basáñez & Martin Walker Medical Research Centre for Global Infectious Disease Analysis, School of Public Health, Imperial College London, London, UK Matthew A. 
Dixon, Zulma M. Cucunubá & Maria-Gloria Basáñez. Schistosomiasis Control Initiative (SCI) Foundation, Edinburgh House, 170 Kennington Lane, Lambeth, London, SE11 5DP, UK: Matthew A. Dixon. Centre for Health Informatics, Computing and Statistics, Lancaster University, Lancaster, UK: Claudio Fronterrè. Grupo de Parasitología, Instituto Nacional de Salud, Bogotá, Colombia: Astrid Carolina Flórez Sánchez. ACFS and the work group at Grupo de Parasitología (Instituto Nacional de Salud, Colombia) conceived the original study idea, participated in the design of the study and were involved in field implementation and data collection. MAD, CF, EG, MW and MGB were involved in data analysis and interpretation. MAD, CF and EG prepared the draft manuscript. EG, MAD, CF, ZMC, MGB, KS, ACFS and MW read and edited draft versions of the manuscript and approved the final manuscript. Correspondence to Matthew A. Dixon. Ethical approval for the original study (Flórez Sánchez et al. [14]) was obtained through the Research Ethics Committee of the Instituto Nacional de Salud (INS), Colombia, in 2009. All authors declare they do not have conflicts of interest. Table S1. Notation of parameters used for model building (analysis of residual spatial correlation) and incorporation of spatial structure. Table S2. Seroprevalence of Taenia solium cysticercus antibodies in Colombia. Table S3. Distribution of seropositive individuals, crude odds ratios (ORs) of testing positive for Taenia solium cysticercus antibodies by ELISA and associated 95% confidence intervals (CIs) from the univariate mixed-effects model. Table S4. Distribution of seropositive individuals, multivariable mixed-effects logistic regression adjusted ORs of testing positive for Taenia solium cysticercus antibodies by ELISA and associated CIs. Table S5. Pig management sub-set analysis (n = 3154). Pig management practices, distribution of seropositive individuals, crude odds ratios (ORs) of testing positive for Taenia solium cysticercus antibodies by ELISA and associated 95% CIs. Figure S1. Map displaying the sampled municipalities in Colombia (2008–2010). Text S1. Supplementary methods. Text S2. Results: pig management sub-analysis. Galipó, E., Dixon, M.A., Fronterrè, C. et al. Spatial distribution and risk factors for human cysticercosis in Colombia. Parasites Vectors 14, 590 (2021). https://doi.org/10.1186/s13071-021-05092-8
The biogeographic differentiation of algal microbiomes in the upper ocean from pole to pole Kara Martin1,2 na1, Katrin Schmidt3 na1, Andrew Toseland ORCID: orcid.org/0000-0002-6513-956X3, Chris A. Boulton ORCID: orcid.org/0000-0001-7836-93914, Kerrie Barry5, Bánk Beszteri ORCID: orcid.org/0000-0002-6852-15886, Corina P. D. Brussaard7, Alicia Clum5, Chris G. Daum ORCID: orcid.org/0000-0003-3895-58925, Emiley Eloe-Fadrosh ORCID: orcid.org/0000-0002-8162-12765, Allison Fong8, Brian Foster5, Bryce Foster5, Michael Ginzburg ORCID: orcid.org/0000-0001-9348-69088, Marcel Huntemann ORCID: orcid.org/0000-0002-1284-37485, Natalia N. Ivanova ORCID: orcid.org/0000-0002-5802-94855, Nikos C. Kyrpides ORCID: orcid.org/0000-0002-6131-04625, Erika Lindquist5, Supratim Mukherjee ORCID: orcid.org/0000-0002-6322-22715, Krishnaveni Palaniappan5, T. B. K. Reddy5, Mariam R. Rizkallah8, Simon Roux5, Klaas Timmermans7, Susannah G. Tringe ORCID: orcid.org/0000-0001-6479-84275, Willem H. van de Poll ORCID: orcid.org/0000-0003-4095-43389, Neha Varghese5, Klaus U. Valentin8, Timothy M. Lenton ORCID: orcid.org/0000-0002-6725-74984, Igor V. Grigoriev ORCID: orcid.org/0000-0002-3136-89035,10, Richard M. Leggett ORCID: orcid.org/0000-0003-3044-42972, Vincent Moulton1 & Thomas Mock ORCID: orcid.org/0000-0001-9604-03623 Eukaryotic phytoplankton are responsible for at least 20% of annual global carbon fixation. Their diversity and activity are shaped by interactions with prokaryotes as part of complex microbiomes. Although differences in their local species diversity have been estimated, we still have a limited understanding of environmental conditions responsible for compositional differences between local species communities on a large scale from pole to pole. Here, we show, based on pole-to-pole phytoplankton metatranscriptomes and microbial rDNA sequencing, that environmental differences between polar and non-polar upper oceans most strongly impact the large-scale spatial pattern of biodiversity and gene activity in algal microbiomes. The geographic differentiation of co-occurring microbes in algal microbiomes can be well explained by the latitudinal temperature gradient and associated breakpoints in their beta diversity, with an average breakpoint at 14 ± 4.3 °C separating cold and warm upper oceans. As global warming impacts upper ocean temperatures, we project that breakpoints of beta diversity move markedly pole-wards. Hence, abrupt regime shifts in algal microbiomes could be caused by anthropogenic climate change. Phytoplankton are a diverse group of largely photoautotrophic microorganisms encompassing algae and cyanobacteria1,2, contributing approximately half of the annual global carbon fixation3. Although the interconnected oceans generally do not limit their global dispersal4,5,6, many studies have shown that their local diversity is correlated with geographical partitioning based on either oceanographic fronts that separate populations or larger-scale ecosystem gradients such as the latitudinal gradient in local species diversity7,8,9,10. However, there is also evidence that environmental and ecological selection in geographically well-defined and seemingly unstructured marine ecosystems likely plays a role in generating and maintaining microbial diversity11.
Regardless of whether inter- or intra-specific variations are being considered to explain microbial diversity patterns in the global ocean, two variables usually explain most of the relatedness between species and populations, respectively: temperature and whole-community chlorophyll a9,11. Temperature is known to be a strong selecting agent, as evidenced by thermal tolerance limits that track the geographic origin of species9,12,13. Furthermore, temperature together with salinity and the flow of currents creates ecological boundaries in the upper ocean such as oceanographic fronts, which might impact the structure and evolution of inter- and intra-specific diversity across spatio-temporal scales10,14. Chlorophyll a, on the other hand, which is a proxy for the biomass of phytoplankton, suggests that ecological selection is at play via interactions with organisms that benefit from phytoplankton and vice versa11. Besides herbivores such as copepods and krill, heterotrophic microbes such as bacteria and archaea are among those groups with significant interactions with phytoplankton15. Some of them even form intimate relationships including mutualism and symbiosis16,17. The space where most of the interactions between phytoplankton and heterotrophic prokaryotes take place is the phycosphere, a microscale mucus region that is rich in organic matter surrounding a phytoplankton cell, analogous to the rhizosphere in plants18,19. Thus, organic matter released by phytoplankton is used as a substrate for prokaryotes, which sometimes provide essential bioactive compounds in return, such as vitamin B12. About 60% of examined heterokont microalgae (e.g. diatoms) require vitamin B12 that is synthesized by bacteria and archaea20. Thus, those bacteria have formed a mutualistic relationship with phytoplankton that potentially helps to sustain primary productivity in many parts of the global ocean16. There is also evidence for species-specific diversity of algal microbiomes. Often, it is the phytoplankton partner that recruits heterotrophic microbes via the secretion of infochemicals, which elicits a response from the other microbes19. As these signalling processes can be species-specific and likely have co-evolved in association with responding partners, algal microbiomes are complex and dynamic, and their diversity might be driven by either ecological or environmental selection, generating and maintaining these intimate relationships over space and evolutionary time. As algal microbiomes underpin some of the largest food webs on Earth and drive global biogeochemical cycles, significant international efforts, especially over the last decade, have provided insights into what drives their diversity and global biogeography in the global ocean. For instance, large-scale ocean omics studies in the epipelagic realm as part of the Tara Oceans project21,22 showed that associations among microbes were non-randomly distributed in co-occurrence networks and that their structure was driven by both local and global patterns15. Microbial networks that included a significant proportion of prokaryotic phytoplankton (cyanobacteria) even appear to be responsible for the majority of carbon exported in the oligotrophic ocean23. Interestingly, some of the co-occurrence networks that contained eukaryotic phytoplankton groups were not taxon-specific and were dominated by mutual exclusions, which suggests that their biogeography may be influenced by predator-prey dynamics24.
These studies have provided a step change in our understanding of how ecological interactions in the context of changing environmental conditions likely influence the diversity of the photoautotrophic microbial interactome in the global ocean. However, to assess how environmental conditions such as temperature and variable nutrient concentrations impact the diversity of algal microbiomes, it is essential to include polar oceans. With their inclusion, the complete spectrum of co-varying environmental parameters can be used to assess how these parameters, on a truly global scale from pole to pole, impact differences in the variation of species identities and abundances between local assemblages across larger regions (beta diversity)25,26 of interacting algal microbiomes, which, to the best of our knowledge, has not been addressed in previous studies. The application of beta diversity enables us to understand the degree of differentiation among biological communities, which, across the complete latitudinal scale from pole to pole, will provide insights into how marine microbes are latitudinally distributed. As the Arctic and Southern Oceans, and specifically their eukaryotic phytoplankton and associated prokaryotes, are often not included in global biodiversity surveys, our understanding of how environmental variables including habitat characteristics of polar oceans influence differences in their diversity and activity is incomplete. However, with the inclusion of polar communities, biogeographic differentiation will not likely reveal drivers responsible for small-scale and local differences in the relatedness of communities because the extreme ends of the environmental spectrum are being considered. Rather, this approach will provide insights into environmental variables that are likely responsible for most of the latitudinal differentiation of microbial diversity, potentially overshadowing variables responsible for local differences in microbial diversity patterns. Our study, therefore, addresses how large-scale environmental differences on a nearly complete latitudinal scale from pole to pole correlate with the biogeographical differentiation of algal microbiomes including the gene activity of eukaryotic phytoplankton. Furthermore, as the upper ocean is experiencing significant warming due to the production of anthropogenic carbon dioxide, we estimate how their biogeographic differentiation might change based on a model from the IPCC 5th Assessment Report. The main outcome of our work shows that physico-chemical differences between polar and non-polar upper oceans have a strong influence on the dissimilarity of algal microbiomes with respect to changes in the diversity of their co-occurring microbes but also the gene expression activity of their primary producers. These results suggest that there is an ecological boundary in sub-polar oceans of both hemispheres, which not only alters the spatial scaling of algal microbiomes but also shifts pole-wards due to global warming. A meta-omics resource for algal microbiomes in the upper ocean from pole to pole Three different omics datasets were collected for this study from chlorophyll a maximum layers of the Arctic, Atlantic and Southern Oceans (Fig. 1A): (1) 79 eukaryotic metatranscriptomes, (2) 57 16S and (3) 54 18S rDNA amplicon (V4 region) datasets, as subsets of the 82 total samples (Fig. 1A). Sequencing was done at the U.S.
Department of Energy Joint Genome Institute (JGI) as part of the JGI Community Science Project 532/300780 (Sea of Change: Eukaryotic Phytoplankton Communities in the Arctic Ocean). Fig. 1: Sampling sites and environmental metadata. A Stations for metatranscriptome sequencing (green) and 16 and 18S rDNA amplicon sequencing (red). Map was generated using Ocean Data View. B Latitude versus temperature (degrees Celsius). C Latitude versus nitrate and nitrite concentrations. D Latitude versus silicate concentrations. E Latitude versus phosphate concentrations. Nutrient concentrations in µmol L−1. This dataset consists of sequence data from four separate cruises: ARK-XXVII/1 (PS80), 17 June to 9 July 2012; Stratiphyt-II, April to May 2011; ANT-XXIX/1 (PS81), 1 to 24 November 2012; and ANT-XXXII/2 (PS103), 20 December 2016 to 26 January 2017, and covers a transect of the Atlantic Ocean from Greenland to the Weddell Sea (71.36°S to 79.09°N). The 79 eukaryotic metatranscriptomes were sequenced (Illumina HiSeq-2000 instrument) to an average depth of 251 Mbp each, based on standard JGI protocols. These data were processed by the Integrated Microbial Genomes and Microbiomes (IMG) pipeline at JGI27. For estimating microbial diversity, 16S and 18S rDNA amplicon datasets were generated (Illumina MiSeq) with an average sequencing depth of 71.8 Mbp and 52.5 Mbp per sample, comprising an average of 393,247 and 142,693 sequences per sample, respectively. A custom bioinformatics pipeline was built for 18S rDNA classifications, including a model to normalise the copy number of 18S rDNAs according to the estimated genome sizes of diverse eukaryotic microbes (Supplementary Figs. 1, 2). Rarefaction analysis of all sequence datasets indicated that adequate sampling was achieved for all three types of datasets (Supplementary Fig. 3). Of the total number of contigs (34,241,890) in our metatranscriptome dataset, 36,354,419 non-redundant genes could be predicted, and from these genes ca. 31% (11,205,641 genes) could be assigned to a Pfam domain28. Nearly 28% of these non-redundant genes could be annotated as being of eukaryotic origin. Most of the identified prokaryotic and eukaryotic taxa were present at more than 20 stations and had an evenness of J′ ≥ 0.5 (Supplementary Figs. 4, 5). Only 22% of the 18S dataset could be assigned to taxa at the species level (Supplementary Figs. 4a, 6c), while for the 16S dataset, 47% could be assigned to taxa at the genus level (Supplementary Fig. 4b, Supplementary Fig. 6d). All sequence data were accompanied by measurements of temperature, salinity, dissolved inorganic nitrate/nitrite, phosphate and silicate at the depth of sampling (Fig. 1B–E; Supplementary Table 1). Temperatures in both hemispheres ranged from ca. −1.74 to 29.02 °C, reflecting the pole-to-equator distribution of annual average upper ocean temperatures (Fig. 1B). Salinity varied between 31.0 and 36.9 PSU. Dissolved inorganic nutrients (µmol L−1) were most highly concentrated in the Southern Ocean, with minima for all nutrients at ca. 30°S/N (Fig. 1C–E). Based on a canonical correspondence analysis (CCA) of all Pfams from the metatranscriptomes against these individual environmental variables (Supplementary Fig. 6a, b), temperature accounted for the highest percentage of variation compared to all other environmental variables in each dataset.
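A canonical correspondence analysis of this kind can be run with the vegan R package; the paper does not name its CCA implementation, so this is an assumption, and pfam_counts (stations × Pfams) and env (per-station temperature, salinity and nutrients) are hypothetical placeholders.

# CCA of Pfam counts against the measured environmental variables, with a
# permutation test to rank the variance explained by each term.
library(vegan)

ord <- cca(pfam_counts ~ Temperature + Salinity + Nitrate + Phosphate + Silicate,
           data = env)

anova(ord, by = "terms", permutations = 999)  # per-variable significance; temperature
                                              # is expected to explain the most variation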
Temperature also had a significantly positive correlation (R2 ≥ 0.63; p-value ≤ 0.001) with prokaryotic and eukaryotic diversity (Shannon Index) (Supplementary Fig. 7). Co-occurrence networks of expressed genes and microbial taxa The first pole-to-pole eukaryotic metatranscriptomes from chlorophyll a maximum layers (Fig. 1A) enabled us to provide insights into how global-scale environmental conditions in the upper ocean drive biogeographic differentiation of eukaryotic community gene expression. To identify which environmental variable was most responsible for a possible latitudinal differentiation in gene co-expression networks, we applied a weighted gene co-occurrence network analysis (WGCNA)29 based on Pfam gene counts. Our WGCNA revealed that there were two gene co-expression networks based only on positive links (Fig. 2A, Supplementary Table 5). We then conducted the correlation analysis implemented in the WGCNA package: each network is represented by its 'eigengene' (in WGCNA terminology, the first principal component of the network), and the eigengenes were correlated with the environmental variables, as shown in Fig. 2B. Based on this analysis, temperature was identified as the primary driver for both networks, which corroborates results from our CCA analysis (see above and Supplementary Fig. 6). Whereas salinity was co-correlated with temperature, the major inorganic nutrients such as nitrate, phosphate and silicate were significantly (p-value ≤ 0.001) anti-correlated to temperature and salinity. The gene co-expression network designated as blue (N = 1614 Pfams) had a strong positive relationship with temperature (correlation coefficient of +0.72; p-value = 2e−12); hence, this is considered to be the warm network. The network designated as turquoise (N = 2369 Pfams) had a strong negative relationship with temperature (correlation coefficient of −0.8; p-value = 1e−16); hence, this is considered to be the cold network. A total of 7,172,786 genes with an average length of 757 bp were part of the cold network, whereas the warm network was composed of 4,954,085 genes with an average length of 655 bp. The average GC content of transcripts was 51% in the cold network and 52% in the warm network. In total, 831,540,849 reads of the cold network and 1,239,584,159 reads of the warm network could be assigned Pfam domains. Unassigned Pfams designated as grey (N = 2 Pfams) did not form a co-expression network and had only a significantly positive correlation (+0.39; p-value = 8e−04) with latitude. Fig. 2: Co-occurrence networks of protein families in eukaryotic metatranscriptomes and their gene ontology. On the log10-scaled gene counts of protein families (Pfams), two networks were found: A blue = warm (n = 1614) and turquoise = cold (n = 2369). B Co-occurrence analysis of the Pfam protein families dataset; two networks were found, a turquoise (cold) and a blue (warm), plus a grey group (2 Pfams; no network). Correlation heatmap between the networks and environmental parameters. The colours correspond to the correlation values; red is positively correlated and blue is negatively correlated. The values in each of the squares correspond to the assigned Pearson correlation coefficient value on top and p-value in brackets below. C Gene ontology (GO) analysis of the Pfam protein families in both co-occurrence networks. Gene ontology (GO) analyses with Pfams from both networks (Fig. 2C; Supplementary Fig.
8) showed that the cold network was enriched in several molecular functions associated with catalytic activity in general and specifically with acting on proteins and RNAs. Strongly enriched in the warm network were cellular components including mitochondria, ribosomes, non-membrane bound organelles, and the envelope. The mapping of the node-specific Pfam abundance for each network across all stations is shown in Fig. 3A, B. Pfams of the cold network were mainly recruited from the Southern Ocean and the Arctic (86.7% total), with the lowest abundance of Pfams mapping to stations between 30°N/S (13.3% total). In contrast, Pfams from the warm network were mainly recruited from the tropical and temperate North Atlantic (48.1% total). Interestingly, slightly more Pfams were recruited from the Arctic (38.7% total) than from the Southern Ocean (13.2% total) for this network. Fig. 3: Biogeographical mapping of the node-specific abundance for each protein family (Pfam) network across all stations from pole to pole. Contribution of Pfam-containing sequences from individual metatranscriptome sites to corresponding protein family co-occurrence networks. Bubbles scaled according to percentage contribution to total abundance pool. A Pfam biogeography of cold co-occurrence network and B Pfam biogeography of warm co-occurrence network. Abundance is given in percentage contribution to the total sequence pool per site with increasing contribution from small to large circles and from blue to red. To reveal how environmental gradients from the Arctic to the equator influence associations between microbial eukaryotes and prokaryotes, we applied the same WGCNA29 analysis as for the eukaryotic metatranscriptomes to log10-transformed, normalised (according to genome size, Supplementary Fig. 2) abundances of 18S and 16S rDNA sequences. Co-occurrences were estimated on the normalised abundance of sequences at the species level for eukaryotes (18S) and the genus level for prokaryotes (16S). Similar to the gene expression co-occurrence analysis, we obtained two major networks between eukaryotes and prokaryotes that correlated most strongly with temperature and latitude (Fig. 4A, B). Thus, similar to the gene co-expression networks, we identified a cold (Blue; n = 51 species; correlation coefficient of ≤0.79; p-value ≤ 1e−10) and a warm network (Turquoise; n = 70 species; correlation coefficient of ≥0.83; p-value ≤ 3e−12) of co-occurring eukaryotic and prokaryotic microbes (Supplementary Table 2). Unlike for the metatranscriptomes, there were no unassigned 16 and 18S sequences. In the cold network, green algae of the group Prasinophytes were species-rich, and the Prymnesiophyte Phaeocystis cordata had the highest number of connections to other species in this cluster (Supplementary Table 2). The prokaryotic community had several highly connected bacterial taxa known to include cold-adapted species, some of which co-occur with diatoms (e.g. Glaciecola)30. Two bacterial taxa in this cluster (Herbaspirillum, Bradyrhizobium) are known to include species with the ability to fix atmospheric N231,32. Although Coscinodiscophyceae were particularly abundant in cold waters of the Arctic, only one species (Actinocyclus actinochilus) was part of this cluster. The network from warm waters was very different in terms of species composition and co-occurrence patterns. Unlike in the cold network, cyanobacteria were among the most highly connected taxa including Prochlorococcus and Synechococcus.
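The module detection and module-trait correlation used for both the Pfam counts and the normalised 16/18S abundances described above follow the standard WGCNA workflow; a minimal sketch is given below, in which datExpr (samples × features, log10-scaled) and traits (temperature, salinity, nutrients, latitude) are hypothetical placeholders and the soft-thresholding power is an illustrative assumption rather than the value used in the study.

# WGCNA module detection and module-trait (eigengene) correlation, as in
# Figs. 2B and 4B; 'power = 6' is an assumed soft threshold, normally chosen
# with pickSoftThreshold().
library(WGCNA)

net <- blockwiseModules(datExpr, power = 6, networkType = "signed",
                        minModuleSize = 30)

# The eigengene (first principal component) represents each module
MEs <- moduleEigengenes(datExpr, colors = net$colors)$eigengenes

moduleTraitCor <- cor(MEs, traits, use = "p")                  # Pearson correlations
moduleTraitP   <- corPvalueStudent(moduleTraitCor, nrow(datExpr))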
Small and mostly flagellated species from the group of Heterokontophyta dominated the most diverse group of eukaryotes in this cluster. There were also Dinoflagellates, Haptophytes and Pelagophytes. Many highly connected heterotrophic bacteria in this cluster are known to be associated with particles (e.g. soil, biofilm), and two taxa are known to have photoheterotrophic species that contain bacteriochlorophyll (Erythrobacter, Roseivivax)33. This cluster contained neither diatoms nor prasinophytes. Eight classes of species were shared between the two co-occurrence networks, including Gammaproteobacteria, Alphaproteobacteria and Flavobacteriia. A full list of the classes of species can be found in Supplementary Table 2. Fig. 4: Co-occurrence networks of 16 and 18S rDNAs, their biodiversity and biogeographical mapping of the node-specific abundance for each taxonomic network across all stations from pole to pole. On the log10-transformed abundances (18S rDNA at the species level and 16S rDNA at the genus level), two networks were found: A cold (n = 51) and warm (n = 70). A list of species names and class names of the species can be found in Supplementary Table 2. B Co-occurrence analysis of 18S rDNA at the species level and 16S rDNA at the genus level; two networks were found, a turquoise (cold) and a blue (warm). Correlation heatmap between the networks and environmental parameters. The colours correspond to the correlation values; red is positively correlated and blue is negatively correlated. The values in each of the squares correspond to the assigned Pearson correlation coefficient value on top and p-value in brackets below. C Taxa biogeography of cold 16/18S co-occurrence network. D Taxa biogeography of warm 16/18S co-occurrence network. Abundance is given in percentage contribution to the total sequence pool per site with increasing contribution from small to large circles and from blue to red. Biogeographical mapping of the node-specific 16 and 18S abundance for each network across all stations is shown in Fig. 4C, D. This revealed that 90.01% of sequences from the cold network were recruited from north of 60° in the Arctic Ocean, with the opposite biogeographical recruitment pattern for the warm network (78.25% from stations <60°N). The latitudinal differentiation (beta diversity) for expressed eukaryotic genes and microbial taxa As the co-occurrence analysis revealed that, for both expressed genes and taxa, the environmental difference between polar and non-polar upper ocean waters appears to be most responsible for the geographical separation of algal microbiomes, we tested this result by calculating the ratio between regional and local sequence diversity (beta diversity) across all stations, which provides a measure of genetic differentiation between communities across latitudes. The partitioning of cold and warm co-occurrence networks suggests that there are major breakpoints in the genetic differentiation demarking the transition between polar and non-polar upper ocean ecosystems, with temperature and latitude likely being major drivers. In order to test this hypothesis, we calculated a presence–absence matrix for each dataset. A pairwise dissimilarity analysis was performed on the presence–absence matrix with beta.pair, a function from the betapart R package, with the dissimilarity index set to Sørensen34. These values were then plotted against all environmental variables, to enable us to get a range of values in which the breakpoint might be located.
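The dissimilarity computation just described, together with the lowest-mean-squared-error breakpoint search covered in the next paragraph, can be sketched as follows. Here pa (stations × taxa presence–absence matrix) and temp (per-station temperatures) are hypothetical placeholders, and summarising the pairwise dissimilarities to one value per station is our assumption about how the plotted values were derived.

# Sorensen beta diversity with betapart, followed by a grid search for the
# temperature breakpoint with the lowest mean squared error.
library(betapart)

bp <- beta.pair(pa, index.family = "sorensen")   # pairwise Sorensen dissimilarities
beta_div <- rowMeans(as.matrix(bp$beta.sor))     # per-station summary (assumption)

candidates <- seq(min(temp) + 1, max(temp) - 1, by = 0.01)
mse <- sapply(candidates, function(b) {
  fit <- lm(beta_div ~ temp + pmax(temp - b, 0)) # piecewise-linear fit, slope change at b
  mean(residuals(fit)^2)
})
candidates[which.min(mse)]                       # e.g. ~18.06 C for the Pfam dataset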
We then searched through these possible breakpoints for the one with the lowest mean squared error. The search for breakpoints was performed using all environmental variables, including nutrients and salinity, as they are known to have an impact on microbial diversity and activity (Supplementary Figs. 9, 10)14. Latitude correlates in the same way as temperature (Figs. 2B, 4B, 5A, B). Only the strong latitudinal gradient of temperature showed significant breakpoints in beta diversity, which largely separated cold from warm microbial communities and their associated metabolism (Fig. 5A). For metatranscriptomes, the breakpoint was estimated to be at 18.06 °C (Fig. 5A); for 16S we identified a breakpoint at ca. 9.49 °C (Fig. 5C) and for 18S at 13.96 °C (Fig. 5D). The average breakpoint temperature across the taxonomic and functional beta diversity of eukaryotic phytoplankton and their co-occurring bacteria is 14 ± 4.3 °C. The metatranscriptome data enabled us to identify the geographical locations of the breakpoints, as the dataset spans pole to pole (Fig. 5B). The two breakpoints identified largely separate polar from non-polar oceans (Fig. 5B). Fig. 5: Beta diversity breakpoint analyses. A, B represent breakpoints of protein families as part of the metatranscriptome dataset. C, D represent breakpoints of the 16S rDNA and 18S rDNA datasets, respectively. The numbers correspond to sample locations as shown in Fig. 1A. The y-axis represents beta diversity across all stations. The x-axis in A, C and D represents temperature and in B represents latitude. The horizontal lines indicate the breakpoints in beta diversity. For the Pfam protein families dataset in (A), the breakpoint is at 18.06 °C with a p-value of 3.741e−10. In B the breakpoint is at 52.167 degrees altered latitude (37.833 degrees latitude) with a p-value of 2.225e−07. For the 16S rDNA dataset in (C), the breakpoint is at 9.49 °C with a p-value of 1.413e−4. For the 18S rDNA dataset in (D), the breakpoint is at 13.96 °C with a p-value of 8.407e−11. Projection of geographical shifts in beta-diversity breakpoints across the North Atlantic The global ocean is a significant sink of heat, with the consequence that the upper ocean has become warmer over the past 100 years due to the anthropogenic production of carbon dioxide. Thus, stratified warm-water masses expand pole-wards. This is of particular relevance in the North Atlantic and North Pacific and even the Arctic Ocean35,36. To simulate how warming of the North Atlantic might impact the beta-diversity breakpoints and therefore local changes in the algal microbiomes, we utilised a model from the IPCC 5th Assessment Report. For estimates of changes over the 21st century, we used the RCP 8.5 HadGEM2-ES CMIP5 experiment37. A historical HadGEM2-ES experiment was also run for CMIP5, which we used to bias-correct the projected temperatures. The resulting shifts in breakpoints from these temperatures are shown in Fig. 6. Grid boxes that contain sea ice in the climatology were excluded from this analysis. Projections from the model show that the most affected geographical region in terms of shifts in the diversity of algal microbiomes over the coming decades is the area between 40 and 60° N, which includes the North Sea and most of the British Isles (Fig. 6). Fig. 6: IPCC-based modelling of climate-driven shifts in beta diversity breakpoints. Observed (1961–1990) and modelled (2010–2099) changes over the 21st century in the thresholds for breakpoints in beta diversity.
Regions are shown as red for metatranscriptomes (>18.06 °C), orange for 18S (<18.06 °C, >13.96 °C), yellow for 16S (<13.96 °C, >9.49 °C) and blue for temperatures <9.49 °C, for the 1961–1990 observations from the HadISST dataset. Modelled temperature estimates are from the HadGEM2-ES CMIP5 run for the 30-year averages 2010–2039, 2040–2069 and 2070–2099, respectively. Temperatures from HadGEM2-ES have been calibrated to the HadISST observations as described in Methods. The black solid line represents the 15 °C and the dashed line the 14 °C average upper-ocean temperature.

Our study has provided evidence that differences in environmental conditions between polar and non-polar upper oceans can explain the partitioning of co-occurring sequences into two major algal microbiomes (Figs. 2–4). The latitudinal differentiation of their individual sequences based on beta diversity is mainly correlated with the latitudinal gradient of temperature in the upper ocean, especially at transition zones (breakpoints) between polar and non-polar oceans (Fig. 5), hence corroborating our WGCNA analysis (Figs. 2–4). However, many other environmental parameters, including essential nutrients, were either significantly negatively or positively correlated with temperature and latitude, suggesting that they also play an important role in the biogeographic differentiation of algal microbiomes in the upper ocean. The negative correlation of inorganic nutrients with temperature (Figs. 2B, 4B) reflects the observation that cold upper waters are usually nutrient-rich whilst warmer upper ocean waters tend to be nutrient-poor, considering global and annual averages38. Thus, differences in the physical structure (e.g. seasonally mixed vs permanently stratified water) of the upper ocean caused by latitudinal gradients of temperature might be the main reason for the separation into largely polar (cold) and non-polar (warm) algal microbiomes. The difference in recruiting sequences from polar vs non-polar oceans is larger for the two taxonomic networks (Fig. 4C, D) compared to the gene expression networks (Fig. 3A, B). Considering that the number and redundancy of expressed genes and Pfams in metatranscriptomes are significantly higher than in the more distinct datasets of 16 and 18S sequences, this numerical difference may have contributed to differences in the degree of latitudinal partitioning. A reason for the stronger recruitment of Pfams from the Arctic (38.7% total) compared to the Southern Ocean (13.2% total) for the warm network might be the North Atlantic Current (NAC), which was sampled (Fig. 1) and likely carried microbes from lower latitudes, as the NAC is a northward prolongation of the Gulf Stream. In contrast, the frontal system in the Southern Ocean represents a boundary system less prone to a poleward range shift of microbial species from lower latitudes10. Hence, a lower number of Southern Ocean Pfams were recruited for the warm co-occurrence network. Although several global-scale studies, with Tara Oceans22 being the most significant, have already revealed that temperature can be considered the best predictor of local epipelagic plankton diversity9, our study has extended this work by including both polar oceans and by focusing on eukaryotic phytoplankton and their co-occurring prokaryotic microbes.
Furthermore, to the best of our knowledge, this is the first study based on latitudinal beta diversity to reveal genetic differentiation in marine microbial communities from pole to pole in relation to variable environmental conditions. Our results, therefore, provide insights into how changing environmental conditions correlate with biodiversity changes (breakpoints in beta diversity) subject to large-scale environmental fluctuation and disturbances26. This knowledge is essential for predicting the consequences of global warming (Fig. 6) and therefore may guide environmental management. Most previous studies compared local species diversity (alpha diversity) across latitudes9. Nevertheless, temperature was also identified as one of the most important variables explaining differences in the species composition of local communities across large-scale latitudinal gradients. The concept of ocean biogeochemical provinces (Longhurst provinces)39 often matches local differences in upper-ocean microbiomes14 and their linked biogeochemical activity such as nutrient and carbon cycling40. Although our study confirms the large-scale genetic differentiation of algal microbiomes between polar (ICE, SPSS) and non-polar Longhurst provinces (e.g. STSS, NHSTPS, SHSTPS) covered by our pole-to-pole transect, we did not identify geographic differentiation between any of the non-polar Longhurst provinces. Arguably, there are no stronger environmental differences than between polar and non-polar upper oceans, mainly caused by strong seasonality closer to the poles, overall low temperatures, the presence of sea ice, and differences in seasonal mixing38. Thus, environmental differences between polar and non-polar oceans may impose much stronger geographic differentiation in the biodiversity of algal microbiomes and their expressed genes compared to environmental differences between Longhurst provinces of non-polar oceans (e.g. STSS, NHSTPS, SHSTPS). As the Arctic and the Southern Ocean do not significantly differ in their overall environmental conditions, this may explain why we have not seen a differentiation of algal microbiomes between both polar oceans. Hence, Pfams for the cold co-occurrence cluster have been recruited from both polar oceans (Fig. 3). The enrichment of GO terms for catalytic activity in the cold Pfam network likely reflects metabolic requirements to thrive under polar conditions. Most cold-adapted microbes optimise their enzymes to increase their catalytic activity at lower temperatures41. The optimisation of enzymes for activity at low temperature is usually facilitated by destabilisation of the molecular structures (e.g. the active site). The enrichment of GO terms specifically for the catalytic activity of proteins and RNAs (Fig. 2C) suggests that these polar microbial communities have not only increased the catalytic activity of their enzymes but also the catalytic activity that acts to modify RNAs42. The GO enrichment of cellular components in the warm network (Fig. 2C) might reflect an increased turnover of subcellular compartments including their membranes due to increased metabolic activity (respiration in mitochondria) and stress (radical oxygen species) at higher temperatures, which is known to occur in microalgae43. The taxonomic differences based on 16 and 18S rDNA sequencing between cold and warm co-occurrence networks largely confirm differences in the biogeographical distribution of individual species across latitudinal regions of the global upper ocean9,22,44,45,46,47.
For instance, Prochlorococcus and Synechococcus mainly dominate tropical and subtropical upper oceans together with eukaryotic pico- and nanoflagellates. Those taxa were found to be dominant in the warm network with a significant number of connections to additional taxa. In contrast, the cold network was characterised by abundant and well-connected sequences from phylogenetic groups known to include cold-adapted bacteria (e.g. Polaribacter, Glaciecola) and microalgae such as diatoms (e.g. Actinocyclus actinochilus) and prymnesiophytes (e.g. Phaeocystis cordata). Interestingly, two previous studies have suggested a similar geographic partitioning, but for phytoplankton productivity and mainly prokaryotic biodiversity. Behrenfeld et al.38 identified that the physical environment of the upper ocean impacts the net primary production (NPP) of phytoplankton communities. On a global scale including polar oceans, they identified that differences in upper-ocean temperature and stratification across a latitudinal gradient were mainly responsible for the partitioning of NPP, the latter being higher in cold, nutrient-rich, high-latitude regions, whereas lower NPP was observed in warm, nutrient-poor and permanently stratified upper oceans. The demarcation zone between both global regions for NPP was estimated to be at approximately 15 °C on an annual average. This temperature is in good agreement with the average temperature for breakpoints in the taxonomic and functional beta diversity of eukaryotic phytoplankton and their co-occurring bacteria at 14 ± 4.3 °C. A similar demarcation boundary was found for the latitudinal partitioning in diversity and activity of prokaryote-enriched metagenomes and metatranscriptomes, respectively48. Thus, our data together with these previous studies provide support for the hypothesis that environmental conditions separating cold (nutrient-rich) from warm (nutrient-poor) upper oceans are likely responsible for the latitudinal differentiation of algal microbiomes underpinning differences in ocean productivity and global biogeochemical cycles. The latitudinal gradient of temperature caused by seasonal differences in solar radiation, together with associated conditions such as differences in upper-ocean stratification and nutrient concentrations, appears to be the main driver. As the anthropogenic production of carbon dioxide raises global temperatures, which has already caused significant ocean warming, it is likely that the spatial distribution of algal microbiomes will change according to poleward shifts in geographical demarcation boundaries matching breakpoints in the beta diversity of species and their gene pool. Our model for the North Atlantic shows that the area between 40 and 60° N might be affected the most over approximately the next 100 years, as we forecast a complete replacement of cold algal microbiomes (Fig. 6) in this geographical area. As the area between 40 and 60° N is known to be nutrient-rich and therefore productive, especially the North Sea, a replacement of current microbial communities is likely to have a significant impact on food webs including fisheries, with consequences for associated industries. Taken together, our study confirms the latitudinal distribution pattern in local (alpha) diversity of complex marine microbial communities with a significant decrease from the equator towards polar ecosystems (Supplementary Fig. 7)9.
However, pole-to-pole datasets, which represent a more complete spectrum of environmental variables, offer the opportunity to identify the most pronounced differences in the variation of alpha diversity across larger biogeographic regions (beta diversity). The latter, to the best of our knowledge, has never been estimated before for oceanic microbes, although this knowledge is instrumental for the spatial scaling of changes in diversity, i.e. loss and gain26. The application of beta diversity to pole-to-pole algal microbiomes revealed for the first time that physico-chemical differences between polar and non-polar upper oceans have a strong influence not only on changes in their diversity but also on the gene expression activity of their primary producers. Consequently, there appear to be ecological boundaries in sub-polar oceans of both hemispheres, which not only alter the spatial scaling of algal microbiomes (breakpoints in beta diversity), but also shift pole-wards due to global warming.

Research cruises

This dataset consists of sequence data from four separate cruises: ARK-XXVII/1 (PS80), 17th June to 9th July 2012; Stratiphyt-II, April to May 2011; ANT-XXIX/1 (PS81), 1st to 24th November 2012; and ANT-XXXII/2 (PS103), 16th December 2016 to 3rd February 2017, and covers a transect of the Atlantic Ocean from Greenland to the Weddell Sea (71.36°S to 79.09°N) (Supplementary Table 1). In order to study the composition, distribution and activity of microbial communities in the upper ocean across the broadest latitudinal range possible, samples were collected during four field campaigns as shown in Fig. 1A. The first set of samples was collected in the North Atlantic Ocean from April to May 2011 by Dr. Willem van de Poll of the University of Groningen, Netherlands and Dr. Klaas Timmermans of the Royal Netherlands Institute for Sea Research. The second set of samples was collected in the Arctic Ocean from June to July 2012, and the third set was collected in the South Atlantic Ocean from October to November 2012, both by Dr. Katrin Schmidt of the University of East Anglia. The final set of samples was collected in the Antarctic Ocean from December 2016 to January 2017 by Dr. Allison Fong of the Alfred-Wegener Institute for Polar and Marine Research, Bremerhaven, Germany.

Water samples from the Arctic Ocean and South Atlantic Ocean expeditions were collected using 12 L Niskin bottles (Rosette sampler with an attached CTD Sonde measuring conductivity, temperature and depth), either at the chlorophyll maximum (10–110 m) and/or in the upper ocean (0–10 m). As soon as the Rosette sampler was back on board, water samples were immediately transferred into plastic containers and transported to the laboratory. All samples were accompanied by measurements of salinity, temperature, sampling depth and silicate, nitrate and phosphate concentrations (Supplementary Table 1). Water samples were pre-filtered with a 100 μm mesh to remove larger organisms and subsequently filtered onto 1.2 μm polycarbonate filters (Isopore membrane, Millipore, MA, USA). All filters were snap frozen in liquid nitrogen and stored at −80 °C until further analysis. Water samples from the North Atlantic Ocean cruise were also taken with 12 L Niskin bottles attached to a Rosette sampler with a Sonde. However, these samples were filtered onto 0.2 μm polycarbonate filters (Isopore membrane, Millipore, MA, USA) without pre-filtration, but were snap frozen in liquid nitrogen and stored at −80 °C like the other samples.
Water samples from the Southern Ocean cruise were taken with 12 L Niskin bottles attached to an SBE911plus CTD system equipped with 24 Niskin samplers. These samples were filtered onto 1.2 μm polycarbonate membrane filters (Merck Millipore, Germany) in a container cooled to 4 °C, then snap frozen in liquid nitrogen and stored at −80 °C like the other samples. Environmental data recorded at the time of sampling can be found in Supplementary Table 1.

DNA extractions: Arctic Ocean and South Atlantic Ocean samples

DNA was extracted with the EasyDNA Kit (Invitrogen, Carlsbad, CA, USA) with modifications to optimise DNA quantity and quality. Briefly, cells were washed off the filter with pre-heated (65 °C) Solution A and the supernatant was transferred into a new tube with one small spoon of glass beads (425–600 μm, acid washed) (Sigma-Aldrich, St. Louis, MO, USA). Samples were vortexed three times in intervals of 3 s to break the cells. RNase A was added to the samples, which were then incubated for 30 min at 65 °C. The supernatant was transferred into a new tube and Solution B was added, followed by a chloroform phase separation and an ethanol precipitation step. DNA was pelleted by centrifugation and washed several times with isopropanol, air dried and suspended in 100 μL TE buffer (10 mM Tris-HCl, pH 7.5, 1 mM EDTA, pH 8.0). Samples were snap frozen in liquid nitrogen and stored at −80 °C until sequencing.

DNA extractions: North Atlantic Ocean samples

North Atlantic Ocean samples were extracted with the ZR-Duet™ DNA/RNA MiniPrep kit (Zymo Research, Irvine, USA), allowing simultaneous extraction of DNA and RNA from one sample filter. Briefly, cells were washed from the filters with DNA/RNA Lysis Buffer and one spoon of glass beads (425–600 μm, Sigma-Aldrich, MO, USA) was added. Samples were vortexed quickly and loaded onto Zymo-Spin™ IIIC columns. The columns were washed several times and DNA was eluted in 60 μL DNase-free water. Samples were snap frozen in liquid nitrogen and stored at −80 °C until sequencing.

DNA extractions: Southern Ocean samples

DNA from the Southern Ocean samples was extracted with the NucleoSpin Soil DNA extraction kit (Macherey‐Nagel) following the manufacturer's instructions. Briefly, cells were washed from the filters with DNA lysis buffer into a lysis tube containing glass beads. Samples were disrupted by bead beating for 2 × 30 s, interrupted by 1 min cooling on ice, and loaded onto the NucleoSpin columns. The columns were washed three times and DNA was eluted in 50 μL DNase-free water. Samples were stored at −20 °C until further processing.

Amplicon sequencing of 16S and 18S rDNA

All extracted DNA samples were sequenced and pre-processed by the Joint Genome Institute (JGI) (Department of Energy, Berkeley, CA, USA). iTAG amplicon sequencing was performed at JGI with primers for the V4 regions of the 16S (FW(515F): GTGCCAGCMGCCGCGGTAA; RV(806R): GGACTACNVGGGTWTCTAAT)49 and 18S (FW(565F): CCAGCASCYGCGGTAATTCC; RV(948R): ACTTTCGTTCTTGATYRA)50 rRNA genes (Supplementary Table 6), on an Illumina MiSeq instrument with a 2 × 300 base pair (bp) read configuration51. 18S sequences were pre-processed; this consisted of scanning for contamination with the tool Duk (US Department of Energy Joint Genome Institute (JGI), 2017a) and quality trimming of reads with cutadapt52. Paired-end reads were merged using FLASH53 with the max mismatch set to 0.3 and min overlap set to 20. A total of 54 18S samples passed quality control after sequencing.
After read trimming, there was an average of 142,693 read pairs per 18S sample, with an average length of 367 bp and 2.8 Gb of data over all samples. 16S sequences were pre-processed; this consisted of merging the overlapping read pairs into unpaired consensus sequences using USEARCH's merge pairs54 with the maximum percentage of differences parameter (merge max diff pct) set to 15.0. Any reads that could not be merged were discarded. JGI then applied USEARCH's search oligodb tool, with the parameters mean length (len mean) set to 292, length standard deviation (len stdev) set to 20, primer trimmed max difference (primer trim max diffs) set to 3, a list of primers, and length filter max difference (len filter max diffs) set to 2.5, to ensure the Polymerase Chain Reaction (PCR) primers were located in the correct direction and with the expected spacing. Reads that did not pass this quality control step were discarded. With the max expected error rate (max exp err rate) set to 0.02, JGI evaluated the quality scores of the reads, and those with too many expected errors were discarded. Identical sequences were de-duplicated; these were then counted and sorted alphabetically for merging with other such files later. A total of 57 16S samples passed quality control after sequencing. There was an average of 393,247 read pairs per sample and an average length of 253 bp per sequence, with a total of 5.6 Gb.

RNA extractions: Arctic Ocean and Atlantic samples

RNA from the Arctic and Atlantic Ocean samples was extracted using the Direct-zol RNA Miniprep Kit (Zymo Research, USA). Briefly, cells were washed off the filters with TRIzol into a tube with one spoon of glass beads (425–600 μm, Sigma-Aldrich, MO, USA). Filters were removed and the tubes bead-beaten for 3 min. An equal volume of 95% ethanol was added, the solution was transferred onto a Zymo-Spin™ IICR column, and the manufacturer's instructions were followed. Samples were treated with DNase to remove DNA impurities, snap frozen in liquid nitrogen and stored at −80 °C until sequencing.

RNA extractions: Southern Ocean

RNA from the Southern Ocean samples was extracted using the QIAGEN RNeasy Plant Mini Kit (QIAGEN, Germany) following the manufacturer's instructions with on-column DNA digestion. Cells were broken by bead beating, as for the DNA extractions, before loading samples onto the columns. Elution was performed with 30 µL RNase-free water. Extracted samples were snap frozen in liquid nitrogen and stored at −80 °C until sequencing.

Metatranscriptome sequencing

All samples were sequenced and pre-processed by the U.S. Department of Energy Joint Genome Institute (JGI). Metatranscriptome sequencing was performed on an Illumina HiSeq-2000 instrument27. A total of 79 samples passed quality control after sequencing, with 19.87 Gb of sequence read data over all samples for analysis. This comprised a total of 34,241,890 contigs, with an average length of 503 bp and an average GC content of 51%. This resulted in 36,354,419 non-redundant genes detected. JGI employed their suite of tools called BBTools55 for pre-processing the sequences. First, the sequences were cleaned using Duk, a tool in the BBTools suite that performs various data quality procedures such as quality trimming and filtering by kmer matching. In our dataset, Duk identified and removed adaptor sequences, and also quality trimmed the raw reads to a phred score of Q10.
In Duk, the parameters were: kmer-trim (ktrim) set to r, kmer (k) set to 25, shorter kmers (mink) set to 12, quality trimming (qtrim) set to r, trimming phred (trimq) set to 10, average quality below (maq) set to 10, maximum Ns (maxns) set to 3, and minimum read length (minlen) set to 50; the flag "tpe" was set to t, so that both reads are trimmed to the same length, and the "tbo" flag was set to t, so that adaptors are trimmed based on pair overlap detection. The reads were further filtered to remove process artefacts, also using Duk, with the kmer (k) parameter set to 16. BBMap55 is another tool in the BBTools suite that performs mapping of DNA and RNA reads to a database. BBMap aligns the reads using a multi-kmer-seed-and-extend approach. To remove ribosomal RNA reads, the reads were aligned against a trimmed version of the SILVA database using BBMap with the parameters minratio (minid) set to 0.90, the local alignment converter flag (local) set to t and the fast flag (fast) set to t. Any human reads identified were also removed using BBMap. BBMerge56 is a tool in the BBTools suite that performs the merging of overlapping paired-end reads. For assembling the metatranscriptome, the reads were first merged with BBMerge, and then BBNorm was used to normalise the coverage so as to generate a flat coverage distribution. This type of operation can speed up assembly and can even improve assembly quality. Rnnotator52 was employed for assembling metatranscriptome samples 1–68. Rnnotator assembles the transcripts using a de novo assembly approach for RNA-Seq data and accomplishes this without a reference genome52. MEGAHIT57 was employed for assembling metatranscriptome samples 69–82. BBMap was used for reference mapping: the cleaned reads were mapped to the metagenome/isolate reference(s) and the metatranscriptome assembly. JGI performed the functional analysis on the metatranscriptomic dataset. JGI's annotation system is called the Metagenome Annotation Pipeline (MAP) (v4.15.2)27. JGI used HMMER 3.1b258 and the Pfam v3059 database for the functional analysis of our metatranscriptomic dataset. This resulted in 11,205,641 genes assigned to one or more Pfam domains, giving 8379 Pfam functional assignments and their gene counts across the 79 samples. The files were further normalised by applying hits per million.

18S rDNA analysis

A reference dataset of 18S rRNA gene sequences representing algal taxa was compiled for the construction of the phylogenetic tree by retrieving sequences of algae and outgroup taxa from the SILVA database (SSURef 115)60 and the Marine Microbial Eukaryote Transcriptome Sequencing Project (MMETSP) database61. The algae reference database consists of 1636 species from the following groups: Opisthokonta, Cryptophyta, Glaucocystophyceae, Rhizaria, Stramenopiles, Haptophyceae, Viridiplantae, Alveolata, Amoebozoa and Rhodophyta. A diagram of the 18S classification pipeline can be found in Supplementary Fig. 1. In order to construct the algae 18S reference database, we first retrieved all eukaryotic species from the SILVA database with a sequence length of ≥1500 base pairs (bp) and converted all U bases to T. Under each genus, we took the first species to represent that genus.
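The length filter, U-to-T conversion and first-species-per-genus selection just described could be sketched in R with Biostrings along the following lines. This is an illustration only: the FASTA path and the header layout used to extract the genus are assumptions, not the authors' actual files.

```r
# Hedged sketch of the reference filtering; the input path and header format
# ("Accession Genus species ...") are hypothetical.
library(Biostrings)

seqs <- readRNAStringSet("silva_ssuref115_eukaryotes.fasta")  # hypothetical export
seqs <- seqs[width(seqs) >= 1500]        # keep sequences of >= 1500 bp
dna  <- DNAStringSet(seqs)               # RNA -> DNA coercion converts U to T

genus <- sapply(strsplit(names(dna), " "), `[`, 2)  # assumes genus is the 2nd field
dna_one_per_genus <- dna[!duplicated(genus)]        # first species per genus

writeXStringSet(dna_one_per_genus, "algae_18S_reference.fasta")
```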
Using a custom written script (https://github.com/SeaOfChange/SOC/blob/master/get_ref_seqs.pl), the species of interest (as stated above) were selected from the SILVA database, classified with NCBI taxa IDs, and a sequence information file was produced that describes each of the algae sequences by their sequence ID and NCBI species ID. Taxonomy from the NCBI database, eukaryote sequences from the SILVA database and a list of algal taxa including outgroups were used as input for the script. This information was combined with the MMETSP database, excluding duplications. The algae reference database was clustered to remove closely related sequences with CD-HIT (4.6.1)62 using a similarity threshold of 97%. Using ClustalW (2.1)63 we aligned the reference sequences, with the iteration numbers parameter set to 5. The alignment was examined by colour coding each species according to its group and visualising in iTOL64. It was observed that a few species were misaligned to other groups, and these were deleted using Jalview65. The resulting alignment was tidied up with TrimAl (1.1)66 by applying parameters to delete any positions in the alignment that have gaps in 10% or more of the sequences, except if this would result in less than 60% of the sequence remaining. A maximum likelihood phylogenetic reference tree and statistics file based on our algae reference alignment was constructed by employing RAxML (8.0.20)67 with a general time reversible model of nucleotide substitution along with the GAMMA model of rate heterogeneity. For a description of the lineages of all species back to the root in the algae reference database, the taxa IDs for each species were submitted to extract a subset of the NCBI taxonomy with the NCBI taxtastic tool (0.8.4)68. Based on the algae reference multiple sequence alignment, a profile HMM was created with HMMER3 (3.1B1)69. A pplacer reference package was generated using taxtastic, which produced an organised collection of all the files and taxonomic information in one directory. With the reference package, a SQLite database was created using pplacer's Reference Package PReparer (rppr). With hmmalign, the query sequences were aligned to the reference set, creating a combined Stockholm format alignment. Pplacer (1.1)70 was used to place the query sequences on the phylogenetic reference tree by means of the reference alignment according to a maximum likelihood model70. The place files were converted to CSV with pplacer's guppy tool, in order to easily take those with a maximum likelihood score of ≥0.5 and count the number of reads assigned to each classification. This resulted in 6,053,291 taxonomically assigned reads being taken for analysis.

Normalisation of 18S rDNA gene copy number

18S rDNA gene copy numbers vary widely among eukaryotes. In order to create an estimate of the abundances of the species in the samples, the data had to be normalised. Previous work has explored the link between copy number and genome size71. However, there is no single database of 18S rDNA gene copy numbers for eukaryote species. In order to address this, the gene copy numbers and related genome sizes of 185 species across the eukaryote tree were investigated and plotted (Supplementary Fig. 2, Supplementary Table 4)68,71,72,73,74,75,76,77,78,79.
Based on the log-transformed data, a significant correlation between genome size and 18S copy number was observed, with an R² of 0.55 and a p-value < 2.2e−16. A regression equation was determined, f(x) = 0.66x + 0.75, where x is the log-transformed genome size, as shown in Supplementary Fig. 2. To derive this equation, the genome sizes for the species in the reference datasets were retrieved from the NCBI genome database. Since some of the genome sizes were unavailable, for species with missing genome sizes an average of the available genome sizes of closely related species was taken instead. More specifically, first a taxonomic lineage of the relevant subset of the NCBI database was obtained by submitting the taxa IDs using the NCBI taxtastic tool68. Average genome sizes were then calculated by utilising the parent ID and taxa ID columns and the known genome sizes of the lowest common ancestor. The 18S datasets were normalised by assigning copy numbers estimated from these genome sizes using the regression equation. The files were further normalised by applying the hits per million reads method.
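As a rough illustration, the copy-number step just described can be applied in R as sketched below. The file names, column names, and the assumption that the reported fit used log10 on both axes are ours, not taken from the paper.

```r
# Hedged sketch: estimate 18S copy number from genome size using the reported
# fit f(x) = 0.66x + 0.75 (log10 on both axes assumed), then normalise counts.
genomes <- read.csv("species_genome_sizes.csv")                       # hypothetical: species, genome_size
counts  <- read.csv("station_by_species_counts.csv", row.names = 1)   # stations x species

copy_number <- 10^(0.66 * log10(genomes$genome_size) + 0.75)
names(copy_number) <- genomes$species

norm <- sweep(counts, 2, copy_number[colnames(counts)], "/")  # divide by copy number
hpm  <- sweep(norm, 1, rowSums(norm), "/") * 1e6              # hits per million per station
```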
18S rDNA file preparation

In our 18S rDNA dataset, we had taxonomic assignments from the eukaryote node down to the species nodes. We employed Metagenome Analyzer (MEGAN) (5.10.3)80 to cut out specific taxonomic levels. In MEGAN, we extracted the classifications at the taxonomic rank of species. This consisted of a file being generated for each station that contained the species names and their assigned abundances. The files were further normalised to hits per million. In MEGAN, we extracted the leaves of the taxonomy tree at the rank of class and above, but excluded assignments to the eukaryote node. Firstly, this consisted of a file being generated for each station that contained all assignments to the class nodes, with any assignments under their respective lineages down to species being summed up under the individual class node. Secondly, we included nodes that were not highlighted as class taxonomic level on the leaves of the tree in MEGAN. These leaves were not highlighted because in the NCBI taxonomy there are species that do not have a taxonomic designation at every taxonomic level. We took the nodes that were not highlighted on the leaves of the tree, summed them together within their respective lineages and placed them under a new name. For example, under the phylum Rhizaria, on the leaves of the tree, there are Cercozoa, Gromiidae and unclassified Rhizaria, which are not highlighted. Their abundance was summed together and renamed Nc. Rhizaria, "Nc." standing for "No class". The abundances assigned to Rhizaria itself were not included in this calculation. The leaves of the tree made up 34% of the total 18S rDNA dataset. The internal nodes between the leaves of the tree at the taxonomic rank of class and the eukaryote node were given a "U." in front of their name, "U." standing for "Unknown". This was done to highlight that, while they are of course associated with the lower lineages, they are considered separate, as assignments to those nodes could not be resolved any lower. The internal nodes made up 29% of the total 18S rDNA dataset. The abundance assigned to the eukaryote node was excluded from our analysis, as these sequences could not be classified any lower; this comprised a total of 37% of the 18S rDNA dataset. A file was generated for each station that contained the class nodes, "Nc." nodes and "U." nodes with their respective abundances. The files were further normalised to hits per million. Throughout the paper we refer to the analysis of these files at the taxonomic rank of class.

16S rDNA analysis

JGI performed the classification analysis on the 16S rDNA dataset81,82. JGI's 16S rDNA classification pipeline (iTagger v2.1) consists of firstly removing samples with fewer than 1000 sequences. The remaining samples and the de-duplicated identical sequences from the pre-processing step are then combined and their sequences organised by decreasing abundance. The sequences are divided based on the criterion of whether they contained a cluster centroid with a minimum size of at least 3 copies. The low-abundance sequences are put aside and not used for clustering. USEARCH's83 cluster otus command is employed to incrementally cluster the clusterable sequences. This begins at 99% identity and the radius is increased by 1% for each iteration until an OTU clustering identity of 97% is reached. At each step, the sequences are sorted by decreasing abundance. Once clustering is complete, USEARCH's usearch global is used to map the low-abundance sequences to the cluster centroids. These are added to the OTU counts if they fall within the prescribed percent identity threshold; if they do not, they are discarded. USEARCH's UTAX, along with the SILVA database, is used to evaluate the clustered centroid sequences. The predicted taxonomic classifications are then filtered with a cutoff of 0.5. Any chloroplast sequences identified are removed. The final accepted OTUs and read counts for each sample are then placed in a taxonomic classification file. In order to normalise for 16S copy number, the 16S copy numbers for the species in the dataset were retrieved from the Ribosomal RNA Operon Copy Number Database (rrnDB)84. The rrnDB database version 5.3 consisted at the time of 3021 bacterial entries. Firstly, since multiple entries per species occur in the rrnDB database due to the presence of different strains, we obtained an average copy number for each species in the rrnDB database, which resulted in 2876 species entries. The higher taxonomic levels for the rrnDB species needed to be established so that we could calculate their average copy numbers. For a description of the lineages of all species back to the root in the rrnDB database, we submitted the species names for each entry to extract a subset of the NCBI taxonomy with the NCBI taxtastic tool68, thus producing a Taxtastic file. The Taxtastic file based on species from the rrnDB database was used to calculate the average copy number for higher taxonomic levels from the known species-level copy numbers, with the assistance of the parent ID and taxa ID layout in the Taxtastic file. A Taxtastic file based on 16S rDNA species from our dataset was generated, and we assigned our 16S species entries a copy number from species to root from the prepared average copy number rrnDB Taxtastic file. Not all copy numbers in the 16S rDNA dataset were known. For those that were missing, we therefore took the average copy number of closely related species from the taxonomic level above. The 16S dataset was normalised by dividing by the assigned copy number. The files were further normalised by applying the hits per million reads method. In our 16S rDNA dataset, we had taxonomic assignments from the bacteria node down to the genus nodes. We extracted the classifications at the taxonomic rank of genus.
This consisted of a file being generated for each station that contained the genus names and their assigned abundances. The files were further normalised by applying the hits per million reads method. We extracted the leaves of the tree, which included class nodes and "Nc." nodes, with their respective abundances. This step covered 94% of the 16S rDNA dataset. Also, we extracted the internal nodes and placed "U." in front of their names; this covered 3% of the 16S rDNA dataset. The abundance assigned to the bacteria node was excluded from our analysis, and this comprised a total of 3% of the 16S rDNA dataset. We generated a file for each station that contained the class nodes, "Nc." nodes and "U." nodes with their respective abundances. The files were further normalised by applying the hits per million reads method. Throughout the paper we refer to the analysis of these files at the taxonomic rank of class.

Alpha diversity (Shannon index) in relation to environmental covariates

The Shannon index H'85 was used to calculate abundance-weighted richness per station. The Shannon index was used over the Simpson index as the latter is heavily weighted towards the most abundant orders. The Shannon index was calculated based on the following equation: $$H' = -\sum_{i=1}^{S} p_i \ln p_i$$ Environmental covariates were related to the Shannon index (H') by fitting generalised linear models. Step-by-step backwards selection of covariates was used for model building, removing non-significant covariates until the remaining covariates were significant at a p-value < 0.05. Beta diversity in relation to environmental factors was calculated across the transect based on a Hellinger-transformed class abundance matrix using the vegdist function of the vegan package86. The Bray-Curtis dissimilarity index87 was used as a measure of beta diversity and was calculated based on the following equation: $$BC_{ij}=\frac{\sum_{k} |n_{ik}-n_{jk}|}{\sum_{k} (n_{ik}+n_{jk})}$$
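Both index calculations map directly onto functions in the vegan package cited above. The sketch below is a minimal illustration with an assumed input table, not the authors' code.

```r
# Hedged sketch of the diversity calculations above; the input file is assumed.
library(vegan)

class_abund <- read.csv("station_by_class_abundance.csv", row.names = 1)

H <- diversity(class_abund, index = "shannon")       # Shannon H' per station (natural log)

hel <- decostand(class_abund, method = "hellinger")  # Hellinger transformation
bc  <- vegdist(hel, method = "bray")                 # Bray-Curtis dissimilarities
```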
Evenness and occupancy

Abundance, station evenness and occupancy plots were produced for each 18S rDNA class level (n = 54) and 16S rDNA class level (n = 57) (Supplementary Fig. 5, Supplementary Table 3). The x-axis represents the number of times that class taxonomy occurs across the stations. The y-axis represents the evenness of that class taxonomy across the stations it occurs in. This was calculated using a dispersion index, which is a variant of Pielou's evenness J'88 and based on Shannon's H'85,89. Each circle represents a class taxonomy abundance. Each circle was sized by replacing the area of the circle, which represented the total abundance for that class, with the square root of the abundance divided by pi.

Canonical correspondence analyses (CCAs)

The R package vegan90 was employed to perform a Canonical Correspondence Analysis (CCA) on each dataset of 18S, 16S and metatranscriptome Pfams against the individual environmental variables. The environmental data consisted of temperature, salinity, nitrate/nitrite, phosphate and silicate (Supplementary Fig. 6).

A network analysis was performed using the R package Weighted Gene Co-Expression Network Analysis (WGCNA)91. The first analysis was performed on samples of combined prokaryotes at the taxonomic rank of genus and eukaryotes at the taxonomic rank of species, to describe networks derived from their log10-scaled abundances. The prokaryote and eukaryote normalised files were combined for each station. A signed adjacency measure for each lineage was determined by raising the absolute value of the Pearson correlation coefficient to the power of 11. A topological overlap measure (TOM) was calculated from the resulting adjacency matrix. Hierarchical clustering was carried out on the TOM measure, which resulted in two networks being discovered (Fig. 4). The second analysis was performed on samples of the metatranscriptome Pfam dataset to describe networks derived from their log10-scaled gene counts. A signed adjacency measure for each lineage was determined by raising the absolute value of the Pearson correlation coefficient to the power of 12. A topological overlap measure (TOM) was calculated from the resulting adjacency matrix. Hierarchical clustering was carried out on the TOM measure, which resulted in two networks being discovered (Fig. 2, Supplementary Table 5). When incorporating environmental data, latitude values were redefined so that the North Pole is 0°, the Equator is 90° and the South Pole is 180°. Unaltered environmental data can be found in Supplementary Table 1.
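A minimal WGCNA sketch of this construction is given below. The text describes raising the absolute Pearson correlation to a soft power, which corresponds to WGCNA's "unsigned" adjacency type, so that is what the sketch uses; the input file, pseudocount and module-cutting settings are assumptions.

```r
# Hedged sketch of the network construction; input file, pseudocount and
# minClusterSize are assumptions. The paper reports soft powers 11 (taxa)
# and 12 (Pfams).
library(WGCNA)

dat <- log10(read.csv("combined_16S_18S_abundances.csv", row.names = 1) + 1)  # stations x lineages

adj  <- adjacency(dat, power = 11, type = "unsigned")  # |Pearson cor|^11
TOM  <- TOMsimilarity(adj)                             # topological overlap measure
diss <- 1 - TOM

tree    <- hclust(as.dist(diss), method = "average")
modules <- cutreeDynamic(dendro = tree, distM = diss, minClusterSize = 20)
table(modules)   # two main modules (cold/warm) would be expected here
```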
Beta diversity break-point analysis

The break-point analysis is based on the methodology of ref. 92. The beta diversity index used in the break-point analyses is the Sørensen index. A breakpoint was determined and plotted for each of the Pfam protein families, 18S rDNA and 16S rDNA datasets. Breakpoints in the 18S and 16S rDNA datasets were investigated within the temperature range of 7 °C to 29.02 °C. When incorporating environmental data, latitude values were redefined so that the North Pole is 0°, the Equator is 90° and the South Pole is 180°. Unaltered environmental data can be found in Supplementary Table 1. The break-point analysis was generated using piecewise regression in R. This was calculated by firstly producing a presence-absence matrix for each dataset. A multiple-site dissimilarity was performed on the presence-absence matrix with beta.pair, a function from the betapart R package, with the dissimilarity index set to Sørensen, thus producing a distance object called beta.sor34. Outliers were identified with bagplot, a function from the aplpack R package, and then removed from the analyses. The remaining values were then plotted against the environmental variable (temperature or altered latitude) and searched through for possible breakpoints, that is, for the one with the lowest mean squared error. For the 18S rDNA and 16S rDNA datasets, a number of samples in the North Atlantic Ocean did not pass quality control before sequencing. Due to this, when performing the 18S rDNA and 16S rDNA break-point analyses there were gaps in each dataset's plots in the North Atlantic Ocean region. To investigate the effects of the missing samples, four model scenarios were produced to mimic the missing samples. The first model scenario involved filling in beta diversity values for the missing North Atlantic Ocean stations with those of the closest stations by latitude. This resulted in breakpoints for the 18S and 16S rDNA of 20.66 °C and 9.49 °C, respectively. The second model scenario involved filling in beta diversity values for the missing North Atlantic Ocean with values from the Arctic Ocean. This resulted in breakpoints for the 18S and 16S rDNA of 14.4 °C and 12.07 °C, respectively. The third model scenario involved filling in beta diversity values for the missing North Atlantic Ocean with values from the South Atlantic Ocean. This resulted in breakpoints for the 18S and 16S rDNA of 9.49 °C and 12.22 °C, respectively. The fourth model scenario involved filling in beta diversity values for the missing North Atlantic Ocean with values from both the Arctic Ocean and the South Atlantic Ocean. This resulted in breakpoints for the 18S and 16S rDNA of 14.4 °C and 12.22 °C, respectively. A break-point analysis was performed for the Pfam protein families beta diversity against temperature with the North Atlantic Ocean samples (Stratiphyt-II) removed, to test whether key results remain unchanged (Supplementary Fig. 10e). A breakpoint of 18.2 °C was determined with a p-value of 1.65e−11. Hence, the main result (Fig. 5A) remains unchanged.

IPCC-based modelling of geographical shifts in beta-diversity breakpoints across the North Atlantic

To assess where these boundaries are, we began with the HadISST dataset93, taking the 1961–1990 climatology (Fig. 6). For estimates of changes over the 21st century, we used the RCP 8.5 HadGEM2-ES CMIP5 experiment37. A historical HadGEM2-ES experiment was also run for CMIP5, which we used to bias-correct the projected temperatures. This was achieved by determining the differences between the 1961–1990 HadISST and HadGEM2-ES temperatures for each grid box and adding them to the projections. Grid boxes that contain sea ice in the climatology were excluded from this analysis.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

iTAG rDNA data: https://opendata.earlham.ac. Eukaryotic metatranscriptome data: https://genome.jgi.doe.gov/ (https://doi.org/10.25585/1488054).

References

Falkowski, P. G., Barber, R. T. & Smetacek, V. Biogeochemical controls and feedbacks on ocean primary production. Science 281, 200–206 (1998).
Pierella Karlusich, J. J., Ibarbalz, F. M. & Bowler, C. Phytoplankton in the Tara Ocean. Ann. Rev. Mar. Sci. 12, 233–265 (2020).
Field, C. B., Behrenfeld, M. J., Randerson, J. T. & Falkowski, P. G. Primary production of the biosphere: integrating terrestrial and oceanic components. Science 281, 237–240 (1998).
Finlay, B. J. Global dispersal of free-living microbial eukaryote species. Science 296, 1061–1063 (2002).
Sul, W. J., Oliver, T. A., Ducklow, H. W., Amaral-Zettler, L. A. & Sogin, M. L. Marine bacteria exhibit a bipolar distribution. Proc. Natl Acad. Sci. USA 110, 2342–2347 (2013).
Mayol, E. et al. Long-range transport of airborne microbes over the global tropical and subtropical ocean. Nat. Commun. 8, 201 (2017).
Casteleyn, G. et al. Limits to gene flow in a cosmopolitan marine planktonic diatom. Proc. Natl Acad. Sci. USA 107, 12952–12957 (2010).
Godhe, A. et al. Physical barriers and environmental gradients cause spatial and temporal genetic differentiation of an extensive algal bloom. J. Biogeogr. 43, 1130–1142 (2016).
Ibarbalz, F. M. et al. Global trends in marine plankton diversity across kingdoms of life. Cell 179, 1084–1097 (2019).
Postel, U. et al. Adaptive divergence across Southern Ocean gradients in the pelagic diatom Fragilariopsis kerguelensis. Mol. Ecol. 29, 4913–4924 (2020).
Whittaker, K. A. & Rynearson, T. A. Evidence for environmental and ecological selection in a microbe with no geographic limits to gene flow. Proc. Natl Acad. Sci. USA 114, 2651–2656 (2017).
Thomas, M. K., Kremer, C. T., Klausmeier, C. A. & Litchman, E. A global pattern of thermal adaptation in marine phytoplankton. Science 338, 1085–1088 (2012).
Cavicchioli, R. et al. Scientists' warning to humanity: microorganisms and climate change. Nat. Rev. Microbiol. 17, 569–586 (2019).
Richter, D. J. et al. Genomic evidence for global ocean plankton biogeography shaped by large-scale current systems. bioRxiv. Preprint at https://www.biorxiv.org/content/10.1101/867739v2 (2020).
Lima-Mendez, G. et al. Determinants of the community structure in the global plankton interactome. Science 348, 1262073 (2015).
Kazamia, E., Helliwell, K. E., Purton, S. & Smith, A. G. How mutualisms arise in phytoplankton communities: building eco-evolutionary principles for aquatic microbes. Ecol. Lett. 19, 810–822 (2016).
Wilkins, L. G. E. et al. Host-associated microbiomes drive structure and function of marine ecosystems. PLoS Biol. 17, e3000533 (2019).
Amin, S. A. et al. Interactions between diatoms and bacteria. Microbiol. Mol. Biol. Rev. 76, 667–684 (2012).
Seymour, J. R. et al. Zooming in on the phycosphere: the ecological interface for phytoplankton-bacteria relationships. Nat. Microbiol. 2, 17065 (2017).
Tang, Y. Z. et al. Most harmful algal bloom species are vitamin B1 and B12 auxotrophs. Proc. Natl Acad. Sci. USA 107, 20756–20761 (2010).
Bork, P. et al. Tara Oceans studies plankton at planetary scale. Science 348, 6237 (2015).
Sunagawa, S. et al. Structure and function of the global ocean microbiome. Science 348, 1261359 (2015).
Guidi, L. et al. Plankton networks driving carbon export in the oligotrophic ocean. Nature 532, 465–470 (2016).
Markussen Bjorbaekmo, M. F., Evenstad, A., Roesaeg, L. L., Krabberoed, A. K. & Logares, R. The planktonic protist interactome: where do we stand after a century of research? ISME J. 14, 544–559 (2020).
Lapin, M. & Barnes, B. V. Using landscape ecosystem approaches to assess species and ecosystem diversity. Conserv. Biol. 9, 1148–1158 (1995).
Mori, A. S., Isbell, F. & Seidl, R. Beta-diversity, community assembly, and ecosystem functioning. Trends Ecol. Evol. 33, 549–564 (2018).
Huntemann, M. et al. The standard operating procedure of the DOE-JGI Metagenome Annotation Pipeline (MAP v.4). Stand. Genom. Sci. 11, 17 (2016).
El-Gebali, S. et al. The Pfam protein families database in 2019. Nucleic Acids Res. 47, D427–D432 (2019).
Langfelder, P. & Horvath, S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinforma. 9, 559 (2008).
Von Schreiber, M., Sommer, U. & Juergens, K. Tight coupling of Glaciecola spp. and diatoms during cold-water phytoplankton spring blooms. Front. Microbiol. https://doi.org/10.3389/fmicb.2017.00027 (2017).
James, E. K. et al. Herbaspirillum, an endophytic diazotroph colonizing vascular tissue of Sorghum bicolor L. Moench. J. Exp. Bot. 48, 785–798 (1997).
Hennecke, H. Nitrogen fixation genes involved in the Bradyrhizobium japonicum soybean symbiosis. FEBS Lett. 268, 422–426 (1990).
Oh, H.-M. et al. Complete genome sequence of Erythrobacter litoralis HTCC2594. J. Bact. 191, 2419–2420 (2009).
Baselga, A. & Orme, C. D. L. betapart: an R package for the study of beta diversity. Methods Ecol. Evol. 3, 808–812 (2012).
Zanna, L., Khatiwala, S., Gregory, J. M., Ison, J. & Heimbach, P. Global reconstruction of historical ocean heat storage and transport. Proc. Natl Acad. Sci. USA 116, 1126–1131 (2019).
Capotondi, A., Alexander, M. A., Bond, N. A., Curchitser, E. N. & Scott, J. D. Enhanced upper ocean stratification with climate change in the CMIP3 models. J. Geophys. Res. 117, C4 (2012).
Jones, C. D. et al. The HadGEM2-ES implementation of CMIP5 centennial simulations. Geosci. Model Dev. 4, 543–570 (2011).
Behrenfeld, M. J. et al. Climate-driven trends in contemporary ocean productivity. Nature 444, 752–755 (2006).
Longhurst, A. R. Ecological Geography of the Sea, 2nd edn (Elsevier, 2006).
Fay, A. R. & McKinley, G. A. Global open-ocean biomes: mean and temporal variability. Earth Syst. Sci. Data 6, 273–284 (2014).
Bruno, S., Coppola, D., di Prisco, G., Giordano, D. & Verde, C. Enzymes from marine polar regions and their biotechnological applications. Mar. Drugs 17, 544 (2019).
Giuliodori, A. M. et al. The cspA mRNA is a thermosensor that modulates translation of the cold-shock protein CspA. Mol. Cell 37, 21–33 (2010).
Shing, H.-S. et al. Genome-wide transcriptome analysis revealed organelle specific responses to temperature variations in algae. Sci. Rep. 6, 37770 (2016).
Follows, M. J., Dutkiewicz, S., Grant, S. & Chisholm, S. W. Emergent biogeography of microbial communities in a model ocean. Science 315, 1843–1846 (2007).
de Vargas, C. et al. Eukaryotic plankton diversity in the sunlit ocean. Science 348, 1261605 (2015).
Bowman, J. P., McCammon, S. A., Brown, J. L. & McMeekin, T. A. Glaciecola punicea gen. nov., sp. nov. and Glaciecola pallidula gen. nov., sp. nov.: psychrophilic bacteria from Antarctic sea-ice habitats. Int. J. Syst. Bacteriol. 48, 1213–1222 (1998).
Methe, B. A. et al. The psychrophilic lifestyle as revealed by the genome sequence of Colwellia psychrerythraea 34H through genomic and proteomic analyses. Proc. Natl Acad. Sci. USA 102, 10913–10918 (2005).
Salazar, G. et al. Gene expression changes and community turnover differentially shape the global ocean metatranscriptome. Cell 179, 1068–1083 (2019).
Caporaso, J. G. et al. Global patterns of 16S rRNA diversity at a depth of millions of sequences per sample. Proc. Natl Acad. Sci. USA 108, 4516–4522 (2011).
Stoeck, T. et al. Multiple marker parallel tag environmental DNA sequencing reveals a highly complex eukaryotic community in marine anoxic water. Mol. Ecol. 19, 21–31 (2010).
Tremblay, J. et al. Primer and platform effects on 16S rRNA tag sequencing. Front. Microbiol. 6, 771 (2015).
Martin, J. et al. Rnnotator: an automated de novo transcriptome assembly pipeline from stranded RNA-Seq reads. BMC Genomics 11, 663 (2010).
Magoc, T. & Salzberg, S. L. FLASH: fast length adjustment of short reads to improve genome assemblies. Bioinformatics 27, 2957–2963 (2011).
Edgar, R. C. Updating the 97% identity threshold for 16S ribosomal RNA OTUs. bioRxiv 192211. Preprint at https://doi.org/10.1093/bioinformatics/bty113 (2017).
Bushnell, B. BBMap Guide. https://sourceforge.net/projects/bbmap/ (2014).
Bushnell, B. et al. BBMerge—accurate paired shotgun read merging via overlap. PLoS ONE 12, 1–15 (2017).
Li, D. et al. MEGAHIT: an ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph. Bioinformatics 31, 1674–1676, https://doi.org/10.1093/bioinformatics/btv033 (2015).
Eddy, S. R. Hidden Markov models. Curr. Opin. Struct. Biol. 6, 361–365 (1996).
Finn, R. D. et al. The Pfam protein families database: towards a more sustainable future. Nucleic Acids Res. 44, D279–D285 (2016).
Quast, C. et al. The SILVA ribosomal RNA gene database project: improved data processing and web-based tools. Nucl. Acids Res. 41, D590–D596 (2013).
Keeling, P. J. et al. The marine microbial eukaryote transcriptome sequencing project (MMETSP): illuminating the functional diversity of eukaryotic life in the oceans through transcriptome sequencing. PLoS Biol. 12, e1001889 (2014).
Li, W. & Godzik, A. Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences. Bioinformatics 22, 1658–1659 (2006).
Thompson, J. D. et al. CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice. Nucl. Acids Res. 22, 4673–4680 (1994).
Letunic, I. & Bork, P. Interactive Tree Of Life (iTOL): an online tool for phylogenetic tree display and annotation. Bioinformatics 23, 127–128 (2007).
Waterhouse, A. M. et al. Jalview Version 2: a multiple sequence alignment editor and analysis workbench. Bioinformatics 25, 1189–1191 (2009).
Capella-Gutierrez, S. et al. trimAl: a tool for automated alignment trimming in large-scale phylogenetic analyses. Bioinformatics 25, 1972–1973 (2009).
Stamatakis, A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics 30, 1312–1313 (2014).
NCBI Resource Coordinators. Database resources of the National Center for Biotechnology Information. Nucl. Acids Res. 44, D7–D19 (2016).
Eddy, S. R. A new generation of homology search tools based on probabilistic inference. Genome Inform. 23, 205–211 (2009).
Matsen, F. A. et al. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree. BMC Bioinforma. 11, 538 (2010).
Prokopowich, C. D. et al. The correlation between rDNA copy number and genome size in eukaryotes. Genome 46, 48–50 (2003).
Godhe, A. et al. Quantification of diatom and dinoflagellate biomasses in coastal marine seawater samples by real-time PCR. Appl. Environ. Microbiol. 74, 7174–7182 (2008).
Carlton, J. M., Perkins, S. L. & Deitsch, K. W. (eds). Malaria Parasites: Comparative Genomics, Evolution, and Molecular Biology (Caister Academic Press, 2013).
Torres-Machorro, A. L. et al. Ribosomal RNA genes in eukaryotic microorganisms: witnesses of phylogeny? FEMS Microbiol. Rev. 34, 59–86 (2010).
Oliver, M. J. et al. The mode and tempo of genome size evolution in eukaryotes. Genome Res. 17, 594–601 (2007).
Moreau, H. et al. Gene functionalities and genome structure in Bathycoccus prasinos reflect cellular specializations at the base of the green lineage. Genome Biol. 13, R74 (2012).
Boucher, N. et al. Flow cytometric determination of phytoplankton DNA in cultures and oceanic populations. Mar. Ecol. Prog. Ser. 71, 75–84 (1991).
Hauser, P. M. et al. Comparative genomics suggests that the fungal pathogen Pneumocystis is an obligate parasite scavenging amino acids from its host's lungs. PLoS ONE 5, e15152 (2010).
Nordberg, H. et al. The genome portal of the Department of Energy Joint Genome Institute: 2014 updates. Nucl. Acids Res. 42, D26–D31 (2014).
Huson, D. H. et al. Integrative analysis of environmental sequences using MEGAN4. Genome Res. 21, 1552–1560 (2011).
Huntemann, M. et al. The standard operating procedure of the DOE-JGI Microbial Genome Annotation Pipeline (MGAP v.4). Stand. Genom. Sci. 10, 4–9 (2015).
Chen, I. et al. The IMG/M data management and analysis system v.6.0: new tools and advanced capabilities. Nucl. Acids Res. 49, D751–D763 (2020).
Edgar, R. C. USEARCH cluster otus (2010).
Klappenbach, J. A. et al. rrnDB: the Ribosomal RNA Operon Copy Number Database. Nucl. Acids Res. 29, 181–184 (2001).
MacArthur, R. H. & MacArthur, J. W. On bird species diversity. Ecology 42, 594–598 (1961).
Oksanen, J. et al. vegan: community ecology package. R package version 2.3–5 (2016).
Bray, J. R. & Curtis, J. T. An ordination of the upland forest communities of southern Wisconsin. Ecol. Monogr. 27, 325–349 (1957).
Pielou, E. The measurement of diversity in different types of biological collections. J. Theor. Biol. 13, 131–144 (1966).
Payne, L. X. et al. Quantifying spatial pattern with evenness indices. Ecol. Appl. 15, 507–520 (2005).
Dixon, P. VEGAN, a package of R functions for community ecology. J. Veg. Sci. 14, 927–930 (2003).
Langfelder, P. & Horvath, S. Eigengene networks for studying the relationship between co-expression modules. BMC Syst. Biol. 1, 54 (2007).
Castro-Insua, A. et al. Break the pattern: breakpoints in beta diversity of vertebrates are general across clades and suggest common historical causes. Glob. Ecol. Biogeogr. 25, 1279–1283 (2016).
Rayner, N. A. et al. Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res. 108, D14 (2003).

Acknowledgements

We would like to thank Captains Schwarze and Wunderlich and the RV 'Polarstern' crews of the ARK27-1, ANT29-1 and PS103 expeditions for their vital help during sampling. The Southern Ocean sampling was performed as part of project AWI_PS103_04. The work conducted by the U.S. Department of Energy Joint Genome Institute, a DOE Office of Science User Facility, is supported by the Office of Science of the U.S. Department of Energy under contract no. DE-AC02-05CH11231. R.M.L. acknowledges funding from BBSRC Core Strategic Programme Grant BB/CSP1720/1. T.M. acknowledges funding from the U.S. Department of Energy, Joint Genome Institute (Grant 532, Community Science Program) and the Natural Environment Research Council (NERC) (Grants NE/K004530/1; NE/R000883/1). The PhD studentship of K.M. was funded by the University of East Anglia (UEA) and the Earlham Institute. The PhD studentship of K.S. was funded by the School of Environmental Sciences at UEA.

Author information

These authors contributed equally: Kara Martin, Katrin Schmidt.

School of Computing Sciences, University of East Anglia, Norwich Research Park, Norwich, UK: Kara Martin & Vincent Moulton
Earlham Institute, Norwich Research Park, Norwich, UK: Kara Martin & Richard M. Leggett
School of Environmental Sciences, University of East Anglia, Norwich Research Park, Norwich, UK: Katrin Schmidt, Andrew Toseland & Thomas Mock
Global Systems Institute, University of Exeter, Exeter, UK: Chris A. Boulton & Timothy M. Lenton
U.S. Department of Energy Joint Genome Institute, Lawrence Berkeley National Laboratory, Berkeley, CA, USA: Kerrie Barry, Alicia Clum, Chris G. Daum, Emiley Eloe-Fadrosh, Brian Foster, Bryce Foster, Marcel Huntemann, Natalia N. Ivanova, Nikos C. Kyrpides, Erika Lindquist, Supratim Mukherjee, Krishnaveni Palaniappan, T. B. K. Reddy, Simon Roux, Susannah G. Tringe, Neha Varghese & Igor V. Grigoriev
Department of Biology, University of Duisburg-Essen, Essen, Germany: Bánk Beszteri
Royal Netherlands Institute for Sea Research, Texel, The Netherlands: Corina P. D. Brussaard & Klaas Timmermans
Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany: Allison Fong, Michael Ginzburg, Mariam R. Rizkallah & Klaus U. Valentin
Valentin Centre for Isotope Research - Oceans, Energy and Sustainability Research Institute Groningen, Faculty of Science and Engineering, University of Groningen, AG Groningen, The Netherlands Willem H. van de Poll Plant and Microbial Biology Department, University of California, Berkeley, CA, USA Kara Martin Andrew Toseland Chris A. Boulton Kerrie Barry Corina P. D. Brussaard Alicia Clum Chris G. Daum Emiley Eloe-Fadrosh Allison Fong Brian Foster Bryce Foster Michael Ginzburg Marcel Huntemann Natalia N. Ivanova Nikos C. Kyrpides Erika Lindquist Supratim Mukherjee Krishnaveni Palaniappan T. B. K. Reddy Mariam R. Rizkallah Simon Roux Klaas Timmermans Susannah G. Tringe Neha Varghese Klaus U. Valentin Timothy M. Lenton Richard M. Leggett Vincent Moulton Thomas Mock T.M. conceived and coordinated the project and helped with sample preparation and data analysis. The data analysis was directed by T.M. and V.M. in cooperation with R.M.L. T.M. wrote the manuscript with contributions from K.M., K.S. V.M., R.M.L, and A.T. K.S. collected samples in the Arctic and South Atlantic with help of M.G. and M.R.R., performed nucleic acid extractions. K.M. performed the analysis of most sequence data generated by JGI. A.T. contributed to sequence analysis performed by K.M. C.P.D.B., K.T. and WvdP provided samples from the Stratiphyt Cruise (22 stations between Canary Islands and Iceland). A.F., K.U.V. and B.B. provided samples from the Southern Ocean and performed sample preparation for sequencing. C.A.B. performed the IPCC-based modelling under the guidance of T.M.L. E.L., K.B., A.C., C.G.D., Bri.F., Bry.F. M.H., K.P., T.B.K.R., N.N.I., N.C.K., S.M. and N.V. performed library preparation, sequencing and IMG-based analyses at JGI. I.V.G., S.R., S.G.T. and E.E.-F. coordinated sequencing work at JGI. Correspondence to Thomas Mock. Peer review information Nature Communications thanks Pedro Cermeño, Senjie Lin, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Martin, K., Schmidt, K., Toseland, A. et al. The biogeographic differentiation of algal microbiomes in the upper ocean from pole to pole. Nat Commun 12, 5483 (2021). https://doi.org/10.1038/s41467-021-25646-9
CommonCrawl
Does an irreversible reaction have an equilibrium between reactants and products?

Retrospective analysis 2/13/2017 -- The barium sulfate example is a poor choice. Equilibrium equations should really be defined using activities, and the activity of solid barium sulfate is by definition 1.

A previous question, Is every chemical reaction in equilibrium?, started much discussion. I objected to the answer that Curt F. gave and challenged him to derive the equilibrium reaction for a particular irreversible reaction. He countered with a reply indicating that I should make the specific case a separate question - so here it is. I'm going to change the reaction slightly. Given the following reaction between barium chloride and sodium sulfate, does "the" chemical equilibrium exist? $$\ce{BaCl2(aq) + Na2SO4(aq) -> BaSO4 v + 2NaCl(aq)}$$ I contend that "the" equilibrium between reactants and products such as $$K_{\text{eq}} = \frac{\ce{[BaSO4][NaCl]^2}}{\ce{[BaCl2][Na2SO4]}}$$ doesn't exist, since when the barium sulfate precipitates there could be a microgram or a kilogram as the product. Furthermore, adding solid barium sulfate to the product will not shift the reaction to the left. I'd agree that calling the reaction irreversible and saying that $K_{\text{eq}}$ doesn't exist is a tautology. So, given a reaction between $\ce{aA + bB}$ forming products $\ce{cC + dD}$, the reaction is a reversible reaction if an equilibrium such as $$K_{\text{eq}} = \frac{\ce{[C]^{c}[D]^{d}}}{\ce{[A]^{a}[B]^{b}}}$$ exists, and if such an equilibrium doesn't exist then it is an irreversible reaction. There are obviously "some" equilibria in the reaction, but not "the" equilibrium between products and reactants. Water has an autoionization equilibrium and $\ce{H2SO4}$ has two $\mathrm{p}K_\mathrm{a}$'s. Barium sulfate has a $K_{\text{sp}}$. Also, the barium sulfate precipitate isn't static once formed. It is dissolving and reprecipitating at the same rate, but the rates depend on the surface area of the precipitate, not the "concentration" (or mass) of the precipitate. So, if I'm wrong, how do you calculate $K_{\text{eq}}$ for the overall reaction given? Assume you mix $500\ \mathrm{ml}$ of $0.1$ molar barium chloride with $500\ \mathrm{ml}$ of $0.1$ molar sodium sulfate. What is $K_{\text{eq}}$?!

Tags: thermodynamics, equilibrium. Asked by MaxW.

I like the question, but I think that it comes down to a purely philosophical question: what do you consider to be the reactants? If you consider $\ce{BaCl2(aq)}$ the reactant, then you need to include that these are in solution, hence it is $\ce{Ba^2+ + 2Cl-}$. Same for sodium sulfate and sodium chloride. Therefore only barium ions and sulfate ions are left as reactants, and yes, they are in equilibrium. – Martin - マーチン♦ Mar 4 '16

Why not? I would use $$K^\circ = \exp\{-\Delta_\mathrm{r}G^\circ/RT\}$$ and then you can formulate your equilibrium constant in terms of activity: $$K=\frac{a(\ce{Ba^2+})a(\ce{SO4^2-})}{a(\ce{BaSO4})}.$$ It obviously boils down to $K_\mathrm{sp}$ with all approximations included, but it is still a reaction which is in equilibrium. – Martin - マーチン♦ Mar 4 '16

@ChesterMiller - The notion was that K isn't a "true" equilibrium constant between reactants and products, since adding more solid barium sulfate to the products wouldn't shift the equilibrium to the left. // @Curt_F. also has made a detailed analysis of why an equilibrium exists, using a thermodynamic argument:
chemistry.stackexchange.com/a/43262/22102 My thermodynamics is very rusty and I forgot that the activity of a solid is 1 until Martin posted his comment. So the chemistry works, but it sidesteps the amount of barium sulfate precipitate produced in the "equilibrium". – MaxW Mar 4 '16

@Shadock - the evaporation of water isn't really a chemical reaction of reactants to yield a product. – MaxW Mar 4 '16

@Shadock - Considering thermodynamics, you end up with two phases, like the barium sulfate. It doesn't matter how much liquid water is left, just that there is some. The activity of liquid water will be 1. – MaxW Mar 4 '16

The reaction you are interested in is the precipitation of barium sulfate, a relatively insoluble salt. The reverse of this reaction is the dissolution of barium sulfate crystals. The dissolution kinetics of barium sulfate have been studied in a variety of systems over the decades. Here are a few links:

Kornicker et al. 1991 reported that barium sulfate dissolution into water reached equilibrium in less than 30 minutes at 25 °C and took about 5 minutes at 60 °C. They also presented a rate law for $\ce{BaSO4}$ dissolution of $r = k A (C_{eq} - C)^2$, where $A$ is the specific area of the barium sulfate crystals, $C_{eq}$ is the equilibrium concentration of barium sulfate, and $C$ the instantaneous concentration of dissolved barium sulfate. This rate law had been used historically (see refs.), but these authors advocate instead for a first-order rate law.

Dove and Czank 1995 also disagreed with the second-order rate law of Kornicker et al. and proposed a first-order reaction for $\ce{BaSO4}$, i.e. $r = k A (C_{eq} - C)$.

I couldn't find any references for barite solubility in sodium chloride solution, but I think it is safe to say that the solubility is finite. Probably the kinetics of dissolution would not be more than an order of magnitude different than in pure water. Since this dissolution is the reverse of the precipitation reaction you are interested in, and since the literature is clear that dissolution happens at non-zero rates, both the "reverse" reaction and the forward reaction must be happening in your system. That sounds like the definition of a dynamic equilibrium to me.

Note the appearance of $A$ in the equations. The surface area of barium sulfate solid involved in the equilibrium determines the kinetics. Usually we assume that the effect of $A$ is the same for dissolution and precipitation kinetics, so that the effects cancel out. This is why we say that the activity of a solid phase is 1 and does not vary. However, the effects of $A$ do not always cancel out. The solubility of nanoparticles of $\ce{BaSO4}$ is probably higher than the solubility of bulk $\ce{BaSO4}$ for this reason.

Experimentally, one could probe the dynamics of the equilibrium by mixing $\ce{^{131}BaSO4(s)}$, i.e. radiolabeled barium sulfate, with a saturated solution of $\ce{^{138}BaSO4(aq)}$. This could be done even in a sodium chloride solution. At regular time intervals, samples of the solution would be taken, which could be filtered to remove any traces of solid particulates, and the levels of the radiotracer that had entered solution could be measured. I don't know, quantitatively, what the outcome of the experiment would be, especially in sodium chloride solution. But I am 100% confident that at some time scale, probably within tens of minutes, radioactivity would appear in the liquid solution.
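To make the first-order rate law $r = k A (C_{eq} - C)$ concrete, here is a minimal numerical sketch of how the dissolved concentration relaxes towards saturation; the values of $k$, $A$ and $C_{eq}$ below are illustrative assumptions, not numbers taken from Kornicker et al. or Dove and Czank.

```python
# Minimal sketch of the first-order dissolution law r = k*A*(C_eq - C).
# k, A and C_eq are illustrative assumptions, not literature values.
k = 0.05        # rate constant, 1/(min * m^2/L) -- assumed
A = 2.0         # specific surface area of the solid, m^2/L -- assumed
C_eq = 1.05e-5  # equilibrium (saturation) concentration, mol/L -- assumed

C, dt = 0.0, 0.1  # start from pure water; time step in minutes
for step in range(int(60 / dt)):       # simulate one hour
    C += k * A * (C_eq - C) * dt       # forward Euler update
    if step % 100 == 99:               # print every 10 minutes
        print(f"t = {(step + 1) * dt:4.0f} min, C = {C:.3e} mol/L")
# C relaxes exponentially towards C_eq: once C = C_eq, dissolution and
# precipitation proceed at equal, non-zero rates.
```

Whatever constants are assumed, the qualitative point is the same: $C$ approaches $C_{eq}$ exponentially, and at saturation the forward and reverse processes balance at non-zero rates.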
Such an outcome indicates that there is an equilibrium. The relevant equilibrium constant in this case is the $K_{\mathrm{sp}}$ for barium sulfate, as was alluded to in both your question and in the comments. – Curt F.

To continue with this question, but in a more general way, after a comment on the particular reaction in question: the case of $\ce{BaSO4}$ has been chosen carefully to argue a particular point, but it seems to me that there are two equilibria involved. The first is chemical, $\ce{Ba^2+ + SO4^2- <=> BaSO4(aq)}$; the second involves no chemical reaction but is the equilibrium between $\ce{BaSO4(aq)}$ and its solid. Because of the huge insolubility, the reaction appears to be irreversible, as the product is effectively removed from solution, which drives the reaction very much towards products. If the reaction were to be studied in another polar solvent (one might try acetonitrile or dichloromethane, for example), where the insolubility is probably far less, then perhaps the true equilibrium $\ce{Ba^2+ + SO4^2- <=> BaSO4}$ would be measured.

Technically, all reactions are reversible, since thermodynamics does not impose an energy difference or time-scale limit on an equilibrium, and assumes that reactants and products are not artificially separated from one another. One might consider an irreversible reaction to be one in which, after some time, no appreciable concentration of reactants is left, but this assumes that the forward reaction is rapid enough to occur on whatever time-scale we set ourselves. Thus a better way is to know the reaction's free energy and then, as this becomes the minimum size of the reverse reaction's activation barrier, subjectively estimate whether the reaction is going to be sufficiently exothermic to make it 'irreversible'. As only orders of magnitude of rate constants are required, this can reasonably be done from a simple Arrhenius-type equation. A reaction with a first-order rate constant of 1/sec might be considered irreversible in some cases, but 1/year in others; geochemists who study rock formation may perhaps consider such reactions as being very fast.

At the molecular level a reaction occurs because there is always a chance that, by random collisions with the surrounding solvent molecules (or other molecules in a gas or vapour), enough energy is imparted to the reactant molecules to overtop the activation barrier and transform them to products, and similarly for products returning to reactants. As the probability of reacting decreases exponentially with energy (Boltzmann distribution), this probability soon becomes vanishingly small with an increase in energy, and hence for practical purposes a reaction may, in practice, be considered 'irreversible', or similarly products 'un-reactive', even though there is still technically an equilibrium between reactants and products. – porphyrin

(Note: this answer writes the reaction with $\ce{H2SO4}$ where the question has $\ce{Na2SO4}$; the argument carries over.) $$\ce{BaCl2(aq) + H2SO4(aq) -> BaSO4 v + 2HCl(aq)}$$ Let's just think about what (aq) means: it means you have ions floating about in there, and each salt has a solubility equilibrium ($K_{\mathrm{sp}}$) with its solid. If you start by assuming no precipitate, just dissolved ions, we have $\ce{Ba^2+}$, $\ce{H+}$, $\ce{Cl-}$ and $\ce{SO4^2-}$. Then you consider the $K_{\mathrm{sp}}$ of the different salts, which are $\ce{BaCl2}$, $\ce{HCl}$, $\ce{BaSO4}$ and $\ce{H2SO4}$. They will all tend towards their $K_{\mathrm{sp}}$, so all the salts would be forming and being dissolved (unless blocked, e.g. things can become supersaturated).
$\ce{BaSO4}$ has an extremely low $K_{\mathrm{sp}}$, so most of it will precipitate; at the same time, $\ce{BaCl2}$ will go to its $K_{\mathrm{sp}}$ with the $\ce{Ba^2+}$ and $\ce{Cl-}$ ions which are still in solution, and $\ce{H2SO4}$ would also go to its $K_{\mathrm{sp}}$, which means $\ce{BaCl2}$ is being formed and therefore there is a reverse reaction. (Note: if I had $\ce{BaSO4}$ in equilibrium with water (so a tiny amount dissolved, i.e. tiny amounts of $\ce{Ba^2+}$ and $\ce{SO4^2-}$ ions) and I added $\ce{Cl-}$ ions, a negligible amount more of $\ce{BaSO4}$ would dissolve: as $\ce{BaCl2}$ goes to equilibrium it reduces the $\ce{Ba^2+}$ ions, leading to a negligible amount of $\ce{BaSO4}$ dissolving to remain at $K_{\mathrm{sp}}$.) This also illustrates Le Chatelier's principle.
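As a numerical footnote to the thread, the net ionic view plus the solubility product is enough to answer the question's 500 ml + 500 ml challenge. A minimal sketch, assuming the literature value $K_{\mathrm{sp}}(\ce{BaSO4}) \approx 1.1\times10^{-10}$ at 25 °C (an assumed value, not one given in the thread):

```python
import math

# Assumption: Ksp(BaSO4) ~ 1.1e-10 at 25 C (literature value, not from the thread).
KSP = 1.1e-10

# Mixing 500 mL of 0.1 M BaCl2 with 500 mL of 0.1 M Na2SO4 halves both
# concentrations: 0.05 M Ba2+ and 0.05 M SO4^2- in 1 L of solution.
c0 = 0.05  # mol/L of Ba2+ and of SO4^2- right after mixing

# Equal stoichiometry, so at equilibrium [Ba2+] = [SO4^2-] = sqrt(Ksp).
c_eq = math.sqrt(KSP)

precipitated = c0 - c_eq        # mol of BaSO4(s) formed in the litre
mass = precipitated * 233.39    # molar mass of BaSO4 ~ 233.39 g/mol

print(f"[Ba2+] at equilibrium: {c_eq:.2e} M")
print(f"BaSO4 precipitated: {precipitated:.4f} mol (~{mass:.1f} g)")
```

Essentially all of the 0.05 mol of barium ends up as solid (roughly 11.7 g of $\ce{BaSO4}$), while the residual ion concentrations settle at $\sqrt{K_{\mathrm{sp}}} \approx 1.05\times10^{-5}$ M: small but finite, which is the sense in which the 'irreversible' precipitation is still a dynamic equilibrium.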
CommonCrawl
The impact of refugee experiences on education: evidence from Burundi

Sonja Fransen (ORCID: orcid.org/0000-0002-7709-4418), Carlos Vargas-Silva & Melissa Siegel

IZA Journal of Development and Migration, volume 8, article number 6 (2018)

Abstract

Previous studies suggest that displacement is one of the channels through which conflict impacts schooling outcomes. However, there is scarce evidence on this impact for those who are displaced internationally (i.e. refugees). We use data from Burundi, a country which experienced large-scale conflict-led emigration and substantial post-war refugee return, to explore differences in schooling outcomes between returnees, defined as individuals who were displaced to a neighbouring country and later returned home, and stayees, defined as individuals who never left the country during the conflict (i.e. those who were never displaced and those who were only displaced internally). Our results suggest that, controlling for pre-war characteristics and cohort effects, returning refugees are more likely to have finished primary school than their contemporaries who never left the country. We also find that an additional year spent as a refugee while of schooling age is associated with a four to six percentage point increase in the likelihood of finishing primary school.

JEL Classification: F22, D74, I25

The number of displaced persons worldwide is currently at its highest level since the Second World War. More than 65 million people around the globe were forcibly displaced in 2015, of which approximately a third (21 million) were displaced internationally (i.e. refugees). The vast majority of refugees reside in neighbouring developing countries (United High Commissioner for Refugees 2016a). The consequences of displacement for those affected are significant, frequently long-lasting, and affect multiple aspects of human life. One important aspect that displacement experiences can affect is access to education. Several studies suggest that forced displacement is one of the key channels through which conflict can have a detrimental impact on schooling outcomes (Chamarbagwala and Moran 2011; Justino et al. 2014; Verwimp and Van Bavel 2014). Most studies have, however, focused on internally displaced persons (IDPs), while there is scarce evidence on the impact of forced displacement on the education of refugees. One of the main reasons for the scarcity of evidence on the impact of refugee experiences on schooling is the lack of datasets that include a large sample of individuals who experienced international displacement and their contemporaries who did not, so that the educational outcomes of both groups can be compared. This type of analysis is only possible in countries which experienced a large outflow of refugees and a large inflow of returnees after the end of the conflict. This paper makes use of a nationally representative survey recently conducted by the authors in Burundi, a country which has experienced these two flows. The survey was conducted during early 2015 and involved interviews with 1500 households resident in 100 communities. The selected communities were distributed across all the provinces of the country according to the demographic weight of these provinces in the 2008 Census. A community representative was also interviewed in each of the communities. In addition, in households with returnee members, a randomly selected returnee was selected for an in-depth interview about experiences while in displacement and upon return.
In this paper, we explore differences in schooling outcomes between returnees, defined as individuals who were displaced to a neighbouring country and later returned home, and stayees, defined as individuals who never left the country during the conflict (i.e. those who were never displaced and those who were only displaced internally). Given the low levels of schooling in Burundi, we focus on the completion of primary education and explore differences in the impact of refugee and stayee experiences across different schooling cohorts. Refugee experiences may have negative as well as positive impacts on schooling. For instance, while children are physically fleeing, which may take more or less time, they do not have access to education. Before settling in a new location, children frequently end up in a transitory situation where they may not have access to schools. The family could also settle permanently in a remote area with no schools. Moreover, children in displacement are often more likely to become infected with certain diseases (Connolly et al. 2004), experience food shortages (Dharod et al. 2013) and rely on coping mechanisms such as early marriage, all of which may have a negative impact on education (Oh and van der Stouwe 2008). International displacement might also be related to greater loss of property and wealth during the conflict (Fransen et al. 2017) and a need for children to get involved in income-generating activities. On the other hand, refugee experiences could lead to better schooling outcomes compared to those of children who, for various reasons, do not leave their country of origin when conflict erupts. Refugee children have a right to protection and assistance, including the right to a basic education, as stated by the 1951 UN Refugee Convention relating to the Status of Refugees (United High Commissioner for Refugees 1951). Many refugee camps or refugee-hosting areas are therefore equipped with primary education facilities, often provided for and/or financed by NGOs or international agencies (United Nations Educational, Scientific and Cultural Organization 2011). Refugee children also frequently have more access to humanitarian assistance and other sources of support than stayees, particularly when they reside in camps. Children may also end up being hosted in countries that have better education systems than those back in their country of origin. In contrast, their contemporaries in the home country need to rely on their national government to provide services such as education. Many conflict-affected states lack the capacity and/or willingness to provide these services (UNESCO 2011). Children who stay behind are also more likely to be conscripted and experience higher levels of insecurity, two factors that have been shown to have negative consequences for human capital acquisition (Blattman and Annan 2010). Burundi experienced a civil war between 1993 and 2005. The conflict resulted in an estimated 300,000 casualties and an estimated 700,000 refugees (Ngaruko and Nkurunziza 2005). The majority of refugees settled in camps in north-western Tanzania (Fransen 2015; Ruiz and Vargas-Silva 2015, 2016, 2017; Whitaker 2002). The United Nations High Commissioner for Refugees (UNHCR) supervised and sponsored the schools in refugee camps in Tanzania (Skonhoft 2010). The education system in Burundi was seriously affected as a result of the war, as national primary enrolment rates plummeted by close to 15% during the conflict (World Bank 2016).
The extent to which refugee schooling outcomes differ from those who never left the country is unknown. Given that the large majority of Burundians displaced abroad by the 1993–2005 conflict had returned home by 2015 (Harild et al. 2015; Fransen 2015; Fransen et al. 2017), it is possible to compare the schooling outcomes of returnees with the outcomes of their contemporaries in Burundi with our data. Our results suggest that, controlling for pre-war characteristics and cohort effects, returning refugees are 16 to 28 percentage points more likely to have finished primary school than their contemporaries who never left the country. The result is driven by individuals who were affected by displacement during their school-age years. We also find that an additional year spent as a refugee while school aged increases the likelihood of finishing primary school by four to six percentage points. These findings correspond with reports which suggest that children who were of schooling age during the conflict and who were displaced internationally had better access to education facilities than those who stayed in Burundi (Integrated Regional Information Network 2002). We also provide a simple comparison of the schooling outcomes of returnees with those of residents of Kagera, a region of north-western Tanzania that borders Burundi, and there is suggestive evidence that returnees are better off than their hosts in Tanzania. While we cannot completely rule out that some of our findings are driven by pre-war differences between returnees and stayees (i.e. those who never left the country), we show that the results are robust to the inclusion of multiple controls for pre-war economic conditions. We also conduct a placebo test to support this conclusion. The rest of the paper is structured as follows. The next section explains the rights of the displaced to primary school education and discusses the existing evidence. The third section presents the historical background. The fourth section presents the data and methodology. The fifth section presents the main results of the paper. Section 6 presents complementary evidence from the survey on experiences while in displacement. Section 7 presents a comparison of returnees with the outcomes of Tanzanians from the same schooling cohort. Section 8 presents a series of robustness tests, and the last section concludes.

Displacement and schooling outcomes: rights and previous evidence

The 1951 Convention Relating to the Status of Refugees establishes the right to primary education for refugees. In particular, the Convention states that host governments should ensure that refugees are given the "same treatment as is accorded to nationals with respect to elementary education" (UNHCR 1951). Moreover, UNHCR has a mandate to protect refugees, which includes the provision of education (Waters and LeBlanc 2005). However, in many cases there is a substantial gap between the legal right to education of refugees and the actual provision of such education (Dryden-Peterson 2015a). For instance, a study conducted on Syrian refugee children in Lebanon showed that 80% of them did not attend school in 2013. Similarly, 56% of Syrian school-age children did not attend school in Jordan in that year. School dropout rates and class failure rates were also significantly higher among refugee children as compared to local children (UNICEF 2015).
Insufficient access to education is particularly likely in cases of urban displacement, as many urban schools are already stretched and lack space for new pupils. Often it is also unfeasible to build new schools or expand existing ones in urban areas. United High Commissioner for Refugees (2016a, b, c) estimates that only about half of refugee children worldwide had access to primary education in 2015. In the estimations, we compare returnees with all those who remained in Burundi during the conflict. This includes those who were displaced internally (IDPs) as well as those who did not leave their communities of origin during the conflict. In the case of IDPs, the responsibility to provide primary education lies with national authorities. As explained by Justino (2011), educational facilities in IDP camps are not very common and the provision of this service "is typically disorganised, when it exists at all." National authorities are also responsible for providing education to those children who never leave their communities of origin. While these children are not affected by displacement, they suffer from other detrimental consequences of conflict for education, including the destruction of schools, the killing and exodus of teachers, household income shocks and decreases in state investment in education (ibid.). There is a substantial literature which has explored the overall impacts of conflict on schooling outcomes (Akresh and De Walque 2008; Chamarbagwala and Moran 2011; Di Maio and Nandi 2013; Ichino and Winter-Ebmer 2004; Lai and Thyne 2007; Leon 2012; Shemyakina 2011; Valente 2014), but just a few studies have explored the specific impact of forced displacement on schooling outcomes. This evidence mostly refers to internal displacement and suggests that these experiences have major negative consequences for schooling outcomes. For instance, Justino et al. (2014) estimated that, in Timor Leste, experiencing displacement decreased school attendance by 8.5 percentage points. For Burundi, Verwimp and Van Bavel (2014) estimated that the probability of completing primary schooling declined by 2 percentage points for every year spent in a camp. However, Verwimp and Van Bavel (2014) used data from 2002, before the large wave of refugee return to the country. That means that they are mostly measuring the negative impact of internal displacement experiences and are not capturing the impact of international displacement experiences (more on this in the next section).

Historical background of Burundi

Burundi is a small country in the African Great Lakes region that has been one of the poorest in the world for years. The country ranked 184th (out of 188) in the Human Development Index in 2014 (United Nations Development Programme 2015) and gross national income per capita was just USD 270 in 2014, which is substantially lower than the average for sub-Saharan Africa (USD 1699). Burundi is also densely populated. It had the third-highest population density in Africa in 2015, with 435 people per square kilometre of land area (World Bank 2016), which is much higher than the average for sub-Saharan Africa (42 people per square kilometre). Even though agricultural land is scarce, approximately 90% of Burundians depend on subsistence farming as their main source of income and nutrition (World Bank 2015). Burundi's history is characterised by tensions between the country's two main ethnic groups: Hutus and Tutsis.
These ethnic tensions are part of a complex and multifaceted power struggle that has led to large-scale conflict. In 1993, the events that led to the biggest conflict in Burundi's history started when Melchior Ndadaye became the first democratically elected Hutu president of the country. He was assassinated a few months later by Tutsi soldiers. The assassination led to a long civil war that lasted from 1993 to 2005 (Ngaruko and Nkurunziza 2005). Although there had been previous conflict episodes in Burundi, such as the one in 1972, the scale and intensity of the 1993–2005 conflict set it apart from earlier ones. Whereas previous violent episodes were limited to certain provinces, the 1990s war was a countrywide conflict. Hundreds of thousands of Burundians fled the country during the conflict. As shown in Fig. 1, while many Burundian refugees initially went to the Democratic Republic of Congo (DRC) and Rwanda, Tanzania was by far the main host of refugees (Footnote 1). Burundians who fled to Tanzania were settled in refugee camps in the north-western part of the country (Fransen 2015; Ruiz and Vargas-Silva 2015, 2016, 2017). Refugee experiences in Tanzania were typically protracted, with an average duration of 10 years (Fransen et al. 2017).

Fig. 1: Burundian refugees in DRC, Rwanda and Tanzania (data source: UNHCR population statistics)

Living conditions in camps in Tanzania differed across sites but were generally better than those in Burundi during the war. Still, many refugees experienced serious hardship. For example, Burundian refugees in Tanzania were not provided with agricultural land (Harild et al. 2015). This was in contrast to previous cohorts of refugees, such as those who fled the country in 1972. Mobility and economic activities of the refugees were also restricted. After the arrival of the refugees, the Tanzanian government announced that refugees were not allowed to go further than 4 km from the camps and that they were not allowed to work outside the camps or engage in agricultural work in the camps (Milner 2013). The majority of the 1993 refugees consequently became entirely dependent on the support of international donors and NGOs during their stay in Tanzania (Harild et al. 2015). Importantly for this study, primary schools in refugee camps in north-western Tanzania were funded by UNHCR, which paid for teacher salaries (Amnesty International 2005). It is estimated that around 90% of primary school-age children who arrived in Tanzania after 1993 were enrolled in school in 2000 (Jackson 2000). Qualitative studies suggest that Burundian refugees were highly motivated to send their children to the schools in camps, particularly the Hutus, who felt they had been previously discriminated against in the Burundian schooling system (Skonhoft 2010). Moreover, in the past, educated Hutus were one of the main targets of the Tutsi-dominated government and education was often seen as a liability (Nkurunziza and Ngaruko 2002; Skonhoft 2010; Verwimp and Van Bavel 2014). Dryden-Peterson (2015b) explains that Burundians who settled in Tanzania following the 1972 conflict were integrated into the national educational system, using a Tanzanian curriculum taught in Kiswahili and English, which are the main languages of Tanzania. The overall goal was to facilitate the integration of these refugees in the host country. However, there was no political will to integrate refugees from the 1993 conflict.
Schools in camp areas shifted to a Burundian curriculum taught in Kirundi and French, which are the main languages of Burundi. Hence, while UNHCR supervised the schools, Burundian educators were in charge of developing the education system in the refugee camps (Skonhoft 2010). The overall goal was to facilitate the return of these refugees to Burundi. It is estimated that the number of internally displaced persons reached 800,000 in 1999 (United Nations Office for the Coordination of Humanitarian Affairs 1999). Internal displacement experiences tended to be short and lasted approximately 1 year (Verwimp and Van Bavel 2014). Living conditions in the displacement camps within Burundi were generally poor. The majority of settlements lacked basic services such as clean drinking water and health care facilities (Zeender and McCallin 2002). Burundi's government was responsible for funding educational facilities in the camps, and reports suggest that at least 50% of school-age internally displaced children did not go to school (Integrated Regional Information Network 2002). The Arusha Peace Agreement was signed in August 2000 and led to the end of the conflict a few years later. In 2005, Burundians elected Pierre Nkurunziza, a Hutu, as President of the country, reinforcing the conditions of the peace agreement. Following the end of the conflict, Burundi experienced a large wave of return of its displaced population. Estimates suggest that over 500,000 Burundians returned from Tanzania between 2000 and 2015 (Fransen 2015). This is a considerable number for a country that had a population of only 6.7 million in 2000. Moreover, during this period, Tanzania stopped the provision of education for Burundians in order to encourage refugees to return home (Dryden-Peterson 2015b). In 2005, the Burundian Government announced that primary education in public schools would be provided for free from the following academic year. The gross primary enrolment rate increased from 82% in the 2004/2005 academic year to 101% in the 2005/2006 academic year (Sommeiller and Wodon 2014). There was a substantial increase in enrolment rates in all provinces of Burundi.

Data

The survey was conducted across all provinces of Burundi during January to March 2015 (Footnote 2). A total of 1500 households were interviewed in 100 communities. The primary unit of interest was the sous-colline ('sub-hill' in French), which is the smallest administrative unit in the country. There are an estimated 8000 to 9000 sous-collines in Burundi, but no national lists were available. As such, the first step was to select 100 collines (the second-smallest administrative unit), which were distributed over the 17 provinces of the country according to the demographic weight of these provinces in the 2008 Census. Each colline typically consists of between two and ten sous-collines. Within each colline, a sous-colline was randomly chosen to conduct the interviews in. Within each sous-colline, 15 households and one community representative were interviewed. Figure 2 shows the communities/sous-collines that were surveyed across Burundi.

Fig. 2: Location of the communities surveyed

Primary schooling age in Burundi ranges from 7 to 12 years of age. We limit the analysis to individuals who became of schooling age in 1973 or later and who were 12 years of age in or before 2014 (Footnote 3). This means that the sample is limited to individuals who were between 13 and 49 years of age in 2015. As shown in Table 1, this results in a sample of 3712 individuals.
Of those 3712 individuals, a total of 858 (23%) belong to the pre-war schooling cohort (born between 1966 and 1980), 1887 (51%) belong to the war schooling cohort (born between 1981 and 1997) and 967 (26%) belong to the post-war schooling cohort (born between 1998 and 2002).

Table 1: Definition of cohorts, returnee status and education

Refugee experiences were recorded at the individual level in the survey. A person was defined as a returnee if the person had moved internationally with the primary purpose of escaping conflict or political persecution and had resided in another country for a consecutive period of at least 3 months. As also shown in Table 1, 8% of those in the sample are returnees (308 individuals). This share is higher for the pre-war schooling cohort (14%) and smaller for the post-war schooling cohort (2%). Close to 9% of those in the war schooling cohort are returnees. Table 1 also indicates the share of individuals in each education cohort that finished primary school. First, note that only 15% of those in the pre-war cohort finished primary school. The share is similar for returnees and stayees, suggesting that there was not a strong selection into international displacement based on previous educational outcomes (more on this below). For the war generation, 35% finished primary school. The number is similar across returnees and stayees, although slightly higher for returnees. Finally, for the post-war generation, returnees have a much higher primary school completion rate. However, it should be noted that the sample of returnees for the post-war generation is small.

Becoming a returnee versus stayee

The literature on forced displacement suggests that in situations of conflict, those individuals from wealthier families can travel further and choose better destinations to settle in (Fransen et al. 2017; Van Hear 2006, 2014). This means that differences in educational outcomes between returnees and stayees could be driven by pre-conflict characteristics. In this sub-section, we explain why this is less likely to be the case for Burundi and introduce our approach to reduce this potential bias. First, all individuals in the sample were born in Burundi, and in 2012, Tanzania repatriated any remaining refugee camp residents to Burundi (Ruiz and Vargas-Silva 2016). As such, there is no selection issue in terms of returning home or staying in Tanzania, as the latter option was not available for those who fled the 1993 war. Any selection issue relates to being a refugee in the first place. Second, it is important to note that substantial evidence indicates that exposure to conflict in Burundi was random (Uvin 1999). For instance, Voors et al. (2012) show that the type of violence experienced in Burundi was largely exogenous to household characteristics and local economic conditions. The authors, for example, test whether violence was affected by the likelihood of profit for the aggressors, measured as the possibility of stealing assets (including livestock), and ethnic considerations such as the share of the local vote for the assassinated president. They find no support for these possibilities, which suggests that households had equal chances to be affected by the conflict in Burundi and that targeting of households based on certain household characteristics did not occur. Third, most refugee migration took place on foot.
Because previous studies have shown that exposure to conflict was random, we assume that displaced individuals are more or less evenly distributed across Burundi, with proximity to the border being the most important determinant of international versus internal displacement. Fransen et al. (2017) show that this is actually the case and that proximity to the border of Tanzania was associated with a higher probability of international displacement and a lower probability of internal displacement. The map in Fig. 3 also provides insights in this regard. The map depicts the number of Burundian refugees in Tanzania in the mid-2000s, per province of origin in Burundi (in brackets), based on UNHCR estimates. The second number in each province corresponds to the number of refugees from that province, as a share of the entire population of the province that was recorded in the 1990 Census (in parentheses). Figure 3 therefore provides evidence that most of the refugees came from provinces in Burundi that border Tanzania.

Fig. 3: Number of refugees in Tanzania per province of origin [in brackets] and as a share of the province's 1990 population (in parentheses)

We created several indicators in order to highlight that there were no major pre-war differences between stayees and returnees. First, we created a household-level variable which reflects the average primary schooling outcome (i.e. primary schooling completion dummy) of those members of the household who are from the pre-war schooling cohort. As shown in Table 2, pre-war primary education was similar across the two groups (11% for returnees, 13% for stayees). This difference is not statistically significant.

Table 2: Descriptive statistics and regressions of pre-war indicators

We also explore possible differences in pre-war wealth using the information on pre-war livestock and pre-war land holdings (Footnote 4). We use Tropical Livestock Units (TLUs) to standardise livestock ownership across individuals. Bundervoet (2009, 2010) conducted an exploration of the impact of conflict on livestock in Burundi. We replicate his analysis and use the following units as weights: 1 cow/ox = 1 TLU, 1 sheep = 0.17 TLU, 1 goat = 0.17 TLU, 1 pig = 0.25 TLU, and 1 fowl = 0.01 TLU (Footnote 5); a small illustrative sketch of this computation is given at the end of this sub-section. As shown in Table 2, the differences in the average pre-war livestock and land holdings of returnees and stayees are not statistically significant. In Table 2, we also present results from a regression for the pre-war cohort in which the dependent variable is a dummy for finishing primary school and returnee status is one of the independent variables. The regression also controls for gender, age and province of birth. This regression provides a placebo test for the results regarding refugee experiences. This cohort was over primary school age by the start of the war, and we would not expect their schooling outcomes to be affected by future refugee experiences. The lack of an effect in this group would give additional support for the idea that there are no pre-existing trends that are accounting for differences between returnees and stayees. As suggested by Table 2, the coefficient is not statistically significant. In Table 2, we also show results from regressions in which pre-war education (i.e. the household measure), pre-war livestock and pre-war land are the dependent variables. Again, returnee status is not significantly related to these variables. The main estimations include controls for pre-war educational, livestock and land indicators in order to adjust for possible pre-war differences.
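For concreteness, the TLU standardisation above is a simple weighted sum over the animals a household reported; a minimal sketch, with hypothetical field names:

```python
# Weights follow Bundervoet (2009): 1 cow/ox = 1 TLU, sheep/goat = 0.17,
# pig = 0.25, fowl = 0.01. The field names below are hypothetical.
TLU_WEIGHTS = {"cows_oxen": 1.0, "sheep": 0.17, "goats": 0.17,
               "pigs": 0.25, "fowl": 0.01}

def tropical_livestock_units(holdings: dict) -> float:
    """Standardise a household's pre-war herd into a single TLU figure."""
    return sum(TLU_WEIGHTS[animal] * count for animal, count in holdings.items())

# Example: 2 cows, 3 goats and 10 fowl -> 2*1 + 3*0.17 + 10*0.01 = 2.61 TLU
print(tropical_livestock_units({"cows_oxen": 2, "goats": 3, "fowl": 10}))
```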
While these estimations reduce the concerns about selection bias, we cannot fully rule this possibility out (e.g. those who become refugees could value human capital more, even if they have similar characteristics to others). Therefore, we avoid making any strong claims of causality in the discussion of the results.

Methodology

In order to study the impact of refugee experiences on schooling outcomes, we start by estimating the following model:

$$ S_{\mathrm{i}}=\alpha_{\mathrm{p}}+\gamma\, W\_PW_{\mathrm{i}}+\beta R_{\mathrm{i}}+\delta\, W\_PW_{\mathrm{i}}\times R_{\mathrm{i}}+\theta X_{\mathrm{i}}+\varepsilon_{\mathrm{i}} \qquad (1) $$

where $S_{\mathrm{i}}$ is a dummy indicating the person completed primary school, $\alpha_{\mathrm{p}}$ are dummies for province of birth, $W\_PW_{\mathrm{i}}$ is a dummy indicating that the individual is from the war or post-war cohort (i.e. the pre-war cohort is the control category), and $X_{\mathrm{i}}$ are controls for gender, age and pre-conflict characteristics (i.e. livestock, education and land). In the baseline estimations, $R_{\mathrm{i}}$ is a dummy which indicates that the individual is a returnee. The coefficient of interest in this case is $\delta$, which is similar to a difference-in-differences estimator. In (1), the effects for the war and post-war cohorts are combined. In a second step, we separate these effects. That is, we replace $W\_PW_{\mathrm{i}}$ by $W_{\mathrm{i}}$, which is a dummy indicating that the individual belongs to the war cohort, and run the estimation including only individuals from the pre-war and war cohorts. Likewise, we also replace $W\_PW_{\mathrm{i}}$ by $PW_{\mathrm{i}}$, which is a dummy indicating that the individual belongs to the post-war cohort, and run the estimation including only individuals from the pre-war and post-war cohorts. Finally, we include a variable ($Y_{\mathrm{i}}$) in the estimation which indicates the number of school years for which the individual was displaced and estimate:

$$ S_{\mathrm{i}}=\alpha_{\mathrm{p}}+\delta W_{\mathrm{i}}+\gamma PW_{\mathrm{i}}+\beta R_{\mathrm{i}}+\rho Y_{\mathrm{i}}+\theta X_{\mathrm{i}}+\varepsilon_{\mathrm{i}} \qquad (2) $$

Given that we are controlling for returnee status and schooling cohort, the parameter $\rho$ provides something comparable (although not equal) to a difference-in-differences estimator. Please note that all variables included in the estimations refer to pre-displacement factors and are not affected by refugee experiences. Finally, all estimations are presented with clustered standard errors at the sous-colline level (a minimal estimation sketch is given below).

Table 3 provides descriptive statistics of the independent variables. The sample is slightly more female (53%), a fact that also holds for each of the cohorts, and the average age varies per cohort by construction.

Table 3: Descriptive statistics of the control variables

We also make use of the information from the in-depth interviews with the returnees to put the results in the context of experiences abroad, and use data from Tanzania to compare the schooling outcomes of returnees to those of Tanzanians from the same schooling cohort. Finally, we show a series of robustness tests, paying particular attention to the control for conflict in the estimation.

The effect of returnee status

Table 4 provides the baseline results of differences between returnees and stayees in the likelihood of finishing primary school. The main variable of interest is the interaction of returnee status with the schooling cohort variable.
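The estimation sketch referred to above: model (1) as a linear probability model with sous-colline-clustered standard errors. This is an illustration of the estimator, not the authors' code, and all column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of model (1): a linear probability model in which the
# returnee x (war/post-war cohort) interaction carries the coefficient of
# interest, with standard errors clustered at the sous-colline level.
df = pd.read_csv("burundi_survey.csv")  # hypothetical file name

formula = (
    "primary_completed ~ returnee * war_or_postwar"   # main effects + delta
    " + female + age + C(province_birth)"             # alpha_p and X_i
    " + prewar_education + prewar_tlu + prewar_land"  # pre-war controls
)
result = smf.ols(formula, data=df).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["sous_colline"]},
)
# 'returnee:war_or_postwar' corresponds to delta in equation (1).
print(result.params["returnee:war_or_postwar"])
```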
In the estimation without controls for pre-war characteristics, the coefficient of the interaction term is positive but not statistically significant (panel A, column 1). This coefficient becomes significant when we include the pre-war controls. For instance, the estimation with the pre-war education levels suggests that returnees from the war and post-war cohorts are 25 percentage points more likely to have finished primary school than their non-returnee counterparts (panel A, column 2). The estimated gaps are smaller in the estimations which control for pre-war livestock and pre-war land (22 and 16 percentage points, respectively) and higher for the estimation which includes all pre-war controls (28 percentage points) (see columns 3 to 5).

Table 4: Impact of refugee experiences on the likelihood of finishing primary school

As explained above, we would expect the main effect of returnee status to be on those in the war cohort, as this group was more likely to have attended schools in camps. For those in the post-war group, the effect is more indirect. In order to explore this, in Table 4 we also present results comparing each group directly to the pre-war cohort (see panels B and C). The results suggest that, as expected, the findings regarding schooling and returnee status are driven by those in the war cohort.

Number of years displaced while of school age

In Table 5, we focus on the number of years for which the individual was displaced while of schooling age. This is a better measure of exposure to schools while in displacement than the simple cohort dummy variable. Given that we are also controlling for school cohort and returnee status, the coefficient on the number of years displaced while of school age also provides some intuition similar to that of a difference-in-differences estimator. As suggested by Table 5, an additional year spent as a refugee while school aged is associated with an increase in the likelihood of finishing primary school of four percentage points. Including pre-war controls increases the estimated impact slightly to six percentage points.

Table 5: Impact of years abroad during school age on the likelihood of finishing primary school

Experiences while in displacement

Table 6 provides information on the schooling experiences of refugees while in displacement. This particular information is only available for one randomly selected returnee per household. While this is a small sample, we still get a good indication of the experiences of different cohorts. First, note that those who were above schooling age when displaced and those who returned before schooling age did not accumulate much schooling while abroad (an average of 0.06 and 0.23 years, respectively). On the other hand, those who were of schooling age accumulated about 1.72 years of education abroad.

Table 6: Experiences while in displacement

Also, and perhaps more importantly, the survey collected information for returnees and former IDPs on whether there was a primary school in the community of displacement. The information for IDPs was only collected for those who were adults before displacement (that is, the pre-war generation). As shown in Table 6, 71% of the returnees stated that there was a primary school in their community of residence abroad (mostly camps). As such, we can corroborate the availability of educational facilities for many refugees while in displacement. On the other hand, this was only the case for 54% of the IDPs.
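A note on measurement: the exposure variable behind Table 5 is the overlap between an individual's primary-school-age window (ages 7 to 12) and the period spent in displacement. A minimal sketch of that calculation, written as a hypothetical helper rather than the authors' code:

```python
def school_age_years_displaced(birth_year, displaced_from, displaced_to):
    """Years of overlap between displacement and primary school age (7-12).

    Hypothetical helper: the child is of primary school age from
    birth_year + 7 through birth_year + 12 (six school years).
    """
    school_start = birth_year + 7
    school_end = birth_year + 12
    overlap = min(school_end, displaced_to) - max(school_start, displaced_from) + 1
    return max(0, overlap)

# Example: born 1985, displaced 1993-2000 -> school age covers 1992-1997,
# so 5 of the 6 school-age years (1993..1997) are spent in displacement.
print(school_age_years_displaced(1985, 1993, 2000))  # 5
```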
Comparison with hosts

In this section, we use data from the Kagera Health and Development Survey (KHDS) to explore whether the primary school completion rate of Burundian returnees is very different from that of their hosts in Tanzania. Kagera is the most north-western region of Tanzania (see Fig. 4). It borders Burundi and was one of the main destinations of Burundian refugees in Tanzania. The KHDS is representative of the population of the region and has been used by multiple papers to explore the consequences of hosting refugees (see Baez 2011; Maystadt and Verwimp 2014; Ruiz and Vargas-Silva 2015, 2016, 2017). The last round of the KHDS was conducted in 2010. As such, we can compare the outcomes of residents of Kagera in the same schooling cohort as our "war" generation in Burundi (i.e. who were of primary school age from 1988 to 2009). The KHDS data suggests that only 28% of Kagera residents in that cohort finished primary school, a smaller proportion than the one for the returnees in our sample (37%). While it is not possible to draw strong conclusions from this comparison, the finding suggests that returnees could be better off in terms of schooling than both Burundian stayees and residents of north-western Tanzania. It is important to highlight that Kagera is one of the poorest and most remote regions of Tanzania and that primary school completion rates are much higher in other parts of the country.

Fig. 4: Burundi and vicinity

Robustness tests

One of the main concerns about the estimation is whether we are controlling adequately for conflict experiences. If this is not the case, it is possible to argue that the refugee experience indicator reflects the impact of several other factors related to conflict exposure. Columns 1 to 5 in panel A of Table 7 present the estimations if we control for the number of years in which the individual was of schooling age during the conflict, instead of simply including a dummy for schooling cohort. This change does not affect the main conclusions from the analysis. It is still the case that spending longer as a refugee while school aged is associated with better schooling outcomes (Footnote 6).

Table 7: Impact of years abroad during school age on the likelihood of finishing primary school, controlling for years of exposure to conflict while school aged

In panel B of Table 7, we account for the fact that the conflict did not affect all provinces at the same time or for the same length of time. We follow the same approach as Verwimp and Van Bavel (2014) to create a variable in which exposure to conflict varies by province and age cohort. These authors argue that the spatial spread of the conflict was determined by geography and natural endowments. In this case, an individual is assumed to be exposed to conflict during school age if he/she had resided in a province that was affected by conflict and was of school age when the province was affected by conflict. Following Verwimp and Van Bavel (2014), we construct the conflict variable using the estimates from Bundervoet (2009) on the percentage of people whose fathers were killed during the initial stage of the conflict (i.e. above and below the median) (Footnote 7) and Chrétien and Mukuri's (2000) account of the spread of the conflict for the later stages. The results do not change if we use this alternative way of controlling for conflict exposure. In Table 8, we test the robustness of the results by employing propensity score matching (PSM) techniques in order to match returnee individuals with a comparable group of stayees.
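A minimal sketch of the probit-plus-nearest-neighbour procedure that this robustness test uses (the procedure itself is described in the next paragraph); column names are hypothetical placeholders and the snippet is an illustration, not the authors' code:

```python
import pandas as pd
import statsmodels.api as sm

# Probit for returnee status on age, gender and province-of-birth dummies,
# then 1-nearest-neighbour matching (with replacement) on the propensity
# score, and finally the average treatment effect on the treated (ATT).
df = pd.read_csv("burundi_survey.csv")  # hypothetical file name

X = sm.add_constant(
    pd.get_dummies(df[["age", "female", "province_birth"]],
                   columns=["province_birth"], drop_first=True).astype(float)
)
probit = sm.Probit(df["returnee"], X).fit(disp=False)
df["pscore"] = pd.Series(probit.predict(X), index=df.index)

treated = df[df["returnee"] == 1]
control = df[df["returnee"] == 0]

# Nearest control (by propensity score) for each treated individual.
matches = [(control["pscore"] - p).abs().idxmin() for p in treated["pscore"]]

att = treated["primary_completed"].mean() - df.loc[matches, "primary_completed"].mean()
print(f"ATT of returnee status on primary completion: {att:.3f}")
```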
In this case, the treatment (T) is being a returnee. As we explained above, the large majority of refugees from the 1993–2005 conflict returned home before our data collection. Hence, the treatment is essentially being a refugee in the first place, a factor that was largely determined by distance from the border of Tanzania.

Table 8: Average treatment effect of the treated: likelihood of finishing primary school

We start by estimating a probit model to predict the likelihood of being a returnee based on age, gender and province of birth, and then we match individuals based on treatment status. Once we check for the balancing properties and common support across the treatment and comparison groups, we proceed to use the nearest-neighbour matching procedure. With the matching at hand, the difference in the outcome variable is calculated to estimate the average treatment effect of the treated. As shown in Table 8, the results support the idea that returnees are more likely to have finished primary school than stayees.

Conclusions

In this paper, we studied the effects of refugee experiences on primary education and explored whether the educational outcomes of individuals with refugee experiences in Burundi differed from the outcomes of those who did not leave the country during the 1993–2005 civil war. Despite the increasing academic interest in the well-being of displaced populations worldwide, the relationship between displacement experiences and educational outcomes is a relatively underexplored topic, and even fewer studies have focused on the consequences of international displacement on education. Our survey was conducted 15 years after the signing of the peace agreement in Burundi and after the return of most former refugees to the country, which enables a long-term perspective on the impacts of displacement on education, including the role of education in displacement camps abroad. Our findings show that, once we controlled for pre-war characteristics of the households, former refugees who returned to Burundi had better schooling outcomes than their contemporaries who never left the country. This finding most likely reflects the varying levels of access that children had to education during the war. While children who stayed home were likely to be affected by the negative impacts of conflict on schooling (e.g. destruction of schools, killing and exodus of teachers, child soldiering, household income shocks, higher levels of insecurity and decreases in state investment in education), those in neighbouring countries, and particularly those who resided in camps in Tanzania, had access to UNHCR-funded schools. We also provide a simple comparison of the schooling outcomes of returnees with those of Tanzanians, and there is suggestive evidence that returnees were better off than their hosts in Tanzania, again probably because of the specific schools that they had access to by virtue of being refugees. Although the higher likelihood of completing primary school can be seen as a positive side effect of the refugee experience, the reality is that the primary school completion rate for returned refugees was still low (37%). These findings align with current concerns about the access to education of displaced populations during conflict times (United High Commissioner for Refugees 2016b). With the number of displaced populations in the world on the rise, an increasing number of children do not have access to education.
The impact of refugee experiences on education is likely to have implications for future labour market outcomes and, more generally, durable peace after the end of conflict. More emphasis is therefore needed on providing primary education to refugees. However, our findings highlight that particularly the children who stay behind when conflict erupts suffer serious gaps in their education, which indicates that there is an additional need for educational support programmes that allow these children to catch up with those who were not as affected by the war.

Notes

1. Note that there were a considerable number of refugees from Burundi in Tanzania before the events of 1993. These refugees fled the violence in 1972; they were given land for cultivation and, by all accounts, were largely self-sufficient (Thomson 2009).
2. In 2015, over 200,000 people were displaced from Burundi to neighbouring countries (United High Commissioner for Refugees 2016c). This is the first episode of large displacement in the country in over a decade. The displacement is the result of increasing tensions and violence in response to the April 2015 announcement that the president of Burundi was running for a third term in office. Many interpreted a third term in office as a violation of the Arusha peace agreements. The data collection for this article was finalised approximately 6 weeks before the president's announcement and before this new wave of tensions and displacement.
3. As explained above, there was a conflict in Burundi during 1972. This conflict was of a smaller scale compared to the conflict from 1993 onwards but could have nonetheless affected the outcomes of those who were of schooling age at the time. Therefore, we limit the sample to those who became of schooling age in 1973 or afterwards.
4. There is no information on the current status of the pre-war livestock and land (i.e. whether it was lost, sold or still in possession).
5. TLUs allow animal species of different average size to be compared by a common unit. These measures are based on the typical weight of the animal raised to the power of 0.75 (also known as the metabolic body weight), compared with the equivalent figure for an animal of 250 kg. Please note that this measure relies on both species being under the same feeding system (which is a reasonable assumption in our case) but does not account for the possibility of different breeds of the same species.
6. Please note that the variable measuring exposure to conflict measures the number of years of schooling age that the individual was exposed to conflict (i.e. based on the timing of the conflict), while the displacement variable indicates the number of years of schooling age that the individual was displaced abroad.
7. Bundervoet (2009) adjusts the estimates for bias related to households in which all members were killed.

References

Akresh R, de Walque D. Armed conflict and schooling: evidence from the 1994 Rwandan genocide. IZA Discussion Paper No. 3516. Bonn: Institute for the Study of Labour (IZA); 2008.
Amnesty International. Burundi: refugee rights at risk: human rights abuses in returns to and from Burundi. London: Amnesty International; 2005.
Baez J. Civil wars beyond their borders: the human capital and health consequences of hosting refugees. J Dev Econ. 2011;96(2):391–408.
Blattman C, Annan J. The consequences of child soldiering. Rev Econ Stat. 2010;92(4):882–98.
Bundervoet T. Livestock, land and political power: the 1993 killings in Burundi. J Peace Res. 2009;46(3):357–76.
Assets, activity choices, and civil war: evidence from Burundi. World Dev. 2010;38(7):955–65. Chamarbagwala R, Morán H. The human capital consequences of civil war: evidence from Guatemala. J Dev Econ. 2011;94(1):41–61. Chrétien J-P, Mukuri M. Burundi, la fracture identitaire: logiques de violence et certitudes 'ethniques' (1993–1996). Paris: Karthala; 2000. Connolly MA, Gayer M, Ryan MJ, Salama P, Spiegel P, Heymann DL. Communicable diseases in complex emergencies: impact and challenges. Lancet. 2004;364(9449):1974–83. Dharod JM, Croom JE, Sady CG. Food insecurity: its relationship to dietary intake and body weight among Somali refugee women in the United States. J Nutr Educ Behav. 2013;45(1):47–53. Di Maio M, Nandi TK. The effect of the Israeli–Palestinian conflict on child labor and school attendance in the West Bank. J Dev Econ. 2013;100(1):107–16. Dryden-Peterson S. The educational experiences of refugee children in countries of first asylum. Washington: Migration Policy Institute Report; 2015a. Dryden-Peterson S. Refugee education: a global review. Geneva: UNHCR; 2015b. Fransen S. The socio-economic sustainability of refugee return: insights from Burundi. Popul Space Place. 2015. https://doi.org/10.1002/psp.1976. Fransen S, Vargas-Silva C, Ruiz I. Return migration and economic outcomes in the conflict context. World Dev. 2017;95:196–210. Harild N, Christensen A, Zetter R. Sustainable refugee return: triggers, constraints, and lessons on addressing the development challenges of forced displacement. Washington: World Bank; 2015. Ichino A, Winter-Ebmer R. The long-run educational cost of World War II. J Labor Econ. 2004;22(1):57–86. Integrated Regional Information Network. Burundi: focus on education of internally displaced children. IRIN Humanitarian News and Analysis; 2002. Available at: http://www.irinnews.org/feature/2002/11/14/focus-education-internally-displaced-children, accessed 26 Aug. 2016. Jackson T. Equal access to education: a peace imperative for Burundi. Report commissioned by the Nordic Africa Institute. London: International Alert; 2000. Justino P. Violent conflict and human capital accumulation. Households in Conflict Network Working Paper 99. 2011. Justino P, Leone M, Salardi P. Short- and long-term impact of violence on education: the case of Timor Leste. World Bank Econ Rev. 2014;28(2):320–53. Lai B, Thyne C. The effect of civil war on education, 1980–97. J Peace Res. 2007;44(3):277–92. Leon G. Civil conflict and human capital accumulation: the long-term effects of political violence in Perú. J Hum Resour. 2012;47(4):991–1022. Maystadt J-F, Verwimp P. Winners and losers among a refugee-hosting population. Econ Dev Cult Chang. 2014;62(4):769–809. Milner J. Two steps forward, one step back: understanding the shifting politics of refugee policy in Tanzania. New Issues Refugee Res. 2013;(255):1–25. Ngaruko F, Nkurunziza JD. Civil war and its duration in Burundi. In: Collier P, Sambanis N, editors. Understanding civil war: Africa. Washington: World Bank; 2005. Nkurunziza J, Ngaruko F. Explaining growth in Burundi: 1960–2000. CSAE Working Paper 2002–03. Oxford: CSAE. Oh S, Van der Stouwe M. Education, diversity, and inclusion in Burmese refugee camps in Thailand. Comp Educ Rev. 2008;52(4):589–617. Ruiz I, Vargas-Silva C. The labor market impacts of forced migration. Am Econ Rev. 2015;105(5):581–6. Ruiz I, Vargas-Silva C. The labor market consequences of hosting refugees. J Econ Geogr.
2016;16(3):667–94. Ruiz I, Vargas-Silva C. The impact of hosting refugees on the intra-household allocation of tasks: a gender perspective. UNU-WIDER Working Paper 66; 2017. Shemyakina O. The effect of armed conflict on accumulation of schooling: results from Tajikistan. J Dev Econ. 2011;95:186–200. Skonhoft CG. 'Why should I send my child to school?': A study of Burundian Hutu refugees' experiences of exclusion from education and how this motivates education in Tanzanian exile. Norwegian J Geogr. 2010;54:116–21. Sommeiller E, Wodon Q. Enrolment gains from the elimination of primary school user fees in Burundi. Washington, DC: World Bank; 2014. Thomson J. Durable solutions for Burundian refugees in Tanzania. Forced Migr Rev. 2009;33:35–7. UNICEF. Education working group, joint education needs assessment for Syrian refugee children. Jordan: UNICEF; 2015. United Nations High Commissioner for Refugees. Convention and protocol relating to the status of refugees. Geneva: UNHCR; 1951. United Nations High Commissioner for Refugees. Global trends: forced displacement in 2015. Geneva: UNHCR; 2016a. United Nations High Commissioner for Refugees. Missing out: refugee education in crisis. Geneva: UNHCR; 2016b. United Nations High Commissioner for Refugees. Number of Burundian refugees since last April tops quarter-of-a-million, funding at 3 per cent. Geneva: UNHCR; 2016c. United Nations Development Programme. Human development report. New York: UNDP; 2015. United Nations Educational, Scientific and Cultural Organization. The hidden crisis: armed conflict and education. Education for all global monitoring report 2011. Paris: UNESCO; 2011. United Nations Office for the Coordination of Humanitarian Affairs. Affected population in the Great Lakes region (displaced-refugees). Geneva: UNOCHA; 1999. Uvin P. Ethnicity and power in Burundi and Rwanda: different paths to mass violence. Comp Polit. 1999;31(3):253–71. Valente C. Education and civil conflict in Nepal. World Bank Econ Rev. 2014;28(2):354–83. Van Hear N. 'I went as far as my money would take me': conflict, forced migration and class. Centre on Migration, Policy and Society Working Paper No. 6; 2006. Van Hear N. Reconsidering migration and class. Int Migr Rev. 2014;48:S100–21. Verwimp P, Van Bavel J. Schooling, violent conflict, and gender in Burundi. World Bank Econ Rev. 2014;28(2):384–411. Voors MJ, Nillesen EM, Verwimp P, Bulte EH, Lensink R, Van Soest DP. Violent conflict and behavior: a field experiment in Burundi. Am Econ Rev. 2012;102(2):941–64. Waters T, LeBlanc K. Refugees and education: mass public schooling without a nation-state. Comp Educ Rev. 2005;49(2):129–47. Whitaker BE. Refugees in Western Tanzania: the distribution of burdens and benefits among local hosts. J Refug Stud. 2002;15:339–58. World Bank. Burundi economic indicators. Washington DC: World Bank; 2015. World Bank. World development indicators (WDI) database: Burundi. Washington DC: World Bank; 2016. Zeender G, McCallin B. Durable solutions for internally displaced persons in Burundi within reach. Refug Surv Q. 2013;32(1):74–100.

We thank Charlie Becker, Tom Bundervoet, Zovanga Kone, Craig Loschmann and participants of the 2015 ASSA meetings, 2015 HDCA Annual Conference and 2015 EISA conference, and seminars at Maastricht University and the University of Oxford, for helpful comments and suggestions on previous versions of this paper. We would also like to thank the anonymous referee and the editor for their remarks.
Responsible editor: David Lam This document is an output from a project funded by the UK Department for International Development (DFID) and the Institute for the Study of Labour (IZA) for the benefit of developing countries. The views expressed are not necessarily those of DFID or IZA. Data will be made available by the authors upon request. University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV, Amsterdam, Netherlands Sonja Fransen University of Oxford, 58 Banbury Road, Oxford, OX2 6QS, UK Carlos Vargas-Silva Maastricht University/UNU-MERIT, Boschstraat 24, 6211 AX, Maastricht, Netherlands Melissa Siegel Correspondence to Sonja Fransen. This research was approved by the Central University Research Ethics Committee (CUREC) at the University of Oxford. The IZA Journal of Development and Migration is committed to the IZA guiding principles of research integrity. The authors declare that they have observed these principles. Fransen, S., Vargas-Silva, C. & Siegel, M. The impact of refugee experiences on education: evidence from Burundi. IZA J Develop Migration 8, 6 (2018). https://doi.org/10.1186/s40176-017-0112-4
Room-temperature terahertz photodetectors based on black arsenic-phosphorus

DONG Zhuo, CHEN Jie, ZHU Yi-fan, YANG Jie, WANG Zhong-chang, ZHANG Kai

Citation: DONG Zhuo, CHEN Jie, ZHU Yi-fan, YANG Jie, WANG Zhong-chang, ZHANG Kai. Room-temperature terahertz photodetectors based on black arsenic-phosphorus[J]. Chinese Optics. doi: 10.37188/CO.2020-0175

CLC number: TN382

DONG Zhuo1, 2, CHEN Jie2, 3, ZHU Yi-fan1, 4, YANG Jie5, WANG Zhong-chang6, ZHANG Kai2

1. School of Nano-Tech and Nano-Bionics, University of Science and Technology of China, Hefei 230026, China 2. i-Lab, Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences, Suzhou 215123, China 3. School of Materials Science and Engineering, Shanghai University, Shanghai 200444, China 4. Key Laboratory of Nanodevices and Applications, Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences, Suzhou 215123, China 5. Department of Chemical and Biomolecular Engineering, National University of Singapore, Singapore 117585, Singapore 6. International Iberian Nanotechnology Laboratory (INL), Avenida Mestre José Veiga s/n, Braga 4715-330, Portugal

Funds: Supported by the National Natural Science Foundation of China (No. 61927813, No. 61875223, No. 61922082) and the National Key R&D Program of China (No. 2016YFE015700)

DONG Zhuo (1994—), male, born in Yingcheng City, Hubei Province, Ph. D candidate, School of Nano-Tech and Nano-Bionics, University of Science and Technology of China. He got his bachelor's degree from Hubei University in 2017. His research interests are room-temperature terahertz photodetectors based on two-dimensional materials. E-mail: [email protected] ZHANG Kai (1983—), male, born in Xiantao City, Hubei Province, Ph. D, Professor, Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences. He got his Ph. D. from Hong Kong Polytechnic University in 2011. His research interests are in the areas of narrow-gap two-dimensional (2D) materials and devices, with research activities ranging from the exploration and controllable growth of narrow-gap 2D semiconductors (such as black phosphorus) and topological materials, as well as the development of infrared & terahertz lasers and photodetectors.
E-mail: [email protected] Corresponding author: [email protected]; [email protected]

Abstract: Terahertz technology is indispensable in plenty of fields due to the abundant interactions between terahertz waves and matter. In order to meet the needs of terahertz applications, the development of highly sensitive and portable terahertz detectors, based on distinctive physical mechanisms and on various materials with excellent properties, is urgently required. Black arsenic-phosphorus is a novel two-dimensional material that has a tunable band gap and transport characteristics with varying chemical composition, which has gained widespread interest in optoelectronic applications. Recent research on b-AsxP1-x mainly focuses on infrared detection, while terahertz detection has not yet been demonstrated. Herein, an antenna-coupled terahertz detector based on exfoliated multilayer black arsenic-phosphorus is demonstrated. The terahertz response performance of the detector reflects two different mechanisms, which have a competitive relationship in the detection process. In particular, the detection mechanism can be tailored by varying the chemical composition of black arsenic-phosphorus. By balancing the band gap and carrier mobility, a responsivity of over 28.23 V/W and a noise equivalent power of less than 0.53 nW/Hz^(1/2) are obtained at 0.37 THz. This implies that black arsenic-phosphorus has great potential in terahertz technology.

Keywords: two-dimensional material / terahertz / black arsenic-phosphorus / antenna-coupled detector

Figure 1. (a) Top view and side view of the b-AsxP1−x crystal structure. (The armchair (X) and zigzag (Y) crystal axes are shown on the graph.) (b) The HRTEM image of the b-As0.1P0.9. (c) The corresponding SAED pattern of the b-As0.1P0.9. (d) EDX result of the b-AsxP1−x (x=0.1 and 0.5) flakes; inset: EDS elemental mapping of the b-As0.1P0.9. (e) Raman spectra of b-AsxP1−x with different chemical compositions. (f) Plots of infrared absorption of different b-AsxP1−x samples. Figure 2. (a) Schematic diagram of the b-AsxP1−x detector, including the measurement circuit. (b) Schematic diagram of the measurement system. (c) Spatial distributions of the field enhancement factors for the THz antenna. (d, e) False-color SEM image of the b-AsxP1−x detector. Figure 3. (a, c) Output characteristics of the b-As0.1P0.9 detector (a) and b-As0.5P0.5 detector (c), with VG ranging from −6 V to 6 V with steps of 3 V. Transfer characteristics for the b-As0.1P0.9 detector (b) and the b-As0.5P0.5 detector (d) with a fixed VDS = 50 mV. Figure 4. Frequency dependence of the responsivity for the b-As0.1P0.9 detector (a) and b-As0.5P0.5 detector (c), measured at VG = 0 V. Gate bias dependence of the responsivity for the b-As0.1P0.9 detector (b) and the b-As0.5P0.5 detector (d), measured at the optimal frequency as obtained from (a) and (c), respectively. Figure 5. (a) Schema of the detector based on the self-mixing theory. (b) Photo-excitation and relaxation processes of the carriers in the b-AsxP1−x flakes irradiated by terahertz photons, and the schematic diagram of the MSM structure based on EIW theory. Figure 6.
NEP as a function of the gate voltage for the b-As0.1P0.9 detector (a) and b-As0.5P0.5 detector (b); inset: NV as a function of VG. Transmission images of a key (c) and a pair of metal scissors (d) inside an envelope by the b-As0.1P0.9 detector at 0.37 THz.

[1] GUO W L, WANG L, CHEN X SH, et al. Graphene-based broadband terahertz detector integrated with a square-spiral antenna[J]. Optics Letters, 2018, 43(8): 1647-1650. doi: 10.1364/OL.43.001647 [2] CASTILLA S, TERRÉS B, AUTORE M, et al. Fast and sensitive terahertz detection using an antenna-integrated graphene pn junction[J]. Nano Letters, 2019, 19(5): 2765-2773. doi: 10.1021/acs.nanolett.8b04171 [3] VITI L, PURDIE D G, LOMBARDO A, et al. HBN-encapsulated, graphene-based, room-temperature terahertz receivers, with high speed and low noise[J]. Nano Letters, 2020, 20(5): 3169-3177. doi: 10.1021/acs.nanolett.9b05207 [4] LIU CH L, WANG L, CHEN X SH, et al. Room-temperature photoconduction assisted by hot-carriers in graphene for sub-terahertz detection[J]. Carbon, 2018, 130: 233-240. doi: 10.1016/j.carbon.2018.01.020 [5] HUANG ZH M, TONG J CH, HUANG J G, et al. Room-temperature photoconductivity far below the semiconductor bandgap[J]. Advanced Materials, 2014, 26(38): 6594-6598. doi: 10.1002/adma.201402352 [6] CHEREDNICHENKO S, HAMMAR A, BEVILACQUA S, et al. A room temperature bolometer for terahertz coherent and incoherent detection[J]. IEEE Transactions on Terahertz Science and Technology, 2011, 1(2): 395-402. doi: 10.1109/TTHZ.2011.2164654 [7] SAKHNO M, GOLENKOV A, SIZOV F. Uncooled detector challenges: millimeter-wave and terahertz long channel field effect transistor and Schottky barrier diode detectors[J].
Journal of Applied Physics, 2013, 114(16): 164503. doi: 10.1063/1.4826364 [8] ROGALSKI A, SIZOV F. Terahertz detectors and focal plane arrays[J]. Opto-Electronics Review, 2011, 19(3): 346-404. [9] SUN Y F, SUN J D, ZHOU Y, et al. Room temperature GaN/AlGaN self-mixing terahertz detector enhanced by resonant antennas[J]. Applied Physics Letters, 2011, 98(25): 252103. doi: 10.1063/1.3601489 [10] VITI L, POLITANO A, VITIELLO M S. Black phosphorus nanodevices at terahertz frequencies: photodetectors and future challenges[J]. APL Materials, 2017, 5(3): 035602. doi: 10.1063/1.4979090 [11] VICARELLI L, VITIELLO M S, COQUILLAT D, et al. Graphene field-effect transistors as room-temperature terahertz detectors[J]. Nature Materials, 2012, 11(10): 865-871. doi: 10.1038/nmat3417 [12] GUO W L, DONG ZH, XU Y J, et al. Sensitive terahertz detection and imaging driven by the photothermoelectric effect in ultrashort-channel black phosphorus devices[J]. Advanced Science, 2020, 7(5): 1902699. doi: 10.1002/advs.201902699 [13] TREDICUCCI A, VITIELLO M S. Device concepts for graphene-based terahertz photonics[J]. IEEE Journal of Selected Topics in Quantum Electronics, 2014, 20(1): 8500109. [14] DYAKONOV M, SHUR M. Shallow water analogy for a ballistic field effect transistor: new mechanism of plasma wave generation by dc current[J]. Physical Review Letters, 1993, 71(15): 2465-2468. doi: 10.1103/PhysRevLett.71.2465 [15] VITI L, HU J, COQUILLAT D, et al. Efficient Terahertz detection in black-phosphorus nano-transistors with selective and controllable plasma-wave, bolometric and thermoelectric response[J]. Scientific Reports, 2016, 6: 20474. doi: 10.1038/srep20474 [16] CAI X H, SUSHKOV A B, SUESS R J, et al. Sensitive room-temperature terahertz detection via the photothermoelectric effect in graphene[J]. Nature Nanotechnology, 2014, 9(10): 814-819. doi: 10.1038/nnano.2014.182 [17] NOVOSELOV K S, GEIM A K, MOROZOV S V, et al. Electric field effect in atomically thin carbon films[J]. Science, 2004, 306(5696): 666-669. doi: 10.1126/science.1102896 [18] MANZELI S, OVCHINNIKOV D, PASQUIER D, et al. 2D transition metal dichalcogenides[J]. Nature Reviews Materials, 2017, 2(8): 17033. doi: 10.1038/natrevmats.2017.33 [19] MELLNIK A R, LEE J S, RICHARDELLA A, et al. Spin-transfer torque generated by a topological insulator[J]. Nature, 2014, 511(7510): 449-451. doi: 10.1038/nature13534 [20] LI L K, YU Y J, YE G J, et al. Black phosphorus field-effect transistors[J]. Nature Nanotechnology, 2014, 9(5): 372-377. doi: 10.1038/nnano.2014.35 [21] HU Y, QI ZH H, LU J Y, et al. van der Waals epitaxial growth and interfacial passivation of two-dimensional single-crystalline few-layer gray arsenic nanoflakes[J]. Chemistry of Materials, 2019, 31(12): 4524-4535. doi: 10.1021/acs.chemmater.9b01151 [22] QI ZH H, HU Y, JIN ZH, et al. Tuning the liquid-phase exfoliation of arsenic nanosheets by interaction with various solvents[J]. Physical Chemistry Chemical Physics, 2019, 21(23): 12087-12090. doi: 10.1039/C9CP01052A [23] WANG X X, HU Y, MO J B, et al. Arsenene: a potential therapeutic agent for acute promyelocytic leukaemia cells by acting on nuclear proteins[J]. Angewandte Chemie International Edition, 2020, 59(13): 5151-5158. doi: 10.1002/anie.201913675 [24] BANDURIN D A, SVINTSOV D, GAYDUCHENKO I, et al. Resonant terahertz detection using graphene plasmons[J]. Nature Communications, 2018, 9(1): 5392. doi: 10.1038/s41467-018-07848-w [25] LIU CH L, WANG L, CHEN X SH, et al. 
Top-gated black phosphorus phototransistor for sensitive broadband detection[J]. Nanoscale, 2018, 10(13): 5852-5858. doi: 10.1039/C7NR09545G [26] TANG W W, POLITANO A, GUO CH, et al. Ultrasensitive room-temperature terahertz direct detection based on a bismuth selenide topological insulator[J]. Advanced Functional Materials, 2018, 28(31): 1801786. doi: 10.1002/adfm.201801786 [27] QIN H, SUN J D, LIANG SH X, et al. Room-temperature, low-impedance and high-sensitivity terahertz direct detector based on bilayer graphene field-effect transistor[J]. Carbon, 2017, 116: 760-765. doi: 10.1016/j.carbon.2017.02.037 [28] VITI L, COQUILLAT D, POLITANO A, et al. Plasma-wave terahertz detection mediated by topological insulators surface states[J]. Nano Letters, 2016, 16(1): 80-87. doi: 10.1021/acs.nanolett.5b02901 [29] XIE Y, LIANG F, CHI SH M, et al. Defect engineering of MoS2 for room-temperature terahertz photodetection[J]. ACS Applied Materials & Interfaces, 2020, 12(6): 7351-7357. [30] LIU B L, KÖPF M, ABBAS A N, et al. Black arsenic-phosphorus: layered anisotropic infrared semiconductors with highly tunable compositions and properties[J]. Advanced Materials, 2015, 27(30): 4423-4429. doi: 10.1002/adma.201501758 [31] PRADHAN N R, GARCIA C, LUCKING M C, et al. Raman and electrical transport properties of few-layered arsenic-doped black phosphorus[J]. Nanoscale, 2019, 11(39): 18449-18463. doi: 10.1039/C9NR04598H [32] LONG M SH, GAO A Y, WANG P, et al. Room temperature high-detectivity mid-infrared photodetectors based on black arsenic phosphorus[J]. Science Advances, 2017, 3(6): e1700589. doi: 10.1126/sciadv.1700589 [33] TAN W C, HUANG L, NG R J, et al. A black phosphorus carbide infrared phototransistor[J]. Advanced Materials, 2018, 30(6): 1705039. doi: 10.1002/adma.201705039 [34] WU F, XIA H, SUN H D, et al. AsP/InSe van der Waals tunneling heterojunctions with ultrahigh reverse rectification ratio and high photosensitivity[J]. Advanced Functional Materials, 2019, 29(12): 1900314. doi: 10.1002/adfm.201900314 [35] SHI X Y, WANG T, WANG J, et al. Synthesis of black arsenic-phosphorus and its application for Er-doped fiber ultrashort laser generation[J]. Optical Materials Express, 2019, 9(5): 2348-2357. doi: 10.1364/OME.9.002348 [36] SUN J D, QIN H, LEWIS R A, et al. Probing and modelling the localized self-mixing in a GaN/AlGaN field-effect terahertz detector[J]. Applied Physics Letters, 2012, 100(17): 173513. doi: 10.1063/1.4705306 [37] WU C Y, ZHOU W, YAO N J, et al. Silicon-based high sensitivity of room-temperature microwave and sub-terahertz detector[J]. Applied Physics Express, 2019, 12(5): 052013. doi: 10.7567/1882-0786/ab14fc [38] LI S S. Semiconductor Physical Electronics[M]. Boston, MA: Springer, 1993. [39] SUN J D, FENG W, DING Q F, et al. Smaller antenna-gate gap for higher sensitivity of GaN/AlGaN HEMT terahertz detectors[J]. Applied Physics Letters, 2020, 116(16): 161109. doi: 10.1063/1.5142436

Received Date: 2020-09-30; Revised Date: 2020-10-13; Available Online: 2020-12-25

Terahertz (THz) radiation is usually defined as electromagnetic waves in the frequency range of 0.1 THz to 10 THz[1]. There are abundant interactions between THz waves and matter, which lead to a wealth of THz applications in nondestructive testing, biomedical imaging, atmospheric observation, process control, homeland security and space communications[2-4]. The development of a reliable room-temperature (RT) THz detector is of utmost importance for these THz applications.
However, constrained by the low photon energy of a THz wave (a few meV) and the strong background thermal noise at RT, traditional photodetection excited by electron-hole pairs in semiconductors is not suitable for THz photons[5]. Therefore, new detection mechanisms for RT THz photodetectors have to be explored. During the past two decades, many different RT THz detection technologies based on distinctive physical mechanisms have been developed. Amongst them, the most important architectures presently depend on High Electron Mobility Transistors (HEMTs)[6], Schottky barrier structures[7], bolometers[8] and Field Effect Transistors (FETs)[9]. Among these architectures, FET-based THz detectors have great potential as high-performance (fast response and high-frequency operation) and cost-effective THz detectors, which can also be fabricated with standard Complementary Metal-Oxide-Semiconductor (CMOS) or silicon technology[10]. To date, the photodetection of THz waves in an FET can be achieved via three main mechanisms: Plasma-Wave rectification (PW), the Photo-Thermoelectric Effect (PTE) and bolometric detection[11-13]. The PW mechanism was first proposed by Dyakonov and Shur in the 1990s[14] and can operate at RT via rectification of the plasma waves excited in the FET channel by the ac electromagnetic field. The bolometric process is related to the change in the FET channel's conductivity caused by lattice heating produced by photon absorption[15]. In the case of the PTE, a photovoltage can be generated by a Seebeck coefficient difference and a temperature gradient within the FET channel[16]. A promising route to achieve sensitive THz detection using these mechanisms relies on a combination of the excellent properties of the chosen materials and the specific device structure. The dominant mechanism can be tailored through the design of the materials and structures. Therefore, it is necessary to explore various materials with potentially excellent properties to use as the active channels of FET-based THz detectors. Recently, two-dimensional (2D) materials, such as graphene, Transition-Metal Dichalcogenides (TMDCs), Topological Insulators (TIs), Black Phosphorus (BP) and 2D arsenic have attracted enormous interest due to their unique and extraordinary electric and optical properties[17-23]. Increasing numbers of photodetectors based on 2D materials have been reported in the past few years. 2D materials are particularly promising candidates for THz photodetectors owing to their high carrier mobility, gate-tunable carrier concentration, strong light-matter interactions and plasma oscillations[24-26]. For instance, graphene and BP have been applied in asymmetrical antenna-coupled FET THz detectors utilizing plasma-wave mechanisms, and maximum RT responsivities of 30 V/W and 7.8 V/W were obtained at 0.3 THz, respectively[15, 27]. Leonardo Viti et al.[28] reported a PW THz detector based on Bi2Te(3−x)Sex and achieved a maximum voltage responsivity of 3.0 V/W via topological insulator surface states. In addition, a RT current responsivity of 10 mA/W at 2.52 THz was realized in a MoS2.19-based Metal-Semiconductor-Metal (MSM) THz detector[29]. Black arsenic-phosphorus (b-AsxP1−x), a newly discovered 2D material similar to BP, has attracted growing attention[30, 31].
In contrast to graphene and other 2D materials, b-AsxP1−x has a finite direct band gap that allows for suppressed dark currents, which are desirable for a wealth of electronic and optoelectronic devices. Meanwhile, it exhibits a band gap that is tunable from 0.3 eV to 0.15 eV, and different optical properties, by varying the chemical composition of arsenic (b-AsxP1−x, x from 0 to 0.83). This energy range suggests that b-AsxP1−x can be extended for the detection of wavelengths from 4 μm to 8 μm (long-wavelength infrared, LWIR). Recent research on b-AsxP1−x mainly focuses on LWIR detection, while THz detection has not yet been explored[32-34]. The tunable band gap of b-AsxP1−x can be exploited to selectively control the detection dynamics in the active channel and achieve efficient detection of THz waves at RT. In this work, we demonstrated efficient antenna-coupled RT THz detectors based on mechanically exfoliated multilayer b-AsxP1−x and studied their THz response characteristics. We found that the detection mechanism of the detector can be tailored by varying the chemical composition of b-AsxP1−x, and a competitive relationship between these mechanisms was revealed. More significantly, the best response performance of the detector is obtained when the band gap and the carrier mobility of the material reach a balance. We fabricated b-AsxP1−x detectors with different compositions (x = 0, 0.1 and 0.5) and found that the optimal response performance was obtained in the b-As0.1P0.9 detector, with a responsivity of 28.23 V/W at 0.37 THz. It is worth noting that this is the first time that a THz wave has been detected with a detector based on black arsenic-phosphorus.

2.1. Material synthesis and characterizations

The high-quality b-AsxP1−x (x values of 0, 0.1 and 0.5) crystals were synthesized by a Chemical Vapor Transport (CVT) method similar to that of our previous report[35]. A High-Resolution Transmission Electron Microscope (HRTEM) image and a Selected Area Electron Diffraction (SAED) pattern were obtained from a TEM (Tecnai G2 F20 S-Twin) to characterize the crystals' structure and quality. The elemental composition and distribution of the synthesized materials were measured using their Energy-Dispersive X-ray Spectroscopy (EDS) spectrum, and elemental mappings were performed with a Scanning Electron Microscope (SEM, Quanta FEG 250). The Raman spectrum was taken with a micro-Raman system (LABRAM HR) with a visible laser (λ = 532 nm) through a 100× objective lens. The morphology and thickness of all flakes were characterized using a combination of the results of an optical microscope (Nikon Eclipse LV100ND) and an Atomic Force Microscope (AFM, Dimension ICON). To investigate the band gaps of the b-AsxP1−x, infrared absorption spectroscopy was performed on a Bruker Optics Fourier Transform Infrared spectrometer (Vertex 70) integrated with a Hyperion 1000 microscope system.

2.2. Device fabrication and characterizations

The b-AsxP1−x detectors were fabricated by adopting standard e-beam lithography techniques. The b-AsxP1−x flakes were prepared by the mechanical exfoliation method on a high-resistance (ρ > 20000 Ω·cm) intrinsic Si substrate with 285-nm SiO2. The flakes with a thickness of about 10~15 nm were chosen by observing high-contrast optical microscope images and finally confirmed by AFM.
The source and drain contact patterns were defined via Electron-Beam Lithography (EBL, JEOL JBX 5500), and then an Electron-Beam Evaporator (EBE, Ulvac Ei-5Z) was used to evaporate Cr/Au (10/70 nm) films, after which they underwent a lift-off process in acetone to form the source and drain electrodes. A dielectric layer was then fabricated on the samples: EBL was used to define the patterns, an Inductively Coupled Plasma Chemical Vapor Deposition system (ICPCVD, Oxford Plasmalab System 100) was used to deposit about 70-nm-thick SiO2 (75 ℃ standard process), and a lift-off process formed the dielectric layer. Finally, a Cr/Au (10/70 nm) layer was evaporated onto the oxide layer to form a top-gate electrode similar to the source/drain electrodes. In order to avoid the oxidation of b-AsxP1−x, the time of exposure to the ambient environment was controlled to be within one hour before the dielectric layer was deposited. The electrical characteristics of the detectors were measured at RT under ambient conditions by a probe station (Cascade M150) equipped with a semiconductor parameter analyzer (Keithley 4200).

2.3. Optoelectronic measurements

In order to investigate the THz response of the detector, a system of THz detection was established. In the system, we employ a microwave source equipped with a Schottky-barrier-diode frequency multiplier chain (VDI WR-2.2), operating in the frequency range 0.24~0.38 THz. The THz radiation was then collected, collimated and focused by a set of two Off-Axis Parabolic (OAP) mirrors onto the detector surface with a spot of 2 mm in diameter. The power of the incident THz beam (Pt) at the device's position, which was measured as a function of output frequency by a Golay cell (Tydex GC-1P), ranged between 30 μW and 370 μW. The photoresponse was measured in a photocurrent mode, where the source electrode was grounded, and different gate voltages were applied by a DC voltage source (Yokogawa 7651) to obtain the maximum photocurrent (Iph). The photocurrent response signal was measured at the drain electrode by means of a low-noise current preamplifier (DL1211) to amplify the photocurrent, which was followed by a lock-in amplifier (LIA, Signal Recovery 7265) with an integration time of 200 ms and a signal analyzer (SR770) to record the photocurrent signal and the noise spectral density, respectively. The value of Iph can be calculated from the signal recorded on the lock-in (LIA) through the relation Iph = 2.2 LIA/Gn, where Gn (10^7) is the gain factor and 2.2 accounts for the square-wave modulation.

The b-AsxP1−x has an orthorhombic crystal structure (A17-type structure) with a puckered honeycomb arrangement of As and P atoms. The top and side views of the crystal structure are shown in Fig. 1(a) (color online). In order to illustrate the structure and quality of the b-AsxP1−x crystals, the b-As0.1P0.9 was characterized by TEM. Fig. 1(b) gives the HRTEM image, exhibiting the orthorhombic atomic lattice fringes and the good crystallinity of the crystals. Interplanar spacings of 0.35 nm and 0.39 nm could be measured from the HRTEM image, corresponding to the (100) and $(0\bar{1}1)$ planes of the orthorhombic structure. Fig. 1(c) displays the SAED pattern along the [011] direction, which shows sharp and intense diffraction spots, also suggesting the highly crystalline quality of the crystals. The elemental compositions of the b-AsxP1−x (x = 0.1 and 0.5) crystals were probed by the EDS spectrum as shown in Fig.
1(d), which displays a different signal intensity for As and P, and agrees well with the elemental composition of b-As0.1P0.9 and b-As0.5P0.5 (the accurate elemental ratio is shown in Fig. S1(a-b), Supporting Information). Moreover, the EDS mapping of the b-AsxP1−x, inset in Fig. 1(d), indicates a uniform spatial distribution of As and P atoms in the synthesized materials. The Raman spectroscopy measurements of few-layer b-AsxP1−x and BP samples are shown in Fig. 1(e). It is obvious that the Raman spectra of BP present three characteristic peaks at 362, 438, and 466 cm−1, corresponding to the $A_{\rm g}^1$, $B_{2g}$ and $A_{\rm g}^2$ phonon modes. However, b-AsxP1−x shows more Raman peaks than BP, due to the existence of As-As, As-P and P-P bonds. We observed that the As-As, As-P and P-P Raman peaks are located in the 200~280 cm−1, 280~380 cm−1 and 380~500 cm−1 regions, respectively. It can be seen that the relative intensity of the P-P peaks decreases, while the As-P and As-As peaks increase, with an increase in the concentration of As, which agrees with previous reports[30-31]. This proves the high quality of the b-AsxP1−x materials and that they are free of obvious damage after exfoliation. One major advantage of the b-AsxP1−x materials is that the band gaps can be tuned by their chemical compositions. To investigate the band gaps of our synthesized materials, the infrared absorption spectra of b-AsxP1−x with varying x values of 0, 0.1 and 0.5 were measured, as shown in Fig. 1(f). The results show a clear shift of the absorption edge to shorter wavenumbers with increasing x in the b-AsxP1−x materials. The BP has an absorption edge of ~2532 cm−1 (0.31 eV), and the b-As0.1P0.9 and b-As0.5P0.5 have absorption edges of ~2113 cm−1 (0.26 eV) and ~1870 cm−1 (0.23 eV), respectively. These results confirm that the band gaps change with the chemical composition of our synthesized b-AsxP1−x materials.
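The band gaps quoted above follow from the absorption-edge wavenumbers through E = hc·ν̃. A minimal sketch of this conversion (our own illustration; the constant is simply hc expressed in eV·cm):

```python
# Convert infrared absorption edges (cm^-1) to band gap energies (eV).
# E = h*c*nu_tilde, i.e. E[eV] ≈ 1.23984e-4 * nu_tilde[cm^-1].
HC_EV_CM = 1.23984e-4  # Planck constant times speed of light, in eV·cm

absorption_edges_cm = {"BP": 2532, "b-As0.1P0.9": 2113, "b-As0.5P0.5": 1870}
for material, edge in absorption_edges_cm.items():
    print(f"{material}: Eg ≈ {HC_EV_CM * edge:.2f} eV")

# Output: BP ≈ 0.31 eV, b-As0.1P0.9 ≈ 0.26 eV, b-As0.5P0.5 ≈ 0.23 eV,
# matching the values quoted in the text.
```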
To explore the THz response properties of b-AsxP1−x, antenna-coupled FET detectors were fabricated. In order to induce strongly localized THz fields in the active channel, a bow-tie antenna was chosen. Fig. 2(a) displays the schema of the device. It can be seen that the detector consists of three block antennas, each of which is a dipole antenna. The antenna length L = 139 μm, which determines the center response frequency of 0.34 THz. The schematic diagram of the measurement system is shown in Fig. 2(b). We employed a Finite-Difference Time-Domain (FDTD) method to simulate the spatial distribution of the THz electric field for the antenna structure. The corresponding field distribution is plotted on a 2D color scale image as shown in Fig. 2(c) (color online). The THz electric field is strongly distributed at the edge of the gated channel, and the THz field on the drain side is stronger than that on the source side. This proves that the antenna can achieve the asymmetric feeding of the ac field into the channel, which finally results in a photocurrent signal in the active channel. The detectors were fabricated on b-AsxP1−x flakes with a thickness of ~15 nm, prepared on the Si/SiO2 substrate by the mechanical exfoliation method (the AFM images are shown in Fig. S1(c-d), Supporting Information). The S-antenna, D-antenna and G-antenna were defined with a combination of EBL and EBE (for more details, see the Experimental Section), and also acted as the source, drain and gate electrodes of the FET detector, respectively. In this configuration, a 70-nm-thick SiO2 layer was deposited as the dielectric layer to modulate the carrier concentration of the channel. Fig. 2(d-e) shows the false-color SEM images of the device structure. Its channel length is LC = 2.5 μm, its gate length has been set to LG = 500 nm and its channel width is W = 4 μm. Before the optical testing, the RT electrical properties of the as-fabricated b-AsxP1−x detectors were measured by the Keithley 4200. Fig. 3(a) and 3(c) display the output characteristics at different top-gate voltages (VG, ranging from −6 to 6 V with steps of 3 V) for the b-As0.1P0.9 and b-As0.5P0.5 detectors, respectively. They show a linear relationship between the source-drain current (IDS) and source-drain voltage (VDS) at the different VG, indicating good Ohmic contact between the b-AsxP1−x flakes and the metal electrodes. Furthermore, the transfer characteristics with a fixed VDS (50 mV) are presented in Fig. 3(b) and 3(d), exhibiting typical p-type transport behavior. The field-effect hole mobility of the b-AsxP1−x device can be calculated by the following equation:

$$\mu = \frac{\mathrm{d}I_{\mathrm{DS}}}{\mathrm{d}V_{\mathrm{G}}} \cdot \frac{L}{W} \cdot \frac{1}{C_{\mathrm{OX}} V_{\mathrm{DS}}},$$

where L and W are the channel length and width, respectively, and $C_{\mathrm{OX}}$ = 4.9×10^−4 F/m^2 is the capacitance of the 70-nm SiO2 dielectric. The field-effect hole mobilities calculated from the transfer characteristic curves are ~159 cm^2/(V·s) and 79 cm^2/(V·s) for the b-As0.1P0.9 and b-As0.5P0.5 detectors, respectively. For contrast, the output and transfer characteristics of the BP devices were also measured, as shown in Figure S2 (Supporting Information), which gives a field-effect hole mobility of ~725 cm^2/(V·s). The photoresponsivity of the as-fabricated b-AsxP1−x detectors was characterized by illuminating the detectors with a tunable THz source with a spectral range of 0.24 THz to 0.38 THz (for more details, see the Experimental Section), as shown in Fig. 4. The voltage responsivity (RV) of the detector can be extracted from Iph via the relation RV = Iph·R/(Pt·Sd/Sb), where R is the resistance of the detector measured by the Keithley 4200, Sb is the THz beam spot area (Sb = πr^2, where r is the radius of the beam spot) and Sd is the detector's active area. The whole area of the 139-μm antenna is smaller than the diffraction-limited one (Sλ), hence we assume Sd = Sλ = λ^2/4 (where λ is the wavelength of the incident THz wave). The incident radiation frequency dependence of the RV, measured at VG = 0 V, is shown in Fig. 4(a) and Fig. 4(c) for b-As0.1P0.9 and b-As0.5P0.5, respectively. In order to obtain maximum responsivity, the source electric-field polarization must be parallel to the antenna axis. As shown, the RV is a function of the frequency and has several clear response peaks for all curves, indicating the broadband nature of the bow-tie antenna. Fig. 4(b) plots the RV as a function of VG at 0.37 THz (the optimal frequency as selected from Fig. 4(a)) for the b-As0.1P0.9 detector, with a maximum RV = 28.23 V/W at VG = −2.96 V. Meanwhile, the VG dependence of the RV at 0.34 THz (selected from Fig. 4(c)) for the b-As0.5P0.5 detector is shown in Fig. 4(d); the maximum RV = 2.42 V/W at VG = 4.13 V was obtained from this curve.
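The extraction chain just described (mobility from the transfer-curve slope, photocurrent from the lock-in reading, then responsivity, and the noise-equivalent power defined further below as NEP = NV/RV with Johnson-noise-limited NV) can be collected in a short sketch. This is our own illustration: the function names and the numerical inputs in the example call are hypothetical placeholders, not the authors' measured values.

```python
import numpy as np

def field_effect_mobility(dIds_dVg, L, W, Cox, Vds):
    """mu = (dIds/dVg) · (L/W) · 1/(Cox·Vds), as in the equation above."""
    return dIds_dVg * (L / W) / (Cox * Vds)

def thz_response(lia_reading, R, P_t, freq_hz, r_spot, Gn=1e7, T=300.0):
    """Responsivity RV = Iph·R/(Pt·Sd/Sb) and a Johnson-noise NEP bound."""
    I_ph = 2.2 * lia_reading / Gn            # lock-in reading -> photocurrent
    lam = 3.0e8 / freq_hz                    # wavelength of the incident wave
    S_d = lam**2 / 4.0                       # diffraction-limited active area
    S_b = np.pi * r_spot**2                  # THz beam spot area
    R_V = I_ph * R / (P_t * S_d / S_b)       # voltage responsivity, V/W
    N_V = np.sqrt(4 * 1.380649e-23 * T * R)  # thermal noise, V/Hz^(1/2)
    return R_V, N_V / R_V                    # (responsivity, NEP)

# Hypothetical example call at 0.37 THz with a 1-mm-radius beam spot:
R_V, NEP = thz_response(lia_reading=1e-4, R=2e4, P_t=1e-4,
                        freq_hz=0.37e12, r_spot=1e-3)
```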
For comparison, the THz response of the BP detector was also measured, and the maximum RV was found to be about 8.1 V/W at 0.27 THz. The detection mechanism can be explained by the self-mixing theory, similar to previous reports[9]. The schematic diagram of the detector based on this mechanism is shown in Fig. 5(a). The mechanism is identical to the PW mechanism. The oscillating electric field of the incoming THz radiation is coupled asymmetrically between the source and gate electrodes via the bow-tie antenna, which excites plasma-wave oscillations; these in turn generate horizontal (Ex) and vertical (Ez) driving electric fields with a phase difference (φ) in the FET channel. Moreover, the driving electric field produces a modulation of the drift velocity and carrier density, resulting in a mixing photocurrent (Iph). The responsivity can be expressed as[36]:

$$R_V = \frac{e\mu W R}{2L} Z_0 \left\{ \bar{z} \int_0^L \frac{\mathrm{d}n}{\mathrm{d}V_{\mathrm{geff}}}\, \dot{\xi}_x \dot{\xi}_z \cos\phi \,\mathrm{d}x - \int_0^L \frac{\mathrm{d}n}{\mathrm{d}V_{\mathrm{geff}}}\, \dot{\xi}_x \dot{\xi}_z \,\mathrm{d}x \right\},$$

which is related to the detector structure (antenna geometry) and the material characteristics (carrier mobility μ). In our experiment, all of the detectors have the same antenna structure and fabrication process, so the RV should be related to the derivative of the channel conductivity with respect to VG, 1/σ·dσ/dVG (the curves are shown in Fig. S4(a-b)). The electrical properties of b-AsxP1−x show that the hole mobility decreases from 725 cm^2/(V·s) (for the BP detector) to 79 cm^2/(V·s) (for the b-As0.5P0.5 detector). Therefore, the RV should decrease with an increase in the concentration of As atoms. We note, however, that the RV first increased and then decreased with an increase in the concentration of As atoms, and the maximum RV was obtained in the b-As0.1P0.9 detector, which is not consistent with the expectation from this mechanism alone. There must be a competing effect in this detection process. Because b-AsxP1−x is a narrow-gap semiconductor and the detector is based on a metal-semiconductor-metal structure, the Electromagnetic Induced Well (EIW) theory must also be considered[5, 37], as shown in Fig. 5(b). A well induced by the THz field traps the carriers and changes the conductivity of the semiconductor to achieve THz detection. The EIW theory is related to the band gap (Eg) of the semiconductor, due to the relationship between the change of carrier concentration and Eg: Δn (Δh) ∝ exp(−Eg/kT)[38], indicating that a smaller Eg induces a larger fluctuation in the carrier concentration. In our experiment, the higher the concentration of As atoms, the smaller the Eg of b-AsxP1−x, but the lower the field-effect hole mobility. This means that a compromise between these two parameters is needed. The characterization results of the materials and detectors show that the b-As0.1P0.9 has a narrower Eg (~0.26 eV) than BP and a higher mobility (~159 cm^2/(V·s)) than b-As0.5P0.5. This ensures an ideal trade-off between the mobility and the carrier concentration fluctuation, so the b-As0.1P0.9 detector displays a higher RV (28.23 V/W) than the BP and b-As0.5P0.5 detectors.
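The opposing trends behind this trade-off can be made concrete with a toy comparison under the stated proportionalities (EIW contribution ∝ exp(−Eg/kT), PW contribution ∝ μ); this only illustrates the two tendencies and is not a quantitative device model:

```python
import math

kT = 0.0259  # eV, thermal energy at room temperature

# (band gap in eV, field-effect hole mobility in cm^2/(V·s)), from the text
materials = {
    "BP":          (0.31, 725.0),
    "b-As0.1P0.9": (0.26, 159.0),
    "b-As0.5P0.5": (0.23,  79.0),
}

for name, (Eg, mu) in materials.items():
    eiw_factor = math.exp(-Eg / kT)  # relative carrier-fluctuation factor
    print(f"{name}: exp(-Eg/kT) ≈ {eiw_factor:.1e}, mu = {mu:.0f} cm^2/(V·s)")

# Going from BP to b-As0.5P0.5, the fluctuation factor grows by roughly
# exp(0.08/0.0259) ≈ 22x while the mobility drops by ~9x: the two
# mechanisms pull in opposite directions, and the experiment locates the
# optimum at the intermediate composition b-As0.1P0.9.
```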
Another important parameter used to estimate the sensitivity of a THz detector is the Noise-Equivalent Power (NEP), which is usually defined as the minimum detectable power of a detector with a unitary signal-to-noise ratio in a 1 Hz bandwidth[39]. The value of the NEP can be calculated from the voltage noise spectral density (NV) and RV via the relationship NEP = NV/RV. The noise NV is dominated by thermal Johnson-Nyquist noise, because our device operates at zero bias during the optoelectronic testing. The NV was measured by a signal analyzer, as shown in the insets of Fig. 6(a) and 6(b) (for more details, see the Experimental Section). Fig. 6(a)-(b) plots the measured NEP as a function of VG; minimum NEPs of 0.53 nW/Hz^(1/2) and 2.61 nW/Hz^(1/2) have been attained with the b-As0.1P0.9 and b-As0.5P0.5 detectors, respectively. It should be noted that these values are an upper limit, since we assume that all of the power incident on the antenna is coupled to the detector channel, without taking into account the coupling losses related to the impedance match. Finally, terahertz transmission images were obtained with a single-pixel b-As0.1P0.9 detector at 0.37 THz, as shown in Fig. 6(c)-(d). As the test objects, we selected a key and a pair of metal scissors inside an envelope, respectively. Fig. 6(c) shows the terahertz image of the key, consisting of 80×40 scanned points with a step size of 1 mm × 1 mm. To minimize the noise and obtain the optimal image, the detected signal was read out via a lock-in amplifier with an integration time of 200 ms, as in the optoelectronic tests. The shape of the key is clearly revealed, with a reasonably good spatial resolution (1 mm^2). The transmission image of the scissors, with a total of 70×90 pixels, is also exhibited in Fig. 6(d), allowing concealed threats to be found and security checks to be implemented. These results show that our detector can be exploited in realistic situations, enabling large-area imaging of macroscopic samples.

In conclusion, we demonstrated a RT THz photodetector based on exfoliated flakes of b-AsxP1−x (x = 0, 0.1 and 0.5) for the first time. The tunable band gap and transport characteristics of b-AsxP1−x enable efficient control of the detection mechanisms in the detector. In the experiment, the PW and EIW mechanisms were found to be the primary sources of the THz response signal, and there is a competitive relationship between them. We found that the PW and EIW contributions are related to the carrier mobility and the band gap, respectively, and that b-As0.1P0.9 possesses an ideal equilibrium between the two parameters. The optimal response performance was realized in the b-As0.1P0.9 detector, which shows a maximum RV of 28.23 V/W and a minimum NEP of 0.53 nW/Hz^(1/2) at 0.37 THz. This work implies that b-AsxP1−x has great potential for THz photodetection due to its tunable electronic and optical properties as well as its promising THz response performance.

This work was supported by the National Natural Science Foundation of China (Grant No. 61927813, 61875223, 61922082) and the National Key R&D Program of China (2016YFE015700). The support from the Vacuum Interconnected Nanotech Workstation (Nano-X) of the Suzhou Institute of Nano-tech and Nano-bionics (SINANO), Chinese Academy of Sciences is also acknowledged.

Figure S1. (a, b) Atomic ratio of As and P elements for the b-As0.1P0.9 and b-As0.5P0.5, respectively. It proves that the b-As0.1P0.9 and b-As0.5P0.5 have accurate elemental ratios of about 1:9 and 5:5.
(c, d) The AFM images of few-layer b-As0.1P0.9 and b-As0.5P0.5 nanoflakes on a Si substrate with 285 nm SiO2. They show a clean surface and flat shape, proving the high quality of the materials. The corresponding height line scans are displayed in the images, giving thicknesses of 14 nm and 16 nm for the b-As0.1P0.9 and b-As0.5P0.5 nanoflakes, respectively. Figure S2. The electrical properties of the as-fabricated BP detector. (a) Output characteristics of the BP detector as a function of VG from −8 V to 8 V with steps of 4 V. It shows a good Ohmic contact and a large gate-tunability of the carrier density. (b) Transfer characteristics for the BP detector with a fixed VDS = 100 mV. It exhibits a typical p-type ambipolar transport behavior. The field-effect hole mobility measured from the transfer curve is about 725 cm^2/(V·s). Figure S3. The terahertz response characteristics of the same BP detector. (a) Frequency dependence of the voltage responsivity for the BP detector at VG = 0 V. It shows several clear response peaks for the detector, and the maximum voltage responsivity is located at 0.27 THz. (b, c) Voltage responsivity and noise equivalent power as a function of the gate voltage at 0.27 THz. A maximum RV of about 8.1 V/W and a minimum NEP of about 1.08 nW/Hz^(1/2) were measured from the curves, respectively. Figure S4. Derivative of the conductivity multiplied by the resistance, as a function of VG, in the b-As0.1P0.9 (a) and b-As0.5P0.5 (b) detectors. This is the expected responsivity following a plasma-wave detection mechanism. If the dominant mechanism were the PW mechanism, the measured shape of the RV curve would be in excellent agreement with these curves.
My research area is Functional Analysis with applications to complex analysis and linear partial differential operators. My main interests are Fréchet spaces, theory of distributions, spaces of analytic, real analytic and (non-)quasianalytic classes, and operators defined between them. I belong to the Spanish research networks Complex Analysis and Operator Theory and Functional Analysis. I am an editor of Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A Matemáticas (RACSAM), Functiones et Approximatio Commentarii Mathematici, Banach Journal of Mathematical Analysis, Matematicki Vesnik, Journal of Mathematical Analysis and Applications and Mediterranean Journal of Mathematics.

J. Bonet, W.J. Ricker, Fréchet and (LB) sequence spaces induced by dual Banach spaces of discrete Cesàro spaces. Bull. Belg. Math. Soc. Simon Stevin (to appear). PDF J. Bonet, T. Kalmes, A. Peris, Dynamics of shift operators on non-metrizable sequence spaces. Rev. Iber. Mat. (to appear). PDF J. Bonet, W. Lusky, J. Taskinen, On decay rates of solutions of parabolic Cauchy problems, Proc. Roy. Soc. Edinburgh 151 (2021), 1021-1039. DOI: 10.1017/prm.2020.48. PDF J. Bonet, Every separable complex Fréchet space with a continuous norm is isomorphic to a space of holomorphic functions. Canad. Math. Bull. 64 (2021), 8-12. DOI: 10.4153/S000843952000017X. PDF J. Bonet, W. Lusky, J. Taskinen, On the boundedness of Toeplitz operators with radial weights over weighted sup-norm spaces of holomorphic functions, J. Math. Anal. Appl. 493 (2021), Article 124515. DOI: 10.1016/j.jmaa.2020.124515. PDF J. Bonet, M.C. Gómez-Collado, E. Jordá, D. Jornet, Nuclear weighted composition operators on weighted Banach spaces of analytic functions, Proc. Amer. Math. Soc. 149 (2021), 311-321. DOI: 10.1090/proc/15223. PDF J. Bonet, W.J. Ricker, Operators acting in sequence spaces generated by dual Banach spaces of discrete Cesàro spaces. Funct. Approx. Comment. Math. 64(1) (2021), 109-139. PDF J. Bonet, E. Mangino, Associated weights for spaces of p-integrable entire functions, Quaest. Math. 43 (2020), 747-760. DOI: 10.2989/16073606.2019.1605420. PDF J. Bonet, W.J. Ricker, Order spectrum of the Cesàro operator in Banach lattice sequence spaces. Positivity 24 (2020), 593-603. PDF J. Bonet, W.J. Ricker, Operators acting in the dual spaces of discrete Cesàro spaces. Monatsh. Math. 191 (2020), no. 3, 487–512. PDF J. Bonet, A note about the spectrum of composition operators induced by a rotation. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 114 (2020), no. 2, Art. 63, 6 pp. PDF W. Seyoum, T. Mengestie, J. Bonet, Mean ergodic composition operators on generalized Fock spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 114 (2020), no. 1, Paper No. 6, 11 pp. J. Bonet, W. Lusky, J. Taskinen, On boundedness and compactness of Toeplitz operators in weighted H∞-spaces. J. Funct. Anal. 278 (2020), no. 10, 108456, 26 pp. PDF J. Bonet, The differentiation operator in the space of uniformly convergent Dirichlet series. Math. Nachr. 293 (2020), 1452–1458. PDF J. Bonet, W. Lusky, J. Taskinen, Solid cores and solid hulls of weighted Bergman spaces. Banach J. Math. Anal. 13 (2019), no. 2, 468–485. PDF A.A. Albanese, J. Bonet, W.J. Ricker, Multiplier and averaging operators in the Banach spaces ces(p), 1<p<∞. Positivity 23 (2019), no. 1, 177–193. PDF J. Bonet, W. Lusky, J.
Taskinen, Schauder bases and the decay rate of the heat equation. J. Evol. Equ. 19 (2019), 717-728. DOI: 10.1007/s00028-019-00492-x. PDF J. Bonet, W. Lusky, J. Taskinen, Distance formulas on weighted Banach spaces of analytic functions, Complex Anal. Oper. Theory 13 (2019), 893-900. DOI: 10.1007/s11785-018-0915-4. PDF A.A. Albanese, J. Bonet, W.J. Ricker, Operators on the Fréchet sequence space ces(p+), RACSAM 113 (2019), 1533-1556. DOI: 10.1007/s13398-018-0564-2. PDF J. Bonet, The spectrum of Volterra operators on Korenblum type spaces of analytic functions, Integr. Equ. Oper. Theory (2019) 91:46. PDF J. Bonet, T. Mengestie, M. Worku, Dynamics of the Volterra-type integral and differentiation operators on generalized Fock spaces, Results Math. (2019) 74:197. PDF A.A. Albanese, J. Bonet, W.J. Ricker, The Cesàro operator on power series spaces, Studia Math. 240 (2018), 47-68. PDF A.A. Albanese, J. Bonet, W.J. Ricker, The Fréchet spaces ces(p+), 1<p<∞, J. Math. Anal. Appl. 458 (2018), 1314-1323. PDF J. Bonet, J. Taskinen, Solid hulls of weighted Banach spaces of analytic functions on the unit disc with exponential weights, Ann. Acad. Sci. Fenn. Math. 43 (2018), 521-530. PDF A.A. Albanese, J. Bonet, W.J. Ricker, The Cesàro operator in weighted l_1 spaces, Math. Nachr. 291 (2018), 1015-1048. PDF A.A. Albanese, J. Bonet, W.J. Ricker, The Cesàro operator on Korenblum type spaces of analytic functions, Collect. Math. 69 (2018), 263-281. PDF A.A. Albanese, J. Bonet, W.J. Ricker, The Cesàro operator on duals of power series spaces of infinite type, J. Oper. Theory 79 (2018), 373-402. PDF J. Bonet, The Fréchet Schwartz algebra of uniformly convergent Dirichlet series, Proc. Edinburgh Math. Soc. 61 (2018), 933-942. PDF J. Bonet, J. Taskinen, Solid hulls of weighted Banach spaces of entire functions, Rev. Mat. Iberoam. 34 (2018), 593-608. PDF J. Bonet, M. Langenbruch, The mathematical work of Pawel Domanski, Funct. Approx. 59 (2018), 7-39. PDF J. Bonet, W. Lusky, J. Taskinen, Solid hulls and cores of weighted H^∞ spaces. Rev. Mat. Complut. 31 (2018), 781-804. PDF J. Bonet, W. Lusky, J. Taskinen, Monomial basis in Korenblum type spaces of analytic functions. Proc. Amer. Math. Soc. 146 (2018), 5269-5278. PDF J. Bonet, E. Jordá, A. Rodríguez, Mean ergodic multiplication operators on weighted spaces of continuous functions. Mediterr. J. Math. 15 (2018), no. 3, Art. 108, 11 pp. PDF Angela A. Albanese, J. Bonet, Werner J. Ricker, The Cesàro operator in the Fréchet spaces l^{p+} and L^{p-}, Glasgow Math. J. 59 (2017), 273-287. DOI: 10.1017/S001708951600015X. PDF J. Bonet, C. Fernández, A. Galbis, J.M. Ribera, Frames and representing systems in Fréchet spaces and their duals, Banach J. Math. Anal. 11 (2017), 1–20. http://dx.doi.org/10.1215/17358787-3721183. PDF J. Bonet, P. Domanski, A note on the spectrum of composition operators on spaces of real analytic functions, Complex Anal. Oper. Theory 11 (2017), 161-174. DOI: 10.1007/s11785-016-0589-5. PDF J. Bonet, D. Vukotic, A note on completeness of weighted normed spaces of analytic functions, Results Math. 72 (2017), 263-279. PDF A. A. Albanese, J. Bonet, W. J. Ricker, Dynamics and spectrum of the Cesàro operator on C^∞(R+), Monatshefte Math. 181 (2016), 267-283. DOI: 10.1007/s00605-015-0863-z. PDF A. A. Albanese, J. Bonet, W. J. Ricker, Mean ergodicity and spectrum of the Cesàro operator on weighted $c_0$ spaces, Positivity 20 (2016), 761–803. DOI: 10.1007/s11117-015-0385-x. PDF A. A. Albanese, J. Bonet, W. J.
Ricker, The Cesàro operator in growth Banach spaces of analytic functions, Integr. Equ. Oper. Theory 86 (2016), 97–112. DOI: 10.1007/s00020-016-2316-z. PDF M.J. Beltrán, J. Bonet, C. Fernández, Classical operators on Hörmander algebras, Discrete and Continuous Dynamical Systems - Series A (DCDS-A) 35 (2015), 637-652. PDF Angela A. Albanese, J. Bonet, Werner J. Ricker, On the continuous Cesàro operator in certain function spaces, Positivity 19 (2015), 659-679. DOI: 10.1007/s11117-014-0321-5. PDF J. Bonet, P. Domanski, Abel's functional equation and eigenvalues of composition operators on spaces of real analytic functions, Integral Equations Oper. Theory 81 (2015), 455-482. DOI: 10.1007/s00020-014-2175-4. PDF J. Bonet, J. Taskinen, A note on Volterra operators on weighted Banach spaces of entire functions, Math. Nachr. 288 (2015), 1216-1225. DOI: 10.1002/mana.201400099. PDF A. A. Albanese, J. Bonet, W. J. Ricker, Spectrum and compactness of the Cesàro operator on weighted l_p spaces, J. Austral. Math. Soc. 99 (2015), 287-314. DOI: 10.1017/S1446788715000221. PDF J. Bonet, Abscissas of weak convergence of vector valued Dirichlet series, J. Funct. Anal. 269 (2015), 3914-3927. DOI: 10.1016/j.jfa.2015.09.026. PDF J. Bonet, The spectrum of Volterra operators on weighted spaces of entire functions, Quart. J. Math. 66 (2015), no. 3, 799-807. DOI: 10.1093/qmath/hav019. PDF José Bonet, Carmen Fernández, Antonio Galbis, Juan M. Ribera, Shrinking and boundedly complete Schauder frames on Fréchet spaces, J. Math. Anal. Appl. 410 (2014), 953-966. PDF J. Bonet, C. Fernández, The range of the restriction map for a multiplicity variety in Hörmander algebras of entire functions, Mediterranean J. Math. 11 (2014), 643-652. DOI: 10.1007/s00009-013-0318-5. PDF Angela A. Albanese, J. Bonet, Werner J. Ricker, Uniform mean ergodicity of C_0-semigroups in a class of Fréchet spaces, Funct. Approx. 50.2 (2014), 307-349. PDF A.A. Albanese, J. Bonet, W.J. Ricker, Uniform Convergence and Spectra of Operators in a Class of Fréchet Spaces, Abstr. Appl. Anal. 2014, Art. ID 179027, 16 pp. PDF A.A. Albanese, J. Bonet, W.J. Ricker, Characterizing Fréchet-Schwartz spaces via power bounded operators, Studia Math. 224 (2014), 25-45. PDF José Bonet, Antonio Bonilla, Chaos of the differentiation operator on weighted Banach spaces of entire functions, Complex Anal. Oper. Theory 7 (2013), 33-42. DOI: 10.1007/s11785-011-0134-5. PDF María José Beltrán, José Bonet, Carmen Fernández, Classical operators on weighted Banach spaces of entire functions, Proc. Amer. Math. Soc. 141 (2013), 4293-4303. PDF José Bonet, Dragan Vukotic, Superposition operators between weighted Banach spaces of analytic functions of controlled growth, Monatshefte Math. 170 (2013), 311-323. DOI: 10.1007/s00605-012-0441-6. PDF Angela A. Albanese, José Bonet, Werner J. Ricker, Montel resolvents and uniform mean ergodic semigroups of linear operators, Quaestiones Math. 36 (2013), 253-290. PDF Angela A. Albanese, José Bonet, Werner J. Ricker, Convergence of arithmetic means of operators in Fréchet spaces, J. Math. Anal. Appl. 401 (2013), 160-173. PDF J. Bonet, Reordenación de series. El teorema de Levy Steinitz, Gaceta RSME 16 (2013), 449-464. PDF José Bonet, John D. Maitland Wright, Factorization of weakly compact operators between Banach spaces and Fréchet or (LB)-spaces, Mat. Vesnik 64.4 (2012), 330-335. PDF Angela A. Albanese, José Bonet, Werner J. Ricker, Mean ergodic semigroups of operators, RACSAM 106 (2012), 299-319. PDF José Bonet, M.
Carmen Gómez-Collado, David Jornet, Elke Wolf, Operator-weighted composition operators between weighted spaces of vector-valued analytic functions, Ann. Acad. Sci. Fenn. Math. 37 (2012), 319-338. PDF José Bonet, Leonhard Frerick, Enrique Jordá, The division problem for tempered distributions of one variable, J. Funct. Anal. 262 (2012), 2349-2358. PDF Angela A. Albanese, José Bonet, Fréchet spaces with no infinite dimensional Banach quotient, J. Math. Anal. Appl. 387 (2012), 556–567. DOI: 10.1016/j.jmaa.2011.09.018. PDF José Bonet, Pawel Domanski, Hypercyclic composition operators on spaces of real analytic functions, Math. Proc. Camb. Phil. Soc. 153 (2012), 489-503. PDF José Bonet, Mikael Lindström, Elke Wolf, Norm-attaining weighted composition operators on weighted Banach spaces of analytic functions, Archiv Math. 99 (2012), 537-546. PDF José Bonet, Ben de Pagter, and Werner J. Ricker, Mean ergodic operators and reflexive Fréchet lattices. Proc. Royal Soc. Edinburgh (Math.) 141A (2011), 897-920. PDF José Bonet and Sven A. Wegner, Bornological projective limits of inductive limits of normed spaces, Funct. Approx. 44.2 (2011), 227-242. PDF José Bonet and Pawel Domanski, Power bounded composition operators on spaces of analytic functions, Collectanea Math. 62 (2011), 69-83. DOI: 10.1007/s13398-010-0005-9. PDF José Bonet and J.D. Maitland Wright, Non-commutative locally convex measures. Quarterly J. Math. 62 (2011), 21-38. DOI: 10.1093/qmath/hap018. PDF José Bonet, Pawel Domanski, A note on mean ergodic composition operators on spaces of holomorphic functions, RACSAM 105 (2011), 389-396. DOI: 10.1007/s13398-011-0009-7. PDF José Bonet, Carmen Fernández, Spaces of Moscatelli type. A survey, Note di Mat. 31 (2011), 29-42. PDF Angela A. Albanese, José Bonet, and Werner J. Ricker, C0-semigroups and mean ergodic operators in a class of Fréchet spaces. J. Math. Anal. Appl. 365, no. 1, (2010) 142-157. DOI: 10.1016/j.jmaa.2009.10.014. PDF Angela A. Albanese, José Bonet, and Werner J. Ricker, Grothendieck spaces with the Dunford-Pettis property. Positivity 14, no. 1, (2010) 145-164. DOI: 10.1007/s11117-009-0011-x. PDF José Bonet and Bernardo Cascales, Noncomplete Mackey topologies in Banach spaces. Bull. Aust. Math. Soc. 81, no. 3, (2010) 409-413. DOI: 10.1017/S0004972709001154. PDF José Bonet, A problem on the structure of Fréchet spaces. RACSAM Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 104 (2010), 427-434. PDF Simone Agethen, Klaus D. Bierstedt, and José Bonet, Projective limits of weighted (LB)-spaces of continuous functions. Arch. Math. (Basel) 92, no. 5, (2009) 384-398. DOI: 10.1007/s00013-009-3197-z. PDF Angela A. Albanese, José Bonet, and Werner J. Ricker, Mean ergodic operators in Fréchet spaces. Ann. Acad. Sci. Fenn. Math. 34, no. 2, (2009) 401-436. PDF Angela A. Albanese, José Bonet, and Werner J. Ricker, On mean ergodic operators. In: Proceedings of the Third Conference on Vector Measures, Integration and Applications (Eichstätt, Sept. 2008), Operator Theory: Advances and Applications, Vol. 201, G.P. Curbera et al. (Eds), Birkhäuser, Basel (2009), 1-20. PDF José Bonet, Dynamics of the differentiation operator on weighted spaces of entire functions, Math. Z. 261 (2009), 649-677. DOI: 10.1007/s00209-008-0347-0. PDF José Bonet, Mikael Lindström, and Elke Wolf, Topological structure of the set of weighted composition operators on weighted Bergman spaces of infinite order. Integr. Equ. Oper. Theory 65 (2009), 195-210.
PDF José Bonet and Reinhold Meise, Convolution operators on quasianalytic classes of Roumieu type. Functional analysis and complex analysis, Contemp. Math., 481, Amer. Math. Soc., Providence, RI, (2009) 23-45. PDF José Bonet and Jari Taskinen, Toeplitz operators on the space of analytic functions with logarithmic growth. J. Math. Anal. Appl. 353, no. 1, (2009) 428-435. DOI: 10.1016/j.jmaa.2008.12.009. PDF Klaus D. Bierstedt, José Bonet, and Jari Taskinen, Weighted inductive limits of spaces of entire functions, Monatshefte Math. 154, no. 2, (2008) 103-120. DOI: 10.1007/s00605-008-0526-4. PDF José Bonet and Pawel Domanski, The splitting of short exact sequences of PLS-spaces and smooth dependence of solutions of linear partial differential equations, Advances in Mathematics 217, no. 2, (2008) 561-585. PDF José Bonet, Mikael Lindström, and Elke Wolf, Differences of composition operators between weighted Banach spaces of holomorphic functions, J. Austral. Math. Soc. 84, no. 1, (2008) 9-20. DOI: 10.1017/S144678870800013X. PDF José Bonet, Pablo Galindo, Mikael Lindström, Spectra and essential radii of composition operators on weighted Banach spaces of analytic functions, J. Math. Anal. Appl. 340, no. 2, (2008) 884-891. DOI: 10.1016/j.jmaa.2007.09.006. PDF José Bonet, Mikael Lindström, and Elke Wolf, Isometric weighted composition operators on weighted Banach spaces of type H∞, Proc. Amer. Math. Soc. 136, no. 12, (2008) 4267-4273. PDF Angela A. Albanese, José Bonet, Ultradifferentiable kernels of linear partial differential operators on non-quasianalytic classes of Roumieu type, Publ. Res. Inst. Math. Sci. Kyoto Univ. 43 (2007) 39-54. José Bonet, Werner J. Ricker, Schauder decompositions and the Grothendieck and Dunford-Pettis properties in Köthe echelon spaces of infinite order, Positivity 11, no. 1, (2007) 77-93. MR2297324 Zbl pre05156944 José Bonet, Reinhold Meise, and Sergej N. Melikhov, A comparison of two different ways to define classes of ultradifferentiable functions, Bull. Belg. Math. Soc. Simon Stevin 14 (2007) 425-444. PDF José Bonet, Leonhard Frerick, and Enrique Jordá, Extension of vector valued holomorphic and harmonic functions, Studia Math. 183 (2007) 225-248. PDF José Bonet and Pawel Domanski, The structure of spaces of quasianalytic functions of Roumieu type, Archiv Math. 89 (2007) 430-441. PDF José Bonet, Topologizable operators on locally convex spaces, Proceedings of the ICTAA2005, Contemporary Math. 427, Amer. Math. Soc. (2007) 103-108. Klaus D. Bierstedt and José Bonet, On the mathematical work of Jean Schmets, Bull. Belg. Math. Soc. Simon Stevin 14 (2007) 385-405. PDF Angela A. Albanese, José Bonet, Intersections of Fréchet and (LB)-spaces, Rocky Mountain J. Math. 36 (2006) 1093-1105. MR2274885 Klaus D. Bierstedt, José Bonet, Weighted (LB)-spaces of holomorphic functions: VH(G)=V0H(G) and completeness of V0H(G), J. Math. Anal. Appl. 323 (2006) 747-767. MR2260142 Zbl pre05077867 José Bonet and Susanne Dierolf, The pullback for bornological and ultrabornological spaces, Note Mat. 25, no. 1, (2005/06) 63-67. MR2220453 Zbl pre05058681 José Bonet, S. Dierolf, and K. Aye Aye, Dense subspaces of quasinormable spaces, Math. Nachr. 279, no. 7, (2006) 699-704. MR2226405 Zbl 1109.46004 José Bonet, Susumu Okada, and Werner J. Ricker, The canonical spectral measure and Köthe function spaces, Quaest. Math. 29, no. 1, (2006) 91-116. MR2209794 Zbl pre05044321 José Bonet and Pawel Domański, Parameter dependence of solutions of differential equations on spaces of distributions and the splitting of short exact sequences, J.
Funct. Anal. 230 (2006), no. 2, 329-381. MR2186216 (2006i:35019) Zbl 1094.46006 Progress in Functional Analysis. Proceedings of the International Meeting on Functional Analysis held on the occasion of the sixtieth birthday of M. Valdivia in Peñíscola, October 22--27, 1990. Edited by Klaus-D. Bierstedt, José Bonet, John Horváth and Manuel Maestre. North-Holland Mathematics Studies, 170. North-Holland Publishing Co., Amsterdam, 1992. xxviii+431 pp. ISBN: 0-444-89378-4 MR1150736 (92i:46002) Zbl 0745.00031 José Bonet, Félix Martínez-Giménez, and Alfredo Peris, Linear chaos on Fréchet spaces, Internat. J. Bifur. Chaos Appl. Sci. Engrg. 13 (2003), no. 7, 1649-1655, Dynamical systems and functional equations (Murcia, 2000). MR2015614 (2004i:47016) Zbl 1079.47008 José Bonet and Reinhold Meise, Quasianalytic functionals and projective descriptions, Math. Scand. 94 (2004), no. 2, 249-266. MR2053743 (2005d:46050) Zbl 1064.46018 José Bonet, Werner J. Ricker, The canonical spectral measure in Köthe echelon spaces, Integr. Equ. Oper. Theory 53 (2005) 477-496. MR2187433 (2007a:28009) Zbl 1109.46047 José Bonet, Pawel Domański, and Dietmar Vogt, Interpolation of vector-valued real analytic functions, J. London Math. Soc. (2) 66 (2002), no. 2, 407-420. MR1920411 (2003h:46056) Zbl 1027.46048 José Bonet, Félix Martínez-Giménez, and Alfredo Peris, A Banach space which admits no chaotic operator, Bull. London Math. Soc. 33 (2001), no. 2, 196-198. MR1815423 (2001m:47015) Zbl 1046.47008 José Bonet and Andreas Defant, The Levy-Steinitz rearrangement theorem for duals of metrizable spaces, Israel J. Math. 117 (2000), 131-156. MR1760590 (2001d:46012) Klaus D. Bierstedt, José Bonet, and Jari Taskinen, Associated weights and spaces of holomorphic functions, Studia Math. 127 (1998) 137-168. MR1488148 (99a:46037) Zbl 0934.46027 José Bonet and Alfredo Peris, Hypercyclic operators on non-normable Fréchet spaces, J. Funct. Anal. 159 (1998) 587-595. MR1658096 (99k:47044) Zbl 0926.47011 José Bonet, Antonio Galbis and Reinhold Meise, On the range of convolution operators on non-quasianalytic ultradifferentiable functions, Studia Math. 126 (1997), 171-198. MR1472697 (99a:46071) Zbl 0918.46039 Klaus D. Bierstedt, José Bonet, and Antonio Galbis, Weighted inductive and projective limits of spaces of holomorphic functions on balanced subsets of C^N, Michigan Math. J. 40 (1993), 271-297. MR1226832 (94i:46034) Zbl 0803.46023 José Bonet, Rüdiger Braun, Reinhold Meise, and B.A. Taylor, Whitney's extension theorem for non-quasianalytic classes of ultradifferentiable functions, Studia Math. 99 (1991), 154-184. MR1120747 (93e:46030) Zbl 0738.46009 Françoise Bastin and José Bonet, Locally bounded non continuous linear forms on strong duals of non distinguished echelon spaces, Proc. Amer. Math. Soc. 108 (1990), no. 3, 769-774. MR1002152 (90h:46012) Zbl 0724.46006 José Bonet, Pablo Galindo, Domingo García, and Manolo Maestre, Locally bounded sets of holomorphic mappings, Trans. Amer. Math. Soc. 309 (1988), no. 2, 609-620. MR0961603 (90a:46110) Zbl 0706.46033 Klaus D.
Bierstedt and José Bonet, Stefan Heinrich's density condition for Fréchet spaces and the characterization of the distinguished echelon spaces, Math. Nachr. 135 (1988), 149-180. MR0944226 (90a:46001) Zbl 0688.46001 José Bonet, On weighted inductive limits of spaces of continuous functions, Math. Z. 192 (1986), no. 1, 9-20. MR0835386 (87h:46064) Zbl 0575.46025 Pedro Pérez Carreras and José Bonet, Barrelled locally convex spaces. North-Holland Mathematics Studies, 131. Notas de Matemática [Mathematical Notes], 113. North-Holland Publishing Co., Amsterdam, 1987. xvi+512 pp. ISBN: 0-444-70129-X MR0880207 (88j:46003) Zbl 0614.46001
Some Problems of Simultaneous Approximation of Functions of Two Variables and Their Derivatives by Interpolation Bilinear Splines
Myskin K. Yu., Vakarchuk S. B.
Exact estimates for the errors of approximation of functions of two variables and their derivatives by interpolation bilinear splines are obtained on certain classes.

Estimate for a Rearrangement of a Function Satisfying the "Reverse Jensen Inequality"
Korenovskii A. A.
We show that an equimeasurable rearrangement of any function satisfying the "reverse Jensen inequality" with respect to various multidimensional segments also satisfies the "reverse Jensen inequality" with the same constant.

Markov Uniqueness and Rademacher Theorem for Smooth Measures on an Infinite-Dimensional Space under Successful-Filtration Condition
Kulik A. M.
For a smooth measure on an infinite-dimensional space, a "successful-filtration" condition is introduced, and the Markov uniqueness and Rademacher theorem for measures satisfying this condition are proved. Some sufficient conditions, such as the well-known Hoegh-Krohn condition, are also considered. Examples demonstrating connections between these conditions and applications to convex measures are given.

Strong Summability of Faber Series and Estimates for the Rate of Convergence of a Group of Deviations in a Closed Domain with Piecewise-Smooth Boundary
Lasuriya R. A.
We establish estimates for groups of deviations of Faber series in closed domains with piecewise-smooth boundary.

Stability and Comparison of States of Dynamical Systems with Respect to a Time-Varying Cone
Mazko A. G.
We investigate classes of dynamical systems in a partially ordered space with properties of monotonicity type with respect to specified cones. We propose new methods for the stability analysis and comparison of solutions of differential systems using time-varying cones. To illustrate the results obtained, we present examples using typical cones in vector and matrix spaces.

A Limit Theorem for Integral Functionals of an Extremum of Independent Random Processes
Matsak I. K.
We prove a theorem on the convergence of integral functionals of an extremum of independent stochastic processes to a degenerate law of distributions.

Differentiation of Singular Integrals with Piecewise-Continuous Density and Boundary Values of Derivatives of a Cauchy-Type Integral
Plaksa S. A.
We establish sufficient conditions for the differentiability of a singular Cauchy integral with piecewise-continuous density. Formulas for the nth-order derivatives of a singular Cauchy integral and for the boundary values of the nth-order derivatives of a Cauchy-type integral are obtained.

Approximation of Continuous Functions by de la Vallée Poussin Operators
Rukasov V. I., Silin E. S.
For $\sigma \rightarrow \infty$, we study the asymptotic behavior of upper bounds of deviations of functions belonging to the classes $\widehat{C}_{\infty}^{\overline{\Psi}}$ and $\widehat{C}^{\overline{\Psi}} H_{\omega}$ from the so-called de la Vallée Poussin operators. We find asymptotic equalities that, in some important cases, guarantee the solution of the Kolmogorov-Nikol's'kyi problem for the de la Vallée Poussin operators on the classes $\widehat{C}_{\infty}^{\overline{\Psi}}$ and $\widehat{C}^{\overline{\Psi}} H_{\omega}$.

Phragmén-Lindelöf Principle for Some Quasilinear Evolution Equations of the Second Order
Shishkov A. E., Sleptsova I. P.
We consider the equation $u_{tt} + A (u_t) + B(u) = 0$, where $A$ and $B$ are quasilinear operators with respect to the variable $x$ of the second order and the fourth order, respectively. In a cylindrical domain unbounded with respect to the space variables, we obtain estimates that characterize the minimum growth at infinity of any nonzero solution of the mixed problem.

Brief Communications (English)

Some Results on Asymptotic Stability of Order α
Vu Thi Thu Huong, Vu Tuan
Quasi-equiasymptotic stability of order $\alpha$ $(\alpha \in \mathbb{R}^{*}_{+})$ with respect to a part of the variables is considered. Some sufficient conditions, a converse theorem, and a theorem on multistability are proved.

Brief Communications (Russian)

A Differential Analog of the Main Lemma of the Theory of Markov Branching Processes and Its Applications
Imomov A. A.
We obtain a differential analog of the main lemma in the theory of Markov branching processes $\mu(t),\quad t \geq 0$, of continuous time. We show that the results obtained can be applied in the proofs of limit theorems in the theory of branching processes by the well-known Stein-Tikhomirov method. In contrast to the classical condition of nondegeneracy of the branching process, $\{\mu(t) > 0\}$, we consider the condition of nondegeneracy of the process in the distant future, $\{\mu(\infty) > 0\}$, and justify it in terms of generating functions. Under this condition, we study the asymptotic behavior of the trajectory of the considered process.

On Groups with Minimality Condition for Noninvariant Abelian pd-Subgroups
Lyman F. N.
Ukr. Mat. Zh. - 2005. - 57, № 2. - pp. 265-270
We study the properties and the structure of non-Abelian groups with the minimality condition for non-invariant Abelian pd-subgroups in the case where they do not satisfy the minimality condition for Abelian pd-subgroups. We prove the solvability of these groups and establish relations with non-Abelian groups in which all infinite Abelian pd-subgroups are invariant.

Brief Communications (Ukrainian)

Newton-Kantorovich Iterative Regularization for Nonlinear Ill-Posed Equations Involving Accretive Operators
Nguen Byong, Vu Quang Hung
The Newton-Kantorovich iterative regularization for nonlinear ill-posed equations involving monotone operators in Hilbert spaces is developed for the case of accretive operators in Banach spaces. An estimate for the convergence rate of the method is established.

Shape-Preserving Smoothing of 3-Convex Splines of Degree 4
Prymak A. V.
For every 3-convex piecewise-polynomial function $s$ of degree $\le 4$ with $n$ equidistant knots on $[0, 1]$ we construct a 3-convex spline $s_1$ ($s_1 \in C^{(3)}$) of degree $\le 4$ with the same knots that satisfies the inequality $$\left\| s - s_1 \right\|_{C_{[0,1]}} \leqslant c\,\omega_5 (s;1/n),$$ where $c$ is an absolute constant and $\omega_5$ is the modulus of smoothness of the fifth order.

Two-Sided Approximation of Solutions of Boundary-Value Problems
Mentynskyi S. M., Shuvar B. A.
We propose a general scheme for the two-sided approximation of solutions of boundary-value problems for ordinary differential equations. This scheme involves a number of known and new two-sided methods. In our investigation, we use constructions of the Samoilenko numerical-analytic method together with the procedure of the construction of two-sided methods proposed by Kurpel' and Shuvar.
A mathematical explanation of the DES encryption system

I need a mathematical explanation of what the DES encryption system really does. This means I need more explanation than the one that FIPS offers, which is more of an explanation for computer specialists. Among other things, I want to know where these permutation tables come from: $$IP\\ \newcommand\T{\Rule{0pt}{1em}{.3em}} \begin{array}{|c|c|c|c|c|c|c|c|} \hline 58 \T & 50 \T & 42 \T & 34 \T & 26 \T & 18 \T & 10 \T & 2 \\\hline 60 \T & 52 \T & 44 \T & 36 \T & 28 \T & 20 \T & 12 \T & 4 \\\hline 62 \T & 54 \T & 46 \T & 38 \T & 30 \T & 22 \T & 14 \T & 6 \\\hline 64 \T & 56 \T & 48 \T & 40 \T & 32 \T & 24 \T & 16 \T & 8 \\\hline 57 \T & 49 \T & 41 \T & 33 \T & 25 \T & 17 \T & 9 \T & 1 \\\hline 59 \T & 51 \T & 43 \T & 35 \T & 27 \T & 19 \T & 11 \T & 3 \\\hline 61 \T & 53 \T & 45 \T & 37 \T & 29 \T & 21 \T & 13 \T & 5 \\\hline 63 \T & 55 \T & 47 \T & 39 \T & 31 \T & 23 \T & 15 \T & 7 \\\hline \end{array} $$ I know this kind of encryption system has a lot to do with the mathematical concept of a field. So I would like to know if someone has a PDF or any other real mathematical explanation of the DES encryption system. encryption reference-request des – asked by BorjaDES

No information about where the S-boxes came from was available when DES was first specified; this generated significant amounts of paranoia. It was later found that the exact choices increased the resistance of DES to differential cryptanalysis, but AFAIK the precise way they were selected is still classified (that is, if the details are even written down anywhere). There may be no more to it, mathematically, than "use this magic table lookup". – hmakholm left over Monica Nov 21 '11 at 14:14

IIRC there was a description in Schneier's "Applied Cryptography", but I don't know if it touches on mathematical fields. (No copy with me.) – S.L. Barth Nov 21 '11 at 14:50

Welcome to Cryptography Stack Exchange. Your two questions on security.stackexchange.com and math.stackexchange.com were migrated here because the questions related to the internals of a cryptographic algorithm, and thus are fully on-topic here. I then merged both questions. Please register your account here, too, to be able to comment and accept an answer. – Paŭlo Ebermann Nov 21 '11 at 17:08

@e-sushi: +1 for the nice $\TeX$; I previously did not realize \newcommand was supported. – fgrieu Jun 19 '15 at 5:06

The DES standard (FIPS 46-3) is actually a rather straightforward description of DES. It tells with precision and detail where each bit goes. It is a specification for implementers (who can be thought of as "computer specialists", but anybody who wants to learn about DES should be able to understand that specification). What FIPS 46-3 does not tell is why DES was designed that way. If you want more mathematics, you can have a look at the Handbook of Applied Cryptography (free download!), in particular chapter 7, which goes over the case of DES. For the initial and final permutations (called "IP" in FIPS 46-3), they were defined not for security (they are fixed, key-less permutations which anyone can readily invert), but to ease implementation in hardware contexts of the 1970s era: they make it easier to build a hardware implementation which plugs on an 8-bit bus. See this question for some details. There is nothing about fields in DES.
It is all bit-by-bit manipulations; to some extent you could say that a bit is a value in GF(2), the field with two elements (0 and 1), but that's quite stretching it. – Thomas Pornin

Well, (good) encryption schemes serve to obscure and protect the data, while not making it easy to recover the input bits from the output bits. The S-boxes and P-boxes, along with the key, serve to increase the mathematical complexity, making it very difficult to determine the actual mapping that occurs between some of the input bits and each output bit. To borrow the oft-repeated concept of Shannon and others: "confusion and diffusion". What really hits at the heart of the issue, then, is the design rationale/criteria for S-boxes. Non-linearity is a must for S-boxes; S-boxes can't be too small, etc. You can do a Google search and come up with a slew of decent papers over the last 12 years or so, or you can hit up Terry Ritter's resource page on S-Box Design literature. The concept you'll probably be most intrigued with is SAC (Strict Avalanche Criterion). You may also find the following documents very useful:

"Cryptographic Properties of Boolean Functions and S-Boxes", PhD thesis by An Braeken (PDF)

"New Analysis Methods on Strict Avalanche Criterion of S-Boxes", article by Phyu Phyu Mar, Khin Maung Latt in WASET 48 2008 (PDF)

– xkey

I have found some explanation by Thomas Pornin:

The initial and final permutation have no influence on security (they are unkeyed and can be undone by anybody). The usual explanation is that they make implementation easier in some contexts, namely a hardware circuit which receives data over an 8-bit bus: it can accumulate the bits into eight shift registers, which is more efficient (in terms of circuit area) than a single 64-bit register. This process "naturally" performs the initial permutation of DES.

In more detail: Suppose that you are designing a hardware circuit which should do some encryption with DES, and receives data by blocks of 8 bits. This means that there are 8 "lines", each yielding one bit at each clock. A common device for accumulating data is a shift register: the input line plugs into a one-bit register, which itself plugs into another, which plugs into a third register, and so on. At each clock, each register receives the contents of the previous one, and the first register accepts the new bit. Hence, the contents are "shifted".

With an 8-bit bus, you would need 8 shift registers, each receiving 8 bits of an input block. The first register will receive bits 1, 9, 17, 25, 33, 41, 49 and 57. The second register receives bits 2, 10, 18,... and so on. After eight clocks, you have received the complete 64-bit block and it is time to proceed with the DES algorithm itself.

If there was no initial permutation, then the first step of the first round would extract the "left half" (32 bits) which, at that point, would consist of the leftmost 4 bits of each of the 8 shift registers. The "right half" would also get bits from the 8 shift registers. If you think of it as wires from the shift registers to the units which use the bits, then you end up with a bunch of wires which heavily cross each other. Crossing is doable but requires some circuit area, which is the expensive resource in hardware designs. However, if you consider that the wires must extract the input bits and permute them as per the DES specification, you will find out that there is no crossing anymore.
In other words, the accumulation of bits into the shift registers inherently performs a permutation of the bits, which is exactly the initial permutation of DES. By defining that initial permutation, the DES standard says: "well, now that you have accumulated the bits in eight shift registers, just use them in that order, that's fine". The same thing is done again at the end of the algorithm. Remember that DES was designed at a time when 8-bit buses were the top of the technology, and one thousand transistors were an awfully expensive amount of logic. – IDOK

Great, thanks in advance. Let me post you an example of what I really want. Check this PDF file (page #8, article 5.2). It's about AES, but I need the same for DES. It's in Spanish, but you only need to see the numbers and such to realize what I'm looking for: grupos.unican.es/amac/articles/aes.pdf – BorjaDES Nov 21 '11 at 14:21

I am afraid that I don't understand what you want exactly. I thought that you wanted only the explanation; are you looking for the exact analogue of that article for the DES algorithm? Then I think I am not the correct person, as I don't know Spanish; wait for others to answer then. @BorjaDES – iyengar Nov 21 '11 at 14:36

The article you gave me is just the FIPS article with a plus of examples. I understand the whole algorithm, but I want to know the source of, or reason for, those steps. This means I want to know "WHY shift by 1 in some rounds and by 2 in others?" and "WHY does PC-1 have those numbers and not other ones?" I know this has a mathematical explanation and this is what I want to know. Anyway, if it's not a lot of work, I would be glad to see your work about DES. Thanks iyengar. – BorjaDES Nov 21 '11 at 14:41

@BorjaDES: But if you want to hear more about it mathematically, please re-edit the question and keep just the point you exactly want; it would be good if you give a suggestion to the users on where to start. – iyengar Nov 21 '11 at 15:15

@Borja: you write "I know this has a mathematical explanation" -- where do you know this from? Whoever told you this, do you have reason to trust them? – hmakholm left over Monica Nov 21 '11 at 15:16

DES S-box criteria were kept secret until Don Coppersmith revealed them (some?) in 2000:

1. Each S-box should have six bits of input and four bits of output. (In 1974 this was the largest size S-box that could be accommodated if DES were to fit on a single chip.)
2. No output bit of an S-box should be too close to a linear function of the input bits. (The S-boxes are the only nonlinear part of DES. Their nonlinearity is the algorithm's strength.)
3. Each "row" of an S-box should contain all possible outputs. (This randomizes the output.)
4. If two inputs to an S-box differ in exactly one bit, their outputs should differ in at least two bits.
5. If two inputs to an S-box differ exactly in the middle two bits, their outputs must differ by at least two bits. (Criteria (4) and (5) provide some diffusion.)
6. If two inputs to an S-box differ in their first two bits and agree on their last two, the two outputs must differ.
7. For any nonzero 6-bit difference between inputs, no more than 8 of the 32 pairs of inputs exhibiting that difference may result in the same output difference.

Coppersmith added:

6 bits in, 4 bits out.

No output bit "too close" to a linear function of the inputs.
Fix two outer bits (autoclave): the rest is a permutation of 4 bits.

In other words, $\Delta_{in} = 0wxyz0 \implies \Delta_{out} \neq 0$ (different from 0).

$\Delta_{in} = 001100 \implies |\Delta_{out}| \geq 2$.

$P(\Delta_{out} = 0 \mid \Delta_{in}) \leq 8/32$.

Stricter but ad hoc bounds on $P(\Delta_{out} = 0 \mid \Delta_{in})$ for particular input differences.

$\Delta_{in} = 11xy00 \implies \Delta_{out} \neq 0$.

Implementation should use at most 47 gates.

This is from Courtois in https://eprint.iacr.org/2003/184.pdf – Kiss Alexander
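To make the shift-register argument from the answer above concrete, here is a small Python sketch (an editorial illustration, not part of any answer): it feeds a 64-bit block byte-by-byte into eight 1-bit-wide shift registers and checks that reading the registers back in one natural wiring order reproduces exactly the IP table quoted in the question. The read-out order [2, 4, 6, 8, 1, 3, 5, 7] is an assumption about how the registers are wired to the output; it is the ordering under which the identity comes out exactly.

```python
# Sketch: byte-serial loading of a DES block into eight shift registers
# reproduces the initial permutation IP. Bit numbering follows FIPS 46-3
# (bit 1 = first/leftmost bit of the 64-bit block).

IP = [58, 50, 42, 34, 26, 18, 10, 2,
      60, 52, 44, 36, 28, 20, 12, 4,
      62, 54, 46, 38, 30, 22, 14, 6,
      64, 56, 48, 40, 32, 24, 16, 8,
      57, 49, 41, 33, 25, 17, 9, 1,
      59, 51, 43, 35, 27, 19, 11, 3,
      61, 53, 45, 37, 29, 21, 13, 5,
      63, 55, 47, 39, 31, 23, 15, 7]

# Eight shift registers, one per bus line. At clock k (k = 1..8) the bus
# carries byte k, i.e. bits 8*(k-1)+1 .. 8*(k-1)+8; line j gets bit 8*(k-1)+j.
registers = [[] for _ in range(8)]
for clock in range(1, 9):
    for line in range(1, 9):
        bit_number = 8 * (clock - 1) + line
        # each new bit pushes the older contents deeper into the register
        registers[line - 1].insert(0, bit_number)

# Assumed wiring: read registers in the order 2,4,6,8,1,3,5,7 (newest bit first).
readout = []
for line in (2, 4, 6, 8, 1, 3, 5, 7):
    readout.extend(registers[line - 1])

assert readout == IP  # the "free" permutation is exactly IP
print("shift-register readout matches IP:", readout == IP)
```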
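And to see two of the published S-box criteria in action, the following Python sketch (again an editorial illustration, using the S1 table and the row/column addressing from FIPS 46-3) tests them on the first DES S-box: every row contains all possible 4-bit outputs, and any two inputs differing in exactly one bit produce outputs differing in at least two bits.

```python
# Check two of Coppersmith's S-box design criteria on DES S-box S1.
# S1 as listed in FIPS 46-3 (4 rows x 16 columns).
S1 = [
    [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
    [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
    [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
    [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
]

def sbox(x):
    """Apply S1 to a 6-bit input x. FIPS addressing: row = b1 b6, column = b2..b5."""
    row = ((x >> 4) & 0b10) | (x & 1)
    col = (x >> 1) & 0b1111
    return S1[row][col]

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

# Criterion 3: each row contains all possible 4-bit outputs.
rows_ok = all(sorted(row) == list(range(16)) for row in S1)

# Criterion 4: inputs differing in exactly one bit give outputs
# differing in at least two bits.
one_bit_ok = all(
    hamming(sbox(x), sbox(x ^ (1 << i))) >= 2
    for x in range(64) for i in range(6)
)

print("every row a permutation of 0..15:", rows_ok)
print("1-bit input change => >=2-bit output change:", one_bit_ok)
```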
Review of Economic Design, December 2014, Volume 18, Issue 4, pp. 265–287

Common preference, non-consequential features, and collective decision making

Susumu Cato

This paper examines an extended framework of Arrovian social choice theory. We consider two classes of values: consequential values and non-consequential values. Each individual has a comprehensive preference based on the two. Non-consequential values are assumed to be homogeneous among individuals. It is shown that a social ordering function satisfying Arrovian conditions must be non-consequential: a social comprehensive preference gives unequivocal priority to non-consequential values. We clarify the role of common preferences over non-consequential features.

Keywords: Non-consequentialism · Arrow's impossibility theorem · Collective decision making · Welfarism

JEL Classification: D63, D71

An earlier version of this paper was circulated under the title "The role of common morality in social choice." I am grateful to Katsuhito Iwai, Kazuya Kamiya, Tomohiko Kawamori, Masahiro Okuno-Fujiwara, Masayuki Otaki, Toyotaka Sakai, Dan Sasaki, Kotaro Suzumura, Naoki Yoshihara, two anonymous referees, an associate editor, and Atila Abdulkadiroglu of this journal for helpful comments and suggestions. This paper was financially supported by Grant-in-Aid for Young Scientists (Start-up; B) from the Japan Society for the Promotion of Science and the Ministry of Education, Culture, Sports, Science and Technology.

Appendix: Proofs

Proof of Proposition 1 'If'. We show that if there exists at least one non-consequentialist, then there exists a social ordering function \(f\) satisfying PP, IIA, and ND.

Step 1. By assumption, there exists a non-consequentialist \(i \in N\). Choose two individuals \(k,l \in N\) with \(k \ne l\). Let \(\theta ^* \in \varTheta \). Consider the following social ordering function \(f\): for all \((x,\theta ), (y,\theta ') \in X \times \varTheta \),
$$\begin{aligned} \theta J \theta '&\Rightarrow (x,\theta ) P (y,\theta ')\\ \theta =\theta '=\theta ^*&\Rightarrow [(x,\theta ) R_l (y,\theta ') \Leftrightarrow (x,\theta ) R (y,\theta ')]\\ \theta =\theta '\ne \theta ^*&\Rightarrow [(x,\theta ) R_k (y,\theta ') \Leftrightarrow (x,\theta ) R (y,\theta ')]. \end{aligned}$$

Step 2. By construction, \(f\) satisfies PP, IIA, and ND. \(R=f(\mathbf{R})\) is reflexive and complete. Thus, it suffices to show that \(R\) is transitive, i.e., \(\forall (x,\theta ),(y,\theta '),(z,\theta '') \in X \times \varTheta \), \((x,\theta ) R (y,\theta ') \wedge (y,\theta ') R (z,\theta '') \Rightarrow (x,\theta ) R (z,\theta '')\). To check transitivity, we consider four possibilities that are mutually exclusive and exhaustive.

(1) \(\theta J \theta '\) and \(\theta ' J \theta ''\). By transitivity of \(J\), we have \(\theta J \theta ''\). This implies \((x,\theta ) R (z,\theta '')\).

(2) \(\theta J \theta '\) and \(\theta ' = \theta ''\). Then \(\theta J \theta ''\), which implies \((x,\theta ) R (z,\theta '')\).

(3) \(\theta =\theta '\) and \(\theta ' J \theta ''\). Then \(\theta J \theta ''\), which implies \((x,\theta ) R (z,\theta '')\).

(4) \(\theta = \theta '\) and \(\theta ' =\theta ''\).

(4-a) \(\theta = \theta '= \theta ''=\theta ^*\). In this case, \((x,\theta ) R (y,\theta ') \wedge (y,\theta ') R (z,\theta '')\) implies that \((x,\theta ) R_l (y,\theta ') \wedge (y,\theta ') R_l (z,\theta '')\). By transitivity of \(R_l\), we have \((x,\theta ) R_l (z,\theta '')\).
By the construction of the social ordering function, \((x,\theta ) R (z,\theta '')\).

(4-b) \(\theta = \theta ' =\theta ''\ne \theta ^*\). In this case, \((x,\theta ) R (y,\theta ') \wedge (y,\theta ') R (z,\theta '')\) implies that \((x,\theta ) R_k (y,\theta ') \wedge (y,\theta ') R_k (z,\theta '')\). By transitivity of \(R_k\), we have \((x,\theta ) R_k (z,\theta '')\), and hence, by the construction of the social ordering function, \((x,\theta ) R (z,\theta '')\). Therefore, \(R\) is transitive.

'Only if'. We show that if there exists no non-consequentialist, then there exists no social ordering function \(f\) satisfying PP, IIA, and ND. Since every individual is a consequentialist, for all \(i \in N\), for all \((x,\theta ),(y,\theta ') \in X \times \varTheta \),
$$\begin{aligned} x \succ _i y&\Rightarrow (x,\theta ) P_i (y,\theta ') \\ x \sim _i y&\Rightarrow [(x,\theta ) P_i (y, \theta ') \Leftrightarrow \theta J \theta ' \text{ and } (x,\theta ) I_i (y,\theta ') \Leftrightarrow \theta = \theta ']. \end{aligned}$$

First we prove the following claim:

(Claim) There exists \(d \in N\) such that \((x,\theta ) P_d (y,\theta ') \Rightarrow (x,\theta ) P (y,\theta ')\) for all \((x,\theta ),(y,\theta ') \in X \times \varTheta \) with \(x \ne y\).

Consider a triple \((x,\theta ),(y,\theta '),(z,\theta '') \in X \times \varTheta \) such that \(x,y,z\) are all distinct. Note that no individual's extended preference is restricted over this triple. Since \(\# \{(x,\theta ),(y,\theta '),(z,\theta '')\}=3\), Arrow's impossibility theorem can be applied. Hence, there exists a local dictator \(d\) over \(\{(x,\theta ),(y,\theta '),(z,\theta '')\}\). Take \((a,\gamma )\) and \((b,\gamma ')\) such that \(a \ne b\) and \(a,b \in X \setminus \{x,y,z \}\). Now we show that \(d\) is a local dictator over \(\{(a,\gamma ),(b,\gamma ')\}\). Consider the triple \((a,\gamma ),(y,\theta '),(z,\theta '')\). We can apply Arrow's impossibility theorem to the triple, and thus, there exists a local dictator over \(\{(a,\gamma ),(y,\theta '),(z,\theta '')\}\). The local dictator over \(\{(a,\gamma ),(y,\theta '),(z,\theta '')\}\) must be individual \(d\). Similarly, \(d\) is a local dictator over \(\{(a,\gamma ),(b,\gamma '),(z,\theta '')\}\). Therefore, \(d\) is a local dictator over \(\{(a,\gamma ),(b,\gamma ')\}\). Thus, the claim is proved.

Next, we show that for all \((x,\theta ),(y,\theta ') \in X \times \varTheta \), if \(x=y\), then \((x,\theta ) P_d (y,\theta ') \Rightarrow (x,\theta ) P (y,\theta ')\). Take any \((x,\theta ),(y,\theta ') \in X \times \varTheta \) such that \(x=y\). Since \(R_d \in {\mathcal {D}}_J\), \((x,\theta ) P_d (y,\theta ')\) if and only if \(\theta J \theta '\). Since every individual is a consequentialist, \(x =y\) and \(\theta J \theta '\) imply that \((x,\theta ) P_i (y,\theta ')\) for all \(i \in N\). By PP, \((x,\theta ) P (y,\theta ')\). Therefore, \((x,\theta ) P_d (y,\theta ') \Rightarrow (x,\theta ) P (y,\theta ')\). Combining this with the first claim, \(d\) is the universal dictator over \(X \times \varTheta \). \(\square \)

Proof of Lemma 1 For each \(i \in N\), for all \(x,y \in X\) and all \(\theta , \theta ' \in \varTheta \),
$$\begin{aligned} \Big [ \theta J \theta ' \Rightarrow (x,\theta ) P_i (y,\theta ') \Big ] \text{ and } \Big [ \theta = \theta ' \Rightarrow [(x,\theta ) R_i (y,\theta ') \Leftrightarrow x \succsim _i y] \Big ]. \end{aligned}$$
Thus, \(\theta J \theta '\) implies \((x,\theta ) P_i (y,\theta ')\) for all \(i \in N\). Since \(f\) satisfies PP, we have \((x,\theta ) P (y,\theta ')\). Then, \(\theta J \theta ' \Rightarrow (x,\theta ) P (y,\theta ')\).
\(\square \)

Proof of Theorem 1 Suppose that a social ordering function \(f\) satisfies PP, IIA, and ND. Let \(\mathcal {W}\) and \(\mathcal {J}\) denote the set of consequentialists and the set of non-consequentialists, respectively. By assumption, \(\mathcal {W} \cup \mathcal {J} = N\) and \(\mathcal {J} \ne \emptyset \). Then, there exists \(\langle (\succsim _i)_{i \in N}, J\rangle \) such that for each \(i \in \mathcal {W}\), for all \(x,y \in X\) and all \(\theta ,\theta ' \in \varTheta \),
$$\begin{aligned}&x \succ _i y \Rightarrow (x,\theta ) P_i (y,\theta '); \\&x \sim _i y \Rightarrow [(x,\theta ) P_i (y,\theta ') \Leftrightarrow \theta J \theta ' \text{ and } (x,\theta ) I_i (y,\theta ') \Leftrightarrow \theta = \theta '], \end{aligned}$$
and for each \(i \in \mathcal {J}\), for all \(x,y \in X\) and all \(\theta , \theta ' \in \varTheta \),
$$\begin{aligned} \theta J \theta '&\Rightarrow (x,\theta ) P_i (y,\theta '); \\ \theta = \theta '&\Rightarrow [(x,\theta ) R_i (y,\theta ') \Leftrightarrow x \succsim _i y]. \end{aligned}$$
Lemma 1 implies that if \(\mathcal {W} = \emptyset \), then \(f\) is non-consequential. Thus, throughout the rest of the proof, we assume that \(\mathcal {W} \ne \emptyset \). First, we show that there exists a local dictator over \(\{(x, \theta ): x \in X \}\).

Step 1. For each \(\theta \in \varTheta \), there exists \(d(\theta ) \in N\) such that for all \(\mathbf{R} \in {\mathcal {D}}\), \((x,\theta ) P_{d(\theta )} (y,\theta ) \Rightarrow (x,\theta ) P (y,\theta )\) for all \(x,y \in X\).

Take any \(\theta ^* \in \varTheta \). Consider preference profiles restricted to \(\{(x, \theta ) \in X \times \varTheta :\theta = \theta ^* \}\). By our supposition on the preference domain, each individual preference ordering \(R_i\) corresponds to \(\succsim _i\) on \(\{(x, \theta ) \in X \times \varTheta :\theta = \theta ^* \}\). Since \(\succsim _i\) is not restricted and \(\# \{(x, \theta ) \in X \times \varTheta :\theta = \theta ^* \} \ge 3\), we can apply Arrow's impossibility theorem to this subset. Hence, there exists \(d(\theta ^*) \in N\) such that for all \(\mathbf{R} \in {\mathcal {D}}\), \((x,\theta ^*) P_{d(\theta ^*)} (y,\theta ^*) \Rightarrow (x,\theta ^*) P (y,\theta ^*)\) for all \(x,y \in X\). \(\Box \)

We call individual \(d(\theta )\) a local dictator over circumstance \(\theta \). Next, we show the following result.

Step 2. Suppose that there exist \(\mathbf{R} \in {\mathcal {D}}\) and \((x,\theta ),(y,\theta ') \in X \times \varTheta \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\) where \(R=f(\mathbf{R})\). Then, \(d(\theta )=d(\theta ')\).

By way of contradiction, suppose that there exist \(\mathbf{R} \in {\mathcal {D}}\) and \((x,\theta ),(y,\theta ') \in X \times \varTheta \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\) where \(R=f(\mathbf{R})\), and \(d(\theta ) \ne d(\theta ')\). Fix \(z \in X \setminus \{x,y\}\), and let \(\mathbf{R}' \in {\mathcal {D}}\) be such that
$$\begin{aligned} \mathbf{R}'|\{(x,\theta ),(y,\theta ')\}&=\mathbf{R}|\{(x,\theta ),(y,\theta ')\},\\ (x,\theta )&P'_{d(\theta )} (z,\theta ),\\ (z,\theta ')&P'_{d(\theta ')} (y,\theta '). \end{aligned}$$
Since \(d(\theta )\) and \(d(\theta ')\) are distinct, the above profile is admissible. Since \(\mathbf{R}'|\{(x,\theta ),(y,\theta ')\}=\mathbf{R}|\{(x,\theta ),(y,\theta ')\}\), IIA implies that \((y,\theta ') R' (x,\theta )\) where \(R'=f(\mathbf{R}')\).
Since \(d(\theta )\) is a local dictator over \(\theta \), \((x,\theta ) P'_{d(\theta )} (z,\theta )\) implies that \((x,\theta ) P' (z,\theta )\). Similarly, we have \((z,\theta ') P' (y,\theta ')\). Since \((y,\theta ') R' (x,\theta )\), \((x,\theta ) P' (z,\theta )\), and \((z,\theta ') P' (y,\theta ')\), we obtain
$$\begin{aligned} (z, \theta ') P' (z, \theta ). \end{aligned}$$
Since \(\mathbf{R}' \in {\mathcal {D}}_J\), \((z,\theta ) P'_i (z,\theta ')\) for all \(i \in N\). By PP, we have
$$\begin{aligned} (z,\theta ) P' (z,\theta '). \end{aligned}$$
This is a contradiction. Hence, \(d(\theta ) = d(\theta ')\). \(\square \)

Step 3. If there exist distinct \(\theta ,\theta ' \in \varTheta \) such that \(d(\theta ) \ne d(\theta ')\), then \(f(\mathbf{R})\) is J-priori for all \(\mathbf{R} \in {\mathcal {D}}\).

By way of contradiction, suppose that there exist \(\mathbf{R} \in {\mathcal {D}}\) and \((x,\theta _1),(y,\theta _2) \in X \times \varTheta \) such that \(\theta _1 J \theta _2\) and \((y,\theta _2) R (x,\theta _1)\) where \(R=f(\mathbf{R})\). From Step 2, \(d(\theta _1) = d(\theta _2)\). Since there exist distinct \(\theta ,\theta ' \in \varTheta \) such that \(d(\theta ) \ne d(\theta ')\), there exists \(\theta _3 \in \varTheta \) such that \(d(\theta _3) \ne d(\theta _1)= d(\theta _2)\). Fix \(z \in X \setminus \{x,y\}\), and let \(\mathbf{R}' \in {\mathcal {D}}\) be such that
$$\begin{aligned} \mathbf{R}'|\{(x,\theta _1),(y,\theta _2)\}&=\mathbf{R}|\{(x,\theta _1),(y,\theta _2)\},\\ (z,\theta _2) P'_{d(\theta _2)} (y,\theta _2)&\quad \text{ and } \quad (z,\theta _2) P'_{d(\theta _2)} (x,\theta _1). \end{aligned}$$
Then, IIA implies that \((y,\theta _2) R' (x,\theta _1)\). Since \(d(\theta _2)\) is a local dictator over \(\theta _2\), it follows that \((z,\theta _2) P' (y,\theta _2)\). Transitivity implies that \((z,\theta _2) P' (x,\theta _1)\). The ranking of \((z,\theta _2)\) and \((x, \theta _1)\) is not specified for individuals other than \(d(\theta _2)\), and thus, IIA implies that \((z,\theta _2) P_{d(\theta _2)} (x,\theta _1) \Rightarrow (z,\theta _2) P (x,\theta _1)\) for all \(\mathbf{R} \in {\mathcal {D}}\). Let \(\mathbf{R}'' \in {\mathcal {D}}\) be such that
$$\begin{aligned} (z,\theta _2)&P''_{d(\theta _2)} (x,\theta _1),\\ (x,\theta _3)&P''_{d(\theta _3)} (z,\theta _3),\\ \theta _1 J'' \theta _3&\quad \text{ and } \quad \theta _3 J'' \theta _2. \end{aligned}$$
It is admissible that \(\theta _1 J'' \theta _3\) and \(\theta _3 J'' \theta _2\). By the above argument, \((z,\theta _2) P''_{d(\theta _2)} (x,\theta _1) \Rightarrow (z,\theta _2) P'' (x,\theta _1)\). Since \(\mathbf{R}'' \in {\mathcal {D}}_J\), \((x,\theta _1) P''_i (x,\theta _3)\) for all \(i \in N\). Then, PP implies \((x,\theta _1) P'' (x,\theta _3)\). Similarly, we have \((z,\theta _3) P'' (z,\theta _2)\). On the other hand, \(d(\theta _3) \in N\) is a local dictator over \(\theta _3\), and thus, \((x,\theta _3) P''_{d(\theta _3)} (z,\theta _3) \Rightarrow (x,\theta _3) P'' (z,\theta _3)\). Then, we summarize the social preference as follows:
$$\begin{aligned} (z,\theta _2) P'' (x,\theta _1),\\ (x,\theta _1) P'' (x,\theta _3), \\ (x,\theta _3) P'' (z,\theta _3),\\ (z,\theta _3) P'' (z,\theta _2), \end{aligned}$$
where \(R''=f(\mathbf{R}'')\). This contradicts transitivity. Hence, \(f(\mathbf{R})\) is J-priori for every profile \(\mathbf{R}\). \(\square \)

Now, we consider the case where \(d = d(\theta )= d(\theta ')\) for all \(\theta ,\theta ' \in \varTheta \).
In other words, there exists \(d \in N\) such that for all \(\theta \in \varTheta \), \((x, \theta ) P_{d} (y, \theta ) \Rightarrow (x, \theta ) P (y, \theta )\) for all \(x, y \in X \). Either (i) \(d \in \mathcal {W}\) or (ii) \(d \in \mathcal {J}\). We first consider case (i).

Step 4. If there exists \(d \in \mathcal {W}\) such that for all \(\theta \in \varTheta \), \((x, \theta ) P_{d} (y, \theta ) \Rightarrow (x, \theta ) P (y, \theta )\) for all \(x, y \in X \), then \(f(\mathbf{R})\) is J-priori for all \(\mathbf{R} \in {\mathcal {D}}\).

Suppose, on the contrary, that there exist \(\mathbf{R} \in {\mathcal {D}}\) and \((x,\theta ),(y,\theta ') \in X \times \varTheta \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\) where \(R=f(\mathbf{R})\).

(a) First, we show that for all \(z \in X\), \((z,\theta ') P_d (x,\theta ) \Rightarrow (z,\theta ') P (x,\theta )\). Let \(\mathbf{R}' \in {\mathcal {D}}\) be such that
$$\begin{aligned} \mathbf{R}'|\{(x,\theta ),(y,\theta ')\}=\mathbf{R}|\{(x,\theta ),(y,\theta ')\},\\ (z,\theta ') P'_d (x,\theta )\quad \text{ and } \quad (z,\theta ') P'_d (y,\theta '). \end{aligned}$$
Since \(d\) is a consequentialist, \(\mathbf{R}'\) is possible. IIA implies that \((y,\theta ') R' (x,\theta )\). Since \(d\) is a local dictator over \(\theta '\), we have \((z,\theta ') P' (y,\theta ')\). Since \((z,\theta ') P' (y,\theta ')\) and \((y,\theta ') R' (x,\theta )\), we have \((z,\theta ') P' (x, \theta )\) by transitivity. The ranking of \((z,\theta ')\) and \((x, \theta )\) is not specified for individuals other than \(d\), and thus, IIA implies that \((z,\theta ') P_d (x,\theta ) \Rightarrow (z,\theta ') P (x,\theta )\) for all \(\mathbf{R} \in {\mathcal {D}}\).

(b) Next, we show that \((z,\theta ') P_d (w,\theta ) \Rightarrow (z,\theta ') P (w,\theta )\) for all \(z,w \in X\). Let \(\mathbf{R}'' \in {\mathcal {D}}\) be such that
$$\begin{aligned} (z,\theta ') P''_d (x,\theta ), \quad (x,\theta ) P''_d (w,\theta ), \quad \text{ and } \quad (z,\theta ') P''_d (w,\theta ). \end{aligned}$$
It follows from (a) that \((z,\theta ') P''_d (x,\theta ) \Rightarrow (z,\theta ') P'' (x,\theta )\). Since individual \(d\) is a local dictator over \(\theta \), we have \((x,\theta ) P'' (w,\theta )\). Since \((z,\theta ') P'' (x,\theta )\) and \((x,\theta ) P'' (w,\theta )\), \((z,\theta ') P'' (w,\theta )\). The ranking of \((z,\theta ')\) and \((w, \theta )\) is not specified for individuals other than \(d\), and thus, IIA implies that for all \(\mathbf{R} \in {\mathcal {D}}\), \((z,\theta ') P_d (w,\theta ) \Rightarrow (z,\theta ') P (w,\theta )\) for all \(z,w \in X\).

(c) Finally, we show that \(d\) is the universal dictator, i.e., \((a, \bar{\theta }) P_d (b, \hat{\theta }) \Rightarrow (a, \bar{\theta }) P (b, \hat{\theta })\) for all \((a, \bar{\theta }) ,(b, \hat{\theta }) \in X \times \varTheta \). Let \(\mathbf{R}''' \in {\mathcal {D}}\) be such that
$$\begin{aligned} (a, \bar{\theta })&P'''_d (b, \hat{\theta }), \\ \theta J''' \hat{\theta }&\text{ and } \bar{\theta } J''' \theta '. \end{aligned}$$
Since \(R'''_i \in {\mathcal {D}}_J\) for all \(i \in N\), \((a, \bar{\theta }) P'''_i (a,\theta ')\) and \((b, \theta ) P'''_i (b, \hat{\theta })\) for all \(i \in N\). By PP, \((a, \bar{\theta }) P''' (a,\theta ')\) and \((b, \theta ) P''' (b, \hat{\theta })\). Since \((x, \theta ') P_d (y,\theta ) \Rightarrow (x,\theta ') P (y,\theta )\) for all \(x,y \in X\) by (b), we have \((a,\theta ') P''' (b,\theta )\).
By transitivity, \((a, \bar{\theta }) P''' (b, \hat{\theta })\). The ranking of \((a, \bar{\theta })\) and \((b, \hat{\theta })\) is not specified for individuals other than \(d\), and thus, IIA implies that \(d \in \mathcal {W}\) is the universal dictator. This is a contradiction. \(\Box \)

Next, we consider case (ii).

Step 5. If there exists \(d \in \mathcal {J}\) such that for all \(\theta \in \varTheta \), \((x, \theta ) P_{d} (y, \theta ) \Rightarrow (x, \theta ) P (y, \theta )\) for all \(x, y \in X \), then there exists no social ordering function \(f\) satisfying PP, IIA, and ND.

If, for all \(\mathbf{R} \in {\mathcal {D}}\), \(\theta J \theta ' \Rightarrow (x,\theta ) P (y,\theta ')\) for all \((x,\theta ), (y,\theta ') \in X \times \varTheta \), then \(d \in \mathcal {J}\) is the universal dictator. This is inconsistent with ND. Hence, there exist \(\mathbf{R} \in {\mathcal {D}}\) and \((x,\theta ),(y,\theta ') \in X \times \varTheta \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\) where \(R=f(\mathbf{R})\). Let \(\mathbf{R}' \in {\mathcal {D}}\) be such that
$$\begin{aligned} \mathbf{R}'|\{(x,\theta ),(y,\theta ')\}&=\mathbf{R}|\{(x,\theta ),(y,\theta ')\},\\ (x,\theta ) P'_d (y,\theta )&\text{ and } (x,\theta ') P'_d (y,\theta '), \\&\theta J' \theta '. \end{aligned}$$
Since \(d\) is a non-consequentialist, \(\mathbf{R}'\) is possible. By IIA, \((y,\theta ') R' (x,\theta )\) where \(R'=f(\mathbf{R}')\). Moreover, since \(d\) is a local dictator over \(\theta '\), \((x,\theta ') P'_d (y,\theta ') \Rightarrow (x,\theta ') P' (y,\theta ')\). By transitivity, \((x,\theta ') P' (x,\theta )\). Since \((x,\theta ) P'_i (x,\theta ')\) for all \(i \in N\), PP implies that \((x,\theta ) P' (x,\theta ')\). This is a contradiction. \(\square \)

From Steps 1–5, if \(f\) satisfies PP, IIA, and ND, then \(f(\mathbf{R})\) is J-priori for all \(\mathbf{R} \in {\mathcal {D}}\). \(\blacksquare \)

Proof of Theorem 2 Step 1. For each \(\theta \in \varTheta \), there exists \(d(\theta ) \in N\) such that \((x,\theta ) P_{d(\theta )} (y,\theta ) \Rightarrow (x,\theta ) P (y,\theta )\) for all \(x,y \in X\).

Take any \(\theta ^* \in \varTheta \). Note that CIIA implies the following: for all \(\mathbf{R} = (R_1,R_2,\ldots ,R_n)\), \(\mathbf{R}' = (R'_1,R'_2,\ldots ,R'_n) \in {\mathcal {D}} \), and for all \((x,\theta ^*),(y,\theta ^*) \in X \times \varTheta \), if
$$\begin{aligned} \mathbf{R}|\{(x,\theta ^*),(y,\theta ^*)\}=\mathbf{R}'|\{(x,\theta ^*),(y,\theta ^*) \}, \end{aligned}$$
then \([ (x,\theta ^*) R (y,\theta ^*) \Leftrightarrow (x,\theta ^*) R' (y,\theta ^*) ]\) and \([ (y,\theta ^*) R (x,\theta ^*) \Leftrightarrow (y,\theta ^*) R' (x,\theta ^*) ]\). Then, CIIA can be applied to pairs over \(\{(x, \theta ) \in X \times \varTheta :\theta = \theta ^* \}\). In the same manner as in Step 1 of the proof of Theorem 1, we can apply Arrow's impossibility theorem to this subset. Hence, the claim is proved. \(\square \)

Step 2. Suppose that there exist \(\mathbf{R} \in {\mathcal {D}}_J\) and \((x,\theta ),(y,\theta ') \in X \times \varTheta \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\) where \(R=f(\mathbf{R})\). Then, \(d(\theta )=d(\theta ')\).

By way of contradiction, suppose that there exist \(\mathbf{R} \in {\mathcal {D}}_J\) and \((x,\theta ),(y,\theta ') \in X \times \varTheta \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\) where \(R=f(\mathbf{R})\), and \(d(\theta ) \ne d(\theta ')\). Fix \(z \notin \{ x,y \}\).
Let \(\mathbf{R}' \in {\mathcal {D}}_J\) be such that $$\begin{aligned} \mathbf{R}'|\varGamma&=\mathbf{R}|\varGamma ,\\ (x,\theta )&P'_{d(\theta )} (z,\theta ) ,\\ (z,\theta ')&P'_{d(\theta ')} (y,\theta '), \end{aligned}$$ where \(\varGamma =\{ x,y \} \times \{ \theta ,\theta ' \}\). Since \(d(\theta )\) and \(d(\theta ')\) are distinct, the above profile is admissible. CIIA implies that \((y,\theta ') R' (x ,\theta )\). In the same manner as in Step 2 of the proof of Theorem 1, we obtain a contradiction, and thus, \(d(\theta ) = d(\theta ')\). \(\Box \)

Now we prove Theorem 2. Suppose, on the contrary, that there exist \(\mathbf{R} \in {\mathcal {D}}_J\) and \((x,\theta _1),(y,\theta _2) \in X \times \varTheta \) such that \(\theta _1 J \theta _2\) and \((y,\theta _2) R (x,\theta _1)\) where \(R=f(\mathbf{R})\). From Step 1, there exists a local dictator \(d(\theta )\) over each circumstance \(\theta \in \varTheta \); from Step 2, \(d(\theta _1) = d(\theta _2)\). By NCD, there exists \(\theta _3 \in \varTheta \) such that \(d(\theta _3) \ne d(\theta _1)\). Fix \(z \notin \{ x,y \}\). Let \(\mathbf{R}' \in {\mathcal {D}}_J\) be such that $$\begin{aligned} \mathbf{R}'|\varGamma&=\mathbf{R}|\varGamma ,\\ (x,\theta _3)&P'_{d(\theta _3)} (z,\theta _3),\\ (z,\theta _2)&P'_{d(\theta _1)} (y,\theta _2),\\ \theta _1 J' \theta _3&\text{ and } \theta _3 J' \theta _2, \end{aligned}$$ where \(\varGamma =\{ x,y \} \times \{ \theta _1 ,\theta _2 \}\). It is admissible that \(\theta _1 J' \theta _3\) and \(\theta _3 J' \theta _2\). CIIA implies that \((y,\theta _2) R' (x, \theta _1)\). Since \(\mathbf{R}' \in {\mathcal {D}}_J\), \((x,\theta _1) P'_i (x,\theta _3)\) and \((z,\theta _3) P'_i (z,\theta _2)\) for all \(i \in N\). By CPP, \((x,\theta _1) P' (x,\theta _3)\) and \((z,\theta _3) P' (z,\theta _2)\). On the other hand, \(d(\theta _3) \in N\) is the local dictator over the circumstance \(\theta _3\), and thus, \((x,\theta _3) P'_{d(\theta _3)} (z,\theta _3) \Rightarrow (x,\theta _3) P' (z,\theta _3)\). Similarly, \((z,\theta _2) P'_{d(\theta _1)} (y,\theta _2) \Rightarrow (z,\theta _2) P' (y,\theta _2)\). Note that $$\begin{aligned} (y,\theta _2) R' (x, \theta _1) P' (x,\theta _3) P' (z,\theta _3)P' (z,\theta _2)P' (y,\theta _2). \end{aligned}$$ This contradicts transitivity. \(\blacksquare \)

Proof of Theorem 3.

Step 1. For each \(\theta \in \varTheta \), there exists \(d(\theta ) \in N\) such that \((x,\theta ) P_{d(\theta )} (y,\theta ) \Rightarrow (x,\theta ) P (y,\theta )\) for all \(x,y \in \{ a \in X: (a,\theta ) \in \varPsi \}\).

Take any \(\theta ^* \in \varTheta \). Since \(\varPsi \) is connected, \(\# \{ a \in X: (a,\theta ^*) \in \varPsi \}\ge 3\). Then, Arrow's impossibility theorem can be applied in the same manner as in Step 1 of the proof of Theorem 2. \(\square \)

Step 2. Suppose that there exist \(\mathbf{R} \in {\mathcal {D}}^{\varPsi }_J\) and \((x,\theta ),(y,\theta ') \in \varPsi \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\) where \(R=f(\mathbf{R})\). Then, \(d(\theta )=d({\theta '})\).

By way of contradiction, suppose that there exist \(\mathbf{R} \in {\mathcal {D}}^{\varPsi }_J\) and \((x,\theta ),(y,\theta ') \in \varPsi \) such that \(\theta J \theta '\) and \((y,\theta ') R (x,\theta )\), and \(d(\theta ) \ne d({\theta '})\). Since \(\varPsi \) is connected, there exists \(z \in X\) such that \((z,\theta ), (z,\theta ') \in \varPsi \).
Let \(\mathbf{R}' \in {\mathcal {D}}^{\varPsi }_J\) be such that $$\begin{aligned} \mathbf{R}'|(\varGamma \cap \varPsi )&=\mathbf{R}|(\varGamma \cap \varPsi ),\\ (x,\theta )&P'_{d(\theta )} (z,\theta ) ,\\ (z,\theta ')&P'_{d(\theta ')} (y,\theta '), \end{aligned}$$ where \(\varGamma = \{ x, y \} \times \{ \theta ,\theta ' \}\). In the same manner as in Step 2 of the proof of Theorem 2, we obtain a contradiction. \(\Box \)

We prove Theorem 3. Suppose, on the contrary, that there exist \(\mathbf{R} \in {\mathcal {D}}^{\varPsi }_J\) and \((x,\theta _1),(y,\theta _2) \in X \times \varTheta \) such that \(\theta _1 J \theta _2\) and \((y,\theta _2) R (x,\theta _1)\) where \(R=f(\mathbf{R})\). Steps 1 and 2 imply that there exists a local dictator over each circumstance \(\theta \in \varTheta \) and \(d({\theta _1}) = d(\theta _2)\). NCD implies that there exists \(\theta _3 \in \varTheta \) where \(d(\theta _3) \ne d(\theta _1)\).

Consider the case where \((x,\theta _3) \in \varPsi \). Since \(\varPsi \) is connected, there exists \(z \notin \{ x,y \}\) such that \((z,\theta _2),(z,\theta _3) \in \varPsi \). Let \(\mathbf{R}' \in {\mathcal {D}}^{\varPsi }_J\) be such that $$\begin{aligned}&\mathbf{R}'|(\varGamma \cap \varPsi )=\mathbf{R}|(\varGamma \cap \varPsi ),\\&(x,\theta _3) P'_{d(\theta _3)} (z,\theta _3),\\&(z,\theta _2) P'_{d(\theta _1)} (y,\theta _2),\\&\theta _1 J' \theta _3 \text{ and } \theta _3 J' \theta _2, \end{aligned}$$ where \(\varGamma = \{ x, y \} \times \{ \theta _1 ,\theta _2 \}\). CIIA implies that \((y,\theta _2) R' (x, \theta _1)\). Since \(\mathbf{R}' \in {\mathcal {D}}^{\varPsi }_J\), \((x,\theta _1) P'_i (x,\theta _3)\) and \((z,\theta _3) P'_i (z,\theta _2)\) for all \(i \in N\). By CPP, \((x,\theta _1) P' (x,\theta _3)\) and \((z,\theta _3) P' (z,\theta _2)\). Moreover, \((x,\theta _3) P'_{d( \theta _3 )} (z,\theta _3) \Rightarrow (x,\theta _3) P' (z,\theta _3)\) and \((z,\theta _2) P'_{d(\theta _1)} (y,\theta _2) \Rightarrow (z,\theta _2) P' (y,\theta _2)\). Note that $$\begin{aligned} (y,\theta _2) R' (x, \theta _1) P' (x,\theta _3) P' (z,\theta _3) P' (z,\theta _2) P' (y,\theta _2). \end{aligned}$$ This contradicts transitivity.

Consider the case where \((x,\theta _3) \notin \varPsi \). Since \(\varPsi \) is connected, there exist distinct \(z,w \notin \{ x,y \}\) such that \((z,\theta _1),(z,\theta _3) \in \varPsi \) and \((w,\theta _2),(w,\theta _3) \in \varPsi \). Let \(\mathbf{R}'' \in {\mathcal {D}}_J^{\varPsi }\) be such that $$\begin{aligned}&\mathbf{R}''|(\varGamma \cap \varPsi )=\mathbf{R}|(\varGamma \cap \varPsi ),\\&(z,\theta _3) P''_{d({\theta _3})} (w,\theta _3),\\&(x,\theta _1) P''_{d({\theta _1})} (z,\theta _1),\\&(w,\theta _2) P''_{d({\theta _1})} (y,\theta _2),\\&\theta _1 J'' \theta _3 \text{ and } \theta _3 J'' \theta _2, \end{aligned}$$ where \(\varGamma = \{ x, y \} \times \{ \theta _1 ,\theta _2 \}\). CIIA implies that \((y,\theta _2) R'' (x, \theta _1)\). CPP implies that \((z,\theta _1) P'' (z,\theta _3)\) and \((w,\theta _3) P'' (w,\theta _2)\). Moreover, \((z,\theta _3) P''_{d( \theta _3 )} (w,\theta _3) \Rightarrow (z,\theta _3) P'' (w,\theta _3)\), \((x,\theta _1) P''_{d({\theta _1})} (z,\theta _1) \Rightarrow (x,\theta _1) P'' (z,\theta _1)\), and \((w,\theta _2) P''_{d({\theta _1})} (y,\theta _2) \Rightarrow (w,\theta _2) P'' (y,\theta _2)\). Note that $$\begin{aligned} (y,\theta _2) R'' (x, \theta _1)P'' (z,\theta _1)P'' (z,\theta _3)P'' (w,\theta _3)P'' (w,\theta _2)P'' (y,\theta _2). \end{aligned}$$ This contradicts transitivity. \(\blacksquare \)

References

Arrow KJ (1951) Social choice and individual values. Wiley, New York
Black D (1948) On the rationale of group decision-making. J Polit Econ 56:23–34
Blackorby C, Bossert W, Donaldson D (2005) Multi-profile welfarism: a generalization. Soc Choice Welfare 24:253–267
Blau JH (1957) The existence of social welfare functions. Econometrica 25:302–313
Cato S (2011) Pareto principles, positive responsiveness, and majority decisions. Theory Decis 71:503–518
Cato S (2012) Social choice without the Pareto principle: a comprehensive analysis. Soc Choice Welfare 39:869–889
Cato S (2014) Independence of irrelevant alternatives revisited. Theory Decis 76:511–527. doi: 10.1007/s11238-013-9384-1
Dworkin R (1977) Taking rights seriously. Harvard University Press, Cambridge
Fishburn PC (1976) Dictators on blocks: generalizations of social choice impossibility theorems. J Comb Theory Ser B 20:153–170
Fleurbaey M, Suzumura K, Tadenuma K (2005a) Arrovian aggregation in economic environments: how much should we know about indifference surfaces? J Econ Theory 124:22–44
Fleurbaey M, Suzumura K, Tadenuma K (2005b) The informational basis of the theory of fair allocation. Soc Choice Welfare 24:311–341
Fleurbaey M, Tungodden B, Chang HF (2003) Any non-welfarist method of policy assessment violates the Pareto principle: a comment. J Polit Econ 111:1382–1385
Gaertner W (2002) Domain restriction. In: Arrow KJ, Sen AK, Suzumura K (eds) Handbook of social choice and welfare, vol 1. North-Holland, Amsterdam, pp 131–170
Gotoh R, Suzumura K, Yoshihara N (2005) Extended social ordering functions for rationalizing fair game forms in the sense of Rawls and Sen. Int J Econ Theory 1:21–41
Gravel N (1994) Can a ranking of opportunity sets attach an intrinsic importance to freedom of choice? Am Econ Rev 84:454–458
Hammond PJ (1976) Equity, Arrow's conditions, and Rawls' difference principles. Econometrica 44:793–804
Hansson B (1973) The independence condition in the theory of social choice. Theory Decis 4:25–49
Iwata Y (2009) Consequences, opportunities, and Arrovian impossibility theorems with consequentialist domains. Soc Choice Welfare 32:513–531
Kaplow L, Shavell S (2001) Any non-welfarist method of policy assessment violates the Pareto principle. J Polit Econ 109:281–286
Kaplow L, Shavell S (2002) Fairness versus welfare. Harvard University Press, Cambridge
Kaplow L, Shavell S (2004) Any non-welfarist method of policy assessment violates the Pareto principle: reply. J Polit Econ 112:249–251
Kelsey D (1987) The role of information in social welfare judgement. Oxf Econ Pap 39:301–317
Maskin ES (1995) Majority rule, social welfare functions, and game forms. In: Basu K, Pattanaik PK, Suzumura K (eds) Choice, welfare, and development: a festschrift in honour of Amartya K. Sen. Oxford University Press, Oxford, pp 100–109
Pattanaik PK, Suzumura K (1994) Rights, welfarism and social choice. Am Econ Rev Pap Proc 84:435–439
Pattanaik PK, Suzumura K (1996) Individual rights and social evaluation: a conceptual framework. Oxf Econ Pap 48:194–212
Parfit D (1984) Reasons and persons. Clarendon Press, Oxford
Pettit P (2000) Non-consequentialism and universalizability. Philos Q 50:175–190
Rawls J (1971) A theory of justice. Oxford University Press, Oxford
Sakai T, Shimoji M (2006) Dichotomous preferences and the possibility of Arrovian social choice. Soc Choice Welfare 26:435–445
Scheffler S (1982) The rejection of consequentialism. Oxford University Press, Oxford
Sen AK (1970) Collective choice and social welfare. Holden-Day, San Francisco
Sen AK (1977) On weights and measures: informational constraints in social welfare analysis. Econometrica 45:1539–1572
Sen AK (1979) Personal utilities and public judgements: or what's wrong with welfare economics. Econ J 89:537–558
Sen AK (1986) Information and invariance in normative choice. In: Heller WP, Starr RM, Starrett DA (eds) Social choice and public decision making. Cambridge University Press, Cambridge, pp 29–55
Suzumura K (1999) Consequence, opportunities, and procedures. Soc Choice Welfare 16:17–40
Suzumura K, Xu Y (2001) Characterizations of consequentialism and non-consequentialism. J Econ Theory 101:423–436
Suzumura K, Xu Y (2004) Welfarist-consequentialism, similarity of attitudes, and Arrow's general impossibility theorem. Soc Choice Welfare 22:237–251
Suzumura K, Yoshihara N (2008) On initial conferment of individual rights. Discussion Paper Series, Institute of Economic Research, Hitotsubashi University
Yoshihara N (2008) On non-welfarist social ordering functions. In: Pattanaik PK, Tadenuma K, Xu Y, Yoshihara N (eds) Rational choice and social welfare: theory and applications (in honor of Kotaro Suzumura). Springer-Verlag, Berlin, Heidelberg, pp 43–67

Institute of Social Science, The University of Tokyo, Tokyo, Japan
Cato, S. Rev Econ Design (2014) 18: 265. https://doi.org/10.1007/s10058-014-0164-3
Like logarithmic terms

Logarithmic terms that contain the same logarithmic coefficient are called like logarithmic terms.

When two or more logarithmic terms are compared, they often appear similar. This happens when the terms contain the same logarithmic coefficient, and because of this common coefficient such terms are called like logarithmic terms. Examine the following examples to identify the like logarithmic terms.

$(1) \,\,\,$ $6\log_{3}{7}$ and $-8\log_{3}{7}$

Express both terms as products of factors by the factorization (or factorisation) method.

$6 \times \log_{3}{7}$ and $-8 \times \log_{3}{7}$

$6$ and $-8$ are different numbers. $\log_{3}{7}$ is a logarithmic coefficient of $6$ and of $-8$ in both terms. Therefore, $6\log_{3}{7}$ and $-8\log_{3}{7}$ are similar in appearance and are known as like logarithmic terms.

$(2) \,\,\,$ $d\log_{a}{xy}$, $\Big(\dfrac{1}{c}\Big)\log_{a}{xy}$ and $0.6\log_{f}{x}\log_{a}{xy}$

Once again, factorize (or factorise) all three logarithmic terms to identify the common logarithmic coefficient.

$d \times \log_{a}{xy}$, $\Big(\dfrac{1}{c}\Big) \times \log_{a}{xy}$ and $0.6 \times \log_{f}{x} \times \log_{a}{xy}$

$\log_{a}{xy}$ is a logarithmic coefficient of $d$ in the first term, a logarithmic coefficient of $\dfrac{1}{c}$ in the second term, and also a logarithmic coefficient of $0.6\log_{f}{x}$ in the third term. In this case, the factor $\log_{f}{x}$ is a logarithmic coefficient of $0.6\log_{a}{xy}$, but it does not appear in the remaining two terms. Because $\log_{a}{xy}$ is common to all three terms, the three terms appear similar. Hence, $d\log_{a}{xy}$, $\Big(\dfrac{1}{c}\Big)\log_{a}{xy}$ and $0.6\log_{f}{x}\log_{a}{xy}$ are like logarithmic terms.
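Like logarithmic terms matter because they can be combined into a single term by factoring out the common logarithmic coefficient, just as with ordinary like terms in algebra. For example, using the terms from example $(1)$ above:

$6\log_{3}{7} + (-8\log_{3}{7}) \,=\, (6-8)\log_{3}{7} \,=\, -2\log_{3}{7}$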
Global existence for the Vlasov-Poisson system with steady spatial asymptotic behavior

Jack Schaeffer, Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, United States

Received August 2011; Revised August 2011; Published January 2012

A collisionless plasma is modeled by the Vlasov-Poisson system in three space dimensions. A fixed background of positive charge, which is independent of time and space, is assumed. The situation in which mobile negative ions balance the positive charge as $|x|\to\infty$ is considered. Hence, the total positive charge and the total negative charge are both infinite. It is shown, in three spatial dimensions, that smooth solutions may be continued as long as the velocity support remains finite. Also, in the case of spherical symmetry, a bound on velocity support is obtained and hence solutions exist globally in time.

Keywords: velocity support, Vlasov equation, collisionless plasma.

Mathematics Subject Classification: 35L60, 35Q99, 82C21, 82C22, 82D10.

Citation: Jack Schaeffer. Global existence for the Vlasov-Poisson system with steady spatial asymptotic behavior. Kinetic & Related Models, 2012, 5 (1) : 129-153. doi: 10.3934/krm.2012.5.129
Exercise 102 - Chapter 1

Fredrick Eisele, April 2018 (edited June 2018) in Exercises

Choose sets X and Y with between two and four elements each, and choose a function \( f: X \rightarrow Y \).
Choose two different subsets \( B_1 , B_2 \subseteq Y \) and find \( f^{-1}(B_1) \) and \( f^{-1}(B_2) \).
Choose two different subsets \( A_1 , A_2 \subseteq X \) and find \( f_!( A_1 ) \) and \( f_! (A_2) \).
With the same \( A_1 , A_2 \subseteq X \), find \( f_* (A_1) \) and \( f_* (A_2 ) \).

Example 1.101. Let \( f : X \rightarrow Y \) be a function between sets. We can imagine \( X \) as a set of balls, \( Y \) as a set of buckets, and \( f \) as putting each ball in a bucket. Then we have the monotone map \( f^*: \mathbb{P} Y \rightarrow \mathbb{P} X \) that category theorists call "pullback along f ". This map takes a subset \( B \subseteq Y \) to its preimage \( f^{-1} B \subseteq X \): that is, it takes a collection \( B \) of buckets, and tells you all the balls that lie in them. This operation is monotonic (more buckets means more balls) and it has both a left and a right adjoint. The left adjoint \( f_! (A) \) is given by the direct image: it maps a subset \( A \subseteq X \) to $$ f_{!} (A) := \{ y \in Y | \exists a \in A \text{ such that } f(a) = y \} $$ This map hence takes a set \( A \) of balls, and tells you all the buckets that contain at least one of these balls. The right adjoint \( f_* \) maps a subset \( A \subseteq X \) to $$ f_* (A) := \{ y \in Y | \forall a \text{ such that } f(a) = y \text{ we have } a \in A \} $$

Michael Hong

I tried 3 different examples and, instead of just choosing 2 subsets as the exercise asked for, I solved it for every subset since 2 and 3 element power sets aren't too big to draw out.

![2-3 Pullback ver1](http://aether.co.kr/wp-content/uploads/2-3-pullback-f1.jpg)
![2-3 Pullback ver2](http://aether.co.kr/wp-content/uploads/2-3-pullback-f2.jpg)
![3-2 Pullback](http://aether.co.kr/wp-content/uploads/3-2-pullback-f1.jpg)

I've been struggling with this all week and still can't seem to figure this one out. I think I am having a problem getting my head around the logical expressions in the definitions of \(f_!\) and \(f_*\).
So instead I just used the definitions of left and right adjoints and got the answers above. Not sure if this is correct but they preserve order and seem to obey all the definitions of an adjoint. But what doesn't make sense for me is the intuitive explanations of these functions given by Spivak and Fong.

> [Left Adjoint] hence takes a set A of balls, and tells you all the buckets that contain at least one of these balls.

> [Right Adjoint] takes a set A of balls, and tells you all the buckets that only contain balls from A.

Seems to me these intuitive explanations should be the other way around. The left adjoint seems to tell you the bucket that contains only the balls in A and the right adjoint seems to tell you the buckets that contain at least one of the balls in A. Tell you the truth they both sound like they are saying the same thing to me... What am I doing wrong here?

Valter Sorana

@MichaelHong: I am not sure whether this helps you, but I think the description of the right adjoint you cited is slightly incomplete (this is related to the confusion I had when we first defined \(f_{\ast}\) ): \(f_{\ast}(A)\) does not just give you "all the buckets that only contain balls from A", but also all the buckets that never get any ball from \(X\).

The description of the left adjoint seems right to me - and it definitely does not say the same thing as the one for the right adjoint:

1) it includes buckets (i.e., points of \(Y\) ) that receive balls from both \(A\) and \(X-A\) and which would thus be excluded from \(f_{\ast}(A)\) (this bites only if \(f\) is not injective);

2) conversely, according to my description, \(f_{\ast}(A)\) includes buckets outside the range of \(f\) which are instead excluded by \(f_{!}\) (this bites only if \(f\) is not surjective).

3) If \(f\) is both injective and surjective, then the two adjoints coincide with the inverse \(f^{-1}\) of \(f\).

Daniel Wang

@MichaelHong: What do you use to draw your diagrams? They're staggeringly beautiful.

John Baez

Valter wrote:

> \(f_*(A)\) does not just give you "all the buckets that only contain balls from A", but also all the buckets that never get any ball from \(X\).

This is a nice example of how ordinary language treats "all" in a different way than mathematics. Suppose I'm unmarried and I say "all my wives are millionaires". Is this true? A mathematician would say yes, it's true. It's "vacuously true".
In mathematics, "All my wives are millionaires" is equivalent to "I have no wife that is not a millionaire". Go through my wives and check this. Yes it's true, because there are no wives to check!

\(f_*(A)\) gives you the buckets all of whose balls are from \(A\). If a bucket has no balls in it, it's vacuously true that all the balls in this bucket are from \(A\).

John Baez

Michael Hong wrote:

> The left adjoint seems to tell you the buckets that contain only the balls in A and right adjoint seems to tell you the buckets that contain at least one of the balls in A. Tell you the truth they both sound like they are saying the same thing to me...

The left adjoint \(f_{!} (A) \) is the set of buckets such that _some_ ball in that bucket comes from \(A\). The right adjoint \(f_{\ast}(A) \) is the set of buckets such that _all_ balls in that bucket come from \(A\). These are quite different.

For example, suppose a bucket has two balls in it: one from \(A\) and one not from \(A\). Then this bucket is in \(f_{!}(A)\) but not in \(f_{\ast}(A)\). _Some_ ball in that bucket is from \(A\), but not _all_.

Or suppose a bucket has no balls in it. Then this bucket is in \(f_{\ast}(A)\) but not in \(f_{!}(A)\). _All_ balls in that bucket are from \(A\), but not _some_. If this seems surprising, read my previous comment. Since there are no balls in this bucket, it's vacuously true that all balls in this bucket come from \(A\).
If you give me a specific example where you seem to be getting a different answer, we can talk about it. Your pictures are actually pictures of many choices of \(A\) - too much to talk about.

Michael Hong

@Valter

> \(f_{\ast}(A)\) does not just give you "all the buckets that only contain balls from A", but also all the buckets that never get any ball from \(X\).

Yes that's what I thought after I drew the diagrams! The word "all" was getting to me because by my mortal logic, where all excludes the vacuous truth that John talks about in the comments below, this would lead to the left and right adjoints being the same map for all of the examples above, but it isn't.

Michael Hong

@Daniel Thanks! I use Illustrator CS6 for my diagrams.

Michael Hong

@John

> \(f_*(A)\) gives you the buckets all of whose balls are from \(A\). If a bucket has no balls in it, it's vacuously true that all the balls in this bucket are from \(A\).

Thanks! This pretty much cleared up all confusion I was having for a week. Basically it's like elementary school field day. Everyone (\(all\)) gets an award for participation!

> These are quite different. For example, suppose a bucket has two balls in it: one from \(A\) and one not from \(A\). Then this bucket is in \(f_{*}(A)\) but not in \(f_{!}(A)\). _Some_ ball in that bucket is from \(A\), but not _all_.

> Or suppose a bucket has no balls in it. Then this bucket is in \(f_{!}(A)\) but not in \(f_{\ast}(A)\). _All_ balls in that bucket are from \(A\), but not _some_. If this seems surprising, read my previous comment. Since there are no balls in this bucket, it's *vacuously* true that all balls in this bucket come from \(A\).

I think you switched \(f_!\) and \(f_*\) here?
By the way, Michael Hong, until quite recently a lot of logicians believed this axiom: $$ \forall x \, P(x) \implies \exists x \, P(x) $$ This says If every \(x\) has property \(P\) then some \(x\) has property \(P\). But we know realizes this axiom is bad because it fails in the cases where the set of \(x\)'s is empty! This is the tricky case that was confusing you. If I'm a bachelor, everyone who is my wife is a millionaire does not imply there exists a wife of mine who is a millionaire. So, you're not the only one who was confused by this. It turns out that dropping the above axiom, which allows the empty set of \(x\)'s to work just as well as any other set, is crucial to making the rules of logic work smoothly. Comment Source:By the way, Michael Hong, until quite recently a lot of logicians believed this axiom: \[ \forall x \, P(x) \implies \exists x \, P(x) \] This says <center> If every \\(x\\) has property \\(P\\) then some \\(x\\) has property \\(P\\). </center> But we know realizes this axiom is bad because it fails in the cases where the set of \\(x\\)'s is empty! This is the tricky case that was confusing you. If I'm a bachelor, _everyone who is my wife is a millionaire_ does not imply _there exists a wife of mine who is a millionaire_. So, you're not the only one who was confused by this. It turns out that dropping the above axiom, which allows the empty set of \\(x\\)'s to work just as well as any other set, is crucial to making the rules of logic work smoothly. Ahh that makes sense. Its kind of like the discovery of zero but for logic. The empty set from A gets counted as a set just like zero gets counted like a number. This is quite mind blowing because the consequences are freedom of imagination LOL. You can say anything about something that doesn't exist and it could be true but you can't say it exists just cause it could be true since it is just dreamt up. Comment Source:@John Ahh that makes sense. Its kind of like the discovery of zero but for logic. The empty set from A gets counted as a set just like zero gets counted like a number. This is quite mind blowing because the consequences are freedom of imagination LOL. You can say anything about something that doesn't exist and it could be true but you can't say it exists just cause it could be true since it is just dreamt up. Michael wrote: It's kind of like the discovery of zero but for logic. Exactly! The analogy goes quite deep. I don't want to go to far into it right now, but the fact that "for all \(x \in S\), \(P(x)\) is true" is automatically true when \(S\) is empty is analogous to the fact that the product of a collection of numbers is automatically equal to \(1\) when that collection is empty. You may not have thought much about what happens when you multiply an empty collection of numbers, but you know examples, like $$ 3^0 = 1 $$ which means that if you multiply no 3's at all, you get 1. Comment Source:Michael wrote: > It's kind of like the discovery of zero but for logic. Exactly! The analogy goes quite deep. I don't want to go to far into it right now, but the fact that "for all \\(x \in S\\), \\(P(x)\\) is true" is automatically true when \\(S\\) is empty is analogous to the fact that the product of a collection of numbers is automatically equal to \\(1\\) when that collection is empty. You may not have thought much about what happens when you multiply an empty collection of numbers, but you know examples, like $$ 3^0 = 1 $$ which means that if you multiply no 3's at all, you get 1. 
Michael Hong

@John

> the product of a collection of numbers is automatically equal to 1 when that collection is empty.

I think we can all meditate on this infinitely LOL.
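To make the some/all distinction concrete, the following minimal Python sketch enumerates \(f^{-1}\), \(f_!\), and \(f_*\) for one small choice of sets and function (the particular X, Y, and f below are illustrative choices, not the ones in the diagrams above):

```python
# Exercise 102 on a small example: X = balls, Y = buckets, f puts balls in buckets.
X = {1, 2, 3}
Y = {"a", "b"}
f = {1: "a", 2: "a", 3: "a"}   # bucket "b" is empty

def preimage(B):   # f^{-1}(B): the balls lying in the buckets B
    return {x for x in X if f[x] in B}

def direct(A):     # f_!(A): buckets containing at least one ball from A
    return {f[x] for x in A}

def right(A):      # f_*(A): buckets all of whose balls come from A (empty buckets qualify vacuously)
    return {y for y in Y if all(x in A for x in X if f[x] == y)}

A1 = {1}
print(direct(A1))        # {'a'}: bucket 'a' holds ball 1
print(right(A1))         # {'b'}: 'a' also holds balls outside A1, while 'b' is vacuously included
print(preimage({"a"}))   # {1, 2, 3}
```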
IZA Journal of Labor & Development

Partial minimum wage compliance

Haroon Bhorat, Ravi Kanbur & Benjamin Stanwix

IZA Journal of Labor & Development, volume 4, Article number: 18 (2015)

In many developing countries, a significant portion of the wage distribution is found below the legal minimum wage. In order to fully understand the nature of this non-compliance, we need to compare the counterfactual wage distribution without the minimum wage law to the current wage distribution. Such a comparison could reveal partial compliance, where employers raise wages some of the way to the minimum wage, to balance out the benefits of non-compliance with the costs and penalties to the extent that they depend on the gap between the legal minimum wage and the wage actually paid. This paper presents a simple model of such partial compliance and uses its predictions to structure an empirical investigation of the impact of introducing a minimum wage law for agricultural workers in South Africa. We find that partial compliance is indeed taking place and further, the lowest wages are being raised disproportionately, consistent with the predictions of the model.

JEL codes: J23, J25, J31, J32, J38, J43

According to basic theory, the introduction of a fully enforced minimum wage in a perfectly competitive labor market will increase the wages of the employed to the minimum wage, but reduce employment. As minimum wages have been introduced and increased in many countries around the world, there have been extensive empirical assessments of the economic and welfare consequences of such an intervention. Whilst the evidence remains predominantly concentrated in developed economies, empirical assessments for the developing world are growing in prevalence. The effects on employment remain intensely debated in large part due to the substantive heterogeneity in measured outcomes. Indeed, the impact of a minimum wage on employment remains the dominant question in the literature. But in many countries, and particularly in developing countries, a significant portion of the wage distribution is found below the legal minimum wage. Less focus and attention has been given to understanding changes in wage levels below the stipulated legal minimum, given that the legislation is rarely fully enforced. In order to understand the nature of this non-compliance, we need to compare the counterfactual wage distribution without the minimum wage law to the current wage distribution. Such a comparison could reveal partial compliance with the law. There are two senses in which a firm's compliance with a minimum wage regulation could be partial, relative to the situation with no regulation. First, the firm could raise the wage it pays its workers somewhat but not all the way to the minimum wage. Second, it could pay some of its workers the minimum wage, and others the subminimum wage. The choice between these strategies will depend on the enforcement regime: the probability of being caught, and the level and structure of penalties if caught. It will also depend on non-official, social costs of non-compliance. Our previous work on minimum wage regulation in South Africa has quantified the degree of non-compliance, taking into account not only the fact of violation but also the extent and depth of violation of the minimum wage law (Bhorat et al. 2012, 2013a).
We have introduced an index of minimum wage violation which assigns different weights to the depth of violation, and tried to relate this to enforcement resources (Bhorat et al. 2013b). Other work has also explored the employment consequences of the introduction of the minimum wage law in different sectors, including in agriculture where, for example, we have found evidence of both employment losses and rising wages (Bhorat et al. 2014). There is limited work internationally in trying to understand responses to the minimum wage below the minimum wage. The work of Neumark et al. (2000), though, does attempt to model how minimum wage changes impact the wages of workers throughout the wage distribution. A key insight of that paper is that the employment, wage, and hours-of-work responses of low-wage versus high-wage workers to minimum wage changes will differ. There is, however, no framework there that builds on below-minimum wage levels as a measure of non-compliance.

Our objective in this paper is to provide a model and empirical evidence of partial compliance as a response to the promulgation of a minimum wage. The model predicts partial adjustment of wages and furthermore predicts, under certain conditions, that the lowest wages will be adjusted upwards disproportionately. Using fourteen waves of the South African Labour Force Survey (LFS) which span the introduction of the minimum wage, we test for these predictions using an index of non-compliance or violation, and find that the predictions of the model are borne out in the data.

The plan of the paper is as follows. Section 2 sets out the theoretical model to aid intuition and to structure our empirical work. It shows how even in a competitive labor market, imperfect enforcement can lead to a distribution of wages between the minimum wage and the competitive wage. Section 3 introduces our dataset, briefly describes the institutional structure of minimum wage enforcement in South African agriculture, and sets out the empirical strategy. With this background, in Section 4 we present the empirical results on partial wage compliance in South African agriculture. Section 5 concludes the paper.

A simple model of partial compliance

The basic reference in the literature on minimum wage compliance is Ashenfelter and Smith (1979). This paper formulated the gains from non-compliance taking into account the probability of getting caught and the penalty if caught, and applied the theory to the US minimum wage. Their measure of non-compliance was based on whether a worker was earning below the sectoral minimum wage, and not on the depth of the violation. They argued that "government enforcement, while not inducing anything like complete compliance, does have an impact" (p 333). Ashenfelter and Smith's (1979) theoretical argument was modified, corrected and extended in a series of papers (Grenier, 1982; Chang and Ehrlich, 1985; Yaniv, 2001). Chang and Ehrlich (1985), for example, pointed out that neither Ashenfelter and Smith (1979) nor Grenier (1982) fully incorporated the firm's employment responses to the enforcement regime itself. They also introduced a formulation of the penalty of being caught as a multiple of the total underpayment of wages relative to the minimum wage. Ashenfelter and Smith (1979), Grenier (1982), and Chang and Ehrlich (1985) all assumed the firm to be risk neutral.
Yaniv (2001) extended the earlier analysis by introducing risk aversion, and also by introducing partial non-compliance, which he modelled as the firm deciding to pay some workers the minimum wage and others the competitive wage. Yaniv (2001) retains the Chang and Ehrlich (1985) formulation of the penalty, but introduced the dependence of the probability of inspection (and therefore the probability of getting caught) on the number of workers not receiving the minimum wage, and the penalty as increasing in the number of workers not being paid the minimum wage. The most striking result of Yaniv (2001) is that the total employment is independent of the enforcement parameters, depending only on the minimum wage as if it was fully enforced. Where enforcement makes an impact is on how many workers are paid the minimum wage. However, note Yaniv's (2001) assumption that all workers not paid the minimum wage are paid the competitive wage, whereas firms could in fact pay these workers somewhat higher wages so as to reduce the penalty if caught. We emphasize this dimension of adjustment in this paper. All of the above analyses are in the context of a competitive labor market. Basu et al. (2010) take up the strand of the literature which follows on from Stigler's (1946) analysis for a monopsonistic labor market. For this case they analyze the determinants of the minimum wage and enforcement intensity chosen by the government, to optimize an objective function taking into account both equity and efficiency. On the way to this they show that the non-complying monopsonist will choose a wage level between the low monopsony wage and the higher, but imperfectly enforced, minimum wage. In this sense there is also partial compliance which again depends on the enforcement variables. We will focus here on the competitive labor market case and present a simple model of partial compliance which combines elements from different parts of the literature, to motivate and to frame the empirical analysis to follow. The linear-quadratic formulation employed here is simple enough to give closed form solutions, but rich enough to ground intuitions and suggest empirical approaches.

Let output $y$ be given by $$ y= al-\left(\frac{1}{2}\right)b{l}^2 \qquad (1) $$ where $l$ is labor and $a$ and $b$ are parameters of the production function which differentiate firms from each other. If the wage is $w$ then profit is $$ \pi = al-\left(\frac{1}{2}\right)b{l}^2-wl \qquad (2) $$ For a competitive firm facing a wage $w^c$ the profit maximizing level of employment and associated profit are given respectively by: $$ {l}^c = \left(a-{w}^c\right)/b \qquad (3) $$ $$ {\pi}^c = {\left(a-{w}^c\right)}^2/2b \qquad (4) $$ Now introduce a minimum wage $m$. With full enforcement the employment level and profits are of course given by: $$ {l}^m = \left(a-m\right)/b \qquad (5) $$ $$ {\pi}^m = {\left(a-m\right)}^2/2b \qquad (6) $$ These will be useful as reference points for later comparison. Clearly, with $m > w^c$, employment and profits are lower with a fully enforced minimum wage. Thus there are incentives for non-compliance, the extent of which will depend on the probability of getting caught and the fine if so caught. As noted in the introduction, there are two dimensions of non-compliance: the subminimum wage that is paid, and the number of workers who are paid this low wage. In this model we focus on the first of these dimensions, and all workers will be paid the subminimum wage in the event of non-compliance. The firm is assumed to be risk neutral.
Denoting the probability of inspection as $p$ and the fine as $f$, the expected profit when the wage paid is $w$ is given by: $$ {\pi}^e = al-\left(\frac{1}{2}\right)b{l}^2-wl-pf \qquad (7) $$ What determines $p$ and $f$? The institutional framework in many countries, including South Africa as described in the next section, is that inspections are determined by a combination of complaints received and targeted inspections by the inspectorate. We assume that the probability of receiving a complaint is proportional to the underpayment $(m - w)$. Further, given the cost effectiveness of inspecting larger establishments, we assume that the probability of targeted inspections increases with the size of the establishment. Putting these considerations together, we specify $p$ as: $$ p=\lambda \left(m-w\right)l \qquad (8) $$ where $\lambda$ is a constant of proportionality ($> 0$). On the fine if caught, we follow most minimum wage enforcement regimes in making it a multiple of the total underpayment: $$ f=\gamma \left(m-w\right)l \qquad (9) $$ where $\gamma > 0$. Putting together the expressions for $p$ and $f$ we get expected profits as: $$ {\pi}^e = al-\left(\frac{1}{2}\right)b{l}^2-wl-\left(\frac{1}{2}\right)\lambda\ \gamma {\left(m-w\right)}^2{l}^2 \qquad (10) $$ where the multiple $\left(\frac{1}{2}\right)$ has been introduced in the last term to simplify the expressions that follow without affecting anything of substance.

We now model this risk neutral firm as maximizing profits by choosing two variables. First is the usual choice of employment $l$. The second, however, is the choice of wage $w$ to pay. Since the minimum wage $m$ has been set to be above the competitive wage $w^c$, so long as $w$ is also chosen to be above the competitive wage, the firm can get all the workers it wants at wage $w$. Of course, the choice of $l$ and $w$ has to take into account the fact that these choices will affect the probability of inspection and the fine if caught. Maximizing $\pi^e$ with respect to $l$ and $w$, the first order conditions are given by: $$ \frac{\partial {\pi}^e}{\partial l}=a-bl-w - \lambda\ \gamma {\left(m-w\right)}^2l=0 \qquad (11) $$ $$ \frac{\partial {\pi}^e}{\partial w}=-l + \lambda\ \gamma \left(m-w\right){l}^2=0 \qquad (12) $$ Solving these gives the following expressions for the optimum values of employment and wage: $$ {l}^e = \left(a-m\right)/b \qquad (13) $$ $$ {w}^e=m - \frac{b}{\lambda\ \gamma \left(a-m\right)} \qquad (14) $$ Note that (14) is constrained by the fact that we need $w^e > w^c$ and the parameter configuration has to satisfy this relationship. Thus employment is the same as with a fully enforced minimum wage as in Yaniv (2001), and is predicted to decline with the minimum wage. It is independent of the enforcement parameters $\lambda$ and $\gamma$. This is because the response to enforcement comes through the choice of wage paid, to which we now turn.

The expression for $w^e$ shows that there will be a spread of wages below the minimum wage. Write the wage gap $g$ as $$ g=m-{w}^e=\frac{b}{\lambda\ \gamma \left(a-m\right)} \qquad (15) $$ Prior to the introduction of the minimum wage, every worker is paid the competitive wage $w^c$ and employment is $l^c$. When the minimum wage $m$ is introduced it has two effects. First, it reduces employment from $l^c$ to $l^e$. Second, it increases wages from $w^c$ to $w^e$; the increase depending on the productivity parameters and on the enforcement parameters. Higher productivity firms (higher $a$ and lower $b$) will pay higher wages and will have a smaller gap relative to the minimum wage.
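As a concrete check on these closed forms, the following minimal Python sketch evaluates equations (13)-(15) for one made-up parameter configuration (the values below are purely illustrative, not calibrated to any data), and verifies that the chosen wage lies strictly between the competitive wage and the minimum wage:

```python
# Illustrative evaluation of the closed-form solutions (13)-(15).
a, b = 10.0, 1.0       # production parameters: y = a*l - 0.5*b*l**2
lam, gamma = 1.0, 0.5  # inspection (lambda) and fine (gamma) parameters
w_c, m = 3.0, 5.0      # competitive wage and minimum wage, with m > w_c

l_e = (a - m) / b                      # optimal employment, eq. (13)
w_e = m - b / (lam * gamma * (a - m))  # optimal wage paid, eq. (14)
g = m - w_e                            # wage gap, eq. (15)

# Partial compliance: the wage rises above w_c but stops short of m.
assert w_c < w_e < m
print(f"employment = {l_e}, wage paid = {w_e:.2f}, gap = {g:.2f}")
```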
A higher minimum wage will increase the wage gap for every productivity level, but greater intensity of enforcement, as measured by higher values of $\lambda$ or $\gamma$, will lower the wage gap. Particular attention is paid in this paper to the proportional shortfall of the actual wage paid relative to the minimum wage. Denote this as $$ v = \frac{m-{w}^e}{m} = \frac{b}{\lambda\ \gamma m\left(a-m\right)} \qquad (16) $$ It is easy to see that this proportional gap also falls with higher productivity and greater intensity of enforcement. However, $$ \frac{\partial v}{\partial m} =\frac{b\left(2m-a\right)}{\lambda\ \gamma {m}^2{\left(a-m\right)}^2} \qquad (17) $$ Thus the proportional gap rises or falls with the minimum wage according as $m$ is greater or lesser than one half of $a$. Further, $$ \frac{\partial^2 v}{\partial m\,\partial a} =\frac{b\left(a-3m\right)}{\lambda\ \gamma {m}^2{\left(a-m\right)}^3} \qquad (18) $$ whose sign depends on whether $m$ is greater or lesser than one third of $a$. Putting together (17) and (18) we can show that if $m$ is less than one third of $a$, then an increase in the minimum wage will reduce the proportionate gap by more, the larger was the gap to start with. However, for other parameter values the proportionate gap, although it will always fall, may fall less, the higher is the gap. The general point emerging from the discussion surrounding equations (17) and (18) is that even in this simplified model the behaviour of the proportional gap is complicated and ambiguous, and thus an empirical matter for further investigation.

Before concluding this theory section we note that although we have interpreted the costs of non-compliance for firms in terms of official enforcement and fines, other interpretations are also possible. For example, there might be peer pressures and social sanctions in terms of reputational effects for non-compliers. The magnitudes of these would naturally depend on the extent of non-compliance. Thus the costs of non-compliance specified in (10) as proportional to $(m-w)^2 l^2$ can be interpreted directly as social opprobrium, being in proportion to the square of the shortfall of the wage bill from the minimum wage level. We will not be able to distinguish empirically between these different forces. But they all predict partial compliance to different degrees.

Thus we have rationalized the phenomenon of partial adjustment to a minimum wage when there is imperfect enforcement. However, even in this quite simple setting we get a rich set of predictions. There is partial adjustment but its extent is an empirical question. The next section begins our empirical analysis of partial compliance upon the introduction of the minimum wage in South African agriculture.

Data and empirical strategy

Legally binding national minimum wages set by the State have a relatively short history in South Africa. While there is currently no single national minimum wage, wage setting does apply to workers in specific sectors and occupations. The first such minimum wage policy was introduced in 1999 in the Contract Cleaning industry. This was followed by legislation for Private Security workers (2001), Domestic workers (2002) and workers in Wholesale and Retail (2003). Minimum wages in this case form part of a Sectoral Determination, which in addition to wages provides legislation on working hours, employment contracts, and overtime.
Data and empirical strategy

Legally binding national minimum wages set by the State have a relatively short history in South Africa. While there is currently no single national minimum wage, wage setting does apply to workers in specific sectors and occupations.Footnote 2 The first such minimum wage policy was introduced in 1999 in the Contract Cleaning industry. This was followed by legislation for Private Security workers (2001), Domestic workers (2002) and workers in Wholesale and Retail (2003). Minimum wages in this case form part of a Sectoral Determination, which in addition to wages provides legislation on working hours, employment contracts, and overtime. Officially the Minister of Labour is responsible for introducing and updating these wage schedules, but the Minister's decisions rely on recommendations from a tripartite committee known as the Employment Conditions Commission (ECC). State institutions have also been introduced to enforce these new wage laws. The minimum wage law in the Agricultural sector was promulgated in December 2002 and came into effect on 1 March 2003, with separate wage levels for rural and urban areas. Wages were initially set at 800 Rands per month in urban areas (Area A) and 650 Rands per month in rural areas (Area B), and adjusted upward annually.Footnote 3 Figure 1 shows the annual adjustments over the period 2003–2007, which is the focus of this section. We have analysed the impact of this minimum wage on employment in Bhorat et al. (2014), finding significant reductions in employment in the short run and a significant rise in average wages. But what exactly happened to wages below the minimum? Was there full compliance, or partial compliance? And if compliance was partial, what was the nature of the adjustments that took place: were the lowest wages raised proportionately more, or less? Answering these questions is not straightforward because of the econometric issues involved in constructing appropriate counterfactuals, but we attempt to explore them here.

Fig. 1 Legislated minimum wage (in Rands/month), Areas A and B, 2003–2007. Source: Department of Labour, Agriculture Sectoral Determinations (2003–2007)

Labour regulations in South Africa are accompanied by inspections and penalties for violations. These inspections are carried out by labour inspectors employed by the Inspection and Enforcement Services (IES) unit within the Department of Labour (DoL). The penalties for non-compliance are shown in Table 1. The upper section of the table refers to violations not concerned with underpayment of wages, which attract specific monetary fines. The lower section refers to violations that do involve underpayment of wages, and it is clear that while repeat offences and greater levels of underpayment attract larger penalties, in general the values of the fines are low. Also, given the resources allocated to the inspectorate, and the relatively small number of inspectors, the inspection rate is not high. For example, in the Western Cape Province (the only province for which we have detailed inspection data) the simple probability of a farmer being visited by a labour inspector in 2007 was 11 percent. The inspectorate also tries to ensure compliance through a combination of individual farm inspections, advertising, advocacy sessions with workers, and training programmes (Western Cape Government, 2010). There is thus an explicit attempt to publicise the law and create social pressure for compliance, alongside a more conventional inspection and enforcement structure. This social pressure can come from a number of sources, including peer effects. The influence of the latter, though, is difficult to test with the data available to us, and at this stage it is not possible to separate out official from non-official determinants of compliance. We restrict our focus to the nature of compliance.
Table 1 Maximum permissible fines for violation (Schedule 2 of the BCEA, 1997)

The primary data for this paper are drawn from 14 waves of the South African Labour Force Survey (LFS), from February 2001 to September 2007, covering the period before and after the introduction of the minimum wage.Footnote 4 The LFS is a bi-annual, rotating panel survey conducted in February/March and September each year. Our chosen sample includes five waves before the minimum wage legislation became effective (March 2003), and nine afterwards. Given that the law became effective at around the same time as the first 2003 survey, we exclude the March 2003 wave from our econometric analysis. The remaining 13 waves are pooled and treated as repeated cross sections over time. The LFS covers approximately 30,000 households in each wave, including between 2,000 and 3,300 farmworkers per wave over the period. September 2003 is treated as the first wave in which the direct impacts of the law should begin to be evident. As noted earlier, two separate wage levels were prescribed for full-time farmworkers according to geographic location: a higher minimum wage (Rands 800) for those working within urbanised municipal areas classified as Area A, and a lower wage (Rands 650) for predominantly rural areas classified as Area B.Footnote 5 In order to evaluate which minimum wage applied to each individual, it was necessary to assign individuals to geographic areas.Footnote 6 This was done by matching geographic information available in the LFS to the areas A and B listed in the Sectoral Minimum Wage schedules. The sample includes both rural and urban workers, and both full-time and part-time workers, defined as individuals working at least 27 hours per week or fewer than 27 hours per week, respectively. We restrict the sample to include only those classified as employees. Monthly wages reported in brackets in the LFS are transformed into point estimates by random allocation to a uniform distribution within the bracket, to maintain variation.Footnote 7 This accounts for between five and ten percent of the sample in each wave, on average. All monthly wages are then combined and converted into hourly wages, which we use for the analysis. Employed individuals reporting zero or missing wages are excluded.

The key focus of our analysis is the impact of the introduction of the minimum wage on sub-minimum wages. For each agricultural worker in each wave of the survey we define $$ v_{\alpha}\left(m, w\right)=\begin{cases}\left[\left(m-w\right)/m\right]^{\alpha} & \text{if } w<m\\ 0 & \text{if } w\ge m\end{cases} $$ where m is the official minimum wage in a given year, and w is the wage of the individual worker. We refer to v_α as the "violation index" for a particular worker. When α = 0 this is simply an indicator variable which takes the value 1 if the worker is paid below the minimum wage and 0 otherwise. When α = 1, the violation index measures the proportional gap between farmworker wages and the minimum wage; this is of course the gap variable in equation (16). When α = 2, v_α becomes the squared gap and gives more weight to wages that fall further below the minimum. Our empirical focus will be on the effects of the law on v_1 and v_2 across workers. Given the violation index, or proportional gap, of each worker, a measure of the aggregate gap is simply the average of (19) over all workers.
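A direct transcription of the violation index in (19) is shown below. The minimum wage and the wage values are made up purely to illustrate the three values of α used in the analysis.

```python
# Violation index v_alpha from equation (19); the wage values below are
# hypothetical and only illustrate the alpha = 0, 1, 2 cases.
def violation_index(m: float, w: float, alpha: float) -> float:
    """Return [(m - w)/m]^alpha if w < m, else 0."""
    return ((m - w) / m) ** alpha if w < m else 0.0

m = 650.0  # e.g. the initial Area B monthly minimum, in Rands
for w in (400.0, 600.0, 700.0):
    print(w, [round(violation_index(m, w, a), 4) for a in (0, 1, 2)])
# 400 -> [1.0, 0.3846, 0.1479]; 600 -> [1.0, 0.0769, 0.0059]; 700 -> all 0
```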
Denoting the aggregate index of violation as V_α, if h(w) is the frequency density of w then $$ V_{\alpha}=\int v_{\alpha}\left(m, w\right)h(w)\,dw=\int_0^m \left[\left(m-w\right)/m\right]^{\alpha}h(w)\,dw $$ This index of aggregate minimum wage non-compliance, which is analogous to the FGT index of poverty (Foster et al. 1984), was introduced by Bhorat et al. (2012, 2013a). We will also present patterns and trends in the aggregate gap in the next section.

To estimate the effects of the law on compliance we use two specifications. We first employ a standard difference-in-differences model analogous to Card and Krueger (1994): $$ Y_{ikt} = \beta_0+\beta_1 POST_t+\beta_2 Farmworker_k+\beta_3 POST_t\times Farmworker_k + X_{ikt}+\varepsilon_{ikt} $$ where Y_ikt is the outcome of interest (v_1, v_2) for individual i, in group k, in period t. POST_t is the time dummy which captures before-and-after effects. Farmworker_k is the dummy for whether an individual is in the treatment or comparison group (k = 1, 2), equal to 1 if the individual is a farmworker and 0 if they are in the comparison group. POST_t × Farmworker_k is the difference-in-differences term, which measures the difference between the outcomes of the treatment group and those of the comparison group across the pre- and post-law periods. This tests whether the observed changes in violation were shared by similar workers to whom the law did not apply; for workers in the comparison group we calculate v_α using the agricultural minimum wage. Specifically, the difference-in-differences coefficient measures the difference between what happened to farmworkers in the post-law period and what happened to the comparison group. This will correctly identify the effects of the minimum wage provided there were no idiosyncratic shocks, other than the law, that affected only farmworkers in the post period.Footnote 8 X_ikt controls for various worker characteristics such as age, education, race, provincial agricultural GDP, and whether the individual has a written contract, and we run the regression with and without these controls.

In an attempt to provide a counterfactual for what would have happened to wages in the absence of the minimum wage law, we identify a comparison group with characteristics similar to farmworkers. This is one part of our difference-in-differences identification strategy, which compares changes in farmworker compliance outcomes to those of a comparison group of comparable workers. The comparison group is made up of employees in unskilled or 'elementary' occupations, based on the 4-digit SASCO occupation codes and ISIC industry codes, earning less than the Basic Conditions of Employment Act's (BCEA) income cut-off of R9,631 per month, aged between 15 and 65, who have completed no more than 12 years of schooling. In addition, union members, and those in sectors affected by another minimum wage, are excluded. For clarity, this group includes occupations such as street vendors, packers, manufacturing and transport labourers, and elementary machine operators. The agricultural minimum wage law does not apply to them. Changes in the comparison group's wages provide an indication of movements in the economy that coincided with the period when the agricultural minimum wage was introduced but were not the result of that policy change.
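The sample analogue of V_α is just the mean of v_α over workers, and specification (21) is a standard two-way interaction OLS. The sketch below shows both on synthetic data; the DataFrame, its column names, and the data-generating process are hypothetical stand-ins for the LFS microdata, not a reproduction of them.

```python
# Aggregate violation index and the difference-in-differences model (21),
# illustrated on synthetic data (the LFS microdata are not reproduced here).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def aggregate_violation(wages, m, alpha):
    """Sample analogue of V_alpha: the mean of v_alpha over workers."""
    wages = np.asarray(wages, dtype=float)
    v = np.zeros_like(wages)
    below = wages < m
    v[below] = ((m - wages[below]) / m) ** alpha
    return v.mean()

rng = np.random.default_rng(0)
wages = rng.uniform(300.0, 900.0, size=500)       # hypothetical hourly/monthly wages
print(aggregate_violation(wages, m=650.0, alpha=1))

# Synthetic worker-level data with a built-in interaction effect of -0.09
n = 4000
df = pd.DataFrame({
    "post": rng.integers(0, 2, n),
    "farmworker": rng.integers(0, 2, n),
    "age": rng.integers(15, 66, n),
})
df["v1"] = (0.30 + 0.10 * df["farmworker"] - 0.02 * df["post"]
            - 0.09 * df["post"] * df["farmworker"]
            + rng.normal(0, 0.05, n)).clip(lower=0)

model = smf.ols("v1 ~ post * farmworker + age", data=df).fit()
# The interaction coefficient should recover roughly -0.09
print(model.params[["post", "farmworker", "post:farmworker"]])
```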
Secondly, we specify another difference-in-differences model which tests whether the violation gap fell by more in districts where farmworker wages were lower in the pre-law periodFootnote 9: $$ Y_{ijt} = \theta_0 + \theta_1 POST_t + \theta_2 WG_j + \theta_3 POST_t\times WG_j + X_{ijt}+v_{ijt} $$ where Y_ijt is the outcome of interest (v_1, v_2) for individual i, in district j, in period t; POST_t is the time dummy; and X_ijt controls for various worker characteristics. The wage gap (WG_j) is a constructed variable which captures cross-sectional variation between District Councils in the pre-law period. The wage gap is represented by: $$ WG_j= \log \left[\text{minimum}\left(w_j^{*}\right)\right]- \log \left[\text{median}\left(w_j^{\prime}\right)\right] $$ where \( w_j^{*} \) is the initial minimum wage in district j and \( w_j^{\prime} \) is the median agricultural worker wage in district j in the year before the law was introduced; \( w_j^{\prime} \) is calculated using real wages in 2002. Areas with a larger gap in the pre-law period would be expected to experience greater increases in wages in the post-law period if the law was binding.Footnote 10

In equation (21), β_1 indicates the change in the post-law period for both groups, β_2 gives the average difference between farmworkers and the comparison group over the full period, and β_3 shows the change for farmworkers in the post-law period relative to the comparison group. In equation (22), the parameter θ_2 represents the average difference in outcomes for workers in low wage-gap versus high wage-gap areas across the entire period. θ_3 is the difference-in-differences parameter, and tells us how much more outcomes changed in the post-law period in areas where the wage gap was largest. Lastly, θ_1 is also of interest, as it tells us how the variable of interest changed on average after the law in a particular district, conditional on the wage gap being zero in that district. As in all such natural experiments, we must assume that in the absence of the law agricultural wages would have followed the same general trend across districts, as well as for both groups of workers.
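Before turning to the results, here is a sketch of how the district wage gap WG_j defined above might be constructed from pre-law microdata. The DataFrame and its column names are hypothetical, and real wages for 2002 are assumed to be already deflated.

```python
# Constructing the district wage gap WG_j from hypothetical pre-law (2002)
# microdata; district labels, wages and column names are illustrative only.
import numpy as np
import pandas as pd

pre = pd.DataFrame({
    "district":  ["DC1", "DC1", "DC2", "DC2", "DC2"],
    "real_wage": [500.0, 620.0, 380.0, 450.0, 410.0],  # 2002 Rands, deflated
    "min_wage":  [800.0, 800.0, 650.0, 650.0, 650.0],  # Area A vs Area B level
})

wg = (np.log(pre.groupby("district")["min_wage"].min())
      - np.log(pre.groupby("district")["real_wage"].median()))
print(wg)  # larger values = minimum set further above prevailing wages
```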
Tables 2 and 3 present the results for the aggregate index of violation in equation (20) above, for the sample of farmworkers in Areas A and B respectively. The first key result here is that V_0, the proportion of farmworkers earning less than the minimum wage specified for workers in each area, declines over the period 2001–2007. This decline is relatively slow in both areas (11 percent in Area A and 18 percent in Area B) but is greater for farmworkers in Area B, who comprise 70–80 percent of all farmworkers over the period. Taken together, however, the results show that the proportion of workers whose wages increase to a level at or above the minimum wage is relatively small. Put differently, the estimates indicate substantial non-compliance with the law: 54 percent of farmworkers in Area A and 67 percent of workers in Area B still earn below the legislated minimum. Importantly, this measure can tell us nothing about wage movements below the minimum wage, which is in fact where the most significant changes have taken place. V_1, measuring the proportional gap between an individual's wage and the minimum wage, averaged over all individuals, is far more instructive in this regard.

Table 2 reveals a decrease in the depth of aggregate violation: V_1 declines by 18 percent over the whole period and, in particular, falls by 30 percent in the year directly after the introduction of the law. For Area B, Table 3 shows much larger decreases in the depth of violation: overall V_1 falls by 45 percent, with a 24 percent decrease between 2002 and September 2003.

Table 2 Aggregate index of violation, Area A
Table 3 Aggregate index of violation, Area B

The estimates for V_2, which weights observations further below the minimum wage more heavily, show even sharper declines. Over the entire period V_2 falls by 22 percent among workers in Area A and by 54 percent among workers in Area B. In the year directly after the law is introduced this measure declines by over 30 percent in both areas. This suggests that workers further below the minimum in the pre-law period (or simply those with the lowest wages) were the greatest gainers in the post-law period. From equation (20), the ratio V_1/V_0 is simply the percentage shortfall of wages for farmworkers earning below the minimum; put differently, violated workers in this sample earn on average a fraction V_1/V_0 below the minimum wage. The data show that in 2002, workers who earned wages below the imminent minimum wage reported wages that were on average 34 percent and 48 percent below the legislated minimum in Areas A and B, respectively. In 2003, after the introduction of the law, these percentages fell to 26 percent and 39 percent, both quite substantial decreases. The ratio begins to rise again among workers in Area A, but for the majority of workers (Area B) the overall decrease in V_1/V_0 is 32 percent.

To illustrate changes in the depth of violation over time and across the wage distribution more clearly, Figs. 2 and 3 plot kernel density functions of v_1 across individuals for the treatment and comparison groups respectively. Equivalent plots for v_2 are shown in Figs. 4 and 5. The figures for farmworkers (Figs. 2 and 4) show that the depth of violation and the squared depth of violation decrease significantly after 2002, suggesting that once the law came into effect the depth of violation decreased, even if many farmworkers still earned sub-minimum wages. Crucially, the v_2 density results also reinforce the point that even when we place greater weight on workers further below the minimum wage, employers of these workers were just as likely to partially comply with the law.

Fig. 2 Violation gap (v_1) density function (2001–2005), farmworkers. Note: The figure is a kernel density plot of v_1 for all farmworkers (Areas A and B), calculated using the minimum wage for each year. Kolmogorov-Smirnov tests for equality of distributions are rejected at the 5% level for each pairwise comparison of waves in the before and after periods.
Fig. 3 Violation gap (v_1) density function (2001–2005), comparison group. Note: The figure is a kernel density plot of v_1 for all comparison group workers, calculated using the agricultural minimum wage for each year. Kolmogorov-Smirnov tests for equality of distributions are not rejected at the 5% level for each pairwise comparison of waves in the before and after periods.
Fig. 4 Violation gap squared (v_2) density function (2001–2005), farmworkers. Note: The figure is a kernel density plot of v_2 for all farmworkers (Areas A and B), calculated using the minimum wage for each year. Kolmogorov-Smirnov tests for equality of distributions are rejected at the 5% level for each pairwise comparison of waves in the before and after periods.
Fig. 5 Violation gap squared (v_2) density function (2001–2005), comparison group. Note: The figure is a kernel density plot of v_2 for all comparison group workers, calculated using the agricultural minimum wage for each year.

The evidence presented here, through the empirical lens of the violation index, and consistent with our theoretical predictions, suggests that employers may respond at the margin to the institution of a minimum wage law. In particular, the results show that despite there being no 'spike' at the minimum wage level in agriculture, employers have responded by on average increasing the wages paid relative to the legislated minimum. In other words, there is partial compliance with the minimum wage law. In addition, for those workers further from the minimum wage, the post-law period yields larger marginal adjustment toward the minimum, as predicted for certain parameter values by equation (18).
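The pairwise distribution tests reported in the figure notes are standard two-sample Kolmogorov-Smirnov tests. A minimal sketch follows, with synthetic v_1 draws standing in for two survey waves; the Beta distributions are arbitrary assumptions used only to generate data.

```python
# Two-sample Kolmogorov-Smirnov test of the kind cited in the figure notes;
# the v1 samples here are synthetic stand-ins for two LFS waves.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
v1_pre = rng.beta(2.0, 3.0, size=1500)   # hypothetical pre-law v1 draws
v1_post = rng.beta(2.0, 4.5, size=1500)  # hypothetical post-law v1 draws

stat, p = ks_2samp(v1_pre, v1_post)
print(f"KS statistic = {stat:.3f}, p-value = {p:.4g}")
# Equality of distributions is rejected at the 5% level when p < 0.05.
```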
The regression results from equation (21) are presented in Table 4. The model is estimated separately for three dependent variables: the simple measure of violation (v_0), the depth of violation (v_1), and the violation gap squared (v_2). Each specification is run with and without controls. Columns 2 and 4 include controls for race, age, education, agricultural GDP, union status, and possession of a written employment contract.

Table 4 Partial compliance, depth of violation – treatment vs comparison group

The coefficient on the farmworker variable for v_0 shows that on average farmworkers earn lower wages than workers in the comparison group: the proportion of farmworkers who earn below the agricultural minimum wage is higher by 40 percentage points in specification 2. Of primary interest is the difference-in-differences estimator (Farmworker*POST). Here the results suggest that, relative to workers in the comparison group, the level of violation falls by approximately 2.5 percent in the post-law period when we control for individual characteristics. This provides some support for the descriptive statistics, which suggest small declines in v_0 after the law was introduced. The estimates on v_1 initially suggest that the depth of violation among farmworkers is low relative to the comparison group, but this effect disappears when we add controls: after taking account of age, education, race and so on, v_1 is larger for farmworkers over the period. Put differently, farmworkers earn wages that are further below the agricultural minimum wage than their counterparts in the comparison group, although the difference is not large. The POST_t coefficient suggests that when the two groups are taken together no large changes in levels of violation are observed, but again this difference is small. The difference-in-differences estimator, however, is both negative and significant for v_1 and v_2, and remains stable when controls are added. This reveals that, relative to the comparison group, farmworkers experienced a notable decrease in the depth of violation in the post-law period: when we control for individual characteristics the average depth of violation falls by almost nine percent, while the squared gap falls by six percent. The results for v_2 provide some evidence that workers with wages further away from the minimum saw wages rise as a result of the law, although they continued to earn sub-minimum wages.

These results underscore the trends evident in the kernel density distributions, which show a move in wages toward, but not up to, the minimum wage as a result of the law. The fact that this result holds even when placing a higher weight on workers further from the minimum wage suggests that partial compliance with the law may be invariant to how low pre-law wages are. Table 5 presents the regression results of equation (22), estimated only for the sample of farmworkers. Here we test whether compliance with the agricultural minimum wage improves by more in districts where the average wage gap was larger in the pre-law period. We do this for v_1 and v_2. The coefficient on the wage gap is positive and significant across all specifications, suggesting that in areas where the district wage gap is larger, individual levels of violation are higher for the entire period; this is robust to the addition of controls. As in the previous regressions, the POST coefficient suggests that levels of violation for farmworkers fell in the post-law period. Of primary interest, the difference-in-differences estimator shows that levels of violation fell by more in areas where the initial district wage gap was larger. This is robust to the addition of individual-level controls, for both v_1 and v_2, and suggests changes of 6.8–7.8 percent for v_1 and 7.1–7.9 percent for v_2. In other words, we observe greater partial compliance in areas where, prior to the law, workers were receiving lower wages. The results also reinforce that workers with larger wage gaps moved closer to the minimum wage, on average.

Table 5 Partial compliance, depth of violation – wage gap

Using these regression techniques to examine changes in the depth of violation illustrates how sub-minimum wages in the agricultural sector have responded to the introduction of the minimum wage law, and supports the descriptive picture presented above. The evidence suggests that while levels of violation remained high, many employers do not simply make a discrete decision either to comply with or to violate the law. Instead, it would appear that employers choose by how much to comply with the law; this is the crucial theoretical and empirical observation of partial compliance. In addition to the kernel density functions, then, the regressions reveal a wide spread of sub-minimum wages that increase partially, but not all the way, toward the minimum wage in response to the law. It is this varied distribution of partial compliance levels, measured and modelled through v_1 and v_2 here, which is the important value-added of this paper in terms of understanding the typology of employer responses to the institution of minimum wage regulations.

When a minimum wage law is introduced, what happens to wages that were below the legal minimum? If there is perfect enforcement, we should not observe any workers earning below the minimum wage. With imperfect enforcement there will be wages below the minimum wage, but how does the sub-minimum wage distribution after the law compare to that distribution before the law was passed? This paper attempts to answer these questions using the fact that, for South Africa, wage distribution data are available for a period before and after the promulgation of a minimum wage law in agriculture. The paper begins by developing a simple theoretical model in which the obvious benefits of non-compliance for an employer are set against the costs of non-compliance.
We model these costs as composed of the probability of getting caught and the fine imposed if caught. However, alternative interpretations of the costs of non-compliance are possible, including peer effects. We model the costs of non-compliance as a function of the total shortfall of the wage bill compared to full compliance, allowing for the possibility that employers adjust partially towards the minimum wage. The model predicts that employment will fall as a result of a minimum wage, but that a spread of wages below the minimum wage should be observed in response to the law. Employer responses to a minimum wage law depend on the level at which the minimum wage is set, relative to the existing wage. Wages will rise towards the minimum wage and, under certain conditions, the lowest wages will rise proportionately more than those close to the stipulated minimum.

We then turn to an empirical application based on the agricultural sector in South Africa, focusing on wage effects. We utilise an index of minimum wage violation to estimate whether relative compliance levels, measured here by a closing of the average gap between actual and minimum wages, have changed as a result of the promulgation of the minima. We find evidence of partial compliance, where employers adjust wages upwards as a result of the law, but not all the way to the minimum wage. This supports work by Dinkelman and Ranchhod (2012) and Dinkelman et al. (2014), who observe a partial response to the minimum wage law in the Domestic Worker sector in South Africa. However, unlike Dinkelman and Ranchhod (2012), we find that when the minimum wage is set at a higher level relative to existing mean wages in a district, there is evidence of greater partial compliance, which is predicted by the theoretical model for some parameter values.

The data indicate that while the fraction of workers paid below the minimum wage decreases over time in response to the law, levels of non-compliance remain high: more than half of farmworkers report earning wages below the prescribed minimum in 2007. The relatively low levels of compliance imply that official enforcement efforts, and non-official pressures, are not sufficient. It may also be plausible that the government initially accepted low levels of compliance after the introduction of the law and thus did not commit substantial resources to enforcement.Footnote 11 Nonetheless, given that the state continually engages in numerous enforcement activities to try to ensure compliance, it is plausible that these formal enforcement efforts have had some observable, if partial, influence on compliance, resulting in a low-level partial compliance equilibrium.

Based on these results there is scope for more research. Our results suggest strongly that we need to think of responses to the minimum wage in a continuous rather than a discrete manner. In particular, our estimates of the depth and severity of violation suggest that whilst employers do respond positively to the introduction of a minimum wage, this is often not in complete adherence with the law. Moreover, it appears that in areas where wages were lower in the pre-law period, and for workers with wages further below the minimum wage, we observe greater levels of partial compliance.
The argument for stricter enforcement is further strengthened by the result of our model, and the more general analysis of Yaniv (2001), that the employment effect depends only on the minimum wage and not on the intensity of enforcement if employers are adjusting optimally, albeit partially, to the law. This notion of partial compliance, with responses at the margin to the law, deserves further and closer attention in the minimum wage literature.

There are in principle three choice variables the employer has: (i) the number of workers paid the minimum wage; (ii) the number of workers paid the sub-minimum wage; (iii) the level of the sub-minimum wage. The previous literature fixed (iii) at the competitive wage and fixed (i) at zero, leaving only the choice of (ii). Yaniv (2001) also fixed (iii) at the competitive wage but allowed choice of (i) and (ii). Since our focus in this paper is on understanding the behaviour of the wage distribution below the minimum wage, our theory focuses on the choice of (iii) but shuts down the distinction between (i) and (ii): all workers are paid the sub-minimum wage. Our specific functional forms give closed-form solutions which help in developing comparative statics and thus in structuring the empirical analysis which is the core of the paper. In fact, given the linear-quadratic structure of our model, and the way the probability of detection and the fines if detected are specified, it can be shown that when choice of (i), (ii) and (iii) is allowed, the optimal choices of (ii) and (iii) lie along a relationship, and any choice along this relationship is optimal. The particular point along the relationship presented in the paper is the one where all workers are paid the optimally chosen sub-minimum wage. This extension of the model is available from the authors but would needlessly complicate the exposition here.

This system runs in tandem with collective bargaining at the industry level (see Theron et al. 2007). An hourly minimum wage is also set, which is equivalent to the monthly rate. The next three paragraphs are based on Bhorat et al. (2014), which uses the same dataset. This demarcation was based on the average household income recorded for the municipal area concerned in the 1996 census, where Area A covers areas with average income greater than Rands 24,000 per annum and Area B covers areas with average income between Rands 12,000 and Rands 24,000 per annum; areas with average income below Rands 12,000 are also included in Area B. Since 2009 the demarcation between Area A and Area B has been removed, and Area A schedules now apply nationally. We use geographical information on Magisterial Districts and District Councils in the LFS to demarcate Areas A and B. We are, however, unable to differentiate between place of residence and place of work for individuals, though we would argue that this is not as problematic for the agricultural sector as it might be for some other sectors. Although there are no reliable national figures, we assume that the majority of full-time farmworkers still live on the farm, and there is some evidence to suggest this is so: during the South African Human Rights Commission's 2007 hearings on farmworkers and farm dwellers, Agriculture South Africa stated that, in addition to workers, approximately 4 million people lived on farms but were not employed there. This suggests that the number of farmworkers living on farms is still significant if we assume the average household size for South Africa applies to these households.
Moreover, the Department of Land Affairs held that the majority of full-time farmworkers still live on farms (Human Rights Watch, 2011). A new seed is set in Stata for each bracket calculation. In order to control for changes specific to the agricultural sector that may bias our results, we tested using agricultural GDP as well as Net Agricultural Income as independent variables in our regressions; neither variable had any significant impact on our results. This approach follows Lee (1999) and Dinkelman and Ranchhod (2012). In order for us to identify the effect of the minimum wage law we must assume that, in the absence of the law change, low wage-gap districts would be on the same trend in outcomes as high wage-gap districts (as in Dinkelman & Ranchhod, 2012). We must also assume that changes in demand for labour were uniform. A review of the recent agricultural economics literature for South Africa gives us no reason to believe that there were price or non-price changes which may have caused labour demand to differ by geography in the post-law period. Interviews in the Western Cape, and a 2012 survey of Labour Inspectors conducted by the Development Policy Research Unit (DPRU) at the University of Cape Town, reveal a severe lack of resources in the IES. This is supported by an ILO (2010) report on the inspectorate in South Africa and by administrative data from the Department of Labour. In particular, there are too few inspectors, inspectors are poorly trained and under-remunerated for their quasi-legal roles, and Labour Centres are under-equipped, specifically lacking vehicles and computers, according to inspectors and IES officials.

Ashenfelter O, Smith R (1979) Compliance with the minimum wage law. J Polit Econ 87(2):333–350
Basu A, Chau N, Kanbur R (2010) Turning a blind eye: costly enforcement, credible commitment, and minimum wage laws. Econ J 120(543):244–269
Bhorat H, Kanbur R, Mayet N (2012) Minimum wage violation in South Africa. Int Labour Rev 151(3):277–287
Bhorat H, Kanbur R, Mayet N (2013a) A note on measuring the depth of minimum wage violation. Labour 27(2):192–197
Bhorat H, Kanbur R, Mayet N (2013b) Estimating the causal effect of enforcement on minimum wage compliance: the case of South Africa. Rev Dev Econ 16(4):608–623
Bhorat H, Kanbur R, Stanwix B (2014) Estimating the impact of minimum wages on employment, wages and non-wage benefits: the case of agriculture in South Africa. Am J Agric Econ 96(5):1402–1419
Card D, Krueger AB (1994) Minimum wages and employment: a case study of the fast-food industry in New Jersey and Pennsylvania. Am Econ Rev 84:772–793
Chang YM, Ehrlich I (1985) On the economics of compliance with the minimum wage law. J Polit Econ 93(1):84–91
Department of Labour (2014) Basic Conditions of Employment Amendment Act, 2013: Commencement. SA Government Gazette Vol. 590, No. 37955, Pretoria
Dinkelman T, Ranchhod V (2012) Evidence on the impact of minimum wage laws in an informal sector: domestic workers in South Africa. J Dev Econ 99(1):27–45
Dinkelman T, Ranchhod V, Hofmeyr C (2014) Enforcement and compliance: the case of minimum wages and mandatory contracts for domestic workers in South Africa. Available at: www.econ3x3.org. Accessed 29 July 2015
Foster J, Greer J, Thorbecke E (1984) A class of decomposable poverty measures. Econometrica 52(3):761–766
Grenier G (1982) On compliance with the minimum wage law. J Polit Econ 90(1):184–187
Human Rights Watch (2011) Ripe with abuse: human rights conditions in South Africa's fruit and wine industries. Available at: https://www.hrw.org/report/2011/08/23/ripe-abuse/human-rights-conditions-south-africas-fruit-and-wine-industries. Accessed 29 July 2015
ILO (2010) Global wage report 2010/11: wage policies in times of crisis. ISBN 978-92-2-123622-1
Lee D (1999) Wage inequality in the United States during the 1980s: rising dispersion or falling minimum wage? Q J Econ 114(3):977–1023
Neumark D, Schweitzer M, Wascher W (2000) The effects of minimum wages throughout the wage distribution. NBER Working Paper 7519. National Bureau of Economic Research, Cambridge
Stigler G (1946) The economics of minimum wage legislation. Am Econ Rev 36(3):358–365
Theron J, Godfrey S, Visser M (2007) Globalization, the impact of trade liberalization, and labour law: the case of South Africa. International Institute for Labour Studies, Geneva
Western Cape Government (2010) Provincial inspection plan, Inspection and Enforcement Services, Department of Labour. Internal document, Cape Town
Yaniv G (2001) Minimum wage noncompliance and the employment decision. J Labor Econ 19:596–603

This paper was first presented at a global conference entitled Reforming Minimum Wage and Labor Regulation Policy in Developing and Transition Economies, held at Beijing Normal University, October 18–19, 2014, Beijing, China. The authors gratefully acknowledge the detailed and constructive comments on this manuscript offered by two anonymous referees. Responsible editor: David Lam

Development Policy Research Unit, University of Cape Town, Cape Town, 7700, South Africa: Haroon Bhorat & Benjamin Stanwix
Cornell University, Ithaca, NY, 14853-7801, USA: Ravi Kanbur
Correspondence to Haroon Bhorat.

The IZA Journal of Labor and Development is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles.

Bhorat, H., Kanbur, R. & Stanwix, B. Partial minimum wage compliance. IZA J Labor Develop 4, 18 (2015). https://doi.org/10.1186/s40175-015-0039-1
Chapters (29) MRS Online Proceedings Library Archive (71) Mathematical Proceedings of the Cambridge Philosophical Society (8) Symposium - International Astronomical Union (8) Geological Magazine (4) Journal of Paleontology (4) Radiocarbon (3) European Journal of Anaesthesiology (2) Journal of the Australian Mathematical Society (2) European Astronomical Society Publications Series (1) International Astronomical Union Colloquium (1) Journal of Applied Probability (1) Mineralogical Magazine (1) Powder Diffraction (1) Proceedings of the London Mathematical Society (1) Visual Neuroscience (1) International Astronomical Union (11) Australian Mathematical Society Inc (5) Applied Probability Trust (1) Mineralogical Society (1) Encyclopedia of Mathematics and its Applications (19) London Mathematical Society Lecture Note Series (9) Flows along arch filaments observed in the GRIS 'very fast spectroscopic mode' S. J. González Manrique, C. Denker, C. Kuckein, A. Pastor Yabar, M. Collados, M. Verma, H. Balthasar, A. Diercke, C. E. Fischer, P. Gömöry, N. Bello González, R. Schlichenmaier, M. Cubas Armas, T. Berkefeld, A. Feller, S. Hoch, A. Hofmann, A. Lagg, H. Nicklas, D. Orozco Suárez, D. Schmidt, W. Schmidt, M. Sigwarth, M. Sobotka, S. K. Solanki, D. Soltau, J. Staude, K. G. Strassmeier, R. Volkmer, O. von der Lühe, T. Waldmann Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S327 / October 2016 Published online by Cambridge University Press: 12 September 2017, pp. 28-33 A new generation of solar instruments provides improved spectral, spatial, and temporal resolution, thus facilitating a better understanding of dynamic processes on the Sun. High-resolution observations often reveal multiple-component spectral line profiles, e.g., in the near-infrared He i 10830 Å triplet, which provides information about the chromospheric velocity and magnetic fine structure. We observed an emerging flux region, including two small pores and an arch filament system, on 2015 April 17 with the 'very fast spectroscopic mode' of the GREGOR Infrared Spectrograph (GRIS) situated at the 1.5-meter GREGOR solar telescope at Observatorio del Teide, Tenerife, Spain. We discuss this method of obtaining fast (one per minute) spectral scans of the solar surface and its potential to follow dynamic processes on the Sun. We demonstrate the performance of the 'very fast spectroscopic mode' by tracking chromospheric high-velocity features in the arch filament system. On the selection and balancing of multiple selfish goals Catalina Kopetz, Wilhelm Hofmann, Reinout W. H. J. Wiers Journal: Behavioral and Brain Sciences / Volume 37 / Issue 2 / April 2014 The selfish goal metaphor is interesting and intriguing. It accounts for the idiosyncrasies and inconsistencies in peoples' goal pursuits without invoking free will, self-regulatory, or self-control failures. However, people pursue multiple goals, sometimes simultaneously. We argue that the model proposed in the target article may gain significant theoretical and practical value if the principles underlying goal selection and/or balancing on a moment-to-moment basis are clearly specified and integrated with the notion of the selfish goal. Comparison of psychotherapies for adult depression to pill placebo control groups: a meta-analysis P. Cuijpers, E. H. Turner, D. C. Mohr, S. G. Hofmann, G. Andersson, M. Berking, J. 
Coyne The effects of antidepressants for treating depressive disorders have been overestimated because of selective publication of positive trials. Reanalyses that include unpublished trials have yielded reduced effect sizes. This in turn has led to claims that antidepressants have clinically insignificant advantages over placebo and that psychotherapy is therefore a better alternative. To test this, we conducted a meta-analysis of studies comparing psychotherapy with pill placebo. Ten 10 studies comparing psychotherapies with pill placebo were identified. In total, 1240 patients were included in these studies. For each study, Hedges' g was calculated. Characteristics of the studies were extracted for subgroup and meta-regression analyses. The effect of psychotherapy compared to pill placebo at post-test was g = 0.25 [95% confidence interval (CI) 0.14–0.36, I 2 = 0%, 95% CI 0–58]. This effect size corresponds to a number needed to treat (NNT) of 7.14 (95% CI 5.00–12.82). The psychotherapy conditions scored 2.66 points lower on the Hamilton Depression Rating Scale (HAMD) than the placebo conditions, and 3.20 points lower on the Beck Depression Inventory (BDI). Some indications for publication bias were found (two missing studies). We found no significant differences between subgroups of the studies and in meta-regression analyses we found no significant association between baseline severity and effect size. Although there are differences between the role of placebo in psychotherapy and pharmacotherapy research, psychotherapy has an effect size that is comparable to that of antidepressant medications. Whether these effects should be deemed clinically relevant remains open to debate. THE PROBABILITY THAT $\lowercase {X}^{\lowercase {M}}$ AND $\lowercase {Y}^{\lowercase {N}}$ COMMUTE IN A COMPACT GROUP Representation theory of groups Probabilistic methods in group theory KARL H. HOFMANN, FRANCESCO G. RUSSO Journal: Bulletin of the Australian Mathematical Society / Volume 87 / Issue 3 / June 2013 In a recent article [K. H. Hofmann and F. G. Russo, 'The probability that $x$ and $y$ commute in a compact group', Math. Proc. Cambridge Phil Soc., to appear] we calculated for a compact group $G$ the probability $d(G)$ that two randomly selected elements $x, y\in G$ satisfy $xy=yx$ , and we discussed the remarkable consequences on the structure of $G$ which follow from the assumption that $d(G)$ is positive. In this note we consider two natural numbers $m$ and $n$ and the probability $d_{m,n}(G)$ that for two randomly selected elements $x, y\in G$ the relation $x^my^n=y^nx^m$ holds. The situation is more complicated whenever $n,m\gt 1$ . If $G$ is a compact Lie group and if its identity component $G_0$ is abelian, then it follows readily that $d_{m,n}(G)$ is positive. We show here that the following condition suffices for the converse to hold in an arbitrary compact group $G$ : for any nonopen closed subgroup $H$ of $G$ , the sets $\{g\in G: g^k\in H\}$ for both $k=m$ and $k=n$ have Haar measure $0$ . Indeed, we show that if a compact group $G$ satisfies this condition and if $d_{m,n}(G)\gt 0$ , then the identity component of $G$ is abelian. The probability that x and y commute in a compact group Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 153 / Issue 3 / November 2012 We show that a compact group G has finite conjugacy classes, i.e., is an FC-group if and only if its center Z(G) is open if and only if its commutator subgroup G′ is finite. 
Let d(G) denote the Haar measure of the set of all pairs (x,y) in G×G for which [x,y]=1; this, formally, is the probability that two randomly picked elements commute. We prove that d(G) is always rational and that it is positive if and only if G is an extension of an FC-group by a finite group. This entails that G is abelian by finite. The proofs involve measure theory, transformation groups, Lie theory of arbitrary compact groups, and representation theory of compact groups. Examples and references to the history of the discussion are given at the end of the paper. Metal slanted columnar thin film THz optical sensors T. Hofmann, D. Schmidt, A. Boosalis, P. Kühne, C.M. Herzinger, J.A. Woollam, E. Schubert, M. Schubert Published online by Cambridge University Press: 18 April 2012, mrsf11-1409-cc13-31 We demonstrate that the anisotropic optical response of metal (cobalt) slanted columnar thin films (STF) at THz frequencies strongly depends on the dielectric properties of the dielectric ambient surrounding the slanted columnar thin films. An effective medium dielectric function approach is used to describe the combined optical response of metal slanted columnar thin film and dielectric ambient. Our observations indicate that metal (cobalt) slanted columnar thin films can be used as sensors which will enable detection and characterization of minute amounts of dielectrics at THz frequencies, such as for flow-based detection of liquid chemical constituents. Infrared ellipsometry and near-infrared-to-vacuum-ultraviolet ellipsometry study of free-charge carrier properties in In-polar p-type InN Stefan Schöche, Tino Hofmann, Nebiha Ben Sedrine, Vanya Darakchieva, Xinqiang Wang, Akihiko Yoshikawa, Mathias Schubert Published online by Cambridge University Press: 16 January 2012, mrsf11-1396-o07-27 We apply infrared spectroscopic ellipsometry (IRSE) in combination with near-infrared to vacuum-ultraviolet ellipsometry to study the concentration and mobility of holes in a set of Mg-doped In-polar InN samples of different Mg-concentrations. P-type behavior is found in the IRSE spectra for Mg-concentrations between 1x1018 cm-3 and 3x1019 cm-3. The free-charge carrier parameters are determined using a parameterized model that accounts for phonon-plasmon coupling. From the NIR-VUV data information about layer thicknesses, surface roughness, and structural InN layer properties are extracted and related to the IRSE results. Pro-Lie groups which are infinite-dimensional Lie groups K. H. HOFMANN, K.-H. NEEB Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 146 / Issue 2 / March 2009 A pro-Lie group is a projective limit of a family of finite-dimensional Lie groups. In this paper we show that a pro-Lie group G is a Lie group in the sense that its topology is compatible with a smooth manifold structure for which the group operations are smooth if and only if G is locally contractible. We also characterize the corresponding pro-Lie algebras in various ways. Furthermore, we characterize those pro-Lie groups which are locally exponential, that is, they are Lie groups with a smooth exponential function which maps a zero neighbourhood in the Lie algebra diffeomorphically onto an open identity neighbourhood of the group. S75 In-Situ Strain Measurements in Composite Castings Using Neutron Diffraction U. Wasmuth, M. Hofmann, L. Meier, H. Hoffmann Journal: Powder Diffraction / Volume 23 / Issue 2 / June 2008 Published online by Cambridge University Press: 20 May 2016, p. 
190 Ediacaran biota on Bonavista Peninsula, Newfoundland, Canada H. J. Hofmann, S. J. O'Brien, A. F. King Journal: Journal of Paleontology / Volume 82 / Issue 1 / January 2008 Published online by Cambridge University Press: 20 May 2016, pp. 1-36 Print publication: January 2008 Newly found fossils in the Conception and St. John's groups of the Bonavista Peninsula considerably extend the known geographic distribution of the Ediacaran fossils in Newfoundland. They occur in deepwater sediments and are preserved as epireliefs, forming census populations underneath volcanic ash layers throughout a more than 1 km thick turbiditic sequence. the exposed fossiliferous units comprise the Mistaken Point, Trepassey, Fermeuse, and Renews Head formations. the remains are tectonically deformed, with long axes of elliptical discs aligned parallel to cleavage strike; shortening of originally circular bedding surface features is on the order of 30-50% (averaging ~35%). The assemblage includes Aspidella, Blackbrookia, Bradgatia, Charnia, Charniodiscus, Fractofusus, Hiemalora, and Ivesheadia. These occur throughout the succession, with Aspidella being the most common genus, followed by Charnia and Charniodiscus. Four new taxa are described, with candelabra-like fossils with a Hiemalora-like base referred to Primocandelabrum hiemaloranum n. gen. and sp., bush-like fossils to Parviscopa bonavistensis n. gen. and sp., ladder-like fossils to Hadryniscala avalonica n. gen. and sp., and string-like fossils with basal disc to Hadrynichorde catalinensis n. gen. and sp. the remains also include dubiofossils. The stratigraphic ranges of some taxa on the Bonavista Peninsula are longer than previously reported from the Avalon Peninsula, with Fractofusus spindles present in the Trepassey Formation, Bradgatia, Charnia, Charniodiscus, and Ivesheadia reaching as high as the Fermeuse Formation, and Aspidella extending into the middle of the Renews Head Formation. the spindles in the Trepassey Formation are comparable to those found mainly in the stratigraphically older Briscal Formation on the Avalon Peninsula. Infrared behavior of aluminum nanostructure sculptured thin films Tino Hofmann, M. Schubert, D. Schmidt, E. Schubert Published online by Cambridge University Press: 01 February 2011, 1080-O04-16 We report on fabrication, structural and infrared optical characterization of nanostructure aluminum sculptured thin films prepared by glancing angle deposition (GLAD) and controlled substrate motion on p-type silicon. We discuss two structures, one with plate-like and one with screw-like (chiral) morphology. While the plate-like sample possesses a metal Drude behavior in the infrared spectral range, the chiral nanowire sample behaves non-metallic and reveals a series of intriguing resonances, which are equally spaced in frequency by ∼7.5 THz. We suggest that formation of 3D nano resonator circuits consisting of inductances and capacitances has occurred within the screw-like conductive aluminum wire sample, which might be responsible for the observed resonances. We suggest conductive GLAD nanostructures in combination with Schottky diodes to facilitate active or passive THz detector and transmitter devices. The relative stabilities of the copper hydroxyl sulphates C. H. Yoder, T. M. Agee, K. E. Ginion, A. E. Hofmann, J. E. Ewanichak, C. D. Schaeffer, M. J. Carroll, R. W. Schaeffer, P. F. 
McCaffrey Journal: Mineralogical Magazine / Volume 71 / Issue 5 / October 2007 The literature contains considerable disagreements on the relative stabilities of the members of the copper hydroxyl sulphate family. Titration of copper sulphate with sodium hydroxide is claimed by some to produce only brochantite, while other reports indicate that antlerite and a dihydrate of antlerite are produced in the titration. Most stability field diagrams show that antlerite is the more stable stoichiomer at pH 4 and sulphate activity of 0.05–1. We have reexamined this stoichiometric family by titration of aqueous copper sulphate with sodiumhydroxide and sodium carbonate, reverse titration of sodiumhydroxide with copper sulphate and simultaneous addition of copper sulphate and sodium hydroxide at a variety of mole ratios, concentrations, temperatures and reaction times. We have also explored the reaction of copper hydroxide with copper sulphate and the reaction of weak bases, such as sodiumacetate, sodiumcarbonate and urea, with copper sulphate. Our work indicates that: (1) antlerite is not formed in reactions of 0.05 to 1.2 M CuSO4 with 0.05–1.0 M NaOH or Na2CO3 at room temperature; (2) antlerite is formed in the addition of small concentrations of base (≤0.01 M) to 1 M CuSO4 at 80°C, but not at roomtem perature or with 0.01 M CuSO4 at 80°C; (3) the formation of Cu5(SO4)2(OH)6·4H2O occurs at large Cu2+ to base mole ratios; (4) the compound described in the literature as antlerite dihydrate is actually Cu5(SO4)2(OH)6.4H2O; (5) at mole ratios of Cu2+ to OH– ranging from 2:1 to 1:2 the predominant product is brochantite; and (6) brochantite and Cu5(SO4)2(OH)6.4H2O are converted to antlerite in the presence of 1 M CuSO4 (the latter requires temperatures of 80°C or greater). The Ksp (ion activity product) values of antlerite and brochantite were determined to be 2.53 (0.01)⨯10−48 and 1.01 (0.01)⨯10−69, respectively, using atomic absorption spectroscopy and Visual MINTEQ after equilibration in solutions of varying ionic strength and pH for six days. These values are in good agreement with those from the literature. However, after 6 months, antlerite in contact with solution is partially converted to brochantite and hence is metastable with a relatively low conversion rate. The Ksp value for antlerite must therefore be considered approximate. The relative stabilities of the copper hydroxyl sulphates are rationalized using appropriate equations and Gibbs energy calculations. A Gibbs free energy of formation for Cu5(SO4)2(OH)6.4H2O of –3442 kJ/mol was obtained from the simple salt approximation. An Open Mapping Theorem For Pro-Lie Groups Topological linear spaces and related structures Topological and differentiable algebraic systems Karl H. Hofmann, Sidney A. Morris Journal: Journal of the Australian Mathematical Society / Volume 83 / Issue 1 / August 2007 Published online by Cambridge University Press: 09 April 2009, pp. 55-78 A pro-Lie group is a projective limit of finite dimensional Lie groups. It is proved that a surjective continuous group homomorphism between connected pro-Lie groups is open. In fact this remains true for almost connected pro-Lie groups where a topological group is called almost connected if the factor group modulo the identity component is compact. As consequences we get a Closed Graph Theorem and the validity of the Second Isomorphism Theorem for pro-Lie groups in the almost connected context. Commuting exponentials in a Lie group KARL H. HOFMANN, WALTER J. 
MICHAELIS Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 141 / Issue 2 / September 2006 Two commuting real matrices $A$ and $B$ have commuting exponentials $\exp A$ and $\exp B$, a fact observed for instance in linear algebra or differential equations courses. The converse implication is false. A clarification of this phenomenon is proposed that makes use of the theory of the exponential function $\exp\colon{\fam\euffam g}\to G$ of a real Lie group $G$ and its singularities. In Section 1, a catalog of low-dimensional examples illustrates various ways that, for two elements $X, Y\in{\fam\euffam g}$, the commuting of $\exp X$ and $\exp Y$ in $G$ may fail to entail the commuting of $X$ and $Y$ in ${\fam\euffam g}$. In Section 2, consequences of the relation $[\exp X,\exp Y]={\bf 1}$ are inspected, whereby certain regularity assumptions on $X$ and $Y$ are made. A regular element $Y$ of the Lie algebra ${\fam\euffam g}$ determines a Cartan subalgebra ${\fam\euffam h}={\fam\euffam g}^0(Y)$ of ${\fam\euffam g}$ and a certain subgroup ${\cal W}_Y$ of the (finite!) Weyl group of ${\fam\euffam g}$ with respect to the Cartan subalgebra ${\fam\euffam h}$. If, additionally, the exponential function is regular at $X$ and at $Y$, then the ordered pair $(X,Y)$ is said to be in general position. If $(X,Y)$ is in general position, then the relation $[\exp X,\exp Y]={\bf 1}$ in $G$ permits the definition of a certain element $w(X,Y)\in{\cal W}_Y$. Let ${\fam\euffam z}({\fam\euffam g})$ denote the center of ${\fam\euffam g}$. It is shown that, if $\exp X$ and $\exp Y$ commute in $G$ for $(X,Y)$ in general position, then $[X,Y]\in{\fam\euffam z}({\fam\euffam g})\cap[{\fam\euffam h},{\fam\euffam h}]$ iff $w(X,Y)={\bf 1}$. Write $H\defi\exp{\fam\euffam h}$, and let $Z(G)$ denote the center of $G$. If the identity component of $Z(G)\cap[H,H]$ is simply connected, and if $\exp X$ and $\exp Y$ commute for $(X,Y)$ in general position, then $[X,Y]=0$ iff $w(X,Y)={\bf 1}$. If $G$ is simply connected compact, then $[\exp X,\exp Y]={\bf 1}$ and $[X,Y]=0$ are equivalent for all pairs $(X,Y)$ in general position. In ${\mathop{\rm SO}\nolimits}(3)$ this is not the case; here $|{\cal W}_Y|=2$. In Section 3, examples show that the validity of the equation $\exp X\exp Y=\exp\!(X+Y)$ has no implications whatsoever in the direction of the commuting of $\exp X$ and $\exp Y$. Finally, in Section 4, it is shown that, for a simply connected Lie group $G$, the commuting of $X, Y\in{\fam\euffam g}$ and that of $\exp X,\exp Y\in G$ are equivalent properties for all$X$ and $Y$ if and only if the exponential function is injective. This class of Lie groups was characterized in terms of other properties by Dixmier and by Saito, independently, in 1957. NIR high-resolution imaging and radiative transfer modeling of the Frosty Leo nebula K. Murakawa, K. Ohnaka, T. Driebe, K.-H. Hofmann, D. Schertl, S. Oya, G. Weigelt Journal: Proceedings of the International Astronomical Union / Volume 2 / Issue S234 / April 2006 We present a $K\prime$-band speckle image and $HK$-band polarimetric images of the proto-planetary nebula Frosty Leo obtained using the 6 m SAO telescope and the 8 m Subaru telescope, respectively. Our speckle image revealed clumpy structures in the hourglass-like bipolar nebula. The polarimetric data, for the first time, detected an elongated region with small polarizations and polarization vector alignment on the east side of the central star. 
NIR high-resolution imaging and radiative transfer modeling of the Frosty Leo nebula K. Murakawa, K. Ohnaka, T. Driebe, K.-H. Hofmann, D. Schertl, S. Oya, G. Weigelt Journal: Proceedings of the International Astronomical Union / Volume 2 / Issue S234 / April 2006 We present a $K'$-band speckle image and $HK$-band polarimetric images of the proto-planetary nebula Frosty Leo obtained using the 6 m SAO telescope and the 8 m Subaru telescope, respectively. Our speckle image revealed clumpy structures in the hourglass-like bipolar nebula. The polarimetric data, for the first time, detected an elongated region with small polarizations and polarization vector alignment on the east side of the central star. We have performed radiative transfer calculations to model the dust shell of Frosty Leo. We found that micron-size grains in the equatorial dense region and small grains in the bipolar lobes are required to explain the total intensity images, the polarization images, and the spectral energy distribution.

5 - Molecular mechanisms and gene expression patterns in myelodysplastic syndromes By Wolf-Karsten Hofmann, University Hospital "Benjamin Franklin," Berlin, Germany, H. Phillip Koeffler, Cedars Sinai Research Institute, UCLA School of Medicine, Los Angeles, CA Edited by Peter L. Greenberg Book: Myelodysplastic Syndromes Print publication: 10 November 2005, pp 129-146

Remifentanil for analgesia during retrobulbar nerve block placement W. Leidinger, P. Schwinn, H.-M. Hofmann, J. N. Meierhofer Journal: European Journal of Anaesthesiology / Volume 22 / Issue 1 / January 2005 Background and objectives: Patients undergoing eye surgery under regional anaesthesia often require concomitant medication for analgesia and comfort. Remifentanil, with its ultra-short-acting profile, may be useful to reduce pain during retrobulbar nerve block for cataract surgery. Methods: We performed a prospective, randomized, double-blind study to compare the efficacy of remifentanil for analgesia during retrobulbar nerve block placement. Ninety patients undergoing cataract surgery were randomly assigned to receive either remifentanil 0.3 μg kg−1 (n = 45) or an equivalent volume of saline (n = 45). The injection was administered within 30 s in both groups. Patients rated their amount of pain on a 10 cm visual analogue scale. Respiratory frequency, oxygen saturation, cardiac rhythm and postoperative nausea and vomiting (PONV) were recorded. Results: The mean visual analogue score in the Remifentanil group was 2.56; it was 5.51 in the Saline group (P = 0.001, U-test). Three patients developed bradycardia and three had PONV in the Remifentanil group. Two patients developed tachycardia and one had PONV in the Saline group. No patient developed respiratory depression. Conclusion: In patients undergoing retrobulbar block placement for eye surgery, 0.3 μg kg−1 remifentanil over 30 s significantly reduced their reported pain. In addition, remifentanil did not increase the risk of untoward side-effects.

Projective Limits of Finite-Dimensional Lie Groups Journal: Proceedings of the London Mathematical Society / Volume 87 / Issue 3 / November 2003 For a topological group $G$ we define $\mathcal{N}$ to be the set of all normal subgroups modulo which $G$ is a finite-dimensional Lie group. Call $G$ a pro-Lie group if, firstly, $G$ is complete, secondly, $\mathcal{N}$ is a filter basis, and thirdly, every identity neighborhood of $G$ contains some member of $\mathcal{N}$. It is easy to see that every pro-Lie group $G$ is a projective limit of the projective system of all quotients of $G$ modulo subgroups from $\mathcal{N}$. The converse implication emerges as a difficult proposition, but it is shown here that any projective limit of finite-dimensional Lie groups is a pro-Lie group. It is also shown that a closed subgroup of a pro-Lie group is a pro-Lie group, and that for any closed normal subgroup $N$ of a pro-Lie group $G$ and any one-parameter subgroup $Y \colon \mathbb{R} \to G/N$ there is a one-parameter subgroup $X \colon \mathbb{R}\to G$ such that $X(t) N = Y(t)$ for any real number $t$.
The category of all pro-Lie groups and continuous group homomorphisms between them is closed under the formation of all limits in the category of topological groups, and the Lie algebra functor on the category of pro-Lie groups preserves all limits and quotients.

G. Gierz, University of California, Riverside, K. H. Hofmann, Technische Universität Darmstadt, Germany, K. Keimel, Technische Universität Darmstadt, Germany, J. D. Lawson, Louisiana State University, M. Mislove, Tulane University, Louisiana, D. S. Scott, Carnegie Mellon University, Pennsylvania Book: Continuous Lattices and Domains Print publication: 06 March 2003, pp 572-574

III - The Lawson Topology The first topologies defined on a lattice directly from the lattice ordering (that is, Birkhoff's order topology and Frink's interval topology) involved "symmetrical" definitions – the topologies assigned to $L$ and to $L^{op}$ were identical. A guiding example was always the unit interval of real numbers in its natural order, which is of course a highly symmetrical lattice. The initial interest was in such questions as which lattices became compact and/or Hausdorff in these topologies. The Scott topology stands in strong contrast to such an approach. Indeed it is a "unidirectional" topology, since, for example, all the open sets are always upper sets; thus, for nontrivial lattices, the $T_0$ separation axiom is the strongest one it satisfies. Nevertheless, we saw in Chapter II that the Scott topology provides many links between domains and general topology in such classical areas as the theory of semicontinuous functions and in the study of lattices of closed (compact, convex) sets (ideals) in many familiar structures. In this chapter we introduce a new topology, called the Lawson topology, which is crucial in linking continuous lattices and domains to topological algebra. Its definition is more in the spirit of the interval and order topologies, and indeed it may be viewed as a mixture of the two. However, it remains asymmetrical – the Lawson topologies on $L$ and $L^{op}$ need not agree. But, even if one is seeking an appropriate Hausdorff topology for continuous lattices, this asymmetry is not at all surprising in view of the examples we have developed.
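For reference alongside the excerpt above, the "unidirectional" character of the Scott topology can be made precise by its standard definition (textbook material, stated here in our wording, not quoted from the book): for a complete lattice (or dcpo) $L$, a set $U\subseteq L$ is Scott-open iff
\begin{align*}
U={\uparrow}U \ \text{ ($U$ is an upper set)}\qquad\text{and}\qquad \sup D\in U \implies D\cap U\neq\emptyset \ \text{ for every directed } D\subseteq L.
\end{align*}
In particular, every Scott-open set is an upper set, which is exactly the asymmetry between $L$ and $L^{op}$ discussed in the chapter introduction.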
CommonCrawl
March 2012, 11(2): 697-708. doi: 10.3934/cpaa.2012.11.697

On the blow-up boundary solutions of the Monge-Ampère equation with singular weights Haitao Yang and Yibin Chang, Department of Mathematics, Zhejiang University, Hangzhou 310027, China Received July 2010 Revised July 2011 Published October 2011

We consider the Monge-Ampère equations $\det D^2 u = K(x) f(u)$ in $\Omega$, with $u|_{\partial\Omega}=+\infty$, where $\Omega$ is a bounded and strictly convex smooth domain in $R^N$. When $f(u) = e^u$ or $f(u)= u^p$, $p>N$, and the weight $K(x)\in C^\infty (\Omega )$ grows like a negative power of $d(x)=\mathrm{dist}(x, \partial \Omega)$ near $\partial \Omega$, we show some results on the uniqueness, nonexistence and exact boundary blow-up rate of strictly convex solutions for this problem. Existence of such solutions will also be studied in a more general case.

Keywords: Monge-Ampère equation, uniqueness, singular weight, blow-up solution, boundary behavior.

Mathematics Subject Classification: Primary: 35J05, 35J25; Secondary: 35B4.

Citation: Haitao Yang, Yibin Chang. On the blow-up boundary solutions of the Monge-Ampère equation with singular weights. Communications on Pure & Applied Analysis, 2012, 11 (2) : 697-708. doi: 10.3934/cpaa.2012.11.697
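To get a feel for what an "exact boundary blow-up rate" looks like, here is a hedged one-dimensional analogue (our toy computation, not taken from the paper; in one dimension $\det D^2u$ reduces to $u''$): with $K\equiv 1$ and $f(u)=e^u$, the function
\begin{align*}
u(x)=\ln\frac{2}{x^{2}} \qquad\text{satisfies}\qquad u''(x)=\frac{2}{x^{2}}=e^{u(x)},
\end{align*}
so near the boundary point $x=0$ the solution blows up at the rate $u(x)=-2\ln d(x)+\ln 2$ with $d(x)=x$. The paper's results describe how a weight growing like a negative power of $d(x)$ modifies such rates in the genuinely nonlinear, $N$-dimensional setting.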
CommonCrawl
Q&A > History > What do you think Lincoln meant by the phrase "government of the people, by the people, for the people"?

He intended to involve the public in government matters more than his predecessors had. He was going to inform the people and not keep them in the dark.

This question was answered by Colleen R. on StudySoup on 5/31/2017. It relates to History and has received 81 views since its upload.
CommonCrawl
May 2013, 33(5): 2155-2168. doi: 10.3934/dcds.2013.33.2155

Non-degeneracy and uniqueness of periodic solutions for $2n$-order differential equations Pedro J. Torres (Departamento de Matemática Aplicada, Universidad de Granada, 18071 Granada), Zhibo Cheng (School of Mathematics and Informatics, Henan Polytechnic University, Jiaozuo 454000, China), and Jingli Ren (Dept. of Math., Zhengzhou University, Zhengzhou 450001) Received December 2011 Revised August 2012 Published December 2012

We analyze the non-degeneracy of the linear $2n$-order differential equation $u^{(2n)}+\sum_{m=1}^{2n-1}a_{m}u^{(m)}=q(t)u$ with potential $q(t)\in L^p(\mathbb{R}/T\mathbb{Z})$, by means of new forms of the optimal Sobolev and Wirtinger inequalities. The result is applied to obtain existence and uniqueness of a periodic solution for the prescribed nonlinear problem in the semilinear and superlinear cases.

Keywords: Non-degeneracy, uniqueness, semilinear, superlinear, $2n$-order differential equation.

Mathematics Subject Classification: 34C2.

Citation: Pedro J. Torres, Zhibo Cheng, Jingli Ren. Non-degeneracy and uniqueness of periodic solutions for $2n$-order differential equations. Discrete & Continuous Dynamical Systems - A, 2013, 33 (5) : 2155-2168. doi: 10.3934/dcds.2013.33.2155
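Two notions used in this abstract, stated for orientation (standard definitions in this literature, in our wording): the linear equation is called non-degenerate when the trivial solution is its only $T$-periodic solution,
\begin{align*}
u^{(2n)}+\sum_{m=1}^{2n-1}a_{m}u^{(m)}=q(t)u,\quad u(t+T)=u(t)\ \text{for all } t \;\implies\; u\equiv 0,
\end{align*}
and the classical Wirtinger inequality, of which the authors develop new optimal forms, states that every $T$-periodic $u$ with $\int_0^T u\,dt=0$ satisfies
\begin{align*}
\int_0^T u^2\,dt \;\le\; \Big(\frac{T}{2\pi}\Big)^{2}\int_0^T (u')^{2}\,dt .
\end{align*}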
CommonCrawl
PhD Thesis Defenses 2013

PhD thesis defenses are a public affair and open to anyone who is interested. Attending them is a great way to get to know the work being done by your peers in the various research groups. On this page you will find a list of upcoming and past defense talks. Please go here for electronic access to most of the doctoral dissertations from Saarbrücken Computer Science going back to about 1990.

Sandro CASTRONOVO The Pull Paradigm: Foundations of User-Centric Advanced Driver Assistance Systems Based on Bidirectional Car2X Communication (Advisor: Prof. Wolfgang Wahlster) Fri, 20 December 2013, 14:00h, building D3 2 (DFKI), Conference room Reuse -2,17
This thesis develops applications for vehicular ad-hoc networks that go far beyond the currently established areas of driving safety and traffic efficiency. The ad-hoc network is regarded as a dynamic information resource which is available to any vehicle at any time. In contrast to current state-of-the-art research, the proposed Pull Paradigm starts at the user's vehicle rather than at an information source somewhere in the network, e.g. a braking car. To access information from highly dynamic ad-hoc networks, bidirectional communication and information discovery and retrieval play a vital role. Therefore, in the course of the work, the applicability of the Pull Paradigm to established vehicular ad-hoc networks is thoroughly examined and missing aspects are identified. It turns out that a number of enhancements to almost all layers of the network stack are necessary in order to apply the Pull Paradigm using existing technology. The central elements here are two novel algorithms for managing information flow and dissemination in ad-hoc networks, which are at first formulated from the abstract perspective of graph theory. Using the knowledge gained leads to the development of PADE, a platform that supports the development of vehicular ad-hoc network applications. The designed algorithms are then implemented as a routing scheme, integrated and evaluated in large, simulated city scenarios. Furthermore, PADE combines "real" and simulated communication technologies and abstracts from them, so that applications can be transferred from the lab into a test vehicle with minimal effort. In order to achieve this ambitious goal, PADE builds on a number of existing simulation and communication technologies. The practical applicability of the Pull approach is shown in two demonstrators that are integrated into a BMW 5 series test vehicle. The presentation module of the PADE platform was tested in the currently largest field operational test for vehicular ad-hoc communication. Over 400 drivers in 120 vehicles experienced the system on a daily basis.

Christian FEDERMANN Hybrid Machine Translation using Binary Classification Models trained on Joint, Binarised Feature Vectors (Advisor: Prof. Hans Uszkoreit) Mon, 16 December 2013, 12:15h, building C7 4 (conference room)
We describe the design and implementation of a system combination method for machine translation output. It is based on sentence selection using binary classification models estimated on joint, binarised feature vectors. In contrast to existing system combination methods, which work by dividing candidate translations into n-grams, i.e., sequences of n words or tokens, our framework performs sentence selection, which does not alter the selected, best translation.
First, we investigate the potential performance gain attainable by optimal sentence selection. To do so, we conduct the largest meta-study on data released by the yearly Workshop on Statistical Machine Translation (WMT). Second, we introduce so-called joint, binarised feature vectors which explicitly model feature value comparison for two systems A, B. We compare different settings for training binary classifiers using single, joint, as well as joint, binarised feature vectors. After having shown the potential of both selection and binarisation as methodological paradigms, we combine these two into a combination framework which applies pairwise comparison of all candidate systems to determine the best translation for each individual sentence. Our system is able to outperform other state-of-the-art system combination approaches; this is confirmed by our experiments. We conclude by summarising the main findings and contributions of our thesis and by giving an outlook to future research directions.

Thomas HELTEN Processing and Tracking Human Motions Using Optical, Inertial, and Depth Sensors (Advisors: Prof. Meinard Müller, Prof. Christian Theobalt) Fri, 13 December 2013, 15:00h, building E1 4 (MPI-Inf), R 0.19
The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities such as optical, inertial and depth sensors. Each of these sensor modalities has intrinsic advantages and disadvantages that make it suitable for capturing specific aspects of human motions as, for example, the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute with algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensor-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations such as in the presence of occlusions. Here, we exploit the availability of complementary inertial sensor information.

Lizhen QU Sentiment Analysis with Limited Training Data (Advisor: Prof. Gerhard Weikum) Wed, 4 December 2013, 10:30h, building E1 4 (MPI-Inf), R 0.24
Sentiments are positive and negative emotions, evaluations and stances. This dissertation focuses on learning-based systems for automatic analysis of sentiments and comparisons in natural language text. The proposed approach consists of three contributions: 1. Bag-of-opinions model: For predicting document-level polarity and intensity, we proposed the bag-of-opinions model by modeling each document as a bag of sentiments, which can explore the syntactic structures of sentiment-bearing phrases for improved rating prediction of online reviews.
2. Multi-experts model: Due to the sparsity of manually-labeled training data, we designed the multi-experts model for sentence-level analysis of sentiment polarity and intensity by fully exploiting any available sentiment indicators, such as phrase-level predictors and sentence similarity measures. 3. Senti-LSSVMrae model: To understand the sentiments regarding entities, we proposed the Senti-LSSVMrae model for extracting sentiments and comparisons of entities at both the sentence and subsentential level. Different granularity of analysis leads to different model complexity: the finer, the more complex. All proposed models aim to minimize the use of hand-labeled data by maximizing the use of freely available resources. Our experimental results on real-world data showed that all models significantly outperform the state-of-the-art methods on the respective tasks.

Nuno SANTOS Improving Trust in Cloud, Enterprise, and Mobile Computing Platforms (Advisor: Dr. Rodrigo Rodrigues) Wed, 27 November 2013, 18:00h, building E1 5 (MPI-SWS), R 0.29
Trust plays a fundamental role in the adoption of technology by society. Potential consumers tend to avoid a particular technology whenever they feel suspicious about its ability to cope with their security demands. Such a loss of trust could occur in important computing platforms, namely cloud, enterprise, and mobile platforms. In this thesis, we aim to improve trust in these platforms by (i) enhancing their security mechanisms, and (ii) giving their users guarantees that these mechanisms are in place. To realize both these goals, we propose several novel systems. For cloud platforms, we present Excalibur, a system that enables building trusted cloud services. Such services give cloud customers the ability to process data privately in the cloud, and to attest that the respective data protection mechanisms are deployed. Attestation is made possible by the use of trusted computing hardware placed on the cloud nodes. For enterprise platforms, we propose an OS security model—the broker security model—aimed at providing information security against a negligent or malicious system administrator while letting him retain most of the flexibility to manage the OS. We demonstrate the effectiveness of this model by building BrokULOS, a proof-of-concept instantiation of this model for Linux. For mobile platforms, we present the Trusted Language Runtime (TLR), a software system for hosting mobile apps with stringent security needs (e.g., e-wallet). The TLR leverages ARM TrustZone technology to protect mobile apps from OS security breaches.

Tianxiang LU Formal Verification of the Pastry Protocol (Advisor: Prof. Christoph Weidenbach)
Pastry is a structured P2P algorithm realizing a Distributed Hash Table (DHT) over an underlying virtual ring of nodes. Several implementations of Pastry are available and have been applied in practice, but no attempt has so far been made to formally describe the algorithm or to verify its properties. Since Pastry combines rather complex data structures, asynchronous communication, concurrency, and resilience to churn, i.e. spontaneous join and departure of nodes, it makes an interesting target for verification. This thesis focuses on the Join protocol of Pastry and formally defines different statuses (from "dead" to "ready") of a node according to its stage during join. Only "ready" nodes are supposed to have consistent key mappings among each other and are allowed to deliver the answer message.
The correctness property is identified by this thesis to be CorrectDelivery, stating that there is always at most one node that can deliver an answer to a lookup request for a key and that this node is the numerically closest "ready" node to that key. This property is non-trivial to preserve in the presence of churn. Through this thesis, unexpected violations of CorrectDelivery in previous versions of Pastry are discovered and analyzed using the TLA+ model checker TLC. Based on the analysis, Pastry is improved to a new design of the Pastry protocol, IdealPastry, which is first verified using the interactive theorem prover TLAPS for TLA+. IdealPastry is further improved to LuPastry, which is formally proved to be correct w.r.t. CorrectDelivery under the assumption that no nodes leave the network, an assumption which cannot be further relaxed due to possible network separation when particular nodes simultaneously leave the network.

Mohammed SHAHEEN Cache based Optimization of Stencil Computations – An Algorithmic Approach (Advisor: Prof. Hans-Peter Seidel) Tues, 5 November 2013, 9:00h, building E1 4, R 019
We are witnessing a fundamental paradigm shift in computer design. Memory has been and is becoming more hierarchical. Clock frequency is no longer crucial for performance. The on-chip core count is doubling rapidly. The quest for performance is growing. These facts have led to complex computer systems which place high demands on scientific computing problems to achieve high performance. Stencil computation is a frequent and important kernel that is affected by this complexity. Its importance stems from the wide variety of scientific and engineering applications that use it. The stencil kernel is a nearest-neighbor computation with low arithmetic intensity, and thus it usually achieves only a tiny fraction of the peak performance when executed on modern computer systems. Fast on-chip memory modules were introduced as the hardware approach to alleviate the problem. There are mainly three approaches to address the problem: cache aware, cache oblivious, and automatic loop transformation approaches. In this thesis, comprehensive cache aware and cache oblivious algorithms to optimize stencil computations on structured rectangular 2D and 3D grids are presented. Our algorithms observe the challenges for high performance in the previous approaches, devise solutions for them, and carefully balance the solution building blocks against each other. Many-core systems put the scalability of memory access at stake, which has led to hierarchical main memory systems. This adds another locality challenge for performance. We tailor our frameworks to meet the new performance challenge on these architectures. Experiments are performed to evaluate the performance of our frameworks on synthetic as well as real world problems.
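To make the kernel under discussion concrete, the following sketch shows a 2D 5-point Jacobi stencil with simple spatial tiling, the kind of cache-aware blocking the abstract alludes to. This is our illustration, not code from the thesis; the tile sizes ti and tj are tunable assumptions that would in practice be matched to the cache size.

    import numpy as np

    def jacobi_5pt_tiled(a, iters=10, ti=64, tj=64):
        # Sweep the grid tile by tile so that each tile (plus its halo)
        # stays cache-resident while its points are updated.
        n, m = a.shape
        b = a.copy()
        for _ in range(iters):
            for i0 in range(1, n - 1, ti):
                for j0 in range(1, m - 1, tj):
                    i1 = min(i0 + ti, n - 1)
                    j1 = min(j0 + tj, m - 1)
                    # 5-point nearest-neighbor update: read from a, write to b
                    b[i0:i1, j0:j1] = 0.25 * (a[i0-1:i1-1, j0:j1] + a[i0+1:i1+1, j0:j1]
                                              + a[i0:i1, j0-1:j1-1] + a[i0:i1, j0+1:j1+1])
            a, b = b, a  # swap read and write grids after each full sweep
        return a

Because reads come from a and writes go to b, the tiling only changes the traversal order, not the result. More aggressive cache-oblivious or temporally blocked variants, of the kind studied in the thesis, also block across iterations.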
Evgeny KRUGLOV Superposition Modulo Theory Thu, 31 October 2013, 16:30h, building E1 5, R 0.02
This thesis is about the Hierarchic Superposition calculus SUP(T) and its application to reasoning in hierarchic combinations FOL(T) of the free first-order logic FOL with a background theory T. Particular hierarchic combinations covered in the thesis are the combinations of FOL and linear and non-linear arithmetic, LA and NLA respectively. Recent progress in automated reasoning has greatly encouraged numerous applications in soft- and hardware verification and the analysis of complex systems. The applications typically require determining the validity or unsatisfiability of quantified formulae over the combination of the free first-order logic with some background theories. The hierarchic superposition combines (i) reasoning in FOL equational clauses with universally quantified variables, which is based on the standard "flat" superposition calculus, and (ii) SMT-based reasoning techniques in such rich theories as, e.g., arithmetic, which are usually not (finitely) axiomatizable by FOL formulae. The thesis significantly extends previous results on SUP(T). In particular, we introduce new, substantially more effective sufficient completeness and hierarchic redundancy criteria, turning SUP(T) into a complete procedure or a decision procedure for various FOL(T) fragments, and we instantiate and refine SUP(T) to effectively support particular combinations of FOL with the LA and NLA theories, enabling a fully automatic mechanism for reasoning about systems formalized in FOL(LA) or FOL(NLA).

Tomasz JURKIEWICZ Toward better Computation Models for Modern Machines (Advisor: Prof. Kurt Mehlhorn) Wed, 30 October 2013, 14:00h, building E1 4, R 024
Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and a virtual memory. We address the computational cost of address translation in virtual memory and difficulties in the design of parallel algorithms on modern many-core machines. The starting point for our work on virtual memory is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms. In the second part of the paper we present a case study of the design of an efficient 2D convex hull algorithm for GPUs. The algorithm is based on the ultimate planar convex hull algorithm of Kirkpatrick and Seidel, and it has been referred to as the first successful implementation of the QuickHull algorithm on the GPU by Gao et al. in their 2012 paper on the 3D convex hull. Our motivation for work on modern many-core machines is the general belief of the engineering community that theory does not produce applicable results, and that theoretical researchers are not aware of the difficulties that arise while adapting algorithms for practical use. We concentrate on showing how the high degree of parallelism available on GPUs can be applied to problems that do not readily decompose into many independent tasks.

Gernot GEBHARD Static Timing Analysis Tool Validation in the Presence of Timing Anomalies (Advisor: Prof. Reinhard Wilhelm) Tues, 22 October 2013, 14:00h, building E1 7, R 001
The validation of the timing behavior of a safety-critical embedded software system requires both safe and precise worst-case execution time bounds for the tasks of that system. Such bounds need to be safe to ensure that each component of the software system performs its job in time. Furthermore, the execution time bounds are required to be precise to ensure the (provable) schedulability of the software system. When trying to achieve both safe and precise bounds, timing anomalies are one of the greatest challenges to overcome.
Almost every modern hardware architecture shows timing anomalies, which also greatly impact the analyzability of such architectures with respect to timing. Intuitively speaking, a timing anomaly is a counterintuitive behavior of a hardware architecture, where a "good" event (e.g., a cache hit) leads to an overall longer execution, whereas the corresponding "bad" event (in this case, a cache miss) leads to a globally shorter execution time. In the presence of such anomalies, the local worst case is not always a safe assumption in static timing analysis. To compute safe timing guarantees, any (static) timing analysis has to consider all possible executions. In this thesis we investigate the source of timing anomalies in modern architectures and study instances of timing anomalies found in rather simple hardware architectures. Furthermore we discuss the impact of timing anomalies on static timing analysis. Finally we provide means to validate the result of static timing analysis for such architectures through trace validation.

Aleksandar STUPAR Soundtrack Recommendation for Images (Advisor: Dr. Sebastian Michel) Fri, 4 October 2013, 10:00h, building E1 4, R 0.24
The drastic increase in the production of multimedia content has emphasized the research concerning its organization and retrieval. In this thesis, we address the problem of music retrieval when a set of images is given as the input query, i.e., the problem of soundtrack recommendation for images. To tackle this problem, we formulate the hypothesis that the knowledge appropriate for the task is contained in publicly available contemporary movies. Our approach, Picasso, employs similarity search techniques inside the image and music domains, harvesting movies to form a link between the domains. In addition to the proposed Picasso approach, we further investigate the effectiveness and efficiency of the task at hand and present a novel benchmark collection to evaluate Picasso and related approaches.

Rüdiger EHLERS Symmetric and Efficient Synthesis (Advisor: Prof. Bernd Finkbeiner) Wed, 2 October 2013, 15:00h, building E1 7, Room 001
Despite the many advantages of synthesis over the manual engineering of reactive systems, it is not yet a well-established part of today's system design flows. It is commonly agreed that the main reasons for this discrepancy are the lack of scalability of current synthesis techniques and the insufficient quality of the implementations computed. In this thesis, we tackle both of these problems for reactive synthesis. To improve the quality of synthesized systems, we analyse the problem of symmetric synthesis. In this alternative synthesis problem, the aim is to compute a solution that consists of multiple copies of the same process such that the overall system satisfies the specification. Such systems have no centralised control units, and are considered to be more robust and easier to maintain. We characterise undecidable and decidable cases of the problem, and provide a synthesis algorithm for rotation-symmetric architectures, which capture many practical cases. To improve scalability in synthesis, we start with a simple but scalable approach to reactive synthesis that has shown its principal applicability in the field, and extend its main idea both in terms of scope and usability. We enhance its expressivity in a way that allows the synthesis of robust systems, and remove its limitation to specifications of a very special form.
Both improvements yield theoretical insight into the synthesis problem: we characterise which specification classes can be supported in synthesis approaches that use parity games with a fixed number of colours as the underlying computation model, and examine the properties of universal very-weak automata, on which we base a synthesis workflow that combines ease of specification with a low complexity of the underlying game solving step. As a side result, we also obtain the first procedure to translate a formula in linear-time temporal logic (LTL) to a computation tree logic (CTL) formula with only universal quantifiers, whenever possible.

Iftikhar AHMAD Analysis of Algorithms for Online Uni-directional Conversion Problems (Advisor: Prof. Günter Schmidt) Mon, 30 September 2013, 15:00h, building B4 1, Fakultätssaal 0.17
In an online uni-directional conversion problem, an online player wants to convert an asset $D$ to a desired asset $Y$. The objective of the player is to obtain the maximum amount of the desired asset. Competitive analysis is used as a tool for the design and analysis of online algorithms for conversion problems. Although widely used, competitive analysis has its own set of drawbacks when the applicability of online algorithms in the real world is considered. In this work, we investigate online uni-directional conversion problems with the objective of suggesting measures to improve the applicability of online conversion algorithms in the real world. First, we study the competitive ratio as a coherent measure of risk and conclude that, as it satisfies all the desirable axioms of coherence, the competitive ratio can be used in practice. Secondly, we evaluate a selected set of online algorithms on real-world as well as bootstrap data to highlight the gap between theoretically guaranteed and experimentally achieved competitive ratios. The third aspect of the study deals with generating synthetic data that truly represents all possible scenarios, such as market crashes. We suggest the use of the Extreme Value Theory (EVT) approach. Using the EVT approach, we generate synthetic data and execute a selected set of non-preemptive uni-directional online algorithms on it. The fourth contribution of the thesis is the design and analysis of risk-aware reservation price algorithms for conversion problems. The proposed algorithms are flexible enough to accommodate the risk level of the online player while also guaranteeing a bounded worst-case competitive ratio. We evaluate our algorithms using the competitive analysis approach as well as by testing them on real-world data. The results will help to improve the applicability of online conversion algorithms in the real world. We conclude the work by discussing a number of open questions that will provide new directions for future research.
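For orientation, the classical risk-neutral reservation price policy from the online conversion literature can be sketched in a few lines. This is the textbook baseline that risk-aware algorithms of the kind described above generalize, not the thesis's own algorithm; it assumes prices are known in advance to stay within [m, M], in which case converting at the first price at or above sqrt(m*M) is known to achieve competitive ratio sqrt(M/m).

    import math

    def reservation_price_policy(prices, m, M):
        # Convert the whole amount at the first quoted price that reaches
        # the reservation price p* = sqrt(m*M); if no quote reaches it,
        # convert at the last offered price (forced conversion).
        p_star = math.sqrt(m * M)
        for p in prices:
            if p >= p_star:
                return p
        return prices[-1]

    # Example: prices in [1, 4], so p* = 2.0; the policy converts at 2.1.
    # reservation_price_policy([1.2, 1.5, 2.1, 1.8], m=1.0, M=4.0) -> 2.1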
Esteban LEÓN SOTO Multi-agent Communication for the Realization of Business Processes (Advisor: Prof. Jörg Siekmann) Fri, 20 September 2013, 17:15h, building D3 2, Vis.-Center -1.63
As Internet and information technologies expand further into daily business activities, new solutions and techniques are required to cope with the growing complexity. One area that has gained attention is the interoperability of systems and organizations, together with Service Oriented Architectures (SOA). Web Services have grown into a preferred technology in this area. Although these techniques have proved to solve problems of low-level integration of heterogeneous systems, there has been little advance at higher levels of integration, such as how to govern complex conversations between participants that are autonomous and cannot depend on some ruling or orchestrating system. Multi-agent technology has studied techniques for content-rich communication, negotiation, autonomous problem solving and conversation protocols. These techniques have solved some of the problems that emerge when integrating autonomous systems to perform complex business processes. The present research work intends to provide a solution for the realization of complex business processes between heterogeneous autonomous participants using multi-agent technology. We developed an integration of Web Services and agent-based technologies along with a model for creating conversation protocols that respect the autonomy of participants. A modeling tool has been developed to create conversation protocols in a modular and reusable manner. BDI-agent implementations that communicate over Web Services are automatically generated from these models.

Cristián MADRIGAL-MORA A model-driven approach for organizations in multi-agent systems
This thesis introduces a new model-driven approach to agent-oriented software engineering in which agent organizations not only play a crucial role, but are also represented at every abstraction level. In our methodology, multiagent systems are modeled at a platform-independent level and transformed into a platform-specific level, preserving the organizational structures. The approach has been refined over several years and has been used in two European Union projects.

Jochen MIROLL Scalable and Rate Adaptive Wireless Multimedia Multicast (Advisor: Prof. Thorsten Herfet) Wed, 18 September 2013, 12:00h, building E1 7, Room 4.07
The methods that are described in this talk enable efficient multimedia Internet protocol streaming over wireless digital communication systems to an arbitrary number of receivers by multicast. A fundamental difference compared to point-to-point connections between exactly two communicating stations lies in conveying information about successful packet reception at the receiver side, due to the multitude of receivers. This work considers time division multiple access systems, in which a single channel is used for data transmission and feedback. Therefore, the amount of time that should be spent on transmitting feedback is limited. Feedback about reception from the receiver(s) is necessary for efficient transmission. With respect to this, feedback for wireless multicast is evaluated and shown to be feasible. Aggregation of feedback in time is the mechanism proposed herein for physical layer bit rate adaptation. It is evaluated with respect to rate adaptation by the example of orthogonal frequency division multiplex based IEEE 802.11 wireless networks. In the proposed mechanisms, a constant amount of time is spent on multicast feedback, independent of the number of receivers (n). Therefore, multimedia data throughput may also remain independent of n. This may be taken for granted in the case of statically configured, single-purpose systems such as digital television. In the scope of this work are, however, multi-user and multi-purpose digital communication networks. Wireless LAN and LTE mobile networks are well-known examples. In such systems, it is of great importance to remain independent of the number of receivers in order to maintain service quality.
The proposed mechanisms were demonstrated with a consumer hardware prototype for digital live-TV re-distribution in the local wireless network.

Jens KERBER Of Assembling Small Sculptures and Disassembling Large Geometry Tues, 17 September 2013, 14:00h, building E1 4 (MPII), Room 0.19
This thesis describes the research results and contributions that have been achieved during the author's doctoral work. It is divided into two independent parts, each of which is devoted to a particular research aspect. The first part covers the true-to-detail creation of digital pieces of art, so-called relief sculptures, from given 3D models. The main goal is to limit the depth of the contained objects with respect to a certain perspective without compromising the initial three-dimensional impression. Here, the preservation of significant features and especially their sharpness is crucial. Therefore, it is necessary to overemphasize fine surface details to ensure their perceptibility in the more complanate relief. Our developments are aimed at improving the flexibility and user-friendliness of the generation process. The main focus is on providing real-time solutions with intuitive usability that make it possible to create precise, lifelike and aesthetic results. These goals are reached by a GPU implementation, the use of efficient filtering techniques, and the replacement of user-defined parameters by adaptive values. Our methods are capable of processing dynamic scenes and allow the generation of seamless artistic reliefs which can be composed of multiple elements. The second part addresses the analysis of repetitive structures, so-called symmetries, within very large data sets. The automatic recognition of components and their patterns is a complex correspondence problem which has numerous applications, ranging from information visualization through compression to automatic scene understanding. Recent algorithms reach their limits with a growing amount of data, since their runtimes rise quadratically. Our aim is to make even massive data sets manageable. Therefore, it is necessary to abstract features and to develop a suitable, low-dimensional descriptor which ensures an efficient, robust, and purposive search. A simple inspection of the proximity within the descriptor space helps to significantly reduce the number of necessary pairwise comparisons. Our method scales quasi-linearly and allows a rapid analysis of data sets which could not be handled by prior approaches because of their size.

Martin SUNKEL Statistical Part-based Models for Object Detection in Large 3D Scans (Advisor: Dr. Michael Wand)
3D scanning technology has matured to a point where the very large scale acquisition of high resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of the semantics of the acquired geometry. Consequently, scene understanding plays a key role for many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: a part-based chain-structured Markov model and a general part-based full correlation model.
Both models come with efficient detection schemes which allow for interactive run-times.

Kristina SCHERBAUM: Data Driven Analysis of Faces from Images
This talk proposes three new data-driven approaches to detect, analyze, or modify faces in images. All presented approaches are inspired by the use of prior knowledge and derive information about facial appearances from pre-collected databases of images or 3D face models. First, we show an approach that extends a widely used monocular face detector by an additional classifier that evaluates disparity maps of a passive stereo camera. The algorithm runs in real time and significantly reduces the number of false positives compared to the monocular approach. Next, with a many-core implementation of the detector, we train view-dependent face detectors based on tailored views which guarantee that the statistical variability is fully covered. These detectors are superior to the state of the art on a challenging dataset and can be trained in an automated procedure. Finally, we present a model describing the relation of facial appearance and makeup. The approach extracts makeup from before/after images of faces and makes it possible to modify faces in images. Applications such as machine-suggested makeup can improve perceived attractiveness, as shown in a perceptual study. In summary, the presented methods help improve the outcome of face detection algorithms, and ease and automate both their training procedures and the modification of faces in images. Moreover, their data-driven nature enables new and powerful applications arising from the use of prior knowledge and statistical analyses.

Miguel GRANADOS: Advanced Editing Methods for Image and Video Sequences (Advisor: Prof. Christian Theobalt)
Tues, 10 September 2013, 9:00h, building E1 4 (MPII), Room 0.19
In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: first, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models of the content of the scene, e.g., its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results that surpass the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that closely approximate the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.

Matthias BÖHMER: Understanding and Supporting Mobile Application Usage (Advisor: Prof. Antonio Krüger)
Fri, 6 September 2013, 14:00h, building D3 2, Room -2.17 (Reuse)
In recent years mobile phones have evolved significantly. While the very first cellular phones only provided functionality for conducting phone calls, smartphones nowadays provide a rich variety of functionalities. Additional hardware capabilities like new sensors (e.g., for location) and touch screens as new input devices gave rise to new use cases for mobile phones, such as navigation support, taking pictures, or making payments. Mobile phones not only evolved with regard to technology; they also became ubiquitous and pervasive in people's daily lives by becoming capable of supporting them in various tasks. Eventually, the advent of mobile application stores for the distribution of mobile software enabled end-users themselves to functionally customize their mobile phones for their personal purposes and needs. So far, little is known about how people make use of the large variety of applications that are available, and accordingly little support exists to help end-users make effective and efficient use of their smartphones given the huge number of available applications. This dissertation is motivated by the evolution of mobile phones from mere communication devices to multi-functional tool sets, and by the challenges that have arisen as a result. The goal of this thesis is to contribute systems that support the use of mobile applications and to ground these systems' designs in an understanding of user behavior gained through empirical observations. The contribution of this dissertation is twofold: First, this work aims to understand how people make use of, organize, discover, and multitask between the various functionalities that are available for their smartphones. Findings are based on observations of user behavior in studies conducted in the wild. Second, this work aims to assist people in leveraging their smartphones and the available functionality in a more effective and efficient way. This results in tools and improved user interfaces for end-users. Given that the number of available smartphone applications is rapidly increasing, it is crucial to understand how people use such applications, so that everyday smartphone use can be supported with better designs for smartphone user interfaces.

Avishek ANAND: Indexing Methods for Web Archives (Advisor: Dr. Klaus Berberich)
Fri, 6 September 2013, 9:00h, building E1 4 (MPII), Room 0.24
There have been numerous recent efforts to digitize previously published content and to preserve born-digital content, leading to the widespread growth of large text repositories. Web archives are one such class of continuously growing text collections, containing versions of documents that span long time periods. Web archives present many opportunities for historical, cultural, and political analyses; consequently, there is a growing need for tools which can efficiently access and search them. In this work, we are interested in indexing methods for supporting text-search workloads over web archives, like time-travel queries and phrase queries. To this end we make the following contributions:
• Time-travel queries are keyword queries with a temporal predicate, e.g., "mpii saarland" @ [06/2009], which return versions of documents in the past. We introduce a novel index organization strategy, called index sharding, for efficiently supporting time-travel queries without incurring additional index-size blowup.
We also propose index-maintenance approaches which scale to such continuously growing collections.
• We develop query-optimization techniques for time-travel queries, called partition selection, which maximize recall at any given query-execution stage.
• We propose indexing methods to support phrase queries, e.g., "to be or not to be that is the question". We index multi-word sequences and devise novel query-optimization methods over the indexed sequences to efficiently answer phrase queries.
We demonstrate the superior performance of our approaches over existing methods by extensive experimentation on real-world web archives.

Matthias LANG: Foundations of Realistic Rendering – A Mathematical Approach (Advisor: Prof. Philipp Slusallek)
Wed, 28 August 2013, 15:00h, building D3 2, DFKI, Visualisierungszentrum (-1.63, NB)
New, more efficient, and more elegant algorithms for computing at least approximate solutions to the light transport equation and its different variants can be developed more easily given a deeper understanding of that equation. Since the problems of realistic rendering are deeply rooted in various mathematical disciplines, a complete understanding of the global illumination problem requires knowledge of several areas of mathematics. Our objective is to formulate the global illumination problem in a mathematically rigorous way, based on principles of functional analysis and the theories of integral equations, measure, integration, and probability. Additionally, we try to show how all of these mathematical disciplines interact within realistic rendering, and to present this in a comprehensible manner, especially for students and other interested readers.

Bilyana TANEVA: Automatic Population of Knowledge Bases with Multimodal Data about Named Entities
Mon, 12 August 2013, 14:00h, building E1 4 (MPI-Inf), Room 024
Knowledge bases are of great importance for Web search, recommendations, and many information retrieval tasks. However, maintaining them for less popular entities is often a bottleneck. Typically, such entities have limited textual coverage and only a few ontological facts. Moreover, these entities are not well populated with multimodal data, such as images, videos, or audio recordings. The goals of this thesis are (1) to populate a given knowledge base with multimodal data about entities, such as images or audio recordings, and (2) to ease the task of maintaining and expanding the textual knowledge about a given entity by recommending valuable text excerpts to the contributors of knowledge bases. The thesis makes three main contributions. The first two contributions concentrate on finding images of named entities with high precision, high recall, and high visual diversity. Our main focus is on less popular entities, for which image search engines fail to retrieve good results. Our methods utilize background knowledge about the entity, such as ontological facts or a short description, and a visual-based image similarity to rank and diversify a set of candidate images. Our third contribution is an approach for extracting text contents related to a given entity. It leverages a language-model-based similarity between a short description of the entity and the text sources, and solves a budget-constrained optimization problem without any assumptions on the text structure. Moreover, our approach is also able to reliably extract entity-related audio excerpts from news podcasts.
We derive the time boundaries from the usually very noisy audio transcriptions.

Matthias BERG: Formal Verification of Cryptographic Security Proofs (Advisor: Prof. Michael Backes)
Fri, 9 August 2013, 15:00h, building E1 1, Room 407
Verifying cryptographic security proofs manually is inherently tedious and error-prone. The game-playing technique for cryptographic proofs advocates a modular proof design in which cryptographic programs called games are transformed stepwise, such that each step can be analyzed individually. This code-based approach has made the formal verification of such proofs using mechanized tools feasible. In the first part of this dissertation we present Verypto: a framework to formally verify game-based cryptographic security proofs in a machine-assisted manner. Verypto has been implemented in the Isabelle proof assistant and provides a formal language to specify the constructs occurring in typical cryptographic games, including probabilistic behavior, the usage of oracles, and polynomial-time programs. We have verified the correctness of several game transformations and demonstrate their applicability by verifying that the composition of 1-1 one-way functions is one-way, and by verifying the IND-CPA security of the ElGamal encryption scheme. In a related project, Barthe et al. developed the EasyCrypt toolset, which employs techniques from automated program verification to validate game transformations. In the second part of this dissertation we use EasyCrypt to verify the security of the Merkle-Damgård construction, a general design principle underlying many hash functions. In particular, we verify its collision resistance and prove that it is indifferentiable from a random oracle.

Ralf OSBILD: General Analysis Tool Box for Controlled Perturbation Algorithms and Complexity and Computation of Θ-Guarded Regions
Fri, 2 August 2013, 14:00h, building E1 4 (MPI-Inf), Room 024
This dissertation in the field of computational geometry deals with the following two problems. 1. The implementation of reliable and efficient geometric algorithms is a challenging task. Controlled perturbation combines the speed of floating-point arithmetic with a mechanism that guarantees reliability. We present a general "tool box" for analyzing controlled perturbation algorithms, which is divided into independent components. We present three alternative methods for deriving the most important bounds. Furthermore, we have incorporated into the theory all predicates based on polynomials and rational functions, as well as object-preserving perturbations. Beyond that, the tool box is designed so that it reflects the actual behavior of the algorithm under investigation without simplifying assumptions. 2. Illumination and guarding problems constitute a broad area of computational and combinatorial geometry. Here we contribute the complexity and computation of $\Theta$-guarded regions. These regions are a generalization of the convex hull and are related to $\alpha$-hulls and $\Theta$-maxima. The difficulty in studying $\Theta$-guarded regions is the dependence of their shape and complexity on $\Theta$. For all angles $\Theta$, we prove fundamental properties of the region, derive lower and upper bounds on its worst-case complexity, and present an algorithm to compute the region.
Jeremias RÖßLER: From Software Failure to Explanation (Advisor: Prof. Andreas Zeller)
Fri, 12 July 2013, 10:00h, building E1 1, Room 407
"Why does my program crash?" This ever-recurring question drives the developer, both when trying to reconstruct a failure that happened in the field and during the analysis and debugging of the test case that captures the failure. This is the question this thesis attempts to answer. For that I will present two approaches which, when combined, start off with only a dump of the memory at the moment of the crash (a core dump) and eventually give a full explanation of the failure in terms of the important runtime features of the program, such as critical branches, state predicates, or any other execution aspect that is deemed helpful for understanding the underlying problem. The first approach (called RECORE) takes a core dump of a crash and, by means of search-based test case generation, comes up with a small, self-contained, and easy-to-understand unit test that is similar to the test as it would be attached to a bug report, and that reproduces the failure. This test case can serve as a starting point for analysis and manual debugging. Our evaluation shows that in five out of seven real cases, the resulting test captures the essence of the failure. But this failing test case can also serve as the starting point for the second approach (called BUGEX). BUGEX is a universal debugging framework that applies the scientific method and can be implemented for arbitrary runtime features (called facts). First, it observes those facts during the execution of the failing test case. Using state-of-the-art statistical debugging, these facts are then correlated to the failure, forming a hypothesis. Then it performs experiments: it generates additional executions to challenge these facts, and from these additional observations refines the hypothesis. The result is a correlation of critical execution aspects to the failure with unprecedented accuracy, which instantaneously points the developer to the problem. This general debugging framework can be implemented for any runtime aspect; for evaluation purposes I implemented it for branches and state predicates. The evaluation shows that in six out of seven real cases, the resulting facts pinpoint the failure. Both approaches are independent from one another, and each automates a tedious and error-prone task. When combined, they automate a large part of the debugging process, where the remaining manual task, fixing the defect, can never be fully automated.

Kaustubh PATIL: Genome signature based sequence comparison for taxonomic assignment and tree inference (Advisor: Prof. Alice McHardy)
Wed, 29 May 2013, 15:00h, building E1 5 (MPI-SWS), HS 002
In this work we consider the use of the genome signature for two important bioinformatics problems: the taxonomic assignment of metagenome sequences and tree inference from whole genomes. We look at those problems from a sequence-comparison point of view and propose machine-learning-based methods as solutions. For the first problem, we propose a novel method based on structural support vector machines that can directly predict paths in a tree implied by evolutionary relationships between taxa. For the second problem, we propose a distance metric learning method. Based on the assumption that different oligonucleotide weights can be more informative for different groups of prokaryotes, our method learns group-specific distance metrics.
Looking ahead, we expect that, for the addressed problems, the work of this thesis will complement, and in some cases even outperform, alignment-based sequence comparison at a considerably reduced computational cost, allowing it to keep up with advances in sequencing technologies.

Sabine SCHMALTZ: Towards the Pervasive Formal Verification of Multi-Core Operating Systems and Hypervisors Implemented in C (Advisor: Prof. Wolfgang Paul)
Fri, 10 May 2013, 14:00h, building E1 7, Room 001
The Verisoft XT project had the goal of verifying the correctness of the Microsoft Hyper-V hypervisor, and achieved strong code-verification results using the concurrent C verification tool VCC, developed by our project partners during the project. A sound mathematical theory to support the code verification, however, was not established. To remedy this shortcoming, we sketch a model stack for a simplified multi-core architecture based on a simplified MIPS model for system programmers, and illustrate at a high level of abstraction how to obtain a simulation between neighboring models. We survey the current state of theory development and outline missing links and parts. As part of the dissertation, a hardware model for our architecture is formalized at a detailed level of abstraction of the model stack. In addition, we provide operational semantics for a quite simple intermediate language for C, as well as an extension of this semantics with specification (ghost) state and code, which can serve as a basis for arguing the soundness of VCC.

Kim HERZIG: Mining and Untangling Change Genealogies
Mon, 6 May 2013, 16:00h, building E1 1, Room 407
Developers change source code to add new functionality, fix bugs, or refactor their code. Many of these changes have an immediate impact on quality or stability. However, some impact of changes may become evident only in the long term. This thesis makes use of change genealogy dependency graphs, which model dependencies between code changes and capture how earlier changes enable and cause later ones. Using change genealogies, it is possible to: (a) apply formal methods like model checking on version archives to reveal temporal process patterns. Such patterns encode key features of the software process and can be validated automatically: in an evaluation of four open source histories, our prototype would recommend pending activities with a precision of 60-72%. (b) Classify the purpose of code changes. Analyzing the change dependencies on change genealogies shows that change genealogy network metrics can be used to automatically separate bug-fixing from feature-implementing code changes. (c) Build competitive defect prediction models. Defect prediction models based on change genealogy network metrics show competitive prediction accuracy when compared to state-of-the-art defect prediction models. Like many other approaches that mine version archives, change genealogies and their applications rely on two basic assumptions: code changes are considered atomic, and bug reports are considered to refer to corrective maintenance tasks. In a manual examination of more than 7,000 issue reports and code changes from bug databases and version control systems of open-source projects, we found 34% of all issue reports to be misclassified, and up to 15% of all applied issue fixes to consist of multiple combined code changes serving multiple developer maintenance tasks. This introduces bias into bug prediction models, confusing bugs and features.
To partially solve these issues and to measure their impact, we present an approach that untangles such combined changes after the fact, with a mean success rate of 58-90%.

Mikhail KOVALEV: TLB Virtualization in the Context of Hypervisor Verification
Wed, 27 Mar 2013, 15:00h, building E1 7, Room 0.01
In this thesis we address the challenges of hypervisor verification for multicore processors. As a first contribution, we unite different pieces of hypervisor verification theory into a single theory comprising the stack of highly nontrivial computational models used. We consider multicore hypervisors for the x86-64 architecture written in C. To make code verification in a C verifier possible, we define a reduced hardware model and show that, under certain safety conditions, it simulates the full model. We introduce an extension of the C semantics which takes into consideration possible MMU and guest interaction with the memory of a program. We argue that the extended C semantics simulates the hardware machine which executes compiled hypervisor code, given that the compiler is correct. The second contribution of the thesis is the formal verification of a software TLB and memory virtualization approach, called the SPT algorithm. Efficient TLB virtualization is one of the trickiest parts of building correct hypervisors. An SPT algorithm maintains dedicated sets of ''shadow'' page tables, ensuring memory separation and a correct TLB abstraction for every guest. We use our extended C semantics to specify correctness criteria for TLB virtualization and to verify a simple SPT algorithm written in C. The code of the algorithm is formally verified in Microsoft's VCC automatic verifier, which is ideally suited for proofs performed on top of our semantic stack.

Oana CIOBOTARU: Rational Cryptography: Novel Constructions, Automated Verification and Unified Definitions
Tues, 26 Mar 2013, 14:00h, building E1 7, Room 0.01
Rational cryptography has recently emerged as a very promising field of research, combining notions and techniques from cryptography and game theory and offering an alternative to the rather inflexible traditional cryptographic model. In contrast to the classical view of cryptography, where protocol participants are considered either honest or arbitrarily malicious, rational cryptography models participants as rational players who try to maximize their benefit and thus deviate from the protocol only if they gain an advantage by doing so. The main research goals of rational cryptography are the design of more efficient protocols when players adhere to a rational model, the design and implementation of automated proofs for rational security notions, and the study of the intrinsic connections between game-theoretic and cryptographic notions. In this thesis, we address all of these issues. First, we present the mathematical model and the design of a new rational file sharing protocol, which we call RatFish. Next, we develop a general method for the automated verification of rational cryptographic protocols, and we show how to apply our technique in order to automatically derive the rational security property of RatFish. Finally, we study the intrinsic connections between game theory and cryptography by defining a new game-theoretic notion, which we call game universal implementation, and by showing its equivalence with the notion of weak stand-alone security.
Stefan WARWAS: A Model-driven Framework for Engineering Multiagent Systems
Mon, 11 Mar 2013, 14:00h, building D3 4 (access via D3 2), VIS-Room -1.63
Since the invention of computer systems, the level of abstraction of software languages has steadily increased, from op-codes to object-oriented languages. Agent technology promises to embody high-level concepts that go beyond those of object-oriented approaches. This dissertation presents the Bochica framework for Agent-Oriented Software Engineering (AOSE). The framework's task in the software development process is (i) to capture the design decisions for a system under consideration at a platform-independent level of abstraction, and (ii) to project this design onto a target platform. Bochica goes beyond the state of the art in AOSE, as it combines the benefits of a platform-independent approach with the possibility to address concepts of custom application domains and execution environments. Several extension interfaces are specified to enable the customization of the underlying modeling language to the engineer's needs. Bochica is accompanied by an iterative adaptation process to gradually incorporate extensions. Conceptual mappings for projecting Bochica models onto executable code are specified. In order to enable Bochica to model agents that inhabit semantically enhanced virtual worlds, a corresponding extension model is proposed. Finally, a model-driven reverse-engineering approach for lifting the underlying design of an already implemented Multiagent System (MAS) to the platform-independent layer is introduced. The framework has been successfully evaluated for designing intelligent agents that operate a virtual production line, as well as for extracting the underlying design of an already implemented MAS. The evaluation results show that the Bochica approach to AOSE contributes to overcoming the gap between design and code.

Ya-Fang WANG: Methods and Tools for Temporal Knowledge Harvesting
Mon, 25 Feb 2013, 15:00h, building E1 4 (MPI-Inf), Rm 0.24
To extend the traditional knowledge base with a temporal dimension, this thesis offers methods and tools for harvesting temporal facts from both semi-structured and textual sources. Our contributions are briefly summarized as follows.
1. Timely YAGO: A temporal knowledge base called Timely YAGO (T-YAGO), which extends YAGO with temporal attributes, is built. We define a simple RDF-style data model to support temporal knowledge.
2. PRAVDA: To harvest as many temporal facts from free text as possible, we develop the PRAVDA system. It utilizes a graph-based semi-supervised learning algorithm to extract fact observations, which are further cleaned up by an Integer Linear Programming-based constraint solver. We also attempt to harvest spatio-temporal facts to track a person's trajectory.
3. PRAVDA-live: A user-centric interactive knowledge harvesting system, called PRAVDA-live, is developed for extracting facts from natural-language free text. It is built on the framework of PRAVDA. It supports the extraction of facts of user-defined relations from ad hoc selected text documents, as well as ready-to-use RDF exports.
4. T-URDF: We present a simple and efficient representation model for time-dependent uncertainty, in combination with first-order inference rules and recursive queries over RDF-like knowledge bases. We adopt the common possible-worlds semantics known from probabilistic databases and extend it towards histogram-like confidence distributions that capture the validity of facts across time.
All of these components are fully implemented systems, which together form an integrative architecture. PRAVDA and PRAVDA-live aim at gathering new facts (particularly temporal facts), and T-URDF then reconciles them. Finally, these facts are stored in a (temporal) knowledge base, called T-YAGO. A SPARQL-like, time-aware query language, together with a visualization tool, is designed for T-YAGO. Temporal knowledge can also be applied to document summarization.

Christoph CULLMANN: Cache Persistence Analysis for Embedded Real-Time Systems
Thurs, 14 Feb 2013, 15:00h, building E1 7, Rm 001
To compute a worst-case execution time (WCET) estimate for a program running on a safety-critical hard real-time system, the effects of the architecture of the underlying hardware have to be modeled. The classical cache analysis distinguishes three categories of memory references to cached memory: always-hit, always-miss, and not-classified. The cache persistence analysis tries to classify memory references as persistent, thereby improving on the classical cache analysis by limiting the number of misses for not-classified memory references. We present several new cache persistence analyses based on abstract interpretation. Two are based on the concept of conflict counting, one on the may cache analysis, and one combines both concepts. All analyses also fix a correctness issue of the original cache persistence analysis by Ferdinand and Wilhelm. For non-fully-timing-compositional architectures, using the persistence information is not straightforward. A novel path analysis enables the use of persistence information also for state-of-the-art architectures that exhibit timing anomalies or domino effects. The new analyses are practically evaluated within the industrially used WCET analyzer aiT on a series of standard benchmark programs and a series of real avionics examples.

Jens HAUPERT: DOMeMan: Repräsentation, Verwaltung und Nutzung von digitalen Objektgedächtnissen (Representation, Management, and Use of Digital Object Memories)
Wed, 16 Jan 2013, 16:15h, building D3 2 (DFKI), Rm -2.17 (Reuse)
This thesis addresses the research question of how an infrastructure for digital object memories should be designed. The primary goal of this thesis is to identify and develop the components and processes of an architecture concept particularly suited to representing, managing, and using digital object memories. In order to foster acceptance and deployment of this novel technology, the envisioned infrastructure has to include tools for the integration of new systems and for migration from existing systems. Special requirements for object memories result from the heterogeneity of data in so-called open-loop scenarios. On the one hand, the infrastructure has to be flexible enough to handle different data types; on the other hand, simple and structured data access is required. Depending on the application scenario, the latter needs to be complemented with concepts for rights- and role-based access and version control. First, this thesis provides a data model for structuring object memory data by means of metadata, which enables cross-domain data exchange in heterogeneous scenarios. Then, a software architecture concept is introduced, which provides means of storing and accessing memory data and integrates an activity component into object memories. Finally, the concept is completed by a toolset that enables developers to migrate existing datasets, to create new structures based on semantic information, and to support user interaction with this data by means of application-specific visualizations.
Human–dog relationships during the COVID-19 pandemic: booming dog adoption during social isolation
Liat Morgan, Alexandra Protopopova, Rune Isak Dupont Birkler, Beata Itin-Shwartz, Gila Abells Sutton, Alexandra Gamliel, Boris Yakobson & Tal Raz
Humanities and Social Sciences Communications, volume 7, Article number: 155 (2020)

The recent COVID-19 pandemic led to uncertainty and severe health and economic concerns. Previous studies have indicated that owning a companion animal, such as a dog or a cat, has benefits for mental health. Interactions with animals may help with depression and anxiety, particularly under stress-prone conditions. Human–animal interactions may even improve peer-to-peer social relationships, as well as enhance feelings of respect, trust, and empathy between people. Interestingly, it has also been shown that stress and poor well-being of dog owners negatively affect the well-being of their companion animals. However, a dramatic increase in dog abandonment could potentially occur due to COVID-19-related health, economic, and social stresses, as well as due to the inconclusive reports of companion animals being potential COVID-19 carriers. Such a scenario could lead to high costs and considerable public health risks. Accordingly, we hypothesized that the COVID-19 pandemic, and the related social isolation, might lead to dramatic changes in human–dog bidirectional relationships. Using unique prospective and retrospective datasets, our objectives were to investigate how people perceived and acted during the COVID-19 pandemic social isolation with regard to dog adoption and abandonment, and to examine the bidirectional relationship between the well-being of dog owners and that of their dogs. Overall, according to our analysis, as the social isolation became more stringent during the pandemic, the interest in dog adoption and the adoption rate increased significantly, while abandonment did not change. Moreover, there was a clear association between an individual's impaired quality of life and their perception of a parallel deterioration in the quality of life of their dogs, as well as reports of new behavioral problems. As humans and dogs are both social animals, these findings suggest potential benefits of human–dog relationships during the COVID-19 pandemic, in accordance with the One Welfare approach, which implies that there is a bidirectional connection between the welfare and health of humans and non-human animals. As our climate continues to change, more disasters, including pandemics, will likely occur, highlighting the importance of research into crisis-driven changes in human–animal relationships.

The virus SARS-CoV-2 emerged in December 2019, in Wuhan, China. This previously unknown respiratory disease developed into the pandemic termed COVID-19, as declared by the World Health Organization in March 2020 (Bojdani et al., 2020). One of the main approaches worldwide for combating the disease is social isolation and distancing, at least until a protective vaccine is available (Koo et al., 2020; Lewnard and Lo, 2020; Bavel et al., 2020). Social isolation may prevent the spread of the disease, but it may also lead to other concerns. One of the greatest concerns regarding the influence of social isolation is its psychological effect on humans.
Extended social isolation may lead to a significant decrease in quality of life and well-being, and to high levels of stress, in both infected and non-infected populations (Xiao et al., 2020; Bavel et al., 2020). Social isolation adds a stressor to an already highly stressful world environment and to people's extensive fear of the novel COVID-19 pandemic threat (Bavel et al., 2020; LeDoux, 2012; Mobbs et al., 2015). In addition, social distancing included full lockdowns in many countries, including Israel, with dramatic economic effects (Anser et al., 2020; Sangar et al., 2019). Adverse local and global economic impacts, in addition to drastic personal income reduction, may be detrimental to people's psychological health and general well-being (Xiao et al., 2020). Interestingly, the mental health benefits of owning a companion animal, such as a dog or a cat, have been shown by several scientific studies (Serpell, 1991; Beetz et al., 2012; Powell et al., 2019). The majority of studies indicate that interactions with animals may help with depression, anxiety, and stress, particularly under stress-prone conditions (Beetz et al., 2012). On the one hand, companion animals provide companionship, improve mood, and may ease loneliness; human–animal interactions may even improve peer-to-peer social relationships, as well as enhance feelings of respect, trust, and empathy between people (Powell et al., 2018; Beetz et al., 2012; Powell et al., 2019). On the other hand, it has also been shown that stress and poor well-being of owners negatively affect the stress and well-being of their companion animals (Buttner et al., 2015; Sumegi et al., 2014; Ryan et al., 2019). For example, there is some indication that the stress of the owner could influence their dog's cognitive ability (Sumegi et al., 2014). Moreover, changes in the attention of owners to their dogs may affect the behavior of the dogs (Kaminski et al., 2009; Payne et al., 2016). Therefore, we hypothesized that the COVID-19 pandemic might lead to dramatic changes in human–dog bidirectional relationships. On the one hand, owning a dog may assist the owner in coping with the stressful world situation, and therefore more people may decide to adopt a dog during this pandemic. On the other hand, behavioral problems in dogs have been reported to be one of the main reasons for the abandonment of dogs to shelters (Patronek et al., 1996; Salman et al., 2000); if changes in the lives of owners occurred during the COVID-19 pandemic and, as shown under other circumstances (Sumegi et al., 2014), behavioral problems developed in their dogs, then this might increase the risk of dog relinquishment. Another potential risk factor for dog abandonment and relinquishment during the COVID-19 pandemic was dogs' suspected epidemiological role in the spread of SARS-CoV-2. There was growing worldwide concern that companion animals, specifically dogs and cats, could transmit the disease to humans (Goumenou et al., 2020; Parry, 2020; Leroy et al., 2020). Although the anecdotal reports were inconclusive, they could have led to an increase in the number of dogs relinquished by their owners. Thus, overall, the inconclusive reports of companion animals being potential carriers of the COVID-19 virus, the economic crisis, and the general stress and panic during this pandemic could potentially cause a dramatic increase in dog abandonment numbers.
Since such a scenario might incur high costs and present considerable risk to public health, it should be explored. Relinquishment and abandonment of companion animals is a global problem. It is estimated that millions of pets are abandoned each year (Fatjo et al., 2015), even without a pandemic in the background. It results in increasing numbers of free-roaming animals, overcrowded animal shelters, and impaired animal welfare, and it carries high costs for taxpayers (Fatjo et al., 2015). Moreover, it is a severe public health issue due to the potential transmission of zoonotic diseases (such as rabies) and attacks on people (Carter, 1990; Burgos-Caceres, 2011). All of these threats also carry considerable economic consequences, which affect national and local governments, humane organizations, and individuals (Carter, 1990). In 2012, an online, searchable database of animals that need homes in Israel was established by the first author (http://Yad4.co.il). The first and only project of its kind in Israel, Yad4 serves as a national database for dog adoption, as it includes the vast majority of abandoned dogs that need homes throughout the country. As such, the database provides both an understanding of the current landscape of dog abandonment and adoption at any given moment and a unique look into the longitudinal relationships of dogs and people, as the same dogs may be tracked across time, multiple homes, and shelter stays. The Yad4 initiative aims to rescue abandoned animals in Israel by increasing adoption rates, reducing the extent of dog euthanasia, and shortening the length of stay at the shelters until adoption; it is non-profit. The website offers a user-friendly search engine for potential adopters to find available dogs from organizations and municipal shelters across the country of Israel. The information is uploaded and updated by animal welfare organizations and municipal veterinarians, typically as soon as they have the dog in their possession. As of 2020, 72 animal welfare organizations and municipal shelters are registered and active on the website, each managing its own pool of adoptable pets independently, with its own online account. During the COVID-19 pandemic, the website operated as usual, although initially there was concern about massive abandonment and a decrease in adoptions. In order to control the pandemic, gradual social restrictions were initiated during March 2020 in Israel, and in April a total lockdown was implemented for a full month by the Israeli government, as marked on the timeline in Fig. 1. During this period, walking the dog and veterinary care were exempt from the lockdown restrictions, as were dog adoptions from animal welfare organizations and municipal shelters. Therefore, although people were not allowed to go beyond a 100 m radius from their homes, dog adoption and dog walking were permitted throughout these periods.

Fig. 1: Timeline of the COVID-19 pandemic in Israel. The progressively darker colors represent the various periods analyzed in this study (x-axis): before the COVID-19 outbreak in China (years 2016–2019; light gray); from the initial outbreak in China until the first diagnosed patient in Israel (dark gray); during the outbreak in Israel, from the diagnosis of the first COVID-19 patient until the lockdown declared by the Israeli government (light brown); during the full lockdown for a month (brown); and the gradual opening in May (gray, on the right side of the figure).
The daily number of newly diagnosed COVID-19 patients in Israel is represented as red dots.

The objectives of this study were to investigate: (1) how the COVID-19 pandemic affected adoption and abandonment of dogs at shelters, and the public's general interest in adopting a dog; (2) the association between the quality of life of owners and their dogs during the pandemic; as well as (3) the effect of the pandemic on the development of new behavioral problems and on the relinquishment rate of dogs by their owners. This study focused on a new aspect of the COVID-19 pandemic by investigating the human–dog relationship during this crisis. Dog adoptions and abandonment, as well as the association between the well-being of the owners and their perceptions of the quality of life of their dogs, were examined. Overall, in contrast to some of the initial concerns, all dog adoption measures significantly improved as the social restrictions became stricter. Furthermore, there was a clear association between an individual's quality of life and their perceptions of their dog's quality of life and behavior, as well as the probability of their relinquishing their pet.

Changes in dog adoption and abandonment
The database of the Yad4 website was analyzed in order to investigate dog abandonment and adoptions under the growing pressure of the COVID-19 pandemic. Most abandoned dogs offered for adoption in Israel are published on the Yad4 website, which covers most animal welfare organizations and municipal dog shelters. Therefore, the dogs uploaded to the website on a daily basis represent the abandoned dog population, which consists mainly of dogs relinquished by their owners. Overall, according to our analysis, as the social restrictions became stricter during the COVID-19 pandemic in Israel, the number of potential adopters (people looking to adopt a dog) and the dog adoption rate increased significantly (Fig. 2), while dog abandonment did not change. Multiple linear regression analyses of Yad4 records from January 2016 to May 2020 revealed that the main periods during the development of the pandemic in Israel were significantly associated with dog adoption measures, while the abandonment rate did not change (Fig. 2; Supplementary Table S1). The number of dogs uploaded to the website, representing most of the abandoned dogs in Israel, did not change significantly over the years, including during the COVID-19 pandemic (Fig. 2a–c). In contrast, adoption measures were significantly affected by the different periods (Fig. 2d–i), particularly after the first COVID-19 patient in Israel was diagnosed, and to an even greater extent during the social lockdown. Between the diagnosis of the first patient in Israel and the full lockdown of the country, the average number of adoption requests submitted online was 31.1 ± 1.9 (mean ± SEM) requests per day; during the total lockdown, the average was 111.3 ± 4.1 requests per day; and during the gradual opening in May, 73 ± 4.6 adoption requests were submitted per day. By comparison, before the COVID-19 outbreak in China, the average daily number of dog adoption requests was only 25.7 ± 4.1 requests per day.
Linear regression analysis revealed that, after controlling for the effects of month, year, and governmental initiatives for the encouragement of responsible dog ownership between 2018 and 2019, the increases in the number of adoption requests during the outbreak in Israel and during the full lockdown were significantly higher than in the period before the COVID-19 outbreak in China (P < 0.05; Fig. 2d–f; Supplementary Table S1). Accordingly, the average number of adopted dogs increased significantly as early as the outbreak in China, as well as during the outbreak in Israel and the full lockdown, as compared to before the pandemic (P < 0.05; Fig. 2g–i; Supplementary Table S1). Immediately after the outbreak in China, the average daily number of adopted dogs was 17.3 ± 2.2 dogs per day; during the outbreak in Israel it was 22.8 ± 2.1; during the total lockdown it was 26.1 ± 2.2; and after the gradual opening it was 14.7 ± 1.1, similar to the period before the COVID-19 outbreak in China (14.1 ± 0.3 adopted dogs per day). Furthermore, as compared to the years prior to the COVID-19 pandemic, the length of stay (LOS) of a dog at the shelter, calculated as the interval from the time the dog was uploaded to the Yad4 website until it was marked by the organization as adopted, was significantly shorter following the media reports of the COVID-19 outbreak in China and thereafter, with the shortest LOS (10.1 ± 0.5 days) during the full lockdown. Potential effects such as month, year, and governmental initiatives were controlled for in the linear regression models (Supplementary Fig. S1; Supplementary Table S1; P < 0.05).

Fig. 2: Dog adoption and abandonment measures during the COVID-19 outbreak in Israel. Each row represents data for a different variable: upper row (panels a–c), number of abandoned dogs (marked in red); middle row (panels d–f), number of adoption requests made by potential owners (marked in blue); lower row (panels g–i), number of adopted dogs (marked in green). Daily data are presented in the first and second columns; each dot represents the daily number for each parameter, to demonstrate trends over time. In the left column (panels a, d, g), data are presented from 2016 until May 2020. In the middle column (panels b, e, h), data are presented zoomed in, from November 2019 to May 2020. Periods related to the COVID-19 pandemic are separated by colors, as detailed in Fig. 1. In the right column (panels c, f, i), the results of multivariate linear regression models are presented. In these models, the predictors were the different time periods: from the outbreak in China to the outbreak in Israel, the developments in Israel until the full lockdown, the full lockdown, and the gradual opening; each period was compared to the period prior to the COVID-19 pandemic (from 2016 until the outbreak in China, represented by the horizontal dotted line), controlled for year, month, and governmental initiatives for dog adoption in 2019. The data are presented as coefficients (large dots) and their 95% confidence intervals (bars); P < 0.05.

Another option available to the public on the Yad4 website was to submit a request to serve as a foster family, as an alternative to adoption. Usually, the demand for foster families among the organizations is very high, but the number of available foster families is low; therefore, typically, there are no available foster families, since the organizations use them all.
During the pandemic period, the number of available foster families exceeded the demand. Accordingly, from the reports about the outbreak in China until the end of the lockdown in Israel, as well as during the gradual opening, the number of available foster families increased significantly. For example, as described in Fig. 3a, b, by the end of April 2019 there were no available foster families on the Yad4 website, since they were all occupied and used by the organizations; in contrast, at the time of the outbreak in China, 226 foster families were available but had not received a dog to foster, and by the end of April 2020, there were 844 available foster families.

Fig. 3: Online users' visits to the Israeli Yad4 website, and worldwide Google searches for adoptable dogs, before and during the COVID-19 pandemic. a The daily numbers of visitors on Yad4.co.il, the Israeli adoption search engine, from January 2016 to May 2020. b Zoom-in on the same data as in panel a during the COVID-19 pandemic, from November 2019 to May 2020. c Results of the linear regression model for Yad4 online visits in each period during the COVID-19 pandemic, as compared to before the pandemic. In these models, the predictors were the different periods: from the outbreak in China to the outbreak in Israel, the developments in Israel until the full lockdown, the full lockdown, and the gradual opening; each period was compared to the period prior to the COVID-19 pandemic (from 2016 until the outbreak in China, represented by the horizontal dotted line), controlled for year, month, and governmental initiatives for dog adoption in 2019. d The weekly trends of Google searches for "adopt a dog", presented from November 2019. e Zoom-in on the same data as in panel d, during the COVID-19 pandemic. Both worldwide searches (orange) and USA searches (blue) are presented. f Results of the linear regression model for global searches for adoptable dogs. In this model, the predictors were the different periods: from the outbreak in China to the World Health Organization's declaration of Europe as the epicenter of the pandemic, the period during which most of the world was under restricted social isolation, and the gradual opening in May 2020. Each period was compared to the period from January 2019 to the outbreak in China (represented by the horizontal dotted line), controlled for year and month. In panels c and f, data are presented as coefficients (large dots) and their 95% confidence intervals (bars); P < 0.05.
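To make the regression setup behind Figs. 2 and 3 concrete, the following is a minimal sketch of how such a period-effect model could be fit in Python with statsmodels. The column names (requests, period, gov_initiative) and the file name are hypothetical stand-ins for the variables described above and in Supplementary Table S1; this is an illustration, not the authors' code.

```python
# Minimal sketch of a period-effect model (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

# One row per day: the daily adoption-request count, a categorical pandemic
# period, and the control variables described in the text.
df = pd.read_csv("yad4_daily.csv")  # hypothetical file

# 'period' uses the pre-pandemic era as the reference level, so each period
# coefficient estimates the change relative to before the outbreak in China,
# after controlling for month, year, and the governmental initiatives.
model = smf.ols(
    "requests ~ C(period, Treatment(reference='pre_pandemic'))"
    " + C(month) + C(year) + gov_initiative",
    data=df,
).fit()

print(model.params)      # coefficients (the large dots in Figs. 2 and 3)
print(model.conf_int())  # 95% confidence intervals (the bars)
```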
Pictures of empty cages from many countries were published, but until now, to the best of the authors' knowledge, no scientific data has yet been published documenting this phenomenon. Thus, the global trend was investigated by analyzing Google Trends data for searches all around the world, as well as specifically in the USA. In order to do so, the timeline was divided to three periods: (1) before the outbreak in China; (2) from the first media reports about the outbreak in China on December 27th until March 13th—when the World Health Organization (WHO) announced Europe to be the epicenter of the pandemic; (3) the main lockdown worldwide—from the announcement of the WHO until the gradual opening on May; and (4) during May. The effect of year and month were controlled in the models (Supplementary Table S2). Interestingly, the world trends, according to the Google Trends data, were found to be similar to that we report herein for Israel (Fig. 3). The trends of worldwide searches online for "adopt a dog" were significantly higher during the periods of the outbreak in China, as well as during the period many countries declared lockdowns, as compared to the year of 2019 (Fig. 3d–f). Given the high demand for dogs to adopt during the pandemic, the second part of our study included questionnaires targeting people who had recently adopted a dog, as well as current general dog owners, to explore the motivation behind this increase in demand for adoptable dogs. The motivation for dog adoptions during COVID-19 pandemic lockdown, and the return rate of dogs back to shelters after the gradual opening of the lockdown An online questionnaire was carried out in order to explore the reasons for dog adoption, particularly during the COVID-19 related lockdown, as well as to explore the return rate of the adopted dogs to the shelters, during the lockdown, and after the opening of the lockdown. This questionnaire was active for five days, starting on May 20th, 2020 (20 days after the gradual opening of the lockdown), and targeted people who adopted a dog from a shelter during the COVID-19 pandemic. The questionnaire targeted individuals who had adopted a dog as described in the :Methods" section, resulting in n = 508 people in total; 312 of the respondents stated that they had adopted a dog during the pandemic (January–May). Of these 312 new dog owners, 38.5% of participants stated they had considered adopting a dog for a long time, and being at home during the COVID-19 lockdown seemed like a good opportunity; 37.8% stated that they had planned to adopt a dog regardless of the situation; 8.0% stated they felt lonely and/or stressed and believed that owning a dog might help; 9.3% had heard about dog abandonment in the media and felt it was the right thing to do; and a few people adopted for other reasons, as detailed in Fig. 4. Only 8 of the participants, who had adopted a dog during the pandemic (2.6%), had already returned or relinquished the dog or have been considering relinquishment. Fig. 4: Reasons for dog adoption during the COVID-19 pandemic. The frequencies of the participants' statements for the reason to adopt specifically during the pandemic are presented. Reasons related to the pandemic are marked in red. Other reasons are presented in black. 
The association between impaired quality of life of owners and their perception of the quality of life of their dogs
In order to study the association between the quality of life of owners and that of their companion dogs under the COVID-19 pandemic, a digital questionnaire for dog owners was active during the full lockdown and social isolation (April). Participants replied to questions regarding their own well-being, as well as the well-being of their companion dog. Questions included the effect of the pandemic on their stress level and personal finances, their concern about their own health, and their perceptions regarding their dog's well-being and behavior under the COVID-19-related lockdown. The questionnaire also included questions regarding the characteristics of the owners and their dogs, as well as the care they provided to their dog during the pandemic; these variables were controlled for in the statistical models (details in Supplementary Table S3). The outcome variables were set on a scale of 1–5 (for example, 1-low stress; 5-extremely stressed). Scores 4 and 5 were relabeled as "severe stress" for the analyses, and they were compared to scores 1–3 ("none to moderate"). The questionnaire was answered by n = 3138 individuals. Overall, 25% of the participants were very concerned about their health (Fig. 5a), 25.6% stated they were extremely stressed (Fig. 5b), and 22.9% reported that their personal finances were severely affected (Fig. 5c). For further analysis, an impaired quality of life index was calculated as the mean of these scores (general stress, concern for their own health, and the damage to their personal financial situation; Fig. 5d). In addition, owners were asked to rank, on a scale of 1–5, their assessment of the quality of life of their dogs during the COVID-19 lockdown, as well as their recognition of new behavioral problems, and whether they had considered relinquishing their dog.

Fig. 5: Association between dog owners' quality of life and their perceptions of their dog's quality of life and behavior. a Frequencies of participants' answers regarding their concern for their own health on a scale of 1–5. b Frequencies of participants' answers regarding their stress level on a scale of 1–5. c Frequencies of participants' answers regarding their personal economic damage on a scale of 1–5. d Index of the impaired quality of life of the owner, based on the data presented in panels a–c. e Results of the logistic regression models for the dog's parameters, as reported by the owner, against the owner's increased impaired quality of life: quality of life of the dog (round dot), development of dog behavioral problems (square), and the intention of the owner to abandon the dog (triangle). Data are presented as odds ratios and their 95% confidence intervals (bars); P < 0.05 when the 95% confidence interval does not cross the horizontal dotted line.

As hypothesized, multivariate logistic regressions revealed that an increase in the impaired quality of life index of the owner was associated with a lower quality of life of the dog, as assessed by the owner (odds ratio: 0.887 for every one-unit increase in the owner's impaired quality of life index; Fig. 5e; P < 0.05). In addition, for a one-unit increase in the impaired quality of life index of the owner, the odds of recognizing new behavioral problems in the dog (as defined and recognized by the owner) were 1.397 times higher (Fig. 5e; P < 0.05).
Moreover, for a one-unit increase in the impaired quality of life index of the owner, the odds ratio for relinquishment was 1.762 times higher (Fig. 5e; P < 0.05). Overall, the number of people who recognized behavioral problems in their dogs was low (11.6% of the dog owners), as was the number of people who considered relinquishing their dog (1%). Still, according to these data, severely impaired quality of life of owners under the COVID-19 pandemic and lockdown was a significant risk factor associated with the quality of life of the dog, as well as with the recognition of the dog's behavioral problems and with dog relinquishment, as reported by dog owners. Characteristics of the dogs and owners, as well as ownership habits, were controlled in the statistical models, as fully detailed in the "Methods" section and Supplementary Table S3. Further questions that were included in the model regarding the type of behavioral changes in the dogs are presented in Supplementary Table S4.
The study was conducted in accordance with the ethical guidelines of The Hebrew University of Jerusalem. As detailed below, the data analyzed included four main datasets: (1) retrospective data from the pet adoption website Yad4 (http://Yad4.co.il), an online search engine for adoptable pets in Israel, from January 2016 to May 2020; (2) retrospective data regarding worldwide Google searches for adoptable dogs, downloaded from Google Trends, from November 2016 to May 2020; (3) data gathered from a prospective online digital questionnaire targeting dog owners in Israel, which was active from March 27th, 2020 to April 30th, 2020, during the COVID-19 related full lockdown in Israel; and (4) data gathered from an online digital questionnaire targeting people in Israel who adopted a dog from a shelter during the COVID-19 pandemic; the questionnaire was active from May 20th to May 25th, 2020, a period following the gradual opening of the lockdown.
Collection of data regarding abandoned adoptable dogs, adoptions and adopters from the Yad4 website
Information was gathered regarding abandoned adoptable dogs, adoptions, and adopters, as recorded by Yad4, an open-source online website. On this Israeli website, animal welfare organizations and municipal veterinarians upload individual information for each abandoned dog, typically as soon as it enters the shelter, and this information is available to potential new dog owners, who can fill out an online adoption request form through the website to be considered and approved by the shelter. The dataset included: records of the dogs uploaded to the website, the date of marking a dog as adopted (if indeed adopted), the number of adoption requests sent through the website, as well as requests to serve as foster families. Data regarding the online use of the Yad4 website were extracted from Google Analytics. The database of the Yad4 website was analyzed from January 2016 to May 2020, and included: 33,883 adoptable dogs, 2,618,190 online visits to the website, 53,923 online adoption requests, and 2042 fostering applications. As demonstrated in Fig. 1, data were compared across five different periods: from January 2016 until the outbreak in China; from the initial outbreak in China until the first diagnosed COVID-19 patient in Israel; during the outbreak in Israel, from the diagnosis of the first patient until the full lockdown declared by the Israeli government; during the full lockdown for a month; and during the gradual opening in May 2020.
Collection of data regarding worldwide Google searches for adoptable dogs
Data regarding the use of the website were extracted from Google Analytics (https://analytics.google.com). Retrospective data (from January 2016 until May 2020) regarding trends of online searches for "adopt a dog", both in the USA and worldwide, were downloaded from Google Trends (https://trends.google.com) and used for the analysis. As detailed below, data were compared across four different periods: (1) before the COVID-19 outbreak in China; (2) from the media reports about the outbreak in China on December 27th, 2019 until March 13th, 2020—when the World Health Organization (WHO) announced Europe as the epicenter of the pandemic; (3) from the WHO announcement until the gradual opening in May 2020 (the main lockdown in many countries worldwide); and (4) during May 2020 (the gradual opening in many countries).
Online digital questionnaire for dog owners during the COVID-19 related lockdown in Israel
An online digital questionnaire targeting dog owners in Israel was active during the COVID-19 related full lockdown in Israel. The questionnaire was designed by the researchers using Google Forms, and was distributed by a company that specializes in targeted online distribution (Lead Marketing Ltd.; https://leadmarketingltd.com), in order to specifically and effectively reach dog owners in Israel. The online distribution of the questionnaire was based on targeting a pre-defined group of respondents, with a high level of accuracy, characterized by their interests and online behavior (e.g., users who shop online for dog food or who search for information on dog care). Various digital platforms were used (i.e., Google Display Network, Facebook, Instagram, and others), and banner ads led users to the questionnaire, asking them to voluntarily and anonymously participate in the survey with their consent to be part of this research study. The questionnaire for dog owners was in Hebrew, and participants were asked to reply to questions regarding their own well-being, as well as the well-being of their companion dog. It included questions regarding the characteristics of the dog population: the source of the dog (e.g., adopted from a shelter, backyard breeding, official breeders), the age of the dog and its reproductive status (sterilized or intact), the number of years they had owned the dog, and where the dog is kept (e.g., inside an apartment, in a private house, in a garden, free roaming). The characteristics of the owners included age, geographical area in Israel, type of residential area (e.g., city, countryside), and gender. In addition, owners were asked questions regarding their well-being under the lockdown due to the COVID-19 pandemic (on a scale of 1–5): "how stressed are you overall from the COVID-19 pandemic?" (1-not stressed, 5-severely stressed); "to what extent are you worried about your health risk from the COVID-19 epidemic?" (1-not worried, 5-extremely worried); "to what extent has the current crisis harmed your personal financial income?" (1-not at all, 5-severely harmful); "to what extent was your daily routine altered during this time?" (1-no change, 5-extreme change).
In addition, owners were asked how many times a day they walked the dog during the lockdown, the average length of the walk, whether the attention they gave the dog changed (increased, did not change, or decreased), their assessment of the overall quality of life of their dog under the lockdown (1-markedly impaired, 5-markedly improved), whether there were new behaviors expressed by their dog, and whether or not they were considering relinquishing their dog. For the purpose of the analyses regarding the link between human well-being and owners' answers regarding their dog, an impaired quality of life index was generated by calculating the average score of the owners' responses regarding their overall stress, health concerns, and personal financial harm due to the COVID-19 epidemic and lockdown, as detailed above. The questionnaire was conducted from March 27th to April 30th, 2020, during the COVID-19 related full lockdown in Israel, and was successfully answered by 3138 individuals. Records were not included if they were incomplete, were completed by people who stated they did not own a dog at the time of the questionnaire or by minors (under 18 years old), or if the age of the dog or the number of years that they had raised it was implausible (e.g., 51, 139). Thus, 2906 records were included in the analyses. Characteristics of the participants and their dogs are detailed in Supplementary Fig. S2.
Online digital questionnaire for people in Israel who adopted a dog from a shelter during the COVID-19 pandemic
An online digital questionnaire targeting mainly people in Israel who adopted a dog from a shelter during the COVID-19 pandemic was active from May 20th to May 25th, 2020, after the gradual opening of the lockdown. The questionnaire was designed by the researchers in Hebrew using Google Forms, and was distributed by Lead Marketing Ltd., in a similar manner to the first questionnaire, in order to effectively target individuals of the pre-defined group, such as individuals who had visited dog adoption websites (mainly Yad4), dog shelters, and their Facebook fan pages. Participants were asked to reply to questions regarding the date of dog adoption, the main reason for the specific timing of the adoption (Fig. 4), as well as the short-term success of the rehoming (e.g., planning to keep the dog, gave it to another family, returned it to the shelter, or considering not keeping it). The questionnaire was answered by 508 participants, and 312 of them stated they adopted the dog during the pandemic (January–May, 2020).
Statistical analyses were performed using commercial statistical software (IBM SPSS Statistics, version 24.0; STATA, version 15.0). Linear regression analysis was utilized to evaluate the effects of the spread of COVID-19 and lockdown stages on adoption and abandonment outcomes for pet dogs, using data from the Yad4 adoption website. The general structure of the estimated regressions was as detailed in Eq. (1):
$$Y_t = \beta_0 + \delta_{\mathrm{ChinaOutbreak}} + \delta_{\mathrm{LocalOutbreak}} + \delta_{\mathrm{LocalLockdown}} + \delta_{\mathrm{OpenUp}} + \gamma_t + \beta_1 \cdot \mathrm{trend}_t + \beta_2 \cdot \mathrm{regulation}_t + \varepsilon_t \tag{1}$$
where $Y_t$ was the outcome of interest in month $t$, and the $\delta$'s were dummy variable effects of the stages of outbreak and lockdown, all compared to the baseline period before the outbreak of COVID-19 in China.
$\delta_{\mathrm{ChinaOutbreak}}$ is a dummy variable for the months between the outbreak of COVID-19 in China and the first confirmed case in Israel; $\delta_{\mathrm{LocalOutbreak}}$ is a dummy variable for the period between the first local case and the start of the full lockdown; $\delta_{\mathrm{LocalLockdown}}$ is a dummy variable for the period between the start of the lockdown and the start of the gradual opening in Israel. $\gamma_t$ are the calendar month fixed effects controlling for seasonality in adoption activities, and $\mathrm{trend}_t$ controls for a linear annual time trend. $\mathrm{regulation}_t$ controls for a change in governmental initiatives with regard to the encouragement of responsible ownership and adoptions between 2018 and 2019, in order to make sure this does not drive our results. $\varepsilon_t$ is a standard error term. Several outcome variables were considered: the number of adoption requests received through the website and the number of dogs marked as adopted, as measures of the level of interest in conducting an adoption process and of the final outcome of successful adoptions; the number of dogs uploaded to the website, as a measure of recent abandonment cases; and the number of users on the website, as a measure of general interest in adoption. The major part of the analysis uses Israeli data, as detailed above. However, two outcome variables were added from Google Trends to compare the results to worldwide trends: the number of web searches, in the US and worldwide, for the phrase "adopt a dog" during the same period. The regression analysis for these outcomes was similar to the main model, but the stages of shutdown were defined as: the outbreak in China, the World Health Organization's declaration of Europe as the epicenter of the pandemic, and the gradual opening in May 2020. Because lockdown policies are not centralized in the US and worldwide, we did not include a specific separate shutdown time period.
The digital questionnaire of the dog owners was analyzed using logistic regression. The binary outcome variables that were considered were: the quality of life of the dog, as assessed by the owner; the development of new behavioral problems, if recognized and defined by the owners; and whether the owner was considering abandoning the dog. The general model for estimation was as detailed in Eq. (2):
$$\mathrm{Prob}\left( Y_i = 1 \right) = \beta_0 + \beta_1 \cdot \mathrm{LifeChange}_i + \beta_2 \cdot \mathrm{CrisisIndex}_i + \gamma \cdot Z_i + \delta \cdot W_i + \alpha \cdot D_i + \nu_i + \varepsilon_i \tag{2}$$
where $Y_i$ is the binary outcome of interest for respondent $i$. $\mathrm{LifeChange}_i$ is a dummy variable depicting whether the respondent declared that their life changed following the COVID-19 outbreak. $\mathrm{CrisisIndex}_i$ is an average of three responses addressing three aspects of negative effects of the outbreak: economy, health concerns, and stress (as reported by the respondents). $Z_i$ are owner characteristics: gender, age, and whether there are young children in the household. $W_i$ are dog characteristics: age, whether the dog was adopted from a shelter, and the number of years with the owner. $D_i$ are characteristics of the care given to the dog: the number of walks a day, the average duration of the walks, and a general measure of attention to the dog. $\nu_i$ are geographical area fixed effects and $\varepsilon_i$ is a standard error term.
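To make the specification concrete, here is a minimal sketch of the Eq. (2) logistic regression in Python (illustrative only; the authors used SPSS and STATA). All column names, such as behavior_problem, crisis_index, and region, are hypothetical stand-ins for the questionnaire variables, not the authors' actual coding.

```python
# Minimal sketch of the Eq. (2) logistic regression (illustrative only).
# Assumes a pandas DataFrame `df` with one row per respondent and
# hypothetical column names standing in for the questionnaire variables.
import numpy as np
import statsmodels.formula.api as smf

model = smf.logit(
    "behavior_problem ~ life_change + crisis_index"    # LifeChange_i, CrisisIndex_i
    " + gender + age + kids_in_household"              # Z_i: owner characteristics
    " + dog_age + adopted_from_shelter + years_owned"  # W_i: dog characteristics
    " + walks_per_day + walk_duration + attention"     # D_i: care given to the dog
    " + C(region)",                                    # nu_i: area fixed effects
    data=df,
)
result = model.fit()

# The odds ratios plotted in Fig. 5e are the exponentiated coefficients:
print(np.exp(result.params))
```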
Because the logistic regressions are based on the responses of the owners, which can themselves be biased, they should be interpreted as descriptive analyses rather than given a causal interpretation. Descriptive statistics are given as mean ± SE, 95% confidence interval, or as frequency (n) with percentage (%), as applicable. A P < 0.05 was considered statistically significant. All reported P-values were based on a two-tailed hypothesis.
Humans and dogs are both social animals, and their bond can be traced back at least 15,000 years to the Bonn-Oberkassel dog that was found buried with two humans (Janssens et al., 2016). According to the 2019–2020 National Pet Owners Survey conducted by the American Pet Products Association (APPA), approximately 63.4 million households in the USA owned at least one dog, making dogs the most widely owned type of companion animal across the USA at this time. The advantages of raising a dog have been widely investigated. The human–dog bond has potential physical, psychological and mental benefits, and can improve the general well-being and happiness of owners (Lass-Hennemann et al., 2020; Tzivian et al., 2015; Barker and Barker, 1988; Wells, 2007). Despite all the known advantages, and the evidence that separation between a dog and its owner negatively impacts not only the dog but also the wellness of the owner (Lowe et al., 2015), millions of companion dogs are abandoned every year (Marder and Duxbury, 2008). Dog abandonment carries high costs and a significant risk for public health (Fatjo et al., 2015; Kumar, 2002; Carter, 1990). Prior to this study, it was unknown whether the COVID-19 pandemic was a risk factor for dog abandonment, as well as a risk for impaired well-being of the dogs as a reflection of the potentially impaired well-being of the owners. Therefore, the motivation to conduct this study was to explore the human–dog relationship during this pandemic, to benefit the welfare and well-being of both humans and animals, in accordance with the One Welfare approach. The One Welfare approach extends the One Health theme, suggesting that there is a strong connection between the welfare and health of humans and animals, including both physical and mental health, and that improving animal welfare often improves human welfare (and vice versa) (Pinillos et al., 2016; Mor et al., 2018; Panning et al., 2016; Lem, 2019; Jordan and Lem, 2014; Card et al., 2018). According to this approach, veterinarians, animal owners, animal welfare organizations, human psychiatrists, environmental scientists, and others should collaborate and share expertise in order to care for the welfare of both animals and their owners. Accordingly, the rationale behind this study was the hypothesis that human perceptions and acts regarding dog ownership and adoption, as well as the stress and well-being of both species, might be influenced by the COVID-19 pandemic and the related social isolation. Our data indicate that not only is the concern about increased dog abandonment not justified, at least so far, but the opposite has occurred. As social restrictions increased during the COVID-19 pandemic, the rates of dog adoptions improved significantly (Fig. 2); the demand for adoptable dogs and the requests to serve as foster families increased significantly, and accordingly, the length of stay of dogs at the shelter was significantly shorter.
Previous reports following disasters, such as earthquakes or other situations that require immediate evacuation, were associated with massive unintentional dog abandonment (Nagasawa et al., 2012). However, people may refuse to separate from their pet when needed due to disasters or extreme situations, as pet owners may consider their pets as close as, or even closer than, family (Chadwin, 2017; Barker and Barker, 1988). This may be the reason why, so far, the vast majority of people have been reluctant to relinquish their dog during the COVID-19 pandemic. Still, further investigation is required, as the potential risk for dog relinquishment in the coming months cannot be completely excluded, due to the various social and economic impacts that this pandemic may yet bring. Furthermore, as our climate continues to change, more disasters, including additional pandemics, will likely occur, highlighting the need for more research into crisis-driven human behavior changes, including changes in the human–animal relationship. While it may be clear why people kept their companion animals, the motivation to acquire a new dog through adoption, particularly during the COVID-19 related lockdown, is less intuitive. As expected, many people stated they decided to adopt a dog because they had been planning to adopt prior to the COVID-19 outbreak, and because being at home made them more available for the new challenge. In addition, acknowledgment of the fact that a dog can reduce feelings of stress and loneliness, as well as misleading media publications about increased dog abandonment, played an important role in their decision. Surprisingly, neither pressure from children, nor the desire to keep children occupied, nor an excuse to leave the house during the lockdown, were reported to play an important role in the decision of owners to adopt a dog under the circumstances. In the scientific literature, the characteristics of individuals associated with a higher likelihood of adopting a dog, such as ethnicity and housing, have been described (Holland, 2019; Weiss et al., 2012); however, the specific timing of adoption has not been investigated. Nevertheless, a previous study found that owners who had just obtained a dog expected that the new dog ownership would increase their walking activity, happiness, and companionship, and would decrease stress and loneliness (Powell et al., 2018). This may explain the increase in the adoption rate during the COVID-19 pandemic, as social isolation was legally enforced. In addition to determining the adoption and relinquishment rates associated with COVID-19, we investigated the effect of the stressful pandemic on dog welfare in the pet home environment. Therefore, the questionnaire for dog owners examined the relationship between owners' impaired quality of life during the pandemic and the quality of life of their companion dog. Although there is an obvious limitation in this study in that the quality of life of the dog and the development of new behavior problems were assessed subjectively, based on the owner's perception, rather than objectively, the results are nevertheless valuable, as the owners' perceptions of their dog's behavior are likely a more important predictor of relinquishment than objective measures. Previous research has found that new owners of dogs often do not report the same behavioral problems as relinquishing owners of the same dog, suggesting that perception plays an important role (Duffy et al., 2014; Stephen and Ledger, 2007).
It was found that impaired quality of life of the owners was associated with a decrease in the quality of life of their dog, as well as with increased development of new behavioral problems, as judged by the owners. Although it has been reported that owners have a poor ability to recognize behavioral problems (Powell et al., 2018; Tami and Gallagher, 2009), the perception of the owners can influence the future of the owner-pet relationship, as well as the probability that they would decide to relinquish the pet (Payne et al., 2015). Thus, an owner's perception that their dog has behavioral problems may influence their ownership, and thereby also the welfare of the dog. As mentioned, the quality of life of the dog, and its behavior, were neither diagnosed objectively nor by professional observers. Therefore, the characteristics of the dogs and of the owners as risk factors for the low quality of life and new behavioral problems of the dog cannot be conclusively determined from these models. Still, those variables were controlled in the statistical models, and it was found that the perception of dog owners regarding their own impaired quality of life was significantly associated with their assessment of a lower quality of their dog's life and of its emerging behavioral problems. These results are consistent with previous studies, which found that the stress level and well-being of humans affect the stress, well-being, cognitive ability and behavior of their dogs (Buttner et al., 2015; Sumegi et al., 2014; Kaminski et al., 2009). An alternative hypothesis may be that owners in a crisis situation may have had a pessimistic outlook regarding things in their life and their surroundings, and the reported decrease in the quality of life of the dog was due to the overall negative outlook of the owner rather than any true reflection on the dog. This information is important for both humans and dogs, since it can inform initiatives to improve the welfare and well-being of both the dog owners and their companion dogs, as suggested by the One Welfare approach. For example, the Israeli Veterinary Services (a branch of the Ministry of Agriculture) invests approximately $1.2M annually in encouraging responsible dog ownership and adoptions, and has already approved new upcoming initiatives based on this study, such as digital online adoption days, with education geared towards responsible dog ownership. In this study, although the overall number of dog owners who reported that they were going to relinquish their dog due to the COVID-19 situation was low, it was significantly associated with a poorer quality of life index of the owners. Among new dog owners who had adopted their dog during the pandemic, a similar percentage was reported for owners who had already relinquished their dog or were considering doing so. Since owners' lack of time is one of the main risk factors for dog relinquishment reported in the literature (Salman et al., 2000), it was a concern that people who adopted during the COVID-19 lockdown would relinquish their dog after going back to routine life. The second survey, to detect the relinquishment rate of dogs adopted during the pandemic, was performed at the end of May 2020, after the opening of the lockdown. Therefore, in most cases, people were back to routine life more than a month after adoption.
According to previous research, the highest proportion of dog relinquishment occurs within just one month after adoption; in fact, owners report knowing of behavioral problems with their dogs within 24 h post-adoption (Shore, 2005). It is important to mention that people who decided to relinquish their dog might have avoided our survey. Still, the relatively low number of dogs that were uploaded to the Yad4 website in May 2020 may indicate that, so far, there has not been a massive relinquishment after the opening of the lockdown. Hence, we tentatively suggest that the majority of adoptions were successful. One hypothesis is that owners had more time to spend with their dog at the beginning, which may have helped to ease rehoming. Nonetheless, this hypothesis requires further investigation in the long run. Furthermore, an important issue that was not covered in this study relates to the differences between individuals who had already owned a dog and those who had not, with regard to their coping with the extreme social and economic challenges during the COVID-19 pandemic. Still, studies show that both children and adults cope better with stress when owning a dog (Chadwin, 2017; Powell et al., 2019). Therefore, we hypothesize that owning a dog might even prevent the development of post-traumatic stress disorder (PTSD) caused by the pandemic, or at least ease the coping with it once it has occurred. It has been reported that after the SARS outbreak in 2003, which may be comparable to the COVID-19 pandemic in many respects, patients suffered from post-traumatic stress disorder (Wu et al., 2005). It is known that dogs have a positive effect on the treatment of PTSD, and that dog owners might be more resilient (Chadwin, 2017; Powell et al., 2019; Beetz et al., 2019). Therefore, this is an important future direction for human–pet relationship research. In summary, the COVID-19 pandemic that emerged in December 2019 in Wuhan, China, led to the implementation of social isolation in many countries, as well as to widespread uncertainty and severe health and economic concerns. Our study indicates that the stricter the social isolation became during the COVID-19 pandemic, the greater the interest in dog adoption. The adoption rate increased significantly, while dog abandonment did not change. Furthermore, there was a clear association between individuals' quality of life and their perceptions of their dog's quality of life and behavior, as well as the probability of relinquishing the dog. As humans and dogs are both social animals, these findings suggest potential benefits of the human–dog relationship during the COVID-19 pandemic, in accordance with the One Welfare approach, implying that there is a bidirectional connection between the welfare and health of humans and non-human animals. All data generated or analyzed during this study are included in this article (and its Supplementary Information files).
Anser MK, Yousaf Z, Khan MA, Nassani AA, Alotaibi SM, Qazi Abro MM, Vo XV, Zaman K (2020) Does communicable diseases (including COVID-19) may increase global poverty risk? A cloud on the horizon. Environ Res 187:109668
Barker SB, Barker RT (1988) The human-canine bond: closer than family ties?
J Mental Health Counsel 10(1):46–56
Bavel JJV, Baicker K, Boggio PS, Capraro V, Cichocka A, Cikara M, Crockett MJ, Crum AJ, Douglas KM, Druckman JN, Drury J, Dube O, Ellemers N, Finkel EJ, Fowler JH, Gelfand M, Han S, Haslam SA, Jetten J, Kitayama S, Mobbs D, Napper LE, Packer DJ, Pennycook G, Peters E, Petty RE, Rand DG, Reicher SD, Schnall S, Shariff A, Skitka LJ, Smith SS, Sunstein CR, Tabri N, Tucker JA, Linden SVD, Lange PV, Weeden KA, Wohl MJA, Zaki J, Zion SR, Willer R (2020) Using social and behavioural science to support COVID-19 pandemic response. Nat Human Behav 4(5):460–471
Beetz A, Schofmann I, Girgensohn R, Braas R, Ernst C (2019) Positive effects of a short-term dog-assisted intervention for soldiers with post-traumatic stress disorder-a pilot study. Front Vet Sci 6:170
Beetz A, Uvnas-Moberg K, Julius H, Kotrschal K (2012) Psychosocial and psychophysiological effects of human-animal interactions: the possible role of oxytocin. Front Psychol 3:234
Bojdani E, Rajagopalan A, Chen A, Gearin P, Olcott W, Shankar V, Cloutier A, Solomon H, Naqvi NZ, Batty N, Festin FED, Tahera D, Chang G, DeLisi LE (2020) COVID-19 pandemic: impact on psychiatric care in the United States. Psychiatry Res 289:113069
Burgos-Caceres S (2011) Canine rabies: a looming threat to public health. Animals 1(4):326–342
Buttner AP, Thompson B, Strasser R, Santo J (2015) Evidence for a synchronization of hormonal states between humans and dogs during competition. Physiol Behav 147:54–62
Card C, Epp T, Lem M (2018) Exploring the social determinants of animal health. J Vet Med Educ 45(4):437–447
Carter CN (1990) Pet population control: another decade without solutions? J Am Vet Med Assoc 197(2):192–195
Chadwin R (2017) Evacuation of pets during disasters: a public health intervention to increase resilience. Am J Public Health 107(9):1413–1417
Duffy DL, Kruger KA, Serpell JA (2014) Evaluation of a behavioral assessment tool for dogs relinquished to shelters. Prev Vet Med 117(3-4):601–609
Fatjo J, Bowen J, Garcia E, Calvo P, Rueda S, Amblas S, Lalanza JF (2015) Epidemiology of dog and cat abandonment in Spain (2008-2013). Animals 5(2):426–441
Goumenou M, Spandidos DA, Tsatsakis A (2020) [Editorial] Possibility of transmission through dogs being a contributing factor to the extreme Covid19 outbreak in North Italy. Mol Med Rep 21(6):2293–2295
Holland KE (2019) Acquiring a pet dog: a review of factors affecting the decision-making of prospective dog owners. Animals 9(4):124
Janssens LA, Street M, Miller R, Hazewinkel HA, Giemsch L, Schmitz R (2016) The oldest case yet reported of osteoarthritis in a dog: an archaeological and radiological evaluation. J Small Anim Pract 57(10):568–574
Jordan T, Lem M (2014) One health, one welfare: education in practice veterinary students' experiences with community veterinary outreach. Can Vet J 55(12):1203–1206
Kaminski J, Brauer J, Call J, Tomasello M (2009) Domestic dogs are sensitive to a human's perspective. Behaviour 146:979–998
Koo JR, Cook AR, Park M, Sun Y, Sun H, Lim JT, Tam C, Dickens BL (2020) Interventions to mitigate early spread of SARS-CoV-2 in Singapore: a modelling study. Lancet Infect Dis 20(6):678–688
Kumar S (2002) Stray dogs are a growing threat to public health. BMJ 325(7355):66–66
Lass-Hennemann J, Schafer SK, Sopp MR, Michael T (2020) The relationship between dog ownership, psychopathological symptoms and health-benefitting factors in occupations at risk for traumatization.
Int J Environ Res Public Health 17(7):2562
LeDoux J (2012) Rethinking the emotional brain. Neuron 73(4):653–676
Lem M (2019) Serving homeless populations through a One Health approach. Can Vet J 60(10):1119–1120
Leroy EM, Ar Gouilh M, Brugere-Picoux J (2020) The risk of SARS-CoV-2 transmission to pets and other wild and domestic animals strongly mandates a one-health strategy to control the COVID-19 pandemic. One Health 10:100133
Lewnard JA, Lo NC (2020) Scientific and ethical basis for social-distancing interventions against COVID-19. Lancet Infect Dis 20(6):631–633
Lowe SR, Joshi S, Pietrzak RH, Galea S, Cerda M (2015) Mental health and general wellness in the aftermath of Hurricane Ike. Soc Sci Med 124:162–170
Marder A, Duxbury MM (2008) Obtaining a pet: realistic expectations. Vet Clin North Am Small Anim Pract 38(5):1145–1162
Mobbs D, Hagan CC, Dalgleish T, Silston B, Prevost C (2015) The ecology of human fear: survival optimization and the nervous system. Front Neurosci 9:55
Mor SM, Norris JM, Bosward KL, Toribio J, Ward MP, Gongora J, Vost M, Higgins PC, McGreevy PD, White PJ, Zaki S (2018) One health in our backyard: design and evaluation of an experiential learning experience for veterinary medical students. One Health 5:57–64
Nagasawa M, Mogi K, Kikusui T (2012) Continued distress among abandoned dogs in Fukushima. Sci Rep 2:724
Panning C, Lem M, Bateman S (2016) Profiling a one-health model for priority populations. Can J Public Health 107(3):e222–e223
Parry NMA (2020) COVID-19 and pets: when pandemic meets panic. Forensic Sci Int 2:100090–100090
Patronek GJ, Glickman LT, Beck AM, McCabe GP (1996) Risk factors for relinquishment of dogs to an animal shelter. J Am Vet Med Assoc 209(3):572–581
Payne E, Bennett PC, McGreevy PD (2015) Current perspectives on attachment and bonding in the dog-human dyad. Psychol Res Behav Manag 8:71–79
Payne E, DeAraugo J, Bennett P, McGreevy P (2016) Exploring the existence and potential underpinnings of dog-human and horse-human attachment bonds. Behav Proces 125:114–121
Pinillos RG, Appleby MC, Manteca X, Scott-Park F, Smith C, Velarde A (2016) One welfare—a platform for improving human and animal welfare. Vet Rec 179(16):412–413
Powell L, Chia D, McGreevy P, Podberscek AL, Edwards KM, Neilly B, Guastella AJ, Lee V, Stamatakis E (2018) Expectations for dog ownership: perceived physical, mental and psychosocial health consequences among prospective adopters. PLoS ONE 13(7):e0200276
Powell L, Edwards KM, McGreevy P, Bauman A, Podberscek A, Neilly B, Sherrington C, Stamatakis E (2019) Companion dog acquisition and mental well-being: a community-based three-arm controlled study. BMC Public Health 19(1):1428
Ryan MG, Storey AE, Anderson RE, Walsh CJ (2019) Physiological indicators of attachment in domestic dogs (Canis familiaris) and their owners in the strange situation test. Front Behav Neurosci 13:162
Salman MD, Hutchison J, Ruch-Gallie R, Kogan L, New JC, Kass PH, Scarlett JM (2000) Behavioral reasons for relinquishment of dogs and cats to 12 shelters. J Appl Animal Welfare Sci 3(2):93–106
Sangar S, Dutt V, Thakur R (2019) Comparative assessment of economic burden of disease in relation to out of pocket expenditure. Front Public Health 7:9
Serpell J (1991) Beneficial effects of pet ownership on some aspects of human health and behaviour. J R Soc Med 84(12):717–720
Shore ER (2005) Returning a recently adopted companion animal: adopters' reasons for and reactions to the failed adoption experience.
J Appl Anim Welf Sci 8(3):187–198
Stephen J, Ledger R (2007) Relinquishing dog owners' ability to predict behavioural problems in shelter dogs post adoption. Appl Animal Behav Sci 107(1):88–99
Sumegi Z, Olah K, Topal J (2014) Emotional contagion in dogs as measured by change in cognitive task performance. Appl Animal Behav Sci 160:106–115
Tami G, Gallagher A (2009) Description of the behaviour of domestic dog (Canis familiaris) by experienced and inexperienced people. Appl Animal Behav Sci 120(3):159–169
Tzivian L, Friger M, Kushnir T (2015) Associations between stress and quality of life: differences between owners keeping a living dog or losing a dog by euthanasia. PLoS ONE 10(3):e0121081
Weiss E, Miller K, Mohan-Gibbons H, Vela C (2012) Why did you choose this pet?: adopters and pet selection preferences in five animal shelters in the United States. Animals 2(2):144–159
Wells DL (2007) Domestic dogs and human health: an overview. Br J Health Psychol 12(Pt 1):145–156
Wu KK, Chan SK, Ma TM (2005) Posttraumatic stress after SARS. Emerg Infect Dis 11(8):1297–1300
Xiao H, Zhang Y, Kong D, Li S, Yang N (2020) Social capital and sleep quality in individuals who self-isolated for 14 days during the Coronavirus Disease 2019 (COVID-19) outbreak in January 2020 in China. Med Sci Monit 26:e923921
We thank the Yad4.co.il website for sharing their data, and specifically the technical developer of Yad4, Mr. Omri Amos, for extracting the data, as well as the CTS group and Farmina pet-food, Israel, the current supporters of the website. We thank all study participants and adopters who were willing to provide information. We thank the Universities Federation for Animal Welfare (UFAW) for their support in all our work regarding animal welfare in Israel. We are grateful to Ms. Odelya Natan and her team from Lead Marketing Ltd.
Koret School of Veterinary Medicine, Robert H. Smith Faculty of Agriculture, Food and Environment, The Hebrew University of Jerusalem, Jerusalem, Israel: Liat Morgan, Gila Abells Sutton, Alexandra Gamliel & Tal Raz
Faculty of Land and Food Systems, University of British Columbia, Vancouver, BC, V6T 1Z4, Canada: Alexandra Protopopova
BLAVATNIK CENTER for Drug Discovery, Metabolite Medicine Division, Tel Aviv University, Tel Aviv, Israel: Rune Isak Dupont Birkler
Environmental Economics and Management, Robert H. Smith Faculty of Agriculture, Food and Environment, The Hebrew University of Jerusalem, Jerusalem, Israel: Beata Itin-Shwartz
Ministry of Agriculture and Rural Development, Kimron Veterinary Institute, Bet Dagan, Israel: Boris Yakobson
T.R. supervised the project, conceived the ideas and study design, and revised the manuscript. B.Y. was a co-supervisor of the project. L.M. conceived the ideas, conducted the study, analyzed the results, and wrote the first draft of the manuscript. A.P. took part in the study design, ideas, and writing of the manuscript. R.B. assisted with the analyses and revised the manuscript. B.I.S. contributed to the ideas and design, conducted the statistical analyses, and participated in writing the manuscript. G.S. conducted statistical analyses. A.G. collected data. All authors read, revised, and approved the manuscript. Correspondence to Tal Raz.
L.M. is the founder and the owner of the Yad4 website. The ownership of the website had no influence on the objectivity and the integrity of the manuscript. The remaining authors declare no competing interests.
Morgan, L., Protopopova, A., Birkler, R.I.D. et al. Human–dog relationships during the COVID-19 pandemic: booming dog adoption during social isolation. Humanit Soc Sci Commun 7, 155 (2020). https://doi.org/10.1057/s41599-020-00649-x
Finding a general equation for number of paths through grid
I started with a 4x4 grid (although I want to eventually generalize for an n x n grid). You must move through a grid on the squares, not on the grid lines. The number of paths for path length = 1 is trivial, equaling n x n, so let's start with the case where the path length = 2. How many paths of length 2 are there on the 4x4 grid? Then we will consider 3, 4, ... and hopefully get data that will help us figure out a general equation.
1. No square can be visited more than once in any path.
2. The path may go horizontally, vertically, or diagonally.
I have manually figured out the answer for lengths 2, 3, and 4, and was hoping to use that data and math modelling to figure out a general equation to use for any specified length and any specified grid size. But I haven't yet figured it out, so I wanted to pose the question to this community and maybe learn something! Here are the answers I have calculated for lengths 2, 3, and 4: (Explanation: For path length = 2, the 3 in cell A1 means that there are 3 possible paths of length 2 that start in cell A1, and so on.) For this problem, path order matters, so path AB is different than path BA, ABC is different from CBA, and so on. So, there are 16 paths of length 1, 84 paths of length 2, 408 paths of length 3, and 1,764 paths of length 4 (corrected by achille hui). Can we use this to figure out a general equation?
graph-theory mathematical-modeling – JLee
I put $84,408$ into oeis.org and found nothing useful – Ross Millikan Sep 20 '14 at 3:50
I get a different count for $n = 4$, $$\begin{array}{|c|c|c|c|} \hline 75 & 109 & 109 & 75\\ \hline 109 & 148 & 148 & 109\\ \hline 109 & 148 & 148 & 109\\ \hline 75 & 109 & 109 & 75\\ \hline \end{array}\quad\implies\verb/Total/ = 1764.$$ – achille hui Sep 20 '14 at 6:26
Some more data points. By brute force, some numbers of self-avoiding directed paths on an $n \times n$ grid with path length $p$ are: $$\begin{array}{r|rr} & \rlap{\quad\quad\quad\quad\quad\quad n}\\ p & 4 & 5 & 6 & 7 & 8\\ \hline 1 & 16 & 25 & 36 & 49 & 64\\ 2 & 84 & 144 & 220 & 312 & 420\\ 3 & 408 & 768 & 1240 & 1824 & 2520\\ 4 & 1764 & 3768 & 6508 & 9984 & 14196\\ 5 & 6712 & 17280 & 32520 & 52432 & 77016\\ 6 & 22672 & 74072 & 156484 & 268048 & 408764\\ 7 & 68272 & 296390 & 722384\\ 8 & 183472 & 1110000 & 3193800\\ 9 & 436984\\ 10 & 905776\\ \end{array}$$ – achille hui Sep 20 '14 at 10:01
Based on the above numbers, an OEIS search returns the series A186861 - $T(n,k)$ - Number of n-step king's tours on a kXk board. This OEIS entry is relatively new and there isn't much reference there. However, there is an empirical formula which I can't decrypt. You can take a look and see whether this will give you any insight. – achille hui Sep 20 '14 at 10:18
Thank you achille hui! Those extra data points will help, I think. I corrected n=4 in the question. What is the best way to tackle a math modeling problem like this? – JLee Sep 20 '14 at 14:05
I don't think there is an answer for this because part of the problem reduces to an open problem. Let $\mathcal{N}_n(p)$ be the number of paths of length $p$ (more precisely, covering $p$ squares) on an $n\times n$ grid. As mentioned in the comments, we can compute some of these $\mathcal{N}_n(p)$ by brute force.
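For reference, here is a minimal Python sketch of one way to do such a brute-force count (my own illustration, not necessarily the program used here). It enumerates directed self-avoiding paths covering $p$ squares on an $n \times n$ grid, where each step may move to any of the 8 neighboring squares (king moves), and it reproduces the small entries of the table below.

```python
# Brute-force count of directed self-avoiding "king move" paths covering
# p squares on an n x n grid. Runtime grows exponentially in p, so this
# is only practical for small cases.
def count_paths(n, p):
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]

    def extend(x, y, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in visited:
                visited.add((nx, ny))
                total += extend(nx, ny, visited, remaining - 1)
                visited.discard((nx, ny))
        return total

    # Sum over all possible starting squares.
    return sum(extend(x, y, {(x, y)}, p - 1)
               for x in range(n) for y in range(n))

assert count_paths(4, 2) == 84    # matches the table below
assert count_paths(4, 3) == 408
assert count_paths(4, 4) == 1764
```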
$$\begin{array}{r|rr} & \rlap{\quad\quad\quad\quad\quad\quad n}\\ p & 4 & 5 & 6 & 7 & 8\\ \hline 1 & 16 & 25 & 36 & 49 & 64\\ 2 & 84 & 144 & 220 & 312 & 420\\ 3 & 408 & 768 & 1240 & 1824 & 2520\\ 4 & 1764 & 3768 & 6508 & 9984 & 14196\\ 5 & 6712 & 17280 & 32520 & 52432 & 77016\\ 6 & 22672 & 74072 & 156484 & 268048 & 408764\\ 7 & 68272 & 296390 & 722384\\ 8 & 183472 & 1110000 & 3193800\\ 9 & 436984\\ 10 & 905776\\ \end{array}$$ Based on these numbers, an OEIS search returns the series A186861 - T(n,k) - Number of n-step king's tours on a kXk board. It mentions there is an empirical formula for $T(n,k)$ for large $n$: $$T(n,k) = 3T(n-1,k) - 3T(n-2,k) + T(n-3,k)\quad\text{ for large } n$$ Looking back at my program computing $\mathcal{N}_n(p)$, I find the empirical formula holds. Imagine we have an $\infty \times \infty$ grid and let $p$ be any positive integer. Let $X_p$ be the collection of self-avoiding paths covering $p$ squares and starting at the origin. Given any path $\ell \in X_p$, let $(x_1,y_1) = (0,0)$, $(x_2,y_2), \ldots, (x_p,y_p)$ be the squares visited by $\ell$. Let $W(\ell)$ and $H(\ell)$ be the horizontal and vertical span of the path. More precisely, $$\begin{align} W(\ell) &= \max\{ x_k : 1 \le k \le p \} - \min\{ x_k : 1 \le k \le p \}\\ H(\ell) &= \max\{ y_k : 1 \le k \le p \} - \min\{ y_k : 1 \le k \le p \} \end{align}$$ To generate all possible paths on an $n\times n$ grid, we can walk through the set of paths in $X_p$ and translate each of them on the $n \times n$ grid. Given any $\ell \in X_p$, the number of paths it can generate is $$\begin{cases}(n - W(\ell))(n - H(\ell)),& n \ge \max( W(\ell), H(\ell) )\\ 0,&\text{otherwise}\end{cases}$$ As a result, $$\mathcal{N}_n(p) = \sum_{\substack{ \ell\in X_p\\ n \ge \max( W(\ell), H(\ell) )} } (n-W(\ell))(n-H(\ell))$$ When $n \ge p$, we have $W(\ell) \le p - 1 < n$ and $H(\ell) \le p - 1 < n$ for all $\ell \in X_p$, since a path covering $p$ squares spans at most $p-1$ columns or rows. This leads to $$\bbox[12pt,border: 1px solid blue;]{ \mathcal{N}_n(p) = \sum_{\ell\in X_p } (n-W(\ell))(n-H(\ell)) = \alpha_p n^2 - \beta_p n + \gamma_p \quad\text{ whenever } n \ge p } $$ where $$ \begin{cases} \alpha_p &= |X_p| = \sum\limits_{\ell \in X_p} 1\\ \beta_p &= \sum\limits_{\ell \in X_p} (W(\ell) + H(\ell))\\ \gamma_p &= \sum\limits_{\ell \in X_p} W(\ell) H(\ell) \end{cases} $$ In this case, $\mathcal{N}_n(p)$ becomes a quadratic polynomial in $n$ and the empirical formula is justified. Following are some numbers for small $p$. $$\begin{array}{r|rrr} p & \alpha_p & \beta_p & \gamma_p\\ \hline 1 & 1 & 0 & 0\\ 2 & 8 & 12 & 4\\ 3 & 56 & 144 & 88\\ 4 & 368 & 1308 & 1108\\ 5 & 2336 & 10456 & 11160\\ 6 & 14576 & 77924 & 99292\\ 7 & 89928 & 555464 & 817760\\ 8 & 550504 & 3839372 & 6382124\\ \end{array}$$ I have tried to use these numbers in an OEIS search but I cannot find anything. In any event, it is clear the leading behavior of $\mathcal{N}_n(p)$ is controlled by the number $\alpha_p$. Numbers like $\alpha_p$ have been studied before in mathematics, statistical physics and chemistry, usually under the subject of self-avoiding walks. It is also clear that if we have a general formula for $\mathcal{N}_n(p)$, we will have a formula for $\alpha_p$. Unfortunately, finding a formula for $\alpha_p$ is an open problem! Quoting wiki: There is currently no known formula for determining the number of self-avoiding walks, although there are rigorous methods for approximating them. Finding the number of such paths is conjectured to be an NP-hard problem.
My recommendation is this: if you want to proceed mathematically, you should set aside this version of the problem first. Look up the literature on self-avoiding walks for a simpler walk: for example, a walk that only allows moves in the horizontal and vertical directions on an infinite square lattice. Learn the basics first and see what sort of techniques are available for the simpler case. – achille hui
Thank you for the super-detailed answer! I am still reading it and studying the problem now. I will probably mark this as the answer later, but I want to understand it better first. – JLee Sep 22 '14 at 14:47
Why is space junk so persistent in LEO?
From a similar post: How much of a problem is space junk, and how can we clean it up?
I'm wondering why space junk seems to be ever-present. Most of it seems to be in Low Earth Orbit in the first place, and most of it seems to be very small fragments of metal, paint, or other stuff. So it seems to me that its orbit would decay very quickly (a few days) because it's so light-'weight'. Are we really depositing junk in orbit at such a high rate? Are we really that dirty? Can final stage separation really blow off that much debris into orbit? And correct me if I'm wrong, but we are not launching orbital payloads at a rate of 1 every 3 days... are we? So I don't see what's replenishing this stream of space junk. Note: I know about those two satellites blown up by the USA and China, along with that collision between Iridium and some Russian sputnik. Hopefully these kinds of things are exceptions rather than the primary 'supply' of space junk.
spacecraft debris – DrZ214
Related: Reason for space debris clustering in LEO – TildalWave
What makes you think that "light weight" space junk would decay more quickly? – Aron
@Aron because atmospheric drag is proportionally much stronger against it. – DrZ214
A document by Cornell University gives the drag mathematically: For many satellites in low earth orbit (LEO), the largest dynamic model uncertainty stems from atmospheric drag. Acceleration due to atmospheric drag $\boldsymbol{a}_D$ is related to atmospheric density $\rho$ by the equation: $$ \boldsymbol{a}_D = −\frac{1}{2} \left( C_D \frac{A_v \left( t \right) }{m_s} \right) \rho v_r^2 \boldsymbol{e}_v $$ where $C_D$ is a drag coefficient, $A_v \left( t \right)$ is the cross-sectional area of the satellite in the direction of travel, $m_s$ is the total spacecraft mass, $v_r$ is the velocity magnitude relative to the ambient atmosphere, and $\boldsymbol{e}_v$ is a unit vector in the relative velocity direction. From the above formula, we can see that indeed a lower mass object would have a greater atmospherical drag like you suggested. What also plays a major role here is the cross-sectional area $A_v \left( t \right)$ of the debris. If you think about trying to throwing a beachball and a small rock of about equal masses, you know that you can throw the small rock way faster than the beachball, and this is due to the different cross-sectional area $A_v \left( t \right)$ of the two objects. Now, let's consider entire satellites vs smaller pieces of debris: the smaller piece is probably not hollow, but just a chunk of metal or some composite, where as the satellite will be relatively empty and hollow from the inside. In this case, the ratio between the cross-sectional area $A_v \left( t \right)$ and the total spacecraft mass $m_s$ would be greater for the satellite than it would be for the smaller piece of debris, i.e. the satellite would have a greater drag and deorbit sooner than the smaller piece of debris (assuming the constants and coefficients would stay the same for the two). This brings into question why did USA and China try blowing up old satellites, creating tens of thousands of new, smaller, longer lasting pieces of debris, when it does not make sense physically? Long story short, the rate is, sadly, not that high. There are efforts to combat it (see the ESA link above), but so far it is looking pretty horrible on short-term. And to top that, anything above the 500km threshold effectively feels no drag due to the atmosphere. For example, satellites in the geosynchronous orbit (GEO, around 36,000 km / 22,000 mi out) have to be positioned into a graveyard orbit at the end of their lifetime. From Wikipedia: A graveyard orbit, also called a junk orbit or disposal orbit, is a supersynchronous orbit that lies significantly above synchronous orbit, where spacecraft are intentionally placed at the end of their operational life. It is a measure performed in order to lower the probability of collisions with operational spacecraft and of the generation of additional space debris. To answer some of your shorter questions: This picture from Wikipedia gives some idea how dirty we are: Final stage separation is usually designed not to blow off a lot of debris into space, and agencies / launch companies are increasingly trying to bring earlier stages back to Earth (intact or not). Otherwise you may have a piece or two of debris that already has a downwards trajectory on stage separation (Newton's Third Law of Motion: the "extra push" on stage separation affects the earlier stage equally, so while the payload is pushed further away from Earth, the earlier stage is pushed towards Earth.) 
There are about 70-90 launches per year these days, so no, we are not launching orbital payloads at a rate of 1 every 3 days... Merely at a rate of about 1 every 4 days. ;) This does not equate to the number of satellites deployed, however: for example, recent developments of CubeSats (tiny (10 cm by 10 cm by 10 cm), cheap satellites) have led to numerous smaller satellites being launched at once along with bigger payloads. In late 2013, the USAF launched a Minotaur I rocket loaded with its main payload, the Air Force's Space Test Program Satellite-3, and along with it 28 CubeSats, so a single launch deployed an impressive 29 satellites. So, after all this mumbling, the primary source of space junk really is us. And not really us launching more and more stuff into space, but our lack of progress in effectively removing our past, present, and future orbiters from space after their missions have ended. I hope this helped. (An obstacle I did not mention is political: governments do not really want to fund missions to remove the space junk. Politicians are much more interested in missions that can "be seen" and credited to their time in government, whereas removing debris would be quite the opposite of this (nobody ever thinks highly of the garbageman, despite their infrastructurally important job).) – V-J
"So the debris is purposefully kept in orbit to try and decommission the satellite to reduce the chance of damage to humans." No, the rule says you can't leave a satellite in LEO indefinitely. The 25-year period allows satellites time to have their orbit decay naturally (a shorter time period would mean you need to include fuel for the deorbit burn). A defunct satellite isn't that much of a problem: you have a single, large object in a well-defined orbit. It's the small stuff that causes problems. – Hobbes
Whoops, I forgot the ever so important 'for the moment' from there… You're absolutely correct, and I agree with the defunct satellite "issue". When the Chinese "tested" blowing up a defunct satellite, I cannot understand how the outcome of 150,000 smaller objects — around 2,000 of which are trackable — traveling at several kilometres per second would have been a better situation to manage than just a single larger body… – V-J
That sounds like an even more questionable goal… How on Earth did they manage to avoid the political fallout? I understand the program was a response to the U.S. withdrawal from the Anti-Ballistic Missile Treaty in 2002, but nonetheless they hadn't done a similar test since 1985, and with several nations capable of similar technologies (according to en.m.wikipedia.org/wiki/…), why did the Chinese think it was necessary to perform a live test when many other capable nations had passed?
@V-J It's because the US demonstrated that they can blow up a satellite as early as 1985. They destroyed P78-1, or Solwind. So the Chinese demonstrated it too, destroying FY-1C. That's how military politics "works". One of them demonstrates a capability, the other has to too. And I guess the US thought it was their turn again because one year after destroying FY-1C, the US destroyed USA-193.
Decent answer, but the last paragraph of the first "section" is just wrong, IMO. – Chris
This section will refer to several mathematical models of waves. These are discussed in the Mathematics of Waves section. You might want to review that section before continuing with sound waves.

What is a sound wave? The illustration above is a schematic representation of air molecules (blue spheres). Gas molecules aren't ever organized in nice rows and columns like that, but it helps for the diagram. A sound wave is a periodic disturbance in a medium like air, though sound can also travel through liquids and solids — more on that later. A. Sound is generated by vibration of something, like your vocal cords, a tuning fork or a piano string. As a vibrating object moves, it periodically pushes on the surrounding air molecules, causing them to move and compress. B. One such compression zone has been formed by a mechanical vibration on the left, and will now begin to move or propagate to the right. Particles that have received a push from the left will push on particles to the right, and the compression zone moves along. In panels C and D, the wave continues to propagate. The zones of high-density gas are called compression zones and the zones of relatively low density in between are called rarefaction zones. E. A new compression zone has been formed and will continue to propagate to the right. The distance between compression zones is the wavelength of the wave. The disturbance in the medium (air) is in the same direction as the direction of the wave, so sound waves are called longitudinal waves. The disturbance is along the direction of travel. This is in contrast to transverse waves like water waves, in which the disturbance in the medium is at right angles to the direction of travel. The diagram below is another common way of illustrating compression and rarefaction in sound waves. Wave motion comes in two forms. In longitudinal waves (sound waves), the disturbance in the medium is in the direction of (along) the wave. In transverse waves the disturbance is at right angles to the direction of the wave. Water waves created by tossing a pebble in a smooth pond spread out horizontally from the center, but the actual disturbance in the water is up and down.

Another representation of longitudinal waves The spacing between vertical lines represents the average spacing between air molecules. In the top row, a compression zone has been formed and will travel to the right. If the physical object causing the disturbance is periodic, like a vibrating tuning fork, compression zones will continue to form at even intervals of time and distance (the wavelength). When something like the human ear experiences such a periodic compression wave, and if its wavelength is within the range of human hearing, it will be experienced as a sound.

Representation with sine waves We've seen that sound waves are periodic alternations in compression and rarefaction of air. Because of their periodic nature, and even though they aren't actually transverse waves, it's still convenient to represent them with sine waves. All of the features of a sound wave map onto the sine wave: The amplitude of a sound wave corresponds to the volume or intensity of the sound. The frequency or wavelength correspond to the pitch of the sound, and relative phase is important for sound waves. Sounds can interfere, and even cancel, if they are produced out of phase. The relationship λ · ν = speed also holds for sound waves.
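If you'd like to check these numbers yourself, here is a small Python sketch of the λ · ν = speed relationship. The speed of sound used here (343 m/s, dry air near 20 °C) is my assumption; the 20 Hz and 20 kHz limits are the human hearing-range figures quoted later in this section:

```python
# Quick check of the relation: wavelength * frequency = speed, for sound in air.
V_SOUND_AIR = 343.0  # m/s, dry air near 20 degrees C (assumed value)

def wavelength(frequency_hz, speed=V_SOUND_AIR):
    """lambda = speed / frequency."""
    return speed / frequency_hz

for f in (20.0, 440.0, 20_000.0):
    print(f"{f:>8.0f} Hz  ->  {wavelength(f):.4f} m")
# 20 Hz gives ~17 m and 20 kHz gives ~0.017 m (1.7 cm), matching the
# hearing-range wavelengths discussed below.
```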
The speed of sound in air and several other materials, roughly: air (20 °C) about 343 m/s; pure water 1,484 m/s; aluminum about 6,420 m/s. For all waves, including sound waves, $$\lambda \cdot \nu = \text{speed}$$ where λ (Greek lambda) is the wavelength (in meters), and ν (Greek nu) is the frequency (in Hz). The resulting units of speed are m/s or m·s⁻¹.

Sound transmission through solids & liquids Sound travels faster through liquids and solids than through air. That's not too hard to understand, given the nature of a sound wave in air. Sound traveling through air depends on collisions of particles. In air, those particles are widely spread out compared to the particles of a liquid or solid, so there is a time delay between collisions, resulting in a slower propagation of the wave. In liquids and solids, the atoms and molecules are much closer together, thus the time between collisions is reduced, resulting in faster sound-wave travel. Sound travels nearly 20 times faster through a bar of aluminum than through air, for example (see the speeds above for air, liquids and solids). Sound travels at 1484 m/s in pure water, more than four times faster than through air, but not as fast as in a metal, where the atoms are even more tightly packed.

Effect of density At first glance, the speed of sound appears to be proportional to the density of the medium: the more space between particles of some mass, the slower the speed of sound in that medium. That's evident in the speeds of sound in air, water and metal. But how do density changes within one type of medium affect the speed of sound waves? In solids and liquids, the speed of sound scales as the reciprocal of the square root of the density: $$\text{speed} \propto \frac{1}{\sqrt{\rho}}$$ where ρ (rho) is the Greek symbol for density. Clearly, as the density increases, the speed decreases. Why? It's because higher density means more mass in a given volume. That means heavier particles, which have more inertia and are harder to accelerate as the wave propagates, resulting in a slower wave. It's a little different for gases. When a gas is heated at constant pressure (think of atmospheric pressure), the molecules speed up and have harder collisions, creating more space between them and a lower density. We'd expect to see a reduced speed of sound in a hotter gas, but what we actually see is the opposite: the speed of sound increases with temperature. This is where science gets fun — when we don't get the expected results. The reason for this is that the faster speeds of hotter gas particles hasten collisions, and thus hasten the propagation of sound waves.

Range of human hearing & the ear The human ear (when reasonably young) can hear sounds at frequencies between 20 Hz and 20 kHz, or between wavelengths of 17 m and 1.7 cm. When humans age, the mechanism of hearing deteriorates, particularly on the high-frequency end of the scale. Older adults lose the ability to hear very high frequencies starting at about age 50 or so. That's why kids can set their phone alerts at high frequency and hear the tone when their parents (or teachers) can't. The ear It's worth taking a look at how the ear works, because it's pretty ingenious. The diagram below is a highly schematic version of the ear, but the essential features important for hearing are all there. Sound waves enter the ear through the outer ear into the auditory canal, and hit a thin, tightly-stretched membrane, the ear drum (tympanic membrane). Sounds cause the drum to vibrate.
On the other side of the drum, which separates the outer ear from the inner ear, are three small bones – the smallest in the body, the malleus, incus and stapes, commonly referred to (because of their shapes) as the hammer, anvil and stirrup, respectively. The hammer is connected to the inner side of the drum, and thus vibrates when the drum does. Acting like a series of levers, the hammer, anvil and stirrup amplify the vibrations of the drum. The stirrup is in contact with fluid (mostly water) in the snail-shaped cochlea. The cochlea (the word comes from Greek and Latin and means "spiral") is an ingenious device for separating frequencies of sound. Lower-pitched (long-wavelength) sounds travel all the way through the spiral of the cochlea and are picked up by nerve endings there, while higher-frequency sounds don't get as far before being translated into a nerve impulse. The auditory nerves lead to the brain where sounds are processed and perceived. The ear is a transducer, a device that converts mechanical sound waves (longitudinal waves that impart kinetic energy to the ear drum) to electrical signals in the brain. Microphones and speakers It's not surprising that a microphone works in a way very similar to a biological ear (see diagram). When sound hits a microphone, it vibrates a thin membrane, usually a plastic or metal foil. A wire coil is coupled to that membrane in one of a variety of ways. A permanent magnet is inserted into the coil, and the coil moves back and forth across the magnet as the membrane responds to sound. We know (or you will eventually study) that when a coil of wire is moved through a magnetic field, a current is generated in the wire through electromagnetic induction. The two ends of the wire forming the coil are fed to an amplifier circuit which converts the current to a useful signal, say for broadcasting or recording. Speakers work in a similar way, just in reverse. An amplifier circuit delivers a current through a coil of wire, which produces a small magnetic field that alternately opposes and aligns with the field of a permanent magnet. This produces a motion in the coil, which in turn vibrates a membrane, a reed or a large paper or fabric cone, depending on the range of pitches that the speaker is designed to produce. propagate: To propagate in this context is to move forward and to spread out. It can also mean to spread an idea or rumor, or to continue a plant or animal gene line in breeding.
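As a rough illustration of the induction idea, the sketch below models a moving-coil microphone: the coil rides on the membrane, so the induced voltage tracks the coil's velocity (Faraday's law). The tone frequency, membrane excursion and the lumped coupling constant k are all made-up illustrative numbers, not measured microphone parameters:

```python
# Toy model of a moving-coil microphone: the membrane (and attached coil)
# oscillates sinusoidally; the induced EMF is proportional to coil velocity.
import numpy as np

f = 440.0         # Hz, membrane driven by an A4 tone
amplitude = 1e-5  # m, tiny membrane excursion (illustrative)
k = 50.0          # V per (m/s), assumed lumped coil/magnet coupling constant

t = np.linspace(0.0, 2.0 / f, 1000)        # two periods of the tone
x = amplitude * np.sin(2 * np.pi * f * t)  # membrane displacement
v = np.gradient(x, t)                      # membrane (coil) velocity
emf = k * v                                # induced voltage ~ velocity

print(f"peak coil velocity: {abs(v).max():.4f} m/s")
print(f"peak induced EMF:   {abs(emf).max():.4f} V")
# Doubling f doubles the peak velocity (2*pi*f*amplitude) and hence the EMF:
# higher-pitched sounds of the same excursion induce larger voltages.
```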
Enhancing laser speckle reduction by decreasing the pitch of a chiral nematic liquid crystal diffuser

David J. Hansford, Yihan Jin, Steve J. Elston & Stephen M. Morris

Scientific Reports volume 11, Article number: 4818 (2021)

The artefact known as speckle can plague numerous imaging applications where the narrow linewidth of laser light is required, which includes laser projection and medical imaging. Here, we report on the use of thin-film chiral nematic liquid crystal (LC) devices that can be used to mitigate the influence of speckle when subjected to an applied electric field. Results are presented which show that the speckle contrast (a quantitative measure of the presence of speckle) can be significantly reduced by decreasing the pitch of the chiral nematic LC from 2700 to 244 nm. Further reduction in the speckle contrast can be observed by operating the diffuser technology at a temperature close to the chiral nematic to isotropic transition. At such temperatures, we observe a simultaneous improvement in the transmission of light through the device and a decrease in the electric field amplitude required for the minimum speckle contrast value. We conclude by presenting a laser projected image of the 1951 USAF target with and without the LC device to demonstrate the visual improvement as a result of the speckle reduction. Speckle is a well-studied phenomenon that occurs when a coherent, or at least partially coherent, beam of light is scattered either by an optically rough reflecting surface or by travelling through a material with a random variation in its refractive index. Most surfaces can be considered as being optically rough, with high quality mirrors being a notable exception. Each point on a rough surface can be treated as a secondary source of light that contributes to the reflected light field. An observer with a finite aperture samples this optical field at a point in space and the intensity that is observed is then a summation of the complex optical fields (consisting of both amplitude and phase information) at that position. The variation in amplitude and phase that arises due to path differences between the observer and these 'secondary emitters' leads to a random amount of constructive and destructive interference, which in turn causes a random noise pattern across the observed light field.
This granular intensity profile has been given the name 'speckle'1. Speckle can be observed in a wide range of applications either with (subjective) or without (objective) the use of image-forming optical configurations. Technological applications in which speckle can be observed include laser projectors2 and optical coherence microscopy3 where, in both cases, it is considered an unwanted feature that should be removed. The degree of speckle present in an image with uniform average intensity can be quantified by the speckle contrast C of the interference pattern. This is found by dividing the standard deviation of the intensity values σL by the average intensity value Ī, and is effectively the reciprocal of the signal-to-noise ratio1, $$C = \frac{\sigma_{L}}{\bar{I}}$$ To reduce the speckle contrast, two or more statistically independent, and at least partially decorrelated, speckle patterns need to be superimposed. Under these conditions the random intensity fluctuations can be 'averaged out' across the image, depending upon the number of patterns used. Generally, speckle reduction techniques can be classified according to whether the statistically independent speckle patterns are created instantaneously or time sequentially. Time sequential methods take advantage of the finite integration time of the observer. For example, the human eye has an integration time of approximately 50 ms4. The techniques that can be employed to reduce the speckle contrast can also be further classified by the method by which these speckle patterns are mutually decorrelated. Examples of four different approaches that have been successfully employed include spectral, spatial, angular and polarisation diversification. These different methods are not mutually exclusive and can be used in combination to further reduce speckle contrast: the total speckle reduction is typically the product of the speckle reduction of each method, assuming that each method produces speckle patterns that are statistically independent. One of the most common forms of speckle reduction involves passing a beam of coherent light through a rotating ground glass diffuser (RGGD). In this method, the diffuser creates a time-varying spatially-random phase perturbation across the beam leading to angular decorrelation that can reduce speckle contrast to values below C = 0.05 (refs. 5,6,7). Researchers have also considered combatting speckle noise by developing laser sources that have been engineered to have a low spatial coherence, and which have shown considerable promise when used to generate full-field images8. However, such an approach requires the use of very specific light sources (e.g. random lasers) and is not necessarily an optical component that could be retrofitted into existing imaging and projection display systems. Alongside optical methodologies (i.e. ones that involve directly altering the coherence of the illumination source), there is also a growing body of research devoted to the development of numerical-based techniques that can emulate the process of speckle decorrelation. Such techniques achieve a reduction in the speckle contrast by computing a set of uncorrelated speckle patterns from a single recording, which when summed together follow the same dependence on the number of averaged independent speckle patterns, N, that is observed using optical techniques: namely, \(1/\sqrt N\).
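Both the speckle-contrast definition above and the 1/√N averaging rule are straightforward to verify numerically. The following Python/NumPy sketch simulates fully developed speckle with the standard random-phasor (circular complex Gaussian) model; the pattern size and the values of N are arbitrary illustrative choices, not parameters from this paper:

```python
# Numerical check of C = sigma_I / mean_I and the 1/sqrt(N) averaging rule.
# Fully developed, polarized speckle is simulated as the intensity of a
# circular complex Gaussian field (the random-phasor-sum model).
import numpy as np

rng = np.random.default_rng(0)

def speckle_pattern(shape=(512, 512)):
    """Intensity of one fully developed speckle pattern (contrast ~ 1)."""
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(field) ** 2

def speckle_contrast(intensity):
    """The definition above: standard deviation over mean of the intensity."""
    return intensity.std() / intensity.mean()

for n in (1, 4, 16, 64):
    avg = np.mean([speckle_pattern() for _ in range(n)], axis=0)
    print(f"N = {n:2d}:  C = {speckle_contrast(avg):.3f}"
          f"  (1/sqrt(N) = {1 / np.sqrt(n):.3f})")
```

For a single pattern the printed contrast is close to 1, and averaging N statistically independent patterns drives it down as 1/√N, mirroring the behaviour exploited by time-varying diffusers.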
For technologies such as digital holography, which is important for 3-dimensional coherent imaging, this alternative approach has proved to be particularly successful9,10,11,12. Nonetheless, optical-based techniques do have the advantage that no computational time is required to calculate a set of uncorrelated speckle patterns on the fly. Liquid crystals (LCs) are a potentially useful material for speckle reduction as the optical properties can be altered externally with an electric field leading to phase, polarisation or intensity variation of the incident light that fluctuates in both time and space. Varying these properties as a function of time results in a time variation of the speckle spot intensities and positions, which leads to the creation of multiple, statistically independent speckle patterns that can then be summed together over a finite integration time so as to reduce the perceived appearance of speckle. This approach is equivalent to the time-averaging techniques that have been developed by research teams that have employed the use of colloidal-based material systems rather than those fabricated from liquid crystalline materials13,14. To date, there have been numerous reports demonstrating speckle reduction using LC devices and materials. For example, previous studies have shown a reduction in the speckle contrast using: (1) a nematic LC with a photo-isomerisable alignment layer that enables two orthogonal polarisation states to be created, thereby leading to polarisation diversity15; (2) a chiral smectic ferroelectric LC (FLC) with an alternating field applied that creates a spatially and temporally random refractive index across the cell16,17,18; (3) surface and/or polymer stabilised FLC19 for polarisation diversity; (4) nematic LCs mixed with photocurable monomers for light scattering20, and (5) the use of an LC spatial light modulator (SLM) that applies multiple random phase masks corresponding to the Hadamard orthogonal function to create statistically independent speckle patterns at the observer21. These techniques, whilst they show promise, are not without their limitations, such as only a small amount of speckle reduction, complex electric field profiles, and/or the use of expensive and bulky components (such as an SLM). In a previous study, we showed that a positive dielectric anisotropy chiral nematic LC with a pitch of 250 nm and doped with an ionic dopant (cetyltrimethylammonium bromide, CTAB) can cause a spatially and temporally random phase perturbation to incident laser light when operated in a dynamic scattering mode by subjecting the LC to a low frequency (< 100 Hz) square wave electric field of sufficiently large amplitude22. The random perturbation to the phase of light can, in turn, result in a reduction in the observed speckle contrast. The purpose of this paper is to demonstrate that speckle reduction can be observed in chiral nematic LC mixtures without the need for an ionic dopant such as CTAB and to show that the speckle contrast can be reduced to lower values than those reported previously22. Furthermore, we consider how the pitch of the chiral nematic LC helix influences the magnitude of the speckle reduction as well as the electric field amplitude and applied frequency required to achieve optimum speckle reduction. The results show that a reduction in the pitch, p, from 2700 to 244 nm leads to a reduction in the speckle contrast.
However, the transmission of the laser light through the device reaches a minimum for pitch values of p = 0.5–1 μm before increasing with a further reduction in the pitch. The results indicate that there is an inverse relationship between the pitch and both the electric field amplitude and frequency required for peak device performance. Finally, we consider how changes in the temperature can further reduce the speckle contrast and conclude by presenting static images recorded from a monochromatic laser imaging system that demonstrate the noticeable improvement in the projected images when using these LC devices. Importantly, no loss in the resolution of the optical imaging system is observed. Reducing laser speckle using the dynamic scattering mode An example of how the speckle pattern and the corresponding speckle contrast changes with the use of a chiral nematic LC device that is subjected to different electric field amplitudes is presented in Fig. 1. In this case the chiral nematic LC mixture consists of the nematic host (E7, Synthon Chemicals Ltd.) and 2.5 wt% of the high twisting chiral dopant, BDH1281 (Merck). This mixture was found to have a pitch of p = 517 nm at T = 25 °C. For this mixture, the critical electric field, Ec, required for the chiral nematic-nematic transition was found to be Ec = 10 Vµm−1. Polarising microscope images are shown for four different electric field amplitudes, E, at the same applied frequency (f = 40 Hz) (Fig. 1a). The first image shows a static focal conic state at E = 0 Vµm−1, which then transforms to a turbulent, dynamic scattering state with the application of an electric field (examples are shown for amplitudes of E = 4 and 8 Vµm−1). The final image shown (far right) is for a homeotropically-aligned nematic state at 10 Vµm−1, where the helical structure has been unwound by the large field amplitude. The image appears dark, in accordance with a nematic LC aligned in the homeotropic state when viewed between crossed polarisers, except for the bright circular regions which correspond to the distortion in the director field around the spacer beads. Speckle reduction characteristics of a chiral nematic LC device (E7 + 2.5 wt% BDH1281, pitch 517 nm) when subjected to different amplitudes of a square-wave electric field at a frequency of 40 Hz (a) Optical polarising microscope images of the LC device (the 20 μm-diameter spacer beads can be seen in all images, although they are most obvious for the image recorded at E = 10 Vµm−1). Scale bars are 500 µm. (b) Corresponding images of the speckle pattern formed on a white screen after the beam from a He–Ne laser has passed through the LC device. The same electric fields are applied as those used in (a). Images were captured by the CCD camera with a 50 ms exposure time and normalized by the average intensity. (c) Plots of the intensity distribution of the central line of pixels across the width of each image of the speckle pattern presented in (b). All measurements were carried out at 25 °C using a 20 µm-thick device. Corresponding images of the speckle pattern recorded for these electric field conditions are presented in Fig. 1b, with the calculated speckle contrast, C, appearing in the inset of each figure. It can be seen that for the static focal conic state and the homeotropic nematic state, the value for the speckle contrast is the same as that recorded for the laser without the LC device (i.e. C = 0.625). For both of these field conditions, the granular intensity pattern is very noticeable.
However, for electric field amplitudes that correspond to the turbulent dynamic state, the speckle contrast reduces first to C = 0.596 at E = 4 Vµm−1 before decreasing further to C = 0.178 when the amplitude is increased to E = 8 Vµm−1. Line profiles of the intensity can be found in Fig. 1c for each speckle pattern, where it can be seen that the variation in the intensity flattens out for E = 8 Vµm−1, as expected from the smaller value of the speckle contrast and the more uniform intensity profile seen in the image presented in Fig. 1b. In accordance with our previous study for ionic mixtures22, we find that the speckle contrast reduces with increasing field amplitude up until the helical structure unwinds, at which point all dynamic scattering ceases. To identify the electric field conditions (amplitude and frequency) for which the speckle contrast is a minimum, we carry out a series of measurements across electric field amplitudes ranging from E = 0 Vµm−1 to E = 20 Vµm−1 and from frequencies ranging from 0 to 100 Hz. The variation in the speckle contrast with field conditions is shown in Fig. 2 in the form of a colourmap, where the legend on the right-hand side depicts the value of the speckle contrast, C. Red sections in the map represent little-to-no speckle reduction (speckle contrast is virtually unchanged, i.e. C ≈ 0.625) while dark blue regions represent high levels of speckle reduction (speckle contrast, C < 0.2). From the map in Fig. 2a, which shows the results for a sweep of the field conditions in 2 Vµm−1 and 20 Hz increments, it is clear that the minimum in the speckle contrast (peak speckle reduction) occurs at an electric field amplitude of E = 8 Vµm−1 and a frequency of f = 40 Hz (highlighted by the black box). At this amplitude and frequency, we find that the speckle contrast has reduced from C = 0.625 to C = 0.19. Colour maps showing the variation in the speckle contrast, C (legend shown on the secondary axis) for different amplitudes and frequencies of a square wave electric field applied to the chiral nematic LC device (E7 + 2.5 wt% BDH1281, pitch 517 nm). (a) Electric field and frequency increments of 2 Vµm−1 and 20 Hz, respectively. (b) Higher-resolution measurements: electric field and frequency increments of 0.2 Vµm−1 and 5 Hz, respectively. The black boxes in (a) and (b) represent the electric fields corresponding to the smallest speckle contrast ratio, C (maximum speckle reduction). All measurements were carried out at 25 °C using a 20 µm-thick device. Having established the approximate amplitude and frequency for minimum speckle contrast, the next step was to study the cell at a higher resolution of electric field conditions close to the region where the peak speckle reduction was observed. Towards this end, the electric field amplitude was varied from 7.8 to 9.0 Vµm−1 in increments of 0.2 Vµm−1, and the frequency was varied from 30 to 50 Hz in increments of 5 Hz. The resulting colourmap is shown in Fig. 2b. With this finer resolution, the minimum speckle contrast was now found to occur more precisely at E = 8.4 Vµm−1 and f = 40 Hz, where the speckle contrast was C ≈ 0.15. The importance of the pitch We now consider whether the magnitude of the helical pitch has any impact on the minimum speckle contrast that can be achieved. For this study, a range of pitch values is presented for mixtures consisting of the nematic host E7 and different concentrations by weight of the chiral dopant, BDH1281.
The influence of a change in pitch on the electric field amplitude and frequency required for maximum speckle reduction is explored, and for each pitch we determine the minimum speckle contrast value that can be achieved. Figure 3 shows the average speckle contrast measured during a 5-min steady state test at the electric field amplitude and frequency for which the lowest value of the speckle contrast was observed. The data show that for the chiral nematic samples with a pitch value below 500 nm the speckle contrast is relatively independent of the pitch and is approximately constant at a value of C ≈ 0.15. However, as the pitch is increased above this value the speckle contrast begins to increase with the pitch, rising to a speckle contrast of C ≈ 0.4 for p = 2700 nm, which represents only a 36% reduction in the speckle. Evidently, the data demonstrate that by reducing the pitch of the chiral nematic LC from 2700 to 244 nm the speckle contrast (speckle reduction) can be decreased (increased). Plots of the minimum speckle contrast (a) and transmission (b) as a function of the pitch of the chiral nematic LC device. Measurements were carried out at 25 °C using a 20 µm-thick device for mixtures consisting of E7 and the chiral dopant BDH1281 (0.5–6.4 wt%). The solid red lines represent a linear interpolation to guide the eye. For electrohydrodynamic instabilities (EHDI) in chiral nematic LCs the size of the domains that exist at lower electric field amplitudes has been shown to be of the order of the pitch23. Therefore, one might expect that a mixture with a shorter pitch would exhibit more spatial variation of the birefringence across the layer, which would lead to increased scattering. Also, mixtures with long pitches (> 4 μm) typically exhibit an electric field-induced fingerprint texture rather than the focal conic state24,25, which will result in less scattering. This is consistent with the limited speckle reduction achieved for the long pitch mixtures considered in this study. In Fig. 3b, the transmission, which is defined as the ratio of light intensity at the projection screen to that measured without an LC device in place, is presented as a function of the pitch. The values shown are the average transmission measured over the 5-min steady state test at the electric field conditions for which a minimum in the speckle contrast was observed. From this plot it can be seen that as the pitch decreases from a value of p = 2700 nm, the transmission reduces with the pitch, reaching a minimum around p = 500 nm. However, a further shortening of the pitch results in an increase in the transmission. It is worth noting that the pitch corresponding to the minimum transmission does not coincide with an overlap of the band-gap with the wavelength of the laser source. Specifically, the mixture for which the minimum transmission was observed was E7 + 2.5 wt% BDH1281, which was found to exhibit a band-gap from 785 to 902 nm. This is at longer wavelengths than the He–Ne laser used for the measurements of the speckle contrast (λ = 632.8 nm). The same reasoning applied to the speckle contrast could also be applied to the transmission behaviour observed for longer pitch mixtures. However, the existence of a transmission minimum is more difficult to explain. When considering using an LC cell for speckle reduction in imaging, maximum transmission would be highly desirable while retaining minimum speckle contrast.
As a result, the choice of the pitch value is rather important as it is desirable to obtain the smallest possible speckle contrast value whilst maximising the transmission (Fig. 3b). Therefore, chiral dopants with a high helical twisting power should be considered for future studies. Results for the dependence of the electric field amplitude corresponding to the minimum speckle contrast as a function of the pitch are presented in Fig. 4a. Values for the electric field amplitude and frequency were obtained from the high-resolution scans in accordance with the process demonstrated in Fig. 2b. The results demonstrate that there is an inverse relationship between the pitch of the chiral nematic LC and the electric field amplitude required for the minimum speckle contrast (maximum speckle reduction). At the longest pitch value tested (p = 2700 nm), E ≈ 2 Vµm−1, which increases to E = 20 Vµm−1 for the shortest pitch considered in this work. Any decrease in the pitch results in an increase in the electric field amplitude as \(E \propto 1/p\). Plot of the amplitude (a) and square-wave frequency (b) of the applied electric field at the minimum speckle contrast (maximum speckle reduction) as a function of the pitch. Each cell was 20 μm-thick and the cell temperature throughout the measurements was held at T = 25 °C. Data points represent the measured values and the dashed black lines are fits of the form ax−1. Theoretical studies of EHDI in chiral nematic LCs that are subjected to ac electric fields applied parallel to the helical axis were carried out many years ago by Helfrich and separately by Hurault, assuming that the cell gap is significantly larger than the helical pitch26,27. Subsequent experimental work by Rondelez and co-workers supported the results from the theoretical studies28. In these collective works, it was predicted that there is an inverse relationship between the threshold voltage for the onset of EHDI and the square root of the pitch. However, we find that the field required to achieve minimum speckle contrast is considerably larger than the threshold field for EHDI. Speckle contrast maps for each mixture indicate that the field required for the minimum speckle contrast was always just below the threshold field needed for the chiral nematic-nematic phase transition. The relationship between the threshold field required to unwind the helix and the pitch is E ∝ p−1, which is the same as the relationship shown in Fig. 4a. The dependence of the frequency that corresponds to the lowest value of the speckle contrast is presented as a function of the pitch of the chiral nematic LC in Fig. 4b. It is shown that the frequency for maximum speckle reduction is also approximately inversely proportional to the pitch. Frequencies of the applied field below 10 Hz were not tested because at such low frequencies, DC effects from carrier injection become significant. In the work of Hurault a relaxation frequency, ω, for positive dielectric anisotropy chiral nematic LCs is provided, which follows the form27 $$\omega = \left(\frac{\pi}{d}\right)\left(\frac{2\pi}{p}\right)\left(\frac{\overline{K}}{\gamma}\right)$$ where \(\overline{K}\) is the average value of the Frank elastic constants, d is the cell thickness, p is the pitch and γ is the rotational viscosity coefficient. Using typical values for E7 of \(\overline{K}\) = 14 pN and γ = 0.15 Pa·s along with d = 20 μm and p = 0.5 μm, a relaxation frequency of f ≈ 50 Hz is obtained.
This value is remarkably close to the frequency recorded for the minimum speckle contrast of the chiral nematic LC mixture consisting of E7 and BDH1281 with a pitch of p = 500 nm, which was found to be f = 40 Hz. Furthermore, this expression follows the same inverse relationship between frequency and pitch as observed in Fig. 4b. The sample that was found to give the largest reduction in the speckle contrast in this study was the chiral nematic LC doped with 4.7 wt% of the high twisting power chiral dopant BDH1281, which had a pitch of p = 317 nm at T = 25 °C. Specifically, this mixture was found to be capable of reducing the speckle contrast to C = 0.141 at E = 13.6 Vµm−1 and f = 75 Hz: a reduction of 77% from C = 0.625. The corresponding transmission through the device under these field conditions was found to be 16.0%. However, by increasing the concentration of chiral dopant further, leading to a shorter pitch (5.1 wt%, p = 273 nm), it was found that the speckle contrast reduced to almost the same level (C = 0.142) but with an improved transmission of 21.9%. Camera integration time The underlying principle governing the use of the LC device is that the speckle contrast can be reduced if a number of speckle patterns are superimposed upon each other within the integration time of the detector, in this case the CCD camera. As already stated, passing the coherent laser beam through an LC cell undergoing EHDI has the effect of applying a time-varying spatially random phase perturbation on the incoming wavefront. Goodman1 showed that for N statistically independent speckle patterns superimposed during one integration period, the speckle contrast is reduced by a factor of \(1/\sqrt N\). For our device, this would correspond to a relationship of the form of \(1/\sqrt \tau\), where \(\tau\) is the integration time of the detector. This is because there is a direct relationship between the integration time and the number of statistically independent speckle patterns observed by the camera as a result of LC director fluctuations in the device; the longer the time, the more patterns that are captured. The relationship between the speckle contrast and the integration time of the detector can be observed clearly from our experimental data presented in Fig. 5. The cell was maintained under constant electric field conditions for each measurement and the observed speckle contrast is seen to reduce as the integration time of the CCD camera increases, following a \(1/\sqrt \tau\) dependency. Speckle contrast measured after passing through a chiral nematic LC device (E7 + 4.7 wt% BDH1281, pitch 310 nm) under a square wave electric field of amplitude 13.6 Vµm−1 and a frequency of 75 Hz at a range of camera integration times. The cell thickness was d = 20 µm, cell temperature T = 25 °C. The data points were obtained from measurements and the solid red line is a fit of the form ax−0.5 + b, which shows that C reduces approximately as \(1/\sqrt \tau\). Inset speckle images shown for (a) 1 ms, C = 0.555, (b) 10 ms, C = 0.277, (c) 2 s, C = 0.108 integration time and speckle contrast values, respectively. Examples of the speckle contrast maps for four different camera exposure times can be seen in Figure S1 in the Supplementary Information. The results show that the range of electric field amplitudes and frequencies over which the speckle contrast significantly decreases narrows with a reduction in the camera exposure time. In accordance with Fig.
5, the minimum value of the speckle contrast is also seen to increase with a decrease in the exposure time. For our study, we are primarily concerned with an experimental configuration that mimics the response of the human eye and therefore a fixed integration time of 50 ms has been selected for our measurements. However, if the integration time is not a fixed quantity, then it is important to note that the minimum speckle contrast that can be achieved for a specific LC device operated at a single temperature depends upon three factors: the electric field amplitude, frequency of the applied field and the camera/detector integration time. Thus far, it has been demonstrated that the magnitude of the pitch can significantly influence the minimum speckle contrast value that can be achieved using the LC device. We now consider the influence that the environmental temperature has on the amount of speckle reduction. Colour maps for four different temperatures are presented in Fig. 6, ranging from T = 25 °C to T = 55 °C (just below the clearing temperature of the chiral nematic LC mixture). As the temperature is increased from 25 to 55 °C we see that the range of field amplitudes and frequencies for which a significant speckle reduction is observed (indicated by the blue region in the figure) is increased. For example, at a temperature of T = 25 °C we see that the lowest values for the speckle contrast occur at relatively low frequencies and relatively large electric field amplitudes. To illustrate this point, if we consider a frequency of f = 100 Hz we see that amplitudes of around E = 14–18 Vµm−1 are required to reduce the speckle contrast to C < 0.2. However, by increasing to T = 55 °C we see that at the same frequency (f = 100 Hz), the field amplitude required to reduce the speckle contrast to values of C < 0.2 has dropped to E = 8–15 Vµm−1. Speckle contrast maps measured for the chiral nematic LC device (E7 + 5.1 wt% BDH1281, pitch 273 nm) for four different operating temperatures. The cells tested were nominally 20 µm-thick and the cell temperature throughout measurements was (a) T = 25 °C, (b) T = 35 °C, (c) T = 45 °C, (d) T = 55 °C. In terms of the variation in the minimum speckle contrast value, the normalised speckle contrast (normalised to the value at T = 25 °C) is plotted as a function of the temperature in Fig. 7a. In this case it can be seen that C reduces in magnitude by almost 15% with an increase in temperature. It is well-known that the birefringence reduces with increased temperature29 and, as a rule of thumb, a decrease in the birefringence leads to a decrease in the speckle reduction as a result of reduced light scattering. However, the viscosity also decreases with an increase in temperature30, which enables the local director to reorient more freely during the EHDI process thus increasing the rate of fluctuations across the device. This effect is analogous to increasing the exposure time of the camera, as shown in Fig. 5, as a decrease in the viscosity would serve to increase the number of independent speckle patterns generated per unit time, which would reduce the magnitude of the speckle contrast. The temperature dependence of (a) minimum speckle contrast normalised to the value of C at 25 °C, (b) transmission, (c) electric field amplitude and (d) frequency at the minimum speckle contrast conditions. Devices were nominally 20 µm-thick and contained the chiral nematic LC mixture, E7 + 5.1 wt% BDH1281, which had a pitch of p = 273 nm at 25 °C. 
Encouragingly, alongside a decrease in the speckle contrast we also observe an increase in the transmission (Fig. 7b), indicating greater light throughput, a decrease in the required electric field amplitude from 16 to 13 Vµm−1 (Fig. 7c) and an increase in the drive frequency from 100 to 200 Hz (Fig. 7d). All of these changes are favourable in terms of device performance: in the latter case the increase to a higher value of the drive frequency ensures that the device can be operated at electric field conditions away from the low frequencies where the unwanted build-up of ionic species might occur. Projected image To illustrate, visually, the speckle reduction improvement afforded by the LC device used in this study, photographs of laser projected images are shown in Fig. 8. For this experiment, the He–Ne laser is propagated through the 1951 USAF target and the resultant image is projected onto a white screen using the set-up presented in Figure S1. Images were recorded both with and without the LC device in the path of the laser beam. In the upper panel (Fig. 8a), a section of the USAF target is shown without the LC device in the path of the laser. Here it can be seen that while the horizontal and vertical bars and the number 3 can be identified, the intensity is extremely nonuniform within the borders of these features due to the presence of speckle, which clearly deteriorates the quality and visual appeal of the image. From our measurements, we find that the speckle contrast in this image has a value of C = 0.625, which is consistent with the \(1/\sqrt 2\) expected for this experimental configuration. Images captured by the CCD of the white screen when an image of the 1951 USAF target is generated without (a) and with (b) a chiral nematic LC device (E7 + 4.7 wt% BDH1281, pitch = 310 nm). (c) Image showing that the resolution is preserved during the speckle reduction process. The left-hand and right-hand images show the speckle pattern for the case when the LC speckle reducer is switched off or on, respectively. An image of a microscope stage micrometer can be seen showing a scale bar from 0 to 100 µm. The device thickness was nominally 20 µm and measurements were carried out at T = 25 °C. With the insertion of the LC device (Fig. 8b), the features are noticeably improved when the device is operated at the electric field conditions for which maximum speckle reduction is observed (i.e. a speckle contrast of C = 0.14). The 78% reduction in the speckle contrast clearly has an observable effect on the quality of the image that is produced. Further improvement is expected through material/mixture optimisation and it is anticipated that such devices could lead to speckle contrast values below C = 0.1, thus approaching the limit beyond which speckle can no longer be perceived (C = 0.04). Maintaining the resolution of an imaging or display system whilst reducing the speckle noise is of paramount importance for any speckle reducing technology. To verify that there is no loss of resolution when our LC speckle reducer is employed, we have recorded images of the speckle pattern when a microscope stage micrometer is imaged onto the screen. The experimental results are presented in Fig. 8c, where it can be seen that the smallest features on the micrometer are clearly discernible when the LC diffuser is switched on and that the ability of the optical system to resolve the lines on the micrometer scale is unaffected by the LC device.
In contrast, the lines and features on the micrometer slide cannot be seen clearly when the LC device is switched off due to the noise introduced by the speckle pattern. We conclude this study by considering how the performance of the LC speckle reducer presented here compares with other optical diversification techniques. In this work, we have shown that, under the right experimental conditions, the speckle contrast can be reduced from C = 0.625 to C = 0.14 (a 78% reduction in speckle). This reduction compares well with results reported for other optical time-varying techniques such as those applied to digital holography as well as those involving the use of a spatial light modulator, where an 80% reduction was observed31,32,33. Our results also compare very favourably with previous work involving liquid crystalline materials where speckle reduction is typically of the order of 50%18. It is important to note that there is further scope, through material optimisation, to reduce the speckle contrast. For example, through the use of additional scatterers or by decreasing the response time of the LC it may be possible to obtain speckle contrast values below C = 0.1. In summary, we have shown that by varying the pitch of a chiral nematic LC (based upon the composition of E7 + chiral dopant) the speckle contrast (when measured using a camera integration time of 50 ms) is reduced from C ≈ 0.4 for p = 2700 nm to C ≈ 0.15 for p = 270 nm. A further reduction in the minimum speckle contrast value can be obtained by increasing the device temperature to close to the clearing temperature of the mixture. By combining short pitch chiral nematic LC devices with operating temperatures approaching the clearing temperature, it is possible to reduce the speckle contrast by more than 80% from the value of C = 0.625 recorded when no LC device was used, i.e. the value obtained corresponding to just the illumination of the He–Ne laser on the white screen. Configuring the LC device as outlined in this study enables speckle contrast values of C ≈ 0.1 to be reached, which is approaching the limit for speckle not to be observed. It is also found that operating at higher temperatures leads to the complementary benefits of a 36% increase in the light transmission through the device and a 20% decrease in the electric field required to generate the minimum speckle contrast. The results in this study provide an important guide as to the design and development of thin-film speckle reducers based upon LCs for deployment in laser projection and imaging systems. The mixture components used in this work were weighed using a precision micro-balance (Mettler Toledo AB104-S) with an accuracy of ± 0.05 mg. For this work the well-characterised eutectic mixture, E7 (Synthon Ltd), was chosen as the nematic host as it is liquid crystalline at room temperature and its macroscopic physical properties such as the refractive indices, dielectric permittivities and elastic constants have been measured using a variety of techniques. To form a chiral nematic LC phase, the nematic host was dispersed with a low concentration by weight of the high twisting power chiral dopant BDH1281 (Merck), which had a helical twisting power of β = 72 µm−1 in E7. For this work a number of mixtures were prepared to study the influence of the pitch with concentrations ranging from 0.5 to 6.4 wt% of the chiral dopant.
Each mixture was then heated to ∼ 10 °C above the clearing temperature and held at this temperature for at least 12 h to achieve complete thermally assisted diffusive mixing of the components. The resultant mixtures were found to exhibit a right-handed helix with pitch values ranging from p = 244 to p = 2700 nm. For the mixtures that exhibited a reflection band within the 350–1100 nm spectral range at a temperature of T = 25 °C (determined using an ultraviolet–visible spectrometer, Agilent Cary 8454 UV–Vis), the pitch was calculated from a combination of the birefringence (Δn) of E7 and the width of the photonic band gap, Δλ, according to the relationship p = Δλ/Δn. For mixtures that had a photonic band gap outside of the spectral range of the UV–Vis spectrometer (i.e. at longer wavelengths), the pitch was determined by extrapolating a linear plot of the inverse pitch (p−1) as a function of the concentration by weight (cw) (see Figure S2 in the Supplementary Information) as the two parameters are related through the expression p−1 = β·cw·ee, where β is the helical twisting power and ee is the enantiomeric purity, both of which remain constant for the same combination of nematic host and chiral dopant. In contrast to our previous study22, we found that the mixtures consisting of E7 + BDH1281 exhibit dynamic scattering without the need for additional ionic doping such as CTAB. After thermal mixing, the LC mixture was then filled into thin glass cells using capillary action. The glass cells used in this work were commercially-available INSTEC LC2 cells that have a nominal cell gap of d = 20.0 μm and consisted of a transparent electrode coating of Indium Tin Oxide (ITO) (thickness 23 nm) that was patterned onto the inner surfaces of the glass substrates creating a 5 mm × 5 mm active region at the centre of the cell. On top of each ITO layer, a rubbed polyimide alignment layer was deposited onto the inner surface of each substrate. The rubbing directions were oriented such that they were aligned antiparallel to one another. The thickness of each glass cell was determined by the spacer beads that were spray-coated across the entire glass substrate including the active electrode areas. The actual thickness was determined using an interference method with white light illumination at normal incidence. It should be noted that for future work it would be more desirable for the spacer beads to be confined to regions outside of the active area so as to avoid any unwanted increase in the speckle. Examination of the devices on a polarising optical microscope (Olympus BX51-P) revealed that the components had been uniformly dispersed. Speckle characterisation The experimental setup used to measure the speckle contrast is shown in Figure S3. Briefly, a single mode, continuous wave, linearly polarised Helium–Neon laser (JDS Uniphase 1122P, λ = 632.8 nm) was used as the coherent light source to illuminate the LC device. This device was placed on a hot-stage connected to a controller (INSTEC mK1000) to ensure that it was maintained at a constant temperature throughout each experiment. Temperatures used in the study of the speckle contrast varied from 25 to 55 °C. The controller has a temperature resolution of 0.001 °C. An alternating electric field was generated inside the LC device using the dual-channel function generator (Tektronix AFG 3022) and voltage amplifier (FLC Electronics F10AD) described previously.
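As a quick sanity check on the relation p−1 = β·cw·ee quoted above, the short Python sketch below evaluates it for three of the mixtures, assuming ee = 1 (the paper does not quote the enantiomeric purity, so some deviation from the measured pitches is expected):

```python
# Sanity check of the pitch-concentration relation p^-1 = beta * c_w * ee for
# the E7 + BDH1281 mixtures. The enantiomeric purity ee is assumed to be 1.
BETA = 72.0  # helical twisting power of BDH1281 in E7, in um^-1 (from the text)

def pitch_nm(cw_percent, ee=1.0):
    """Pitch in nm from the chiral dopant concentration in wt%."""
    cw = cw_percent / 100.0
    return 1.0e3 / (BETA * cw * ee)  # 1/(um^-1) -> um, then um -> nm

# Measured pitches taken from the text of this paper.
for cw, measured in [(2.5, 517), (4.7, 317), (5.1, 273)]:
    print(f"{cw} wt%: predicted {pitch_nm(cw):.0f} nm, measured {measured} nm")
```

With ee = 1 the 5.1 wt% mixture comes out at about 272 nm against the measured 273 nm, while the other compositions deviate by a few tens of nanometres, consistent with an enantiomeric purity and birefringence-based pitch determination that are not exactly unity and exact, respectively.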
A 10 × microscope objective (Olympus UPlanFL N, NA = 0.3), placed directly after the LC device, was used to limit the divergence of the scattered beam transmitted through the LC cell before an absorptive neutral density filter (NDF) (Thorlabs NEK01) reduced the intensity of the beam observed at the camera (CCD) to ensure that the sensor was not over-exposed. At the same time, it was important to retain the maximum use of the full dynamic range available to avoid the loss of accuracy caused by discretisation errors. The camera used in this work was a cooled, monochrome CCD (QImaging QICAM 12-bit, with 1392 × 1040 pixels, 4.65 μm × 4.65 μm pixel size and a 1/2" optical format). It has been reported that a minimum bit-depth of 6 bits is adequate for sampling the speckle pattern4, which is easily satisfied by our system. The camera was cooled to reduce the dark noise that would otherwise artificially increase the measured speckle contrast. The camera was used with unity gamma correction to ensure a linear relationship between the optical intensity and pixel value. Without this condition, a change in optical intensity would result in an unwanted change in the measured speckle contrast. Our objective was to design the system to resemble, as closely as possible, the response of the human eye. This is nontrivial as the integration time of the eye differs greatly at distinct positions on the retina. For example, rod cells are very sensitive to low levels of light and consequently integrate over a longer period of time than the less sensitive cones. Moreover, the integration time for the photoreceptor cells decreases with increased illumination intensity and is found to vary considerably across the visible spectrum34. Consequently, there is no single value of the camera integration time that would perfectly match human perception. However, for the level of luminance considered in this work (e.g. L = 48 cd/m2), photopic vision (cones) dominates over scotopic vision (rods) at the retina and the eyeball moves to ensure that light from the point of primary interest falls onto the region where there is a high density of cone receptors. Furthermore, it has been reported that photopic vision at long wavelengths has a temporal integration time of approximately 50 ms35. Therefore, as the eye directs light from its point of interest onto the cone-dense fovea centralis, this integration time was selected unless otherwise stated. To benchmark our measurements against existing literature, the speckle contrast of a highly coherent He–Ne laser without a diffuser was tested and found to be C = 0.625 ± 0.008, close to the theoretical value of 1/√2. Another measurement was taken with an RGGD in the diffuser plane tested at 40–200 rpm. It was found that, in this case, C = 0.04 ± 0.003, which is in good agreement with previously published results for similar devices2. Unless stated otherwise, all values for the speckle contrast are quoted to an accuracy of ± 0.003. Goodman, J. W. Speckle Phenomena in Optics: Theory and Applications (Roberts & Company, Oxford, 2006). Akram, M. N. & Chen, X. Speckle reduction methods in laser-based picture projectors. Opt. Rev. 23, 108–120 (2016). Huang, D. et al. Optical coherence tomography. Science 254, 1178–1181 (1991). Roelandt, S. et al. Standardized speckle measurement method matched to human speckle perception in laser projection systems. Opt. Express 20, 8770–8783 (2012). Shin, S. C. et al.
Removal of hot spot speckle on laser projection screen using both the running screen and the rotating diffuser. Displays 27, 91–96 (2006). Zheng, G. et al. Laser digital cinema projector. IEEE/OSA J. Display Technol. 4, 314–318 (2008). Li, J. Design of optical engine for LCOS laser display with rotated diffuser plate. Microw. Opt. Technol. Lett. 55, 138–141 (2013). Redding, B., Choma, M. A. & Cao, H. Speckle-free laser imaging using random laser illumination. Nat. Photonics 6, 355–359 (2012). Montrésor, S., Memmolo, P., Bianco, V., Ferraro, P. & Picart, P. Comparative study of multi-look processing for phase map de-noising in digital Fresnel holographic interferometry. J. Opt. Soc. Am. A 36, A59–A66 (2019). Memmolo, P. et al. Encoding multiple holograms for speckle-noise reduction in optical display. Opt. Express 22, 25768–25775 (2014). Bianco, V. et al. Strategies for reducing speckle noise in digital holography. Light Sci. Appl. 7, 1–16 (2018). Hincapie, D., Herrera-Ramírez, J. & Garcia-Sucerquia, J. Single-shot speckle reduction in numerical reconstruction of digitally recorded holograms. Opt. Lett. 40, 1623 (2015). Redding, B., Allen, G., Dufresne, E. R. & Cao, H. Low-loss high-speed speckle reduction using a colloidal dispersion. Appl. Opt. 52, 1168 (2013). Bianco, V., Marchesano, V., Finizio, A., Paturzo, M. & Ferraro, P. Self-propelling bacteria mimic coherent light decorrelation. Opt. Express 23, 9388 (2015). Sueda, K., Tsubakimoto, K., Miyanaga, N. & Nakatsuka, M. Speckle suppression of laser light using liquid crystals aligned by photoisomerization of dye molecules. Appl. Phys. Lett. 81, 5111 (2002). Andreev, A. A., Andreeva, T. B., Kompanets, I. N., Minchenko, M. V. & Pozhidaev, E. P. Speckle-noise suppression due to a single ferroelectric liquid-crystal cell. J. Soc. Inf. Display 17, 801–807 (2009). Kompanets, I. N., Andreev, A. L., Andreeva, T. B. & Minchenko, M. V. Speckle suppression by means of ferroelectric LC cell. SID Symp. Digest Tech. Pap. 41, 1065–1068 (2010). Andreev, A. L., Andreeva, T. B. & Kompanets, I. N. Speckle reduction due to using the electro-optical cell with helix-free FLC. SID 2014 Digest 31, 411–414 (2014). Furue, H., Terashima, A., Shirao, M., Koizumi, Y. & Ono, M. Control of laser speckle noise using liquid crystals. J. Appl. Phys. 50(9 PART 3), 09NE14 (2011). Furue, H., Sugimoto, Y., Iwami, K., Weng, W. & Ono, M. Control of laser speckle noise by using polymer-dispersed LC. Mol. Cryst. Liq. Cryst. 612, 245–250 (2015). Trisnadi, J. L. Hadamard speckle contrast reduction. Opt. Lett. 29, 11–13 (2004). Hansford, D. J., Fells, J. A. F., Elston, S. J. & Morris, S. M. Speckle contrast reduction of laser light using a chiral nematic liquid crystal diffuser. Appl. Phys. Lett. 109, 261104 (2016). Arnould-Netillard, H. & Rondelez, F. Electrohydrodynamic instabilities in cholesteric liquid crystals with negative dielectric anisotropy. Mol. Cryst. Liq. Cryst. 26, 11–31 (1974). Morris, S. M. & Coles, H. J. Chiral nematic liquid crystals and electric, magnetic, and mechanical fields. In Handbook of Liquid Crystals: 8 2nd edn (eds Goodby, J. W. et al.) (Wiley-VCH Verlag GmbH & Co, Hoboken, 2014). Sartirana, M. L., Valenti, B. & Bartolino, R. Elastic deformations and electrohydrodynamic instabilities in large pitch cholesteric liquid crystals under an electric field. Mol. Cryst. Liq. Cryst. 98, 321–347 (1983). Helfrich, W. Electrohydrodynamic and dielectric instabilities of cholesteric liquid crystals. J. Chem. Phys. 55, 839–842 (1971). Hurault, J. P. 
27. Hurault, J. P. Static distortions of a cholesteric planar structure induced by magnetic or ac electric fields. J. Chem. Phys. 59, 2068–2075 (1973).
28. Rondelez, F., Arnould, H. & Gerritsma, C. J. Electrohydrodynamic effects in cholesteric liquid crystals under ac electric fields. Phys. Rev. Lett. 28, 735–737 (1972).
29. Li, J. Refractive Indices of Liquid Crystals and Their Applications in Display and Photonic Devices. Doctoral Dissertation, University of Central Florida (2005).
30. Wu, S. T. & Wu, C. S. Experimental confirmation of the Osipov–Terentjev theory on the viscosity of nematic liquid crystals. Phys. Rev. A 42, 2219–2227 (1990).
31. Quan, C. G., Kang, X. & Tay, C. J. Speckle noise reduction in digital holography by multiple holograms. Opt. Eng. 46, 115801 (2007).
32. Baumbach, T., Kolenović, E., Kebbel, V. & Jüptner, W. Improvement of accuracy in digital holography by use of multiple holograms. Appl. Opt. 45, 6077 (2006).
33. Amako, J., Miura, H. & Sonehara, T. Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator. Appl. Opt. 34, 3165 (1995).
34. Hecht, S. & Shlaer, S. Intermittent stimulation by light: V. The relation between intensity and critical frequency for different parts of the spectrum. J. Gen. Physiol. 19, 965–977 (1936).
35. Krauskopf, J. & Mollon, J. D. The independence of the temporal integration properties of individual chromatic mechanisms in the human eye. J. Physiol. 219, 611–623 (1971).
Acknowledgements. The authors gratefully acknowledge the Engineering and Physical Sciences Research Council (UK) for financial support through the graduate studentship EP/L505031/1. S.M.M. gratefully acknowledges The Royal Society for support through a University Research Fellowship. The authors thank the reviewers for their very helpful comments.
Author information. David J. Hansford, Yihan Jin, Steve J. Elston & Stephen M. Morris: Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK.
Author contributions. S.M.M. and S.J.E. conceived and managed the project. D.J.H. and Y.J. prepared the mixtures and devices, and carried out the experiments. D.J.H. constructed the apparatus for measuring the speckle contrast. S.M.M., D.J.H., Y.J., and S.J.E. wrote the manuscript.
Correspondence to Stephen M. Morris.
Cite this article: Hansford, D.J., Jin, Y., Elston, S.J. et al. Enhancing laser speckle reduction by decreasing the pitch of a chiral nematic liquid crystal diffuser. Sci. Rep. 11, 4818 (2021). https://doi.org/10.1038/s41598-021-83860-3
Earth and Planetary Physics

Geometry and tectonic deformation of the seismogenic structure for the 8 August 2017 MS 7.0 Jiuzhaigou earthquake sequence, northern Sichuan, China

To reveal the geometry of the seismogenic structure of the 8 August 2017 MS 7.0 Jiuzhaigou earthquake in northern Sichuan, data from the regional seismic network, from the time of the main event to 31 October 2017, were used to relocate the earthquake sequence with the tomoDD program, and the focal mechanism solutions and centroid depths of the ML ≥ 3.5 events in the sequence were determined using the CAP waveform inversion method. Further, the segmental tectonic deformation characteristics of the seismogenic faults were analyzed preliminarily by using strain rosettes and areal strains (As). The results indicate: (1) The relocated MS 7.0 Jiuzhaigou earthquake sequence displays a narrow, ~38 km long, NNW-SSE-trending zone between the NW-striking Tazang Fault and the nearly NS-striking Minjiang Fault, two branches of the East Kunlun Fault Zone. The spatial distribution of the sequence is narrow and deep on the southern segment, and relatively wide and shallow on the northern segment. The initial rupture depth of the mainshock is 12.5 km; the dominant depth range of the aftershock sequence is 0–10 km, with an average depth of 6.7 km. The mainshock epicenter is located in the middle of the aftershock region, indicating bilateral rupture behavior. The centroid depths of 32 ML ≥ 3.5 events range from 3 to 12 km, with a mean of about 7.3 km, consistent with the predominant focal depth of the whole sequence. (2) The geometric structure of the seismogenic fault on the southern section of the aftershock area (south of the mainshock) is relatively simple, with an overall strike of ~150° and a dip angle of ~75°, although the dip angle and dip orientation vary somewhat along the segment. The seismogenic structure on the northern segment is more complicated; several faults, including the Minjiang Fault, may be responsible for the aftershock activity. The overall strike of this section is ~159° and the dip angle is ~59°, indicating a certain clockwise rotation and a smaller dip angle than the southern segment. The differences between the two segments demonstrate variation of the geometric structure along the seismogenic faults. (3) The focal mechanism solutions of 32 ML ≥ 3.5 events in the earthquake sequence have obvious segmental characteristics. Strike-slip earthquakes are dominant on the southern segment, while 50% of events on the northern segment are thrusting and oblique-thrusting earthquakes, revealing significant differences in the kinematic features of the seismogenic faults between the two segments. (4) The strain rosettes for the mainshock and for the entire sequence of 31 ML ≥ 3.5 aftershocks correspond to strike-slip deformation, with NWW-SEE compressional white lobes and NNE-SSW extensional black lobes of nearly similar size. The strain rosette and As value of the sequence of 22 ML ≥ 3.5 events on the southern segment are the same as those of the MS 7.0 mainshock, indicating that the tectonic deformation there is strike-slip. However, the strain rosette of the sequence of 10 ML ≥ 3.5 events on the northern segment shows prominent white compressional lobes and small black extensional lobes, and the related As value is up to 0.52, indicating that the tectonic deformation of this segment is oblique thrusting with a certain strike-slip component. These differences between the two segments reveal distinctly segmental characteristics of the tectonic deformation of the seismogenic faults for the Jiuzhaigou earthquake sequence.
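The relocation step above uses the double-difference approach of tomoDD, which inverts differences of travel times between nearby event pairs so that common path effects cancel. As a toy illustration of that idea only (not the tomoDD algorithm itself, which additionally inverts for velocity structure), the Python sketch below relocates two events in a uniform-velocity medium by iterating linearized least squares on double-difference residuals; the velocity, coordinates, and station geometry are all invented for the demo.

```python
import numpy as np

V = 5.0  # assumed uniform P-wave speed, km/s (toy value)
stations = np.array([[0., 0.], [40., 0.], [0., 40.], [40., 40.], [20., -10.]])
ev_true = np.array([[18.0, 21.0], [22.0, 19.0]])   # "unknown" true epicentres
ev = np.array([[15.0, 25.0], [25.0, 15.0]])        # starting (catalog) locations

def travel_time(event, station):
    return np.linalg.norm(event - station) / V

for _ in range(8):
    G, r = [], []
    for s in stations:
        # double-difference residual for the event pair (0, 1) at this station
        dd_obs = travel_time(ev_true[0], s) - travel_time(ev_true[1], s)
        dd_cal = travel_time(ev[0], s) - travel_time(ev[1], s)
        r.append(dd_obs - dd_cal)
        # partial derivatives of travel time w.r.t. each epicentre
        g0 = (ev[0] - s) / (np.linalg.norm(ev[0] - s) * V)
        g1 = (ev[1] - s) / (np.linalg.norm(ev[1] - s) * V)
        G.append(np.concatenate([g0, -g1]))
    dm = np.linalg.lstsq(np.array(G), np.array(r), rcond=None)[0].reshape(2, 2)
    dm -= dm.mean(axis=0)   # double differences mainly constrain relative
    ev += dm                # locations, so remove the common-mode shift

print("recovered separation:", ev[0] - ev[1])   # ~ [-4., 2.]
print("true separation:     ", ev_true[0] - ev_true[1])
```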
Crustal structure beneath the Qilian Orogen Zone from multiscale seismic tomography

The Qilian Orogen Zone (QOZ), located on the northern margin of the Tibetan Plateau, is a key area for understanding the deformation and dynamic processes of Tibet. Numerous geological and geophysical studies have been carried out on the mechanics of Tibetan Plateau deformation and uplift; however, the detailed structure and deformation style of the Qilian Orogen Zone have remained uncertain due to poor geophysical data coverage and the limited resolution power of inversion algorithms. In this study, we analyze the P-wave velocity structure beneath the Qilian Orogen Zone, obtained by applying a multi-scale seismic tomography technique to P-wave arrival time data recorded by regional seismic networks. The seismic tomography algorithm used in this study employs sparsity constraints on the wavelet representation of the velocity model via L1-norm regularization. This algorithm can deal efficiently with unevenly sampled volumes and can obtain multi-scale images of the velocity model. Our results can be summarized as follows: (1) The crustal velocity structure is strongly inhomogeneous and consistent with the surface geological setting. Significant low-velocity anomalies exist in the crust of northeastern Tibet, and slight high-velocity anomalies exist beneath the Qaidam Basin and Alxa terrane. (2) The Qilian Orogen Zone can be divided into two main parts by the Laji Shan Faults: the northwestern part, with a low-velocity feature, and the southeastern part, with a high-velocity feature in the upper and middle crust. (3) Our tomographic images suggest that the northwestern and southeastern Qilian Orogen Zones have undergone different tectonic processes. In the northwestern Qilian Orogen Zone, the deformation and growth of the northern Tibetan Plateau have extended to the Heli Shan and Beida Shan region by northward over-thrusting in the upper crust and thickening in the lower crust. We speculate that in the southeastern Qilian Orogen Zone the deformation and growth of the northern Tibetan Plateau were of strike-slip style in the upper crust; in the lower crust, the evidence suggests a ductile shear extrusion style and active frontal extension toward the Alxa terrane. (4) The multi-scale seismic tomography technique provides multi-scale analysis and sparse constraints, which have allowed us to obtain stable, high-resolution results.
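The inversion described above promotes sparsity of the wavelet coefficients of the velocity model through an L1 penalty. A standard way to solve such problems is iterative soft thresholding (ISTA). The Python sketch below is a generic, self-contained illustration of that technique on a random toy system; the forward matrix, the Haar wavelet choice, the problem sizes, and the regularization weight are placeholders, not the paper's actual setup.

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal Haar wavelet matrix (n a power of two), built recursively.
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    m = np.vstack([top, bot])
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
n, m = 64, 40
W = haar_matrix(n)                              # analysis operator
coeffs = np.where(rng.random(n) < 0.1, 3 * rng.normal(size=n), 0.0)
x_true = W.T @ coeffs                           # wavelet-sparse "model"
A = rng.normal(size=(m, n)) / np.sqrt(m)        # random stand-in for ray matrix
b = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.02
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz const. of grad of data term
x = np.zeros(n)
for _ in range(500):                            # ISTA: gradient step + shrinkage
    z = x - (A.T @ (A @ x - b)) / L
    x = W.T @ soft(W @ z, lam / L)              # soft-threshold wavelet coefficients
print("relative model error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```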
Ambient noise Love wave tomography of China

We present the first Love wave tomography of China based on ambient noise cross-correlations. We used 3 years of continuous waveform data recorded by 206 broadband seismic stations on the Chinese mainland and 36 neighboring global stations, and obtained Love wave empirical Green's functions from cross-correlations of the horizontal components. The Love wave group velocity dispersion measurements were used to construct dispersion maps at periods of 8 to 40 s, which were then inverted to obtain a three-dimensional horizontally polarized S-wave (SH) velocity structure. The resolution was approximately 4° × 4° and 8° × 8° for eastern and western China, respectively, and extended to a depth of approximately 50 km. The SH model was generally consistent with a previously published vertically polarized S-wave (SV) model and showed large-scale features that were consistent with geological units, such as the major basins and changes in the crustal thickness across the north-south gravity lineament. The SH and SV models also showed substantial differences, which were used to examine the subsurface radial anisotropy. We define the radial anisotropy parameter as $\psi = 2(V_{\rm SH} - V_{\rm SV})/(V_{\rm SH} + V_{\rm SV})$. At shallow depth, we observed significant radial anisotropy under the major basins, which may be related to thick sedimentary layers. In the mid to lower crust, most of the Chinese continent showed strong positive radial anisotropy (SH > SV). Central and southern Tibet showed strong positive anisotropy, whereas the radial anisotropy was relatively weak at the northern and eastern margins, which suggests a change in deformation style from the plateau interior to its margins. The North China craton showed prominent positive radial anisotropy, which may be related to decratonization and strong extension since the Mesozoic Era. Love waves are less well retrieved than Rayleigh waves from ambient noise cross-correlations, and increasing the duration of the cross-correlation data beyond 4 to 8 years may not aid in retrieving Love waves of longer periods, for which improved methods need to be explored.

Heating of multi-species upflowing ion beams observed by Cluster on March 28, 2001

Cluster satellites observed three successive outflowing ion beams on March 28, 2001. It is generally accepted that these ion beams, composed of H+, He+, and O+ ions, with three inverted-V structures in their energy spectra, are produced by acceleration through U-shaped potential structures. By eliminating the background ion population and employing Maxwellian fitting, we find that ions coming from the center of the potential structure have higher temperatures than those from the flanks. The higher temperatures of O+ and He+ compared to that of H+ indicate that heavy ions are preferentially heated; we further infer that the heating efficiencies of O+ and He+ ions differ between the center and edges of the U-shaped potential structures. Estimation based on pitch angle observations shows that heating may also occur at altitudes above the upper boundary of the auroral acceleration region (AAR), where these beams are generally thought to be formed.

Multiple-satellite observational evidence: high-m poloidal ULF waves with time-varying polarization states

We report multi-spacecraft observations of ULF waves from the Van Allen Probes (RBSP), Magnetospheric Multiscale (MMS), Time History of Events and Macroscale Interactions during Substorms (THEMIS), and Geostationary Operational Environmental Satellites (GOES) missions. On August 31, 2015, global-scale poloidal waves were observed in data from RBSP-B, GOES, and THEMIS from L = 4 to L = 8 over a wide range of magnetic local time (MLT). The polarization states varied towards purely poloidal polarity. Over two consecutive orbits spanning 18 hours, RBSP-A and RBSP-B recorded a gradual variation of the polarization states of the poloidal waves; the ratio |Ba|/|Br| decreased from 0.82 to 0.13. After this variation of the polarization states, the poloidal ULF waves became very purely poloidal waves, localized in both L and MLT. We identify the poloidal wave as a second-harmonic mode with a large azimuthal wave number (m) of −232. From RBSP particle measurements we find evidence that the high-m poloidal waves during the polarization variations were powered by inward radial gradients and bump-on-tail ion distributions through the N = 1 drift-bounce resonance. Most of the time, the dominant free-energy source was inward radial gradients, compared with the positive gradient in the energy distribution of the bump-on-tail ion distributions.
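For readers unfamiliar with the mechanism invoked above (this gloss is ours, not the authors'): drift-bounce resonance transfers energy between a ULF wave and ions whose combined azimuthal drift and bounce motion keeps them in phase with the wave. With $\omega$ the wave angular frequency, $m$ the azimuthal wave number, and $\omega_d$, $\omega_b$ the ion drift and bounce frequencies, the condition reads
\begin{equation*} \omega - m\,\omega_d = N\,\omega_b, \qquad N = 0, \pm 1, \pm 2, \dots \end{equation*}
The case identified above is $N = 1$ with $m = -232$; for second-harmonic poloidal waves the $N = \pm 1$ resonances are typically the relevant ones.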
Species-dependent ion escape on Titan

Cassini observations over the past ten years have revealed that Titan possesses a chemically complex ionosphere. In this study, we investigate the relative contributions of different ion species to the total ion escape from Titan by dividing all ion species probed by the Cassini Ion Neutral Mass Spectrometer (INMS) into six groups according to their mass-to-charge ratios (M/Z). For the three lightest ion groups, with characteristic M/Z of 22, 41, and 52 daltons, the observed scale heights tend to be lower than the scale heights predicted by assuming diffusive equilibrium; for the three heavier groups, observed and predicted scale heights are in general agreement, implying that most ion escape from Titan is by relatively light species, with M/Z < 60 daltons. A diffusion model is constructed to describe the density distribution of each ion group in regions where the effect of ionospheric chemistry can be neglected. The data-model comparison predicts an optimal total ion escape rate of 3.1 × 10^24 s^-1, of which more than 99% is contributed by relatively light ions with M/Z < 32 daltons.

ATMOSPHERIC PHYSICS

Feature analysis of stratospheric wind and temperature fields over the Antigua site by rocket data

Yang Li, Zheng Sheng, JinRui Jing. Recently published, available online, doi: 10.26464/epp2019040

The wind and temperature fields at 20 to 55 km above the Antigua launch site (17°N, 61°W) were analyzed using sounding rocket data published by the research program on Stratosphere-Troposphere Processes and their Role in Climate (SPARC). The results showed distinct variations in the wind and temperature fields at different heights from the 1960s to the 1990s. The overall zonal wind speed showed a significant increasing trend over the years, the overall meridional wind speed showed a falling trend from 1976 to 1990, and the temperature field showed a significant cooling trend from 1964 to 1990. The times at which the trends changed abruptly varied between levels. Taking the altitudes of 20, 35, and 50 km as representative, we found that the zonal wind speed trend changed abruptly in 1988, 1986, and 1986, respectively; that the meridional wind speed trend changed abruptly in 1990, 1986, and 1990, respectively; and that the temperature trend changed abruptly in 1977, 1973, and 1967, respectively. Characteristics of the periodic wind and temperature field variations at different heights were also analyzed, and obvious differences were found in the time scales of variation across the different layers. The zonal and meridional wind fields were characterized by a significant periodic variation of 5 years across the three layers, and each level was also characterized by periodic variations of less than 5 years. Temperature field variation at the three levels was characterized by 10-year and 5-year cycles.
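The abstract above reports the years in which the trends changed abruptly but does not name the change-point test used; in this literature a very common choice is the sequential Mann-Kendall (UF/UB) test, so the following Python sketch shows that technique as a plausible stand-in, applied to a synthetic series with a known change point (the data are fabricated for the demo).

```python
import numpy as np

def mk_uf(x):
    """Forward statistic UF_k of the sequential Mann-Kendall test."""
    n = len(x)
    s = 0
    uf = np.zeros(n)
    for k in range(1, n):
        s += np.sum(x[k] > x[:k])                  # rank count for point k
        e = k * (k + 1) / 4.0                      # E[S_k] for k+1 points
        var = k * (k + 1) * (2 * (k + 1) + 5) / 72.0
        uf[k] = (s - e) / np.sqrt(var)
    return uf

rng = np.random.default_rng(1)
years = np.arange(1964, 1991)
x = rng.normal(size=years.size)
x[years >= 1977] -= 1.5                            # synthetic cooling shift in 1977

uf = mk_uf(x)
ub = -mk_uf(x[::-1])[::-1]                         # backward statistic
d = np.sign(uf - ub)
cross = np.where(d[1:] != d[:-1])[0]
print("UF/UB crossing near:", years[cross + 1])    # change-point estimate
```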
SPACE PHYSICS: MAGNETOSPHERIC PHYSICS

Statistical study on interplanetary drivers behind intense geomagnetic storms and substorms

Tian Tian, Zheng Chang, LingFeng Sun, JunShui Bai, XiaoMing Sha, Ze Gao

Geomagnetic storms and substorms play a central role both in the daily life of mankind and in academic space physics. The profiles of storms, especially their initial-phase morphology and the intensity of the accompanying substorms under different interplanetary conditions, have usually been ignored in previous studies. In this study, 97 intense geomagnetic storms (Dstmin ≤ −100 nT) between 1998 and 2018 were studied statistically using the double superposed epoch analysis (DSEA) and normalized superposed epoch analysis (NSEA) methods. These storms are categorized into two types according to the interplanetary magnetic field (IMF) Bz orientation: geomagnetic storms whose IMF is northward both upstream and downstream of the interplanetary shock, and geomagnetic storms whose upstream and downstream IMF is consistently southward. We further divide these two types into two subsets by geomagnetic storm profile: Type I/Type II, one/two-step geomagnetic storms with northward IMF both upstream and downstream of the interplanetary shock; Type III/Type IV, one/two-step geomagnetic storms with southward IMF both upstream and downstream of the interplanetary shock. The results show that: (1) geomagnetic storms with northward IMF both upstream and downstream of the interplanetary shock have a clear initial phase; geomagnetic storms with southward IMF both upstream and downstream of the interplanetary shock do not; (2) the IMF is an important controlling factor affecting the intensity characteristics of substorms: when Bz is positive before and after the interplanetary shock arrival, the Auroral Electrojet (AE) index changes gently during the initial phase of geomagnetic storms, and the median value of the AE index remains at 500–1000 nT; (3) when Bz is negative before and after the interplanetary shock arrival, the AE index rises rapidly and reaches its maximum value about one hour after the storm sudden commencement (SSC) (although time is scaled between reference points), and the maximum value of AE is usually greater than 1000 nT, representing intense substorms; (4) in most cases, Dst0 reaches its minimum at least one hour after Bz does. These results are useful for improving contemporary space weather models, especially those that address geomagnetic storms and substorms.
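Superposed epoch analysis, which the study above applies in double (DSEA) and normalized (NSEA) form, aligns many event time series on a common reference time and examines the median behavior. The Python sketch below shows only the basic single-epoch version of the technique on fabricated data; the event generator, grid, and onset times are invented, and the DSEA/NSEA variants used in the paper additionally rescale time between two reference points.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated storms: Dst-like series sampled hourly, with the SSC at t = 0.
def fake_storm():
    t = np.arange(-24, 72)                       # hours relative to SSC
    dst = -120.0 * np.exp(-np.maximum(t - 6, 0) / 20.0) * (t >= 6)
    return t, dst + 8.0 * rng.normal(size=t.size)

events = [fake_storm() for _ in range(97)]       # 97 storms, as in the study

# Superpose: interpolate every event onto a common epoch grid, then take
# the median, which is robust against outlier storms.
grid = np.arange(-24, 72)
stack = np.vstack([np.interp(grid, t, d) for t, d in events])
median_profile = np.median(stack, axis=0)

i = np.argmin(median_profile)
print(f"median Dst minimum: {median_profile[i]:.0f} nT at t = {grid[i]} h")
```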
Photoelectron balance in the dayside Martian upper atmosphere

XiaoShu Wu, Jun Cui, Jiang Yu, LiJuan Liu, ZhenJun Zhou

Photoelectrons are produced by solar extreme ultraviolet radiation and contribute significantly to the local ionization and heat balances in planetary upper atmospheres. When the effect of transport is negligible, the photoelectron energy distribution is controlled by a balance between local production and loss, a condition usually referred to as local energy degradation. In this study, we examine such a condition for photoelectrons near Mars, with the aid of a multi-instrument Mars Atmosphere and Volatile Evolution (MAVEN) data set gathered over the inbound portions of a representative dayside MAVEN orbit. The photoelectron production and loss processes considered here include primary and secondary ionization, inelastic collisions with atmospheric neutrals associated with both excitation and ionization, and Coulomb collisions with ionospheric thermal electrons. Our calculations indicate that photoelectron production occurs mainly via primary ionization and via degradation from higher energy states during inelastic collisions; photoelectron loss appears to occur almost exclusively via degradation towards lower energy states through inelastic collisions above 10 eV, but the effect of Coulomb collisions becomes important at lower energies. Over the energy range of 30–55 eV (chosen to reduce the influence of the uncertainty in spacecraft charging), we find that the condition of local energy degradation is very well satisfied for dayside photoelectrons from 160 to 250 km. No evidence of photoelectron transport is present over this energy range.

Variability of the Martian ionosphere from the MAVEN Radio Occultation Science Experiment

MeiJuan Yao, Jun Cui, XiaoShu Wu, YingYing Huang, WenRui Wang. 2019, 3(4): 283-289. doi: 10.26464/epp2019029

The Martian ionosphere is produced by a number of controlling processes, including solar extreme ultraviolet (EUV) and X-ray ionization, impact ionization by precipitating electrons, and day-to-night transport. This study investigates the structural variability of the Martian ionosphere with the aid of the radio occultation (RO) experiments made on board the recent Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft. On the dayside, the RO electron density profiles are described by the superposition of two Chapman models, representing the contributions from both the primary layer and the low-altitude secondary layer. The inferred subsolar peak electron densities and altitudes are 1.24 × 10^5 cm^-3 and 127 km for the former, and 4.28 × 10^4 cm^-3 and 97 km for the latter, in general agreement with previous results appropriate for low solar activity conditions. Our results strengthen the role of solar EUV and X-ray ionization as the driving source of plasma on the dayside of Mars. Beyond the terminator, a systematic decline in ionospheric total electron content is revealed by the MAVEN RO measurements made from the terminator crossing up to a solar zenith angle of 120°. Such a trend is indicative of day-to-night plasma transport as an important source for the nightside Martian ionosphere.

MeiJuan Yao, Jun Cui, XiaoShu Wu, YingYing Huang, WenRui Wang, 2019: Variability of the Martian ionosphere from the MAVEN Radio Occultation Science Experiment, Earth and Planetary Physics, 4, 283-289. doi: 10.26464/epp2019029.
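The dayside fit described above superposes two Chapman layers. In its idealized overhead-Sun form, a Chapman layer is N(z) = N0 exp{½[1 − u − e^(−u)]} with u = (z − z0)/H. The Python sketch below reproduces such a superposition using the subsolar peak values quoted in the abstract; the scale heights H are not given there and are rough placeholders.

```python
import numpy as np

def chapman(z, n_peak, z_peak, h):
    # Idealized Chapman layer (overhead Sun): N = N0 exp{0.5 [1 - u - exp(-u)]}
    u = (z - z_peak) / h
    return n_peak * np.exp(0.5 * (1.0 - u - np.exp(-u)))

z = np.arange(80.0, 300.0, 1.0)               # altitude, km

# Peak densities/altitudes from the abstract; scale heights are assumed.
primary = chapman(z, 1.24e5, 127.0, 12.0)     # main photochemical layer, cm^-3
secondary = chapman(z, 4.28e4, 97.0, 8.0)     # lower, X-ray-driven layer, cm^-3
ne = primary + secondary

i = np.argmax(ne)
print(f"modelled peak: {ne[i]:.3g} cm^-3 at {z[i]:.0f} km")
```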
SOLID EARTH: GEOELECTROMAGNETICS

Detection of seismic events on Mars: a lunar perspective

WeiJia Sun, Liang Zhao, Yong Wei, Li-Yun Fu

The interior structures of planets are attracting more and more detailed attention; these studies could be of great value in improving our understanding of the early evolution of Earth. Seismological investigations of planet interiors rely primarily on seismic waves excited by seismic events. Since tectonic activities are much weaker on other planets, e.g. Mars, the magnitudes of their seismic events are much smaller than those on Earth. It is therefore a challenge to detect seismic events on planets using such conventional techniques as short-time-average/long-time-average (STA/LTA) triggers. In pursuit of an effective and robust scheme to detect small-magnitude events on Mars in the near future, we have taken Apollo lunar seismic observations as an example of weak-activity data and developed an event-detection scheme. The scheme reported here is a two-step processing approach: the first step involves a despike filter to remove large-amplitude impulses arising from large temperature variations; the second step employs a matched filter to unmask the seismic signals of a weak event hidden in the ambient and scattering noise. The proposed scheme has been used successfully to detect a moonquake that was not in the known moonquake catalogue, demonstrating that the two-step strategy is a feasible method for detecting seismic events on planets. Our scheme will provide a powerful tool for seismic data analysis of the Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (InSight) mission and China's future lunar missions.

WeiJia Sun, Liang Zhao, Yong Wei, Li-Yun Fu, 2019: Detection of seismic events on Mars: a lunar perspective, Earth and Planetary Physics, 4, 290-297. doi: 10.26464/epp2019030.
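The second stage of the scheme above is template matching: sliding a known waveform along the record and flagging windows whose normalized correlation with the template is high. The Python sketch below is a generic illustration of that technique on synthetic data, not the authors' implementation; the template, noise level, and buried-event amplitude are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "template" event and a noisy record hiding one weak copy of it.
t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * 8 * t) * np.exp(-5 * t)   # decaying wavelet
record = 0.25 * rng.normal(size=5000)
record[3100:3300] += 0.5 * template                     # buried weak event

# Normalized cross-correlation of the template against every window.
n = template.size
tpl = template - template.mean()
tpl /= np.linalg.norm(tpl)
cc = np.empty(record.size - n + 1)
for i in range(cc.size):
    w = record[i:i + n] - record[i:i + n].mean()
    cc[i] = tpl @ w / (np.linalg.norm(w) + 1e-12)

# Expect the peak near sample 3100, well above the ~1/sqrt(n) noise floor.
print("best match at sample", cc.argmax(), "with CC =", round(cc.max(), 2))
```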
Formation mechanism of the Lidang circular structure in the Guangxi Province

Pan Yan, ZhiYong Xiao, YiZhen Ma, YiChen Wang, Jiang Pu

The Lidang circular structure in the center of the Guangxi Province is about 8 km in diameter. This structure appears as an anomalous shallow depression that has disturbed the otherwise harmonic regional joint systems. Its unique occurrence in the whole region, its circular morphology, negative topography, and the spatial distribution of interior and exterior strata are all consistent with those of impact craters formed by asteroidal or cometary collision. To test the impact hypothesis, we carried out both a field investigation and a remote sensing study of this structure. The regional geological history suggests that if the impact hypothesis were correct, the impact event should have occurred at or after the Early Permian. The field investigation found that the strata inside and outside the crater are dominated by parallel stacks of Lower and Upper Permian limestone of various thicknesses and different mud contents. The layers of limestone within and outside the circular structure have identical attitudes; no structural disturbances were visible in the outcrops. The field investigations therefore provide conclusive evidence against the impact cratering hypothesis. A high-resolution digital elevation model shows that the spatial distribution of rounded mountains within the structure is controlled by faint but continuous extensions of joints, suggesting that the crater interior has undergone a much higher degree of erosion. Therefore, regional joints that had once existed within the crater are preserved less well than in exterior terrains, forming the abruptly disrupted circular depression. Differential erosion, as the possible formation mechanism of the Lidang structure, is consistent with the different mud contents found between the interior and exterior limestone. The circular outline of this structure may correspond to the shape of the original depositional basin. In conclusion, the Lidang circular structure is a polje formed by karstification, not an astrobleme.

Pan Yan, ZhiYong Xiao, YiZhen Ma, YiChen Wang, Jiang Pu, 2019: Formation mechanism of the Lidang circular structure in the Guangxi Province, Earth and Planetary Physics, 4, 298-304. doi: 10.26464/epp2019031.

Poleward-moving recurrent auroral arcs associated with impulse-excited standing hydromagnetic waves

HuaYu Zhao, Xu-Zhi Zhou, Ying Liu, Qiu-Gang Zong, Robert Rankin, YongFu Wang, QuanQi Shi, Xiao-Chen Shen, Jie Ren, Han Liu, XingRan Chen

In Earth's high-latitude ionosphere, the poleward motion of east-west-elongated auroral arcs has been attributed to standing hydromagnetic waves, especially when the auroral arcs appear quasi-periodically with a recurrence time of a few minutes. Validation of this scenario requires spacecraft observations of ultra-low-frequency hydromagnetic waves in the magnetosphere and simultaneous observations of poleward-moving auroral arcs near the spacecraft footprints. Here we present the first observational evidence, from the multi-spacecraft THEMIS (Time History of Events and Macroscale Interactions during Substorms) mission and the conjugate all-sky imager, to support the scenario that standing hydromagnetic waves can generate the quasi-periodic appearance of poleward-moving auroral arcs. In this specific event, the observed waves were toroidal branches of the standing hydromagnetic waves, excited by a pulse in the solar wind dynamic pressure. Multi-spacecraft measurements from THEMIS also suggest higher wave frequencies at lower L shells (consistent with the distribution of magnetic field line eigenfrequencies), which indicates that the phase difference across latitudes increases with time. As time proceeds, the enlarged phase difference corresponds to a lower propagation speed of the auroral arcs, which agrees very well with the ground-based optical data.

HuaYu Zhao, Xu-Zhi Zhou, Ying Liu, Qiu-Gang Zong, Robert Rankin, YongFu Wang, QuanQi Shi, Xiao-Chen Shen, Jie Ren, Han Liu, XingRan Chen, 2019: Poleward-moving recurrent auroral arcs associated with impulse-excited standing hydromagnetic waves, Earth and Planetary Physics, 4, 305-313. doi: 10.26464/epp2019032.

MARINE GEOPHYSICS

Crustal S-wave velocity structure across the northeastern South China Sea continental margin: implications for lithology and mantle exhumation

WenAi Hou, Chun-Feng Li, XiaoLi Wan, MingHui Zhao, XueLin Qiu

The northeastern margin of the South China Sea (SCS), developed from continental rifting and breakup, is usually thought of as a non-volcanic margin. However, post-spreading volcanism is massive and lower-crustal high-velocity anomalies are widespread, which complicates the nature of the margin here. To better understand crustal seismic velocities, lithology, and geophysical properties, we present an S-wave velocity (VS) model and a VP/VS model for the northeastern margin, using an existing P-wave velocity (VP) model as the starting model for 2-D kinematic S-wave forward ray tracing. The Mesozoic sedimentary sequence has lower VP/VS ratios than the Cenozoic sequence; in between lies a main interface of P-S conversion. Two isolated high-velocity zones (HVZs) are found in the lower crust of the continental slope, showing S-wave velocities of 4.0–4.2 km/s and VP/VS ratios of 1.73–1.78. These values indicate a mafic composition, most likely of amphibolite facies. Also, a VP/VS versus VP plot indicates a magnesium-rich gabbro facies resulting from post-spreading mantle melting at temperatures higher than normal.
A third high-velocity zone (VP: 7.0–7.8 km/s; VP/VS: 1.85–1.96), 70 km wide and 4 km thick, in the continent-ocean transition zone is most likely a consequence of serpentinization of upwelled upper mantle. Seismic velocity structures, and also gravity anomalies, indicate that mantle upwelling/serpentinization could be most severe in the northeasternmost continent-ocean boundary of the SCS. Empirical relationships between seismic velocity and degree of serpentinization suggest that serpentinite content decreases with depth, from 43% in the lower crust to 37% in the mantle.

WenAi Hou, Chun-Feng Li, XiaoLi Wan, MingHui Zhao, XueLin Qiu, 2019: Crustal S-wave velocity structure across the northeastern South China Sea continental margin: implications for lithology and mantle exhumation, Earth and Planetary Physics, 4, 314-329. doi: 10.26464/epp2019033.

Anomalous phenomena in DC–ULF geomagnetic daily variation registered three days before the 12 May 2008 Wenchuan MS 8.0 earthquake

Mei Li, Li Yao, YaLi Wang, Michel Parrot, Masashi Hayakawa, Jun Lu, HanDong Tan, Tao Xie

The hourly data of the vertical (Z) and horizontal (H) components from 37 ground-based DC–ULF geomagnetic stations are examined for the period 20 April–12 May 2008. On 9 May 2008, three days before the Wenchuan MS 8.0 shock, anomalies (a double low-point and a decreased amplitude) were registered on the curves of the Z component at 25 stations across a large area surrounding the Wenchuan epicentral region. The H component shows no double low-point phenomenon but does exhibit a reduced magnitude at the same time. The geomagnetic index Kp was also examined and indicates that the anomalies appeared during a solar quiet period. The time shift (Tzs) between the first low-point on May 9 and the time of the minimum point during May 1–5, 2008 was also checked. The results show that Tzs is on the order of 1–2 hours earlier or later than usual, and that there is a 2–6 hour gap between these two low-points. However, there is still a transition area, which includes the epicenter, where Tzs = 0. The variation amplitude of the vertical Z component increases as the distance from the epicenter decreases. An Earth–air–ionosphere model was employed to investigate a possible mechanism for this phenomenon, and positive results were obtained. All these results tend to show that the variations of Z and H on May 9, 2008, during a solar quiet period, are probably associated with the forthcoming Wenchuan MS 8.0 earthquake.

Mei Li, Li Yao, YaLi Wang, Michel Parrot, Masashi Hayakawa, Jun Lu, HanDong Tan, Tao Xie, 2019: Anomalous phenomena in DC–ULF geomagnetic daily variation registered three days before the 12 May 2008 Wenchuan MS 8.0 earthquake, Earth and Planetary Physics, 4, 330-341. doi: 10.26464/epp2019034.

SOLID EARTH: EXPLORATION GEOPHYSICS

An efficient source wavefield reconstruction scheme using single boundary layer values for the spectral element method

YouShan Liu, Tao Xu, YangHua Wang, JiWen Teng, José Badal, HaiQiang Lan

In the adjoint-state method, the forward-propagated source wavefield and the backward-propagated receiver wavefield must be available simultaneously, either for seismic imaging in migration or for gradient calculation in inversion.
A feasible way to avoid the excessive storage demand is to reconstruct the source wavefield backward in time by storing the entire history of the wavefield in the perfectly matched layers. In this paper, we make full use of the element-wise global property of the Laplace operator of the spectral element method (SEM) and propose an efficient source wavefield reconstruction method at the cost of storing the wavefield history only at single boundary layer nodes. Numerical experiments indicate that the accuracy of the proposed method is identical to that of the conventional method and is independent of the order of the Lagrange polynomials, the element type, and the temporal discretization method. In contrast, the memory-saving ratio of the conventional method relative to our method is at least N when using quadrilateral or hexahedral elements, where N is the order of the Lagrange polynomials used in the SEM. An even higher memory-saving ratio is achieved with triangular elements than with quadrilaterals. The new method is applied to reverse time migration, taking the Marmousi model as a benchmark. Numerical results demonstrate that the method provides the same result as the conventional method but with a storage demand about 25 times lower. With the proposed wavefield reconstruction method, the storage demand is dramatically reduced; therefore, in-core memory storage is feasible even for large-scale three-dimensional adjoint inversion problems.

YouShan Liu, Tao Xu, YangHua Wang, JiWen Teng, José Badal, HaiQiang Lan, 2019: An efficient source wavefield reconstruction scheme using single boundary layer values for the spectral element method, Earth and Planetary Physics, 4, 342-357. doi: 10.26464/epp2019035.
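A rough counting argument (our reading, not a derivation given in the paper) makes the quoted ratio plausible. If the wavefield history must otherwise be stored throughout a one-element-thick boundary strip, then, since a quadrilateral or hexahedral element of polynomial order $N$ carries $N+1$ Gauss–Lobatto–Legendre node layers in the direction normal to the boundary, keeping a single node layer instead of the full strip reduces the stored volume by roughly
\begin{equation*} \frac{\text{nodes in one element layer}}{\text{nodes in one node layer}} = N+1 \geq N, \end{equation*}
consistent with the "at least N" memory-saving ratio stated in the abstract.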
Small-scale dipolarization fronts in the Earth's magnetotail

Jing Huang, Meng Zhou, HuiMin Li, XiaoHua Deng, Jiang Liu, ShiYong Huang

Previous studies suggest that dipolarization fronts (DFs) are 1 to 3 RE (where RE is the Earth radius) wide in the dawn-dusk direction. Recent kinetic simulations have found that DFs may break up into small-scale structures after they are produced by reconnection. Motivated by these simulations, we revisited the scale size of DFs in the dawn-dusk direction using Cluster observations from the years when the inter-spacecraft distance of Cluster was between 1000 and 2000 km. We selected DFs that were detected by more than one spacecraft and estimated their radii by a simple geometrical analysis based on comparison of the DF normals observed by different spacecraft. We found a few DFs that were only a few ion inertial lengths across in the dawn-dusk direction. These results point out the importance of multi-scale coupling during the evolution of DFs.

Jing Huang, Meng Zhou, HuiMin Li, XiaoHua Deng, Jiang Liu, ShiYong Huang, 2019: Small-scale dipolarization fronts in the Earth's magnetotail, Earth and Planetary Physics, 4, 358-364. doi: 10.26464/epp2019036.

Ring current proton scattering by low-frequency magnetosonic waves

Jiang Yu, Jing Wang, Jun Cui

Magnetosonic (MS) waves are believed to have the ability to affect the dynamics of ring current protons both inside and outside the plasmasphere. However, previous studies have focused primarily on the effect of high-frequency MS waves (f > 20 Hz) on ring current protons. In this study, we investigate interactions between ring current protons and low-frequency MS waves (f < 20 Hz) inside the plasmasphere. We find that low-frequency MS waves can effectively accelerate < 20 keV ring current protons on time scales from several hours to a day, and their scattering efficiency is comparable to that of high-frequency MS waves (f > 20 Hz). From this we infer that omitting the effect of low-frequency MS waves will considerably underestimate proton depletion at middle pitch angles and proton enhancement at large pitch angles. Therefore, ring current proton modeling should take into account the effects of both low- and high-frequency MS waves.

Jiang Yu, Jing Wang, Jun Cui, 2019: Ring current proton scattering by low-frequency magnetosonic waves, Earth and Planetary Physics, 4, 365-372. doi: 10.26464/epp2019037.
Communications in Mathematical Analysis, Volume 8, Number 2 (2010), 92-102.

A Convex Minorant Problem Arising in Electron Density Theory

Gisèle R. Goldstein, Jerome A. Goldstein, and Naima Naheed

We find the largest convex minorant of the function \begin{equation*} F\left( x,y\right) = ax^{2} + xy + by^{2} \end{equation*} where $a, b$ are positive constants and $x \geq 0$, $y \geq 0$. We explain how the problem is closely connected with finding the ground state Thomas-Fermi electron density for a spin-polarized quantum mechanical system with the Fermi-Amaldi correction.

First available in Project Euclid: 21 April 2010. https://projecteuclid.org/euclid.cma/1271890670

Mathematics Subject Classification. Primary: 52A41 (convex functions and convex programs), 26B25 (convexity, generalizations); Secondary: 81Q99, 81V55 (molecular physics), 92E10 (molecular structure).

Keywords: convex minorant, Thomas-Fermi theory, Fermi-Amaldi correction, ground state electron density.
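A quick orientation, added here as our own remark rather than part of the paper: the problem is nontrivial precisely when $F$ fails to be convex on the quadrant. From the stated formula,
\begin{equation*} \nabla^{2} F = \begin{pmatrix} 2a & 1 \\ 1 & 2b \end{pmatrix}, \qquad \det \nabla^{2} F = 4ab - 1, \end{equation*}
so for $a, b > 0$ the Hessian is positive semidefinite exactly when $4ab \geq 1$. In that case $F$ is convex and is its own largest convex minorant; the interesting regime for the construction is therefore $4ab < 1$.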
Center stable manifolds around line solitary waves of the Zakharov–Kuznetsov equation with critical speed

Yohei Yamazaki, Hiroshima University, 1-3-2 Kagamiyama, Higashi-Hiroshima City, Hiroshima, 739-8511, Japan

Received April 2020; Revised November 2020; Published January 2021. Fund Project: The author is supported by JSPS Research Fellowships for Young Scientists under Grant 18J00947.

In this paper, we construct center stable manifolds around unstable line solitary waves of the Zakharov–Kuznetsov equation on the two-dimensional cylindrical space $\mathbb{R} \times \mathbb{T}_L$ ($\mathbb{T}_L = \mathbb{R}/2\pi L \mathbb{Z}$). In the paper [39], center stable manifolds around unstable line solitary waves were constructed excluding the case of critical speeds $c \in \{4n^2/5L^2;\, n \in \mathbb{Z},\, n > 1\}$. In the case of a critical speed $c$, any neighborhood of the line solitary wave with speed $c$ in the energy space includes solitary waves which depend on the direction $\mathbb{T}_L$. To construct center stable manifolds including these solitary waves and to deal with the degeneracy of the linearized operator around line solitary waves with critical speed, we prove the stability condition of the center stable manifold for critical speeds by applying the estimate of the 4th-order term of a Lyapunov function from [37] and [38].

Keywords: center stable manifolds, Zakharov–Kuznetsov equation, line solitary wave, transverse instability. Mathematics Subject Classification: Primary: 35B35, 35Q53.

Citation: Yohei Yamazaki. Center stable manifolds around line solitary waves of the Zakharov–Kuznetsov equation with critical speed. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2021008
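For context, the following is a heuristic reconstruction on our part, not a statement taken from the paper. On $\mathbb{R} \times \mathbb{T}_L$ the admissible transverse wavenumbers are $k_n = n/L$, and for the Zakharov–Kuznetsov line soliton of speed $c$ the transverse instability band is $0 < k < k_c(c)$ with cutoff $k_c(c) = \sqrt{5c}/2$ (consistent with the scaling $k_c \propto \sqrt{c}$ and with the stability results of [38]). A transverse mode $n$ then destabilizes exactly when
\begin{equation*} \frac{n}{L} < \frac{\sqrt{5c}}{2} \quad \Longleftrightarrow \quad c > \frac{4n^2}{5L^2}, \end{equation*}
so the speeds $c = 4n^2/(5L^2)$ are precisely the thresholds at which a new transverse direction becomes neutral; this is the degeneracy of the linearized operator that the abstract refers to.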
References

[1] P. W. Bates and C. K. R. T. Jones, Invariant manifolds for semilinear partial differential equations, in Dynam. Report Ser. Dynam. Systems Appl. (eds. U. Kirchgraber and H. O. Walther), Wiley, Chichester, UK, 2 (1989), 1–38.
[2] M. Beceanu, A critical center-stable manifold for Schrödinger's equation in three dimensions, Comm. Pure Appl. Math., 65 (2012), 431-507. doi: 10.1002/cpa.21387.
[3] M. Beceanu, A center-stable manifold for the energy-critical wave equation in $\mathbb{R}^3$ in the symmetric setting, J. Hyperbolic Differ. Equ., 11 (2014), 437-476. doi: 10.1142/S021989161450012X.
[4] T. B. Benjamin, The stability of solitary waves, Proc. Roy. Soc. (London) Ser. A, 328 (1972), 153-183. doi: 10.1098/rspa.1972.0074.
[5] T. J. Bridges, Universal geometric conditions for the transverse instability of solitary waves, Phys. Rev. Lett., 84 (2000), 2614-2617.
[6] A. Comech and D. E. Pelinovsky, Purely nonlinear instability of standing waves with minimal energy, Comm. Pure Appl. Math., 56 (2003), 1565-1607. doi: 10.1002/cpa.10104.
[7] R. Côte, C. Muñoz, D. Pilod and G. Simpson, Asymptotic stability of high-dimensional Zakharov–Kuznetsov solitons, Arch. Ration. Mech. Anal., 220 (2016), 639-710. doi: 10.1007/s00205-015-0939-x.
[8] M. G. Crandall and P. H. Rabinowitz, Bifurcation from simple eigenvalues, J. Funct. Anal., 8 (1971), 321-340. doi: 10.1016/0022-1236(71)90015-2.
[9] A. de Bouard, Stability and instability of some nonlinear dispersive solitary waves in higher dimension, Proc. Roy. Soc. Edinburgh Sect. A, 126 (1996), 89-112. doi: 10.1017/S0308210500030614.
[10] H. Iwasaki, S. Toh and T. Kawahara, Cylindrical quasi-solitons of the Zakharov-Kuznetsov equation, Physica D Nonlinear Phenomena, 43 (1990), 293-303. doi: 10.1016/0167-2789(90)90138-F.
[11] J. Jin, Z. Lin and C. Zeng, Invariant manifolds of traveling waves of the 3D Gross–Pitaevskii equation in the energy space, Comm. Math. Phys., 364 (2018), 981-1039. doi: 10.1007/s00220-018-3189-6.
[12] J. Jin, Z. Lin and C. Zeng, Dynamics near the solitary waves of the supercritical gKDV equations, J. Differential Equations, 267 (2019), 7213-7262. doi: 10.1016/j.jde.2019.07.019.
[13] M. A. Johnson, The transverse instability of periodic waves in Zakharov–Kuznetsov type equations, Stud. Appl. Math., 124 (2010), 323-345. doi: 10.1111/j.1467-9590.2009.00473.x.
[14] J. Krieger and W. Schlag, Stable manifolds for all monic supercritical focusing nonlinear Schrödinger equations in one dimension, J. Amer. Math. Soc., 19 (2006), 815-920. doi: 10.1090/S0894-0347-06-00524-8.
[15] J. Krieger, K. Nakanishi and W. Schlag, Threshold phenomenon for the quintic wave equation in three dimensions, Comm. Math. Phys., 327 (2014), 309-332. doi: 10.1007/s00220-014-1900-9.
[16] J. Krieger, K. Nakanishi and W. Schlag, Center-stable manifold of the ground state in the energy space for the critical wave equation, Math. Ann., 361 (2015), 1-50. doi: 10.1007/s00208-014-1059-x.
[17] E. Kirr, P. G. Kevrekidis and D. E. Pelinovsky, Symmetry-breaking bifurcation in the nonlinear Schrödinger equation with symmetric potentials, Comm. Math. Phys., 308 (2011), 795-844. doi: 10.1007/s00220-011-1361-3.
[18] D. Lannes, F. Linares and J.-C. Saut, The Cauchy problem for the Euler-Poisson system and derivation of the Zakharov–Kuznetsov equation, in Studies in Phase Space Analysis with Applications to PDEs, Progr. Nonlinear Differential Equations Appl. (eds. M. Cicognani, F. Colombini, and D. Del Santo), New York: Birkhauser/Springer, 84 (2013), 181–213. doi: 10.1007/978-1-4614-6348-1_10.
[19] F. Linares, A. Pastor and J.-C. Saut, Well-posedness for the ZK equation in a cylinder and on the background of a KdV soliton, Comm. Partial Differential Equations, 35 (2010), 1674-1689. doi: 10.1080/03605302.2010.494195.
[20] F. Linares and J.-C. Saut, The Cauchy problem for the 3D Zakharov–Kuznetsov equation, Discrete Contin. Dyn. Syst., 24 (2009), 547-565. doi: 10.3934/dcds.2009.24.547.
[21] M. Maeda, Stability of bound states of Hamiltonian PDEs in the degenerate cases, J. Funct. Anal., 263 (2012), 511-528. doi: 10.1016/j.jfa.2012.04.006.
[22] Y. Martel and F. Merle, Asymptotic stability of solitons for subcritical generalized KdV equations, Arch. Ration. Mech. Anal., 157 (2001), 219-254. doi: 10.1007/s002050100138.
[23] Y. Martel and F. Merle, Asymptotic stability of solitons of the gKdV equations with general nonlinearity, Math. Ann., 341 (2008), 391-427. doi: 10.1007/s00208-007-0194-z.
[24] Y. Martel, F. Merle, K. Nakanishi and P. Raphaël, Codimension one threshold manifold for the critical gKdV equation, Comm. Math. Phys., 342 (2016), 1075-1106. doi: 10.1007/s00220-015-2509-3.
[25] T. Mizumachi, Large time asymptotics of solutions around solitary waves to the generalized Korteweg–de Vries equations, SIAM J. Math. Anal., 32 (2001), 1050-1080. doi: 10.1137/S0036141098346827.
[26] L. Molinet and D. Pilod, Bilinear Strichartz estimates for the Zakharov–Kuznetsov equation and applications, Ann. Inst. H. Poincaré Anal. Non Lineaire, 32 (2015), 347-371. doi: 10.1016/j.anihpc.2013.12.003.
[27] L. Molinet, J.-C. Saut and N. Tzvetkov, Global well-posedness for the KP-II equation on the background of a non-localized solution, Ann. Inst. H. Poincaré Anal. Non Lineaire, 28 (2011), 653-676. doi: 10.1016/j.anihpc.2011.04.004.
[28] K. Nakanishi and W. Schlag, Global dynamics above the ground state energy for the cubic NLS equation in 3D, Calc. Var. Partial Differential Equations, 44 (2012), 1-45. doi: 10.1007/s00526-011-0424-9.
[29] K. Nakanishi and W. Schlag, Invariant manifolds around soliton manifolds for the nonlinear Klein–Gordon equation, SIAM J. Math. Anal., 44 (2012), 1175-1210. doi: 10.1137/11082720X.
[30] R. Pego and M. I. Weinstein, Asymptotic stability of solitary waves, Comm. Math. Phys., 164 (1994), 305-349.
[31] D. Pelinovsky, Normal form for transverse instability of the line soliton with a nearly critical speed of propagation, Math. Model. Nat. Phenom., 13 (2018), 1-20. doi: 10.1051/mmnp/2018024.
[32] F. Ribaud and S. Vento, Well-posedness results for the three-dimensional Zakharov–Kuznetsov equation, SIAM J. Math. Anal., 44 (2012), 2289-2304. doi: 10.1137/110850566.
[33] F. Rousset and N. Tzvetkov, Transverse nonlinear instability of solitary waves for some Hamiltonian PDE's, J. Math. Pures. Appl., 90 (2008), 550-590. doi: 10.1016/j.matpur.2008.07.004.
[34] W. Schlag, Stable manifolds for an orbitally unstable nonlinear Schrödinger equation, Ann. of Math. (2), 169 (2009), 139-227. doi: 10.4007/annals.2009.169.139.
[35] B. K. Shivamoggi, The painlevé analysis of the Zakharov–Kuznetsov equation, Phys. Scripta, 42 (1990), 641-642. doi: 10.1088/0031-8949/42/6/001.
[36] J. Villarroel and M. J. Ablowitz, On the initial value problem for the KPII equation with data that do not decay along a line, Nonlinearity, 17 (2004), 1843-1866. doi: 10.1088/0951-7715/17/5/015.
[37] Y. Yamazaki, Stability of line standing waves near the bifurcation point for nonlinear Schrödinger equations, Kodai Math. J., 38 (2015), 65-96. doi: 10.2996/kmj/1426684443.
[38] Y. Yamazaki, Stability for line solitary waves of Zakharov–Kuznetsov equation, J. Differential Equations, 262 (2017), 4336-4389. doi: 10.1016/j.jde.2017.01.006.
[39] Y. Yamazaki, Center stable manifolds around line solitary waves of the Zakharov–Kuznetsov equation, arXiv: 1808.07315.
[40] V. E. Zakharov and E. A. Kuznetsov, On three dimensional solitons, Sov. Phys.-JETP, 39 (1974), 285-286.
On the Schrödinger-Debye system in compact Riemannian manifolds

Marcelo Nogueira and Mahendra Panthee, Department of Mathematics, State University of Campinas, Campinas-SP, 13083-859, Brazil

Communications on Pure & Applied Analysis, January 2020, 19(1): 425-453. doi: 10.3934/cpaa.2020022

Received December 2018; Revised May 2019; Published July 2019

Funding: The first author is supported by CAPES and CNPq, Brazil. The second author is partially supported by FAPESP (2016/25864-6), Brazil, and CNPq (308131/2017-7), Brazil.

We consider the initial value problem (IVP) associated with the Schrödinger-Debye system posed on a $d$-dimensional compact Riemannian manifold $M$ and prove a local well-posedness result for given data $(u_0, v_0)\in H^s(M)\times (H^s(M)\cap L^{\infty}(M))$ whenever $s>\frac{d}{2}-\frac{1}{2}$, $d\geq 2$. For $d=2$, we apply a sharp version of the Gagliardo-Nirenberg inequality on compact manifolds to derive an a priori estimate for the $H^1$-solution and use it to prove global well-posedness in this space.

Keywords: Schrödinger equation, Schrödinger-Debye system, initial value problem, local and global well-posedness.

Mathematics Subject Classification: Primary: 35Q55; Secondary: 53C35.

Citation: Marcelo Nogueira, Mahendra Panthee. On the Schrödinger-Debye system in compact Riemannian manifolds. Communications on Pure & Applied Analysis, 2020, 19(1): 425-453. doi: 10.3934/cpaa.2020022
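For orientation, the Schrödinger-Debye system is commonly written in the following form; this is a standard normalization from the literature, and the exact coefficients used in the paper may differ:
\begin{align*} i\partial_t u + \Delta_g u &= uv, & u(0) &= u_0,\\ \mu\,\partial_t v + v &= \lambda |u|^2, & v(0) &= v_0, \end{align*}
where $u$ is complex-valued, $v$ is real-valued, $\Delta_g$ denotes the Laplace-Beltrami operator on $M$, $\mu>0$ is the Debye relaxation time, and $\lambda=\pm 1$.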
Scaling properties of direct photons from gluon saturation in heavy ion collisions
A recent analysis from the PHENIX collaboration on direct photon production has shown a universal, within experimental uncertainties, $multiplicity$ scaling, in which photon $p_{T}$-spectra for transverse momenta up to 2 GeV/$c$ are scaled with the charged hadron pseudorapidity density at midrapidity raised to the power $\alpha=1.25$ (written out schematically after the last abstract in this list). On the other hand, particle production in hadron and nucleus collisions, photons included, exhibits $geometrical$ scaling in a similar $p_{T}$ range. The geometrical scaling follows from gluon saturation and collision geometry. We show that these two scaling laws are interconnected and discuss the physical conditions needed to relate one to another.

Constraints from the GW170817 merger event on the nuclear matter equation of state
The detection of the GW170817 neutron star merger event has incited intense research activity towards understanding the nuclear matter equation of state. In this paper we compare in particular the pressure-density relation obtained from heavy-ion collisions with the analysis of the NS merger event. Moreover, we present recent calculations of the neutron star's moment of inertia and tidal deformability using various microscopic equations of state for nuclear and hybrid star configurations, and confirm several universal relations. We also discuss the recent constraints for the NS radii determined by GW170817, and find compatible radii between 12 and 13 kilometers, thus identifying the suitable equations of state.

Statistical method in quark combination model
We present a new method of solving the probability distribution for baryons, antibaryons and mesons at the hadronization of a constituent quark and antiquark system. The hadronization is governed by the quark combination rule in the quark combination model developed by the Shandong Group. We use the method of the generating function to derive the outcome of the quark combination rule, which is much simpler and easier to generalize than the original method. Furthermore, we use the formula of the quark combination rule and its generalization to study the multiplicity distribution of net-protons. Taking a naive case of quark number fluctuations and correlations at hadronization, we calculate ratios of multiplicity cumulants of final-state net-protons and discuss the potential applicability of the quark combination model in studying hadronic multiplicity fluctuations and the underlying phase transition property in relativistic heavy-ion collisions.
1+1 dimensional relativistic magnetohydrodynamics with longitudinal acceleration
Non-central heavy-ion collisions generate the strongest magnetic fields, of the order of $10^{18}-10^{19}$ Gauss, due to the electric current produced by the positively charged spectators that travel at nearly the speed of light. Such transient electromagnetic fields may induce various novel effects in the hydrodynamic description of the quark-gluon plasma for non-central heavy-ion collisions. We investigate the longitudinal acceleration effects on the 1+1 dimensional relativistic magnetohydrodynamics with transverse magnetic fields. We analyze the proper time evolution of the system energy density. We find that the longitudinal acceleration parameter $\lambda^*$, magnetic field decay parameter $a$, equation of state $\kappa$, and initial magnetization $\sigma_0$ have nontrivial effects on the evolution of the system energy density and temperature.

Analysis on hadron spectra in heavy-ion collisions with a new non-extensive approach
The transverse momentum spectra of identified charged hadrons stemming from high energy collisions at different beam energies are described by a new non-extensive distribution, the Kaniadakis $\kappa$-distribution, with respect to the constraints in non-extensive quantum statistics. All fittings are also compared with the Tsallis distributions as well as the usual Boltzmann-Gibbs one. $\chi^2/ndf$ is also used to test the goodness of fit of all functions. Our results show that these different non-extensive approaches can be applied well to high energy collisions, in contrast to the classical one. The Kaniadakis statistics is typically better suited to systems in which both positive and negative particles are considered. This provides an alternative non-extensive view for studying high energy physics. An analysis of the fitting parameters is presented as well. The similar relationships among all functions point to the need for further understanding of the non-extensivity.

An insight into strangeness with $\phi$(1020) production in small to large collision systems with ALICE at the LHC
Hadronic resonances are unique tools to investigate the interplay of re-scattering and regeneration effects in the hadronic phase of heavy-ion collisions. As the $\phi$ meson has a longer lifetime compared to other resonances, it is expected that its production will not be affected by regeneration and re-scattering processes. Measurements in small collision systems such as proton-proton (pp) collisions provide a necessary baseline for heavy-ion data and help to tune pQCD-inspired event generators. Given that the $\phi$ is a bound state of a strange-antistrange quark pair (s$\bar{\rm{s}}$), measurements of its production can contribute to the study of strangeness production.
Recent results obtained using the ALICE detector show that although the $\phi$ has zero net strangeness content, it behaves like a particle with open strangeness in small collision systems, and the experimental results agree with thermal model predictions in large systems. The production mechanism of the $\phi$ is yet to be understood. We report on measurements with the ALICE detector at the LHC of $\phi$ meson production in pp, p--Pb, Xe--Xe and Pb--Pb collisions. These results are reported for minimum bias event samples and as a function of the charged particle multiplicity or centrality. The results include the transverse momentum ($p_{\rm T}$) distributions of the $\phi$ as well as the $\langle p_{\rm T}\rangle$ and particle yield ratios. The $\phi$ effective strangeness will be discussed in relation to descriptions of its production mechanism, such as strangeness canonical suppression, non-equilibrium production of strange quarks and thermal models.

Heavy ion anisotropies: a closer look at the angular power spectrum
Anisotropies in the final state of heavy-ion collisions carry information on the creation, expansion and evolution of the quark-gluon plasma. Currently, there is an abundance of studies on azimuthal anisotropies in comparison to longitudinal ones. The purpose of this work is to quantify angular $(\theta, \phi)$ correlations to further the understanding of the full spatial 3-D picture of emitted hadrons. Therefore, public ALICE data from Run 1 (2010) of Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76\mathrm{~TeV}$ are analyzed through the estimation of an angular power spectrum. Issues with the $|\eta| < 0.9$ limitation are tackled, as are event multiplicity and detector efficiency. Firstly, spectra are calculated for toy Monte Carlo samples. Secondly, heavy-ion data spectra are presented for the full momentum phase space $0.15 < p_T < 100\mathrm{~GeV}$ and also for the separate intervals $p_T < 0.54\mathrm{~GeV}$ and $p_T > 0.54\mathrm{~GeV}$. The latter reveal how different geometries dominate at distinct scales and transverse momenta. Finally, the study submits particles generated through the AMPT model to the same power spectrum analysis. This comparison shows that at scales dominated by flow geometry, AMPT qualitatively describes the data spectra, while the opposite is true for smaller scales.

Kinetic approach to a relativistic BEC with inelastic processes
The phenomenon of Bose-Einstein condensation is investigated in the context of the Color-Glass-Condensate description of the initial state of ultrarelativistic heavy-ion collisions. For the first time, in this paper we study the influence of particle-number changing $2 \leftrightarrow 3$ processes on the transient formation of a Bose-Einstein condensate within an isotropic system of scalar bosons by including $2 \leftrightarrow 3$ interactions of massive bosons with constant and isotropic cross sections, following a Boltzmann equation.
The one-particle distribution function is decomposed into a condensate part and a non-zero momentum part of excited modes, leading to coupled integro-differential equations for the time evolution of the condensate and phase-space distribution function, which are then solved numerically. Our simulations converge to the expected equilibrium state, and only for $\sigma_{23}/\sigma_{22} \ll 1$ do we find that a Bose-Einstein condensate emerges and decays within a finite lifetime, in contrast to the case where only binary scattering processes are taken into account and the condensate is stable due to particle-number conservation. Our calculations demonstrate that Bose-Einstein condensates in the very early stage of heavy-ion collisions are highly unlikely if inelastic collisions participate significantly in the dynamical gluonic evolution.

30 years of jet quenching
In the last 30 years, the physics of jet quenching has gone from an early stage of a pure theoretical idea to initial theoretical calculations, experimental verification, and now a powerful diagnostic tool for studying properties of the quark-gluon plasma (QGP) in high-energy heavy-ion collisions. I will describe my collaboration with Miklos Gyulassy in this exciting area of high-energy nuclear physics over the past 30 years on this special occasion of his 70th birthday and discuss what lies ahead in the jet tomographic study of the QGP in heavy-ion collisions.

Experimental searches for the chiral magnetic effect in heavy-ion collisions
The chiral magnetic effect (CME) in quantum chromodynamics (QCD) refers to a charge separation (an electric current) of chirality-imbalanced quarks generated along an external strong magnetic field. The chirality imbalance results from interactions of quarks, under the approximate chiral symmetry restoration, with metastable local domains of gluon fields of non-zero topological charges arising from QCD vacuum fluctuations. Those local domains violate $\mathcal{P}$ and $\mathcal{CP}$ invariance, potentially offering a solution to the strong $\mathcal{CP}$ problem in explaining the magnitude of the matter-antimatter asymmetry in today's universe. Relativistic heavy-ion collisions, with the likely creation of the high energy density quark-gluon plasma and restoration of the approximate chiral symmetry, and the possibly long-lived strong magnetic field, provide a unique opportunity to detect the CME. Early measurements of the CME-induced charge separation in heavy-ion collisions are dominated by physics backgrounds. Major efforts have been devoted to eliminating or reducing those backgrounds. We review those efforts, with a somewhat historical perspective, and focus on the recent innovative experimental undertakings in the search for the CME in heavy-ion collisions.
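For concreteness, the multiplicity scaling described in the first abstract of this list can be written schematically as
\begin{align*} \left.\frac{dN_{\gamma}}{dp_T}\right|_{p_T \lesssim 2\,\mathrm{GeV}/c} \;\propto\; \left(\left.\frac{dN_{\mathrm{ch}}}{d\eta}\right|_{\eta \approx 0}\right)^{\alpha}, \qquad \alpha = 1.25, \end{align*}
a hedged reconstruction from the abstract's wording rather than the PHENIX collaboration's exact notation.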
Regional lymphadenectomy vs. extended lymphadenectomy for hilar cholangiocarcinoma (Relay-HC trial): study protocol for a prospective, multicenter, randomized controlled trial

Min He, Xinsen Xu, Hao Feng, Wei Chen, Houbao Liu, Yongjie Zhang, Jianming Wang, Zhimin Geng, Yudong Qiu, Weidong Duan, Xiangcheng Li, Xuting Zhi, Weihua Zhu, Fuyu Li, Jiangtao Li, Shengping Li, Yu He, Zhiwei Quan & Jian Wang

Background: The prognostic benefits and safety of extended lymphadenectomy for hilar cholangiocarcinoma remain uncertain, and the available evidence is limited and largely retrospective. The aim of this study is to compare the clinical effect and safety of extended lymphadenectomy with those of regional lymphadenectomy in patients with hilar cholangiocarcinoma.

Methods: The Relay-HC trial is a prospective, multicenter, randomized controlled trial. Seven hundred and thirty-four eligible patients with resectable perihilar cholangiocarcinoma across 15 tertiary hospitals in China will be randomly assigned (1:1) to receive either regional lymphadenectomy or extended lymphadenectomy. The primary objective is to determine the overall survival after the two approaches. Secondary objectives include the evaluation of perioperative mortality, postoperative complications, and disease-free survival. This study has been approved by the ethics committee of each participating hospital.

Discussion: The Relay-HC trial is designed to investigate the prognostic benefits and safety of extended lymphadenectomy for hilar cholangiocarcinoma, which has never before been investigated in a prospective randomized controlled clinical trial.

Trial registration: Chinese Clinical Trial Registry (ChiCTR), ChiCTR1800015688. Registered on 15 April 2018.

Background

Hilar cholangiocarcinoma is one of the most common bile duct cancers, accounting for 60–70% of extrahepatic cholangiocarcinomas. Surgical resection remains the mainstay of potentially curative treatment for hilar cholangiocarcinoma. However, the probability of radical curative resection is low, and the prognosis is poor [1,2,3]. The incidence of lymph node metastases is high at presentation or on exploratory laparotomy. As the prognosis of patients with nodal metastases is significantly worse, the American Joint Committee on Cancer (AJCC) has modified the staging of lymph node invasion (N) several times during the last decade. However, many disputes remain concerning lymphatic dissection [1, 4, 5], and currently no consensus has been reached on the preferred method of lymphadenectomy in patients with hilar cholangiocarcinoma. The incidence of complications after extended and regional lymphadenectomy has only been assessed in small retrospective series. We hypothesize that extended lymph node dissection (stations 8a/p, 9, 12a/b/c/h/p, 13a, 14, 16) might improve the prognosis of patients without elevating the major complication rate. Therefore, the primary objective is to determine the overall survival (OS) rate for the two approaches, with the secondary endpoints of perioperative mortality, postoperative complications, and disease-free survival (DFS). This study will explore the prognostic benefits and safety of extended lymphadenectomy for hilar cholangiocarcinoma.

Methods

The Relay-HC trial is a prospective, multicenter, randomized controlled trial. Patients who will receive curative radical resection for hilar cholangiocarcinoma will be randomly assigned (1:1) to receive either regional lymphadenectomy or extended lymphadenectomy.
The sample size calculation resulted in a requirement of 734 patients. On the basis of previous data, the median 5-year OS of patients with hilar cholangiocarcinoma who underwent regional lymphadenectomy ($p_1$) was 0.17 (range 0.07–0.20) [6,7,8,9], and the 5-year OS for those with extended lymphadenectomy ($p_2$) was 0.26 [3, 10, 11]. The α level (type I error) is set at 0.05 (one-sided) and β at 0.2, giving a power of 0.8. The sample size per group is calculated as [12]:

$$ n_1=n_2=\frac{\left[u_{\alpha}\sqrt{2p\left(1-p\right)}+u_{\beta}\sqrt{p_1\left(1-p_1\right)+p_2\left(1-p_2\right)}\right]^2}{\left(p_1-p_2\right)^2},\qquad p=\frac{p_1+p_2}{2} $$

The minimum sample size for each group is 330. Allowing for 10% dropout, the actual sample size of each group is thus 367 patients. A numerical sketch of this calculation is given after the randomization details below.

Case selection

All patients aged between 18 and 80 years old with hilar cholangiocarcinoma will be referred for a multidisciplinary team evaluation at each center. Hilar cholangiocarcinoma is defined as a cholangiocarcinoma that develops at the point where the left and right hepatic ducts join to form the common hepatic duct (as determined by computed tomography [CT] imaging or magnetic resonance cholangiopancreatography, or both). Criteria for resectable hilar cholangiocarcinoma include an anticipated complete (R0) resection with an adequate future liver remnant (FLR > 30%), Child-Turcotte-Pugh grade A–B, and American Society of Anesthesiologists (ASA) grades I–III. The residual liver volume will be assessed by three-dimensional reconstruction of CT images. These procedures will be carried out by professional radiographers and the multidisciplinary teams from each center, who will be trained at the leading affiliation to ensure standardization. Patients who also have other malignancies will be excluded from the study.

The study is performed at hepatobiliary surgery centers from 15 tertiary hospitals that receive the majority of patients with hilar cholangiocarcinoma from various parts of China. Each of the centers is qualified for standard radical resection of hilar cholangiocarcinoma and regional/extended lymph node dissection. The institutional review board at each hospital has approved the protocol. The number of cases for each center is allocated according to the number of annual cholangiocarcinoma surgeries. A flowchart of the study design is shown in Fig. 1.

A Consolidated Standards of Reporting Trials (CONSORT) checklist for this study is provided in Additional file 2. The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist is provided in Additional file 3.

Patients who will receive curative radical resection for hilar cholangiocarcinoma will be randomly assigned (1:1) to regional lymphadenectomy or extended lymphadenectomy by computer-generated allocation based on the envelope method and stratified block randomization. The envelopes are sealed, opaque, and sequentially numbered. Randomization is performed by the trial coordinator (Study Group of Biliary Surgery of the Surgery Branch of the Chinese Medical Association). The random number table and the block assignment number table will be kept confidential by the full-time secretary of this project. Center-stratified, block-permuted randomization is used in this trial: participants are stratified by center, and permuted block randomization with a block size of 4 is used within each stratum.
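As a numerical cross-check of the sample-size formula above, the following is a minimal sketch in Python. It is an illustration rather than part of the protocol; whether $u_{\alpha}$ is taken as a one- or two-sided quantile is an assumption here, and small rounding differences from the protocol's reported figures (330 per group, 367 after dropout inflation) are expected.

# Minimal sketch: per-group sample size for comparing two proportions,
# following the formula quoted above (illustrative only).
from scipy.stats import norm

def sample_size_per_group(p1, p2, alpha=0.05, beta=0.2, two_sided=True):
    """Required sample size per arm to detect a difference between p1 and p2."""
    u_a = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    u_b = norm.ppf(1 - beta)              # power = 1 - beta
    p = (p1 + p2) / 2                     # pooled proportion
    num = (u_a * (2 * p * (1 - p)) ** 0.5
           + u_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

n = sample_size_per_group(0.17, 0.26)     # 5-year OS: regional vs. extended
print(round(n))                           # ~326, close to the protocol's 330
print(round(n / 0.9))                     # ~362 after 10% dropout inflation, cf. 367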
Interventions

The routine approach for hilar cholangiocarcinoma surgery is based on the National Comprehensive Cancer Network (NCCN) Guidelines Insights: Hepatobiliary Cancers, Version 2.2019 [13]. The surgical procedures consist of (extended) hemi-hepatectomy and complete lymphadenectomy of the hepatoduodenal ligament. The (extended) hemi-hepatectomy comprises en bloc excision of the liver hilum, extrahepatic bile ducts, and caudate lobe. Portal vein excision and reconstruction will be performed when the tumor infiltrates the portal vein bifurcation. In this study, we will use the lymph node classification system of the Liver Cancer Study Group of Japan (Additional file 1: Figure S1) [14]. Patients in the regional lymph node dissection group will receive radical resection with dissection of lymph node stations 8a/p, 12a/b/c/h/p, and 13a. In contrast, patients in the extended lymph node dissection group will receive radical resection with dissection of stations 8a/p, 9, 12a/b/c/h/p, 13a, 14, and 16. Experienced surgeons at each center will be trained in a standardized surgical approach, especially the surgical procedures and lymph node harvest procedures. Each operation will be digitally recorded (video and photography) and evaluated by third-party surgical experts, namely the data monitoring committee (DMC). The intraoperative evaluation, safety assessment, and tumor specimens will be evaluated by the surgical teams who perform the surgery as well as by pathologists. Concerning random allocation, a sealed envelope will be issued to the surgeon before the operation by a project secretary; two secretaries, recruited for clinical trial data management and not involved in the operations, are normally available. The surgeon opens the envelope and performs the operation according to the allocation inside. The patients, outcome assessors, pathologists, and data analysts will be blinded to the group assignment. To improve data quality, double data entry will be performed by the two secretaries. Final study follow-up is scheduled at 30 days after surgery, including evaluation of perioperative mortality and operative complications. Long-term follow-up assessments, including tumor markers, chest radiographs, contrast-enhanced CT of the upper abdomen, and survival status, are scheduled at 6 months, 1 year, 3 years, and 5 years after surgery.

Observation indices

The primary endpoint will be the 5-year OS rate. The secondary efficacy indices include the primary complication (Clavien-Dindo grade > II) within 30 days postoperatively, perioperative mortality, 6-month OS and DFS, 1-year OS and DFS, 3-year OS and DFS, and 5-year DFS. Cancer-specific survival is determined at the time of cholangiocarcinoma-related death. Disease-free survival is the time to any recurrence. The efficacy evaluation is based on the postoperative pathology: if the pathological margin is negative and at least six lymph nodes are harvested (LN ≥ 6), the operation is considered to have achieved its intended purpose. The safety indices include vital signs, adverse events, and clinical laboratory parameters (blood routine, urine routine, myocardial enzymes, coagulation, blood biochemistry, electrocardiogram, cardiac ultrasound), as well as early withdrawal (Fig. 2).
Enrollment, intervention, and assessments in the Relay-HC trial

Specifically, the intraoperative evaluation includes operative time, intraoperative blood loss, intraoperative blood transfusion, duration of hepatic occlusion, vascular anastomosis, and the area and number of lymph nodes dissected. Concerning the tumor-related evaluation, tumor specimens will be sent for pathological assessment of specimen quality, grading, pathological stage, venous tumor embolus, perineural invasion, lymph node collection, and the ratio of positive lymph nodes. Concerning feasibility and safety, routine blood examination, hepatic and renal function, and biochemical tests will be performed the day after surgery and then every 3 days. Chest and upper and lower abdominal CT will be performed on the seventh postoperative day to evaluate pleural effusion, lung infection, ascites, and abdominal infection. Additionally, vital signs and drainage will be monitored.

Statistical analysis

Concerning the primary endpoint (overall survival), the log-rank test for univariable testing and Cox regression will be used to compare the long-term prognosis of patients in the extended lymphadenectomy group and the regional lymphadenectomy group in the intention-to-treat population (a schematic code sketch of such a comparison is given after the citation information at the end of this protocol). Secondary endpoints include DFS, survival without recurrence of regional nodal metastases, distant metastasis-free survival, the primary complication within 30 days after surgery (Clavien-Dindo grade > II), and perioperative mortality. Time zero is set as the time of randomization. After enrollment, all patients who have been randomized will be included in the full analysis set (FAS). On the basis of the FAS, patients who meet the inclusion and exclusion criteria (Table 1), are compliant, and do not violate the clinical trial protocol will form the per protocol set (PPS).

Table 1 Eligibility criteria in the Relay-HC trial

The principal analysis consists of an intention-to-treat comparison of the major complications in both groups, using a Mann-Whitney U test for ordered categorical data with a two-sided 0.05 significance level. The proportion of patients with any severe operation-related complication will also be expressed in terms of relative risk (RR) and 95% confidence interval (CI). Categorical variables are evaluated using Pearson's χ2 test or Fisher's exact test.

Discussion

The main factors that affect prognosis include cellular differentiation, perineural infiltration, and lymphatic and microvascular infiltration. Lymph node metastasis is an important factor leading to poor prognosis. It is reported that the rate of lymph node metastasis in hilar cholangiocarcinoma is 20–50%, and once cholangiocarcinoma has spread to the lymph nodes, the 5-year survival rate declines to 5% [1,2,3].
In contrast, other researchers found that patients with regional or para-aortic lymph node metastasis have similar survival rates. This contradiction calls for more clinical research to explore the correlation between lymph node status and long-term outcome. The eighth edition of the AJCC staging system redefines "N" staging based on the number of lymph node metastases. This reflects the prognostic value of the number of positive lymph nodes and places higher demands on the extent of lymph node dissection. Further studies are needed to determine how many lymph nodes must be dissected in order to obtain an accurate count of positive lymph nodes. Whether extended lymph node dissection could improve the prognosis of patients with hilar cholangiocarcinoma is as yet unknown. The elevated morbidity rate induced by extended dissection has always been a major concern for hepatobiliary surgeons. Hakeem et al. found that the 3-year and 5-year OS for regional lymphadenectomy were 41% and 31%, and the 3-year and 5-year OS for extended lymphadenectomy were 26% and 12% [10]. There was no significant difference in median OS and DFS between the two groups, suggesting no prognostic benefit of extended lymphadenectomy. Kitagawa et al. showed that the OS of patients who received regional lymphadenectomy plus para-aortic lymph node dissection was significantly better than that of those who received regional lymphadenectomy only [1]. A positive lymph node ratio exceeding 20% was an independent prognostic factor [15]. However, the complication rate for the former group was 63%, which was slightly higher than the rate for the regional lymphadenectomy group [16,17,18]. In-depth analysis showed that the most common complications were pleural effusion and mild wound infections (Clavien-Dindo grades I–II) rather than complications such as lymphatic leakage, hemorrhage, and liver failure [19, 20]. In the preliminary study at our center, perioperative mortality did not increase, in contrast to some earlier reports. Thus, extended lymphadenectomy for hilar cholangiocarcinoma patients might be safe and feasible at a qualified hepatobiliary surgery center.

Trial status

The protocol version number is RELAY-HC Ver5.0, registered on 15 April 2018 (ChiCTR1800015688). Recruitment began on 30 July 2018, and the approximate date when recruitment will be completed is 30 July 2023.

Abbreviations

DFS: Disease-free survival

References

1. Kitagawa Y, Nagino M, Kamiya J, Uesaka K, Sano T, Yamamoto H, et al. Lymph node metastasis from hilar cholangiocarcinoma: audit of 110 patients who underwent regional and paraaortic node dissection. Ann Surg. 2001;233(3):385–92.
2. Nuzzo G, Giuliante F, Ardito F, Giovannini I, Aldrighetti L, Belli G, et al. Improvement in perioperative and long-term outcome after surgical treatment of hilar cholangiocarcinoma: results of an Italian multicenter analysis of 440 patients. Arch Surg. 2012;147(1):26–34.
3. Aoba T, Ebata T, Yokoyama Y, Igami T, Sugawara G, Takahashi Y, et al. Assessment of nodal status for perihilar cholangiocarcinoma: location, number, or ratio of involved nodes. Ann Surg. 2013;257(4):718–25.
4. DeOliveira ML, Schulick RD, Nimura Y, Rosen C, Gores G, Neuhaus P, et al. New staging system and a registry for perihilar cholangiocarcinoma. Hepatology. 2011;53(4):1363–71.
5. Kurosaki I, Tsukada K, Hatakeyama K, Muto T. The mode of lymphatic spread in carcinoma of the bile duct. Am J Surg. 1996;172(3):239–43.
6. Guglielmi A, Ruzzenente A, Campagnaro T, Valdegamberi A, Bagante F, Bertuzzo F, et al.
Patterns and prognostic significance of lymph node dissection for surgical treatment of perihilar and intrahepatic cholangiocarcinoma. J Gastrointest Surg. 2013;17(11):1917–28.
7. Ocuin LM, Bagci P, Fisher SB, Patel SH, Kooby DA, Sarmiento JM, et al. Discordance between conventional and detailed lymph node analysis in resected biliary carcinoma at or above the cystic duct: are we understaging patients? Ann Surg Oncol. 2013;20(13):4298–304.
8. Oshiro Y, Sasaki R, Kobayashi A, Murata S, Fukunaga K, Kondo T, et al. Prognostic relevance of the lymph node ratio in surgical patients with extrahepatic cholangiocarcinoma. Eur J Surg Oncol. 2011;37(1):60–4.
9. de Jong MC, Marques H, Clary BM, Bauer TW, Marsh JW, Ribero D, et al. The impact of portal vein resection on outcomes for hilar cholangiocarcinoma: a multi-institutional analysis of 305 cases. Cancer. 2012;118(19):4737–47.
10. Hakeem AR, Marangoni G, Chapman SJ, Young RS, Nair A, Hidalgo EL, et al. Does the extent of lymphadenectomy, number of lymph nodes, positive lymph node ratio and neutrophil-lymphocyte ratio impact surgical outcome of perihilar cholangiocarcinoma? Eur J Gastroenterol Hepatol. 2014;26(9):1047–54.
11. Murakami Y, Uemura K, Sudo T, Hashimoto Y, Nakashima A, Kondo N, et al. Prognostic factors after surgical resection for intrahepatic, hilar, and distal cholangiocarcinoma. Ann Surg Oncol. 2011;18(3):651–8.
12. Charan J, Biswas T. How to calculate sample size for different study designs in medical research? Indian J Psychol Med. 2013;35(2):121–6.
13. Benson AB, D'Angelica MI, Abbott DE, Abrams TA, Alberts SR, Anaya DA, et al. Guidelines Insights: Hepatobiliary Cancers, Version 2.2019. J Natl Compr Cancer Netw. 2019;17(4):302–10.
14. Miyazaki M, Ohtsuka M, Miyakawa S, Nagino M, Yamamoto M, Kokudo N, et al. Classification of biliary tract cancers established by the Japanese Society of Hepato-Biliary-Pancreatic Surgery: 3rd English edition. J Hepatobiliary Pancreat Sci. 2015;22(3):181–96.
15. Giuliante F, Ardito F, Guglielmi A, Aldrighetti L, Ferrero A, Calise F, et al. Association of lymph node status with survival in patients after liver resection for hilar cholangiocarcinoma in an Italian multicenter analysis. JAMA Surg. 2016;151(10):916–22.
16. Su CH, Tsay SH, Wu CC, Shyr YM, King KL, Lee CH, et al. Factors influencing postoperative morbidity, mortality, and survival after resection for hilar cholangiocarcinoma. Ann Surg. 1996;223(4):384–94.
17. Neuhaus P, Jonas S, Bechstein WO, Lohmann R, Radke C, Kling N, et al. Extended resections for hilar cholangiocarcinoma. Ann Surg. 1999;230(6):808–18; discussion 819.
18. Tsao JI, Nimura Y, Kamiya J, Hayakawa N, Kondo S, Nagino M, et al. Management of hilar cholangiocarcinoma: comparison of an American and a Japanese experience. Ann Surg. 2000;232(2):166–74.
19. Todoroki T, Kawamoto T, Koike N, Takahashi H, Yoshida S, Kashiwagi H, et al. Radical resection of hilar bile duct carcinoma and predictors of survival. Br J Surg. 2000;87(3):306–13.
20. Nakeeb A, Pitt HA, Sohn TA, Coleman J, Abrams RA, Piantadosi S, et al. Cholangiocarcinoma. A spectrum of intrahepatic, perihilar, and distal tumors. Ann Surg. 1996;224(4):463–73; discussion 473–5.

The study is supported by the Study Group of Biliary Surgery of the Surgery Branch of the Chinese Medical Association. Min He, Xinsen Xu and Hao Feng contributed equally to this work.
Author affiliations:
Department of Biliary-Pancreatic Surgery, Renji Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Pujian Road 160, Shanghai, 200127, People's Republic of China (Min He, Xinsen Xu, Hao Feng, Wei Chen & Jian Wang)
Department of General Surgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200025, People's Republic of China (Hao Feng)
Department of General Surgery, Zhongshan Hospital, School of Medicine, Fudan University, Shanghai, 200032, People's Republic of China (Houbao Liu)
Department of Biliary Surgery, The Third Affiliated Hospital, The Second Military Medical University, Shanghai, 200438, People's Republic of China (Yongjie Zhang)
Department of General Surgery, Tongji Medical College, Huazhong University of Science & Technology, Hubei, 430030, People's Republic of China (Jianming Wang)
Department of General Surgery, The First Affiliated Hospital of Xi'an Jiaotong University, Shaanxi, 710061, People's Republic of China (Zhimin Geng)
Department of General Surgery, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Jiangsu, 210008, People's Republic of China (Yudong Qiu)
Department of General Surgery, Chinese PLA General Hospital, Medical School of Chinese PLA, Beijing, 100853, People's Republic of China (Weidong Duan)
Department of General Surgery, The First Affiliated Hospital with Nanjing Medical University, Jiangsu, 210029, People's Republic of China (Xiangcheng Li)
Department of General Surgery, Qilu Hospital of Shandong University, Shandong, 250012, People's Republic of China (Xuting Zhi)
Department of General Surgery, Peking University People's Hospital, Beijing, 100044, People's Republic of China (Weihua Zhu)
Department of General Surgery, West China Hospital, Sichuan University, Sichuan, 610041, People's Republic of China (Fuyu Li)
Department of General Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang, 310009, People's Republic of China (Jiangtao Li)
Department of General Surgery, Sun Yat-Sen University Cancer Center, Guangdong, 510060, People's Republic of China (Shengping Li)
Department of General Surgery, The First Hospital Affiliated to AMU (Southwest Hospital), Chongqing, 400038, People's Republic of China (Yu He)
Department of General Surgery, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Kongjiang Road 1665, Shanghai, 200092, People's Republic of China (Zhiwei Quan)

Contributions: MH, HF, and JW* designed the study. MH, XX, and HF participated in writing the manuscript. WC revised the protocol. HL, YZ, JW, ZG, YQ, WD, XL, XZ, WZ, FL, JL, SL, YH, and ZQ were involved in the design of the study. ZQ and JW* approved the protocol. All authors read and approved the manuscript. Correspondence to Zhiwei Quan or Jian Wang.

Central ethical approval has been obtained from the institutional review board at the primary site (Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, People's Republic of China). We will not begin recruiting at other centers in the trial until local ethical approval has been obtained. Informed consent will be obtained from all study participants.

Additional files: Figure S1 Lymph node classification system used in the Relay-HC trial (JPG 437 kb); CONSORT 2010 checklist of information to include when reporting a randomized trial (DOC 217 kb); SPIRIT 2013 checklist: recommended items to address in a clinical trial protocol and related documents (DOC 126 kb).

He, M., Xu, X., Feng, H. et al.
Regional lymphadenectomy vs. extended lymphadenectomy for hilar cholangiocarcinoma (Relay-HC trial): study protocol for a prospective, multicenter, randomized controlled trial. Trials 20, 528 (2019). https://doi.org/10.1186/s13063-019-3605-z

Keywords: Hilar cholangiocarcinoma; Regional lymphadenectomy; Extended lymphadenectomy
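Returning to the statistical analysis plan above, the following is a minimal sketch of what the planned intention-to-treat survival comparison might look like in code. It assumes the Python lifelines library, and the follow-up times are synthetic; it is not the trial's actual analysis code.

# Minimal sketch of the planned primary analysis: log-rank test and Cox
# regression comparing the two lymphadenectomy arms (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 367                                                  # per-arm target
df = pd.DataFrame({
    "arm": np.repeat([0, 1], n),                         # 0 regional, 1 extended
    "months": rng.exponential([30.0] * n + [40.0] * n),  # invented survival times
})
df["event"] = (df["months"] < 60).astype(int)            # death observed within 5 years
df["months"] = df["months"].clip(upper=60)               # administrative censoring

a, b = df[df.arm == 0], df[df.arm == 1]
lr = logrank_test(a.months, b.months,
                  event_observed_A=a.event, event_observed_B=b.event)
print(f"log-rank p = {lr.p_value:.4f}")

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
cph.print_summary()                                      # hazard ratio for 'arm'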
High-solids enzymatic hydrolysis of ball-milled corn stover with reduced slurry viscosity and improved sugar yields

Minsheng Lu, Junbao Li, Lujia Han & Weihua Xiao (ORCID: 0000-0003-0142-8829)

Biotechnology for Biofuels volume 13, Article number: 77 (2020)

Background: High-solids enzymatic hydrolysis has attracted increasing attention for the production of bioethanol from lignocellulosic biomass, owing to its advantages of high product concentration, water saving, and low energy and capital costs. However, increasing the solids content worsens the rheological properties of the slurry, resulting in heat/mass transfer limitations and higher mixing energy. To address these issues, ball milling was applied to corn stover prior to enzymatic hydrolysis, and the rheological behaviors and digestibility of ball-milled corn stover under high-solids loading were investigated.

Results: Ball milling significantly modified the physicochemical properties of corn stover. The apparent viscosity of slurries at 30% solids loading decreased by a factor of 500 after milling for 60 min, and the yield stress was less than 10 Pa. The dramatic decrease of viscosity and yield stress enabled the hydrolysis process to be conducted in a shake flask while maintaining good mixing. Meanwhile, the estimated energy consumption for mixing during saccharification decreased by 400-fold compared to the untreated material. The resultant hydrolysate obtained using 10 FPU g−1 solids was determined to contain 130.5 g L−1 fermentable sugar, and no fermentation inhibitors were detected.

Conclusions: The proposed ball milling pretreatment improved the rheological behavior and sugar yield of high-solids corn stover slurry. Ball milling enables a high-solids slurry to maintain low viscosity and yield stress while yielding a non-toxic, high-concentration fermentable syrup, which is of great significance for inter-unit processing, mixing and downstream processing. In addition, the energy input for ball milling could be balanced by the reduced mixing energy. Our study indicates that ball milling is a promising pretreatment process for industrial bioethanol production.

Background

Bioethanol production from lignocellulosic materials is considered one of the solutions for improving the energy structure and mitigating global climate change, given its use of renewable feedstocks such as agricultural and woody residues or energy crops. A pretreatment step using physical or chemical methods is essential to deconstruct the recalcitrance of the plant cell wall in order to increase the accessibility of cellulose and hemicellulose to enzymes. The enzymes then synergistically depolymerize the carbohydrates into monosaccharides for fermentation to bioethanol by microorganisms. However, scale-up of the production of lignocellulosic ethanol still faces challenges that impede the economic feasibility of the whole process [1,2,3]. A promising approach to improve the process economics is to increase the solids content in the stream. By increasing the solids content, the resulting product concentration will be higher, which is beneficial for reducing equipment size and energy usage for heating and distillation [4]. Running at high solids consumes less water and produces less wastewater, and therefore requires less input for wastewater treatment. In addition, cost-effective distillation requires an ethanol concentration higher than 4% (w/w), i.e., a sugar concentration above 8% (w/w), which implies that for most types of lignocellulosic biomass, solids loadings of more than 15% are required [5, 6].
However, increasing solids loading is not without problems, and these problems may counteract the cost savings of the high-solids process. For example, owing to the hygroscopicity of lignocellulosic materials and the low water content at high-solids conditions, most of the water is retained within the porous structure (cell lumen, inter-cellular space and macro/micropores), resulting in an increase of viscosity and yield stress, which can cause mixing and mass transfer problems. Inadequate mixing causes transfer problems for heat and enzymes, as well as local accumulation of products, resulting in a decrease of conversion efficiency [7]. Moreover, the power consumption of stirring is related to the viscosity, as a higher impeller torque is required to overcome the shear stress under high-solids conditions [8]; this dramatically increases the power consumption of impeller agitation and affects the process economics. Zhang et al. [9] reported that the energy consumption for mixing increased by an order of magnitude when the solids loading of steam-exploded corn stover increased from 15 to 30% (79.5 and 1009.2 MJ t−1 slurry), equivalent to 9.3% and 58.6% of the thermal energy of the ethanol produced, respectively, and attributed this to the increase of viscosity. Therefore, a better understanding of the rheological behaviors of biomass slurries under high-solids loading would help to solve these challenges. Early work by Pimenova and Hanley [10, 11] indicated that corn stover slurries are shear-thinning, yield-stress fluids. The effects of solids loading, composition and particle morphology (including particle size, size distribution and aspect ratio) on the rheological properties of biomass slurries have been reported [12,13,14,15,16]. Studies have also looked at the evolution of the rheological behavior during the course of saccharification or fermentation [17,18,19]. In general, biomass slurries containing smaller particles exhibit lower viscosity and yield stress at the same solids loading [14, 15, 20], which implies that reducing particle size may potentially improve the rheology of high-solids slurries and therefore reduce the energy consumption for mixing. Ball milling is an effective method to alter the ultrastructure of lignocellulosic biomass. Our colleagues showed that ball milling significantly decreases the particle size and crystallinity of rice straw, while achieving an 82.71% glucose yield by enzymatic hydrolysis (EH) using 10 FPU g−1 dry solids for 48 h at 5% (w/v) solids loading [21]. Ball milling can also further dissociate the cross-linked cellulose–hemicellulose–lignin complex and depolymerize the cell wall polymers [22], which reduces the recalcitrance of the plant cell wall and may therefore improve the efficiency of EH. However, ball milling is an energy-intensive process, and the balance between the increased energy consumption for milling and the potentially reduced mixing energy remains unknown. The aim of this study was to investigate the effects of ball milling on the rheological properties and enzymatic hydrolysis of corn stover under high-solids loading. Corn stover was subjected to ball milling for different durations, and the rheological behaviors of ball-milled corn stover (BMCS) at 30% solids loading were characterized. Meanwhile, high-solids EH of BMCS under different solids loadings and enzyme loadings was conducted.
The changes in viscosity during EH were measured to estimate the energy consumption for stirring, and the balance between the energy input for ball milling and the reduced energy consumption for stirring was analyzed.

Physicochemical properties of BMCS

The composition of BMCS is listed in Table 1. The carbohydrate and lignin contents of the milled samples are essentially the same as those of the unmilled one, revealing that ball milling did not change the composition of corn stover.

Table 1: Physicochemical properties of BMCS

As shown in Table 1, ball milling significantly reduced the particle size of corn stover. As milling time increases, the median particle size decreases sharply in the first 10 min, then decreases slowly, and finally remains unchanged after 30 min. The D50 of BMCS0/BMCS10 is above 50 μm, which is still at the tissue scale, while the D50 of BMCS20/30/60/120 reaches the cellular scale with particle sizes below 50 μm [21, 23], indicating that the intact structure of the cell wall has been destroyed. The PV of corn stover decreased from 2.673 to 0.935 cm3 g−1, and the porosity decreased from 78.88 to 56.22%, after 120 min of ball milling (Table 1). The PV and porosity of BMCS decrease slightly in the first 10 min, then decrease sharply within 10–60 min, and finally reach a plateau. The cell lumen represents the largest scale of porosity, as its size is normally in the range of tens of micrometers [24]. Pores with diameters ranging from 10 to 100 μm occupy 76.0% of the total pore volume in BMCS0, but only 19.2% in BMCS120, indicating that cell lumens account for most of the PV of BMCS0. These results demonstrate that in the first 10 min the intact structure of the cell walls was only slightly broken, as the fragmentation reached only the tissue scale. Within 10–60 min, the fragmentation reached the cellular scale, with severe damage to the macropores (especially cell lumens and intercellular spaces), resulting in a sharp decrease of PV and porosity. Beyond 60 min of milling there were essentially no intact cell lumens left, so the PV and porosity remained unchanged. These results are consistent with the preceding analysis of the particle size. The XRD patterns of BMCS are shown in Fig. 1. The crystalline peaks [101], [002] and [040] gradually decrease or even disappear with increasing milling time. The CrI determined by the peak-height method is listed in Table 1; ball milling significantly reduced the crystallinity of corn stover. The CrI of BMCS0/10 remains relatively high (46.52%/42.37%) and then drops sharply, reaching 5.04% for BMCS120, indicating that the damage caused by ball milling deepens as the processing time is prolonged.

Fig. 1: X-ray diffraction patterns of ball-milled corn stover
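The 76.0% versus 19.2% figures above come from binning the mercury-intrusion data by pore diameter. A minimal sketch of that bookkeeping follows, assuming the porosimeter export provides incremental intrusion volume per pore-diameter bin; the arrays are placeholders, not the measured data.

import numpy as np

# Fraction of total pore volume contributed by pores of 10-100 um diameter,
# from incremental mercury-intrusion data. Placeholder numbers only.
pore_diameter_um = np.array([0.01, 0.1, 1.0, 5.0, 20.0, 50.0, 150.0])   # bin centers
incremental_pv = np.array([0.05, 0.08, 0.15, 0.25, 0.9, 1.0, 0.25])     # cm^3/g per bin

mask = (pore_diameter_um >= 10.0) & (pore_diameter_um <= 100.0)
fraction = incremental_pv[mask].sum() / incremental_pv.sum()
print(f"pores 10-100 um: {100 * fraction:.1f}% of total PV")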
Rheological behavior of BMCS slurries: apparent viscosity and yield stress

For conventionally thermochemically pretreated corn stover, the upper limit of solids content that can be effectively mixed in a conventional stirred-tank reactor is 12–15% [25], and EH at 30% solids loading is relatively difficult to achieve at laboratory scale. However, we found that BMCS slurries remained fluid at up to 30% solids loading. Therefore, the rheological behavior of BMCS slurries at 30% solids loading was measured over a logarithmically increasing shear rate between 0.01 and 100 s−1. The apparent viscosity varied greatly with shear rate, exhibiting shear-thinning behavior (see Additional file 1: Figure S1, for details). For better comparison, the apparent viscosity at γeff = 25.12 s−1 was selected and plotted as a function of milling time; the results are shown in Fig. 2a. The apparent viscosity and yield stress decrease dramatically with increasing milling time. For example, ball milling for 30 min reduced the apparent viscosity of the slurry at 30% solids loading by about 300 times, while extending the milling time to 120 min reduced it by only a further factor of about 10. According to the analysis in the "Methods" section, the significant decrease of viscosity has a potential benefit for reducing the mixing energy during high-solids EH; this is discussed later. The yield stress of BMCS60/BMCS120 is less than 10 Pa (Fig. 2a); 10 Pa is considered a critical value below which a slurry behaves as a pourable liquid and can easily be pumped between process units [18]. Previous studies reported a yield stress of about 1000 Pa for dilute acid-pretreated corn stover at 20% solids loading [18] and of around 1500 Pa for untreated and dilute acid-pretreated corn stover at 30% solids loading [14], which are higher than those of BMCS by several orders of magnitude. The dramatic decrease in apparent viscosity and yield stress may be attributed to two factors. First, entanglement between particles was alleviated by the significant decrease of particle size, reducing the apparent viscosity and yield stress [14, 20]. Second, the amount of free water in the slurries increased because of the decreased porosity (see Fig. 5 and Additional file 1: Table S1, for details), which increased lubrication between particles and reduced friction, thereby reducing the apparent viscosity and yield stress [14].

Fig. 2: Rheological parameters at 30% solids loading: a as a function of milling time, where the horizontal dashed line indicates the yield-stress threshold (10 Pa) below which the slurry is pourable and pumpable; b at different enzymatic hydrolysis times, where the solids loading of BMCS0 and BMCS10 is 20%. The apparent viscosity at a shear rate of 25.12 s−1 was used.

Figure 2b presents the changes in apparent viscosity as enzymatic hydrolysis proceeds for corn stover slurry at 30% solids loading. The apparent viscosity decreased as hydrolysis progressed, which can be explained by the decreasing insoluble solids content and the modified particle properties [18, 19]. The viscosity during EH was used to estimate the energy consumption for mixing according to the theory in the "Methods" section, as discussed in the next section.
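Because the flow sweep samples the viscosity on a logarithmically spaced shear-rate grid, the value at exactly 25.12 s−1 (or 30.16 s−1, see "Methods") has to be read off the measured curve. A minimal sketch of that step, interpolating in log-log space where shear-thinning data are close to linear; the data arrays are placeholders standing in for an exported flow-sweep table.

import numpy as np

# Read the apparent viscosity at a target shear rate from flow-sweep data.
# The arrays below are synthetic, not measured values from the paper.
shear_rate = np.logspace(-2, 2, 9)          # 1/s, instrument export grid
viscosity = 5.0 * shear_rate ** (-0.8)      # Pa*s, synthetic shear-thinning curve

def eta_at(target, gdot, eta):
    """Log-log interpolation of apparent viscosity at a target shear rate."""
    return float(np.exp(np.interp(np.log(target), np.log(gdot), np.log(eta))))

print(f"eta(25.12 1/s) = {eta_at(25.12, shear_rate, viscosity):.3f} Pa*s")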
Energy balance for ball milling and mixing

The energy consumption of ball milling and the required mixing energy during EH are shown in Table 2. The energy required for ball milling increases with milling time, and the energy input for BMCS120 reaches 19.34 MJ kg−1 DM (dry matter). By contrast, the required mixing energy during high-solids EH decreases with increasing milling time, owing to the dramatic decrease of viscosity after ball milling. The mixing energy consumption of BMCS0 is 8.23 MJ kg−1 DM, which is higher than that of thermochemically pretreated biomass (Table 2) [8, 9, 26, 27]. This can be attributed to the higher viscosity and to the lower liquefaction rate (i.e., EH rate) and digestibility of untreated corn stover (BMCS0). In fact, a strict comparison of the energy consumption during EH for different pretreated substrates is difficult, because the stirring tank, stirring speed, solids loading and hydrolysis time all affect the energy consumption for mixing. In this study, after milling for 10 min, the energy consumption for mixing was reduced to 3.69 MJ kg−1 DM, and further prolonging the milling time to 30 min sharply decreased the mixing energy to 0.037 MJ kg−1 DM. To calculate the energy balance between ball milling and mixing, the energy added by ball milling was compared with the mixing energy saved through the decreased viscosity, both relative to BMCS0; the results are shown in Fig. 3. When the milling time is 30 min or less, the added milling energy is less than the saved mixing energy. For example, the added energy consumption of BMCS30 is 4.16 MJ kg−1 DM, while the saved mixing energy (relative to BMCS0) is 8.19 MJ kg−1 DM. However, the added energy consumption is no longer offset by the saved mixing energy when the milling time is prolonged to 60 or 120 min. In terms of sugar release, the glucose yield increases to 23.3%, 49.4% and 55.3% for BMCS10, BMCS20 and BMCS30, respectively. For BMCS60, the added milling energy is essentially equal to the saved mixing energy, while the glucose yield increases by 287% (relative to BMCS0). Increasing the milling time to 120 min brings only a limited further improvement in glucose yield but doubles the milling energy.

Table 2: Energy consumption of ball milling, and mixing energy during high-solids EH

Fig. 3: Energy balance between ball milling and mixing. The reduced energy consumption for mixing was calculated relative to BMCS0; for example, for BMCS30: 0.037 − 8.23 = −8.19.

Ball milling has been considered an economically unfeasible pretreatment method because of its high energy consumption [28,29,30]. However, when the EH process is run at high solids, ball milling significantly reduces the viscosity of the slurry, thereby reducing the mixing energy and offsetting part of the energy consumed for milling, which has positive implications for the feasibility of ball milling as a pretreatment method. The results also highlight the importance of accounting for mixing energy when conducting high-solids EH.
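To make the trade-off concrete, the net energy change for each milling time is the milling energy added minus the mixing energy saved relative to BMCS0. A minimal sketch of this bookkeeping, using only the numbers quoted in the text; entries not reported there are omitted rather than invented.

# Net energy bookkeeping for ball milling vs. mixing (MJ per kg dry matter).
milling_energy = {            # energy added by ball milling
    "BMCS30": 4.16,           # reported increase for BMCS30
    "BMCS120": 19.34,
}
mixing_energy = {             # estimated stirring energy over 48 h of EH
    "BMCS0": 8.23,
    "BMCS10": 3.69,
    "BMCS30": 0.037,
}

baseline = mixing_energy["BMCS0"]
for sample in ("BMCS30",):
    saved = baseline - mixing_energy[sample]   # mixing energy saved vs. BMCS0
    net = milling_energy[sample] - saved       # > 0 means milling costs more than it saves
    print(f"{sample}: saved {saved:.2f}, net {net:+.2f} MJ/kg DM")
# BMCS30: saved 8.19, net -4.03 MJ/kg DM -> the milling input pays for itself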
High-solids enzymatic hydrolysis

Given that ball milling significantly reduced the viscosity and yield stress of the high-solids slurry, making it meet the industrial requirements of pourability and pumpability, we next evaluated the digestibility of BMCS at high solids loading. Because BMCS0 and BMCS10 behaved like wet granular material at 20% and 30% solids loading and liquefaction was insufficient, sampling was difficult and sugar data under these conditions could not be obtained. The sugar yields of BMCS at different enzyme loadings and solids loadings are shown in Fig. 4a. Increasing the milling time increased the sugar yield at every solids loading. Specifically, at an enzyme loading of 10 FPU g−1 solids and a solids loading of 10%, the glucose yield increased from 19.9% for BMCS0 to 57.3% for BMCS30, and further to 72.6% for BMCS60. Increasing the milling time further to 120 min raised the glucose yield only to 77.1%, suggesting that there is little value in milling for more than 60 min, as ball milling is an energy-intensive process. In addition, we found that the enzymatic hydrolysate contained considerable cellobiose, indicating that the yield of fermentable monosaccharides can be further improved by increasing the proportion of the relevant enzymes. The increase of glucose yield is likely a combination of the increased specific surface area (SSA), the decreased cellulose crystallinity and degree of polymerization (DP), and the dissociation of the cross-linked cellulose–hemicellulose–lignin complex after ball milling [21, 22, 31]. The increase of SSA, the decrease of DP, and the dissociation of the linkages between lignin and carbohydrates expose more reactive sites for enzymatic hydrolysis, and the loose, disordered structure of amorphous cellulose is more reactive, thus boosting the EH of corn stover. The positive effect of ball milling on the rheological behavior and enzymatic hydrolysis of corn stover slurries is summarized in the schematic diagram in Fig. 5.

Fig. 4: Enzymatic hydrolysis of BMCS at different solids loadings with different enzyme loadings: a sugar yield and b monomeric sugar concentration. The horizontal dashed line indicates the monomeric sugar concentration threshold (87 g/L) above which the distillation of ethanol is cost-effective (assuming an ethanol yield of 0.5 g/g glucose).

Fig. 5: Schematic diagram of the effect of ball milling on the rheological properties and digestibility

As mentioned above, cost-effective distillation requires an ethanol concentration above 4% (w/w), i.e., a fermentable sugar concentration higher than 8% (w/w), corresponding to around 87 g L−1 (a back-of-the-envelope check of this conversion is sketched below). Figure 4b shows that the total monomeric sugar concentration for BMCS30/BMCS60/BMCS120 at 30% solids loading remains above 87 g L−1 even after reducing the enzyme loading from 10 FPU g−1 solids to 5 FPU g−1 solids, whereas the results at 20% solids loading differ. After reducing the enzyme loading, the glucose yields of BMCS30 and BMCS60 at 20% solids loading decrease from 57.3% and 72.6% to 51.4% and 58.3%, respectively, but the decrease is smaller at 30% solids loading. Future work should optimize the enzyme loading and enzyme ratio (e.g., adding xylanase) to further increase the sugar yield. The change of glucose yield with solids loading is shown in Fig. 6a. The so-called 'solids effect', that is, the decrease of sugar yield at high solids loading, has been demonstrated in previous reports on high-solids EH of various pretreated lignocellulosic substrates [5, 32,33,34]. In this study, however, the glucose yield of BMCS was essentially unchanged when the solids loading increased from 10 to 20%, and only when the solids loading was further increased to 30% was the glucose yield significantly reduced. These results indicate that ball milling raises the threshold of the solids effect; in other words, it alleviates the solids effect to some extent. The glucose yield appears to begin to decrease when the glucose concentration exceeds 60 g L−1 (Fig. 6b), so the solids effect seems to be caused by end-product inhibition. Future work will apply simultaneous saccharification and fermentation (SSF) to this high-solids process to alleviate the reduction of sugar yield caused by product inhibition.
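The 87 g L−1 threshold can be checked with a short calculation. A minimal sketch, assuming an ethanol yield of 0.5 g per g of glucose (as in the Fig. 4 caption) and a broth density of about 1.09 kg L−1; the density is our assumption, implied by equating 8% w/w with roughly 87 g L−1.

# Back-of-the-envelope check of the 87 g/L fermentable sugar threshold.
ethanol_min_ww = 0.04            # 4% w/w ethanol for cost-effective distillation
yield_g_per_g = 0.5              # g ethanol per g glucose (theoretical max ~0.51)
broth_density_g_per_L = 1090.0   # assumed broth density, not from the paper

sugar_min_ww = ethanol_min_ww / yield_g_per_g        # 8% w/w sugar
sugar_min_g_per_L = sugar_min_ww * broth_density_g_per_L
print(f"minimum sugar concentration ~ {sugar_min_g_per_L:.0f} g/L")  # ~87 g/L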
Fig. 6: Effect of solids loading and glucose concentration on glucose yield: a solids loading (data collected from several publications) and b glucose concentration

The EH kinetics show that the glucose yield of BMCS reached a maximum after 10 h (Additional file 1: Figure S2), which is meaningful for shortening the production time of cellulosic ethanol. In addition, inhibitors such as furfural, HMF and acetic acid were not detected during the sugar analysis. Conventional thermochemical or physicochemical pretreatments may produce these inhibitors [35]. Running at high solids potentially implies increased concentrations of such inhibitors, which severely affect the fermentability of the microorganisms. Detoxification by overliming, ion exchange, zeolites or laccase is then usually required before fermentation [36], or a rinsing step followed by dehydration is applied to eliminate the inhibitors, which makes the entire process costly and may wash away some soluble sugar. The production of a high-concentration sugar syrup free of fermentation inhibitors is therefore very attractive for the downstream fermentation and distillation processes.

Ball milling significantly modified the physicochemical properties of corn stover. The dramatic decrease of particle size led to less entanglement between particles, and the increase of free water enhanced lubrication; these two factors significantly reduced the viscosity and yield stress of BMCS slurries. As a result, ball milling allowed corn stover to be successfully hydrolyzed at up to 30% solids loading in a shake flask while maintaining good mixing. Ball milling significantly decreased the crystallinity and DP of corn stover and dissociated the cross-linked cellulose–hemicellulose–lignin complex, thus boosting the glucose yield in high-solids EH; the resultant hydrolysate contained a fermentable sugar concentration exceeding 87 g L−1 (the minimum sugar concentration for cost-effective ethanol distillation), and no toxic fermentation inhibitors were detected. Furthermore, the energy balance analysis demonstrated that the increased energy used for ball milling can be balanced by the reduced mixing energy.

Corn stover and enzyme preparation

The corn stover used in this study was collected from the Shangzhuang Experimental Station of China Agricultural University. The whole crop residues were air-dried, cut into 2–3 cm lengths, dried at 40 °C for 48 h, and finally milled to pass through a 1-mm screen using an RT-34 milling machine (Hongquan Pharmaceutical Machinery Ltd., Hong Kong, China). The sample obtained here was denoted BMCS0. The enzyme preparation used in this study was Novozymes Cellic CTec2, purchased from Sigma-Aldrich (St. Louis, MO, USA). The filter paper activity of CTec2 was determined to be 160.8 FPU mL−1, and its protein content, measured by the Bradford method using bovine serum albumin as a standard [37], was 106.2 mg mL−1.

Ball milling

Milling was conducted in a vibratory ball mill equipped with a 2-L ZrO2 chamber (CJM-SY-B, Qinhuangdao Taiji Ring Nano Ltd., Hebei, China). The BMCS0 sample was mixed with ZrO2 balls (6–10 mm diameter) at a volume ratio of 1:2 and a filling rate of 30%, then milled for 10, 20, 30, 60 and 120 min; the resulting powders were denoted BMCS10, BMCS20, BMCS30, BMCS60 and BMCS120, respectively. During ball milling, the temperature was controlled below 20 °C using a cooling system.
Characterization of physicochemical properties

The composition of BMCS was measured by a two-step acid hydrolysis method according to the National Renewable Energy Laboratory (NREL) standard analysis procedure [38].

Distribution of particle size

A laser scattering particle size analyzer, Mastersizer 3000 (Malvern Instruments Ltd., United Kingdom), was used to measure the particle size distribution of BMCS in dry mode. The median particle size (D50) was taken as the average particle size of the sample.

Porous structure

The porous structure of BMCS, namely pore volume (PV) and porosity, was measured with an AutoPore-9500 mercury porosimeter (Micromeritics Instrument Ltd., United States) using a powder sample tube, with pressures ranging from 0.52 to 60,000 psia (corresponding to a pore size range of 347,263–3 nm) and an equilibration time of 10 min.

The X-ray diffraction (XRD) patterns were obtained with a Bruker D8 Advance X-ray diffractometer (Bruker AXS Inc., WI, Germany) with a Cu Kα radiation source operated at 40 kV and 40 mA. The scanning range of 2θ was from 5° to 40°, with a scanning speed of 2° min−1 and a step size of 0.02°. The crystallinity index (CrI) of BMCS was determined by the peak-height method using the following formula [39]:

$$ \text{CrI} = \frac{I_{002} - I_{\text{am}}}{I_{002}} \times 100, $$

where $I_{002}$ is the intensity of the [002] peak at approximately 2θ = 22.5°, and $I_{\text{am}}$ is the intensity of the minimum between the [002] and [101] peaks at around 2θ = 18°.
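As a worked illustration of the peak-height formula above, the sketch below computes CrI from a diffractogram; the 2θ search windows and the synthetic pattern are our assumptions, not values from the paper.

import numpy as np

def segal_cri(two_theta, intensity):
    """Crystallinity index by the peak-height method [39].

    I002: maximum intensity near 2-theta = 22.5 deg ([002] peak).
    Iam:  minimum intensity between the [101] and [002] peaks, near 18 deg.
    The search windows below are assumptions for typical cellulose patterns.
    """
    two_theta = np.asarray(two_theta)
    intensity = np.asarray(intensity)
    i002 = intensity[(two_theta > 21.0) & (two_theta < 24.0)].max()
    iam = intensity[(two_theta > 17.0) & (two_theta < 19.0)].min()
    return (i002 - iam) / i002 * 100.0

# Example with synthetic data only (not the paper's diffractograms):
tt = np.linspace(5, 40, 1751)
pattern = 100 * np.exp(-((tt - 22.5) / 1.5) ** 2) + 20 * np.exp(-((tt - 16) / 3) ** 2) + 5
print(f"CrI = {segal_cri(tt, pattern):.1f}%")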
The commercial enzyme preparation Novozymes Cellic CTec2 was used for high-solids EH of BMCS. Digestion was carried out in 250-mL shake flasks loaded with 60 g of BMCS slurry at 10%, 20% or 30% solids content, prepared in citrate buffer (pH 4.8), using enzyme loadings of 5 and 10 FPU g−1 solids; tetracycline hydrochloride (0.08 g L−1) was added to avoid microbial interference. EH was performed in a shaking water bath operating at 150 rpm and 50 °C. After digestion for 2, 5, 10, 24, 48 and 72 h, the well-mixed solid–liquid mixture was sampled for rheological measurement and product analysis. Part of each sample was subjected to apparent viscosity measurement to estimate the energy consumption of stirring during hydrolysis. The sugar and byproduct analysis followed the NREL standard analysis procedure [40]; the sugar concentration in the enzyme blank was deducted to obtain the final result. Sugar yield was defined as the percentage of monosaccharide released during digestion relative to the theoretical maximum.

Rheological measurement

Rheological measurements of corn stover slurries before and after enzymatic hydrolysis were carried out with an AR-G2 rheometer (TA Instruments) using a 25-mm serrated parallel-plate geometry with a 1.5-mm gap. The plate temperature was set to 50 °C in all cases. The apparent viscosity of the slurries was measured in flow sweep mode. The transducers were initialized (Conditioning Transducer) before data acquisition, ensuring that the normal force and torque transducers were in Force Rebalance Transducer (FRT) mode, that the appropriate torque range was selected, and that the normal force and torque were zeroed. The shear rate was increased logarithmically from 0.01 to 100 s−1. Unlike the model extrapolation method [14], the yield stress reported here was measured with the AR-G2 rheometer in oscillatory amplitude mode. Similarly, the transducers were initialized (Conditioning Transducer) before data acquisition. The strain amplitude was increased logarithmically from 0.01 to 100% at a frequency of 1 Hz. The yield stress (τy) was calculated as the maximum value of τ = G′·γ, where G′ is the elastic modulus of the slurry and γ the strain amplitude [18]. All measurements were carried out at least in triplicate with fresh samples.

Calculation of energy consumption for ball milling and stirring

The energy consumption of ball milling was measured by a wattmeter (Yadu, Ltd., Shanghai, China). The wattmeter recorded the real-time power of the ball mill every second, and the energy consumption was calculated by integrating the recorded power over the milling time:

$$ \text{EM} = \frac{\int_{0}^{t} P_t \,\mathrm{d}t}{m}, $$

where EM (kW h kg−1) is the energy consumption of ball milling, converted to MJ kg−1 with a coefficient of 3.6; Pt (kW) is the power of the milling machine at time t; and m (kg) is the mass of corn stover. In a specific stirring tank, the energy dissipated by the impeller depends on the viscosity of the slurry when operating in the laminar and transition regions (Reynolds number Re < 10^4) [26]. More specifically, the power consumption is P = Np·ρ·N³·D⁵, where Np is the impeller power number, ρ the fluid density, N the impeller speed, and D the impeller diameter. Since Np is a function of the Reynolds number (Np = K·Re⁻¹ = K·ηa/(ρ·N·D²)) [41], it follows that P = K·ηa·N²·D³, where K is the power constant; therefore, for a given stirred tank and stirring speed, the power consumption for mixing is proportional to the apparent viscosity of the slurry. Assuming that the EH was carried out in a 7-L conventional stirred tank equipped with a helical ribbon impeller (D = 0.185 m) [42], the energy consumption for mixing during EH can be estimated as follows. To save energy, the stirring speed is kept relatively low in high-solids EH; for example, the stirring speed in the pilot-scale reactors of the NREL is 55 rpm [14]. The corresponding effective shear rate is γeff = Ks·N [43], where Ks is the Metzner constant, equal to 32.9 for the selected helical ribbon impeller [42]. Therefore, the apparent viscosity at γeff = 32.9 × 55/60 = 30.16 s−1 was selected (according to the data exported from the instrument, the data at 25.12 s−1 were chosen). According to the power consumption curve of the helical ribbon impeller, the power constant K was taken as 173.1 [42]. The power consumption for stirring at each hydrolysis time could then be estimated from P = K·ηa·N²·D³, and the energy consumption over the whole EH period (48 h) was calculated by integrating over time.
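To make the estimation procedure concrete, the sketch below chains the steps just described: effective shear rate from the Metzner relation, power from P = K·ηa·N²·D³, and energy by integrating over the hydrolysis period. The constants are those quoted above; the viscosity-versus-time series is a placeholder for the measured values at each sampling point.

import numpy as np

# Mixing-energy estimate for laminar stirring: P = K * eta_a * N^2 * D^3.
K = 173.1            # impeller power constant (helical ribbon) [42]
D = 0.185            # impeller diameter, m
N = 55.0 / 60.0      # stirring speed, rev/s (55 rpm)
Ks = 32.9            # Metzner constant for this impeller [42]
gamma_eff = Ks * N   # ~30.2 1/s; viscosity should be read at this shear rate

t_h = np.array([0, 2, 5, 10, 24, 48])             # sampling times, h
eta = np.array([50.0, 20.0, 8.0, 3.0, 1.0, 0.5])  # Pa*s, placeholder data

power = K * eta * N**2 * D**3                  # W, per the scaling relation
energy_J = np.trapz(power, t_h * 3600.0)       # integrate over the 48-h EH period
print(f"gamma_eff = {gamma_eff:.1f} 1/s, mixing energy = {energy_J/1e6:.3f} MJ")

Dividing the integrated energy by the dry matter charged to the tank would give the MJ kg−1 DM figures reported in Table 2.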
Abbreviations: EH: enzymatic hydrolysis; BMCS: ball-milled corn stover; SEM: scanning electron microscopy; XRD: X-ray diffraction; CrI: crystallinity index; PV: pore volume; SSA: specific surface area; DP: degree of polymerization

References

1. Chen H, Fu X. Industrial technologies for bioethanol production from lignocellulosic biomass. Renew Sustain Energy Rev. 2016;57:468–78.
2. Nguyen TY, Cai CM, Kumar R, Wyman CE. Overcoming factors limiting high-solids fermentation of lignocellulosic biomass to ethanol. Proc Natl Acad Sci U S A. 2017;114(44):11673–8.
3. Xiros C, Janssen M, Byström R, Børresen BT, Cannella D, Jørgensen H, Koppram R, Larsson C, Olsson L, Tillman AM, et al. Toward a sustainable biorefinery using high-gravity technology. Biofuels Bioprod Biorefin. 2016;11(1):15–27.
4. Kristensen JB, Felby C, Jorgensen H. Yield-determining factors in high-solids enzymatic hydrolysis of lignocellulose. Biotechnol Biofuels. 2009;2(1):11.
5. Jorgensen H, Vibe-Pedersen J, Larsen J, Felby C. Liquefaction of lignocellulose at high-solids concentrations. Biotechnol Bioeng. 2007;96(5):862–70.
6. Larsen J, Østergaard Petersen M, Thirup L, Wen Li H, Krogh Iversen F. The IBUS process—lignocellulosic bioethanol close to a commercial reality. Chem Eng Technol. 2008;31(5):765–72.
7. Modenbach AA, Nokes SE. Enzymatic hydrolysis of biomass at high-solids loadings—a review. Biomass Bioenerg. 2013;56:526–44.
8. Palmqvist B, Liden G. Torque measurements reveal large process differences between materials during high solid enzymatic hydrolysis of pretreated lignocellulose. Biotechnol Biofuels. 2012;5:57.
9. Zhang J, Chu DQ, Huang J, Yu ZC, Dai GC, Bao J. Simultaneous saccharification and ethanol fermentation at high corn stover solids loading in a helical stirring bioreactor. Biotechnol Bioeng. 2010;105(4):718–28.
10. Pimenova NV, Hanley TR. Effect of corn stover concentration on rheological characteristics. Appl Biochem Biotechnol. 2004;113:347–60.
11. Pimenova NV, Hanley TR. Measurement of rheological properties of corn stover suspensions. Appl Biochem Biotechnol. 2003;105:383–92.
12. Wiman M, Palmqvist B, Tornberg E, Liden G. Rheological characterization of dilute acid pretreated softwood. Biotechnol Bioeng. 2011;108(5):1031–41.
13. Stickel JJ, Knutsen JS, Liberatore MW, Luu W, Bousfield DW, Klingenberg DJ, Scott CT, Root TW, Ehrhardt MR, Monz TO. Rheology measurements of a biomass slurry: an inter-laboratory study. Rheol Acta. 2009;48(9):1005–15.
14. Viamajala S, McMillan JD, Schell DJ, Elander RT. Rheology of corn stover slurries at high solids concentrations—effects of saccharification and particle size. Bioresour Technol. 2009;100(2):925–34.
15. Ehrhardt MR, Monz TO, Root TW, Connelly RK, Scott CT, Klingenberg DJ. Rheology of dilute acid hydrolyzed corn stover at high solids concentration. Appl Biochem Biotechnol. 2010;160(4):1102–15.
16. Nguyen TC, Anne-Archard D, Fillaudeau L. Rheology of lignocellulose suspensions and impact of hydrolysis: a review. In: Krull R, Bley T, editors. Filaments in bioprocesses, vol. 149. Cham: Springer; 2015.
17. Dunaway KW, Dasari RK, Bennett NG, Berson RE. Characterization of changes in viscosity and insoluble solids content during enzymatic saccharification of pretreated corn stover slurries. Bioresour Technol. 2010;101(10):3575–82.
18. Roche CM, Dibble CJ, Knutsen JS, Stickel JJ, Liberatore MW. Particle concentration and yield stress of biomass slurries during enzymatic hydrolysis at high-solids loadings. Biotechnol Bioeng. 2009;104(2):290–300.
19. Ghosh S, Holwerda EK, Worthen RS, Lynd LR, Epps BP. Rheological properties of corn stover slurries during fermentation by Clostridium thermocellum. Biotechnol Biofuels. 2018;11:246.
20. Dasari RK, Eric Berson R. The effect of particle size on hydrolysis reaction rates and rheological properties in cellulosic slurries. Appl Biochem Biotechnol. 2007;137–140(1–12):289–99.
21. Ji G, Han L, Gao C, Xiao W, Zhang Y, Cao Y. Quantitative approaches for illustrating correlations among the mechanical fragmentation scales, crystallinity and enzymatic hydrolysis glucose yield of rice straw. Bioresour Technol. 2017;241:262–8.
22. Liu H, Chen X, Ji G, Yu H, Gao C, Han L, Xiao W. Mechanochemical deconstruction of lignocellulosic cell wall polymers with ball-milling. Bioresour Technol. 2019;286:121364.
23. Barakat A, Monlau F, Solhy A, Carrere H. Mechanical dissociation and fragmentation of lignocellulosic biomass: effect of initial moisture, biochemical and structural proprieties on energy requirement. Appl Energy. 2015;142:240–6.
24. Meng X, Ragauskas AJ. Recent advances in understanding the role of cellulose accessibility in enzymatic hydrolysis of lignocellulosic substrates. Curr Opin Biotechnol. 2014;27:150–8.
25. Hodge DB, Karim MN, Schell DJ, McMillan JD. Model-based fed-batch for high-solids enzymatic cellulose hydrolysis. Appl Biochem Biotechnol. 2009;152(1):88–107.
26. Palmqvist B, Wiman M, Liden G. Effect of mixing on enzymatic hydrolysis of steam-pretreated spruce: a quantitative analysis of conversion and power consumption. Biotechnol Biofuels. 2011;4:10.
27. Correa LJ, Badino AC, Cruz AJG. Mixing design for enzymatic hydrolysis of sugarcane bagasse: methodology for selection of impeller configuration. Bioproc Biosyst Eng. 2016;39(2):285–94.
28. Barakat A, de Vries H, Rouau X. Dry fractionation process as an important step in current and future lignocellulose biorefineries: a review. Bioresour Technol. 2013;134:362–73.
29. Inoue H, Yano S, Endo T, Sakaki T, Sawayama S. Combining hot-compressed water and ball milling pretreatments to improve the efficiency of the enzymatic hydrolysis of eucalyptus. Biotechnol Biofuels. 2008;1:2.
30. Licari A, Monlau F, Solhy A, Buche P, Barakat A. Comparison of various milling modes combined to the enzymatic hydrolysis of lignocellulosic biomass for bioenergy production: glucose yield and energy efficiency. Energy. 2016;102:335–42.
31. Lu MS, Li JB, Han LJ, Xiao WH. An aggregated understanding of cellulase adsorption and hydrolysis for ball-milled cellulose. Bioresour Technol. 2019;273:1–7.
32. Bals BD, Gunawan C, Moore J, Teymouri F, Dale BE. Enzymatic hydrolysis of pelletized AFEX-treated corn stover at high solid loadings. Biotechnol Bioeng. 2014;111(2):264–71.
33. Cara C, Moya M, Ballesteros I, Negro MJ, González A, Ruiz E. Influence of solid loading on enzymatic hydrolysis of steam exploded or liquid hot water pretreated olive tree biomass. Process Biochem. 2007;42(6):1003–9.
34. Du J, Cao Y, Liu G, Zhao J, Li X, Qu Y. Identifying and overcoming the effect of mass transfer limitation on decreased yield in enzymatic hydrolysis of lignocellulose at high solid concentrations. Bioresour Technol. 2017;229:88–95.
35. Klinke HB, Thomsen AB, Ahring BK. Inhibition of ethanol-producing yeast and bacteria by degradation products produced during pre-treatment of biomass. Appl Microbiol Biotechnol. 2004;66(1):10–26.
36. Koppram R, Tomas-Pejo E, Xiros C, Olsson L. Lignocellulosic ethanol production at high-gravity: challenges and perspectives. Trends Biotechnol. 2014;32(1):46–53.
37. Bradford MM. A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem. 1976;72:248–54.
38. Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D. Determination of structural carbohydrates and lignin in biomass. National Renewable Energy Laboratory Technical Report NREL/TP-510-42618; 2008.
39. Segal L, Creely JJ, Martin AE, Conrad CM. An empirical method for estimating the degree of crystallinity of native cellulose using the X-ray diffractometer. Text Res J. 1959;29(10):786–94.
40. Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D. Determination of sugars, byproducts, and degradation products in liquid fraction process samples. National Renewable Energy Laboratory Technical Report NREL/TP-510-42623; 2008.
41. Carreau PJ, Chhabra RP, Cheng J. Effect of rheological properties on power consumption with helical ribbon agitators. AIChE J. 1993;39(9):1421–30.
42. Brito-de la Fuente E, Choplin L, Tanguy PA. Mixing with helical ribbon impellers: effect of highly shear thinning behaviour and impeller geometry. Chem Eng Res Des. 1997;75(A1):45–52.
43. Metzner AB, Otto RE. Agitation of non-Newtonian fluids. AIChE J. 1957;3(1):3–10.

This work was financially supported by the National Key R&D Program of China [Project No. 2016YFE0112800] and the Program for Changjiang Scholars and Innovative Research Team in University of the Ministry of Education of China [Project No. IRT-17R105].

Author information: College of Engineering, China Agricultural University (East Campus), P.O. Box 191, 17 Qing-Hua-Dong-Lu, Hai-Dian District, Beijing, 100083, People's Republic of China: Minsheng Lu, Junbao Li, Lujia Han & Weihua Xiao.

Author contributions: MSL conceived this study, designed and performed the experiments, analyzed the results, and wrote the manuscript. JBL assisted in the experimental analysis. LJH helped revise the paper. WHX coordinated the overall study and revised the manuscript. All authors read and approved the final manuscript. Correspondence to Weihua Xiao.

Additional file 1: Figure S1. Apparent viscosity as a function of shear rate and shear stress for BMCS slurry at different solids loadings. Figure S2. Enzymatic hydrolysis kinetic data of ball-milled corn stover. Table S1. Free water amount for BMCS slurry at 30% solids loading.

Citation: Lu, M., Li, J., Han, L. et al. High-solids enzymatic hydrolysis of ball-milled corn stover with reduced slurry viscosity and improved sugar yields. Biotechnol Biofuels 13, 77 (2020). https://doi.org/10.1186/s13068-020-01717-9

Keywords: High-solids; Rheology
Steady State Vector 3x3 Matrix Calculator

An n × n matrix A has at most n eigenvalues. The relation Av = λv, v ≠ 0, is a linear equation system in which λ is an eigenvalue and v is an eigenvector. For the transition matrix of a Markov chain, the entries in each row must add up to exactly 1, since each row represents a probability distribution. In a transition diagram, the different states are represented by circles, and the probability of going from one state to another is shown by curves with arrows; the probabilities on the arrows exiting a state likewise sum to exactly 1.

A steady-state vector x of a transition matrix M is a probability vector satisfying Mx = x, so multiplying the steady-state vector by the transition matrix returns the steady-state vector. Equivalently, x is an eigenvector of M for the eigenvalue 1, characterized by (M − I)x = 0. (Note that the formula is (M − I)x = 0, not M(x − I) = 0.) When distributions are written as row vectors and multiplied on the left, as in xP, one looks for the "left" eigenvectors of P instead; for a row-stochastic P this means solving (Pᵀ − I)x = 0. When the chain is irreducible, the transition probability matrix P has a single (not repeated) eigenvalue at λ = 1, and the corresponding eigenvector, properly normalized, is the steady-state distribution π: every irreducible finite state space Markov chain has a unique stationary distribution. If instead a stochastic matrix has two linearly independent eigenvectors corresponding to the eigenvalue 1, the steady state is not unique; this happens exactly when the chain has more than one equilibrium, i.e., more than one closed set of states.

Let A be a positive stochastic matrix. The higher the power of A, the closer its columns approach the steady state. Here is how to approximate the steady-state vector of A with a computer: choose any vector v0 whose entries sum to 1 (e.g., a standard coordinate vector) and compute Av0, A²v0, A³v0, and so on; these converge to the steady-state vector w. In eigenvector language, the eigenvector x1 is a "steady state" that doesn't change (because λ1 = 1), while an eigenvector x2 with |λ2| < 1 is a "decaying mode" that virtually disappears (for example, when λ2 = 0.5). This convergence of Pᵗ means that for large t, no matter which state we start in, we always have roughly the same probability of being in each state: for instance, about 0.28 of being in State 1 after t steps, in the example considered. That is true because, irrespective of the starting state, eventually equilibrium must be achieved; as k → ∞, the k-step transition probability matrix approaches a matrix whose rows are all identical. (If you have a calculator that can handle matrices, try finding Pᵗ for t = 20 and t = 30: you will find the matrix is already converging.) It can be shown that if P is a regular matrix, meaning some power of P has all positive entries, then Pⁿ approaches a matrix whose columns are all equal to a probability vector, called the steady-state vector of the regular chain.

A frequently asked question: after you subtract the identity matrix from the P matrix, how do you solve for the steady-state vector? For example, if subtracting the identity gives

−1 1/2 1/2
0 −1/2 0
0 1/2 −1

what do you do from here? Solve the homogeneous system (P − I)x = 0 by row reduction; the solutions form the eigenspace for the eigenvalue 1. Then scale the resulting eigenvector so that its entries sum to 1, since a steady-state vector must be a probability vector. (For a row-stochastic matrix, apply the same steps to the transpose.) A computational sketch of both routes, direct solve and repeated multiplication, follows below.
NET 2D 3D 3D vector 74HC595 access control adaptive random search adjacency list adjacency matrix adventure adversarial examples agents alarm algorithm allocation Android AngularJS animation ant colony optimization API Arduino Aristotle University of Thessaloniki array artificial immune artificial life assembly associativity AT&T syntax. In this video I will use method 2 to find the stable state matrix (3x3). Find an orthogonal matrix P such that D = P ?1 AP is a diagonal matrix. The transition matrix is P = 1 0 0 0. For example, if there is a matrix of: 0 1/2 1/2. Free matrix and vector calculator - solve matrix and vector operations step-by-step This website uses cookies to ensure you get the best experience. If you're seeing this message, it means we're having trouble loading external resources on our website. Just enter your equation like 2x+1. Examples of matrix decompositions that Wolfram|Alpha can compute include triangularization, diagonalization, LU, QR, SVD and Cholesky decompositions. In transforming vectors in three-dimensional space, rotation matrices are often encountered. Assume our probability transition matrix is: \[P = \begin{bmatrix} 0. 11 El Capitan limetorrents work version iCloud Norm Scale Calculator 1. We mention that this particular A is a Markov matrix. $\endgroup$ - Kavi Rama Murthy Feb 12 '18 at 5:57. we would have a vector equation with two different vectors and two independent degrees of freedom. Steady-state solution The final value, when all circuit elements have a constant or periodic behavior, is also known as the steady-state value of the circuit. Cheap essay writing sercice. 1- Heat Transfer Matrix 2. (b) Explain why the set of polynomials of degree exactly 3 is not a vector space. x [generic] [fuzzy] (generic tunnel or VPN, MTU: 1450). Solve matrix problems for free with Open Omnia. decomposition: Matrix Decomposition. The tangent vector at each given point can be calculated directly from the given matrix-vector equation x′ = Ax, using the position vector x = (x 1, x 2). Related tools: matrix calculator, linear system solver. Compartment analysis diagram. If you take a look on the function graphs, you see that die y-Achse bei schneidet und die y-Achse bei schneidet. Math230 Pe Feb2010 Final (2) - Free download as PDF File (. Figure 2: The matrix transformation T takes population vectors as input and as output. 11 Homogeneous Systems of Linear Equations 3. Escape Velocity Calculator: Computes the escape velocity of an object. 2 Eigenspaces. The classical method of time series decomposition originated in the 1920s and was widely used until the 1950s. For our example we will use a 3x3 matrix: Any row or column may be used to calculate the determinate. may be generated from any The steady-state response may be conrmed directly from the state equations. In Reactor and RxJava, you declare logic through operators. Participating. 1) independently solved Maxwell's equations in the early 1960s, which allows quantum field theory effects to be easily seen in the Maxwell correction to Coulomb's force law for steady charges to an equation which allows for charge motion. Sao Matrix Uthando Download. x [generic] [fuzzy] (generic tunnel or VPN, MTU: 1450). By signing up, I agree to receive emails from Matrix and other L'Oréal brands and programs. Solve matrix problems for free with Open Omnia. Below are plots of the family of lines y= x+ rand the curve y= ln(1 + x) x y r>0 r. 
In this case, we will have a line of equilibrium points (the direction vector for this line is the eigenvector associated to the eigenvalue zero). 2 Eigenspaces. Feynman (equation 28. Here, the value of a is promoted from short to int without the need of any explicit operator. To unlock this. Find the flux of the vector field in the negative z direction through the part of the surface z=g(x,y)=16-x^2-y^2 that lies above the xy plane (see the figure below). Matrices (arrays) are particularly nice for storing data points 2. Two's complement converter calculator is used to calculate the 2's complement of a binary or a decimal number. 5 q =-(Type an integer or simplified fraction for each matrix element. These vectors are scalar multiples of each other. More information. Finding the inverse of a 3x3 matrix using the Casio (left) and TI (right) Powered by Create your own unique website with customizable templates. At (1;1), the Jacobian matrix. As a similarity metric, how does cosine similarity differ from the number of. In this video I will use method 2 to find the stable state matrix (3x3). Finding a Steady-State Probability Matrix [05/12/2000] Find the steady-state matrix for the likelihood of a guard's being at each of the corners of a rectangular lot, if he is instructed to wait 10 minutes at each corner, and then either stay where he is or move to one of the adjacent corners randomly. The vector or tensor is usually related to some object that is actually undergoing the rotation, and the vector and/or tensor is along for the ride. A steady-state vector for a stochastic matrix is actually an eigenvector. 1 Examples of Systems 523 0 x3 x1 x2 x3/6 x2/4 x1/2 Figure 2. Previous Media of the day. Participating. the example. Explicații detaliate sunt furnizate pentru toate calculele. The rst three chapters treat vectors in Euclidean space, matrix algebra, and systems of linear equations. 1) The eigenvalues of a matrix are on its main diagonal If A is 3x3 with columns. Theorem: The steady-state vector of the transition matrix "P" is the unique probability vector that satisfies this equation:. Recall, {We would find if we calculated the 5th, 6th and and kth state matrix, we would find that they approach a limiting matrix of [0. Semilinear and quasilinear PDEs; method of characteristics. (2) 첫번째 고유벡터 는 음이 아니고, 이므로 steady state 이다. As a first approximation one can assume that the steady states in tanks 1 and 2. Sketch the bifurcation diagram of xed points x vs. ) where A is a 3x3 matrix of left. (1), and then the state response is substituted into the algebraic output equations,Eq. Quake Champions (2018). may be generated from any The steady-state response may be conrmed directly from the state equations. The matrix is called the state transition matrix or transition (X_4=3|X_3=2)$. 1908, x 2 = 0. 458 Chapter 17 Differential Equations EXAMPLE 17. Solving the simultaneous equations Given AX = B we can multiply both sides by the inverse of A, provided this exists, to give A−1AX = A−1B But A−1A = I, the identity matrix. The diffusion coefficient D as a function of reciprocal temperature for some metals and ceramics. T^51*S and T^52*S gave you the same answer. org Foundation, we aim to create an open platform which is as independent, vibrant and evolving as the Web itself but for communication. 5 x 11 sheet in your own handwriting Show your work and circle the correct answer. 3 Communication Classes 10. Matrices (arrays) are particularly nice for storing data points 2. 05 0 By factoring 0. 
At time t we calculate the polarization. This is one of midterm 1 exam problems at the Ohio State University Spring 2018. The steady state values found for "a, b, c, and d" are called "s1doubleBrackets(7)" After the steady state values are found, the Jacobian matrix can be found at those values. When we want to control the system in general we use the Laplace transform (Z-Transform for digital systems) to represent the system. $\begingroup$ This is not a standard terminology but the only guess I can make is to think that it stands for a stationary vector: a vector v such that Mv=v where M is the transition matrix. The transition matrix P must list all possible states in the state space S. Freemathhelp. 2 Moving averages. where the non-zero elements are restricted. Let e be the n-vector of all 1's, and b be the (n+1)-vector with a 1 in position n+1 and 0 elsewhere. steady-state solution. heated_plate, a MATLAB code which solves the steady state heat equation in a 2D rectangular region, and is intended as a starting point for a parallel version. can be expressed as a linear combination of them. If we assume that reaching steady state required about five time-constants, then the time constant for "charging" the parasitic inductance is about 4/5 microseconds which is a result of the inductive reactance in series with the parasitic resistance of the. 5 -1] by doing M - the identity matrix. Given a square matrix mat[][] of size N x N. the tangent vector) x′, the derivative of the solution vector x, evaluated at the given point. Solving the simultaneous equations Given AX = B we can multiply both sides by the inverse of A, provided this exists, to give A−1AX = A−1B But A−1A = I, the identity matrix. A is called the matrix of coefficients. Matrices (arrays) are particularly nice for storing data points 2. Using the Calculator Based on the information you input, the SNAP calculator will estimate whether a household meets SNAP's income guidelines, as well as the benefit amount for SNAP. They contain elements of the same atomic types. Based on this fact, Newton's-type method that required Jacobian computation may not be a good candidate for solving singular nonlinear equations [3, 5]. transform property allows to scale, rotate, skew and move HTML element. The integral calculator with limits helps you to get accurate results. Using the results of a), b), c) ?nd the solution to the system of di?erential equations dx1 = 3x1 + 2x2 + 2x3 dt dx2 = 2x1 + 2x2 dt dx3 = 2x1 + 4x3 dt subject to the initial conditions x1 (0) = 3, x2. com Then, it tells you that in order to find the steady state vector for the matrix, you have to multiply [-1. If you're not sure what statistics calculator you require, check out our Which Statistics Test? wizard. Includes problems with solutions. inline void to_axis_angle(const vector3 &axis, float &angle) {. One way we value our customers is to offer them the best value when it comes to ammunition. The resulting steady-state regions and their stability are visualised as a two-dimensional bifurcation phase plane for (θ 1, θ 2). org Foundation, we aim to create an open platform which is as independent, vibrant and evolving as the Web itself but for communication. Markov matrix의 특성. We define PageRank Vector x x = (1 0 0 0…0), the Probability of reaching from Page (1) to itself is 1, to other pages is 0. Compute v 1 = Av 0, v 2 = Av 1, v 3 = Av 2, etc. be/87u7a2XGq1s. 2nd Iteration: or,. It will do conversions and sum up the vectors. 
5 x 11 sheet in your own handwriting Show your work and circle the correct answer. Calculator Use. R - Matrices - Matrices are the R objects in which the elements are arranged in a two-dimensional rectangular layout. Markov matrix는 모든 요소가 0보다 크거나 같고, 각 열 벡터들의 요소들을 더하면 1이 되는 행렬 이다. In that case, the limiting product lim k → ∞ π(0)P k is the same regardless of the initial distribution π(0). Block Matrix This file contains three programs concerning block matrices, including LDU decomposition, inverse and Woodbury's formula. Free matrix calculator 3x3 for Android. RowsAtCompileTime and ColsAtCompileTime are the number of rows and columns of the matrix as known at compile time (see below for what to do if the number is not known at compile time). 95 0 0 0 1 5. Rotation matrix. We define PageRank Vector x x = (1 0 0 0…0), the Probability of reaching from Page (1) to itself is 1, to other pages is 0. Substituting this result into the other equation determines x 1. what they described as the \ultra ne metal plus liquid matrix method", using 300 A cobalt nanoparticles in glycerol as a matrix for the sample, and using a 337 nm nitrogen laser f. Explanation of Solution. The matrix() function is specified with six values. Full version is here. An n × n matrix A has at most n eigenvalues. This makes it a great technique to use in almost any important decision where there isn't a. You can examine multiplication apart that was used to get the current power on every step. The question is to find the steady state vector. Find more Mathematics widgets in Wolfram|Alpha. THEOREM 1, 2 and 3 (Sections 4. Find the steady-state vector for the matrix below. if none of the eigenvalues of A are zero and at least one of the eigenvalues has positive real part then xst is unstable. 5 q =-(Type an integer or simplified fraction for each matrix element. 1- Quasi-harmonic Steady State Field Problem 1. For this problem: It follows that the normal vector is <-2x,-2y,-1>. Recall, {We would find if we calculated the 5th, 6th and and kth state matrix, we would find that they approach a limiting matrix of [0. Free matrix and vector calculator - solve matrix and vector operations step-by-step This website uses cookies to ensure you get the best experience. Matrix is an open source project that publishes the Matrix open standard for secure, decentralised, real-time communication Maintained by the non-profit Matrix. ) Get more help from Chegg Get 1:1 help now from expert Algebra tutors Solve it with our algebra problem solver and calculator. Here, the value of a is promoted from short to int without the need of any explicit operator. ij)isthetransition matrix of the chain. The notable feature of a Markov chain model is that it is historyless in that with a fixed transition matrix, the next state depends only on the current state, not on any prior states. Stationary Matrix {When we computed the fourth state matrix of a previous problem we saw that the numbers appeared to approaching fixed values. matrix by specifying the names of both matrices and the rows and columns to extract. Figure 2: The matrix transformation T takes population vectors as input and as output. 1874 Note that after solution we can substitute these values back into our four equations above to check that these values are consistent with those equations. Media of the day. Recipe 2: Approximate the steady state vector by computer. Each vector of 's is a probability vector and the matrix is a transition matrix. 
Držač štapova za natjecateljsku stolicu za 3 štapa, mogu se koristiti i pojedinačno. Fast and easy calculations with matrices on your. The column space of an m n matrix A is a subspace of Rm. inline void to_axis_angle(const vector3 &axis, float &angle) {. And there are a ton of different ways of representing a rotation as three. But since we already said that matrix multiplication is not commutative, the For example; given that matrix A is a 3 x 3 matrix, for matrix multiplication AB to be possible, matrix B must have. I'd love to hear them,Prod by Red killer,If you want to collaborate with me, click on my avatar and send me your intention Free Trap Piano loops download 140bpm. In control systems engineering, the stability of a system (modeled in the form of Transfer Function) is determined by the poles of the system in the right or left hand sides. The steady state values found for "a, b, c, and d" are called "s1doubleBrackets(7)" After the steady state values are found, the Jacobian matrix can be found at those values. Get the free "Eigenvalues Calculator 3x3" widget for your website, blog, Wordpress, Blogger, or iGoogle. This includes some functions identical to regular mathematical functions such as mm for multiplying a. energy states. Dot Matrix [Breakout]. Try to determine an expression for the angular frequency. decomposition: Matrix Decomposition. In Example 9. matrix by specifying the names of both matrices and the rows and columns to extract. Quake Champions (2018). matrix returns TRUE if x is a vector and has a "dim" attribute of length 2 and FALSE otherwise. This is a powerful method for understanding the relationship between model parameters and the existence and stability of the system steady-states. Sao Matrix Uthando Download. was processed using Markov decision-making processes to calculate striptease files. In three dimensions, the rotation axis can be determined from a directed line or a direction vector v⃗. be/87u7a2XGq1s. Matlab post There are times where you have a lot of data in a vector or array and you want to extract a portion of the data for some analysis. Solve matrix problems for free with Open Omnia. This vector addition calculator can add up to 10 vectors at once. However for a 3x3 matrix, I am confused how I could compute the steady state. Calculator for finite Markov chain. Skip to steady state vector: 6:00 A positi. If you let x 3 and x 4 be free variables, the second equation directly above implies. If you need professional help with completing any kind of homework, Success Essays is the right place to get it. This is one of midterm 1 exam problems at the Ohio State University Spring 2018. Matrix Power Calculator Here you can raise a matrix to a power with complex numbers online for free. By superposition, the general solution to the differential equation has the form. This matrix calculator uses the techniques described in A First Course in Coding Theory by Raymond Hill [HILL86] to transform a generator matrix or parity-check matrix of a linear [n,k]-code into standard form. So if the populations of the city and the suburbs are given by the vector , after one year the proportions remain the same (though the people may move between the city and the suburbs). - no observable result. Vector intersection angle. out Enter the no. The Covariance Matrix is also known as dispersion matrix and variance-covariance matrix. 
If we assume that reaching steady state required about five time-constants, then the time constant for "charging" the parasitic inductance is about 4/5 microseconds which is a result of the inductive reactance in series with the parasitic resistance of the. We reproduce a memory representation of the matrix in R with the matrix function. CALCULATOR; COMMENTS; We often list the transition probabilities in a matrix. 78,328 new cases and 1,014 new deaths in the United States. Rules for inverting a 3x3 matrix are here. Free online inverse eigenvalue calculator computes the inverse of a 2x2, 3x3 or higher-order square matrix. Here is how to approximate the steady-state vector of A with a computer. Theorem: The steady-state vector of the transition matrix "P" is the unique probability vector that satisfies this equation:. Matrix Calculator focus for price, performance, size, models and design. Connect with friends, family and other people you know. The Matrix3x3 class implements a 3x3 rotation matrix, to perform linear algebra in combination with Quaternion, Transform and Vector3. Home Math Matrix calculatorLu factorization calculator. 6, it was seen that as k → ∞, the k-step transition probability matrix approached that of a matrix whose rows were all identical. edu is a platform for academics to share research papers. Related tools: matrix calculator, linear system solver. You can do that by using 'eig' on the transpose of your matrix. The nodal admittance matrix of the typical power system is large and sparse, and can be constructed in a systematic building-block manner; The building-block approach provides. Let A be a positive stochastic matrix. Analysis of the eigenvalues (and eigenvectors) of the stability matrix characterizes the type of fixed point. matrix(data, nrow, ncol, byrow, dimnames) Following is the description of the parameters used − data is the input vector which becomes the data elements of the matrix. (2) 첫번째 고유벡터 는 음이 아니고, 이므로 steady state 이다. Chapter 10 Finite-State Markov Chains (Online) INTRODUCTORY EXAMPLE: Google and Markov Chains 10. It says the kth state of our model is equal to the matrix of eigenvectors S times the matrix of eigenvalues Λ raised to the power of k, times some vector c that gives combinations of them. Let A(t) be an anti-symmetric n ×n-matrix depending continu-. As a similarity metric, how does cosine similarity differ from the number of. pdf), Text File (. Super kvalitetan i robustan držač štapova Matrix 3D-R Multi Angle Rod Holder, nov, ne korišten. In order to calculate forces and energies. sparse API for dealing with sparse matrices. The matrix() function is specified with six values. Since the coefficient matrix is 2 by 4, x must be a 4‐vector. the example. Selling Steam | 3x1080+ | 195 Exots | Trials sparrow | 9 Seals | 105k Triumphs | $800. In matrix terms, the the Gauss-Seidel iteration can be expressed as where and , , and represent the diagonal, lower triangular, and upper triangular parts of the coefficient matrix. I can solve it by hand, but I am not sure how to input it into Matlab. The Matrix System MT5 EA is a scalping trading robot. out Enter the no. • Steady State: the concentration profile doesn't. Recall, {We would find if we calculated the 5th, 6th and and kth state matrix, we would find that they approach a limiting matrix of [0. heated_plate, a MATLAB code which solves the steady state heat equation in a 2D rectangular region, and is intended as a starting point for a parallel version. 
These chapters provide the motivation and 3. It still forms the basis of many time series decomposition methods, so it is important to understand how it works. Good luck with those physics problems !!!. A linear property is to determine the frequency of small oscillations about its equilibrium configuration. Matrix Calculator: A beautiful, free matrix calculator from Desmos. If we have a matrix A having the following values. Matrices worksheet pdf. 5 -1] by [x1 x2 x3] to get [0 0 0] I understand that they got the: [-1. and expresses the inner product in terms of the vector sum and length. Indexing is the way to do these things. This is a JavaScript that performs matrix multiplication with up to 10 rows and up to 10 columns. Chapter 10 Finite-State Markov Chains (Online) INTRODUCTORY EXAMPLE: Google and Markov Chains 10. Matrix Calculator apk. Calculators¶. 95 0 0 0 1 5. Get 1:1 help now from expert Other Math tutors. Eigenvalues and Eigenvectors Calculator. More information. However, I am supposed to solve it using Matlab and I am having trouble getting the correct answer. Normalize Matrix Calculator. State the value of n and explicitly determine this subspace.
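As a concrete illustration of the recipe above, here is a minimal Python/NumPy sketch that approximates the steady-state vector of a small column-stochastic matrix by repeated multiplication and cross-checks it against the eigenvector for eigenvalue 1. The 3x3 matrix is a made-up example, not one taken from the notes above.

```python
import numpy as np

# A made-up 3x3 column-stochastic matrix: nonnegative entries,
# each column sums to 1.
P = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Recipe: start from any probability vector and iterate v <- P v.
v = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    v = P @ v
print("power iteration:", v)

# Exact route: find the eigenvector of P for eigenvalue 1 and
# normalize it so its entries sum to 1.
vals, vecs = np.linalg.eig(P)
x = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
x = x / x.sum()
print("eigenvector    :", x)
```

Both routes print the same probability vector, confirming that the iterates converge to the eigenvalue-1 eigenvector.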
BMC Ecology

Differences in growth-economics of fast vs. slow growing grass species in response to temperature and nitrogen limitation individually, and in combination

Claudia Colesie, Zsofia Reka Stangl & Vaughan Hurry (ORCID: orcid.org/0000-0001-5151-5184)

BMC Ecology volume 20, Article number: 63 (2020)

Abstract

Fast growing invasive alien species are highly efficient, investing little in their tissues. They often outcompete slower growing species, with severe consequences for diversity and community composition. The plant economics trait-based approach provides a theoretical framework that allows the classification of plants with different performance characteristics. However, against a multifaceted environmental background this approach needs testing. The evaluation and prediction of plant performance outcomes in ecologically relevant settings is among the most pressing topics for understanding and predicting ecosystem functioning, especially in a quickly changing environment. Temperature and nutrient availability are major components of global environmental change, and this study examines the response of growth economic traits, photosynthesis and respiration to such changes for an invasive fast-growing (Bromus hordeaceus) and a slow-growing perennial (Bromus erectus) grass species. The fully controlled growth chamber experiment simulated changes in temperature and in nitrogen availability, individually and in combination. We therefore provide maximum control and monitoring of growth responses, allowing general growth trait response patterns to be tested.

Under optimal nitrogen availability the slow growing B. erectus was better able to handle the lower temperature (7 °C), whilst both species had problems at the higher temperature (30 °C). Stresses produced by a combination of heat and low nutrient availability were less limiting for the slow growing species, but the combination of chilling with low nutrient availability was the most detrimental to both species. For the fast-growing invader B. hordeaceus, a reduction of nitrogen availability in combination with a temperature increase leads to limited growth performance in comparison to the slow-growing perennial species B. erectus, and this may explain why nutrient-rich habitats often experience more invasion than resource-poor habitats.

Background

The spread of fast-growing invasive alien species is one of the major threats to habitats and their species diversity, with implications for plant community assembly in future climate change scenarios. Invasive species may succeed even in low-resource environments by employing resource conservation traits such as high resource-use efficiency [18], and they are typically species located on the 'fast' end of the productivity-persistence trade-off axis [13]. The 'plant economics spectrum' concept provides the theoretical framework to arrange plant species from the 'fast' end of the productivity-persistence trade-off axis to taxa with 'slow', conservative life traits. It integrates across leaves, stems and roots and is a key feature helping to explain individual ecological traits, community assembly processes and the functioning of ecosystems [48]. According to Reich [48], a fast or a slow growth strategy each requires a particular set of leaf, root and stem traits. Plants with a slow growth strategy will have low respiration rates, low nutrient concentrations and dense tissues, with low water movement and loss capacities across all plant tissues.
In contrast, fast-growing species are highly efficient in transporting water, in acquiring and using nutrients and in fixing carbon, but invest less in their tissues (whether root, stem, or leaf). A plant's individual performance results from the coordinated operation of many processes, such as nutrient uptake, organ turnover or photosynthesis; predicting it therefore requires a certain set of traits. A plant's economy is determined by its handling of three key resources, carbon, water and mineral nutrients, and by the functional and eco-physiological traits most relevant to these. Functional traits encapsulate the relative and overall constitutive adaptation of plants by revealing the strategies developed under evolutionary forces. Therefore, functional traits inform on the overall level of environmental stress in each environment.

The most prominent functional trait relevant to the plant's carbon economics is the specific leaf area (SLA, defined as the amount of leaf area per unit leaf weight). SLA is widely used as a proxy to predict a plant's position on the resource use axis [60] and can be considered the prime factor determining interspecific variation in relative growth rate (RGR; [28]). By definition, leaves with a lower SLA are denser (greater mass per volume) or thicker [46] and tend to invest more in structural leaf defences [9]. The reciprocal of SLA, the leaf mass per area (LMA), is also frequently used [28] as an indicator of plant function [16] and to position a species along an axis based on resource acquisition. The root to shoot ratio (R:S) is a measure of the allocation of biomass to roots in relation to aboveground biomass, and can be interpreted according to the "optimal resource partitioning strategy". Fast growing species are characterized by low LMA and high SLA, and vice versa for slow growers.

For evaluation of a plant's nutrient economy, functional traits like nitrogen and phosphorus content, the carbon to nitrogen (C:N) ratio and the nitrogen use efficiency (NUE) are widely used. Nitrogen concentrations are a common leaf and root trait syndrome that links traits to effects on whole plant processes [55]. The C:N ratio of an organ is often regarded as a convenient indicator of growth and quality, and can also be considered a good indicator of secondary compound concentrations in all plant organs. The NUE (increase in dry weight per unit of nitrogen) describes the efficiency of carbon incorporation into biomass [30].

Other than functional traits, ecophysiological traits condense acclimative processes in an individual plant, and they can account for variations in flows of material and energy. The most prominent ecophysiological traits relevant to the plant's carbon economics are respiration (R) and net photosynthesis (NP; [48]). Products from photosynthesis account for approximately 90% of a plant's dry weight, therefore the photosynthetic properties of a plant are the basis for understanding any variation in growth. However, the daily carbon budget of a plant is also strongly influenced by respiration, because approximately 50% of the fixed carbon is respired [44]. Respiration takes place in all plant organs and is therefore very important when whole plant carbon economics are to be understood. The major outcome of the plant economics trait-based approach is to evaluate performance outcomes in ecologically relevant settings; a few of the trait calculations named above are illustrated in the sketch below.
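To make the trait definitions above concrete, here is a minimal Python sketch computing SLA, LMA, the R:S ratio, the C:N ratio and NUE from a single hypothetical harvest. All input numbers are invented for illustration and do not come from the study.

```python
# Hypothetical single-plant harvest data (invented for illustration).
leaf_area_m2 = 0.0012   # total leaf area (m^2)
leaf_dry_g   = 0.060    # leaf dry weight (g)
root_dry_g   = 0.040    # root dry weight (g)
shoot_dry_g  = 0.080    # above-ground dry weight (g)
carbon_g     = 0.036    # total plant carbon (g)
nitrogen_g   = 0.0024   # total plant nitrogen (g)
dw_gain_g    = 0.050    # dry weight gained over the interval (g)
n_uptake_g   = 0.0002   # nitrogen taken up over the interval (g)

sla = leaf_area_m2 / leaf_dry_g      # specific leaf area (m^2/g)
lma = 1.0 / sla                      # leaf mass per area (g/m^2)
rs  = root_dry_g / shoot_dry_g       # root:shoot biomass ratio
cn  = carbon_g / nitrogen_g          # carbon to nitrogen ratio
nue = dw_gain_g / n_uptake_g         # dry weight gain per unit N (g/g)

print(f"SLA = {sla:.3f} m^2/g, LMA = {lma:.0f} g/m^2")
print(f"R:S = {rs:.2f}, C:N = {cn:.0f}, NUE = {nue:.0f} g/g")
```

With these invented inputs the plant lands near the slow-growing end of the spectrum (LMA = 50 g/m^2), comparable to the LMA values reported later for B. erectus.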
Many studies exist that test the variability of these traits in response to changes in one parameter, such as irradiance [4, 33], nutrient availability [1, 10, 54], water availability [35,36,37], and temperature/climate [41, 62]. Under natural conditions, plants are often exposed to complex stresses from several of these resources, and the impact of combined effects has been examined under simultaneously varying nutrient and light availabilities [4, 22, 49] as well as nutrient and drought stress [51]. The response of plants to combinations of two or more stressors is unique and cannot be directly extrapolated from the response of plants to each stressor applied individually [53]. The simultaneous occurrence of different stressors results in a high degree of complexity in plant responses, because the responses are largely controlled by different, and sometimes opposing, signalling pathways that may interact both positively and negatively [53]. The question remains open whether the categorisation implemented by the plant economics spectrum approach remains valid when individuals of one species are exposed to different and combined effects of stress [11]. For example, it has been shown that single-factor studies could be inadequate to forecast plant responses in a climate change scenario [38]. In the climate change context especially, stresses produced by a combination of temperature (both chilling and heat) and nutrient availability were identified as a white spot on the plant stress response matrix [53].

Consequently, we aim to help fill this knowledge gap by testing the generality of trait relationships and analysing how shifts in temperature and nutrient stoichiometry influence plant functional and ecophysiological traits. The traits we study are considered 'hard' traits, with a direct functional role such as carbon fixation, leaf instantaneous photosynthetic rate, nutrient uptake [21, 31] or SLA [46]. We test a fast-growing, invasive, annual C3 grass and a slow-growing perennial species. To differentiate the temperature and nutrient effects, we chose an experimental approach (aeroponic growth chambers) that allows maximum control and monitoring of conditions. We formulate three hypotheses in which trait-based plant economics strategies are evaluated against changes in (i) nutrient availability and (ii) temperature individually, and (iii) in combination.

1. Nitrogen limitation will limit growth performance independent of growth strategy, but via different routes. While slow growing species have evolved functional traits resulting in a more conservative life strategy that allows growth in low nutrient conditions, fast growing, invasive species will employ resource conservative ecophysiological traits in response to nutrient shortage.

2. Temperature affects the plant's energy balance and metabolic rate. In response, fast growing annual and slow growing perennial species will show similar changes in their ecophysiological trait coordination.

3. The individual effects of nutrient or temperature stress are additive when applied in combination. Potential benefits of a more conservative life strategy in slow-growing plants through functional traits vanish when ecophysiological trait coordination is needed as well.

Results

Because of the multivariate nature and the various interactions, the results are structured to focus on the comparison between the species.
Each parameter is presented separately, starting with the comparison of free access nitrogen (FA) for both species against temperature, followed by low access nitrogen (LA) for both species against temperature. For both species, RGR and carbon gain were highest when plants were grown at 20 °C with free access nitrogen (Fig. 1a). The LMA values were within the expected range for these species (52 ± 16 g/m2 for B. erectus vs. 10 ± 2 g/m2 for B. hordeaceus; [42, 57]).

Fig. 1 Growth economic traits: a relative growth rate (RGR), b specific leaf area (SLA), c net assimilation rate (NAR) and d carbon to nitrogen ratio (C:N)

At 20 °C, the fast-growth life strategy was validated for B. hordeaceus because RGR and NPmax were almost twice as high as for B. erectus (Figs. 1, 2), the C:N ratio was very low, and LMA was significantly lower than for B. erectus (t-test: t = 5.6, df = 8, p < 0.005). The more conservative life traits were shown by B. erectus, supporting a slow-growth strategy in this species.

Fig. 2 Carbon uptake and nitrogen use efficiency: a maximum carbon uptake rates (NPmax), b nitrogen use efficiency (NUE). Arrows indicate changes with changes in nutrient availability

Both species showed the expected suite of characters for their particular growth strategy when grown at 20 °C, FA (Table 1). B. hordeaceus had higher RGR, SLA and NPmax, and lower NUE, C:N and R:S ratios, than B. erectus, the slow growing species.

Table 1 Growth parameters

Relative growth rate

For both species, RGR was highest when plants were grown at 20 °C under FA conditions (Fig. 1a), but there were species-specific responses at the other temperatures. The slow-growing B. erectus maintained RGR at a high level at 7 °C and around 20% lower at 30 °C, whilst the fast-growing B. hordeaceus showed a strong reduction (around 45%) at both 7 °C and 30 °C, so that it had a very clear maximum at 20 °C (Table 1). Under LA conditions, the slow-growing B. erectus had similar RGR at all three treatment temperatures, at around 65% of FA at 20 °C. In contrast, the fast-growing B. hordeaceus again showed a maximum at 20 °C, although this was about 70% of the rate at FA. RGR at 7 °C and 30 °C was almost identical to the rates for the slow-growing B. erectus at those temperatures.

Specific leaf area

For both species SLA was maximal under FA conditions at 20 °C, and it was significantly higher for the fast-growing B. hordeaceus (p = 0.016; df = 8; t = 2.58; Fig. 1b). B. hordeaceus had a similar SLA at 7 °C and 30 °C, about 55% of the maximum. The slow-growing B. erectus showed a similar pattern, but SLA was much more reduced at 7 °C than at 30 °C, 36% versus 87% of maximum (Table 1). Under LA conditions the response patterns varied between the species. The slow-growing B. erectus showed a reverse pattern to FA conditions, with the lowest SLA at 20 °C; SLA was very much higher at 7 °C, 350% higher than in the FA treatment, and at a similar level at 30 °C as under FA conditions. In complete contrast, the fast-growing B. hordeaceus had an almost stable SLA with a small and steady decline (Table 1).

Net assimilation rate

In the FA treatment, the slow-growing B. erectus had almost identical net assimilation rates (NAR) at 20 °C and 30 °C; at 7 °C NAR almost doubled (Fig. 1c). Rates for the fast-growing B. hordeaceus were similar at all three temperatures and, in general, about 30% higher than for B. erectus. Under LA conditions, there was little difference between the two species in pattern and response to temperature (Fig. 1c).
Both species had their highest NAR at 20 °C and lower, similar values at 7 °C and 30 °C.

Root:shoot ratio

B. erectus and B. hordeaceus showed a similar response pattern in the FA treatment: the R:S ratio declined with increasing temperature from 7 °C to 20 °C (Table 1; F1,73 = 3.13; p = 0.05), remaining constant at 30 °C. Under LA conditions, the R:S ratio was significantly increased in both species compared to FA conditions (Table 1; F1,72 = 11.59; p = 0.002). In response to the combination of warming (30 °C) and low nitrogen availability the R:S ratio was higher for both species, but more so, almost doubled, in the slow-growing B. erectus (Table 1).

C:N ratio

Under FA conditions, both species had their lowest ratios at 20 °C, and the ratio increased at higher and lower temperatures (Table 2), with this effect being greater in the slow-growing B. erectus (Fig. 1d). The pattern of response changed markedly at low nitrogen supply, with both species showing a significant (1.7-fold) increase in C:N ratio. In response to temperature, the slow-growing B. erectus showed a more mixed response, with a marked increase (60%) at 30 °C and a decline (10%) at 7 °C. The fast-growing B. hordeaceus had higher ratios overall, with a steady decline as temperature rose.

Table 2 Physiological response

Nitrogen use efficiency

Under FA conditions, both B. erectus and B. hordeaceus showed a similar pattern of response, with NUE minimal at 20 °C (p = 0.003, Fig. 2b). NUE was higher, but similar, at 7 °C and 30 °C, with the fast-growing B. hordeaceus always having lower values, by 12% at 7 °C and 22% at 30 °C (p = 0.007 and p = 0.000, respectively). Responses of NUE to low nitrogen availability were species-specific (Table 1). In the slow-growing B. erectus, NUE increased slightly from 7 °C to 20 °C, and then markedly at 30 °C, resulting in a 59% higher NUE when compared to FA conditions. In contrast, the fast-growing B. hordeaceus had similar NUE at 7 °C and 20 °C, which then declined to only 208.4 g/g at 30 °C.

Leaf carbon uptake and release

For both species, uptake rates were highest when grown at 20 °C, with those for the fast-growing B. hordeaceus being twice as high as those for the slow-growing B. erectus (Fig. 2a). Both chilling and warming reduced these rates, with NPmax close to zero at 7 °C for both species but a smaller decline at 30 °C. Under LA conditions the response pattern for both species remained similar to that under FA conditions (Fig. 2a), but rates at 20 °C and 30 °C were lower than under FA. Leaf respiration rates in the dark were highest at the 30 °C growth temperature, with those of the fast-growing B. hordeaceus being twice those of the slow-growing B. erectus (Table 2). Under LA conditions, the dark respiration rates had a maximum at 20 °C.

Optimal temperature for photosynthesis (Topt)

Under FA conditions both species showed very similar responses in the temperature for optimal net photosynthesis, Topt (Fig. 3). Topt was almost identical to the growth temperature at 20 °C and 30 °C, indicating that acclimation to these warmer growth temperatures was realised. However, acclimation was less obvious at 7 °C, with Topt more than twice as high as the growth temperature. Under low nitrogen delivery, this pattern remained similar for both species, with the exception that Topt was closer to the actual growth temperature at 7 °C.

Fig. 3 Optimal growth temperature.
Displayed is the ratio between the optimal temperature for photosynthesis (Topt) and the growth temperature during the experiment; the values are related to the growth temperature to visualize deviations and acclimation. A ratio of one indicates that the optimal temperature matches the growth temperature, values above one indicate that the optimal temperature was higher than the growth temperature, and vice versa.

Daily carbon budget

Under FA and at 20 °C, the fast-growing B. hordeaceus lost only one-fourth of its daily available carbon via root and leaf dark respiration, while this was more than a half for the slow-growing B. erectus (Fig. 4). At 7 °C, this pattern remained almost identical in both species, but the total amount of available carbon (the size of the circle) was much reduced in the fast-growing B. hordeaceus while remaining almost identical for the slow-growing B. erectus. At 30 °C total available carbon decreased and, because total respiration losses increased in both species, the available carbon fraction was cut to one-fourth (Fig. 4).

Fig. 4 Daily carbon budget. Dry weight-based rates for net photosynthesis, dark respiration and root respiration were converted to a per-day basis, and the daily rates were weighted according to the number of hours that each gas exchange parameter took place (NP: 16 h; DR: 8 h; RR: 24 h). The total size of the circle reflects the RGR, normalized to 100% for the fast-growing B. hordeaceus at control conditions

Under LA conditions total available carbon for the slow-growing B. erectus was almost constant at all temperatures, with totals lower than at FA at 7 °C and 20 °C but almost identical at 30 °C (Fig. 4). In contrast, total available carbon for the fast-growing B. hordeaceus was highest at 20 °C, almost twice that at the other temperatures, with the total lower than under FA at 7 °C and 20 °C but, as for the slow-growing B. erectus, almost identical at 30 °C. The proportion of total available carbon allocated to root respiration was higher than under FA, which lowered the daily available carbon fraction in both species, but much more so in the fast-growing B. hordeaceus. At 30 °C, carbon losses due to increased root and dark respiration were maximal in both species and resulted in the smallest available carbon fractions (Fig. 4).
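The hour-weighting used for the daily carbon budgets in Fig. 4 is simple bookkeeping, and the Python sketch below reproduces it with invented rates. The study reports gas exchange per unit dry weight; any consistent unit works for the arithmetic.

```python
# Sketch of the daily carbon budget weighting described for Fig. 4:
# net photosynthesis (NP) acts over the 16 h light period, leaf dark
# respiration (DR) over the 8 h dark period, and root respiration (RR)
# over the full 24 h. All rates below are hypothetical.

np_rate = 10.0   # net photosynthesis, carbon units per hour
dr_rate = 2.0    # leaf dark respiration, carbon units per hour
rr_rate = 1.5    # root respiration, carbon units per hour

daily_uptake = np_rate * 16                 # fixed during the light period
daily_losses = dr_rate * 8 + rr_rate * 24   # respiratory losses

available = daily_uptake - daily_losses
print(f"daily uptake: {daily_uptake:.0f}")
print(f"daily losses: {daily_losses:.0f}")
print(f"available C : {available:.0f} ({available / daily_uptake:.0%} of uptake)")
```

With these invented numbers about a third of the daily uptake is respired, illustrating how higher DR and RR shrink the available carbon fraction even when NP is unchanged.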
Discussion

Our results indicate clear differences in trait coordination between a slow growing perennial grass and a fast growing, invasive competitor. Any fast acclimation to growth conditions via ecophysiological trait adjustments is costly, especially when the low metabolic rates associated with low growth temperatures occur in combination with nutrient depletion. By having a set of well-conserved adaptive functional traits, slow-growing perennials might have benefits when handling such a combination of stressors.

A slow grower's response to changes in growth temperature, nutrient limitation and their combination

Plant traits are strongly correlated with temperature [39], and at 7 °C B. erectus had almost identical RGR as at 20 °C (control conditions) due to a well-balanced adjustment of its functional traits. Because NAR and SLA are co-dependent in the calculation of RGR [28], for RGR to remain at levels similar to control conditions, the significant decrease in SLA involved a simultaneous increase in NAR (to more than double that of control conditions; Fig. 1c). This indicates that, at low temperature, the net dry weight gain was converted to higher investments in leaf stability, making leaves thicker and more durable, and agrees with the common response of plants grown at low temperatures [32, 61]. Surprisingly, growing in the cold also resulted in higher investments in root tissue, although root growth is usually known to be limited at temperatures around 6 °C (Alvarez-Uria and Körner [2]). These results could be explained via the following response cascade, starting with carbon uptake in the leaf: at 7 °C, due to a lack of thermal acclimation of net photosynthesis (Fig. 3), carbon uptake rates were much reduced (Fig. 2a, Table 2). In response to operating at such low, sub-optimal uptake rates, NUE was increased to enhance the efficiency of carbon incorporation into biomass (Fig. 2b). For the plant's growth economics this is a costly process, which was displayed in a higher C:N ratio. The higher C:N ratio in the leaves can then be the trigger for the plant to allocate biomass investments towards the root in order to increase the surface tissue for nutrient uptake, according to the "optimal resource partitioning strategy". According to this model, plants respond to environmental factors that limit the acquisition of below-ground resources relative to above-ground resources by shifting their partitioning to tissues associated with gaining the relatively limiting resource [5].

In contrast to the cold temperature response, the RGR of B. erectus was reduced when growing at the higher temperature (30 °C) (Fig. 1a). This occurs because NAR remained at the same level as at the 20 °C growth temperature but SLA decreased, which is a typical response of plants to warming [23]. In comparison to growth at 20 °C, carbon uptake rates were reduced (Fig. 2a, Table 2) while, simultaneously, respiration rates increased (Table 2). Temperature is known to have a crucial influence on respiratory CO2 efflux [3] and therefore on the daily carbon budget of a plant [32]. As a result, the carbon losses were so high that the available carbon fraction for growth was cut to one-fourth (Fig. 4), making it impossible for NAR to increase.

Limited nitrogen delivery changed the response patterns and the previously described reaction norm of the slow growing B. erectus in response to temperature. RGR was now generally reduced, to around 65% of FA conditions, and was even lower at both treatment temperatures (7 °C and 30 °C). The cold + low nitrogen treatment could be identified as the most limiting for B. erectus, despite the species having adjusted so well under FA conditions. Neither the up-regulation of NAR nor the down-regulation of SLA demonstrated under FA conditions was realised, resulting in lower RGR, most probably linked to a limited potential to up-regulate NUE (Fig. 2b) under LA conditions. In order to compensate for the lower nutrient availability, the close functional coordination between root and shoot traits [59] resulted in higher investments in root biomass (accompanied by higher root respiration losses), indicating that leaf-acquired resources are linked to the root economics and vice versa. In direct contrast, when B. erectus was exposed to warming (30 °C) in combination with LA conditions, growth was less affected and only slightly lower than under FA conditions. However, due to increased root and leaf dark respiration rates, the amount of available carbon resources was at its lowest (Fig. 4), and, to improve the efficiency of carbon incorporation into biomass, NUE was highest.
As found at 7 °C under FA conditions, this process appears to be costly and results in high C:N ratios and also the highest R:S ratio, indicating a coordinated variation of root and shoot nitrogen content as well as SLA [11, 12]. For B. erectus, this could reflect the species' habitat specialisation, as its favoured habitat types are warm, nutrient-poor environments such as calcareous grasslands of the Mesobromion alliance [17], of which it is a characteristic, name-giving species. In general, species with a slow-growing trait syndrome are more successful under low nutrient conditions [11, 12, 27, 34].

What is different in the fast-grower?

In contrast to B. erectus, B. hordeaceus had a fast-grower's life strategy: it grew twice as fast, had double the NPmax and had significantly higher SLA at 20 °C under FA conditions. This also displays the invasive character of B. hordeaceus, because invasive alien species are known to have higher values for performance-related traits than non-invasive species [56]. The following section highlights the differences in the reaction norm to changes in growth temperature, nutrient supply and their combinations for B. hordeaceus in comparison to the slow-growing B. erectus.

The main difference was that B. hordeaceus had a sharp maximum in RGR at 20 °C and could not shift to grow well at higher or lower temperatures. Although NAR, in general, was about 30% higher in B. hordeaceus than in B. erectus, the inability to adjust NAR in response to temperature resulted in a drastic decrease in RGR. When exposed to warm growing temperatures, RGR was then similar to that of the slow growing B. erectus, and when exposed to cold temperatures RGR was even below that of B. erectus. 'Fast' traits are costly in the face of any kind of resource shortfall [48], and fast-growing species are less tolerant to any changes in resource availability (whether water, nutrients or light). It has been shown previously that the relatively low rates of root respiration in fast-growing grasses, in comparison to slow growing ones, are a result of the lower costs for nutrient uptake [52]. Thus, it is not surprising to find that LA conditions had more severe effects on the total carbon budget of B. hordeaceus than on that of B. erectus. Especially when exposed to a combination of cold growing temperatures and restricted nitrogen supply, the inability to adjust SLA and NUE (the response shown by B. erectus, which led to only a 10% reduction of RGR) resulted in the lowest RGR, less than 50% of the LA 20 °C level and even below that of B. erectus. Such strongly limited growth performance was also shown when nitrogen limitation occurred in combination with a warmer growth temperature, conditions that did not greatly affect the RGR of B. erectus (Fig. 1a). Here, for B. hordeaceus, RGR was almost halved, together with a reduction of NUE and an increase in R:S of less significance than in B. erectus. Effects on the daily available carbon fraction were additive: uptake rates were reduced by temperature, and carbon losses were higher because of increased root respiration under LA conditions (Fig. 4). From the plant carbon economics perspective, this meant that any resource allocation in response to environmental changes became problematic, simply because the available resources were low.
We can support our initial hypothesis that nitrogen limitation will limit growth performance independent of growth strategy, because the 50% reduction of N availability at optimal temperature decreased RGR to the same extent in both species. However, both species seem to use the same mechanism to achieve this (increasing NUE). A good nutrient acquisition capacity could be the result of low biomass density, at least in the fast-growing grasses [50]. As an additional explanation, it needs to be considered that although the nitrogen availability was reduced to 50% in the growth units, the nutrient availability was still higher than it would be within a nitrogen depletion scenario under natural conditions (such as found in a limestone grassland; [26]). However, for our approach, the decision to reduce N by 50% represented an experience-based trade-off between experimental handling (the overall time of the plants in the growth units) and effect size, which proved to be significant in most treatments.

Temperature affects leaf energy balance [20, 29], metabolic rate [19] and plant growth rate [32], and many ecological traits are known to be correlated with temperature ([45]; Went 1953), including leaf nutrient content, leaf mass per unit area and leaf lifespan [47, 62]. On a global basis, mean annual temperature has been shown to correlate strongly with plant traits [39], but slow- and fast-growing species did not appear to differ in their plasticity of RGR in response to growth temperature [32]. Therefore, our second hypothesis addressed the response of plant ecophysiological traits to changes in temperature, and we suggested this to be independent of growth strategy (fast vs. slow). Our results support this hypothesis by showing that significant increases in carbon losses via both shoot and root respiration, a reduction of the root biomass, and inflexible NAR values were similar responses to changes in temperature in both species. Most prominently, in response to chilling, the carbon budget pattern was left unaltered by a uniform reduction of the absolute carbon uptake and release rates (Fig. 4), a response common to both species and growth strategies. Nevertheless, the effect size of these responses was consistently higher in the fast-growing B. hordeaceus than in the slow-growing B. erectus, and this led to more drastic reductions of RGR in the fast-growing species. Accordingly, the fast-growing B. hordeaceus showed a marked optimum at 20 °C growth temperature whilst B. erectus did not.

Finally, we posed the question of whether the potential benefits of a more conservative life strategy in slow-growing plants through functional traits vanish when ecophysiological trait coordination is needed as well, in a combined stress scenario. Our results imply that, for the fast-growing invader B. hordeaceus, a reduction of nitrogen availability in combination with a temperature increase may indeed lead to a disadvantage in comparison to the slow-growing perennial species B. erectus, and this may explain why nutrient-rich habitats often experience more invasion than resource-poor habitats [6, 14]. However, the absolute values of traits (such as RGR, SLA or NPmax) were similar between fast- and slow-growing species when the plants were grown at suboptimal temperature and at low nitrogen availability. This implies that any differentiation between the two growth strategies becomes difficult in such a scenario, and a growth strategy convergence can occur as a result of combined stress effects.
Species selection
This research concentrated on graminoids because not only were they suitable for the experimental conditions (growth units), but they are also the dominant vegetation in many habitats, including grassland, salt marsh, reed swamp and steppe, and include some of the most versatile plant functional types. We selected two C3 grass species, Bromus erectus and Bromus hordeaceus.
B. hordeaceus is a grass species native to Europe. It has several features shared by successful invasive species, including a short life cycle and a predominantly autogamous breeding system (CABI [7]). It is an annual species of grass (Poaceae) and flowers from May until July [8]. B. hordeaceus has been introduced into parts of North and South America and Australia. It is a weed of crop fields, grasslands, orchards and turf, where it competes with native vegetation and monopolizes resources. The species is described as fast growing and as having a very low LMA (LMA = 20 g/m2), even in comparison to other fast-growing species [57]. B. erectus is a grass species also native to Europe. It is a medium-tall grass which forms loose tussocks and produces few tillers. It is a perennial grass that flowers in May/June [8]. This species is mainly found on warm, well-drained, calcareous soils in upland areas. The species is described as slow growing, with a resource-conservative strategy (LMA = 60.54 g/m2) [42].
Seedlings were germinated on a mixture of sand and vermiculite (1:1). Immediately after the appearance of the second leaf, seedlings were removed from the sand, washed carefully and placed into custom-made aeroponic growth units (Biotronic AB, Sweden) that allow accurate control over the nutrient supply to the plants [25]. One growth unit contained up to 84 seedlings. Four growth units (2 for each species) were placed in a growth cabinet with a selected constant temperature, a 16-h light period (7 am–11 pm) with a light intensity of 200 µmol photons/m2 s, and 70% relative humidity. The temperature of the nutrient solution was kept the same as the air temperature, with a maximum deviation of ± 1 °C. After transplantation, plants were first acclimated to the growing conditions in the aeroponic units; acclimation was considered complete once the pH and the conductivity of the medium solution became stable and the nutrient uptake had recovered, as assessed by regular nutrient titrations (pH 5.5, conductivity 99–101 µS in 6 dm3 solution). The experiment consisted of 6 treatments for each species, in a 3 × 2 factorial design: three temperature conditions (7 °C (chilling), 20 °C (control) and 30 °C (warming)) and two nutrient conditions (free access and low access (50% reduced) nitrogen). Plants incubated at 20 °C were considered the control group because this temperature has been described as optimal for grasses from steppe or meadow vegetation in Europe [30]. Under free access nitrogen (FA) treatments, the seedlings were supplied with mineral nutrients in a proportion known to be optimal for growth [24], with a nitrogen concentration that was low but optimal (30 ± 2 mg in 6 dm3 solution) [25]. In the nitrogen-limited (LA) treatments, the proportion of N in the stock solutions was reduced by 80%, and the nitrogen supply was adjusted manually on a daily basis using a 1 molar ammonium-nitrate stock solution, so that the N supply (as mg N/day) was reduced to 50% of that required by the seedlings for optimal growth. The supply of all the other nutrients remained unchanged.
Harvests for the growth analysis were performed at regular intervals, each with five replicates of each species. The plants were divided into leaves and roots. Fresh and dry mass (after 4 days in a drying oven at 80 °C) were recorded. Leaf area was measured using a LI-COR Li-3000C leaf area meter (LI-COR Inc., Lincoln, NE, USA). The dry material was ground to a fine powder using a mortar and pestle and analyzed for total C and N concentration by mass spectrometry (isotope ratio mass spectrometer DeltaV, Thermo Fisher Scientific, Bremen, Germany; elemental analyzer Flash EA 2000, Thermo Fisher Scientific, Bremen, Germany), as described by Werner et al. [58]. The C:N ratio as well as the NUE (increase in dry weight per unit of nitrogen) were calculated. RGR was calculated according to Lambers and Poorter [28]:
$$\text{RGR} = \text{LAR} \times \text{NAR}$$
LAR (leaf area ratio) is the product of SLA and the leaf weight ratio (LWR, the fraction of biomass allocated to the leaves). Net assimilation rate (NAR) is defined as the rate of increase in plant weight per unit leaf area. Additional growth-related parameters such as LMA and the R:S ratio were determined.
Resource uptake, partitioning and allocation
CO2 exchange was measured in order to demonstrate changes in the carbon exchange of the plants. To this end, as soon as the third leaf was fully expanded (indicated by stable leaf dry weight), three replicates were harvested and experiments performed. This approach was chosen to ensure sampling of plants at similar physiological stages of development and that the sampled material (the third leaf) had developed completely under treatment conditions. The whole plant was harvested carefully, with the roots immersed in a small plastic container containing the original nutrient solution. The third leaf was carefully positioned in the gas exchange cuvette (3010-GWK1, Walz, Effeltrich, Germany). The cuvette was attached to an infrared gas analyser (LI-COR 6400, LI-COR Inc., Lincoln, NE, USA) to measure the CO2 fluxes. The CO2 concentration in the cuvette was adjusted to saturating conditions (1000 ppm), and the light was set to 200 µmol photons/m2 s (mimicking the conditions in the growth chamber). The temperature was set to mimic the growth conditions for an initial equilibration phase. Once the signal was stable, net photosynthesis (NP, light in the cuvette switched on) and dark respiration (light switched off and cuvette completely shaded) were measured at different temperatures (5, 7, 10, 15, 20, 25, 30, 35 and 40 °C, in a randomly chosen sequence). From these measurements, we obtained the maximum net photosynthesis (NPmax), the optimal temperature for photosynthesis (Topt, the temperature range over which net photosynthesis was above 90% of its maximum), and net photosynthesis and leaf respiration in the dark (DR) at the ambient growth temperature. Thermal acclimation was expressed as the ratio between Topt and the growth temperature during the experiment. A ratio of one indicates that the optimal temperature matches the growth temperature; values above one indicate that the optimal temperature was higher than the growth temperature, and vice versa. Carbon uptake rates were expressed on a dry weight basis, because the higher assimilation rates of fast-growing species on an area basis are 'diluted' by their simultaneously higher SLA [15, 40, 43], and the aim of this study was to compare the net carbon gain rather than the net photosynthetic rate.
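To make these growth-analysis definitions concrete, the following minimal Python sketch computes RGR, LAR, NAR and NUE from a pair of hypothetical harvests. All variable names and numbers are illustrative assumptions, not data from this study.

```python
import numpy as np

# Hypothetical harvest data for one treatment group at two harvest times.
leaf_mass = np.array([0.10, 0.25])    # leaf dry mass (g) at t1, t2
root_mass = np.array([0.05, 0.10])    # root dry mass (g) at t1, t2
leaf_area = np.array([0.002, 0.005])  # total leaf area (m^2) at t1, t2
n_content = np.array([0.004, 0.009])  # whole-plant nitrogen (g) at t1, t2
t = np.array([10.0, 20.0])            # harvest times (days)

total_mass = leaf_mass + root_mass

# Classical growth analysis over the harvest interval:
rgr = (np.log(total_mass[1]) - np.log(total_mass[0])) / (t[1] - t[0])  # g g^-1 d^-1
sla = leaf_area / leaf_mass           # specific leaf area, m^2 g^-1
lwr = leaf_mass / total_mass          # leaf weight ratio, g g^-1
lar = sla * lwr                       # leaf area ratio, m^2 g^-1
nar = rgr / lar.mean()                # net assimilation rate from RGR = LAR x NAR

# NUE as defined in the text: increase in dry weight per unit of nitrogen.
nue = (total_mass[1] - total_mass[0]) / (n_content[1] - n_content[0])

print(f"RGR = {rgr:.3f} per day, NAR = {nar:.2f} g/m^2/day, NUE = {nue:.1f} g DW/g N")
```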
It has therefore been stated that comparisons of photosynthesis and growth can only be made per unit plant weight and per unit of time [44]. Once the leaf gas exchange measurement was finished, the roots were cut off and root respiration (RR) was determined as the decrease of O2 concentration in a liquid-phase oxygen electrode system (CB1D, Hansatech Instruments, Norfolk, United Kingdom). Measurements were made at 7 °C, 20 °C, and 30 °C. An estimate of each individual plant's daily carbon budget was obtained by converting net photosynthesis, respiration (in the dark) and root respiration rates to a per-day basis according to the number of hours per day that each gas exchange parameter took place (NP 16 h, DR 8 h and RR 24 h). In order to reflect the whole-plant budget, these rates were weighted according to the plant's R:S ratio.
All statistical analyses were performed using SPSS software (SPSS Statistics 24, IBM Analytics, New York). Prior to analysis, the within-group normal distribution was checked using Shapiro-Wilk tests. To evaluate the effects of temperature, nitrogen availability and their interaction on the dependent variables SLA, NPmax, NUE and C:N ratio, we applied a univariate linear model. Species (2 discrete levels: B. erectus vs. B. hordeaceus), temperature (3 discrete levels: 7 °C, 20 °C and 30 °C) and nitrogen treatment (2 discrete levels: free access vs. low access) were treated as discrete explanatory variables. We evaluated single effects as well as all two-way and three-way interactions. T-tests were performed to compare the species for selected groups.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. All authors read and approved the final manuscript.
Abbreviations
SLA: Specific leaf area
RGR: Relative growth rate
LMA: Leaf mass per area
R:S: Root to shoot ratio
C:N: Carbon to nitrogen ratio
NUE: Nitrogen use efficiency
NP: Net photosynthesis
NPmax: Maximum net photosynthesis
FA: Free access nitrogen treatment
LA: Low access nitrogen treatment
NAR: Net assimilation rate
Topt: Optimal temperature for photosynthesis
LAR: Leaf area ratio
LWR: Leaf weight ratio
RR: Root respiration
DR: Dark respiration
References
Aerts R, Chapin FS. The mineral nutrition of wild plants revisited: a re-evaluation of processes and patterns. Adv Ecol Res. 2000;30:1–67. https://doi.org/10.1016/S0065-2504(08)60016-1.
Alvarez-Uria P, Körner C. Low temperature limits of root growth in deciduous and evergreen temperate tree species. Funct Ecol. 2007;21:211–8. https://doi.org/10.1111/j.1365-2435.2007.01231.x.
Atkin OK, Bruhn D, Hurry VM, Tjoelker MG. Evans Review No. 2: The hot and the cold: unravelling the variable response of plant respiration to temperature. Funct Plant Biol. 2005;32:87–105. https://doi.org/10.1071/FP03176.
Baltzer JL, Thomas SC. Physiological and morphological correlates of whole-plant light compensation point in temperate deciduous tree seedlings. Oecologia. 2007;153:209–23.
Bloom AJ, Chapin FS III, Mooney HA. Resource limitation in plants - an economic analogy. Annu Rev Ecol Syst. 1985;16:363–92.
Burke MJW, Grime JP. An experimental study of plant community invasibility. Ecology. 1996;77:776–90. https://doi.org/10.2307/2265501.
CABI. Bromus hordeaceus. In: Invasive Species Compendium. Wallingford, UK: CAB International; 2018. https://www.cabi.org/isc.
Clayton WD, Vorontsova MS, Harman KT, Williamson H. GrassBase - The Online World Grass Flora. 2006. http://www.kew.org/data/grasses-db.html.
Cornelissen JHC, Lavorel S, Garnier E, Diaz S, Buchmann N, Gurvich DE, Pausas JG. A handbook of protocols for standardised and easy measurement of plant functional traits worldwide. Aust J Bot. 2003;51:335–80. https://doi.org/10.1071/BT02124.
Craine J, Tilman DG, Wedin DA, Reich P, Tjoelker MG, Knops JMH. Functional traits, productivity and effects on nitrogen cycling of 33 grassland species. Funct Ecol. 2002;16:563–74. https://doi.org/10.1046/j.1365-2435.2002.00660.x.
Craine JM, Lee WG. Covariation in leaf and root traits for native and non-native grasses along an altitudinal gradient in New Zealand. Oecologia. 2003;134:471–8. https://doi.org/10.1007/s00442-002-1155-6.
Craine JM, Lee WG, Bond WJ, Williams RJ, Johnson LC. Environmental constraints on a global relationship among leaf and root traits of grasses. Ecology. 2005;86:12–9. https://doi.org/10.1890/04-1075.
Cronk QC, Fuller JL. Plant invaders: the threat to natural ecosystems. People and Plant Conservation Manuals. 2nd ed. New York: Earthscan, Routledge; 2013.
Daehler CC. Performance comparisons of co-occurring native and alien invasive plants: implications for conservation and restoration. Annu Rev Ecol Evol Syst. 2003;34:183–211. https://doi.org/10.1146/annurev.ecolsys.34.011802.132403.
Dijkstra P, Lambers H. A physiological analysis of genetic variation in relative growth rate within Plantago major L. Funct Ecol. 1989;3:577–87. https://doi.org/10.2307/2389572.
Enrique G, Olmo M, Poorter H, Ubera JL, Villar R. Leaf mass per area (LMA) and its relationship with leaf structure and anatomy in 34 Mediterranean woody species along a water availability gradient. PLoS ONE. 2016;11(2):e0148788. https://doi.org/10.1371/journal.pone.0148788.
Ellenberg H. Vegetation Mitteleuropas mit den Alpen. 5th ed. Stuttgart: Ulmer; 1996.
Funk JL, Vitousek PM. Resource-use efficiency and plant invasion in low-resource systems. Nature. 2007;446:1079–81. https://doi.org/10.1038/nature05719.
Gillooly JF, Brown JH, West GB, Savage VM, Charnov EL. Effects of size and temperature on metabolic rate. Science. 2001;293:2248–51. https://doi.org/10.1126/science.1061967.
Harrison SP, Prentice IC, Barboni D, Kohfeld KE, Ni J, Sutra JP. Ecophysiological and bioclimatic foundations for a global plant functional classification. J Veg Sci. 2010;21:300–17. https://doi.org/10.1111/j.1654-1103.2009.01144.x.
Hodgson JG, Wilson PJ, Hunt R, Grime JP, Thompson K. Allocating C-S-R plant functional types: a soft approach to a hard problem. Oikos. 1999;85:282–94. https://doi.org/10.2307/3546494.
Holste EK, Kobe RK, Vriesendorp CF. Seedling growth responses to soil resources in the understory of a wet tropical forest. Ecology. 2011;92:1828–38. https://doi.org/10.1890/10-1697.1.
Hudson JMG, Henry GHR, Cornwell WK. Taller and larger: shifts in Arctic tundra leaf traits after 16 years of experimental warming. Glob Change Biol. 2011;17:1013–21. https://doi.org/10.1111/j.1365-2486.2010.02294.x.
Ingestad T. A definition of optimum nutrient requirements II. Physiol Plantarum. 1971;24:118–25. https://doi.org/10.1111/j.1399-3054.1971.tb06728.x.
Ingestad T, Lund AB. Nitrogen stress in birch seedlings I. Growth technique and growth. Physiol Plantarum. 1979;45:137–48. https://doi.org/10.1111/j.1399-3054.1979.tb01678.x.
Köhler B, Ryser P, Güsewell S, Gigon A. Nutrient availability and limitation in traditionally mown and in abandoned limestone grasslands: a bioassay experiment. Plant Soil. 2001;230:323–32. https://doi.org/10.1023/A:1010335825818.
Körner C. Plant-environment interactions. In: Strasburger's Plant Sciences. Heidelberg: Springer; 2013. p. 1065–1166.
Lambers H, Poorter H. Inherent variation in growth rate between higher plants: a search for physiological causes and ecological consequences. Adv Ecol Res. 2004;34:283–362. https://doi.org/10.1016/S0065-2504(08)60148-8.
Lambers H. Leaf energy budgets: effects of radiation and temperature. In: Lambers H, Chapin FS, Pons TL, editors. Plant Physiological Ecology. New York: Springer; 1998.
Larcher W. Physiological Plant Ecology: Ecophysiology and Stress Physiology of Functional Groups. 4th ed. Berlin: Springer; 2004.
Lavorel S, Garnier E. Predicting changes in community composition and ecosystem functioning from plant traits: revisiting the Holy Grail. Funct Ecol. 2002;16:545–56. https://doi.org/10.1046/j.1365-2435.2002.00664.x.
Loveys BR, Scheurwater I, Pons TL, Fitter AH, Atkin OK. Growth temperature influences the underlying components of relative growth rate: an investigation using inherently fast- and slow-growing plant species. Plant Cell Environ. 2002;25:975–88. https://doi.org/10.1046/j.1365-3040.2002.00879.x.
Lusk CH, Jorgensen MA. The whole-plant compensation point as a measure of juvenile tree light requirements. Funct Ecol. 2013;27:1286–94. https://doi.org/10.1111/1365-2435.12129.
Mason NWH, Richardson SJ, Peltzer DA, de Bello F, Wardle DA, Allen RB. Changes in coexistence mechanisms along a long-term soil chronosequence revealed by functional trait diversity. J Ecol. 2012;100:678–89. https://doi.org/10.1111/j.1365-2745.2012.01965.x.
Meinzer FC, Campanello PI, Domec JC, Gatti MG, Goldstein G, Villalobos-Vega R, et al. Constraints on physiological function associated with branch architecture and wood density in tropical forest trees. Tree Physiol. 2008;28:1609–17. https://doi.org/10.1093/treephys/28.11.1609.
Meinzer FC, Woodruff DR, Domec J-C, Goldstein G, Campanello PI, Gatti MG, et al. Coordination of leaf and stem water transport properties in tropical forest trees. Oecologia. 2008. https://doi.org/10.1007/s00442-008-0974-5.
Meinzer FC, McCulloh KA, Lachenbruch B, Woodruff DR, Johnson DM. The blind men and the elephant: the impact of context and scale in evaluating conflicts between plant hydraulic safety and efficiency. Oecologia. 2010;164:287–96.
Mueller KE, LeCain DR, McCormack ML, Pendall E, Carlson M, Blumenthal DM. Root responses to elevated CO2, warming and irrigation in a semiarid grassland: integrating biomass, length and lifespan in a 5-year field experiment. J Ecol. 2018;106:2176–89. https://doi.org/10.1111/1365-2745.12993.
Moles AT, Perkins SE, Laffan SW, Flores-Moreno H, Awasthy M, Tindall ML, Anand M. Which is a better predictor of plant traits: temperature or precipitation? J Veg Sci. 2014;25:1167–80. https://doi.org/10.1007/s00442-010-1734-x.
Mooney HA, Ferrar PJ, Slatyer RO. Photosynthetic capacity and carbon allocation patterns in diverse growth forms of Eucalyptus. Oecologia. 1978;36:103–11. https://doi.org/10.1007/BF00344575.
Ordoñez JC, Bodegom PM, Witte J-PM, Wright IJ, Reich PB, Aerts R. A global study of relationships between leaf traits, climate and soil measures of nutrient fertility. Global Ecol Biogeogr. 2009;18:137–49. https://doi.org/10.1111/j.1466-8238.2008.00441.x.
Pérez-Ramos IM, Volaire F, Fattet M, Blanchard A, Roumet C. Tradeoffs between functional strategies for resource-use and drought-survival in Mediterranean rangeland species. Environ Exp Bot. 2013;87:126–36. https://doi.org/10.1016/j.envexpbot.2012.09.004.
Poorter H. Interspecific variation in relative growth rate: on ecological causes and physiological consequences. In: Lambers H, editor. Causes and consequences of variation in growth rate and productivity of higher plants. The Hague: SPB Academic Publishing; 1989. p. 45–68.
Poorter H, Remkes C, Lambers H. Carbon and nitrogen economy of 24 wild species differing in relative growth rate. Plant Physiol. 1990;94:621–7. https://doi.org/10.1104/pp.94.2.621.
Rawson H. Plant responses to temperature under conditions of elevated CO2. Aust J Bot. 1992;40:473–90. https://doi.org/10.1071/BT9920473.
Reich PB, Ellsworth DS, Walters MB. Leaf structure (specific leaf area) modulates photosynthesis-nitrogen relations: evidence from within and across species and functional groups. Funct Ecol. 1998;12:948–58. https://doi.org/10.1046/j.1365-2435.1998.00274.x.
Reich PB, Oleksyn J. Global patterns of plant leaf N and P in relation to temperature and latitude. Proc Natl Acad Sci USA. 2004;101:11001–6. https://doi.org/10.1073/pnas.0403588101.
Reich PB. The world-wide 'fast-slow' plant economics spectrum: a traits manifesto. J Ecol. 2014;102:275–301. https://doi.org/10.1111/1365-2745.12211.
Russo SE, Davies SJ, King DA, Tan S. Soil-related performance variation and distributions of tree species in a Bornean rain forest. J Ecol. 2005;93:879–89. https://doi.org/10.1111/j.1365-2745.2005.01030.x.
Ryser P, Lambers H. Root and leaf attributes accounting for the performance of fast- and slow-growing grasses at different nutrient supply. Plant Soil. 1995;170:251–65. https://doi.org/10.1007/BF00010478.
Saud S, Fahad S, Cui G, Yajun C, Anwar S. Determining nitrogen isotopes discrimination under drought stress on enzymatic activities, nitrogen isotope abundance and water contents of Kentucky bluegrass. Sci Rep. 2020;10:1–16. https://doi.org/10.1038/s41598-020-63548-w.
Scheurwater I, Cornelissen C, Dictus F, Welschen R, Lambers H. Why do fast- and slow-growing grass species differ so little in their rate of root respiration, considering the large differences in rate of growth and ion uptake? Plant Cell Environ. 1998;21:995–1005. https://doi.org/10.1046/j.1365-3040.1998.00341.x.
Suzuki N, Rivero RM, Shulaev V, Blumwald E, Mittler R. Abiotic and biotic stress combinations. New Phytol. 2014;203:32–43. https://doi.org/10.1111/nph.12797.
Tilman D, Wedin D. Plant traits and resource reduction for five grasses growing on a nitrogen gradient. Ecology. 1991;72:685–700. https://doi.org/10.2307/2937208.
Tjoelker MG, Craine JM, Wedin D, Reich PB, Tilman D. Linking leaf and root trait syndromes among 39 grassland and savannah species. New Phytol. 2005;167:493–508. https://doi.org/10.1111/j.1469-8137.2005.01428.x.
Van Kleunen M, Weber E, Fischer M. A meta-analysis of trait differences between invasive and non-invasive plant species. Ecol Lett. 2010;13:235–45. https://doi.org/10.1111/j.1461-0248.2009.01418.x.
Waddell HA, Simpson RJ, Lambers H, Henderson B, Ryan MH, Garden DL, Richardson AE. Phosphorus-utilisation efficiency and leaf-morphology traits of Rytidosperma species (wallaby grasses) that differ in their growth response to phosphorus fertilisation. Aust J Bot. 2016;64:65–76. https://doi.org/10.1071/BT15202.
Werner RA, Bruch BA, Brand WA. ConFlo III - an interface for high precision δ13C and δ15N analysis with an extended dynamic range. Rapid Commun Mass Spectrom. 1999;13:1237–41. https://doi.org/10.1002/(SICI)1097-0231(19990715)13:13%3c1237::AID-RCM633%3e3.0.CO;2-C.
Westoby M, Wright IJ. Land-plant ecology on the basis of functional traits. Trends Ecol Evol. 2006;21:261–8. https://doi.org/10.1016/j.tree.2006.02.004.
Wilson PJ, Thompson K, Hodgson JG. Specific leaf area and leaf dry matter content as alternative predictors of plant strategies. New Phytol. 1999;143:155–62. https://doi.org/10.1046/j.1469-8137.1999.00427.x.
Wolfe DW. Low temperature effects on early vegetative growth, leaf gas exchange and water potential of chilling-sensitive and chilling-tolerant crop species. Ann Bot. 1991;67:205–12. https://doi.org/10.1093/oxfordjournals.aob.a088124.
Wright IJ, Reich PB, Cornelissen JHC, Falster DS, Groom PK, Hikosaka K, et al. Modulation of leaf economic traits and trait relationships by climate. Global Ecol Biogeogr. 2005;14:411–21. https://doi.org/10.1111/j.1466-822x.2005.00172.x.
Acknowledgements
CC gratefully acknowledges the Alexander von Humboldt Foundation for financial support via the Feodor Lynen Research Fellowship. VH acknowledges grant funding support from the Swedish Research Council and the project "TC4F - Trees and Crops for the Future", funded through the Swedish government's Strategic Research Environment "Sustainable use of Natural Resources". Special thanks to Prof. Allan Green for significant input on earlier versions of the manuscript and helpful discussions. Open Access funding provided by the Swedish University of Agricultural Sciences.
Funding
Alexander von Humboldt Foundation - Feodor Lynen Research Fellowship (fellowship funding, CC. Funder role: study design, analysis, interpretation of data and writing of the manuscript).
Swedish Research Council - "TC4F - Trees and Crops for the Future" (research funding, VH. Funder role: study design and equipment support).
Author information
Claudia Colesie: Edinburgh Global Change Institute, School of GeoSciences, University of Edinburgh, Alexander Crum Brown Road, Edinburgh, UK
Zsofia Reka Stangl: Department of Forest Ecology and Management, Swedish University of Agricultural Sciences, Umeå, Sweden
Vaughan Hurry: Umeå Plant Science Centre (UPSC), Department of Forest Genetics and Plant Physiology, Swedish University of Agricultural Sciences, Umeå, Sweden
ZRS, VH and CC conceived and designed the experiments; CC performed the experiments and data analysis and wrote the manuscript; the other authors provided intellectual and editorial advice. Correspondence to Vaughan Hurry.
Colesie, C., Stangl, Z.R. & Hurry, V. Differences in growth-economics of fast vs. slow growing grass species in response to temperature and nitrogen limitation individually, and in combination. BMC Ecol 20, 63 (2020). https://doi.org/10.1186/s12898-020-00333-3
Keywords: Plant trait coordination; Stress physiology; Nutrient availability; Functional type; Ecophysiology; Behavioral and physiological ecology
Machine Learning Theory - Part 3: Regularization and the Bias-variance Trade-off
In the first part we explored the statistical model underlying the machine learning problem, and used it to formalize the problem in terms of obtaining the minimum generalization error. By noting that we cannot directly evaluate the generalization error of an ML model, we continued in the second part by establishing a theory that relates this elusive generalization error to another error metric that we can actually evaluate, which is the empirical error. Our final result was that:
$$R(h) \leq R_{\mathrm{emp}}(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln\frac{2}{\delta}}{2N}}$$
That is: the generalization error (or the risk) $R(h)$ is bounded by the empirical risk (or the training error) plus a term that is proportionate to the complexity (or the richness) of the hypothesis space $|\mathcal{H}|$, the dataset size $N$, and the degree of certainty $1 - \delta$ about the bound. We can simplify that bound even more by assuming that we have a fixed dataset (which is the typical case in most practical ML problems), so that for a specific degree of certainty we have:
$$R(h) \leq R_{\mathrm{emp}}(h) + C(|\mathcal{H}|)$$
Starting from this part, and based on this simplified theoretical result, we'll begin to draw some practical concepts for the process of solving the ML problem. We'll start by trying to get more intuition about why a more complex hypothesis space is bad.
Why rich hypotheses are bad?
To make things a little bit concrete and to be able to visualize what we're talking about, we'll be using the help of a simulated dataset, which is a useful tool often used to demonstrate concepts for which we might need to draw multiple instances of the dataset from the same distribution; something that cannot be effectively done with real datasets. In a simulated dataset we define our own target function, and use that function, through the help of a computer program, to draw as many datasets as we want from the distribution it describes. In the following discussion we're going to sample $x$ uniformly from the interval $[-1,1]$ and use a one-dimensional target function $f(x) = \sin(x)$ which generates a noisy response (as we discussed in the first part) $y = f(x) + \zeta$, where $\zeta$ is a random noise drawn from a zero-mean distribution, in our case a Gaussian distribution with a standard deviation of 2. Recall that when we train an ML model on a dataset, we are trying to find the relation between the predictor features $x$ and the response $y$, so ideally we need the hypothesis to account for the noise as little as possible; noise by definition has no explanatory value whatsoever, and accommodation of noise will skew the model away from the true target, resulting in poor performance on future data, hence poor generalization. In order to understand the problem of rich hypotheses, we'll investigate how different hypotheses of different complexities adhere to such criteria. In the following animation we train linear, cubic and tenth degree polynomial hypotheses, each on 100 different simulated datasets of 200 points (only 20 are shown) drawn from the distribution described above. Each of these models is drawn with a light-blue line, the average of each hypothesis is shown with the darker blue line, while the true target is shown by the dashed black line. The offset of the points from the true target curve is an indicator of the noise; because if there wasn't any noise, the points would lie on the dashed black curve. So the further the point is from the true target curve, the more noisy it is.
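A minimal Python sketch of this simulation (the target, the noise level, the dataset size and the number of datasets follow the numbers quoted above; the variable names and the evaluation grid are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin                        # the true target f(x) = sin(x)
x_grid = np.linspace(-1, 1, 200)  # grid on which we evaluate hypotheses

def sample_dataset(n=200, noise_std=2.0):
    """Draw one simulated dataset: x ~ Uniform[-1, 1], y = sin(x) + noise."""
    x = rng.uniform(-1, 1, n)
    y = f(x) + rng.normal(0.0, noise_std, n)
    return x, y

def fit_hypotheses(degree, n_datasets=100):
    """Fit one polynomial hypothesis per dataset; evaluate all of them on x_grid."""
    hs = np.empty((n_datasets, x_grid.size))
    for i in range(n_datasets):
        x, y = sample_dataset()
        coeffs = np.polyfit(x, y, degree)   # least-squares polynomial fit
        hs[i] = np.polyval(coeffs, x_grid)
    return hs

for degree in (1, 3, 10):
    hs = fit_hypotheses(degree)
    h_bar = hs.mean(axis=0)                       # the average hypothesis
    bias = np.mean((h_bar - f(x_grid)) ** 2)      # the (squared) bias term
    var = np.mean(hs.var(axis=0))                 # the variance term
    print(f"degree {degree:2d}: bias = {bias:.3f}, variance = {var:.3f}")
```

Running it also prints rough estimates of the bias and variance terms discussed next, so the qualitative picture of the animation can be checked numerically.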
The first thing we notice from that animation is that the richer and more complex the hypothesis gets, the less its difference from the true target becomes on average. That difference between the estimator's mean (the hypothesis) and the value it's trying to estimate (the target) is referred to in statistics as the bias:
$$\text{bias} = \mathop{\mathbb{E}}_{x}\left[\left(\overline{h}(x) - f(x)\right)^2\right]$$
where $\overline{h}(x)$ is the mean of different hypotheses generated from training the model on different datasets, i.e. $\overline{h}(x) = \mathop{\mathbb{E}}_{\mathcal{D}}\left[h^{(\mathcal{D})}(x)\right]$, where $h^{(\mathcal{D})}(x)$ indicates a hypothesis generated by training on the dataset $\mathcal{D}$. In English, the word "bias" commonly implies some kind of an inclination or prejudice towards something. Analogously, the bias in a statistical estimator can be interpreted as the estimator favoring some specific direction or a component in the target distribution over other major components. To make this interpretation concrete, let's take a look at the Taylor expansion of our target function. If you're not familiar with the concept of Taylor expansion, you can think of it as a method to write a function as an infinite sum of simpler functions; you can consider such simpler functions as components for the sake of our discussion here:
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \dots$$
It's obvious from the increasing value of the denominator that each higher component contributes very little to the value of the function, which makes higher components minor and unimportant. The high bias of the linear model can now be interpreted by the linear hypothesis function $h(x) = w_1x + w_0$ favoring the $x$ component of the target over the other major component $\frac{x^3}{3!}$. With the same logic, the seemingly low bias of the cubic model can be explained by the fact that the cubic hypothesis function $h(x) = w_1x + w_2x^2+w_3x^3 + w_0$ includes on average both the major components of the target without favoring any of them over the other. The little decrease in bias introduced by the tenth degree model can also be explained by the fact that the tenth degree polynomial includes the other minor components that do not contribute much to the value. It's simple to see that the closer the hypothesis gets to the target on average, the less its average loss from the target value becomes. This means that a low bias hypothesis results in a low empirical risk, which makes it desirable to use low bias models; and since rich models have the lower bias, what makes them so bad then? The answer to that question lies in the second thing we notice in the animation, which is that the richer the hypothesis gets, the greater its ability to extend its reach and grab the noise becomes. Go back to the animation and see how the linear model cannot reach the noisy points that lie directly above the peaks of the target graph, then notice how the cubic model can reach these but remains unable to reach those at the top of the frame, and finally see how the tenth degree model can even reach those on the top. In such situations, we say that the hypothesis is overfitting the data by including the noise. This overfitting behavior can be quantified by noticing how tightly the linear hypothesis realizations (the light-blue curves) are packed around its mean (the darker blue curve) compared to the messy fiasco the tenth degree model is making around its mean. This shows that the more the hypothesis overfits, the wider its possible realizations are spread around its mean, which is precisely the definition of variance!
So how much the hypothesis overfits can be quantified by its variance around its mean:
$$\text{variance} = \mathop{\mathbb{E}}_{x}\left[\mathop{\mathbb{E}}_{\mathcal{D}}\left[\left(h^{(\mathcal{D})}(x) - \overline{h}(x)\right)^2\right]\right]$$
Obviously, a high variance model is not desired because, as we mentioned before, we don't want to accommodate the noise; and since rich models have the higher variance, this is what makes them so bad and penalized against in the generalization bound.
The Bias-variance Decomposition
Let's take a closer look at the mess the tenth degree model made in its plot. Since $h^{(\mathcal{D})}(x)$ changes as $\mathcal{D}$ changes, as it is randomly sampled each time, we can consider $h^{(\mathcal{D})}(x)$ as a random variable of which the concrete hypotheses are realizations. Leveraging a similar trick we used in the first part, we can decompose that random variable into two components: a deterministic component that represents its mean and a random one that purely represents the variance:
$$h^{(\mathcal{D})}(x) = \overline{h}(x) + H^{(\mathcal{D})}_{\sigma}(x)$$
where $H^{(\mathcal{D})}_{\sigma}(x)$ is a random variable with zero mean and a variance equal to the variance of the hypothesis, that is:
$$\mathop{\mathbb{E}}_{\mathcal{D}}\left[H^{(\mathcal{D})}_{\sigma}(x)\right] = 0, \qquad \mathrm{Var}\left[H^{(\mathcal{D})}_{\sigma}(x)\right] = \mathop{\mathbb{E}}_{\mathcal{D}}\left[\left(h^{(\mathcal{D})}(x) - \overline{h}(x)\right)^2\right]$$
So some realization of $h^{(\mathcal{D})}(x)$, such as $\widetilde{h}(x)$ (the red curve in the above plot), can be written as $\widetilde{h}(x) = \overline{h}(x) + h_{\sigma}^{(\mathcal{D})}(x)$, where $h_{\sigma}^{(\mathcal{D})}(x)$ is a realization of $H_{\sigma}^{(\mathcal{D})}(x)$. Using the squared difference loss function (which is a very generic loss measure) $L(\hat{y},y) = (\hat{y} - y)^2$, we can write the risk at some specific data point $x$ as:
$$R(x) = \mathop{\mathbb{E}}_{\mathcal{D}}\left[\left(h^{(\mathcal{D})}(x) - f(x)\right)^2\right]$$
Here we replaced the expectation on $(x,y) \sim P(X,Y)$ by an expectation on the dataset $\mathcal{D}$, as the set of points $(x,y)$ distributed by $P(X,Y)$ are essentially the members of the dataset. Using the decomposition of $h^{(\mathcal{D})}(x)$ we made earlier, we can say that:
$$R(x) = \mathop{\mathbb{E}}_{\mathcal{D}}\left[\left(\overline{h}(x) - f(x)\right)^2 + 2\left(\overline{h}(x) - f(x)\right)H^{(\mathcal{D})}_{\sigma}(x) + H^{(\mathcal{D})}_{\sigma}(x)^2\right]$$
By the linearity of the expectation and the fact that the bias does not depend on $\mathcal{D}$, we write the previous equation as:
$$R(x) = \left(\overline{h}(x) - f(x)\right)^2 + 2\left(\overline{h}(x) - f(x)\right)\mathop{\mathbb{E}}_{\mathcal{D}}\left[H^{(\mathcal{D})}_{\sigma}(x)\right] + \mathop{\mathbb{E}}_{\mathcal{D}}\left[H^{(\mathcal{D})}_{\sigma}(x)^2\right]$$
By recalling that the mean of $H^{(\mathcal{D})}_{\sigma}(x)$ is 0, and by noticing that:
$$\mathop{\mathbb{E}}_{\mathcal{D}}\left[H^{(\mathcal{D})}_{\sigma}(x)^2\right] = \mathrm{Var}\left[H^{(\mathcal{D})}_{\sigma}(x)\right] + \left(\mathop{\mathbb{E}}_{\mathcal{D}}\left[H^{(\mathcal{D})}_{\sigma}(x)\right]\right)^2 = \mathrm{Var}\left[H^{(\mathcal{D})}_{\sigma}(x)\right]$$
we can say that:
$$R(x) = \underbrace{\left(\overline{h}(x) - f(x)\right)^2}_{\text{bias}(x)} + \underbrace{\mathrm{Var}\left[H^{(\mathcal{D})}_{\sigma}(x)\right]}_{\text{variance}(x)}$$
Now for all the data points in every possible dataset $\mathcal{D}$, the risk is:
$$R(h) = \mathop{\mathbb{E}}_{x}\left[\text{bias}(x) + \text{variance}(x)\right] = \text{bias} + \text{variance}$$
This shows that the generalization error decomposes nicely into the bias and the variance of the model, and by comparing this decomposition to our generalization inequality we can see the relation between the bias and the empirical risk, and between the variance and the complexity term. The decomposition also shows how the generalization error will be high even if the model has low bias, due to its high variance, and how it will remain high when using a low variance model, due to its high bias and high training error. This is the origin story of the Bias-variance Trade-off, our constant need to find the sweet model with the right balance between the bias and variance.
A Little Exercise
I cheated a little bit earlier when I defined the risk at a point as $R(x) = \mathop{\mathbb{E}}_{\mathcal{D}}\left[\left(h^{(\mathcal{D})}(x) - f(x)\right)^2\right]$, because the correct definition should measure the loss from the label $y$ (which is the available piece of information), not the target $f(x)$. Try to decompose the correct risk definition and see how it differs from the result we just got. View your results in light of what we claimed back in part 1, where we said: "We'll later see how by this simplification [abstracting the model by a target function and noise] we revealed the first source of error in our eventual solution," and see how it relates to your result.
Taming the Rich
Let's investigate more into this overfitting behavior, this time not by looking at how the different hypotheses are spread out but by looking at individual hypotheses themselves.
Let's take the red-curve hypothesis $\widetilde{h}(x)$ in the recent plot and look at the coefficients of its polynomial terms, especially those that exist in the Taylor expansion of the target function. For that particular function we find that:
- its $x$ coefficient is about $3.9$, as opposed to $1$ in the target's Taylor expansion.
- its $x^3$ coefficient is about $-5.4$, as opposed to $-\frac{1}{3!} \approx -0.17$.
- its $x^5$ coefficient is about $22.7$, as opposed to $\frac{1}{5!} \approx 8.3 \times 10^{-3}$.
- its $x^7$ coefficient is about $-53.1$, as opposed to $-\frac{1}{7!} \approx -2.0 \times 10^{-4}$.
- its $x^9$ coefficient is about $33.0$, as opposed to $\frac{1}{9!} \approx 2.8 \times 10^{-6}$.
It turns out that the hypothesis drastically overestimates its coefficients; they are much larger than they're supposed to be. This overestimation is the reason behind the hypothesis' ability to reach beyond the target mapping $x \mapsto f(x)$ and grab the noise as well. So this gives us another way to quantify the overfitting behavior, which is the magnitude of the hypothesis parameters or coefficients; the bigger this magnitude is, the more the hypothesis would overfit. It also gives us a way to prevent a hypothesis from overfitting: we can force it to have parameters of small magnitudes!
In training our models, we find a vector of parameters $\mathbf{w}$ that minimizes the empirical risk on the given dataset. This can be expressed mathematically as the following optimization problem:
$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}),\, y_i\right)$$
where $m$ is the size of the dataset, $\mathbf{x}$ is the feature vector, and $h(\mathbf{x};\mathbf{w})$ is our hypothesis, with explicitly stating that it's parametrized by $\mathbf{w}$. Utilizing our observation on the magnitudes of $\mathbf{w}$'s components, we can add a constraint on that optimization problem to force these magnitudes to be small. Instead of adding a constraint on every component of the parameters vector, we can equivalently constrain the magnitude of one of its norms (or more conveniently, the square of its norm) to be less than or equal to some small value $Q$. One of these norms that we can choose is the Euclidean norm:
$$\|\mathbf{w}\|_2 = \sqrt{\sum_{j=0}^{n} w_j^2}$$
where $n$ is the number of features. So we can rewrite the optimization with the constraint on the Euclidean norm as:
$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}),\, y_i\right) \quad \text{subject to} \quad \|\mathbf{w}\|_2^2 \leq Q$$
Using the method of Lagrange multipliers (here's a great tutorial on Khan Academy if you're not familiar with it) that states that:
The constrained optimization problem:
$$\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{subject to} \quad g(\mathbf{x}) \leq Q$$
is equivalent to the unconstrained optimization problem:
$$\min_{\mathbf{x}} f(\mathbf{x}) + \lambda\left(g(\mathbf{x}) - Q\right)$$
where $\lambda$ is a scalar called the Lagrange multiplier.
we can now write our constrained optimization problem in an unconstrained fashion as:
$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}),\, y_i\right) + \lambda\left(\|\mathbf{w}\|_2^2 - Q\right)$$
By choosing $\lambda$ to be proportionate to $\frac{1}{Q}$ we can get rid of the explicit dependency on $Q$ and replace it with an arbitrary constant $k$:
$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}),\, y_i\right) + \lambda\|\mathbf{w}\|_2^2 - k$$
If you're up for a little calculus, you can prove that the value of $\mathbf{w}$ that minimizes the problem when we drop the term involving $k$ also minimizes the problem with the term involving $k$ kept intact. So we can drop $k$ and write our minimization problem as:
$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}),\, y_i\right) + \lambda\|\mathbf{w}\|_2^2$$
and this is the formula for the regularized cost function that we've practically worked with a lot. This form of regularization is called L2-regularization because the norm we used, the Euclidean norm, is also called the L2-norm. If we used a different norm, like the L1-norm:
$$\|\mathbf{w}\|_1 = \sum_{j=0}^{n} |w_j|$$
the resulting regularization would be called L1-regularization.
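As a concrete illustration, here is a compact Python sketch of L2-regularization in action: a ridge-style polynomial fit obtained from the closed-form solution of the regularized least-squares problem. The data generation mirrors the earlier simulation; the choice of $\lambda = 2$ matches the value used in the plot below, while the function names are illustrative, not code from this post.

```python
import numpy as np

rng = np.random.default_rng(1)

def design_matrix(x, degree):
    """Polynomial design matrix with columns x^0, x^1, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

def fit_l2(x, y, degree, lam):
    """Minimize (1/m)*sum (Xw - y)^2 + lam*||w||^2 via its normal equations:
    (X^T X / m + lam * I) w = X^T y / m."""
    X = design_matrix(x, degree)
    m = len(y)
    A = X.T @ X / m + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y / m)

# One noisy dataset drawn as in the earlier simulation.
x = rng.uniform(-1, 1, 200)
y = np.sin(x) + rng.normal(0, 2.0, 200)

w_unreg = fit_l2(x, y, degree=10, lam=0.0)   # plain least squares
w_ridge = fit_l2(x, y, degree=10, lam=2.0)   # L2-regularized fit

print("max |w| unregularized:", np.abs(w_unreg).max())
print("max |w| regularized:  ", np.abs(w_ridge).max())
```

The regularized coefficients come out much smaller in magnitude, which is exactly the taming effect described above.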
The following plot shows the effect of L2-regularization (with $\lambda = 2$) on training the tenth degree model with the simulated dataset from earlier: The regularization resulted in a much more well-behaved spread around the mean than the unregularized version. Although the regularization introduced an increase in bias, the decrease in variance was greater, which makes the overall risk smaller (with software help we can get numerical estimates for these values and see these changes for ourselves). We can also examine the effect of regularization on the risk in light of our generalization bound. The following plot shows the contours of the squared difference loss of a linear model (two parameters). The red circle depicts our L2-regularization constraint $w_0^2 + w_1^2 \leq Q$. The plot shows that when the regularization is applied, the solution to the optimization problem shifted from its original position to the lowest-value position that lies on the constraint circle. This means that for a solution to be feasible, it has to be within that constraining circle. So you can think of the whole 2D grid as the hypothesis space before the regularization, and regularization comes to confine the hypothesis space into this red circle. With this observation, we can think of the minimization problem:
$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}),\, y_i\right) + \lambda\|\mathbf{w}\|_2^2$$
as a direct translation of the generalization bound $R(h) \leq R_{\mathrm{emp}}(h) + C(|\mathcal{H}|)$, with the regularization term as a minimizer for the complexity term. The only piece missing from that translation is the definition of the loss function $L$. Here we used the squared difference; next time we'll be looking into other loss functions and the underlying principle that combines them all.
References and Additional Readings
Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
Abu-Mostafa, Y. S., Magdon-Ismail, M., & Lin, H. (2012). Learning from data: a short course.
Analysis of three-dimensional mapping problems in incoherent digital holography
Philjun Jeon,1 Heejung Lee,1 Jongwu Kim,1 Cheng Liu,2,3 and Dugyoung Kim1,*
1Physics Department, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, South Korea
2Department of Optoelectric Information Science and Technology, Jiangnan University, 1800 Lihu Ave, Binhu, Wuxi, Jiangsu, China
3Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
*Corresponding author: [email protected]
Philjun Jeon, Heejung Lee, Jongwu Kim, Cheng Liu, and Dugyoung Kim, "Analysis of three-dimensional mapping problems in incoherent digital holography," Opt. Express 28, 4501-4515 (2020). https://doi.org/10.1364/OE.384477
Original Manuscript: November 28, 2019; Revised Manuscript: January 22, 2020; Manuscript Accepted: January 22, 2020
Abstract: Self-interference digital holography (SIDH) and Fresnel incoherent correlation holography (FINCH) are recently introduced holographic imaging schemes for recording and reconstructing three-dimensional (3-D) information of objects by using incoherent light. Unlike conventional holography, a reference wave in incoherent holography is not predetermined by an experimental setup but changes with the target objects. This makes the relation between the 3-D position information of an object and that stored in a measured hologram quite complicated. In this paper, we provide simple analytic equations for an effective 3-D mapping between the object space and the image space in incoherent holography. We have validated our proposed method with numerical simulations and off-axis SIDH experiments. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Ever since the first invention of holography by Gabor [1], a lot of effort and many studies have been devoted to using it for 3-D imaging and display. A hologram is a recorded interference pattern of an object wave and a reference wave. In conventional holography, the reference wave is either a fixed plane wave or a spherical wave predetermined by an experimental setup. The 3-D information of an object can be retrieved from a measured hologram with a known reference wave. Several significant advances and brilliant ideas have been achieved and suggested lately in optical holography [2–4]. In 1999, E. Cuche demonstrated quantitative phase microscopy with digital holography, which achieved measurements on the scale of one-tenth of a nanometer in the axial direction [5]. Moreover, it has been shown that the 3-D information of an object can be acquired from a single hologram by numerical reconstruction of wavefields from digitally recorded holograms using diffraction formulas [6,7].
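As a side note on the numerical reconstruction step mentioned above, the following is a minimal angular-spectrum propagation sketch in Python. It is not the authors' code; the grid size, pixel pitch, wavelength and propagation distance are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (m) using the angular
    spectrum method; dx is the pixel pitch in meters (square grid assumed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: numerically refocus a (synthetic) complex hologram field by 0.1 m.
field = np.ones((512, 512), dtype=complex)   # placeholder complex field
refocused = angular_spectrum_propagate(field, wavelength=640e-9, dx=10e-6, z=0.1)
```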
However, digital holography requires a coherent light source to generate an interference pattern for an object. Unlike coherent laser light, natural light, such as incandescent or fluorescent light, has a short coherence length, which makes it hard to observe interference patterns. Several ingenious methods have thus been proposed for incoherent holography: scanning holography [8,9], Fresnel incoherent correlation holography (FINCH) [10–12], and self-interference incoherent digital holography (SIDH) [13–16]. In scanning holography, interference patterns are obtained by using a heterodyne detection scheme with a single-pixel detector. A 2D hologram of a fluorescent sample is obtained with an incoherent light source after scanning the sample along the transverse plane. This method requires a complicated measurement system and heavy numerical post-processing, and it does not have the advantage of single-shot data acquisition. FINCH and SIDH methods use self-interference, where object light that is scattered by or transmitted through a sample is divided into two. In FINCH or SIDH, light diffracted from one object point can interfere only with itself. FINCH uses a spatial light modulator (SLM), and SIDH uses a Michelson interferometer, to obtain self-interference holograms [10–15]. Additionally, there are several approaches to improve FINCH and SIDH [16–22].
A measured hologram can be expressed with the following equation:
(1) $$H = |R+O|^2 = |R|^2 + |O|^2 + RO^* + R^*O$$
It is the intensity of a measured interference pattern formed by a reference wave $(R)$ and an object wave $(O)$. The first two terms on the right side do not contain any phase information and are known as the DC term of a hologram. The remaining two terms on the right are complex conjugates of each other. The third term is called the twin image term, and the last term carries the object wave information we need to retrieve. A phase-shifting method is often used in in-line holography to remove the two DC terms and the twin image term, where three or four phase-shifted holograms are measured by using a precision phase-shifting device [10,13,23]. In off-axis holography, the propagation directions of the reference wave $(R)$ and the object wave $(O)$ are different. When the reference and the object waves are not parallel, the DC terms and the twin image term are placed at different positions in the Fourier domain and can be easily removed [24–26]. In 2013, Kim et al. reported an off-axis SIDH with an LED source [16]. Muhammad et al. proposed another incoherent off-axis holography system without a spatial light modulator [17].
A laser with long temporal and spatial coherence is used in traditional digital holography to generate an interference pattern when the path length difference between the reference and object waves is significant. In this case, the reference wave comes directly from a laser source and does not contain any information about the object. The reference wave is fixed on the detection plane of a holographic imaging system, and only the object wave changes with the sample and contains its 3D information. The 3D information of a sample can be retrieved from a hologram with numerical beam propagation, and we need to know the mathematical expression of the reference wave to do this. Therefore, accurate calibration of the reference wave is necessary for coherent holography.
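For illustration, a minimal Python sketch of the off-axis filtering idea (isolating the $R^*O$ sideband of Eq. (1) in the Fourier domain) is shown below. It is a generic textbook-style procedure, not the authors' implementation, and the synthetic fringe pattern and window size are arbitrary choices.

```python
import numpy as np

def extract_object_term(hologram, center, radius):
    """Isolate one sideband of an off-axis hologram by Fourier filtering.

    hologram : 2-D real array (the measured intensity of Eq. (1))
    center   : (row, col) of the sideband carrier in the fftshifted FFT plane
    radius   : radius in pixels of the circular band-pass window
    Returns the complex object term, re-centered at zero frequency.
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    rows, cols = np.indices(hologram.shape)
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    F_filtered = np.where(mask, F, 0.0)
    # Shift the selected sideband to the center to remove the carrier tilt.
    F_centered = np.roll(F_filtered,
                         (hologram.shape[0] // 2 - center[0],
                          hologram.shape[1] // 2 - center[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(F_centered))

# Usage on a synthetic off-axis fringe pattern (carrier along one diagonal):
x = np.arange(512)
X, Y = np.meshgrid(x, x)
holo = 1.0 + np.cos(2 * np.pi * (X + Y) / 16.0)   # DC term plus two sidebands
obj = extract_object_term(holo, center=(256 + 32, 256 + 32), radius=20)
```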
Unlike the case of coherent holography, in incoherent holography such as FINCH or SIDH the reference wave, as well as the object wave, changes with the sample position. When the reference wave changes with the sample position, the relation between the 3D position of an object and that stored in a hologram does not follow the conventional relation [27], and the 3D reconstruction of an object is a quite complicated procedure in FINCH and SIDH [28]. In this paper, we have investigated the relationship between the 3D position of an object and that of a stored hologram in incoherent holography. We provide simple analytic equations for an effective 3D mapping between the object and image spaces in incoherent holography. By using these equations, we have analyzed the axial and the transverse magnifications of an incoherent holographic imaging system with two lenses. The validity of our proposed equations and their usage in practical 3D mapping applications were demonstrated experimentally by using an off-axis SIDH system with an artificial point object. The artificial point object was made by guiding LED light into a single-mode fiber. We took a series of holograms while scanning the artificial point object along the x-y-z directions in the object space. The x-y-z coordinates of the point source were calculated from each measured hologram. A 3D volumetric mapping between the object and the image spaces of our SIDH system was obtained by comparing the actual 3D positions of the point source and the 3D positions calculated from the measured holograms. We have demonstrated that the actual 3D position of an object can be retrieved well from a measured incoherent hologram.
2. Theoretical analysis of the three-dimensional mapping in incoherent holography
2.1 Incoherent holography
The coherence of a light source determines the visibility of interference patterns in a measured hologram. The visibility of an interference pattern becomes maximum when a perfectly coherent light source is used. The spatial coherence of an extended light source decreases as the solid angle of the light source, observed from an object illuminated by the light source, increases. For a point source, the solid angle of the light source is zero, and the spatial coherence and the visibility of a measured hologram become maximum. Coherence length is a parameter representing the degree of temporal coherence of a light source. Coherence length is inversely proportional to the spectral bandwidth of a light source. As the path length difference between a reference and an object wave increases in an interferometer, the fringe visibility of the interference pattern decreases, dropping by half when the path length difference equals the coherence length. Since the coherence length of a typical incoherent light source, such as an LED, is only on the order of ten micrometers, it is hard to take a hologram image with incoherent light; the path lengths of the two interferometer arms must be matched to within the coherence length. Even when the two paths of an interferometer are perfectly matched, the maximum measurable distance of an object along the axial direction is limited to the coherence length of the light source for a conventional holographic imaging system. To avoid this problem, a reference wave with a separate light path is used in neither FINCH nor SIDH. Instead of using a separate reference light, interference patterns are formed by mixing two similar object lights in incoherent holography.
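As a quick sanity check on this claim, the common estimate $L_c \approx \lambda^2/\Delta\lambda$ can be evaluated with the LED parameters quoted later in this paper (640 nm center wavelength, 14 nm spectral width); the choice of formula here is ours, not the authors':

```python
wavelength = 640e-9   # LED center wavelength (m)
bandwidth = 14e-9     # spectral width (m)

coherence_length = wavelength ** 2 / bandwidth
print(f"L_c is about {coherence_length * 1e6:.1f} micrometers")   # about 29 micrometers
```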
Object light is split into two, and the two object lights are arranged to have slightly different radii of curvature. A hologram is obtained by overlapping them on a detection screen.
2.2 Extracted image position in incoherent holography
When two spherical object waves with different radii of curvature $Z_1$ and $Z_2$ are overlapped on a detection screen, we obtain an interference pattern with a new radius of curvature $Z_c$ given by Eq. (2) [29].
(2) $$Z_c = \pm\frac{Z_1Z_2}{Z_1-Z_2}$$
$Z_1$ and $Z_2$ are the relative axial positions from the detection plane to the focusing points of the two object waves. $Z_c$ is the retrieved axial position of an image obtained by using incoherent holography. $Z_1$, $Z_2$, and $Z_c$ are positive when the waves are converging on the observation plane, while they are negative when the waves are diverging. When the axial position of an object changes, both $Z_1$ and $Z_2$ change. Since we can only obtain $Z_c$ from a measured hologram by using a numerical focusing method, retrieving the axial position of a point object from a measured hologram is not a straightforward process. Thorough calibration of the optical system is required to calculate the axial position of an object from a measured hologram. Since there exist multiple ($Z_1$, $Z_2$) pairs that satisfy Eq. (2) for a given $Z_c$, we should find proper constraints of the optical system that allow only a single solution of ($Z_1$, $Z_2$). The axial position of the point object can then be obtained from $Z_1$ and $Z_2$ by using the thin-lens equation.
Figure 1(a) shows a schematic diagram of a simple incoherent holographic imaging system, where an object is imaged by two colocated lenses whose focal lengths are $f_1$ and $f_2$. This can be realized by using a spatial light modulator (SLM) in FINCH or a Michelson interferometer in SIDH. "$O$" on the optical axis represents an object. "$L$" is the axial position of the two lenses. $P_1$ and $P_2$ are images produced by the two lenses. Light passing through these two lenses acts as two object waves producing interference patterns. The pink, green, and red lines labeled "$A$", "$B$", and "$C$" on the right side of the two lenses are three possible positions of an observation plane in incoherent holography. Figure 1(b) illustrates the axial and transverse positions of the two images formed by the two lenses in incoherent holography. A red arrow on the far left side of the diagram is an object. The distance between the object and the two lenses is $a$. The distance from the two lenses to the measurement plane is $S$. Both $a$ and $S$ are positive in this notation. The two lenses produce two different images on the right side of the lenses. $(Z_1+S)$ and $(Z_2+S)$ are the distances from the two lenses to the images. By using the thin-lens equation, we have the following relation.
(3) $$Z_1 = \frac{(f_1-S)a+Sf_1}{a-f_1}, \quad Z_2 = \frac{(f_2-S)a+Sf_2}{a-f_2}$$
Fig. 1. System configuration for incoherent holography with two different imaging lenses. (a) Three cases with three different observation plane positions: $A$, $B$, and $C$. (b) Illustration of an object and two images formed by two different imaging lenses.
$Z_1$ and $Z_2$ are the radii of curvature of the two object waves observed on the measurement plane. $Z_1$ and $Z_2$ are both positive because the two images are formed on the right side of the screen. $Z_1$ or $Z_2$ becomes negative if its image is located on the left side of the measurement plane. Putting Eq. (3) into Eq. (2), we obtain the relation between the axial position of an object $a$ and the radius of curvature $Z_c$, which is the axial position of the image reconstructed from a measured hologram.
$Z_c$ is a relative axial position with respect to the measurement plane; it is negative (positive) when the image is on the left (right) side of the measurement plane.
(4) $$Z_c(a) = \frac{Z_1Z_2}{Z_2-Z_1} = \frac{(S+f_1)(S+f_2)a^2 + [2f_1f_2-(f_1+f_2)S]Sa+f_1f_2S^2}{(f_1-f_2)a^2}$$
Instead of $Z_1$ or $Z_2$, we obtain $Z_c$ from a measured hologram. Equation (4) shows a complicated relationship between the axial position of an object $a$ and that of an image $Z_c$ in incoherent holography. When the two lenses are parallel and located at the same axial position, the fringe patterns in a hologram become concentric. This can be easily explained by Fig. 1(a), where a point object (illustrated with a small red circle) and its two images formed by the two lenses are all on a single green line. The reconstructed image position from the measured hologram is also on the green line. From the geometry illustrated in Fig. 1(b), we can deduce the transverse magnification $M_t$ of a reconstructed image from a hologram.
(5) $$M_t = -\frac{S+Z_c(a)}{a}$$
Once $Z_c$ is obtained from a hologram, we can also calculate the axial magnification $M_z$ of a reconstructed image with
(6) $$M_z = -\frac{\partial Z_c(a)}{\partial a}$$
2.3 The effect of a measurement plane
Unlike conventional coherent holography, the axial position of the measurement plane produces different results in reconstructed images in incoherent holography. When the measurement plane is at the "$A$" position in Fig. 1(a), a hologram is made with two converging object waves. Since "$P_1$" and "$P_2$" in Fig. 1(a) are both on the right side of the measurement plane, $Z_1$ and $Z_2$ are both positive. When the measurement plane is at the "$B$" position, $Z_1$ is negative, and $Z_2$ is positive. $Z_1$ and $Z_2$ are both negative when the screen is at the "$C$" position, and a hologram is made with two diverging waves.
Figure 2 shows simulation data for the relation between the object space and the image space of a simplified incoherent holographic imaging system for the three different positions of the measurement plane illustrated in Fig. 1(a). We considered an incoherent holographic imaging system composed of two colocated lenses whose focal lengths are $f_1 = 75 mm$ and $f_2 = 100 mm$. We have calculated the axial position of the image and the axial and transverse magnifications while the axial position of the object is varied from $190 mm$ to $350 mm$. The first case is when the observation plane is at $S = 80 mm$. $Z_1$ and $Z_2$ are both positive; this corresponds to the case when the measurement plane is at the "$A$" position. Figure 2(a) shows that the axial position of the image $Z_c$ is positive and decreases as the distance between the two lenses and the object increases from $190 mm$ to $350 mm$. Figure 2(b) shows the transverse and axial magnifications of the incoherent imaging system as a function of the object position. The axial magnification $M_z$ ranges from 0.11 to 0.65, while the transverse magnification $M_t$ ranges from -0.77 to -0.29. The second case is when the measurement plane is located $140 mm$ ($S = 140 mm$) on the right side of the two lenses. $Z_1$ is negative, and $Z_2$ is positive; this corresponds to the case when the measurement plane is at the "$B$" position.
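As a brief aside, the curves in Fig. 2 are straightforward to reproduce numerically from Eqs. (3)-(6). Here is a minimal Python sketch (not the authors' code) that evaluates the first case, $S = 80\ mm$; swapping in $S = 140\ mm$ or $S = 220\ mm$ reproduces the remaining two cases discussed next.

```python
import numpy as np

f1, f2 = 75.0, 100.0                 # focal lengths of the two lenses (mm)
S = 80.0                             # lens-to-measurement-plane distance (mm), case "A"
a = np.linspace(190.0, 350.0, 321)   # object distances (mm)

# Eq. (3): radii of curvature of the two object waves at the screen.
Z1 = ((f1 - S) * a + S * f1) / (a - f1)
Z2 = ((f2 - S) * a + S * f2) / (a - f2)

Zc = Z1 * Z2 / (Z2 - Z1)             # Eq. (4): reconstructed image position
Mt = -(S + Zc) / a                   # Eq. (5): transverse magnification
Mz = -np.gradient(Zc, a)             # Eq. (6): axial magnification, numerically

print(f"Zc: {Zc.min():.1f} .. {Zc.max():.1f} mm")
print(f"Mt: {Mt.min():.2f} .. {Mt.max():.2f}")
print(f"Mz: {Mz.min():.2f} .. {Mz.max():.2f}")
```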
Figure 2(c) shows that $Z_c$ is negative and is a U-shaped function of $a$. Variations in $Z_c$ are only $16.3 mm$ while $a$ changes from $190 mm$ to $350 mm$. Since 3D positions in the image space do not have a one-to-one correspondence to those in the object space, this case should be avoided for practical 3D imaging. The axial magnification $M_z$ starts from 0.24 and ends at -0.16; it changes sign from positive to negative. The transverse magnification $M_t$ ranges from -0.40 to -0.67. The last case we considered is when the measurement plane is located at $S = 220 mm$. Figure 2(e) shows that $Z_c$ is positive and is almost a linear function of $a$. Since $Z_1$ and $Z_2$ are both negative, this corresponds to the case where the measurement plane is at the "$C$" position. Figure 2(f) shows that the axial and transverse magnifications are both negative and large in this case. Since the variation of $Z_c$ with respect to $a$ is large, this is the most preferred situation of the three cases. However, since there is a trade-off between the axial resolution and the lateral resolution in incoherent holographic imaging [10–16], this condition inevitably provides a very low transverse resolution.

Fig. 2. Numerical calculation of the image position $Z_c$ and the transverse/axial magnifications $M_t/M_z$ of a simple in-line holographic imaging system for three different system configurations depending on the position of the observation plane. (a),(b) are when the screen is at the "$A$" position, (c),(d) when it is at the "$B$" position, and (e),(f) when it is at the "$C$" position.

Figure 3 shows the mapping of 3D volumes from the object space to the image space. We considered the same incoherent holographic imaging system that produces the results of Fig. 2. A cube with a side length of $40 mm$ is used as an object. Holographic images are calculated for the object when its center is located at $a = 220 mm$. Figure 3(a) shows the 3D volume images obtained for the three different measurement planes, which produce the results in Fig. 2. The three volume images are shown in a single figure by placing the axial position of each measurement plane at the same position ($0 mm$). A cyan plane in the middle of the figure is the measurement plane. A green square frustum located very close to the screen on the left side is the image obtained when the measurement plane is at the "$B$" position with $S = 140 mm$. This corresponds to the results of Fig. 2(c). Since $Z_c$ is not a one-to-one function of $a$ and varies little, the 3D volume in the image space is shrunken very small. A large red square frustum on the right side of the cyan plane is the image obtained when the measurement plane is at the "$C$" position with $S = 220 mm$. It has the largest 3D volume because $Z_c$ is a fast-varying and almost linear function of $a$, as shown in Fig. 2(e). A small yellow square frustum inside the red one is the 3D volume image when the measurement plane is located at the "$A$" position with $S = 80 mm$. This corresponds to the results shown in Fig. 2(a). Since $Z_c$ is a slowly varying function of $a$, it covers only a small volume in the image space. Figure 3(b) shows more examples of volume mapping from the object space to the image space when the measurement plane is at the "$C$" position with $S = 220 mm$. A cube with a side length of $30 mm$ is used as an object. We calculated four 3D image volumes while the center of the object is shifted from $210 mm$ to $330 mm$ with a step size of $40 mm$.
The four 3D volume images in Fig. 3(b) show that the transverse magnification, as well as the axial magnification, of this holographic imaging system decreases as $a$ increases. They show that a large 3D volume in the object space is mapped almost linearly to another large 3D volume in the image space with little distortion in this case.

Fig. 3. (a) Three 3D volume images corresponding to an object of a cube with a side length of 40 mm for three different observation positions. A cyan square plate is the observation plane. Green, red, and yellow square frustums are 3D images of the object when the observation plane is at the "$B$", "$C$", and "$A$" positions in Fig. 1(a). (b) Four 3D volume images of an object of a cube with a side length of 30 mm for four different object positions when the observation plane is fixed at the "$C$" position. Blue, orange, green, and red square frustums are when the object is at $a$ = 210 mm, 250 mm, 290 mm, and 330 mm, respectively.

3. Method

3.1 Off-axis SIDH setup

In order to verify the complex relations of 3D volume mapping from the object space to the image space in incoherent holography, we built a simple incoherent holographic imaging system shown in Fig. 4. It is an off-axis self-interference digital holography (SIDH) system based on a Michelson interferometer. We did a series of measurements while scanning a point object along the axial and transverse directions in the object space. Light from an LED is butt-coupled into a single-mode fiber (630-HP, Nufern). The center wavelength of the LED is 640 nm, and its spectral width is 14 nm. The core diameter of the fiber is 3.5 µm, and the numerical aperture is 0.13. The other end of the fiber is cleaved, and the diverging light from the fiber end is used as a point object. The optical power of this artificial point object was measured to be about 90 nW. The light from the fiber end is collimated with an achromatic lens $L$ ($f_1 = 48 mm$). We used a non-polarizing beam splitter cube (CCM1-BS013, Thorlabs) to divide the object light into two parts with a 50:50 intensity ratio. The thickness of the beam splitter $\Delta$ is 25.4 mm. It is made of BK7 glass, which has a refractive index of 1.515 at 640 nm wavelength.

Fig. 4. (a) Off-axis SIDH system, and (b) schematics with notation. Two red lines are the two light paths of the Michelson interferometer, and a green line shows the beam splitter. $CM1$ and $CM2$ are two concave mirrors.

Two concave mirrors ($CM1$ and $CM2$) are used to make an interference pattern on the hologram plane. Their focal lengths are $f_2 = 50 mm$ and $f_3 = 100 mm$. $d_2$ is the distance from the lens $L$ to the center of the beam splitter cube. $d_3$ and $d_4$ are the distances from the center of the beam splitter to the two concave mirrors $CM1$ and $CM2$, respectively. We made $d_3$ and $d_4$ equal by adjusting $d_4$ with a micrometer while monitoring the visibility of the fringe patterns. We used an EMCCD (Luca-S, Andor) to obtain holograms. $d_5$ is the distance from the EMCCD to the center of the beam splitter cube. We have $d_2 = 105 mm$, $d_3 = d_4 = 28 mm$, and $d_5 = 292.5 mm$. We tilted $CM1$ by about 0.9 degrees to obtain an off-axis hologram. We used a 3-axis motorized stage (MX310/M, Thorlabs) to move the point object (fiber tip) in x-y-z in the object space.

3.2 Ray transfer matrix analysis for incoherent holography

The ray transfer matrix method uses the paraxial approximation to find the output ray vector of an optical system from an input ray vector [27].
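Before specializing to our setup, it may help to see the method in code. The sketch below is a minimal illustration (not from the original paper) in the $(\theta, y)$ convention of Eq. (8); NumPy, millimeter units, and the helper names T, L, and image_position are our assumptions. The final check reproduces the single-lens value of Eq. (3) for $a = 220 mm$, $f = 75 mm$, $S = 80 mm$.

```python
# Minimal ABCD sketch in the (theta, y) convention of Eq. (8).
import numpy as np

def T(d):
    """Free-space propagation over distance d: y -> y + d*theta."""
    return np.array([[1.0, 0.0], [d, 1.0]])

def L(f):
    """Thin lens (or concave mirror) of focal length f: theta -> theta - y/f."""
    return np.array([[1.0, -1.0 / f], [0.0, 1.0]])

def image_position(M, theta0=1e-3):
    """Signed distance from the output plane to the focus of a marginal ray
    launched from an on-axis point (y0 = 0), i.e. Z = -y/theta (Sec. 3.2)."""
    theta, y = M @ np.array([theta0, 0.0])
    return -y / theta

# Single thin lens check: object at 220 mm, f = 75 mm, screen at S = 80 mm
M = T(80.0) @ L(75.0) @ T(220.0)
print(image_position(M))   # ~33.8 mm, matching Eq. (3)
```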
A ray vector is defined as $\left(\begin{smallmatrix} \theta \\ y \end{smallmatrix}\right)$, where $\theta$ is the angle of the ray with respect to the optical axis, and $y$ is the transverse distance of the ray from the optical axis. We derived the ray transfer matrix of our optical system to obtain the analytic expressions of $Z_1$ and $Z_2$ by using the parameters of the experimental setup shown in Fig. 4. The output rays on the measurement plane through the two beam paths of the Michelson interferometer can be calculated by using the two system matrices $M_{sys1}$ and $M_{sys2}$. (7)$$\begin{cases}M_{sys1} = T_5(T_3L_2T_3)T_2L_1T_1\\M_{sys2} = T_5(T_4L_3T_4)T_2L_1T_1\end{cases}$$ (8)$$T_m = \begin{pmatrix}1 & 0 \\ d'_m & 1\end{pmatrix},\; L_m = \begin{pmatrix}1 & -1/f_m \\ 0 & 1\end{pmatrix},\;m = 1 \sim 5$$ where $d'_1 = d_1$, and $d'_m=d_m+\Delta (1/n-1)$ when $m = 2 \sim 5$. $n$ and $2\Delta$ are the refractive index and the thickness of the beam splitter cube used in the Michelson interferometer. $M_{sys1}$ is the transfer matrix for the beam path through the mirror $CM1$ whose focal length is $f_2$, while $M_{sys2}$ is the transfer matrix for the beam path through the mirror $CM2$ whose focal length is $f_3$. $T_1$ is the transfer matrix for a free-space propagation with a path length of $d_1$. $L_1$ represents the transfer matrix for the first lens $L$ whose focal length is $f_1$. $T_2$ is the transfer matrix for a free-space propagation with a path length of $d_2-\Delta (1-1/n)$, and $T_3$ for a free-space propagation with a path length of $d_3-\Delta (1-1/n)$. $L_2$ is the matrix for the first concave mirror $CM1$ with a focal length of $f_2$, while $L_3$ is the matrix for the second concave mirror $CM2$ with a focal length of $f_3$. $T_4$ is the transfer matrix for a free-space propagation with a path length of $d_4-\Delta (1-1/n)$, and $T_5$ for a free-space propagation with a path length of $d_5-\Delta (1-1/n)$. In order to find the axial positions of the two images formed through the two beam paths of the Michelson interferometer, we considered a marginal ray as an input ray vector given as $\left(\begin{smallmatrix}\theta \\ 0\end{smallmatrix}\right)$. The two output ray vectors $\left(\begin{smallmatrix}\theta _1 \\ y_1\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}\theta _2 \\ y_2\end{smallmatrix}\right)$ on the measurement plane are computed by using the two ray transfer matrices $M_{sys1}$ and $M_{sys2}$. The focusing positions of the two images with respect to the measurement plane are obtained as $Z_1=-y_1/\theta _1$ and $Z_2=-y_2/\theta _2$. Then, the axial position of a measured hologram $Z_c$ can be calculated by using Eq. (2). The analytic expression for the axial position of an image $Z_c$ with respect to the axial position of an object $x$ is given by the following equation. (9)$$Z_c = -\frac{a_1(a_2+a_3x)(a_4+a_5x)}{(a_6+a_7x)^2}$$ where $$\begin{array}{c} a_1 \equiv -\frac{f_1^2f_2f_3}{(f_2-f_3)}, \quad a_2 \equiv d_3 + d_4 - \frac{(d_2+d_3)(d_3+d_4-f_2)}{f_2}, \\ a_3 \equiv \frac{b_1-b_2f_2}{f_1f_2}, \quad a_4 \equiv d_3 + d_4 - \frac{(d_2+d_3)(d_3+d_4-f_3)}{f_3}, \\ a_5 \equiv \frac{b_1-b_2f_3}{f_1f_3}, \quad a_6 \equiv (d_2+d_3)f_1, \quad a_7 \equiv f_1-d_2-d_3, \\ b_1 \equiv d_2d_3+d_2d_4+d_3d_4-(d_3+d_4)f_1+d_3^2, \quad b_2 \equiv d_2+2d_3+d_4-f_1 \end{array}$$ Equation (9) can be simplified to (10)$$Z_c(x) = C_1\frac{(x-C_2)(x-C_3)}{(x-C_4)^2}$$ where $C_1$, $C_2$, $C_3$, and $C_4$ are constants that can be determined by the parameters of the experimental setup shown in Fig. 4: $f_1, f_2, f_3, d_2, d_3, d_4, d_5$, and $\Delta$.
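The composition in Eq. (7) is straightforward to evaluate numerically. The following self-contained sketch (our illustration, not the authors' code) builds the two system matrices with the measured parameters of Sec. 3.1, taking the printed glass-path correction $\Delta(1-1/n)$ and the quoted value $\Delta = 25.4 mm$ at face value, and evaluates $Z_c$ for a given object distance $x = d_1$.

```python
# Sketch of Eqs. (7)-(8): compose the two SIDH system matrices and evaluate
# Z_c via Eq. (2). Parameters follow Sec. 3.1; the factor-of-two convention
# for the beam-splitter thickness is taken at face value (an assumption).
import numpy as np

def T(d):
    return np.array([[1.0, 0.0], [d, 1.0]])        # free-space propagation

def L(f):
    return np.array([[1.0, -1.0 / f], [0.0, 1.0]]) # thin lens / mirror

f1, f2, f3 = 48.0, 50.0, 100.0                     # L, CM1, CM2 [mm]
d2, d3, d4, d5 = 105.0, 28.0, 28.0, 292.5          # measured distances [mm]
n, delta = 1.515, 25.4
corr = delta * (1.0 - 1.0 / n)                     # reduced path in BK7 glass

def z_from(M, theta0=1e-3):
    theta, y = M @ np.array([theta0, 0.0])         # marginal ray, y0 = 0
    return -y / theta                              # focus position Z = -y/theta

def z_c_sidh(x):
    front = T(d2 - corr) @ L(f1) @ T(x)                               # tip -> BS
    M1 = T(d5 - corr) @ T(d3 - corr) @ L(f2) @ T(d3 - corr) @ front   # via CM1
    M2 = T(d5 - corr) @ T(d4 - corr) @ L(f3) @ T(d4 - corr) @ front   # via CM2
    Z1, Z2 = z_from(M1), z_from(M2)
    return Z1 * Z2 / (Z2 - Z1)                     # Eq. (2)

# The slope near x = 38 mm is of the order of the axial magnification in Fig. 8.
print(z_c_sidh(38.0), z_c_sidh(38.5) - z_c_sidh(37.5))
```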
Here, we replaced $d_1$ with $x$, which is the distance of the fiber tip from the first lens $L$ in Fig. 4. Note that Eq. (10) has the same form as Eq. (4) when $(x-C_4)$ is replaced with $a$. This is because the schematic diagram of Fig. 1 is a simplified version of the incoherent imaging setup shown in Fig. 4. As the reconstructed axial position $Z_c$ of an image is represented by an analytic expression of $x$ with four unknowns, we need at least four different reference data points to find the four unknown constants.

3.3 Experimental 3D mapping from an image space onto an object space

We experimentally verified the relation between the axial position $x (= d_1)$ of an object and that of a reconstructed image $Z_c$ by measuring multiple holograms of a point object. We used a fiber tip as a point object and calculated its 3D position from a measured incoherent hologram. A commercially available LED with 640 nm center wavelength, 14 nm spectral width, and 3 W average power was used as a light source. The output of the LED is butt-coupled into a single-mode fiber (630-HP, Nufern) whose core diameter is 3.5 µm. The output power from the other end of the fiber was measured to be about 100 nW. The fiber was sandwiched between two slide glasses using double-sided tape, as shown in Fig. 5. Figure 6 shows the reconstruction sequence for finding the axial position of a point object from a measured hologram in SIDH.

Fig. 5. Artificial incoherent point source sample made of a single fiber tip.

Fig. 6. Reconstruction sequence of finding the 3D position of a point object in incoherent holography. (a) Measured hologram, (b) Fourier-transformed intensity in the frequency domain, (c) focused image of a point source after numerical focusing, and (d) Tamura's coefficient plot with respect to the numerical propagation distance.

Figure 6(a) shows a typical hologram of a fiber tip measured by our SIDH system. About 10 thin curved lines along the diagonal direction from the top left corner to the bottom right corner are off-axis interference patterns produced by the two beam paths. Three thick circular fringe patterns at the center and the lower right corner are due to dust. Figure 6(b) shows the intensity of the Fourier-transformed function in the frequency domain for the hologram shown in Fig. 6(a). We used the spatial filtering method [26] for zero-order and twin-image elimination. All the frequency components are set to zero except for those within the red square in Fig. 6(b). These frequency components are cut and pasted to the center of the Fourier plane before performing the numerical propagation calculation to find the axial position of a focused image. We used the angular spectrum method (ASM) [30] to generate a series of propagated intensity images along the optical axis from the detection plane. Tamura's coefficient is used to measure the focus level of each generated image [31]. Figure 6(c) shows the best-focused image of the fiber tip among the many generated images. Note that the full width at half maximum (FWHM) of the best-focused spot along the +45 degree direction is about three times larger than that along the -45 degree direction. This is because the interference patterns in Fig. 6(a) are not circular but are curved diagonal lines. Since the measurement plane of our SIDH setup is located too far from the two focused image points of the fiber tip, only a fraction of the elliptic interference patterns is captured by the array sensor.
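The numerical-focusing chain of Fig. 6 can be sketched compactly. The code below is an assumed implementation, not the authors' software: it takes an already spatially filtered complex field (the cut-and-paste step of Fig. 6(b) is omitted), propagates it with the angular spectrum method [30], and scores each plane with Tamura's coefficient [31]. The 10 µm pixel pitch and the 640 nm wavelength are taken from the text; the function names are ours.

```python
# Sketch of the reconstruction chain of Fig. 6 (assumed implementation).
import numpy as np

wavelength = 640e-9          # LED center wavelength [m]
pitch = 10e-6                # sensor pixel pitch [m]

def angular_spectrum(field, z):
    """Propagate a complex field by distance z with the ASM."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, pitch)
    fy = np.fft.fftfreq(ny, pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def tamura(intensity):
    """Tamura's coefficient: sqrt(std/mean) of the intensity [31]."""
    return np.sqrt(intensity.std() / intensity.mean())

def find_focus(field, z_range):
    """Return the propagation distance that maximizes Tamura's coefficient,
    i.e. the reconstructed axial position Z_c of the point image."""
    scores = [tamura(np.abs(angular_spectrum(field, z)) ** 2) for z in z_range]
    return z_range[int(np.argmax(scores))]
```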
In order to have a circularly symmetric intensity profile for a focused image of a fiber tip, we would need to put the array sensor close to one of the two focusing points such that the size of the elliptic interference pattern becomes small and its center is within the sensor area. Figure 6(d) shows the normalized Tamura's coefficient as a function of the propagation distance. The axial position of an image ($Z_c$) is the axial position at which Tamura's coefficient becomes maximum.

Figure 7 shows the relation between the axial position $x (= d_1)$ of a point object (fiber tip) and that of a reconstructed image $Z_c$ measured by our SIDH setup. A series of holograms was taken while translating the point object along the axial direction with a stepper motor. Figure 7(a) shows 110 data points showing the relation between the object position $x$ and the reconstructed image position $Z_c$. They consist of 10 sets of data points covering 20 mm of axial distance in the object space. Each set has 11 equally spaced data points covering a 300 µm object distance with a step size of 30 µm along the axial direction. Blue solid circles are reconstructed image position data calculated from measured holograms by the sequence illustrated in Fig. 6. The solid orange curve in the figure shows a theoretical curve calculated from the analytic expression of $Z_c$ in Eq. (10) with the measured parameters $f_1, f_2, f_3, d_2, d_3, d_4, d_5$, and $\Delta$. Figure 7(b) is an expanded view of Fig. 7(a) when the axial position of an object is near $x = 38 mm$. The solid green line is the least-square fit of the experimental data with Eq. (10) for $C_1, C_2, C_3$, and $C_4$. Note that the least-square fit (green line) matches the experimental data better than the system analysis curve (orange line). This is because some parameters ($d_2, d_3, d_4, d_5$) were measured by a ruler with 1 mm marks and are not very accurate. The half-length of an error bar represents the standard deviation of 11 repeated measurements for a given object position. These results show that the accuracy (or the mean of the standard deviations) of the measured axial position of our method is about 0.5 mm. Since the axial magnification is about 43 at $x = 38 mm$ (Fig. 8), this corresponds to 12 µm in the object space.

Fig. 7. (a) Reconstructed axial position $Z_c$ of a point object for various axial object positions $x (= d_1)$. (b) Expanded view near $x = 38 mm$: reconstructed image positions and their $2\sigma$ error bars (twice the standard deviation), least-square curve fit (green), and parameter-fitted curve (orange) for Eq. (10).

Fig. 8. Axial and transverse magnifications of our SIDH system as a function of the axial object position $x (= d_1)$.

For a given axial position, we measured the transverse magnification of our SIDH system. Eleven holograms were measured while translating the fiber tip over a 300 µm distance along the transverse direction with a step size of 30 µm. Because of the low temporal coherence of the light source and the tilt between the two object lights in our off-axis SIDH system, we could only obtain holograms for objects close to the optical axis of the imaging system. We expect that this problem can be removed when a phase-shifting in-line holographic imaging system is used. Figure 8 shows the changes in the axial and the transverse magnifications of our SIDH system with respect to the axial object position.
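The green curve in Fig. 7(b) is a four-parameter least-squares fit of Eq. (10). A minimal sketch of such a fit is given below; it uses SciPy's curve_fit on synthetic stand-in data, since the measured $(x, Z_c)$ pairs are not tabulated here. The constants used to generate the data are arbitrary illustrative values, not the calibrated ones.

```python
# Sketch of the least-squares calibration of Eq. (10) (cf. green curve, Fig. 7).
import numpy as np
from scipy.optimize import curve_fit

def zc_model(x, c1, c2, c3, c4):
    """Eq. (10): Zc(x) = C1 (x - C2)(x - C3) / (x - C4)^2."""
    return c1 * (x - c2) * (x - c3) / (x - c4) ** 2

# Synthetic stand-in data; real inputs would be the measured (x, Zc) pairs.
rng = np.random.default_rng(0)
x = np.linspace(26.0, 44.0, 110)                         # object positions [mm]
zc = zc_model(x, 120.0, 12.0, 18.0, 60.0) + rng.normal(0.0, 0.5, x.size)

popt, _ = curve_fit(zc_model, x, zc, p0=(100.0, 10.0, 20.0, 55.0))
mz = -np.gradient(zc_model(x, *popt), x)                 # Eq. (6) from the fit
print(popt, mz.mean())
```

As the text notes, because Eq. (10) has only four unknowns, a handful of reference points along the axial direction suffices to calibrate the whole mapping.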
Each of the axial and the transverse magnifications is calculated by fitting 11 data points with straight lines along the axial and the transverse directions. The blue and orange curves are best-fit second-order polynomial curves. When the axial position of the point object is scanned from 26 mm to 44 mm, the transverse magnification varies little, from 4.0 to 5.5, while the axial magnification changes considerably, from 21.0 to 65.6. In a conventional imaging system, the axial magnification is the square of the transverse magnification; our results clearly show that this relation does not hold in incoherent holography.

3.4 Imaging an artificial 3D object

In order to demonstrate the 3D imaging capability of our SIDH system, we took hologram images of an artificial 3D object made of three fiber tips. A homemade fiber bundle was prepared by putting ten fibers into a capillary glass tube. One end of the fiber bundle was cut and polished for light coupling. It is directly butt-coupled to an LED whose center wavelength is 640 nm. The output powers of the light coupled into the fibers range from 60 to 300 nW. Three fibers with similar output powers, around 150 nW, were selected to make the artificial 3D object. The three fiber tips were sandwiched between two slide glasses using double-sided tape. Figure 9 shows the schematic diagram of the artificial 3D object composed of three fiber tips. The three fibers are aligned on a straight line along the transverse direction by the two slide glasses. The axial positions of the three fiber tips are all different.

Fig. 9. Artificial 3D sample made of three fiber tips.

We took a hologram of this artificial 3D object with our SIDH setup and calculated the 3D positions of the fiber tips in the object space. Figure 10(a) is the measured hologram. Intensity images of the 3D object were calculated for various propagation distances by using the ASM. Figure 10(b) shows the reconstructed images of the three fiber tips at their focused positions. Red squares represent the areas within which Tamura's coefficient was calculated. Each square consists of 60 by 60 image pixels. Figure 10(c) is an image of the three fiber tips viewed from the side. It is imaged by a microscope with 10X magnification. Transverse and axial distances between the fibers were calibrated using the fact that the outer diameter of a commercial fiber (630-HP, Nufern) is 125 µm.

Fig. 10. SIDH measurement for a 3D object which is composed of three point sources. (a) A raw hologram image. (b) Reconstructed images of three fiber tips at their focused positions by using Tamura's coefficient. Red squares are areas within which Tamura's coefficient was calculated. (c) Side image of the artificial 3D object. (d) Normalized Tamura's coefficient curve for the three fiber tips as a function of the propagation distance. Blue, orange, and green curves are for Fiber 1, Fiber 2, and Fiber 3 shown in figure (c), respectively.

Figure 10(d) shows the normalized Tamura's coefficient as a function of the axial position for the three fiber tips. The calculated axial positions of the three fiber tips correspond to the peak positions of the three Tamura's coefficient curves. Table 1 shows the 3D positions of the three fiber tips both in the image space and the object space. The 3D coordinates of the three fiber tips ($x'_i, y'_i, z'_i$) were obtained by the numerical focusing method illustrated in Fig. 10 for $i = 1, 2, 3$.
According to the axial position of each fiber tip ($z'_i$) in the image space, the axial and the transverse magnifications were obtained from Fig. 8 and used to find the corresponding 3D position in the object space ($x_i, y_i, z_i$). We calculated the distances between pairs of point objects ($\Delta {}x_{ij}, \Delta {}y_{ij}, \Delta {}z_{ij}$) along the $x$, $y$, and $z$ directions, and the results are given in Table 2. These distances were compared with the measured distances shown in Fig. 10(c). We found good agreement between the results by SIDH and those obtained by direct side-view imaging with a microscope. Axial distances agree with each other within about 0.2% accuracy. However, the transverse distances show only about 12% accuracy. The reason for these large errors along the transverse direction is the large distortion due to the off-axis geometry of our SIDH setup and the large pixel pitch (10 µm) of the array sensor we used.

Table 1. 3D positions of three fiber tips in the image and the object space [µm].

          Measured image position (x, y, z)    Reconstructed object position (x', y', z')
Fiber 1   (200, 218, 041)                      (201, 272, 040)
Fiber 3   (200, 852, -510)                     (203, 848, -510)

Table 2. Relative distances between objects obtained by SIDH and side-view imaging [µm].

          SIDH    Side-view
Δx12      -1      0
Δy12      137     201
Δz12      364     363
Δx13      2       0
Δy13      576     634
Δz13      -550    -554

In this study, we have investigated the 3D mapping problem in incoherent holography. We performed a theoretical analysis of a simple incoherent holographic imaging system with two lenses. It is demonstrated that the axial position of the observation plane is a critical parameter. In order to obtain a high axial resolution in incoherent holography, the observation screen of a holographic imaging system should be located farther away from the two images of an object, such that the interference patterns are formed by two diverging waves. In Eq. (4) we provided a simple analytic expression, with five unknown parameters, for the relation between the axial object position $a$ and the axial image position $Z_c$ for the simple two-lens incoherent holographic imaging system. We also provided an analytic expression, Eq. (5), for the transverse magnification between a transverse plane in the object space and the corresponding transverse plane in the image space for a given axial position of an object. In theory, we can thus map between the 3D object space and the 3D image space in a simple incoherent holographic imaging system using two lenses. In order to demonstrate simple 3D mapping relations between the object and the image space in incoherent holography, we built an off-axis SIDH imaging system with three lenses. We provided the analytic expression relating the axial position of an object and that of the produced image $Z_c$, derived with ray transfer matrices in Eq. (10), which has four unknown parameters. By using a fiber tip as a point source, we mapped the 3D positions in the object space to those in the image space for the SIDH system we built. As predicted in theory, we demonstrated that only a few points along the axial direction are needed to fit the four unknown parameters in the analytic expression for the 3D mapping between object positions and image positions. We tested the precision of our 3D mapping method with an artificial 3D object composed of three fiber tips. The axial and transverse distances between the three fiber tips were measured both with our SIDH system and with a microscope. We verified that the results of the two different methods show good agreement with each other.

National Research Foundation of Korea (2017R1A2B4003950); Korea Institute for Advancement of Technology (P0011925).
The authors declare that there are no conflicts of interest related to this article.

1. D. Gabor, "Microscopy by Reconstructed Wave-Fronts," Proc. R. Soc. London, Ser. A 197(1051), 454–487 (1949).
2. J. W. Goodman and R. W. Lawrence, "Digital image formation from electronically detected holograms," Appl. Phys. Lett. 11(3), 77–79 (1967).
3. U. Schnars, "Direct phase determination in hologram interferometry with use of digitally recorded holograms," J. Opt. Soc. Am. A 11(7), 2011–2015 (1994).
4. U. Schnars and W. Jüptner, "Direct recording of holograms by a CCD target and numerical reconstruction," Appl. Opt. 33(2), 179–181 (1994).
5. E. Cuche, P. Marquet, and C. Depeursinge, "Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms," Appl. Opt. 38(34), 6994–7001 (1999).
6. S. Grilli, P. Ferraro, S. D. Nicola, A. Finizio, G. Pierattini, and R. Meucci, "Whole optical wavefields reconstruction by Digital Holography," Opt. Express 9(6), 294–302 (2001).
7. C. J. Mann, L. Yu, and M. K. Kim, "Movies of cellular and sub-cellular motion by digital holographic microscopy," BioMed Eng OnLine 5(1), 21 (2006).
8. T.-C. Poon, K. B. Doh, B. W. Schilling, M. H. Wu, K. K. Shinoda, and Y. Suzuki, "Three-dimensional microscopy by optical scanning holography," Opt. Eng. 34(5), 1338–1344 (1995).
9. G. Indebetouw and W. Zhong, "Scanning holographic microscopy of three-dimensional fluorescent specimens," J. Opt. Soc. Am. A 23(7), 1699–1707 (2006).
10. J. Rosen and G. Brooker, "Digital spatially incoherent Fresnel holography," Opt. Lett. 32(8), 912–914 (2007).
11. J. Rosen and G. Brooker, "Non-scanning motionless fluorescence three-dimensional holographic microscopy," Nat. Photonics 2(3), 190–195 (2008).
12. J. Rosen and G. Brooker, "Fluorescence incoherent color holography," Opt. Express 15(5), 2244–2250 (2007).
13. M. K. Kim, "Full color natural light holographic camera," Opt. Express 21(8), 9636–9642 (2013).
14. D. C. Clark and M. K. Kim, "Nonscanning three-dimensional differential holographic fluorescence microscopy," J. Electron. Imaging 24(4), 043014 (2015).
15. J. Hong and M. Kim, "Overview of techniques applicable to self-interference incoherent digital holography," JEOS:RP 8, 13077 (2013).
16. J. Hong and M. K. Kim, "Single-shot self-interference incoherent digital holography using off-axis configuration," Opt. Lett. 38(23), 5196–5199 (2013).
17. D. Muhammad, C. M. Nguyen, J. Lee, and H. Kwon, "Spatially incoherent off-axis Fourier holography without using spatial light modulator (SLM)," Opt. Express 24(19), 22097–22103 (2016).
18. X. Quan, O. Matoba, and Y. Awatsuji, "Single-shot incoherent digital holography using a dual-focusing lens with diffraction gratings," Opt. Lett. 42(3), 383–386 (2017).
19. D. Liang, Q. Zhang, and J. Liu, "Single-shot Fresnel incoherent digital holography based on geometric phase lens," in Digital Holography and Three-Dimensional Imaging 2019 (Optical Society of America, 2019), paper M5A.6.
20. T. Tahara, T. Kanno, Y. Arai, and T. Ozawa, "Single-shot phase-shifting incoherent digital holography," J. Opt. 19(6), 065705 (2017).
21. T. Nobukawa, T. Muroi, Y. Katano, N. Kinoshita, and N. Ishii, "Single-shot phase-shifting incoherent digital holography with multiplexed checkerboard phase gratings," Opt. Lett. 43(8), 1698–1701 (2018).
22. V. Anand, T. Katkus, S. Lundgaard, D. Linklater, E. P. Ivanova, S. H. Ng, and S. Juodkazis, "Fresnel incoherent correlation holography with single camera shot," arXiv:1911.08291 [physics] (2019).
23. I. Yamaguchi and T. Zhang, "Phase-shifting digital holography," Opt. Lett. 22(16), 1268–1270 (1997).
24. E. N. Leith and J. Upatnieks, "Reconstructed wavefronts and communication theory," J. Opt. Soc. Am. 52(10), 1123–1130 (1962).
25. E. N. Leith and J. Upatnieks, "Wavefront reconstruction with diffused illumination and three-dimensional objects," J. Opt. Soc. Am. 54(11), 1295–1301 (1964).
26. E. Cuche, P. Marquet, and C. Depeursinge, "Spatial filtering for zero-order and twin-image elimination in digital off-axis holography," Appl. Opt. 39(23), 4070–4075 (2000).
27. E. Hecht, Optics, 4th ed. (Addison Wesley, 2002).
28. J. Rosen, N. Siegel, and G. Brooker, "Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging," Opt. Express 19(27), 26249–26268 (2011).
29. H. Lee, P. Jeon, and D. Kim, "3D image distortion problem in digital in-line holographic microscopy and its effective solution," Opt. Express 25(18), 21969–21980 (2017).
30. M. K. Kim, "Basic Methods of Numerical Diffraction," in Digital Holographic Microscopy: Principles, Techniques, and Applications (Springer, 2011), pp. 43–54.
31. Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, "Edge sparsity criterion for robust holographic autofocusing," Opt. Lett. 42(19), 3824–3827 (2017).
Ultra High Energy Cosmic Rays 2018
8-12 October 2018, Ecole Supérieure de Chimie, Paris
Friedel Amphitheater, Chimie ParisTech, École Nationale Supérieure de Chimie de Paris, 11, rue Pierre et Marie Curie, 75231 PARIS Cedex 05
Local Organizing Committee: Mme Isabelle Lhenry-Yvon [email protected]

Isabelle Lhenry-Yvon (IPN Orsay), Gordon Thomson (University of Utah), Hiroyuki Sagawa (Institute for Cosmic Ray Research, University of Tokyo), Karl-Heinz Kampert (Bergische Universität Wuppertal), Tony Bell (University of Oxford), Peter Grieder, Günter Sigl (University of Hamburg), Andrew Taylor (MPIK), Enrique Zas, Petr Tinyakov (Universite Libre de Bruxelles), Charles Jui, Ralph Engel (Karlsruhe Institute of Technology), Piergiorgio Picozza (INFN and University of Rome Tor Vergata), Shoichi Ogio (Osaka City University)

8 Oct 2018, 14:00, Friedel Amphitheater
Sessions: future, M. Panasyuk

203. Welcome and Opening

80. The Highest Energy Particles in Nature – the Past, the Present and the Future
Prof. Alan Watson (University of Leeds)
Since the earliest days cosmic-ray physicists have been studying the highest-energy particles in Nature. A basic understanding of the development of electromagnetic cascades led to the first targeted searches for air showers and, soon after the discovery of charged and neutral pions,...

149. TA Spectrum
Dmitri Ivanov (University of Utah)
Telescope Array (TA) is measuring cosmic rays of energies from PeV to 100 EeV and higher in the Northern hemisphere. TA has two parts: the main TA and the TA low-energy extension (TALE). The main TA is a hybrid detector that consists of 507 plastic scintillation counters on a 1200 m spaced square grid that are overlooked by three fluorescence detector stations. TALE is also a hybrid detector and it...

154. Measurement of energy spectrum of ultra-high energy cosmic rays with the Pierre Auger Observatory
Valerio Verzi (INFN Roma "Tor Vergata")
The energy spectrum of high-energy cosmic rays measured using the Pierre Auger Observatory is presented. The measurements extend over three orders of magnitude in energy, from 3 x 10^17 eV up to the very end of the spectrum, and they benefit from the almost calorimetric estimation of the shower energies performed with the fluorescence telescopes. The huge amount of data collected with the surface...

160. Auger-TA energy spectrum working group report
The energy spectrum of ultra-high energy cosmic rays is the most emblematic observable for describing these particles. Beyond a few tens of EeV, the Pierre Auger Observatory and the Telescope Array, currently being exploited, provide the largest exposures ever accumulated in the Northern and the Southern hemispheres to measure independently a suppression of the intensity, in a complementary...

78. Minimal model of UHECR and IceCube neutrinos
Mr Dmitri Semikoz (APC, Paris)
In this talk I'll present a minimal model which explains the UHECR spectrum and composition and at the same time explains the IceCube astrophysical neutrino signal (M. Kachelriess et al., "Minimal model for extragalactic cosmic rays and neutrinos," Phys. Rev. D 96, 083006 (2017)). I'll also discuss the galactic-extragalactic transition in the context of this model.

134. NICHE: Air-Cherenkov light observation at the TA site
Prof. Douglas Bergman (University of Utah)
An array of non-imaging Cherenkov light collectors has recently been installed at the Telescope Array Middle Drum site, in the field-of-view of the TALE FD telescopes. This allows for imaging/non-imaging Cherenkov hybrid observations of air showers in the energy range just above 1 PeV. The performance of the array and the first analyses using hybrid measurements will be presented.

143. Data-driven model of the cosmic-ray flux and mass composition over all energies
Hans Dembinski (Max Planck Institute for Nuclear Physics, Heidelberg)
We present a parametrisation of the cosmic-ray flux and its mass composition over an energy range from 1 GeV to $10^{11}$ GeV, which can be used for theoretical calculations. The parametrisation provides a summary of the experimental state-of-the-art for individual elements from proton to nickel. We seamlessly combine measurements of the flux of individual elements from high-precision...

74. Particle Acceleration in Radio Galaxies
Prof. Tony Bell (University of Oxford)
Ultra-high energy cosmic rays pose an extreme challenge to theories of particle acceleration. We discuss the reasons why diffusive acceleration by shocks is a leading contender. A crucial aspect of shock acceleration is that cosmic rays must be efficiently scattered by magnetic field. This requires magnetic field amplification on scales comparable with the cosmic ray Larmor radius, which in...

159. Estimates of the Cosmic-Ray Composition with the Pierre Auger Observatory
Michael Unger (KIT)
We present measurements from the Pierre Auger Observatory related to the mass composition of ultra-high energy cosmic rays. Using the fluorescence telescopes of the Observatory we determine the distribution of shower maxima (Xmax) from 10^17.2 to 10^19.6 eV and derive estimates of the mean and variance of the average logarithmic mass of cosmic rays. The fraction of p, He, N and Fe nuclei as...

111. Measurements of UHECR Mass Composition by Telescope Array
William Hanlon (University of Utah)
Telescope Array (TA) has recently published results of nearly nine years of $X_{\mathrm{max}}$ observations, providing its highest-statistics measurement of UHECR mass composition to date for energies exceeding $10^{18.2}$ eV. This analysis measured the agreement of observed data with results expected for four different single elements. Instead of relying only on the first and second moments of...

151. Depth of maximum of air-shower profiles: testing the compatibility of measurements performed at the Pierre Auger Observatory and the Telescope Array experiment
Alexey Yushkov (Institute of Physics AS CR, Prague)
At the Pierre Auger Observatory and the Telescope Array (TA) experiment, the measurements of the depths of maximum of air-shower profiles, $X_{\rm max}$, are performed using direct observations of the longitudinal development of showers with the help of the fluorescence telescopes. Though the same detection technique is used by both experiments, the straightforward comparison of the characteristics...

115. Telescope Array search for ultra-high energy photons and neutrinos
Grigory Rubtsov (Institute for Nuclear Research of the Russian Academy of Sciences)
We report the ultra-high energy (> 1 EeV) photon flux limits based on the analysis of 9 years of data from the Telescope Array surface detector. The multivariate classifier is built upon 16 reconstructed parameters of the extensive air shower.
These parameters are related to the curvature and the width of the shower front, the steepness of the lateral distribution function, and the timing...

114. High-energy emissions from neutron star mergers
Shigeo Kimura (Pennsylvania State University)
Last year, the LIGO-Virgo collaborations reported the detection of the first neutron star merger event, GW170817, which was accompanied by observations of electromagnetic counterparts from radio to gamma rays. High-energy gamma rays and neutrinos were not observed. However, the mergers of neutron stars are expected to produce these high-energy particles. Relativistic jets are expected to be launched when...

88. Ultra-high energy neutrinos from neutron-star mergers
Valentin Decoene (Institut d'Astrophysique de Paris)
In the context of the recent multi-messenger observation of the neutron-star merger GW170817, we examine whether such objects could be sources of ultra-high energy astroparticles. At first order, the energetics and the population number are promising enough to envisage the production of a copious amount of high-energy particles, during the first minutes to weeks from the merger. In addition, the strong...

91. Ultra-High-Energy Cosmic Rays and Neutrinos from Tidal Disruptions by Massive Black Holes
Claire Guépin (IAP)
In addition to the emergence of time-domain astronomy, the advent of multi-messenger astronomy opens up a new window on transient high-energy sources. Through the multi-messenger study of the most energetic objects in our universe, two fundamental questions can be addressed: what are the sources of ultra-high energy cosmic rays (UHECRs) and the sources of very-high energy neutrinos? Jetted...

101. Supergalactic Structure of Multiplets with the Telescope Array Surface Detector
Jon Paul Lundquist (University of Utah - Telescope Array)
Evidence of supergalactic structure of multiplets has been found for ultra-high energy cosmic rays (UHECR) with energies above 10$^{19}$ eV using 7 years of data from the Telescope Array (TA) surface detector. The tested hypothesis is that UHECR sources, and intervening magnetic fields, may be correlated with the supergalactic plane, as it is a fit to the average matter density within the GZK...

66. Ultra-High-Energy Cosmic Rays from Radio Galaxies
Björn Eichmann
Radio galaxies are intensively discussed as the sources of cosmic rays observed above about 3 EeV, called ultra-high energy cosmic rays (UHECRs). The talk presents a first, systematic study that takes the individual characteristics of these sources into account, as well as the impact of the Galactic magnetic field and of the extragalactic magnetic-field structures up to a distance of 120...

107. Cosmogenic neutrinos from a combined fit of the Auger spectrum and composition
Jonas Heinze
We present a combined fit of the Auger spectrum and composition based on a newly developed code for the extragalactic propagation of cosmic-ray nuclei (PriNCe). This very efficient numerical solver of the transport equations allows for scans over large ranges of unknown UHECR source parameters. Here, we present a study of a generalized source population with three parameters...

73. The most updated results of the magnetic field structure of the Milky Way
Prof. JinLin Han (National Astronomical Observatories, Chinese Academy of Sciences)
Magnetic fields are an important agent for cosmic-ray transport.
The observed all-sky Faraday rotation distribution implies that the magnetic fields in the Galactic halo have a toroidal structure, but the radius range and scale height, as well as the strength, of the toroidal fields are totally unknown. In the Galactic disk, the magnetic fields probably follow the spiral structure with a...

6. Ultra-high energy cosmic rays from radio galaxies
James Matthews (University of Oxford)
The origin of ultra-high energy cosmic rays (UHECRs) is an open question, but radio galaxies offer one of the best candidate acceleration sites. Acceleration at the termination shocks of relativistic jets is problematic because relativistic shocks are poor accelerators to high energy. Using hydrodynamic simulations and general physical arguments, I will show that shocks with non- or mildly...

85. UHECR science with ground-based imaging atmospheric Cherenkov telescopes
Dr Iftach Sadeh (DESY-Zeuthen)
Arrays of imaging atmospheric Cherenkov telescopes (IACTs), such as VERITAS and the future CTA observatory, are designed to detect particles of astrophysical origin. IACTs are nominally sensitive to gamma rays and cosmic rays at energies between tens of GeV and hundreds of TeV. As such, they can be used as both direct and indirect probes of particle acceleration to very high energies.

103. Latest cosmic-ray results from IceCube and IceTop
Karen Andeen (Marquette University)
The IceCube Neutrino Observatory at the geographic South Pole, with its surface array IceTop, detects three different components of extensive air showers: the total signal at the surface, low-energy muons in the periphery of the showers, and high-energy muons in the deep array of IceCube. These three components allow for a variety of cosmic-ray measurements, including the energy spectrum and...

135. The Cosmic-Ray Energy Spectrum between 2 PeV and 2 EeV Observed with the TALE detector in monocular mode
Charles Jui
We present a measurement of the cosmic-ray energy spectrum by the Telescope Array Low-Energy Extension (TALE) air fluorescence detector (FD). The TALE FD is also sensitive to the Cherenkov light produced by shower particles. Low-energy cosmic rays, in the PeV energy range, are detectable by TALE as "Cherenkov events". Using these events, we measure the energy spectrum from a low energy...

77. KASCADE-Grande: Post-operation analyses and latest results
Andreas Haungs (KIT), KASCADE-Grande collaboration
The KASCADE-Grande experiment has significantly contributed to the current knowledge about the energy spectrum and composition of cosmic rays for energies between the knee and the ankle. Meanwhile, post-LHC versions of the hadronic interaction models are available and used to interpret the entire data set of KASCADE-Grande. In addition, a new, combined analysis of both arrays, KASCADE and...

198. Primary Energy Spectrum by the Data of EAS Cherenkov Light Arrays Tunka-133 and TAIGA-HiSCORE
Vasily Prosin
Tunka-133 has collected data since 2009. The data of 7 winter seasons (2009-2014 and 2015-2017) have been processed and analyzed so far. The new TAIGA-HiSCORE array, designed mostly for gamma-astronomy tasks, can also be used for reconstruction of the all-particle primary energy spectrum. These two arrays provide a very wide range of primary energy measurements, 2×10^14 – 2×10^18 eV, with the same...
75. Transition from Galactic to Extragalactic Cosmic Rays
Michael Kachelriess (Department of Physics, NTNU)
In addition to the all-particle cosmic ray (CR) spectrum, data on the primary composition and anisotropy have become available from the knee region up to a few $\times 10^{19}$ eV. These data point to an early Galactic-extragalactic transition and the presence of the Peters cycle, i.e. a rigidity-dependent maximal energy. Theoretical models therefore have to explain the ankle as a feature in the...

98. Ultra High Energy Cosmic Ray Propagation and Source Signatures
Prof. Andrew Taylor
Knowledge about the processes dictating UHECR losses during their propagation in extragalactic space allows the secondary species to be used to probe the source location. In this talk I will cover the state of our knowledge on these processes, and give examples of properties of the sources that may be inferred from the observed secondary species at Earth. Some suggestions will also be...

190. Galactic and Intergalactic magnetic fields
Prof. Andrii Neronov (University of Geneva & APC, Paris)
I will review the status of measurements and modelling of Galactic and intergalactic magnetic fields in the context of multi-messenger astrophysics and in particular of UHECR observations.

145. The extragalactic gamma-ray background above 100 MeV
Markus Ackermann (DESY)
I will review our knowledge about the properties and the origin of the extragalactic gamma-ray background above 100 MeV. Since the universe is transparent to MeV and GeV gamma rays up to very high redshifts, the extragalactic gamma-ray background contains the imprint of all gamma-ray emission from the beginning of star formation until the present day. Its properties have important implications...

178. Inductive Particle Acceleration

181. Black hole jets in clusters of galaxies as sources of high-energy cosmic particles
Ke Fang
It has been a mystery that, with ten orders of magnitude difference in energy, high-energy neutrinos, ultrahigh-energy cosmic rays, and sub-TeV gamma rays all present comparable energy injection rates, hinting at an unknown common origin. Here we show that black hole jets embedded in clusters of galaxies may work as sources of all three messengers. By numerically simulating the propagation of...

153. Multi-messenger Astrophysics at Ultra-High Energy with the Pierre Auger Observatory
Jaime Alvarez-Muniz (Dept. Particle Physics, Univ. Santiago de Compostela)
The study of correlations between observations of fundamentally different nature from extreme cosmic sources promises extraordinary physical insights into the Universe. With the Pierre Auger Observatory we can significantly contribute to multi-messenger astrophysics by searching for ultra-high energy particles, particularly neutrinos and photons which, being electrically neutral, point back to...

79. Recent IceCube results - evidences of neutrino emission from the blazar TXS 0506+056 and searches for Glashow resonance
Lu Lu (Chiba University)
Finally, a hundred years after the discovery of cosmic rays, a blazar has been identified as a source (at the ~3 sigma level) of high-energy neutrinos and cosmic rays, thanks to the real-time multimessenger observation led by the cubic-kilometer IceCube neutrino observatory. In this talk, details of the spatial-timing correlation analysis of the ~290 TeV neutrino event with Fermi light curves will...
89. Latest results on high-energy cosmic neutrino searches with the ANTARES neutrino telescope
Agustín Sánchez Losa (INFN - Sezione di Bari)
The ANTARES detector is currently the largest undersea neutrino telescope. Located in the Mediterranean Sea at a depth of 2.5 km, 40 km off the southern coast of France, it has been looking for cosmic neutrinos for more than 10 years. High-energy cosmic neutrino production is strongly linked with cosmic-ray production. The latest results from IceCube represent a step forward towards the...

156. Search for a correlation between the UHECRs measured by the Pierre Auger Observatory and the Telescope Array and the neutrino candidate events from IceCube and ANTARES
Dr Lorenzo Caccianiga (Università degli studi di Milano)
We present the results of three searches for correlations between UHECR events measured by the Pierre Auger Observatory and Telescope Array and high-energy neutrino candidate events from IceCube and ANTARES. A cross-correlation analysis is performed, where the angular separation between the arrival directions of UHECRs and neutrinos is scanned. The same events are also exploited in a separate...

199. Overview and results from the first four flights of ANITA
Amy Connolly (The Ohio State University)
ANITA was designed as a discovery experiment for ultra-high energy (UHE) neutrinos using the radio Askaryan detection technique, launching from McMurdo Station in Antarctica under NASA's long-duration balloon program and observing 1.5 million square kilometers of ice at once from an altitude of 40 km. Over ANITA's four flights we set the best constraints on UHE neutrino fluxes above 10^19 eV,...

104. The cosmogenic neutrino flux determines the fraction of protons in UHECRs
Dr Arjen van Vliet (DESY Zeuthen)
When UHECRs propagate through the universe, cosmogenic neutrinos are created via several interactions. In general, the expected flux of these cosmogenic neutrinos depends on multiple parameters describing the sources and propagation of UHECRs. However, using CRPropa, we show that a 'sweet spot' occurs at a neutrino energy of ~1 EeV. At that energy this flux only depends strongly on two...

138. TA Anisotropy Summary
Kazumasa Kawata (ICRR, University of Tokyo)
The Telescope Array (TA) is the largest ultra-high-energy cosmic-ray (UHECR) detector in the northern hemisphere, which consists of 507 surface detectors (SD) covering a total of 700 km^2 and three fluorescence detector stations. In this presentation, we will summarize recent results on the search for directional anisotropy of UHECRs using the latest data set collected by the TA SD array.

195. Study of the arrival directions of ultra-high-energy cosmic rays detected at the Pierre Auger Observatory
Piera Luisa Ghia (IPNO)
The distribution of the arrival directions of ultra-high energy cosmic rays is, together with the spectrum and the mass composition, a harbinger of their nature and origin. As such, it has been the subject of intense studies at the Pierre Auger Observatory since its inception in 2004, with two main lines of analysis being pursued at different angular scales and at different energies. One...

158. Covering the sphere at ultra-high energies: full-sky cosmic-ray maps beyond the ankle and the flux suppression
Jonathan Biteau (IPNO)
Despite deflections by Galactic and extragalactic magnetic fields, the distribution of the flux of ultra-high energy cosmic rays (UHECRs) over the celestial sphere remains a most promising observable for the identification of their sources.
158. Covering the sphere at ultra-high energies: full-sky cosmic-ray maps beyond the ankle and the flux suppression
Jonathan Biteau (IPNO)
Despite deflections by Galactic and extragalactic magnetic fields, the distribution of the flux of ultra-high energy cosmic rays (UHECRs) over the celestial sphere remains a most promising observable for the identification of their sources. This distribution is remarkably close to being isotropic. Thanks to the large number of events detected over the past years, a large-scale anisotropy at...

95. A Close Correlation between TA Hotspot UHECR Events and Local Filaments of Galaxies and its Implication
Dr Jihyun Kim (UNIST)
The Telescope Array (TA) experiment identified a concentration of ultra-high-energy cosmic ray (UHECR) events on the sky, the so-called hotspot. Besides the hotspot, the arrival directions of TA events show another characteristic feature, i.e., a deficit of events toward the Virgo cluster. In an effort to understand the sky distribution of TA events, we investigated the structures of galaxies...

90. High energy cosmic ray interactions and the UHECR composition problem
Dr Sergey Ostapchenko (Frankfurt Institute for Advanced Studies (FIAS))
I will discuss the differences between contemporary Monte Carlo generators of high energy hadronic interactions and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs). In particular, key directions for model improvements will be outlined. The prospects for a coherent interpretation of the data in terms of the primary composition will be investigated.

172. Measurements and tests of hadronic interactions at ultra-high energies with the Pierre Auger Observatory
Dr Markus Roth (Karlsruhe Institute of Technology, Institut für Kernphysik, Karlsruhe, Germany), Dr Lorenzo Cazon (LIP, Lisbon)
Extensive air showers are complex objects, resulting from billions of particle reactions initiated by a single cosmic ray at ultra-high energy. Their characteristics are sensitive both to the mass of the primary cosmic ray and to the details of hadronic interactions. Many of the interactions that determine the shower features occur in energy and kinematic regions beyond those tested by human-made...

137. Hadronic interactions studied by TA
Takashi Sako (ICRR, University of Tokyo)
The Telescope Array (TA) has been measuring ultra-high energy cosmic rays in the northern hemisphere since 2008. Using hybrid detectors, namely a surface detector array (SD) and fluorescence telescopes (FDs), TA can measure the lateral and longitudinal developments of extensive air showers, respectively, in detail. A recent analysis of SD data reveals an excess of muons at large distances from the shower core...

165. Report on tests and measurements of hadronic interaction properties with air showers
Unambiguously determining the mass composition of ultra-high energy cosmic rays is a key challenge at the frontier of cosmic ray research. The mass composition is inferred from air shower observables using air shower simulations, which rely on hadronic interaction models. Current hadronic interaction models lead to varying interpretations; therefore, tests of hadronic interaction models with...

186. LHC results
David d'Enterria (CERN)

82. Probing the hadronic energy spectrum in proton-air interactions through the fluctuations of the EAS muon content
Felix Riehn (LIP, Lisbon)
The average number of muons in air showers and its connection with the development of air showers has been studied extensively in the past. With the upcoming detector upgrades, UHECR observatories will also be able to probe higher moments of the muon distribution. Here we present a study of the physics of the fluctuations of the muon content. In addition to proving that the fluctuations must...
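The fluctuation study above can be illustrated with a toy Heitler-Matthews-style argument: the muon number scales roughly as N_mu ~ (f E / xi)^beta, so its shower-to-shower spread is dominated by the fraction f of the primary energy that the first interaction keeps in the hadronic channel. A hedged numerical sketch follows; the parameter values and the assumed distribution of f are illustrative choices, not a tuned hadronic model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Heitler-Matthews-style scaling: N_mu ~ (f * E / xi)**beta,
# with beta ~ 0.9. E, xi and the Beta-distributed spread of f are
# assumptions made only for illustration.
E, xi, beta = 1e19, 1e10, 0.9          # primary energy and scale, in eV
f = rng.beta(8.0, 2.0, 100_000)        # assumed hadronic energy fraction
n_mu = (f * E / xi) ** beta

rel_fluct = np.std(n_mu) / np.mean(n_mu)
print(rel_fluct)  # ~0.14 here: driven almost entirely by the spread of f
```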
65. EPOS 3
Tanguy Pierog (KIT, IKP)
With the recent results of large hybrid air shower experiments, it is clear that the simulations of hadronic interactions are not good enough to obtain a consistent description of the observations. Even the most recent models, tuned after the first run of the LHC, show significant discrepancies with air shower data. Since then many more data have been collected at the LHC and lower energies, which are...

133. Recent results from the LHCf experiment
Hiroaki Menjo (ISEE, Nagoya University, Japan)
The LHCf experiment aims at measurements of forward neutral particles at an LHC interaction point to test the hadronic interaction models which are widely used in cosmic-ray air-shower simulations. LHCf operated with proton-proton collisions at a center-of-mass collision energy of 13 TeV in 2015. The LHCf detectors were composed of sampling and imaging calorimeters and they were...

177. Overview of the Auger@TA project and preliminary results from Phase I
Fred Sarazin (Colorado School of Mines), and the Pierre Auger and Telescope Array Collaborations
Auger@TA is a joint experimental program of the Telescope Array experiment (TA) and the Pierre Auger Observatory (Auger), the two leading ultra-high energy cosmic-ray experiments, located respectively in the northern and southern hemispheres. The aim of the program is to achieve a cross-calibration of the Surface Detectors (SD) of the two experiments. The first phase of this joint effort is...

102. Air showers, hadronic models, and muon production
Sergio Sciutto (Departamento de Física, Universidad Nacional de La Plata, Argentina)
We report on a study of the mechanisms of muon production during the development of extended air showers initiated by ultra-high-energy cosmic rays. In particular, we analyze and discuss the observed discrepancies between experimental measurements and simulated data.

164. Atmospheric Muons Measured with IceCube
Dr Dennis Soldin (University of Delaware), for the IceCube Collaboration
IceCube is a cubic-kilometer Cherenkov detector in the deep ice at the geographic South Pole. The dominant event yield is produced by penetrating atmospheric muons with energies above several 100 GeV. Due to its large detector volume, IceCube provides unique opportunities to study atmospheric muons with large statistics in great detail. Measurements of the energy spectrum and the lateral...

99. Results of the first orbital ultra-high-energy cosmic ray detector TUS in view of the future space mission KLYPVE-EUSO
P. Klimov
The observation of ultra-high energy cosmic rays (UHECRs) from Earth orbit relies on the detection of the UV fluorescence tracks of extensive air showers (EAS). This technique is widely used by ground-based detectors. Analogous measurements from space will make it possible to achieve the largest instantaneous aperture, observing the whole sky with nearly homogeneous exposure. It is important for...

86. Results from the first missions of the JEM-EUSO program
Mario Bertaina (University & INFN Torino)
The origin and nature of Ultra-High Energy Cosmic Rays (UHECRs) remain unsolved in contemporary astroparticle physics. Answering these questions is rather challenging because of the extremely low flux of a few per km^2 per century at extreme energies such as E > 5 × 10^19 eV. The objective of the JEM-EUSO program, Extreme Universe Space Observatory, is the realization of a space...
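The "few per km^2 per century" flux quoted above is what drives the push toward space-based instruments. A back-of-the-envelope sketch is given below; the flux value is taken from the abstract, while the areas are round numbers, and solid angle, duty cycle and detection efficiency are all neglected, so these are upper-limit toy rates, not experiment sensitivities.

```python
# Why extreme-energy statistics are so hard: toy event rates above
# ~5e19 eV from an assumed flux of ~2 events / km^2 / century.
flux = 2.0 / 100.0       # events per km^2 per year (assumed)
area_ground = 3000.0     # km^2, order of a large ground array
area_space = 1.5e6       # km^2, order of a wide-field view from orbit

print(flux * area_ground)  # ~60 events/year from the ground
print(flux * area_space)   # ~30,000/year geometrically, before duty
                           # cycle and threshold effects cut this down
```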
109. Leading cluster approach to simulations of hadron collisions with the GHOST generator
Jean-Noel Capdevielle (APC et IRFU CEA-Saclay)
We present the current version of the generator GHOST, which can be used in the simulation of Non-Diffractive (ND), Non-Single-Diffractive (NSD), Single-Diffractive (SD) and Double-Diffractive (DD) events at cosmic ray energies. The generator is based on a four-Gaussian parameterization of the pseudorapidity distribution, which is related to the leading cluster approach in the distribution of secondary...

136. Status and prospects of the TAx4 experiment
Dr Eiji Kido (Institute for Cosmic Ray Research, University of Tokyo)
The TAx4 experiment is a project to observe the highest energy cosmic rays by expanding the detection area of the TA experiment with newly constructed surface detectors (SDs) and fluorescence detectors (FDs). The construction of both SDs and FDs is ongoing. The new SDs are arranged in a square grid with 2.08 km spacing to the north-east and south-east of the TA SD array. The field of view of the new FDs...

188. AugerPrime: the Pierre Auger Observatory upgrade
Antonella Castellina (INFN & INAF-OATo)
The world's largest exposure to ultra high energy cosmic rays, accumulated by the Pierre Auger Observatory, has led to major advances in our understanding of their properties, but the many unknowns about the nature and distribution of the sources, the primary composition and the underlying hadronic interactions prevent the emergence of a uniquely consistent picture. The new perspectives opened by...

96. A next-generation ground array for the detection of ultrahigh-energy cosmic rays: the Fluorescence detector Array of Single-pixel Telescopes (FAST)
Toshihiro Fujii (ICRR, University of Tokyo)
The origin and nature of ultrahigh-energy cosmic rays (UHECRs) is one of the most intriguing mysteries in astroparticle physics. The two largest observatories currently in operation, the Telescope Array Experiment in central Utah, USA, and the Pierre Auger Observatory in western Argentina, have been steadily observing UHECRs in both hemispheres for over a decade. We highlight the latest...

108. Detection of ultra-high energy cosmic ray air showers by the Cosmic Ray Air Fluorescence Fresnel-lens Telescope for the next generation
Dr Yuichiro Tameda (Osaka Electro-Communication University)
Future ultra-high energy cosmic ray (UHECR) observatories will have to be expanded because of the small flux, so cost reduction is a useful strategy for realizing a huge-scale observatory. For this purpose, we are developing a cosmic ray detector with a simple structure, named the Cosmic Ray Air Fluorescence Fresnel-lens Telescope (CRAFFT). We deployed CRAFFT detectors at the Telescope Array site and performed...

192. Precision measurements of cosmic rays up to the highest energies with a large radio array at the Pierre Auger Observatory
Dr Jörg Hörandel (Radboud University Nijmegen)
High-energy cosmic rays impinging on the atmosphere of the Earth induce cascades of secondary particles, the extensive air showers. Many particles in the showers are electrons and positrons. Due to interactions with the magnetic field of the Earth they emit radiation at frequencies of several tens of MHz. In recent years huge progress has been achieved in this field through strong...
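A one-line estimate of why the tens-of-MHz band mentioned above is so useful: at these frequencies the radio wavelength is comparable to the few-metre thickness of the shower front, so the emission from the shower electrons and positrons adds coherently and the radiated power scales roughly with the square of their number. The numbers below are rough, standard radio-detection reasoning rather than any experiment's quoted values:
$$\lambda=\frac{c}{\nu}\approx\frac{3\times10^{8}\,\mathrm{m\,s^{-1}}}{30\text{--}80\times10^{6}\,\mathrm{Hz}}\approx 4\text{--}10\,\mathrm{m},\qquad P_{\mathrm{coh}}\propto N_{e}^{2}.$$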
171. In-ice radio arrays for the detection of ultra-high energy neutrinos
Radio techniques show the most promise for measuring and characterizing the astrophysical neutrino flux above about 10^17 eV. Complementary strategies include observing a target volume from a distance and deploying sensors in the target volume itself. I will focus on the current status of experiments utilizing the latter strategy, in-ice radio arrays. I will give an overview of results from...

84. The GRAND Project
Olivier Martineau (IN2P3)
The Giant Radio Array for Neutrino Detection (GRAND) aims at detecting ultra-high-energy extraterrestrial neutrinos via the extensive air showers induced by the decay of tau leptons created in the interactions of neutrinos under the Earth's surface. Consisting of an array of $\sim200\,000$ radio antennas deployed over $\sim200\,000\,$km$^2$, GRAND plans to reach, for the first time, a...

92. The space road to UHECR observations: challenges and expected rewards
Etienne Parizot (APC - University Paris 7)
Significant progress has been made in the last decade in the field of Ultra-High-Energy Cosmic Rays (UHECRs), thanks to the operation of large ground-based detectors and to the renewed theoretical interest that they triggered. While multi-messenger astronomy is rapidly developing worldwide, the sources of the charged messengers, namely the cosmic rays, are still to be determined, and the...

105. POEMMA: Probe Of Multi-Messenger Astrophysics
Dr John Krizmanic (CRESST/NASA/GSFC/UMBC)
Developed as a NASA Astrophysics Probe mission concept study, the Probe Of Multi-Messenger Astrophysics (POEMMA) has as its science goals to identify the sources of ultra-high energy cosmic rays (UHECRs) and to observe cosmic neutrinos above 10 PeV. POEMMA consists of two satellites flying in loose formation at 525 km altitude. A novel focal plane design is optimized to observe the UV air...

205. Closing and Concluding Remarks

5. Galactic model of ultra-high energy cosmic rays
Sergey Shaulov (P.N. Lebedev Physical Institute)
The hypothesis of the existence of new stable heavy hadrons in cosmic rays is proposed. It follows from a comprehensive study of extensive air showers in the hybrid experiment HADRON, which was carried out at the 685 g/cm^2 level in the Tien Shan mountains. The spectra of high energy hadrons inside the cores of extensive air showers were obtained for the first time by means of the...
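The HADRON observation level quoted above can be translated into an approximate altitude with the standard isothermal-atmosphere relation X(h) = X0 exp(-h/h0). A minimal sketch follows; X0 and h0 are textbook approximations, not the experiment's own calibration.

```python
import numpy as np

# Vertical atmospheric depth in an isothermal atmosphere:
#   X(h) = X0 * exp(-h / h0),
# with X0 ~ 1030 g/cm^2 at sea level and scale height h0 ~ 8.4 km.
X0, h0 = 1030.0, 8.4
h = h0 * np.log(X0 / 685.0)
print(h)  # ~3.4 km, consistent with a Tien Shan mountain station
```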