Integrals of functions of several variables

1 Volumes and the Riemann sums
2 Properties of the Riemann sums
3 The Riemann integral over rectangles
4 The weight as the 3d Riemann sum
5 The weight as the 3d Riemann integral
6 Lengths, areas, volumes, and beyond
7 Outside the sandbox
8 Triple integrals
9 The $n$-dimensional case
10 The center of mass
11 The expected value
12 Gravity

Volumes and the Riemann sums

All functions in this chapter are real-valued. Our understanding of volumes is limited to that of Chapter 13. If $D$ is a region on the plane and it is lifted off the plane to the height $h$, then the cylinder-like solid (a "shell") between these two plane regions is assumed to have the volume: $$V=A\cdot h,$$ where $A$ is the area of $D$. We furthermore represented more complex solids in terms of these shells. We will have to start over though. Example. Let's review the Area Problem: we confirm that the area of a circle of radius $1$ is \( A = \pi \). First we plot the graph of $$y=f(x)=\sqrt{1-x^2},\ -1\le x\le 1,$$ with $21$ points ($20$ intervals). We let the values of $x$ run from $-1$ to $1$ every $.1$ and apply the formula: $$\texttt{=SQRT(1-RC3^2)},$$ to get the values of $y$. We next cover this half-circle with vertical bars based on the interval $[-1,1]$: the bases of the bars are our intervals in the $x$-axis and the heights are values of $y=f(x)$. To see the bars, we simply change the type of the chart plotted by the spreadsheet: Then the area of the half-circle is approximated by the sum of the areas of the bars: we multiply the widths of the bars by the heights, place the result in the last column, and finally add all entries in this column. The result $1.552$ is close to $\pi/2\approx 1.571$. We proceed to the Volume Problem. We will confirm that the volume of a sphere of radius $1$ is \( V = \frac{4}{3}\pi \). First we plot the graph of $$z=f(x,y)=\sqrt{1-x^2-y^2},\ -1\le x\le 1,\ -1\le y\le 1.$$ We recycle our spreadsheet for the sphere.
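The half-circle bar sum above can also be replicated in a few lines of code; a minimal sketch (not the author's spreadsheet; we sample the heights at the left end-points, which reproduces the same total):

```python
import math

# Bars over [-1, 1]: 20 intervals of width .1, heights sampled at the
# left end-points, mimicking the spreadsheet's column of values.
dx = 0.1
area = 0.0
for i in range(20):
    x = -1 + i * dx                    # left end-point of the i-th interval
    area += math.sqrt(1 - x * x) * dx  # height of the bar times its width

print(round(area, 3))  # close to pi/2, the area of the half-circle
```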
We already have $20$ intervals for $x$ in the first column. Now, just as before, we construct $20$ intervals for $y$ in the first row. We let the values of $x$ and $y$ run from $-1$ to $1$ every $.1$ and apply the formula: $$\texttt{=SQRT(1-RC3^2-R4C^2)},$$ to get the values of $z$. We next fill this half-sphere with vertical bars based on the square $[-1,1]\times [-1,1]$: the bases of the bars (pillars) are our little squares in the $xy$-plane and the heights are values of $z$. To see the bars, we simply change the type of the chart plotted by the spreadsheet: We can see each row of bars as an approximation of the area of a slice of the sphere, which is another circle... The volume of the half-sphere is now approximated by the sum of the volumes of the bars. Each of these volumes is the product of the height of the bar (the value of the function in that square, with the values outside the domain replaced with $0$s) and the area of the base, which is equal to $.01$. Their sum is simply the sum of these heights multiplied by $.01$. The result produced by the spreadsheet is the following: $$\text{Approximate volume of the hemisphere}= 2.081.$$ It is close to the theoretical result: $$\text{Exact volume of the hemisphere}= 2\pi/3\approx 2.094.$$ $\square$ Exercise. Approximate the volume of the sphere of radius $1$ within $.0001$. We have shown that indeed the volume of a sphere of radius $1$ is close to $V = \frac{4}{3}\pi$. But the real question is: What is the volume? One thing we do know. The volume of a box $a \times b \times c$ is $abc$. With that we can compute the volumes of various geometric figures with straight edges, but what are the volumes of curved objects? The idea, once again, comes from the ancient Greeks' approach to understanding and computing areas and volumes.
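The hemisphere computation can be replicated the same way; a minimal sketch (again, corner sampling is our choice, not the author's spreadsheet):

```python
import math

# 20 x 20 squares of area .01 covering [-1,1] x [-1,1]; the height of
# each pillar is sampled at a corner of its square, with the heights
# outside the domain of the function replaced by 0.
h = 0.1
vol = 0.0
for i in range(20):
    for j in range(20):
        x = -1 + i * h  # a corner of the ij-th square
        y = -1 + j * h
        vol += math.sqrt(max(0.0, 1 - x * x - y * y)) * h * h

# vol approximates the exact volume 2*pi/3 of the half-sphere
```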
The ancient Greeks approximated the circle with regular polygons and the sphere with regular polyhedra: The setup for the Riemann sums for functions of two variables is very similar to the one for numerical functions but by far more cumbersome. Let's consider a rectangle $R=[a,b]\times [c,d],\ a < b,\ c<d$. Suppose also that we have two integers $n,m \ge 1$. First, we have a partition of $[a,b]$ into $n$ intervals of possibly different lengths: $$ [x_{0},x_{1}],\ [x_{1},x_{2}],\ ... ,\ [x_{n-1},x_{n}],$$ with $x_0=a,\ x_n=b$. The increments of $x$ are: $$\Delta x_i = x_i-x_{i-1},\ i=1,2,...,n.$$ Second, we have a partition of $[c,d]$ into $m$ intervals of possibly different lengths: $$ [y_{0},y_{1}],\ [y_{1},y_{2}],\ ... ,\ [y_{m-1},y_{m}],$$ with $y_0=c,\ y_m=d$. The increments of $y$ are: $$\Delta y_j =y_j-y_{j-1},\ j=1,2,...,m.$$ Altogether, we have a partition $P$ of the rectangle $[a,b]\times [c,d]$ into smaller rectangles $$R_{ij}= [x_{i-1},x_{i}]\times [y_{j-1},y_{j}].$$ These are $2$-cells! The points $$(x_i,y_{j}),\ i=0,1,2,...,n,\ j=0,1,2,...,m,$$ will be called the (primary) nodes of the partition. We are also given the tertiary nodes of $P$: for each pair $i=1,2,...,n$ and $j=1,2,...,m$, a point $U_{ij}$ in the rectangle $R_{ij}=[x_{i-1},x_{i}]\times [y_{j-1},y_{j}]$. Such a combination of rectangles and tertiary nodes, one in each rectangle, will be called an augmented partition $P$ of $R$. We won't need secondary nodes in this chapter. In the example above, the right upper corners were chosen. Before we address how to compute volumes, let's consider a simpler problem. Suppose a function $z = f(X)=f(x,y)$ is defined at the tertiary nodes of the partition of the rectangle $R$ and gives us the amount of some material contained in the corresponding cell. Then the total amount of the material in the whole rectangle is simply the sum of the values of $f$. Definition.
The sum of a function $z=f(x,y)$ defined at the tertiary nodes of an augmented partition $P$ of a rectangle $R=[a,b]\times [c,d]$ is defined to be: $$\sum_R f =\sum_{i=1}^n\sum_{j=1}^m f(U_{ij}).$$ Note that when tertiary nodes aren't provided, we can think of the $2$-cells themselves as the inputs of the function: $U_{ij}=R_{ij}= [x_{i-1},x_{i}]\times [y_{j-1},y_{j}]$. This makes $f$ a $2$-form. The area of each $2$-cell is: $$\Delta A_{ij} = \Delta x_i \cdot \Delta y_j.$$ In other words, the product of the increments of $x$ and $y$ is the increment of the area. Suppose next we have a function $z = f(X)=f(x,y)$ that is defined at the tertiary nodes of the partition of the rectangle $R$ and gives us the height of a bar on top of the corresponding cell: $$\text{the volume of the }ij\text{ bar} = \underbrace{f(U_{ij})}_{\text{height of bar}} \cdot \overbrace{\Delta x_i}^{\text{depth of base}}\cdot \overbrace{\Delta y_j}^{\text{width of base}}.$$ We then add all of these together in order to compute the volume of the solid under the graph of $z=f(x,y)$ over rectangle $R$. Example.
We consider the particular case when the tertiary nodes of $P$ come from the secondary nodes of the augmented partitions of the intervals $[a,b]$ and $[c,d]$: $$U_{ij}=(s_i,t_j).$$ The computation is illustrated below: $$\begin{array}{r|lcccccccr|} &y_0&\Delta y_1&y_1&\Delta y_2&y_2&...&y_{m-1}&\Delta y_m&y_m\\ \hline x_0&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ \Delta x_1&|&f(s_1,t_1)\Delta x_1\Delta y_1&|&f(s_1,t_2)\Delta x_1\Delta y_2&|&...&|&f(s_1,t_m)\Delta x_1\Delta y_m&|\\ x_1&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ \Delta x_2&|&f(s_2,t_1)\Delta x_2\Delta y_1&|&f(s_2,t_2)\Delta x_2\Delta y_2&|&...&|&f(s_2,t_m)\Delta x_2\Delta y_m&|\\ x_2&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ ..&.&..&.&..&.&...&.&..&.\\ x_{n-1}&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ \Delta x_n&|&f(s_n,t_1)\Delta x_n\Delta y_1&|&f(s_n,t_2)\Delta x_n\Delta y_2&|&...&|&f(s_n,t_m)\Delta x_n\Delta y_m&|\\ x_n&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ \hline \end{array}$$ $\square$ Definition. The Riemann sum of a function $z=f(x,y)$ defined at the tertiary nodes of an augmented partition $P$ of a rectangle $R=[a,b]\times [c,d]$ is defined, and denoted, as follows: $$\sum_R f(U_{ij}) \, \Delta x_i\Delta y_j =\sum_{i=1}^n\sum_{j=1}^m f(U_{ij})\Delta x_i\Delta y_j. $$ The Riemann sum of a sampled function of two variables is shown below: The abbreviated notation for the Riemann sum is: $$\sum_R f \, \Delta x\Delta y . $$ Example.
Let's consider a very simple example: $$f(x,y)=x+y, \ R=[0,1]\times [0,1].$$ We choose $$n=2,\ m=2.$$ Then the end-points of the intervals are: $$x_0=0,\ x_1=.5,\ x_2=1 \text{ and }y_0=0,\ y_1=.5,\ y_2=1 .$$ The nodes are $$\begin{array}{ccccccc} (0,1)&-&(.5,1)&-&(1,1)\\ |&&|&&|\\ (0,.5)&-&(.5,.5)&-&(1,.5)\\ |&&|&&|\\ (0,0)&-&(.5,0)&-&(1,0)\\ \end{array}$$ They are the corners of the rectangles of the partition: $$\begin{array}{ccccccc} [0,.5]\times [.5,1]& [.5,1]\times [.5,1]\\ [0,.5]\times [0,.5]& [.5,1]\times [0,.5] \end{array}$$ Now we choose the tertiary nodes. Specifically, let's choose the bottom left corners: $$\begin{array}{|l|l|} \hline (0,.5)& (.5,.5)\\ \hline (0,0)& (.5,0)\\ \hline \end{array} \leadsto \begin{array}{|l|l|} \hline f(0,.5)=.5& f(.5,.5)=1\\ \hline f(0,0)=0& f(.5,0)=.5\\ \hline \end{array}\leadsto \begin{array}{|l|l|} \hline .5\cdot .5^2=.125& 1\cdot .5^2=.25\\ \hline 0\cdot .5^2=0& .5\cdot .5^2=.125\\ \hline \end{array}$$ The values of the function and then the volumes of the bars are shown on the right. Then the sum of those is: $$\sum_R f \, \Delta x\Delta y =.5.$$ Example. Riemann sums of the paraboloid of revolution: Just as in the one-dimensional case, we are allowed to have negative values of $f$, with possibly negative volumes of the bars. These are the signed distance and the signed volume respectively. We speak then of the volume of the solid between the graph of $z=f(x,y)$ and the rectangle $R$ in the $xy$-plane. Furthermore, we can have negative lengths for the independent variables too. Suppose $a<b$ and $c<d$; then the rectangles $[a,b]\times [c,d]$ and $[b,a]\times [d,c]$ are positively oriented and have positive areas, while the rectangles $[a,b]\times [d,c]$ and $[b,a]\times [c,d]$ are negatively oriented and have negative areas. Once again, the Riemann sum over an oriented rectangle is referred to as the signed volume. The algebraic properties of sums and Riemann sums are similar to the ones in Chapter 11...
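The simple example above can be verified in code; a minimal sketch (the variable names are ours):

```python
# f(x,y) = x + y on R = [0,1] x [0,1] with n = m = 2 and the bottom
# left corner of each rectangle as its tertiary node.
f = lambda x, y: x + y
dx = dy = 0.5
riemann_sum = sum(f(i * dx, j * dy) * dx * dy
                  for i in range(2) for j in range(2))
print(riemann_sum)  # 0.5, as computed above
```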
Theorem (Constant Function Rule). Suppose $f$ is constant on a rectangle $R=[a,b]\times [c,d]$, i.e., $f(x,y) = m$ for all $(x,y)$ in $R$ and some real number $m$. Then $$\sum_R f \, \Delta x\Delta y = m ( b-a )(d-c). $$ Indeed, the Riemann sum represents the volume of the box with width $b-a$, depth $d-c$, and height $m$.

Properties of the Riemann sums

In spite of their somewhat complex definition, the Riemann sums are finite! There are no issues of infinity or divergence to worry about. They are, therefore, subject to the usual rules of algebra: their terms can be multiplied, factored, re-arranged and added in a different order, etc. If we proceed to the adjacent rectangle, we can just continue to add terms of the Riemann sum: Theorem (Additivity Rule). Suppose $z=f(x,y)$ is a function. Suppose $a,b,c,d,q$ are any numbers and suppose we have partitions of the intervals $[a,q]$, $[q,b]$, $[c,d]$ and augmented partitions of the rectangles $[a,q]\times [c,d]$ and $[q,b]\times [c,d]$. Then we have: $$\sum_{[a,q]\times [c,d]} f \, \Delta x\Delta y +\sum_{[q,b]\times [c,d]} f \, \Delta x\Delta y = \sum_{[a,b]\times [c,d]} f \, \Delta x\Delta y,$$ with summations over these partitions. If two functions are comparable then so are the terms of their Riemann sums: Theorem (Comparison Rule). Suppose $f$ and $g$ are functions defined on a rectangle $R$. Then we have: $$(1)\ f(x,y)\geq g(x,y) \text{ on } R \ \Longrightarrow\ \sum_R f\, \Delta x\Delta y \geq \sum_R g\, \Delta x\Delta y;$$ $$(2)\ f(x,y)< g(x,y) \text{ on } R \ \Longrightarrow\ \sum_R f\, \Delta x\Delta y < \sum_R g\, \Delta x\Delta y.$$ Theorem (Estimate Rule). Suppose $z=f(x,y)$ is a function. For any $a,b$ with $a<b$ and any $c,d$ with $c<d$, if $$m \leq f(x,y)\leq M,$$ for all $x$ with $a\le x \le b$ and all $y$ with $c\le y\le d$, then $$m(b-a)(d-c)\leq \sum_{[a,b]\times [c,d]} f\, \Delta x\Delta y \leq M(b-a)(d-c).$$ The picture below illustrates the idea of multiplication of the function vs.
multiplication of the volume under its graph: Theorem (Constant Multiple Rule). Suppose $f$ is a function. For any rectangle $R$ and any real $c$, we have: $$ \sum_R (c\cdot f) \, \Delta x\Delta y = c \cdot \sum_R f\, \Delta x\Delta y.$$ The picture below illustrates the idea of adding functions vs. adding the volumes under their graphs: Theorem (Sum Rule). Suppose $f$ and $g$ are functions. For any rectangle $R$, we have: $$\sum_R \left( f + g \right) \, \Delta x\Delta y = \sum_R f\, \Delta x\Delta y + \sum_R g\, \Delta x\Delta y. $$ Proof. This is a simple algebraic manipulation of the corresponding Riemann sums: $$\begin{array}{lll} \sum_R(f+g) \, \Delta x\Delta y &=\sum_{ij} \left(f(s_i,t_j)+g(s_i,t_j)\right)\Delta A_{ij}\\ &=\sum_{ij} \left(f(s_i,t_j)\Delta A_{ij}+g(s_i,t_j)\Delta A_{ij}\right)\\ &=\sum_{ij}f(s_i,t_j)\Delta A_{ij}+\sum_{ij}g(s_i,t_j)\Delta A_{ij}\\ &=\sum_Rf\, \Delta x\Delta y +\sum_Rg\, \Delta x\Delta y. \end{array}$$ $\blacksquare$ Exercise. Prove the rest of these theorems. The summation of the Riemann sum can be carried out in two main ways. We can add the rows first and then the resulting column, or vice versa. A more profound way is to recognize the presence of functions of a single variable in this function of two variables, as well as the Riemann sums of these functions in its Riemann sum. Fixing any value of $x$ creates a new single variable function of $y$ from $z=f(x,y)$: $$f_x(y)=f(x,y).$$ We pick one of the sample points $x=s_i$ of the partition of $[a,b]$ and form the Riemann sum of this function (with respect to $y$ over $[c,d]$): $$\sum_c^d f_{s_i}\, \Delta y =\sum_{j=1}^m f(s_i,t_j)\Delta y_j.$$ For the spreadsheet, the Riemann sum is computed and placed in the cell at the end of each row.
As a result, the highlighted row of bars in the Riemann sum of the function of two variables, $x$ and $y$, reappears as the Riemann sum of a function of a single variable, $y$: When this is done for each $s_i$, the result is a new function $g$ defined at the sample points of the partition of $[a,b]$: $$g(s_i)=\sum_c^d f_{s_i}\, \Delta y.$$ We now compute its Riemann sum (with respect to $x$ over $[a,b]$): $$\sum_R f\, \Delta x\Delta y =\sum_{i=1}^ng(s_i)\Delta x_i,$$ producing the Riemann sum of the original function $f$. We show this computation for a spreadsheet below: For the spreadsheet, the Riemann sums placed at the end of each row, together, produce a column of values, i.e., a function of a single variable, $x$. Its Riemann sum is now computed and placed at its bottom. The result is the same (of course!) number as the original, "double" Riemann sum. Alternatively, for each column, the Riemann sum is computed and placed at the cell at its bottom. Together, the numbers give us a row of values. Its Riemann sum is now computed and placed at its end. The results are the same. Exercise. Carry out the missing part of the above analysis starting with: "Fixing any value of $y$ creates a new single variable function of $x$ from $z=f(x,y)$...". Our analysis amounts to the following formula. Theorem (Riemann sum of Riemann sums). $$\begin{array}{lll} \sum_R f \, \Delta x\Delta y &=\sum_{i=1}^n\left(\sum_{j=1}^m f(s_i,t_j)\Delta y_j\right)\Delta x_i\\ &=\sum_{j=1}^m\left(\sum_{i=1}^n f(s_i,t_j)\Delta x_i\right)\Delta y_j. \end{array}$$ In other words, the order of summation doesn't matter.

The Riemann integral over rectangles

The familiar integral of a numerical function is the (algebraic) area under the graph and now this is the volume under the graph of a function of two variables.
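The order-of-summation theorem above can be checked numerically; a minimal sketch (the function, the non-uniform grid, and the choice of right upper corners as sample points are ours):

```python
# f(x,y) = x*y over [0,1] x [0,2] on a non-uniform grid, sampled at the
# right upper corners; rows-first and columns-first sums must agree.
xs = [0.0, 0.3, 0.7, 1.0]  # partition of [0, 1]
ys = [0.0, 0.5, 1.1, 2.0]  # partition of [0, 2]
f = lambda x, y: x * y

rows_first = sum(sum(f(xs[i], ys[j]) * (ys[j] - ys[j - 1])
                     for j in range(1, len(ys))) * (xs[i] - xs[i - 1])
                 for i in range(1, len(xs)))
cols_first = sum(sum(f(xs[i], ys[j]) * (xs[i] - xs[i - 1])
                     for i in range(1, len(xs))) * (ys[j] - ys[j - 1])
                 for j in range(1, len(ys)))
# the two results coincide up to round-off
```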
It is as if we used to look at a certain sandbox from the side and tried to find the area of the visible sand as a way to determine the amount of sand in it, and now we stand up and see that this was just a cross-section and we need to start to think three-dimensionally... Recall that the function is defined on the rectangle partitioned into smaller rectangles and then it is sampled -- in a possibly non-uniform manner -- in each. The locations are called the tertiary nodes. The result is a two-dimensional array of numbers: $$\text{heights: }\begin{array}{|cccccc} \hline \ f(s_1,t_1)&f(s_1,t_2)&...\\ \ f(s_2,t_1)&f(s_2,t_2)&...\\ ...&...&... \end{array}\quad\leadsto\quad \text{volumes: } \begin{array}{|cccccc} \hline \ f(s_1,t_1)\Delta A_{11}&f(s_1,t_2)\Delta A_{12}&...\\ \ f(s_2,t_1)\Delta A_{21}&f(s_2,t_2)\Delta A_{22}&...\\ ...&...&... \end{array}$$ The sum of all the members of this array is the Riemann sum of the function. Now, the Riemann sums are just approximations of the volume and, in order to improve them, we have to refine the partition. And we keep refining, so that we have simultaneously: $$n,m \to \infty \text{ and } \Delta x_i, \Delta y_j\to 0.$$ To make this idea specific, we define the mesh of a partition $P$ as: $$|P|=\max_{i,j} \, \{ \Delta x_i,\ \Delta y_j\}.$$ It is a measure of "refinement" of $P$. Definition. The Riemann integral of a function $z=f(x,y)$ over rectangle $R=[a,b]\times [c,d]$ is defined to be the limit of a sequence of its Riemann sums with the mesh of their augmented partitions $P_k$ approaching $0$ as $k\to \infty$. When all these limits exist and are all equal to each other, $f$ is called an integrable function over $R$ and the result is denoted by: $$ \iint_R f(x,y)\, dxdy =\lim_{k \to \infty} \sum_R f_k \, \Delta x\Delta y,$$ where $f_k$ is $f$ sampled at the tertiary nodes of the partition $P_k$. It is also called the definite integral, the double integral, or simply integral, of $f$ over $R$, which is called the domain of integration.
When all these limits are equal to $+\infty$ (or $-\infty$), we say that the integral is infinite and write: $$ \iint_R f(x,y)\, dxdy =+\infty\ (\text{or }-\infty).$$ An abbreviated notation is: $$\iint_R f\, dA ,$$ where "$A$" stands for "area". We will also refer to the integral as the (signed) volume under the graph of $f$ or the (signed) volume between the graph and the $xy$-plane. The custom of placing the domain of integration under the integral sign is also applied to the numerical functions, as follows: $$\int_a^bf\, dx=\int_{[a,b]}f\, dx.$$ Instead of "from $a$ to $b$" we have now an integral "over $[a,b]$". The advantage of the old notation is that it specifies the orientation of the segment $[a,b]$. From now on, all orientations are positive by default. In general, we follow the rule: $$\text{orientation of }[a,b] = \operatorname{sign}(b-a),$$ and $$\text{orientation of }[a,b]\times [c,d]\ = \operatorname{sign}((b-a)(d-c)).$$ Let's verify the definition for a simple function. Theorem (Constant Integral Rule). Suppose $z=f(x,y)$ is constant on rectangle $R=[a,b]\times [c,d]$, i.e., $f(x,y) = m$ for all $(x,y)$ in $R$ and some real number $m$. Then $f$ is integrable on $R$ and $$\iint_R f\, dA = m (b-a)(d-c).$$ The following result proves that our definition makes sense for a large class of functions. Theorem. All continuous functions on rectangle $R$ are integrable on $R$. Proof. $\blacksquare$ The converse isn't true: for example, the sign function $\operatorname{sign}(xy)$ is discontinuous but integrable over any rectangle. We once again utilize the idea of signed distance, signed area, and signed volume. Theorem (Orientation).
The Riemann integral of a function $z=f(x,y)$ over rectangle $[b,a]\times [c,d]$ is equal to the negative of the integral over $[a,b]\times [c,d]$: $$\iint_{[b,a]\times [c,d]} f\, dA =-\iint_{[a,b]\times [c,d]} f\, dA ;$$ and, similarly, $$\iint_{[a,b]\times [d,c]} f\, dA =-\iint_{[a,b]\times [c,d]} f\, dA ;$$ but $$\iint_{[b,a]\times [d,c]} f\, dA =\iint_{[a,b]\times [c,d]} f\, dA .$$ It all depends on whether the rectangle is positively or negatively oriented! Now, the properties of the Riemann integral. They mimic the ones for numerical functions but follow from the corresponding properties of the Riemann sums (and the analogous rules of limits). The additivity property for integrals -- combining the domains of integration -- of numerical functions, $$\int_a^q f(x)\, dx +\int_q^b f(x)\, dx = \int_a^b f(x)\, dx,$$ has a match. The interpretation is the same: the amount of a quantity contained in a region formed by two regions with a negligible overlap is equal to the sum of the quantities in the two. We just look at the region from above, which reveals the third dimension. So, we switch from areas to volumes: It's as if we put a divider in our sandbox... The result follows from the Additivity Rule for Riemann sums. Theorem (Additivity). Suppose a function $z=f(x,y)$ is integrable over the rectangles $$R=[a,q]\times [c,d] \text{ and }S=[q,b]\times [c,d].$$ Then $f$ is integrable over the rectangle $R\cup S=[a,b]\times [c,d]$ and we have: $$\iint_R f\, dA +\iint_S f\, dA = \iint_{R\cup S} f\, dA .$$ Thus, adding the domains of integration adds the integrals too. Theorem. If $z=f(x,y)$ is integrable over $R$ then it is also integrable over any rectangle $R'$ in $R$. The following is another important corollary. Theorem. All piece-wise continuous functions are integrable. Proof. It follows from the integrability of continuous functions and the Additivity Rule. $\blacksquare$ In particular, all step-functions are integrable...
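The definition of the integral as a limit of Riemann sums can be watched in action; a minimal sketch (the function, the uniform partitions, and the bottom left corners are our choices; the exact integral of $x+y$ over the unit square is $1$):

```python
# Riemann sums of f(x,y) = x + y over [0,1] x [0,1] with a uniform
# n x n partition and bottom left corners; the limit is the integral, 1.
f = lambda x, y: x + y

def riemann_sum(n):
    h = 1.0 / n
    return sum(f(i * h, j * h) * h * h
               for i in range(n) for j in range(n))

for n in (2, 10, 100, 1000):
    print(n, riemann_sum(n))  # the sums approach 1 as the mesh 1/n -> 0
```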
Just as before, the larger function contains a larger volume under its graph: Theorem (Comparison Rule). If $$f(x,y)\geq g(x,y) \text{ for all }(x,y)\text{ in rectangle } R,$$ then $$\iint_R f\, dA \geq \iint_R g\, dA ,$$ provided $z=f(x,y)$ and $z=g(x,y)$ are integrable functions over $R$. Otherwise we have: $$\begin{array}{lll} \iint_R f\, dA =-\infty&\Longrightarrow& \iint_R g\, dA =-\infty;\\ \iint_R f\, dA =+\infty&\Longleftarrow& \iint_R g\, dA =+\infty. \end{array}$$ If we know only estimates of the function, we have an estimate -- below and above -- for the volume under its graph. For the general case, we have the following. Theorem (Estimate Rule). Suppose $z=f(x,y)$ is an integrable function over $R=[a,b]\times [c,d]$. Then, if $a<b$ and $c<d$ and $$m \leq f(x,y)\leq M,$$ for all $(x,y)$ in $R$, we have $$m (b-a)(d-c)\leq \iint_R f\, dA \leq M (b-a)(d-c).$$ Exercise. What if the orientation of the rectangle is negative? Finally, these are the algebraic properties. First, the picture below illustrates the idea that tripling the height of a surface requires tripling the amount of soil under it: Theorem (Constant Multiple Rule). Suppose $z=f(x,y)$ is an integrable function over a rectangle $R$. Then so is $c\cdot f$ for any real $c$ and we have: $$ \iint_R(c\cdot f)\, dA = c \cdot \iint_R f\, dA.$$ Finally, the picture below illustrates what happens when the bottom drops out of a bucket of sand and the sand falls on a curved surface: Theorem (Sum Rule). Suppose $z=f(x,y)$ and $z=g(x,y)$ are integrable functions over a rectangle $R$. Then so is $f+g$ and we have: $$\iint_R \left( f + g \right)\, dA = \iint_R f\, dA + \iint_R g\, dA.$$ Proof. We take the limit, as $|P|\to 0$, of the Sum Rule for Riemann sums: $$\begin{array}{cccc} \sum_R(f+g) \, \Delta x\Delta y &=&\sum_Rf\, \Delta x\Delta y &+&\sum_Rg\, \Delta x\Delta y \\ \downarrow&&\downarrow&&\downarrow\\ \iint_R (f + g)\, dA&&\iint_R f\, dA&&\iint_R g\, dA.
\end{array}$$ The transition is justified by the Sum Rule for Limits. $\blacksquare$ The actual computations of integrals require single integrals. The method is derived from the Riemann sum of Riemann sums formula in the last section. We recognize the presence of functions of a single variable in this function of two variables, as well as the Riemann sums and the integrals of these functions in its Riemann sum and its integral. Fixing any value of $x$ creates a new single variable function of $y$ from $z=f(x,y)$: $$f_x(y)=f(x,y).$$ We pick one and form the Riemann integral of this function: $$\int_c^df_{x}\, dy=\int_{y=c}^{y=d} f(x,y)\, dy.$$ This integral is with respect to $y$ with the domain of integration $[c,d]$. It represents the area of the cross-section of our solid! When this is done for each $x$, the result is a new function $g$ defined on the interval $[a,b]$: $$g(x)=\int_c^d f_{x}\, dy.$$ We now compute its Riemann integral: $$\iint_R f\, dxdy=\int_{a}^b g\, dx.$$ This integral is with respect to $x$ with the domain of integration $[a,b]$. It produces the Riemann integral of the original function $z=f(x,y)$. Alternatively, for each $y$, the Riemann integral with respect to $x$ is computed and, together, these numbers form a function of $y$. Its Riemann integral is now computed. The results are the same. Our analysis amounts to the following formula, similar to the formula for the second, mixed partial derivative. Theorem (Fubini's Theorem: integral of integral). If a function $z=f(x,y)$ is integrable over a rectangle $R=[a,b]\times [c,d]$, then it is also integrable with respect to $x$ over the interval $[a,b]$ and with respect to $y$ over the interval $[c,d]$; moreover, we have: $$\begin{array}{lll} \iint_R f\, dA&=\int_{x=a}^{x=b}\left(\int_{y=c}^{y=d} f(x,y)\, dy \right)\, dx\\ &=\int_{y=c}^{y=d}\left(\int_{x=a}^{x=b} f(x,y)\, dx \right)\, dy. \end{array}$$ The right-hand sides are called iterated integrals.
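Iterated integrals can be approximated by iterating one-dimensional Riemann sums; a minimal sketch (the function $f(x,y)=x+y$ over $[0,1]\times[0,2]$, the left end-point sums, and the helper `integrate` are our choices, not the author's; the exact value is $3$):

```python
# Left end-point Riemann sums iterated in both orders for
# f(x,y) = x + y over [0,1] x [0,2]; the exact integral is 3.
f = lambda x, y: x + y

def integrate(g, a, b, n=500):
    """Left end-point Riemann sum of g over [a, b] with n intervals."""
    h = (b - a) / n
    return sum(g(a + i * h) for i in range(n)) * h

dy_first = integrate(lambda x: integrate(lambda y: f(x, y), 0, 2), 0, 1)
dx_first = integrate(lambda y: integrate(lambda x: f(x, y), 0, 1), 0, 2)
# both approximate the exact value 3, in agreement with the theorem
```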
These iterated integrals supply us, for the first time, with a method for computing double integrals! The method relies on an approach similar to partial differentiation: we initially treat $y$ as the only variable while $x$ is treated as a constant. This coordinate-by-coordinate integration may be called "partial integration". Example. Fubini's Theorem allows us to use the Fundamental Theorem of Calculus: $$\begin{array}{lll} \iint_{[0,1]\times [0,2]} xy\, dxdy&=\int_{x=0}^{x=1}\left(\int_{y=0}^{y=2} xy\, dy \right)\, dx&\text{use the CMR...}\\ &=\int_{x=0}^{x=1}x\left(\int_{y=0}^{y=2} y\, dy \right)\, dx&\text{use the FTC...}\\ &=\int_{x=0}^{x=1}x\left(\frac{y^2}{2}\bigg|_{y=0}^{y=2} \right)\, dx\\ &=\int_{x=0}^{x=1}x\left(2 \right)\, dx&y\text{ is gone!}\\ &=2\int_{x=0}^{x=1}x\, dx&\text{use the FTC...}\\ &=2\frac{x^2}{2}\bigg|_{x=0}^{x=1}\\ &=1. \end{array}$$ $\square$ Using "$x=,\ y=$" is optional and so are the parentheses: $$\iint_{[0,1]\times [0,2]} xy\, dxdy=\int_{0}^{1}\int_{0}^{2} xy\, dydx.$$ Non-rectangular domains of integration, such as the disk, are still to come.

The weight as the 3d Riemann sum

We can find volumes of various objects and, therefore, their weights -- as long as the density is constant! What if the density is variable? We don't have a solution. Even simpler, what is the mass of a rectangular container (a box) filled with a gas of variable density? Recall the two main metaphors (in addition to that of motion) we have used for integrals: $$\begin{array}{lllll} \text{variables}&\text{domain}&\text{metaphor #1}&\text{metaphor #2}\\ \hline 1&\text{interval }[a,b]&\text{area under the graph in }{\bf R}^2&\text{linear density}\\ 2&\text{rectangle }[a,b]\times [c,d]&\text{volume under the graph in }{\bf R}^3&\text{planar density}\\ 3&\text{box }[a,b]\times [c,d]\times [p,q]&\text{volume? under? the graph in }{\bf R}^4?&\text{density} \end{array}$$ It is hard to provide a similar interpretation for $3$ variables.
As we have run out of dimensions to visualize these functions, we choose the second metaphor: a function of three variables $u=f(x,y,z)$ is understood as the density of the medium at location $(x,y,z)$. The level surfaces below show where the density is the same. In addition to the common meaning of density, i.e., the distribution of weight, we can speak of the density of a particular material within the medium, the density of population, the temperature (the density of heat), etc. Example. Spreadsheets are tables of numbers and may only be good for representing functions of up to two variables, $x$ and $y$. To add the third, $z$, we use "sheets", one for each value of $z$: We enter the value of $z$ at the corner of our table of values of $f$. The formula in each cell of this table refers -- just as before -- to the values of $x$ (the first column) and $y$ (the top row) as well as this single entry for $z$: $$\texttt{=SIN(RC3^2+R4C^2+R6C5^2)^2}.$$ A few more sheets, or layers, are created with different entries for $z=1,2,3,4,5$. The result is a box, or a three-dimensional array, of numbers. The graphs shown are those of the functions of two variables: $u=f(x,y,1),u=f(x,y,2)$, etc. The graph of each of these functions can again be seen to represent a terrain. Now, what if we think of $z$ as time? Then we face a terrain changing in time! Such a process could be a wave (the sea, a drum, etc.). $\square$ The function is represented by a list of rectangular arrays. Alternatively, we see it as a rectangular array of lists... The setup for the Riemann sums for functions of three variables is very similar to the one for two. Suppose we have a function $u = f(x,y,z)$ defined on a box $$B=[a,b]\times [c,d]\times [p,q],\ a < b,\ c<d,\ p<q.$$ Suppose also that we have three integers $n,m,r \ge 1$. We have three partitions of these intervals: $$\begin{array}{lllll} \text{axis}&\text{segment}&\text{end-points}&\text{lengths}\\ \hline x&[a,b]& a=x_{0}<x_{1}<x_{2}< ... 
< x_{n-1}<x_n=b&\Delta x_i =x_i-x_{i-1}\\ y&[c,d]& c=y_{0}<y_{1}<y_{2}< ... < y_{m-1}<y_m=d&\Delta y_j =y_j-y_{j-1}\\ z&[p,q]& p=z_{0}<z_{1}<z_{2}< ... < z_{r-1}<z_r=q&\Delta z_k =z_k-z_{k-1} \end{array}$$ Altogether, we have a partition $P$ of the box $B$ into smaller boxes or compartments: $$B_{ijk}= [x_{i},x_{i+1}]\times [y_{j},y_{j+1}]\times [z_{k},z_{k+1}].$$ We know these boxes as $3$-cells. The whole partition may look like this wire-frame (it is non-uniform in general): The volume of each $3$-cell is: $$\Delta V_{ijk} = \Delta x_i \cdot \Delta y_j\cdot \Delta z_k.$$ In other words, the product of the increments of $x$, $y$, and $z$ is the increment of the volume. The points $$X_{ijk}=(x_i,y_{j},z_k),\ i=0,1,2,...,n,\ j=0,1,2,...,m,\ k=0,1,2,...,r,$$ are the primary nodes, or the nodes of degree $0$. What about the secondary, etc. nodes? We have a more uniform look at the nodes consistent with our view on cells. These are the $3$-cells of our partition as well as the lower-dimensional cells that make up the boundaries of these cells: $$\begin{array}{lll} \text{ cells }&\text{ terms }&\text{ product representation }&\text{ nodes }&\text{ }\\ \hline 3\text{-cells}&\text{ boxes }&[x_{i},x_{i+1}]\times [y_{j},y_{j+1}]\times [z_{k},z_{k+1}]&\\ 2\text{-cells}&\text{ faces }&[x_{i},x_{i+1}]\times [y_{j},y_{j+1}]\times \{z_{k}\}&\text{ tertiary nodes}\\ &&[x_{i},x_{i+1}]\times \{y_{j}\}\times [z_{k},z_{k+1}]\\ &&\{x_{i}\}\times [y_{j},y_{j+1}]\times [z_{k},z_{k+1}]\\ 1\text{-cells}&\text{ edges }&[x_{i},x_{i+1}]\times \{y_{j}\}\times \{z_{k}\}&\text{ secondary nodes}\\ &&\{x_{i}\}\times [y_{j},y_{j+1}]\times \{z_{k}\}\\ &&\{x_{i}\}\times \{y_{j}\}\times [z_{k},z_{k+1}]\\ 0\text{-cells}&\text{ vertices }&\{x_{i}\}\times \{y_{j}\}\times \{z_{k}\}&\text{ primary nodes}\\ \end{array}$$ For the weight we will only need the nodes at the $3$-cells; for each triple $i=0,1,2,...,n-1$, $j=0,1,2,...,m-1$, and $k=0,1,2,...,r-1$, we have: a point $S_{ijk}$ in the box $B_{ijk}$. 
Such a combination of boxes and nodes will be called an augmented partition of $B$. Before we address how to compute the weight, let's consider a simpler problem. Suppose a function $u = f(X)=f(x,y,z)$ is defined at $S_{ijk}$ and gives us the amount of some material contained in the corresponding compartment. Then the total amount of the material in the whole box is simply the sum of the values of $f$. Definition. The sum of a function $u=f(x,y,z)$ defined at the nodes at the $3$-cells of an augmented partition $P$ of a box $B=[a,b]\times [c,d]\times [p,q]$ is defined to be: $$\sum_B f=\sum_{i=1}^n\sum_{j=1}^m \sum_{k=1}^r f(S_{ijk}).$$ Note that when these nodes aren't provided, we can think of the $3$-cells themselves as the inputs of the function: $S_{ijk}=B_{ijk}$. This makes $f$ a $3$-form. Each degree $3$ node gives us the density (as if constant) of the medium in the corresponding box: $$\text{the weight of box }B_{ijk} = \underbrace{f(S_{ijk})}_{\text{density}} \cdot \overbrace{\Delta x_i}^{\text{depth of box}}\cdot \overbrace{\Delta y_j}^{\text{width of box}}\cdot \overbrace{\Delta z_k}^{\text{height of box}}. $$ We then add all of these together in order to approximate the weight of the material with density $u=f(x,y,z)$ inside the box $B$. For each $k$, the values in the $k$th layer look like this: $$\begin{array}{r|lcccccc} z_k&y_0&\Delta y_1&y_1&\Delta y_2&y_2&...\\ \hline x_0&\bullet&--&\bullet&--&\bullet&...\\ \Delta x_1&|&f(S_{1,1,k})\Delta x_1\Delta y_1\Delta z_k&|&f(S_{1,2,k})\Delta x_1\Delta y_2\Delta z_k&|&...\\ x_1&\bullet&--&\bullet&--&\bullet&...\\ \Delta x_2&|&f(S_{2,1,k})\Delta x_2\Delta y_1\Delta z_k&|&f(S_{2,2,k})\Delta x_2\Delta y_2\Delta z_k&|&...\\ x_2&\bullet&--&\bullet&--&\bullet&...\\ ..&.&..&.&..&.&...&.&..&.\\ \end{array}$$ Definition.
The Riemann sum of a function $u=f(x,y,z)$ defined at the nodes at the $3$-cells of an augmented partition $P$ of a box $B=[a,b]\times [c,d]\times [p,q]$ is defined to be $$\sum_B f\, \Delta x\Delta y\Delta z =\sum_{i=1}^n\sum_{j=1}^m\sum_{k=1}^r f(S_{ijk})\Delta x_i\Delta y_j\Delta z_k. $$ The abbreviated formula is: $$\sum_B f\, \Delta x\Delta y \Delta z=\sum_{i,j,k} f(S_{ijk})\Delta V_{ijk} $$ Just as before, we are allowed to have negative values of $f$. Furthermore, we can have negative lengths of the intervals for the independent variables. The box $[a,b]\times [c,d]\times [p,q]$ has a positive orientation whenever $a<b,\ c<d,\ p<q$; in other words, when all three segments have positive orientations within their respective axes. A reversal of the orientation of a single segment reverses the orientation of the box. Exercise. What if the orientations of two segments are reversed? The weight as the 3d Riemann integral Now, if the density function varies continuously, the Riemann sums are just approximations of the actual weight and, in order to improve them, we have to refine the partition: $$n,m,r \to \infty \text{ and } \Delta x_i, \Delta y_j, \Delta z_k\to 0.$$ Just as before, we define the mesh of a partition $P$ as: $$|P|=\max_{i,j,k} \, \{ \Delta x_i,\ \Delta y_j, \Delta z_k\}.$$ It is a measure of "refinement" of $P$. Definition. The Riemann integral of a function $u=f(x,y,z)$ over box $B=[a,b]\times [c,d]\times [p,q]$ is defined to be the limit of a sequence of its Riemann sums with the mesh of their augmented partitions $P_s$ approaching $0$ as $s\to \infty$. When all these limits exist and are all equal to each other, $f$ is called an integrable function over $B$ and the result is denoted by: $$ \iiint_B f(x,y,z)\, dxdydz =\lim_{s \to \infty} \sum_B f_s\, \Delta x\Delta y\Delta z,$$ where $f_s$ is $f$ sampled over the partition $P_s$. It is also called the definite integral, the triple integral, or simply integral, of $f$ over $B$. 
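A hedged numeric illustration of this definition (uniform partitions and midpoint sample points are our choices, not the text's): for a continuous density the Riemann sums settle down as the mesh shrinks.

```python
# Sketch: Riemann sums of the density f(x,y,z) = x^2 + y^2 + z^2 over the
# unit box with finer and finer uniform partitions, midpoint sample points.
# The exact integral is 3 * (1/3) = 1; the sums approach it as the mesh -> 0.
def riemann_sum_3d(f, n):
    h = 1.0 / n                       # dx = dy = dz = h on [0,1]^3
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                S = ((i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h)
                total += f(*S) * h ** 3
    return total

f = lambda x, y, z: x ** 2 + y ** 2 + z ** 2
sums = [riemann_sum_3d(f, n) for n in (2, 4, 8, 16)]
# The sums increase toward 1 as the partition is refined.
```

The errors shrink roughly by a factor of $4$ with each halving of the mesh, which is typical of midpoint sampling.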
When all these limits are equal to $+\infty$ (or $-\infty$), we say that the integral is infinite and write: $$ \iiint_B f(x,y,z)\, dxdydz =+\infty\ (\text{or }-\infty).$$ An abbreviated notation is: $$\iiint_B f\, dV .$$ Theorem (Constant Integral Rule). Suppose $u=f(x,y,z)$ is constant on box $B=[a,b]\times [c,d]\times [p,q]$, i.e., $f(x,y,z) = m$ for all $(x,y,z)$ in $B$ for some real number $m$. Then $f$ is integrable over $B$ and $$\iiint_B f\, dV = m (b-a)(d-c)(q-p).$$ Theorem. All continuous functions on box $B$ are integrable on $B$. The converse isn't true... The properties of the Riemann integral mimic the ones for numerical functions (and functions of two variables). The interpretation of additivity is the same: the quantity in a region formed from two regions with a negligible overlap is equal to the sum of the quantities in the two. It's as if we put a divider in our container... Theorem (Additivity). Suppose a function $u=f(x,y,z)$ is integrable over the boxes $$R=[a,e]\times [c,d]\times [p,q] \text{ and }S=[e,b]\times [c,d]\times [p,q],$$ where $a\le e\le b$. Then $f$ is integrable over the box $R\cup S=[a,b]\times [c,d]\times [p,q]$ and we have: $$\iiint_R f\, dV +\iiint_S f\, dV = \iiint_{R\cup S} f\, dV .$$ Theorem. If $u=f(x,y,z)$ is integrable over box $B$ then it is also integrable over any box $B'$ in $B$. Just as before, the box with a larger density weighs more: Theorem (Comparison Rule). If $$f(x,y,z)\geq g(x,y,z) \text{ for all }(x,y,z)\text{ in box } B,$$ then $$\iiint_B f\, dV \geq \iiint_B g\, dV ,$$ provided $u=f(x,y,z)$ and $u=g(x,y,z)$ are integrable functions over $B$. Otherwise we have: $$\begin{array}{lll} \iiint_B f\, dV =-\infty&\Longrightarrow& \iiint_B g\, dV =-\infty;\\ \iiint_B f\, dV =+\infty&\Longleftarrow& \iiint_B g\, dV =+\infty. \end{array}$$ If we know only estimates of the density, we have an estimate -- below and above -- for the weight of the box. Theorem (Estimate Rule).
Suppose $u=f(x,y,z)$ is an integrable function over box $B=[a,b]\times [c,d]\times [p,q]$. Then, if $a<b$, $c<d$, $p<q$, and $$m \leq f(x,y,z)\leq M,$$ for all $(x,y,z)$ in $B$, we have $$m (b-a)(d-c)(q-p)\leq \iiint_B f\, dV \leq M (b-a)(d-c)(q-p).$$ Exercise. What if the orientation of the box is negative? Finally, these are the algebraic properties. First, the picture below illustrates the idea that tripling the density of a solid will triple its weight: Theorem (Constant Multiple Rule). Suppose $u=f(x,y,z)$ is an integrable function over a box $B$. Then so is $c\cdot f$ for any real $c$ and we have: $$ \iiint_B(c\cdot f)\, dV = c \cdot \iiint_B f\, dV.$$ Second, the picture below illustrates that when gas is pumped from one container to another that already contains gas, the resulting density is the sum of the two: Theorem (Sum Rule). Suppose $u=f(x,y,z)$ and $u=g(x,y,z)$ are integrable functions over a box $B$. Then so is $f+g$ and we have: $$\iiint_B \left( f + g \right)\, dV = \iiint_B f\, dV + \iiint_B g\, dV.$$ Exercise. Prove these theorems. The actual computations of integrals rely on single integrals. We recognize functions of a single variable within this function of three variables, and we recognize their Riemann sums and integrals within its Riemann sum and integral. Theorem (Fubini's Theorem: integral of integral of integral). If a function $u=f(x,y,z)$ is integrable over a box $B=[a,b]\times [c,d]\times [p,q]$, then it is also integrable with respect to $x$ over the interval $[a,b]$, with respect to $y$ over the interval $[c,d]$, and with respect to $z$ over the interval $[p,q]$; moreover, we have (in any order of variables): $$\iiint_B f\, dV=\int_{x=a}^{x=b}\left(\int_{y=c}^{y=d} \left(\int_{z=p}^{z=q} f(x,y,z)\, dz \right)\, dy \right)\, dx.$$ Example.
We gradually progress from inside out: $$\begin{array}{lll} \iiint_{[0,1]\times [0,2]\times [0,3]} x^3y^2z\, dV &=\int_{x=0}^{x=1}\left(\int_{y=0}^{y=2} \left(\int_{z=0}^{z=3} x^3y^2z\, dz \right)\, dy \right)\, dx\\ &=\int_{x=0}^{x=1}\left(\int_{y=0}^{y=2} x^3y^2\left(\int_{z=0}^{z=3} z\, dz \right)\, dy \right)\, dx\\ &=\int_{x=0}^{x=1}\left(\int_{y=0}^{y=2} x^3y^2\left( \frac{z^2}{2}\bigg|_{z=0}^{z=3} \right)\, dy \right)\, dx\\ &=\int_{x=0}^{x=1}\left(\int_{y=0}^{y=2} x^3y^2\left( \frac{9}{2} \right)\, dy \right)\, dx&z\text{ is gone!}\\ &=\int_{x=0}^{x=1}\frac{9}{2}x^3\left(\int_{y=0}^{y=2} y^2\, dy \right)\, dx\\ &=\int_{x=0}^{x=1}\frac{9}{2}x^3\left( \frac{y^3}{3}\bigg|_{y=0}^{y=2} \right)\, dx\\ &=\int_{x=0}^{x=1}\frac{9}{2}x^3\left( \frac{8}{3} \right)\, dx&y\text{ is gone!}\\ &= 12\int_{x=0}^{x=1}x^3\, dx\\ &= 12\frac{x^4}{4}\bigg|_{x=0}^{x=1}\\ &=12\frac{1}{4}&x\text{ is gone too!}\\ &=3. \end{array}$$ $\square$ Specifying the variables of the bounds of integration as well as using the parentheses is optional. Example. Alternative notation: $$\begin{array}{lll} \iiint_{[0,1]\times [0,2]\times [0,3]} x^3y^2z\, dV &=\int_{ 0}^{ 1}&\int_{ 0}^{ 2} &\int_{ 0}^{ 3} x^3y^2z\, dz & \, dy & \, dx\\ &=\int_{ 0}^{ 1}&\int_{ 0}^{ 2} x^3y^2&\int_{ 0}^{ 3} z\, dz & \, dy & \, dx\\ &=\int_{ 0}^{ 1}&\int_{ 0}^{ 2} x^3y^2& \frac{z^2}{2}\bigg|_{ 0}^{ 3} & \, dy & \, dx\\ &=\int_{ 0}^{ 1}&\int_{ 0}^{ 2} x^3y^2& \frac{9}{2} & \, dy & \, dx\\ &=\int_{ 0}^{ 1}\frac{9}{2}x^3&\int_{ 0}^{ 2} &y^2&\, dy & \, dx\\ &=\int_{ 0}^{ 1}\frac{9}{2}x^3&& \frac{y^3}{3}\bigg|_{ 0}^{ 2} && \, dx\\ &=\int_{ 0}^{ 1}\frac{9}{2}x^3&& \frac{8}{3} && \, dx\\ &= 12\int_{ 0}^{ 1}x^3\, dx\\ &= 12\frac{x^4}{4}\bigg|_{ 0}^{ 1}\\ &=12\frac{1}{4}\\ &=3. \end{array}$$ $\square$ Non-rectangular domains, such as the ball, are still to come. Lengths, areas, volumes, and beyond What do the types of Riemann sums and integrals that we have considered have in common? Example. 
Suppose we would like to find the average height of a building in a city. For simplicity, we assume that there is one building in each block. The city might look like this: For our computations we collect the heights of these buildings and put them in a table, which makes a function of two variables $f$. These may be its inputs and outputs: $$\begin{array}{l|cccc} y\backslash x&0&1&2&...&10\\ \hline 0&(0,0)&(0,1)&(0,2)&...&(0,10)\\ 1&(1,0)&(1,1)&(1,2)&...&(1,10)\\ 2&(2,0)&(2,1)&(2,2)&...&(2,10)\\ ...&...&...&...&...\\ 10&(10,0)&(10,1)&(10,2)&...&(10,10)\\ \end{array}\quad\leadsto\quad \begin{array}{l|cccc} y\backslash x&0&1&2&...&10\\ \hline 0&1&3&5&...&0\\ 1&2&4&6&...&0\\ 2&3&5&7&...&1\\ ...&...&...&...&...\\ 10&0&1&7&...&0\\ \end{array}$$ This may look like a generic function of two variables $z=f(x,y)$, but let's take a closer look at how the data is organized. It's not a single number for each location but for each block, i.e., a rectangle. The data then is represented by a table: $$\begin{array}{l|cccc} y\backslash x&0&&1&&2&&3&...&9&&10\\ \hline 0&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&1&|&3&|&5&|&...&|&0&|\\ 1&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&2&|&4&|&6&|&...&|&0&|\\ 2&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&3&|&5&|&7&|&...&|&1&|\\ 3&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ ...&&...&&...&&...&&...&&...\\ 9&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&0&|&1&|&7&|&...&|&0&|\\ 10&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ \end{array}$$ We realize that this isn't just a function of two variables; it's a discrete $2$-form!
Then to find the average height, we find the total of the heights of the buildings, which is the sum of $f$ over the rectangle $R=[0,10]\times [0,10]$: $$\text{total height }= \sum_{R}f,$$ and then divide by the number of buildings: $$\text{average height }= \frac{1}{10\cdot 10}\sum_{R}f.$$ However, a more complex situation is that of a city with blocks of various dimensions. In that case, the average height is the total of the volumes of the buildings, which is the Riemann sum of the heights over $R$: $$\text{total volume }= \sum_{ij}f(X_{ij})\, \Delta A_{ij}=\sum_{R}f \, \Delta x\Delta y,$$ divided by the area of the city, $100$: $$\text{average height }= \frac{1}{\text{area of }R}\sum_{R}f\, \Delta x\Delta y.$$ $\square$ The analysis applies to the idea of the average amount of any material spread over a region. Some are familiar for the three cases of dimensions $1,2,3$: $$\begin{array}{llll} \text{average density of segment }I&=\frac{1}{\text{length of }I}\sum_I f\, \Delta x,\\ \text{average density of rectangle }R&=\frac{1}{\text{area of }R}\sum_R f\, \Delta x\Delta y,\\ \text{average density of box }B&=\frac{1}{\text{volume of }B}\sum_B f\, \Delta x\Delta y\, \Delta z.\\ \end{array}$$ Here, the meaning varies: $$\begin{array}{lllll} \dim&\text{domain}&\text{space}&\text{measure}&f \text{ is...}&\sum_Uf \, \Delta x... \text{ is...}\\ \hline 1&\text{interval }&\text{ in }{\bf R}^1&\text{length}&\text{linear density}&\text{total amount}\\ 2&\text{rectangle }&\text{ in }{\bf R}^2&\text{area}&\text{planar density}&\text{total amount}\\ 3&\text{box }&\text{ in }{\bf R}^3&\text{volume}&\text{density}&\text{total amount}\\ ...\\ \end{array}$$ Do we continue? To proceed to the $n$-dimensional case, we first need to understand how to measure the size of this $n$-dimensional "box". Recall that an $n$-cell in ${\bf R}^n$ is the set of all points each coordinate of which lies within a predetermined interval of values. 
In other words, for any list of $n$ pairs of real numbers, $$a_1<b_1,\ a_2<b_2,\ ...,\ a_n<b_n,$$ we define the box to be $$B=\{X=(x_1,x_2,...,x_n):\ a_i \le x_i \le b_i,\ i=1,2,...,n\}.$$ The notation we have used is: $$B=[a_1,b_1]\times[a_2,b_2]\times...\times [a_n,b_n].$$ The $n$-volume of the box $B$ is simply the product of the lengths of the intervals that make it up: $$v(B)=(b_1-a_1)(b_2-a_2)...(b_n-a_n).$$ Now, the faces of this $n$-box are $(n-1)$-cells. Together they form the boundary of the $n$-cell. It is important that the $n$-volume of an $(n-1)$-cell is zero! That is why we can create a partition of an $n$-cell into smaller $n$-cells as long as they only intersect by their faces. This is the $n$th row in the table above: $$\begin{array}{lllll} \dim&\text{domain}&\text{space}&\text{measure}&f \text{ is...}&\sum_Bf\, \Delta V \text{ is...}\\ \hline n&\text{cell }&\text{ in }{\bf R}^n&n\text{-volume}&\text{density}&\text{total amount} \end{array}$$ Now, what is the meaning of the Riemann sum $\sum_Bf\, \Delta V $? The $n$-cell $B$ is the product of the intervals located in the axes of ${\bf R}^n$. We provide a partition for each. These partitions cut $B$ into smaller $n$-cells creating a partition $P$. A discrete $n$-form $f$ is then a function that assigns a number to each of these cells. Definition. The Riemann sum of a discrete $n$-form $f$ over a partition of an $n$-cell $B$ is defined to be: $$\sum_B f\, \Delta V=\sum_C f(C)v(C),$$ where summation is over all $n$-cells $C$ of the partition. Definition. The average value of an $n$-form $f$ defined over a partition of an $n$-cell $B$ is defined to be: $$\text{average value of }f\ =\frac{1}{v(B)}\sum_{B}f\, \Delta V.$$ The limit of the Riemann sums, as the mesh of the partition approaches $0$, is the Riemann integral.
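The two definitions above can be sketched in a few lines of code (a sketch, not a prescribed implementation): the partition of the $n$-cell is one list of endpoints per axis, and sampling $f$ at cell midpoints is one possible way of assigning a number to every $n$-cell.

```python
from itertools import product

# A sketch of the Riemann sum of a discrete n-form over a partition of an
# n-cell B, with f sampled at the midpoint of each n-cell C.
def riemann_sum(f, partitions):
    total = 0.0
    # Each idx is a tuple of per-axis interval indices, i.e., one n-cell C.
    for idx in product(*(range(len(p) - 1) for p in partitions)):
        v = 1.0                       # the n-volume v(C)
        midpoint = []
        for axis, i in enumerate(idx):
            lo, hi = partitions[axis][i], partitions[axis][i + 1]
            v *= hi - lo
            midpoint.append((lo + hi) / 2)
        total += f(midpoint) * v      # one term f(C) * v(C)
    return total

# A 4-dimensional cell [0,1] x [0,2] x [0,1] x [0,3], third axis split in two.
parts = [[0, 1], [0, 2], [0, 0.5, 1], [0, 3]]
volume = riemann_sum(lambda X: 1.0, parts)    # f = 1 recovers the 4-volume: 6
average = riemann_sum(lambda X: sum(X), parts) / volume
```

With $f=1$ the sum recovers the $n$-volume of $B$, and dividing by it gives the average value; for the linear function $x_1+x_2+x_3+x_4$ the midpoint sampling gives the exact average, $0.5+1+0.5+1.5=3.5$.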
For all three dimensions, the integral of $f$ is the total weight: $$\begin{array}{llll} \text{weight }&=\int_{[a,b]}f(x)\, dx,\\ \text{weight }&=\iint_{[a,b]\times [c,d]}f(x,y)\, dxdy,\\ \text{weight }&=\iiint_{[a,b]\times [c,d]\times [p,q]}f(x,y,z)\, dxdydz.\\ \end{array}$$ Furthermore, the average density is the total amount (weight) divided by the measure of the "container": $$\begin{array}{llll} \text{average density}&=\frac{1}{b-a}\int_{[a,b]}f(x)\, dx,\\ \text{average density}&=\frac{1}{(b-a)(d-c)}\iint_{[a,b]\times [c,d]}f(x,y)\, dxdy,\\ \text{average density}&=\frac{1}{(b-a)(d-c)(q-p)}\iiint_{[a,b]\times [c,d]\times [p,q]}f(x,y,z)\, dxdydz.\\ \end{array}$$ This is a launching pad into higher dimensions! We can now understand these three as one. Definition. The average value of a function $z=f(X)$ of $n$ variables over an $n$-cell $B$ is defined to be: $$\text{average value of }f\ =\frac{1}{n\text{-volume of }B}\int_{B}f(X)\, dV.$$ Outside the sandbox So far, the footprints of our solids are just rectangles. The example of the sphere in the beginning of the chapter shows that yet another generalization of the concept is needed: we need to be able to handle non-rectangular regions too. The example also suggests the way to deal with this: we fill the missing values with $0$s. Definition. Suppose a function $z=f(x,y)$ is defined over an arbitrary region $U$ in ${\bf R}^2$. Then the Riemann integral of $f$ over $U$ is defined to be the integral over any rectangle $R$ that contains $U$, $$\iint_Uf(x,y)\, dxdy=\iint_Rg(x,y)\, dxdy,$$ with the function $g$ defined by: $$g(x,y)=\begin{cases}f(x,y)&\text{inside }U,\\0&\text{outside }U;\end{cases}$$ when such an integral exists, the function $f$ is called integrable over $U$. Of course, this integral, and the area, might be infinite or not exist at all... Furthermore, the ability to compute the volumes of complex solids presupposes the ability to compute the areas of complex plane regions! But what is area anyway?
The answer has been "the area under the graph of a function of one variable". The answer leaves out a lot of regions, such as a disk. Fortunately, with the above definition we can have a double integral over any region $U$. In the simplest case we pick $f$ to be constant at $1$ over $U$. Then $$\text{the volume of the solid over }U\ =\ \text{ the area of }U\ \cdot \text{ the thickness}.$$ Definition. Suppose $U$ is a region in the plane. Then the area $A(U)$ of $U$ is defined to be the Riemann integral of the function equal to $1$ over $U$: $$A(U) =\iint_U 1\, dxdy.$$ Every integral is a limit and a limit might not exist; therefore, some regions have no areas! Warning: it's not the same as saying that some regions have zero areas. Note that the definition settles the debt from Chapter 13 about the meaning of the volume of a "shell". Thus we can finally put to rest the early idea we have relied on for so long that the area is a single integral; the area is a double integral! It is only when the region has a special shape that this double integral turns into a single one. Indeed, let's match the new and the old definitions. Suppose $y=f(x)$ is a function. Then the area of the region $U$ located under its graph between the lines $x=a$ and $x=b$ is: $$\int_a^b f\, dx\text{ but also } \iint_{U}1\, dxdy.$$ How do we represent this region $U$ algebraically? We know that $x$ runs between its bounds $a$ and $b$, what about $y$? Does it run between some $c$ and $d$? No, that would make $U$ a rectangle! The answer is: the bounds on $y$, in the pair $(x,y)$, depend on $x$. In other words, we fix $x$ and look at all possible values of $y$: they lie between $0$ and $f(x)$. So, we have $$U=\{(x,y):a\le x\le b,\ 0\le y\le f(x)\}.$$ These four will serve as the bounds in our double integral in the new version of Fubini's Theorem.
Let's carry out the computation to confirm the match: $$\begin{array}{lll} \iint_{U}1\, dxdy &= \int_{x=a}^{x=b}\left(\int_{y=0}^{y=f(x)}1\, dy\right)\, dx\\ &= \int_{x=a}^{x=b}\left( y \bigg|_{y=0}^{y=f(x)}\right)\, dx\\ &= \int_{x=a}^{x=b}\left( f(x)-0 \right)\, dx\\ &= \int_{a}^{b} f(x)\, dx,\ \text{ indeed!}\\ \end{array}$$ The case we will address is only slightly more complex. Theorem (Fubini's Theorem for two variables). Suppose $z=f(x,y)$ is a function integrable over the plane region given by $$\begin{array}{lll} U&=\{(x,y):&a\le x\le b,\\ &&g_1(x)\le y\le g_2(x)\}. \end{array}$$ Then $$\iint_{U}f\, dxdy = \int_{x=a}^{x=b}\left(\int_{y=g_1(x)}^{y=g_2(x)}f(x,y)\, dy\right)\, dx.$$ Just as before, the integral in the parentheses is the area $A(x)$ of the cross-section of the solid that lies above the region: $$\iint_{U}f\, dxdy = \int_{x=a}^{x=b}A(x)\, dx.$$ Therefore, the theorem is just a special case of the Cavalieri Principle. So, to set up such an integral we should choose the roles of the two variables: $x$ is the independent variable and $y$ is the dependent one. As an independent variable, $x$ can vary freely between two fixed numbers while the bounds for $y$ might depend on $x$. Example. Let's go back to the beginning of the chapter and compute, exactly this time, the volume of the sphere of radius $R$. It is given by the implicit relation $x^2+y^2+z^2=R^2$. We represent the upper half of the sphere as the region bounded by the $xy$-plane from below and by the graph of the function from above: $$f(x,y)=\sqrt{R^2-x^2-y^2}.$$ Its domain is the "footprint", or the shadow, of the surface: $$U=\{(x,y):x^2+y^2\le R^2\}.$$ It will also serve as the domain of integration! But in order to apply Fubini's Theorem we need to represent it in the appropriate way, with explicit bounds for $x$ and $y$.
First we include all possible values of $x$ and then, for a fixed $x$, we find the interval for $y$ (visible on the picture): $$\begin{array}{lll} U&=\{(x,y):&-R\le x \le R,\\ && -\sqrt{R^2-x^2}\le y \le \sqrt{R^2-x^2} \}. \end{array}$$ Now the volume is this integral: $$\begin{array}{lll} \text{volume }&=\iint_U\sqrt{R^2-x^2-y^2} dA&\text{ ... Fubini}\\ &= \int_{x=-R}^{x=R}\left(\int_{y=-\sqrt{R^2-x^2}}^{y=\sqrt{R^2-x^2}}\sqrt{R^2-x^2-y^2}\, dy\right)\, dx &\text{ ...a familiar integral}\\ &= \int_{x=-R}^{x=R}\left( \frac{y}{2}\sqrt{R^2-x^2-y^2} +\frac{R^2-x^2}{2}\sin^{-1}\frac{y}{\sqrt{R^2-x^2}} \bigg|_{y=-\sqrt{R^2-x^2}}^{y=\sqrt{R^2-x^2}}\right)\, dx &\text{ }\\ &= \int_{x=-R}^{x=R}\frac{R^2-x^2}{2}(\pi/2-(-\pi/2))\, dx &\text{ }\\ &= \pi/2\int_{x=-R}^{x=R}(R^2-x^2)\, dx &\text{ }\\ &= \pi/2(R^2x-x^3/3)\bigg|_{x=-R}^{x=R} &\text{ }\\ &= \pi(R^2R-R^3/3) &\text{ }\\ &=\frac{2}{3}\pi R^3. \end{array}$$ This is the volume of the upper half; doubling it gives the volume of the whole sphere, $\frac{4}{3}\pi R^3$. The integral with respect to $y$ in parentheses is for a fixed $x=c$ and represents the area of the corresponding cross-section of the half-sphere with this vertical plane. $\square$ Thus the complexity of solving such a problem lies more with finding the bounds of integration than with the function to integrate. However, the task is familiar from Chapter 13. Exercise. How much water is in a bucket that is tilted to the degree that makes the water start to spill out? Solve (a) without integration, and (b) with integration. Exercise. What geometric properties of the bucket allow one to solve the problem without integration? Triple integrals Definition. Suppose a function $z=f(X)$ is defined over an arbitrary region $U$ in ${\bf R}^n$. Then the Riemann integral of $f$ over $U$ is defined to be the integral over any box $B$ that contains $U$, $$\int_Uf(X)\, dV=\int_Bg(X)\, dV,$$ with the function $g$ defined by: $$g(X)=\begin{cases}f(X)&\text{inside }U,\\0&\text{outside }U;\end{cases}$$ when such an integral exists, the function $f$ is called integrable over $U$. Definition.
Suppose $U$ is a region in ${\bf R}^n$. Then the $n$-volume $V(U)$ of $U$ is defined to be the Riemann integral of the function equal to $1$ over $U$: $$V(U) =\int_U 1\, dV.$$ Thus we can put to rest the recent idea that the volume is a double integral; the volume is a triple integral! It is only when the region has a special shape that this triple integral turns into a double one. Indeed, let's match the new and the old definitions. Suppose $z=f(x,y)$ is a function. Then the volume of the 3d region $W$ located under its graph and above a plane region $U$ is: $$\iint_U f\, dxdy\text{ but also } \iiint_{W}1\, dxdydz.$$ How do we represent this region $W$ algebraically? We know that $(x,y)$ runs within $U$, what about $z$? Does it run between some $p$ and $q$? No, that would make $W$ a cylinder! The answer is: the bounds on $z$, in the triple $(x,y,z)$, depend on $(x,y)$. In other words, we fix $(x,y)$ in $U$ and look at all possible values of $z$: they lie between $0$ and $f(x,y)$. So, we have $$W=\{(x,y,z):(x,y)\text{ in }U,\ 0\le z\le f(x,y)\}.$$ Let's carry out the computation to confirm the match: $$\begin{array}{lll} \iiint_{W}1\, dxdydz &= \iint_U\left(\int_{z=0}^{z=f(x,y)}1\, dz\right)\, dxdy\\ &= \iint_U\left( z \bigg|_{z=0}^{z=f(x,y)}\right)\, dxdy\\ &= \iint_U\left( f(x,y)-0 \right)\, dxdy\\ &= \iint_U f(x,y)\, dxdy,\ \text{ indeed!}\\ \end{array}$$ The case we will address is only slightly more complex: instead of one surface bounding the region from above we have two, bounding it from above and below. We accept the following without proof. Theorem (Incremental Fubini's Theorem). Suppose $u=f(x,y,z)$ is a function integrable over the 3d region given by $$W=\{(x,y,z):(x,y) \text{ in } U,\ h_1(x,y)\le z\le h_2(x,y)\},$$ where $U$ is some plane region. Then $$\iiint_{W}f\, dxdydz = \iint_U\left(\int_{z=h_1(x,y)}^{z=h_2(x,y)}f(x,y,z)\, dz\right)\, dxdy.$$ Let's make the region $U$ in the $xy$-plane specific.
It is in fact identical to the one we considered above: bounded between two graphs: $$U=\{(x,y):a\le x\le b,\ g_1(x)\le y\le g_2(x)\}.$$ In other words, we take the $2$-dimensional Fubini's Theorem and add another integral in both left- and right-hand sides. The new integral -- with respect to $z$ -- will need its own bounds... Theorem (Fubini's Theorem for three variables). Suppose $u=f(x,y,z)$ is a function integrable over the 3d region given by $$\begin{array}{lll} W&=\{(x,y,z):&a\le x\le b,\\ && g_1(x)\le y\le g_2(x),\\ && h_1(x,y)\le z\le h_2(x,y)\}. \end{array}$$ Then $$\iiint_{W}f\, dxdydz = \int_{x=a}^{x=b}\left(\int_{y=g_1(x)}^{y=g_2(x)}\left(\int_{z=h_1(x,y)}^{z=h_2(x,y)}f(x,y,z)\, dz\right)\, dy\right)\, dx.$$ So, to set up such an integral we should choose the order of the three variables: $x$ is the independent variable, $y$ is dependent on $x$, and $z$ is dependent on $x$ and $y$. As an independent variable, $x$ can vary freely between two fixed numbers while the bounds for $y$ might depend on $x$ and the bounds for $z$ on both $x$ and $y$. The order of variables may change though... Example. Let's take another look at the sphere. The computation of its volume in the last section -- as a double integral -- transforms into a new one -- as a triple integral. We consider the whole sphere this time (we don't have to rely on symmetry anymore!). We know that we represent the lower half and the upper half of the surface of the sphere as follows: $$h_1(x,y)=-\sqrt{R^2-x^2-y^2} \text{ and } h_2(x,y)=\sqrt{R^2-x^2-y^2}.$$ These will serve as the bounds for $z$! The other two come from the previous analysis of the circle: Taken together they give us the region: $$\begin{array}{lll} W&=\{(x,y,z):&-R\le x \le R,\\ && -\sqrt{R^2-x^2}\le y \le \sqrt{R^2-x^2} ,\\ &&-\sqrt{R^2-x^2-y^2}\le z\le \sqrt{R^2-x^2-y^2}\}.
\end{array}$$ Now the volume is this triple integral: $$\begin{array}{lll} \text{volume }&=\iiint_W 1\, dV\\ &= \int_{x=-R}^{x=R} \left( \int_{y=-\sqrt{R^2-x^2}}^{y=\sqrt{R^2-x^2}} \left( \int_{z=-\sqrt{R^2-x^2-y^2}}^{z=\sqrt{R^2-x^2-y^2}} 1\, dz \right) \, dy \right)\, dx \\ &= \int_{x=-R}^{x=R} \left( \int_{y=-\sqrt{R^2-x^2}}^{y=\sqrt{R^2-x^2}} \left( z\bigg|_{z=-\sqrt{R^2-x^2-y^2}}^{z=\sqrt{R^2-x^2-y^2}} \right) \, dy \right)\, dx \\ &= \int_{x=-R}^{x=R} \left( \int_{y=-\sqrt{R^2-x^2}}^{y=\sqrt{R^2-x^2}}2\sqrt{R^2-x^2-y^2}\, dy \right) \, dx . \end{array}$$ Except for the factor of $2$, which accounts for both halves of the sphere, this integral is identical to the integral in the last section. $\square$ The $n$-dimensional case Theorem (Constant Integral Rule). Suppose $z=f(X)$ is constant on region $U$ in ${\bf R}^n$, i.e., $f(X) = m$ for all $X$ in $U$ and some real number $m$. Then $f$ is integrable over $U$ and $$\int_U f\, dV = m \cdot V(U).$$ The interpretation of additivity can be seen as the same as the one for numerical functions: the quantity in a region formed from two regions with a negligible overlap is equal to the sum of the quantities in the two. The two intervals overlap by a point, with zero length. The two rectangles overlap by an interval, with zero area. The two boxes overlap by a rectangle, with zero volume. There is no double-counting in spite of overlapping! Theorem (Additivity). Suppose $u=f(X)$ is integrable over regions $R$ and over $S$ in ${\bf R}^n$ and the two regions overlap only by their boundary points. Then $f$ is integrable over $R\cup S$ and we have: $$\int_R f\, dV +\int_S f\, dV = \int_{R\cup S} f\, dV .$$ Theorem (Comparison Rule). If $$f(X)\geq g(X) \text{ on } U,$$ for some region $U$ in ${\bf R}^n$, then $$\int_U f\, dV \geq \int_U g\, dV ,$$ provided $u=f(X)$ and $u=g(X)$ are integrable functions over $U$. Otherwise we have: $$\begin{array}{lll} \int_U f\, dV =-\infty&\Longrightarrow& \int_U g\, dV =-\infty;\\ \int_U f\, dV =+\infty&\Longleftarrow& \int_U g\, dV =+\infty.
\end{array}$$ Theorem (Estimate Rule). Suppose $u=f(X)$ is an integrable function over a region $U$ in ${\bf R}^n$. Then, if $$m \leq f(X)\leq M,$$ for all $X$ in $U$, we have $$m\cdot v(U)\leq \int_U f\, dV \leq M\cdot v(U).$$ Theorem (Constant Multiple Rule). Suppose $u=f(X)$ is an integrable function over a region $U$ in ${\bf R}^n$. Then so is $c\cdot f$ for any real $c$ and we have: $$ \int_U(c\cdot f)\, dV = c \cdot \int_U f\, dV.$$ Theorem (Sum Rule). Suppose $u=f(X)$ and $u=g(X)$ are integrable functions over region $U$ in ${\bf R}^n$. Then so is $f+g$ and we have: $$\int_U \left( f + g \right)\, dV = \int_U f\, dV + \int_U g\, dV . $$ The center of mass From Part II, we know how to balance this non-uniform rod on a single point of support: The question is important because this point, called the center of mass, is the center of rotation of the object when subjected to a force. The analysis follows the one for the $1$-dimensional case; all the numbers just turn into vectors! There are two (or more) axes this time. This is the $1$-dimensional balance equation: $$(a)(2m)=(2a)(m).$$ In other words, this expression: $$\text{ distance }\cdot \text{ weight },$$ called the moment, is the same to the left and to the right of the support. This distance, also called the lever, is now a vector! We simply re-write the balance equation: $$(-a)(2m)+(2a)(m)=0.$$ Then, $$\text{ moment }= \text{ vector of location }\cdot \text{ weight }.$$ Furthermore, we can assume there is an object at every location, with the rest of them having $0$ mass. The balance equation becomes: $$...+(-2a)(0)+(-a)(2m)+(0)(0)+(a)(0)+(2a)(m)+...=0.$$ This analysis brings us to the idea of combining the weights and the distances in a proportional manner in order to evaluate the contribution of a particular weight to the overall balance. The balance equation simply says that the sum of all moments is $0$: $$\text{total moment } =\sum_i m_i A_i=0,$$ where $m_i$ is the weight of the object located at $A_i$.
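A quick sketch of the balance equation with vector levers (the weights and locations below are made up): the horizontal pair mirrors the $1$-dimensional example, $2m$ at $-a$ against $m$ at $2a$, and a symmetric vertical pair is added.

```python
# Sketch: weights m_i at plane locations A_i; the total moment is the
# component-wise sum of m_i * A_i. For this configuration it vanishes,
# so the system is balanced at the origin.
m, a = 3.0, 1.5
weights = [(2 * m, (-a, 0.0)),    # (weight, location) pairs
           (m, (2 * a, 0.0)),     # 2m at -a balances m at 2a
           (4.0, (0.0, -1.0)),    # a vertical pair, also balanced
           (4.0, (0.0, 1.0))]

total_moment = [sum(w * A[0] for w, A in weights),
                sum(w * A[1] for w, A in weights)]
# Both components are zero: the sum of all moments vanishes.
```

Working one coordinate at a time is exactly the remark that the vector balance equation is just one $1$-dimensional balance equation per axis.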
We now go back to the original problem. Suppose different weights are located on a beam; where do we put the support in order to balance it? It was entirely our decision to place the origin of our coordinate system at the center of mass. The result we have established should be independent of that choice and we can move the origin anywhere. We just need to execute a change of variables. Suppose the center of mass (and the origin of the old coordinate system) is located at the point with coordinate $Q$ of the new coordinate system. Then, the new coordinate of the object is $$C=A+Q.$$ Therefore, the balance equation has this form: $$\sum_C m_C (C-Q)=0.$$ Alternatively, we have: $$\sum_C m_C C=Q\sum_C m_C.$$ It's as if the whole weight is concentrated at $Q$. Hence the name. Definition. A system of weights is a collection of non-negative numbers $m_C$, called weights, assigned to the points $C$ in a collection of locations in ${\bf R}^n$. For a given point $Q$ and for each location $C$, the vector $$m_C(C-Q)$$ is called the corresponding object's moment with respect to $Q$. The sum of the moments $$\sum_C m_C (C-Q)$$ is called the total moment with respect to $Q$. The center of mass of this system of weights is such a location $Q$ that the total moment with respect to $Q$ is zero; i.e., $$Q=\frac{\sum_C m_C C}{\sum_C m_C}.$$ Exercise. What if we allow the values of $m_C$ to be negative? What is the meaning of the system and of $Q$? Suppose an augmented partition $P$ of an $n$-cell in ${\bf R}^n$ is given. Then the density $l$ is known and the terms $l(C)\Delta V_C$ are formed. Each of these terms is a weight shown as a column. The lever of each is also shown as if the weight of the cell is concentrated at point $C$.
Then the total moment of this system of weights with respect to some $Q$ is the Riemann sum, $$\sum_C m_C (C-Q)=\sum_C l(C)\Delta V (C-Q)=\sum_R G\, \Delta V,$$ of the vector-valued function of $n$ variables: $$G(X)=l(X)(X-Q).$$ Just as above, the system of weights that makes up the object is balanced when the total moment is zero. We arrive at a similar conclusion. Theorem. Suppose $z=l(X)$ is a function of $n$ variables defined at points located at the $n$-cells of a partition of an $n$-cell $R$. Then the system of weights $l(X)\Delta V$ has its center of mass at the following point: $$Q=\frac{\sum_R l(X)X\, \Delta V }{\sum_R l(X) \, \Delta V }.$$ Example. Let's test this formula on some regions... $\square$ The next step is to think of the weights assigned to every location within $R$. It's a function! What we have learned is that the total moment of the region with respect to some $Q$ is approximated by that of this system of weights, which is the Riemann sum, $$\sum_C m_C (C-Q)=\sum_C l(C)\Delta V (C-Q)=\sum_R G\, \Delta V,$$ of the vector-valued function $$G(X)=l(X)(X-Q).$$ The system doesn't have to be balanced and the total moment doesn't have to be zero for each partition, but it does have to diminish to zero as we refine the partition. This means that the Riemann integral of this function is zero. Definition. Suppose we have a non-negative function $z=l(X)$ integrable on region $R$ called the density function. For a given point $Q$ and for each $X$, the vector-valued function $$Z=l(X)(X-Q)$$ is called the moment function with respect to $Q$. The integral of the moment function $$\int_R l(X)(X-Q)\, dV$$ is called the total moment of the region with respect to $Q$. The center of mass of the region is such a point $Q$ that the total moment with respect to $Q$ is zero. Theorem. Suppose we have a non-negative function $z=l(X)$ integrable on region $R$.
If the mass of $R$ is not zero, then the center of mass is located at: $$Q=\frac{\int_R l(X)X\, dV}{\int_R l(X)\, dV}.$$

Proof. First, we note that $Z=l(X)(X-Q)$ is integrable by PR. Then we use SR and CMR to compute the following: $$0=\text{ total moment }=\int_R l(X)(X-Q)\, dV=\int_R l(X)X\, dV-Q\int_R l(X)\, dV.$$ Now solve for $Q$. $\blacksquare$

Example. Suppose the density of a $2\times 2$ plate $R=[0,2]\times[0,2]$ is changing linearly: from $1$ to $2$ along each edge, similar to this: Then, $l(x,y)=x/2+y/2+1$ and the mass is: $$\begin{array}{ll} m=\int_R l(X)\, dV&=\int_{[0,2]\times[0,2]} (x/2+y/2+1)\, dV\\ &=\int_0^2 \int_0^2 (x/2+y/2+1)\, dxdy\\ &=\int_0^2 (x^2/4+xy/2+x)\big|_0^2\, dy\\ &=\int_0^2 (y+3)\, dy\\ &=8. \end{array}$$ That's the denominator of the fraction. Now, we compute the numerator, the moment: $$\begin{array}{ll} M=\int_R l(X)X\, dV&=\int_R (x/2+y/2+1)<x,y>\, dV\\ &=\int_R <(x/2+y/2+1)x,(x/2+y/2+1)y>\, dV\\ &=<\int_R (x/2+y/2+1)x\, dV,\int_R (x/2+y/2+1)y\, dV>\\ &=<26/3,26/3>, \end{array}$$ because, by symmetry, both components equal $\int_0^2\int_0^2 (x/2+y/2+1)x\, dxdy=\int_0^2 (y+10/3)\, dy=26/3$. Therefore, the center of mass is $$Q=\frac{M}{m}=<13/12,13/12>,$$ slightly off the center $(1,1)$ of the plate, toward its denser corner. $\square$

The expected value

Example. Let's recall the example of a baker whose bread is priced based on the prices of wheat and sugar that change every day. Every day for a month he recorded two numbers -- $x$ and $y$ -- that represent how much the two prices deviated from some minimum. And now he wants to understand what was the average combination of prices. For each combination of prices he records how many times it has occurred. He puts these numbers in a table, which makes a function of two variables.
These may be its inputs and outputs: $$\begin{array}{l|cccc} y\backslash x&0&1&2&...&10\\ \hline 0&(0,0)&(0,1)&(0,2)&...&(0,10)\\ 1&(1,0)&(1,1)&(1,2)&...&(1,10)\\ 2&(2,0)&(2,1)&(2,2)&...&(2,10)\\ ...&...&...&...&...\\ 10&(10,0)&(10,1)&(10,2)&...&(10,10)\\ \end{array}\quad\leadsto\quad \begin{array}{l|cccc} y\backslash x&0&1&2&...&10\\ \hline 0&1&3&5&...&0\\ 1&2&4&6&...&0\\ 2&3&5&7&...&1\\ ...&...&...&...&...\\ 10&0&1&7&...&0\\ \end{array}$$ This may look like a generic function of two variables, but let's take a closer look at how the data is collected. It's not the exact value of either price that matters but rather its range, say $2\le x<3$. This range is an interval of values, and together a pair of price ranges forms a rectangle, say $[2,3]\times [1,2]$. The data is then represented by a table that looks a bit different: $$\begin{array}{l|cccc} y\backslash x&0&&1&&2&&3&...&9&&10\\ \hline 0&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&1&|&3&|&5&|&...&|&0&|\\ 1&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&2&|&4&|&6&|&...&|&0&|\\ 2&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&3&|&5&|&7&|&...&|&1&|\\ 3&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ ...&&...&&...&&...&&...&&...\\ 9&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ &|&0&|&1&|&7&|&...&|&0&|\\ 10&\bullet&--&\bullet&--&\bullet&--&\bullet&...&\bullet&--&\bullet\\ \end{array}$$ We are justified in visualizing this information as columns over these rectangles: We realize that this isn't just a function of two variables; it's a discrete $2$-form! Furthermore, finding the average price combination is equivalent -- in dimensions $1$, $2$, and $3$ -- to finding the center of mass of this collection of bars.

Gravity

A familiar problem about a ball thrown in the air has a solution: its trajectory is a parabola.
However, we also know that if we throw really, really hard (like a rocket), the ball will start to orbit the Earth following an ellipse. The motion of two planets (or a star and a planet, or a planet and a satellite, etc.) is governed by a single force: gravity. Recall how this force operates.

Newton's Law of Gravity: The force of gravity between two objects is given by the formula: $$F = G \frac{mM}{r^2},$$ where: $F$ is the force between the objects; $G$ is the gravitational constant; $m$ is the mass of the first object; $M$ is the mass of the second object; $r$ is the distance between the centers of the masses; or, in the vector form (with the first object located at the origin): $$F=-G mM\frac{X}{||X||^3}.$$

This is what we know. When the Earth is seen as "large" in comparison to the size of the trajectory, the gravity forces are assumed to be parallel in all locations; the orbit is a parabola. When the Earth is seen as "small" in comparison to the size of the trajectory, the gravity forces are assumed to point radially toward a single point; the orbit may be an ellipse, a hyperbola, or a parabola. When the size and, therefore, the shape of the Earth matter, things get complicated... This substitution for the simplest case of a perfectly spherical Earth is justified below.
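The vector form of the law can be checked against the scalar form numerically; here is a minimal Python sketch, where the masses and the location are illustrative values, not tied to any particular pair of bodies:

```python
import math

# Newton's law of gravity in vector form, F = -G m M X / |X|^3,
# with the first object at the origin. The masses and position
# below are made up for illustration.
G = 6.674e-11          # gravitational constant, N m^2 / kg^2

def gravity(m, M, X):
    """Force on the second object, located at X, exerted by the first."""
    r = math.sqrt(sum(c * c for c in X))
    factor = -G * m * M / r**3
    return tuple(factor * c for c in X)

F = gravity(5.0, 10.0, (3.0, 4.0, 0.0))        # here |X| = 5
magnitude = math.sqrt(sum(c * c for c in F))
print(F, magnitude)    # magnitude agrees with the scalar form G m M / r^2
```

The negative factor makes the force point from the second object back toward the origin, and its magnitude reduces to the familiar inverse-square formula.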
Microplastics and Nanoplastics

Incorporating terrain specific beaching within a lagrangian transport plastics model for Lake Erie

Juliette Daily ORCID: orcid.org/0000-0002-9493-22171, Victor Onink2,3,4, Cleo E. Jongedijk5, Charlotte Laufkötter2,3 & Matthew J. Hoffman1

Microplastics and Nanoplastics volume 1, Article number: 19 (2021)

Mass estimates of plastic pollution in the Great Lakes based on surface samples differ by orders of magnitude from what is predicted by production and input rates. It has been theorized that a potential location of this missing plastic is on beaches and in nearshore water. We incorporate a terrain-dependent beaching model into an existing hydrodynamic model for Lake Erie which includes three-dimensional advection, turbulent mixing, density-driven sinking, and deposition into the sediment. When examining parameter choices, in all simulations the majority of plastic in the lake is beached, potentially identifying a reservoir holding a large percentage of the lake's plastic which previous studies have not taken into account. The absolute amount of beached plastic is dependent on the parameter choices. We also find that beached plastic does not accumulate homogeneously throughout the lake, with eastern regions of the lake, especially those downstream of population centers, most likely to be impacted. This effort constitutes a step towards identifying sinks of missing plastic in large bodies of water.

Plastic is a ubiquitous source of pollution in various ecological compartments of the world's oceans and lakes. Historically, researchers have focused on modeling transport of plastic at the open ocean surface and in lakes [1, 2]. However, mass estimates of surface plastic based on sampling efforts are orders of magnitude lower than what is predicted by input estimates [3].
Locations for this missing plastic have been proposed: it may be suspended deeper in the water column, trapped in the sediment, or filtered out by rivers so that it never reaches large bodies of water [4–8]. However, one of the proposed explanations is that this missing plastic remains trapped in coastal zones for extended periods of time, potentially beaching and resuspending before eventually moving to offshore waters [9–12]. Around the world, plastic has been abundantly observed on coastlines, serving as another indicator of the coastline as a proposed reservoir for plastic [13–15]. Coastal zones are also considered to be a major generator of microplastics, as the mechanisms present on shorelines are more likely to cause fragmentation [16, 17]. In the Great Lakes, much attention has been devoted to studying the presence of plastic transported in the water and deposited in sediment [7, 8, 18], and these mechanisms have been included in large-scale models for a more complete representation of plastic behavior [2, 19, 20]. As in the global oceans, plastic has been found on the beaches of the Great Lakes, but specific beaching mechanisms have not been included in any large-scale hydrodynamic models for the lake [13, 18]. Previous modeling efforts in Lake Erie, which include sediment deposition, have shown significant accumulation of particles in the shallow nearshore sediment. This underlines the need to include nearshore processes, such as beaching, in these models for a more accurate understanding of nearshore plastic accumulation [19]. Surface samples taken in the Great Lakes have shown high plastic concentrations, even higher than average concentrations in the North Atlantic and South Pacific [21–23]. Of the Great Lakes systems, Lake Erie often reports some of the highest surface plastic concentrations [20, 21, 23, 24].
Lake Erie is also an important source of fresh water for the region, and plastic has been found in tap water originating from the lake [25]. The existing work on the beaching of plastic is difficult to compare because of the variety of approaches taken. Beaching research began with a focus on sampling to understand concentrations [13–15]. Hinata et al. [10] expanded on this work to estimate residence times of plastic items on a beach. While the study only considered one beach, it showed that various types of plastic items have beach residence times of 69 - 273 days, by marking and tracking items on the beach over the course of 1 - 2 years. Preliminary modeling work on beached plastics has not accounted for resuspension [26, 27]. Recently, Onink et al. [11] systematically tested parameterizations for plastic beaching and resuspension on a global scale, identifying coastlines and nearshore water as significant oceanic plastic reservoirs. Currently, there is no modeled plastic beaching work in the Great Lakes. While beaching modeling work specific to plastics is not extensive, we can draw from other fields of particle modeling, such as oil beaching [28]. Some observations indicate that different beach types have an impact on beaching and retention of various particles: areas of more sediment accumulation are more likely to trap particles, while steep rocky beaches are less likely to retain plastic [14, 29]. Samaras et al. [30] modeled the behavior of beaching oil droplets and quantified the retention behavior of nine different beach types. However, no similar work has been done to date for microplastic resuspension. We include our beaching model within a large-scale hydrodynamic model to capture the combined effect of the beaching and open water mechanisms. In this work we incorporate a beaching model from [11] into a previously used hydrodynamic model for Lake Erie [19, 20].
The existing Lake Erie model accounts for three-dimensional advection, diffusion, polymer density and size, and sediment deposition. Additionally, we use a high resolution shoreline classification for the lake to assign terrain specific beaching probabilities. Together this allows us to predict areas of plastic accumulation along the coastline and derive a first-pass estimate for the amount of plastic on the beaches of Lake Erie. The hydrodynamic model was previously used in [19], and a two-dimensional version was used in [20]. We apply the model to Lake Erie. Lake Erie is the shallowest of the Great Lakes, with an average depth of 19 m [31]. The persistent current in Lake Erie flows west to east, with inflow in the west from the Detroit River and outflow in the east to the Niagara River [32]. In the x−y direction, particle positions are advected given the dynamical system: $$\frac{dx}{dt} =u(x,y,z,t) $$ $$\frac{dy}{dt} =v(x,y,z,t)$$ where u and v are the interpolated horizontal velocities in the x- and y-directions, respectively. We assume smooth behavior of currents below grid resolution, which allows for interpolation to the particle location. Here we use cubic interpolation in space and third-order Lagrange interpolation in time. We solve the system using a Runge-Kutta 4th order numerical scheme (RK4) with timesteps of one hour, and the code is implemented in Matlab. In the vertical, or z, dimension we also model diffusion and density-driven sinking in addition to advection. In the z-direction, the surface is set to z=0 and greater depths have negative values. The vertical position is given by the Milstein solution [33] to an advection-diffusion PDE model [34].
This gives z as $${}\begin{aligned} z(t+\delta t)=&z(t)+(w_{a}+w_{b})\delta t+\frac{1}{2}K'(z(t))[\Delta W^{2}+\delta t]\\&+\Delta W \sqrt{2K(z(t))} \end{aligned} $$ where wb is the rise velocity of the particle, wa is the vertical water velocity, K(z,t) is the vertical turbulent diffusivity, and ΔW is a Gaussian random variable taken from a distribution with mean zero and standard deviation \(\sqrt {\delta t}\) for timestep δt=5 sec. If a particle moves below the depth of the lake, we consider it deposited and remove it from the system after recording the location. The rise velocities, wb, were calculated using a modified version of Stokes' Law to allow for particles of irregular shape [35]. With this method, we have a way to calculate sinking velocities for a range of particle sizes, densities, and shapes, and we also account for changes in sinking velocity due to temperature variations in the lake. This method has also been previously used to model microplastic sinking velocities by [36]. Implementing sinking velocity using Stokes' Equation for particles of irregular shape [35], the velocity is given by $$w_{b}=\left(\frac{\rho_{p}-\rho_{w}}{\rho_{w}} gw_{*}\nu \right)^{1/3},$$ where ρp is the density of the particle, ρw is the density of the water, ν is the kinematic viscosity of the water, and w∗, the dimensionless sinking velocity, is given by $$w_{*}=1.71 \times 10^{-4}D_{*}^{2}$$ $$D_{*}=\frac{(\rho_{p}-\rho_{w})}{\rho_{w} \nu^{2}} gD_{n}^{3}.$$ Here Dn is the equivalent spherical diameter, or the diameter of a sphere of the same volume as the particle of irregular shape. To set bounds for Dn, we first define the Corey Shape Factor (CSF) as $$\phi=\frac{c}{\sqrt{ab}}$$ where a,b,c are the longest, intermediate, and shortest lengths of the particle respectively. We assume b=c, implying the particle is symmetric in size along two of its axes.
With this assumption, $$ \phi=\sqrt{\frac{c}{a}} $$ Assuming the irregular particle is an ellipsoid with dimensions a,b,c, and recalling Dn is the diameter of a sphere with the same volume as the particle of irregular shape: $$\frac{4}{3}\pi \left(\frac{D_{n}}{2}\right)^{3}=\frac{4}{3}\pi \frac{a}{2}\frac{b}{2}\frac{c}{2},$$ or, again assuming b=c and solving for Dn, $$D_{n}=\sqrt[3]{ac^{2}}$$ Lastly, substituting the expression for ϕ into the above, we have \(D_{n}=a\phi ^{4/3}\). An irregular particle presents a worst-case scenario for Dn, as for a perfectly spherical particle Dn is simply the diameter, so we assume an irregularly shaped particle to calculate a lower bound on Dn. To find this lower bound, we use values for CSF from the literature, specifically ϕ=0.6, which was estimated as the mean CSF for a fragment [37]. Fragments make up 31% of microplastics found in water sampling, the second most common shape after fibers, which represent 48.5% of sampled shapes by count [38]. We do not model fibers because the shape is too irregular to calculate a sinking velocity or to model as a passive tracer. Additionally, while fibers are common by count, they have a low mass compared to other particles and are unlikely to account for a significant portion of missing plastic mass [37]. Plastic sample sizes are typically reported as the length of the longest dimension, which here is equivalent to a. To generate a range of values for Dn, we randomly generate numbers uniformly distributed between \(D_{n}^{min}=a_{min}(0.6)^{4/3}\) and \(D_{n}^{max}=a_{max}\) for whatever particle size range (amin to amax) we wish to model. Here we model particles with longest dimension from 1.00 mm to 4.75 mm. It is possible that uniform may not be the best distribution for particle size, as sampling efforts tend to find higher quantities of particles at smaller sizes [37]. Investigating different distributions for size could be a potential improvement in future work. To include beaching, we follow the approach of [11].
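Before turning to the beaching step, the rise-velocity calculation above can be sketched numerically. This is a Python stand-in (the paper's implementation is in Matlab); the water properties and particle values are illustrative, not taken from the model runs, and the sign convention (positive means rising) follows the figure captions below:

```python
# Rise velocity from the modified Stokes' law for irregular particles.
# Python sketch with illustrative values; the paper's code is Matlab.
g = 9.81       # gravitational acceleration, m/s^2
rho_w = 998.2  # water density near 20 C, kg/m^3 (illustrative; the model
               # computes it from FVCOM temperature output)
nu = 1.0e-6    # kinematic viscosity of water, m^2/s (also illustrative)

def rise_velocity(rho_p, a, csf=0.6):
    """w_b for a particle of density rho_p (kg/m^3) and longest dimension a (m).

    D_n = a * csf**(4/3) is the equivalent spherical diameter under the
    ellipsoid assumption b = c; positive w_b means the particle rises.
    """
    D_n = a * csf ** (4 / 3)
    drho = abs(rho_p - rho_w) / rho_w          # relative density difference
    D_star = drho * g * D_n ** 3 / nu ** 2     # dimensionless diameter
    w_star = 1.71e-4 * D_star ** 2             # dimensionless sinking velocity
    w_b = (drho * g * w_star * nu) ** (1 / 3)  # speed, m/s
    return w_b if rho_p < rho_w else -w_b      # buoyant particles rise

print(rise_velocity(940.0, 2e-3))   # a buoyant ~2 mm polyethylene fragment rises
print(rise_velocity(1100.0, 2e-3))  # a denser-than-water fragment sinks
```

Polyethylene densities (917 - 965 kg/m3) fall below the water density, so the sketch returns positive (upward) velocities for the particles modeled in this study.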
We first identify all particles within a 2 km x 2 km grid cell that borders the coastline of the lake as nearshore. The probability of beaching for any nearshore particle is given as $$P_{b}^{i}=1-\exp{(-dt/T_{b}^{i})},$$ where dt is the time step and \(T_{b}^{i}\) is the characteristic beaching time at that shore point i. Once a particle is beached, the probability of resuspension is given by $$P_{r}^{i}=1-\exp{(-dt/T_{r}^{i})},$$ where \(T_{r}^{i}\) is the characteristic residence time for plastic on the beach for that beach type. We expand the beaching model to include beach type dependence. To classify beach types, we interpolate a beach type data set to our model grid (Fig. 1). We classify seven different beach types: sand beach, artificial, coarse grain flat coast, coastal wetland/riparian zone, N/A – mixed beach, rocky cliffs/bluffs, and sediment scarp (Table 1). These beach types were selected because they were the classification types in the data set, taken from [39]. To include the beach type dependence in the model, we choose \(T_{b}^{i}\) and \(T_{r}^{i}\) values based on the beach type at shore point i. The beaching probability does not depend on changes in the local hydrodynamics, but the stochastic nature of the parametrization is intended to account for this.

Beach type classifications interpolated to model grid for Lake Erie

Table 1 Beaching time and residence time ratios to sand beach for beach type classifications

There is a lack of research on beaching behavior for plastics specifically, so we use ratios, \(\gamma ^{i}_{r}\), of the residence time of oil droplets on sand beaches to the residence times for other beach types in the Mediterranean Sea [30]. The beach types in this paper do not directly correspond to the classifications in our data set, so they were paired as accurately as possible, in some cases using satellite images of shorelines to identify characteristics (Table 1).
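The stochastic beaching/resuspension step above can be sketched as a simple two-state update. This is a Python stand-in for the Matlab implementation; the sand-beach values Tb = 2 days and Tr = 69 days and the three-hour timestep follow the values discussed in the text:

```python
import math
import random

# Stochastic beaching/resuspension step, P = 1 - exp(-dt / T).
# T_b and T_r are the characteristic beaching and residence times;
# the sand-beach values T_b = 2 days, T_r = 69 days follow the text.

def transition_probability(dt, T):
    """Per-timestep probability of a transition with characteristic time T."""
    return 1.0 - math.exp(-dt / T)

def step(state, dt, T_b, T_r, rng=random):
    """Advance one nearshore particle between 'floating' and 'beached'."""
    if state == "floating" and rng.random() < transition_probability(dt, T_b):
        return "beached"
    if state == "beached" and rng.random() < transition_probability(dt, T_r):
        return "floating"
    return state

dt = 3 / 24                       # three-hour timestep, expressed in days
p_beach = transition_probability(dt, 2.0)
print(round(p_beach, 3))          # ~0.061, matching the per-timestep
                                  # beaching probability quoted in the Results
```

With these values the per-timestep resuspension probability is far smaller than the beaching probability, which is why beached particles accumulate into a large reservoir in the simulations.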
We then use the characteristic residence time \(T_{r}^{sand}\) for plastics on a sand beach from [10] to predict residence times, \(T^{i}_{r}=\gamma _{r}^{i}*T_{r}^{sand}\), for all other shore types. We make the assumption for all beach types that the characteristic beaching time depends on the reciprocal of the residence time ratio, where \(T^{i}_{b}=\gamma _{b}^{i}*T_{b}^{sand}\) with \(\gamma ^{i}_{b}=1/\gamma _{r}^{i}\). This assumption is made because if a certain beach type has a high probability of beaching, it is also likely to trap the plastic, leading to a long residence time. Conversely, beach types with a low probability of beaching are expected to have a low residence time. This is similar to the approach of [11], where resuspension times were varied using a ratio for the sandiness of a coastline. To examine the sensitivity of the model we also run a version with no beach type dependence (NBD), meaning a particle has the same probability of beaching or resuspending at any shore point. In the NBD model, the values for Tb and Tr are fixed for the entire lake: \(T_{b}^{i}\) is either 1, 2, or 5 days depending on the run, and \(T_{r}^{i}=69\) days; these values are constant for all i. The value of 69 days is the lower range of observed residence times for plastics on a beach based on field observations [10], and was also a value used in previous modeling work [11]. The choices for \(T_{b}^{i}\) are the lower range of values used in modeling work in the world's oceans [11]. They were chosen to reflect the lower overall time scales in the smaller system of the lake, as compared to the ocean. For all models, we prevent nearshore particles from being deposited. This is done to isolate the effect of beaching: as depth goes to zero, we cannot differentiate between beaching and deposition, and deposition is permanent in the model.
If model dynamics cause particles to move below the depth in a nearshore cell, they are reset to above the lake floor by a distance of 5% of the lake depth at that spot. We do not anticipate that nearshore deposition would dramatically impact results, because we model floating polymers that are less likely to sink in the lake. To input plastic into the model, we release a particle from every nearshore grid point, for a total of 492 particles, and assign each particle a weight representative of the nearshore population at the release point. Nearshore population data come from [2], and this was also the method used in [19, 20]. The nearshore population is calculated using US and Canadian census data of postal regions along the lake. We release particles every 12 hours for the first two months of each run. This is done because the modeled distribution of coastal and beached plastic is sensitive to input, and this way we can track the evolution of the distribution without the influence of continuously released plastic. This was also the approach taken by [11]. These simulations are run with particles of polyethylene (PE), which is positively buoyant, with initial densities, ρp, randomly sampled from a uniform distribution from 917 to 965 kg/m3 [40]. We choose to model polyethylene because it is positively buoyant, meaning that, unlike a negatively buoyant particle, it will not sink shortly after entering the lake. Floating particles have the opportunity to experience beaching and nearshore dynamics. Polyethylene is also very common; it makes up about 32% of all produced plastic, more than any other single polymer [41]. We use interpolated temperature, diffusivity, and current output from NOAA's Lake Erie FVCOM hydrodynamic model run using forcing files from 2012-2014 [42]. FVCOM uses an unstructured grid to fit smoothly to the shoreline.
For our use, the FVCOM output was linearly interpolated to a regular 2 km spaced grid to reduce the computational cost of interpolation within the model. A 2 km grid has been previously used as the plastic transport mesh size in Lake Erie [19, 20]. FVCOM is the operational hydrodynamic model used by the NOAA Great Lakes Environmental Research Laboratory (GLERL). The density and viscosity of the water, ρw and ν, are calculated using the state equations for water with salinity zero and temperature output from FVCOM [43]. We first ran model simulations with no beach type dependence (NBD) for a year each, over three runs comparing different parameters. The simulation length was chosen to balance long term model behavior and computational cost while comparing parameter choices. The three choices of 1 day, 2 days, and 5 days for the parameter Tb in the NBD model had a significant effect on the number of beached particles (Fig. 2). With each choice of beaching parameter, the beached fraction increased linearly over the first two months, and then, after the particle input ended, the beached fraction slowly decreased over time. Lower Tb values reduce the total beached fraction, but the general qualitative behavior remained the same. Because the general behavior is the same for all values of Tb, we choose \(T_{b}^{sand}=2\) days for the beach type dependence model runs as it is an intermediate choice. The model with beach type dependence is run for three years to capture more long-term beaching behavior. We chose three years for the run length because Lake Erie has a hydraulic residence time of 2.7 years [44]. One choice was used for \(T_{b}^{sand}\) in the three year run to limit computational cost.

Comparison of influence of three values of Tb on the fraction of beached particles for the model run with no beach type dependence (NBD). Residence time for plastic on the beach was fixed at Tr=69 days for all three runs

We define four reservoirs that particles can be in.
These reservoirs are beached, deposited, offshore, and nearshore, where nearshore particles are those in the 2 km x 2 km grid cells adjacent to shore (Fig. 3). Beached particles make up a majority of all particles after a year-long simulation: 62.7% for the model with beach type dependence and 71.9% without beach type dependence. As for differences between the two models, there are slightly fewer beached particles in the beach dependence model because there is, on average, a lower probability of beaching across the lake of .058 versus .061 per three-hour timestep. The accumulation of nearshore deposited particles seen in other modeling work does not occur here because there is no deposition in nearshore grid cells [19]. However, of the reservoirs, the number of deposited particles is the only one with monotonic growth, as it is the only reservoir particles cannot move out of. In this model, deposition is permanent, while in reality particles may have the chance to resuspend. This is a deficiency of the model: it does not reflect lake dynamics, and if the model were to run indefinitely, all the particles would eventually end up in this reservoir.

Particle locations over one year for both beach type dependence (BD) and no dependence (NBD) with Tb=2 days

Additional motivation for a more sophisticated sediment resuspension model comes from examining the distribution of particle sizes remaining in the system. Particle density and size begin with uniform distributions. When considering floating, non-beached particles, we see after the three-year run a distinct preference for larger particles to remain in the system (Fig. 4). This is likely due to smaller particles being closer to neutrally buoyant, and thus more likely to move down the water column and ultimately be deposited.
This is also consistent with observations by [45], who found fewer small microplastics than expected in surface samples, which could be due to increased susceptibility to vertical transport mechanisms. This skew towards larger particles is less noticeable among the beached particles, where the distribution of the diameters is closer to uniform. This is likely because particles can accumulate on the beach, where their size and rise velocity become meaningless in the scope of our model, and they cannot be deposited. Within our model, being beached protects particles from the mechanisms that can introduce a bias towards larger particles. It is possible that if deposition were not an ultimate fate, the size distribution would not be as skewed at the end of the run.

Distributions of particle equivalent spherical diameter, or the diameter of a sphere of the same volume as the particle of irregular shape (left), and calculated rise velocities (right) in the system for beached and floating particles after a three year run. A floating particle has a positive rise velocity

In nearshore water we have removed deposition and replaced it with beaching. However, in the rest of the lake, as currently implemented, particles are permanently deposited if they hit the lake bed. We also do not account for resuspension from the sediment, which causes the number of deposited particles to increase monotonically. This reduces the number of particles active in the system over time. In a real lake, plastic may resuspend or move along the lake floor. A more sophisticated model for deposition would ideally incorporate strategies used in this beaching model, such as lake bed type specific chances of deposition and resuspension, or consider near-bed velocities and particle transport along the lake bed [46, 47]. However, such data is currently not available and would require additional laboratory and field experiments.
Our model does not account for mechanisms that can remove positively buoyant plastic from the surface; in our model, these mechanisms would likely have the effect of increasing deposition and reducing the amount of beached plastic. Positively buoyant plastics have been found in samples in both nearshore and deep sea sediment [48]. This is potentially because of biofouling, or the buildup of organic matter and organisms [49]. Biofouling can increase the density of a particle, causing it to sink over time [36, 49, 50]. The role of biofouling could be studied in a future model iteration by combining the current hydrodynamic model with a marine ecosystem model, as in [51]. The amount of beached plastic could also be influenced by fragmentation, which we do not account for in our model. Fragmentation is the breakdown of the size of plastic particles, often caused by photo-degradation and abrasion [52]. The mechanisms that cause fragmentation can be stronger in shallow, nearshore water, potentially causing fragmentation to have an increased impact on beached plastic [9]. Initially, when implementing the beach type dependent model, we hypothesized that beach type would be the predominant factor impacting plastic accumulation. It does have an undeniable impact, especially in regions with a high probability of beaching, such as wetlands. However, we still see many similarities between accumulation patterns for both the beach dependence and no dependence models (Fig. 5). These similarities can likely be explained by shore geometry and advection patterns, which are constant through both models. Additionally, some impact from beach type may indirectly be included in the no dependence model. Regions of high sediment accumulation (i.e.
sandy beaches or wetlands) have high probabilities of beaching in the model, but the lake also has the physical properties that formed this specific beach type over a much longer timescale, which could likewise allow for plastic accumulation.

Number of particles beached on the North and South shores for the model with and without beach type dependence. For the model with beach type dependence, probability of beaching is shown on the x-axis. For the model with no beach type dependence, probability of beaching per three hour timestep is .058

The predominantly west to east currents, caused by the prevailing wind and the outflow on the east side of the lake, have a large impact on patterns of accumulation and on the areas with the highest concentrations of beached plastic in the lake. Plastic tends to be pulled towards the eastern side of the lake, which causes plastic in this region to come from all over the lake. Plastic beached at the western side of the lake tends to originate almost entirely from within that region of the lake. When we consider the number of particles beached by count, this behavior is fairly uniform across the lake; i.e., as we move east, the percentage of beached plastic that originated within that same region drops (Fig. 6). Specifically, in the westernmost region of the lake (Region 1 in Fig. 6), 100% of beached plastic comes from within that region. Contrasting with this, the easternmost region, containing Buffalo, NY (Region 10 in Fig. 6), produced only 41% of the plastic beached there by particle count.

Percentage of beached plastic in a region that originated internally. Top: percentage by particle count; bottom: percentage with particles weighted by population at origin point. Numbers in top figure correspond to region number

However, the impact of population centers on the lake can disrupt this trend.
If we weight the particles by the nearshore population where they originated, the percentage of plastic in the Buffalo, NY (population 1.1 million) area that originated internally rises to 74% (Fig. 7) [53]. Additionally, after weighting particles by population, the percentage of plastic in the Buffalo region originating from Cleveland, OH (population 2.0 million) rises from 2% to 8%. Within the Cleveland region itself (Region 4 in Fig. 6), the percentage of internally produced plastic rises from 63% to 91% after weighting by population. The effect of population centers and prevailing currents can work together to impact regions down current from population centers. In the region immediately to the east of Cleveland (Region 6 in Fig. 6), 54% of the beached plastic originated in the Cleveland region when weighted by population. It is also possible that the beach types across these regions could impact accumulation patterns (Table 2). The coastal wetland beach type accounts for only 8% of the shoreline in the lake, but holds 29% of beached plastic. These regions likely trap plastic that would otherwise be transported out of that region, and may drive accumulation in the regions where they are located, such as the western end of the lake (Region 1) or the north east coast (Region 9). Conversely, sediment scarp makes up 17% of the shoreline, but holds only 0.4% of beached plastic. This may prevent regions with sediment scarp from accumulating self-produced plastic, and instead offload it to other regions.

Origins of beached particles in Cleveland (Region 4) and Buffalo (Region 10). Left: percentages by particle count; right: percentages weighted by population at particle origin point

We compare our three-year run results, ending in 2013, to the one published beach sample data set for Lake Erie available in the literature, with samples from 2008 [13].
To compare, we normalize our model concentrations and sample concentrations by dividing each by the sum of concentrations of that type (Fig. 8). When considering our model run with beach type dependence, we see both the highest sample concentration (Presque Isle) and the lowest sample concentration (Port Stanley) agree with the locations of the highest and lowest model concentrations. However, there are some faults with this comparison. The sample locations were all classified as sandy when reported; however, the model only classifies shore type down to the grid cell, which is 2 km by 2 km, and does not allow for high enough resolution to capture the full shore complexity. Thus, we see that our model only classifies two of the sample locations as sandy. Additionally, it is especially difficult to compare beached plastic sample data to model results. The beach samples used here, and in general, are normally only taken on sandy beaches. As this is only one beach type, we do not receive data on concentrations for other types to examine model behavior compared to samples in other terrain. Additionally, sandy beaches are most likely to be used for recreation, and consequently more likely to be the site of grooming or trash pickup efforts, which can skew samples collected there. Ideally, the model could be improved by validating with more samples, taken at regular spatial intervals around the lake to account for all beach types, rather than just sandy beaches. In addition, regularly revisiting beach sites would provide greater insight into the temporal variability of sample concentrations. Comparison to 2008 samples [13] for 2012 with no beach type dependence (NBD) and 2012-2014 with beach type dependence (BD).
Model classifications of beach type and sample locations shown on the inlaid map Table 2 Portion of the shoreline each classification makes up and what portion of beached plastic is beached in that type across the lake In the world's lakes and oceans, plastic mass estimates based on surface sampling differ by multiple orders of magnitude from what is predicted by input estimates, indicating large quantities of missing plastic that are not present at the surface. In the oceans, it has recently been proposed that nearshore beaching plastic is the predominant location of this missing plastic [11, 54–57]. Additionally, previous modeling work for Lake Erie has shown high accumulation of plastic in the sediment in grid cells along the coast, further motivating the inclusion of beaching in the model [19]. Here we model particle beaching within the scope of a three-dimensional hydrodynamic model, as the first work for the Great Lakes to do so. Additionally, this is the first large-scale beaching model to include specific plastic beaching probabilities for multiple beach types from broad morphological typologies. The total amount of beached plastic is sensitive to the parameter choice for the characteristic beaching time, Tb, so it is difficult to draw any definitive conclusions about what percentage of plastic litter we expect to be beached in the lake. However, the general accumulation behavior did not show a high dependency on parameters, at least for the parameters tested here. For all the parameter choices we considered, the majority of plastic in the system is beached. We also found that besides shore type, other factors such as advection and shoreline geometry impact accumulation patterns in the lake. We also found that as one moves east across the lake, there is more impact from input from all over the lake, while at the westernmost side of the lake, 100% of beached plastic is internally produced.
We did find that population centers disrupt this general west to east accumulation pattern by causing higher accumulation in their regions, or regions downstream. We would expect comparable results for a similar body of water such as Lake Ontario, which has a similar size, shape, and prevailing currents to Lake Erie [31]. However, local flow and beach characteristics along with the distribution of population centers can influence beached plastic accumulation. The parameters used in our model could be improved by additional experimental research on plastic beaching. Additionally, model beaching results are difficult to validate because beach samples often do not reflect the true amount of plastic that is likely to have accumulated. As is the case for Lake Erie, beach samples tend to be taken on sandy beaches [13]. In addition to being unable to compare across beach types, sandy beaches are often used for recreation, and litter is typically routinely removed by grooming or pickup efforts, skewing down the amount of plastic reported in these locations [18]. A sampling effort that took regularly spaced samples around the lake, regardless of beach type, could provide better data for model validation [14]. Beaching plastic results are also heavily dependent on input data, as compared to other plastic modeling, because land-based plastic enters the system directly at the beaching location. In the world's oceans, land-based plastic is considered the dominant source of plastic pollution [58]. Additionally, while we include population-based plastic input from around rivers, we do not specifically model river input as a point source, but rather distribute this input along the coastline near the river mouth. This has the potential to impact accumulation patterns near the river mouth. Wastewater treatment plants (WWTP) are also understood to be a source of microplastics [60, 61]. Additionally, we do not account for plastic released within the lake from fishing or shipping [18].
With a more encompassing input data set, we could likely improve our beaching model and further understand the most impacted areas. While future work can expand on our findings here, this serves as a preliminary model of beached microplastics in Lake Erie. We find that while our parameter choices were uncertain, for the parameters we tested the general behavior of the plastic was similar, with a majority of plastic being beached. The model used here indicates that accumulation in the lake is very dependent on advection patterns, with some impact from shoreline geometry and population centers. In future work we hope to be able to refine parameter choices and include a more complex deposition model. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. NBD: No beach type dependence BD: Beach type dependence Van Sebille E, Aliani S, Law K, Maximenko N, Alsina J, Bagaev A, Bergmann M, Chapron B, Chubarenko I, Cózar A, et al. The physical oceanography of the transport of floating marine debris. Environ Res Lett. 2020; 15(2):023003. Hoffman M, Hittinger E. Inventory and transport of plastic debris in the Laurentian Great Lakes. Mar Pollut Bull. 2017; 115(1-2):273–81. Cózar A, Echevarría F, González-Gordillo J, Irigoien X, Ubeda B, Hernández-León S, Palma T, Navarro S, García-de-Lomas J, Ruiz A, et al. Plastic debris in the open ocean. Proc Natl Acad Sci U S A. 2014; 111(28):10239–44. Weiss L, Ludwig W, Heussner S, Canals M, Ghiglione J-F, Estournel C, Constant M, Kerhervé P. The missing ocean plastic sink: Gone with the rivers. Science. 2021; 373(6550):107–11. Choy C, Robison B, Gagne T, Erwin B, Firl E, Halden R, Hamilton J, Katija K, Lisin S, Rolsky C, Houtan K. The vertical distribution and biological transport of marine microplastics across the epipelagic and mesopelagic water column. Sci Rep. 2019; 9(1):1–9. https://doi.org/10.1038/s41598-019-44117-2. Accessed 07 Jan 2020.
Ballent A, Corcoran P, Madden O, Helm P, Longstaffe F. Sources and sinks of microplastics in Canadian Lake Ontario nearshore, tributary and beach sediments. Mar Pollut Bull. 2016; 110(1):383–95. Corcoran P, Norris T, Ceccanese T, Walzak M, Helm P, Marvin C. Hidden plastics of Lake Ontario, Canada and their potential preservation in the sediment record. Environ Pollut. 2015; 204:17–25. Dean B, Corcoran P, Helm P. Factors influencing microplastic abundances in nearshore, tributary and beach sediments along the Ontario shoreline of Lake Erie. J Great Lakes Res. 2018; 44(5):1002–9. Lebreton L, Egger M, Slat B. A global mass budget for positively buoyant macroplastic debris in the ocean. Sci Rep. 2019; 9(1):1–10. Hinata H, Mori K, Ohno K, Miyao Y, Kataoka T. An estimation of the average residence times and onshore-offshore diffusivities of beached microplastics based on the population decay of tagged meso-and macrolitter. Mar Pollut Bull. 2017; 122(1-2):17–26. Onink V, Jongedijk C, Hoffman M, van Sebille E, Laufkötter C. Global simulations of marine plastic transport show plastic trapping in coastal zones. Environ Res Lett. 2021. Chenillat F, Huck T, Maes C, Grima N, Blanke B. Fate of floating plastic debris released along the coasts in a global ocean model. Mar Pollut Bull. 2021; 165:112116. Zbyszewski M, Corcoran P, Hockin A. Comparison of the distribution and degradation of plastic debris along shorelines of the Great Lakes, North America. J Great Lakes Res. 2014; 40(2):288–99. Hardesty B, Lawson T, van der Velde T, Lansdell M, Wilcox C. Estimating quantities and sources of marine debris at a continental scale. Front Ecol Environ. 2017; 15(1):18–25. Naidoo T, Glassom D, Smit A. Plastic pollution in five urban estuaries of KwaZulu-Natal, South Africa. Mar Pollut Bull. 2015; 101(1):473–80. Andrady A. Microplastics in the marine environment. Mar Pollut Bull. 2011; 62(8):1596–605. Song Y, Hong S, Jang M, Han G, Jung S, Shim W. 
Combined effects of UV exposure duration and mechanical abrasion on microplastic fragmentation by polymer type. Environ Sci Technol. 2017; 51(8):4368–76. Driedger A, Dürr H, Mitchell K, Cappellen P. Plastic debris in the Laurentian Great Lakes: A review. J Great Lakes Res. 2015; 41(1):9–19. https://doi.org/10.1016/j.jglr.2014.12.020. Daily J, Hoffman M. Modeling the three-dimensional transport and distribution of multiple microplastic polymer types in Lake Erie. Mar Pollut Bull. 2020; 154:111024. Mason S, Daily J, Aleid G, Ricotta R, Smith M, Donnelly K, Knauff R, Edwards W, Hoffman M. High levels of pelagic plastic pollution within the surface waters of Lakes Erie and Ontario. J Great Lakes Res. 2020; 46:277–88. Earn A, Bucci K, Rochman C. A systematic review of the literature on plastic pollution in the Laurentian Great Lakes and its effects on freshwater biota. J Great Lakes Res. 2020; 47:120–33. Law K, Morét-Ferguson S, Maximenko N, Proskurowski G, Peacock E, Hafner J, Reddy C. Plastic accumulation in the North Atlantic subtropical gyre. Science. 2010; 329(5996):1185–8. Eriksen M, Maximenko N, Thiel M, Cummins A, Lattin G, Wilson S, Hafner J, Zellers A, Rifman S. Plastic pollution in the South Pacific subtropical gyre. Mar Pollut Bull. 2013; 68(1-2):71–6. Cable R, Beletsky D, Beletsky R, Wigginton K, Locke B, Duhaime M. Distribution and modeled transport of plastic pollution in the Great Lakes, the world's largest freshwater resource. Front Environ Sci. 2017; 5:45. Kosuth M, Mason S, Wattenberg E. Anthropogenic contamination of tap water, beer, and sea salt. PloS ONE. 2018; 13(4):0194970. Critchell K, Grech A, Schlaefer J, Andutta F, Lambrechts J, Wolanski E, Hamann M. Modelling the fate of marine debris along a complex shoreline: Lessons from the Great Barrier Reef. Estuar Coast Shelf Sci. 2015; 167:414–26. Turrell W. A simple model of wind-blown tidal strandlines: How marine litter is deposited on a mid-latitude, macro-tidal shelf sea beach.
Mar Pollut Bull. 2018; 137:315–30. Gundlach E. Oil-holding capacities and removal coefficients for different shoreline types to computer simulate spills in coastal waters. In: International Oil Spill Conference. American Petroleum Institute: 1987. p. 451–457. Ryan P, Perold V, Osborne A, Moloney C. Consistent patterns of debris on South African beaches indicate that industrial pellets and other mesoplastic items mostly derive from local sources. Environ Pollut. 2018; 238:1008–16. Samaras A, De Dominicis M, Archetti R, Lamberti A, Pinardi N. Towards improving the representation of beaching in oil spill models: A case study. Mar Pollut Bull. 2014; 88(1-2):91–101. U.S. Environmental Protection Agency. Physical Features of the Great Lakes. https://www.epa.gov/greatlakes/physical-features-great-lakes. Saylor J, Miller G. Studies of large-scale currents in Lake Erie, 1979–80. J Great Lakes Res. 1987; 13(4):487–514. Mil'shtein G. A method of second-order accuracy integration of stochastic differential equations. Theory Probab Its Appl. 1979; 23(2):396–401. Visser A. Using random walk models to simulate the vertical distribution of particles in a turbulent water column. Mar Ecol Prog Ser. 1997; 158:275–81. Dietrich W. Settling velocity of natural particles. Water Resour Res. 1982; 18(6):1615–26. Kooi M, Nes E, Scheffer M, Koelmans A. Ups and downs in the ocean: effects of biofouling on vertical transport of microplastics. Environ Sci Technol. 2017; 51(14):7963–71. Kooi M, Koelmans A. Simplifying microplastic via continuous probability distributions for size, shape, and density. Environ Sci Technol Lett. 2019; 6(9):551–7. Burns E, Boxall A. Microplastics in the aquatic environment: Evidence for or against adverse impacts and major knowledge gaps. Environ Toxicol Chem. 2018; 37(11):2776–96. Wang L, Riseng C, Mason L, Wehrly K, Rutherford E, McKenna Jr J, Castiglione C, Johnson L, Infante D, Sowa S, et al.
A spatial classification and database for management, research, and policy making: The Great Lakes aquatic habitat framework. J Great Lakes Res. 2015; 41(2):584–96. Hidalgo-Ruz V, Gutow L, Thompson R, Thiel M. Microplastics in the marine environment: a review of the methods used for identification and quantification. Environ Sci Technol. 2012; 46(6):3060–75. Geyer R, Jambeck J, Law K. Production, use, and fate of all plastics ever made. Sci Adv. 2017; 3(7):1700782. Chen C, Beardsley R, Cowles G. An unstructured grid, finite-volume coastal ocean model (FVCOM) system. Oceanography. 2006; 19(1):78–89. Fofonoff NP, Millard Jr RC. Algorithms for the computation of fundamental properties of seawater: Unesco; 1983. Quinn F. Hydraulic residence times for the Laurentian Great Lakes. J Great Lakes Res. 1992; 18(1):22–8. Eriksen M, Lebreton L, Carson H, Thiel M, Moore C, Borerro J, Galgani F, Ryan P, Reisser J. Plastic pollution in the world's oceans: more than 5 trillion plastic pieces weighing over 250,000 tons afloat at sea. PloS ONE. 2014; 9(12):111913. Drummond J, Aubeneau A, Packman A. Stochastic modeling of fine particulate organic carbon dynamics in rivers. Water Resour Res. 2014; 50(5):4341–56. Kane I, Clare M. Dispersion, accumulation, and the ultimate fate of microplastics in deep-marine environments: A review and future directions. Front Earth Sci. 2019; 7:80. Erni-Cassola G, Zadjelovic V, Gibson M, Christie-Oleza J. Distribution of plastic polymer types in the marine environment; a meta-analysis. J Hazard Mater. 2019; 369:691–8. Ye S, Andrady A. Fouling of floating plastic debris under Biscayne Bay exposure conditions. Mar Pollut Bull. 1991; 22(12):608–13. https://doi.org/10.1016/0025-326X(91)90249-R. Accessed 05 Oct 2016. Fazey F, Ryan P. Biofouling on buoyant marine plastics: An experimental study into the effect of size on surface longevity. Environ Pollut. 2016; 210:354–60. https://doi.org/10.1016/j.envpol.2016.01.026. Accessed 06 Jan 2020.
Lobelle D, Kooi M, Koelmans A, Laufkötter C, Jongedijk C, Kehl C, van Sebille E. Global modeled sinking characteristics of biofouled microplastic. J Geophys Res Oceans. 2021; 126(4):2020–017098. Barnes D, Galgani F, Thompson R, Barlaz M. Accumulation and fragmentation of plastic debris in global environments. Philos Trans R Soc B Biol Sci. 2009; 364(1526):1985–98. U.S. Census Bureau. American Community Survey 5-Year Data (2009-2019). https://www.census.gov/data/developers/data-sets/acs-5year.html. Lebreton L, Egger M, Slat B. A global mass budget for positively buoyant macroplastic debris in the ocean. Sci Rep. 2019; 9(1):1–10. https://doi.org/10.1038/s41598-019-49413-5. Accessed 07 Jan 2020. Hardesty B, Harari J, Isobe A, Lebreton L, Maximenko N, Potemra J, Van Sebille E, Vethaak A, Wilcox C. Using numerical model simulations to improve the understanding of micro-plastic distribution and pathways in the marine environment. Front Mar Sci. 2017; 4:30. Schwarz A, Ligthart T, Boukris E, Van Harmelen T. Sources, transport, and accumulation of different types of plastic litter in aquatic environments: a review study. Mar Pollut Bull. 2019; 143:92–100. Morales-Caselles C, Viejo J, Martí E, González-Fernández D, Pragnell-Raasch H, González-Gordillo J, Montero E, Arroyo G, Hanke G, Salvo V, et al. An inshore–offshore sorting system revealed from global classification of ocean litter. Nat Sustain. 2021; 4(6):484–93. Kershaw PJ, Rochman CM. Sources, fate and effects of microplastics in the marine environment: part 2 of a global assessment. Reports and Studies-IMO/FAO/Unesco-IOC/WMO/IAEA/UN/UNEP Joint Group of Experts on the Scientific Aspects of Marine Environmental Protection (GESAMP) Eng No. 93. 2015. IMO/FAO/UNESCO-IOC/UNIDO/WMO/IAEA/UN/UNEP/UNDP. Lebreton L, Van Der Zwet J, Damsteeg J-W, Slat B, Andrady A, Reisser J. River plastic emissions to the world's oceans. Nat Commun. 2017; 8:15611. Simon M, van Alst N, Vollertsen J.
Quantification of microplastic mass and removal rates at wastewater treatment plants applying Focal Plane Array (FPA)-based Fourier Transform Infrared (FT-IR) imaging. Water Res. 2018; 142:1–9. https://doi.org/10.1016/j.watres.2018.05.019. Accessed 07 Jan 2020. Mintenig S, Int-Veen I, Löder M, Primpke S, Gerdts G. Identification of microplastic in effluents of waste water treatment plants using focal plane array-based micro-Fourier-transform infrared imaging. Water Res. 2017; 108:365–72. https://doi.org/10.1016/j.watres.2016.11.015. Accessed 07 Jan 2020. This research is a product resulting from project NYSG: 80794/1160707/1 funded under award NA18OAR4170096 from the National Sea Grant College Program of the U.S. Department of Commerce's National Oceanic and Atmospheric Administration, to the Research Foundation for State University of New York on behalf of New York Sea Grant. The statements, findings, conclusions, views and recommendations are those of the author(s) and do not necessarily reflect the views of any of those organizations. Additionally, this work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562, through allocation TG-OCE150006. CL and VO acknowledge support from the Swiss National Science Foundation under grant 174124. School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY, United States: Juliette Daily & Matthew J. Hoffman. Climate and Environmental Physics, Physics Institute, University of Bern, Bern, Switzerland: Victor Onink & Charlotte Laufkötter. Oeschger Centre for Climate Change Research, University of Bern, Bern, Switzerland. Institute for Marine and Atmospheric Research, Utrecht University, Utrecht, The Netherlands: Victor Onink. Department of Civil and Environmental Engineering, Imperial College London, London, United Kingdom: Cleo E. Jongedijk. JD developed the model, performed the analysis, and wrote the first draft. VO, CJ, CL, and MH provided help with model conceptualization, methodology, editing, and analysis. All authors read and approved the final manuscript. Correspondence to Juliette Daily. Daily, J., Onink, V., Jongedijk, C.E. et al. Incorporating terrain specific beaching within a Lagrangian transport plastics model for Lake Erie. Micropl. & Nanopl. 1, 19 (2021). https://doi.org/10.1186/s43591-021-00019-7. Keywords: Beaching; Lagrangian transport
Systems of ODEs
1 The predator-prey model
2 Qualitative analysis of the predator-prey model
3 Solving the Lotka–Volterra equations
4 Vector fields and systems of ODEs
5 Discrete systems of ODEs
6 Qualitative analysis of systems of ODEs
7 The vector notation and linear systems
8 Classification of linear systems
9 Classification of linear systems, continued
This is the IVP we have considered so far: $$\frac{\Delta y}{\Delta t}=f(t,y),\ y(t_0)=y_0,$$ and $$y'=f(t,y),\ y(t_0)=y_0.$$ The equations show how the rate of change of $y$ depends on $t$ and $y$. What if we have two variable quantities dependent on $t$? The simplest example is as follows: $x$ is the horizontal location, and $y$ is the vertical location. We have already seen this simplest setting of free fall: $$\begin{cases} \frac{\Delta x}{\Delta t}=v_x,\\ \frac{\Delta y}{\Delta t}=v_y-gt, \end{cases} \text{ and } \begin{cases} x'=v_x,\\ y'=v_y-gt. \end{cases}$$ It is just as simple when arbitrary functions are in the right-hand sides of the equations (the continuous case is solved by integration). Here the rate of change of the location depends on the time $t$ only. More complex is the situation when the rate of change of the location depends on the location itself. When each rate depends only on its own component of the location, the motion is described by this pair of ODEs: $$\begin{cases} \frac{\Delta x}{\Delta t}=g(x),\\ \frac{\Delta y}{\Delta t}=h(y). \end{cases} \text{ and } \begin{cases} x'=g(x),\\ y'=h(y). \end{cases}$$ The solution consists of two solutions to the two, unrelated, ODEs. We can then apply the methods of Chapter 24. As an example of quantities that do interact, let's consider the predator-prey model. Let $x$ be the number of rabbits and $y$ be the number of foxes in the forest. Let $\Delta t$ be the fixed increment of time. Even though time $t$ is now discrete, the "number" of rabbits $x$ or foxes $y$ isn't. Those are real numbers in our model.
One can think of $.1$ rabbits as if the actual number is unknown but the likelihood that there is one somewhere is $10\%$. We begin with the rabbits. There are two factors affecting their population. First, we assume that they have an unlimited food supply and reproduce in a manner described previously -- when there is no predator. In other words, the gain of the rabbit population per unit of time through their natural reproduction is proportional to the size of their current population. Therefore, we have: $$\text{rabbits' gain }=\alpha\cdot x \cdot \Delta t,$$ for some $\alpha>0$. Second, we assume the rate of predation upon the rabbits to be proportional to the rate at which the rabbits and the foxes meet, which, in turn, is proportional to the sizes of their current populations, $x$ and $y$. Therefore, we have: $$\text{rabbits' loss }=\beta\cdot x\cdot y \cdot \Delta t,$$ for some $\beta>0$. Combined, the change of the rabbit population over the period of time of length $\Delta t$ is: $$\Delta x = \alpha x \Delta t - \beta x y \Delta t.$$ We continue with the foxes. There are two factors affecting their population. First, we assume that the foxes have only a limited food supply, i.e., the rabbits. The foxes die out geometrically in a manner described previously -- when there is no prey. In other words, the loss of the fox population per unit of time through their natural death is proportional to the size of their current population. Therefore, we have: $$\text{foxes' loss }=\gamma\cdot y \cdot \Delta t,$$ for some $\gamma >0$. Second, we again assume that the rate of reproduction of the foxes is proportional to the rate of their predation upon the rabbits which is, as we know, proportional to the sizes of their current populations, $x$ and $y$. Therefore, we have: $$\text{foxes' gain }=\delta\cdot x\cdot y \cdot \Delta t,$$ for some $\delta>0$.
Combined, the change of the fox population over the period of time of length $\Delta t$ is: $$\Delta y = \delta xy \Delta t - \gamma y \Delta t.$$ Putting these two together gives us a discrete predator-prey model: $$\begin{cases} \Delta x = \left( \alpha x - \beta x y \right) \Delta t,\\ \Delta y = \left( \delta xy - \gamma y \right) \Delta t. \end{cases}$$ Then the spreadsheet formulas are for $x$ and $y$ respectively: $$\texttt{=R[-1]C+R2C3*R[-1]C*R3C1-R2C4*R[-1]C*R[-1]C[1]*R3C1},$$ $$\texttt{=R[-1]C-R2C5*R[-1]C*R3C1+R2C6*R[-1]C*R[-1]C[-1]*R3C1.}$$ Let's take a look at an example of possible dynamics ($\alpha=0.10$, $\beta=0.50$, $\gamma=0.20$, $\delta=0.20$, $h=0.2$): This is what we see. Initially, there are many rabbits and, with this much food, the number of foxes was growing: $\uparrow$. This was causing the number of rabbits to decline: $\leftarrow$. Combined, this is the direction of the system: $\nwarrow$. Later, the number of rabbits declines so much that, with so little food, the number of foxes also started to decline: $\downarrow$. At the end, both of the populations seem to have disappeared... Another experiment shows that they can recover ($\alpha=3$, $\beta=2$, $\gamma=3$, $\delta=1$, $h=0.03$): In fact, we can see a repeating pattern. Furthermore, as $\Delta t \to 0$, we have two, related, ODEs, a continuous predator-prey model: $$\begin{cases} \frac{dx}{dt} = \alpha x - \beta x y ,\\ \frac{dy}{dt} = \delta xy - \gamma y . \end{cases}$$ It approximates the discrete model. The equations are known as the Lotka–Volterra equations. Qualitative analysis of the predator-prey model To confirm our observations, we will carry out qualitative analysis. It is equally applicable to both the discrete model and the system of ODEs.
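As a reference point for that analysis, the discrete model is easy to iterate in code as well. Below is a minimal Python sketch of the same recursion as the spreadsheet formulas above; the parameter values are those of the second experiment ($\alpha=3$, $\beta=2$, $\gamma=3$, $\delta=1$, $h=0.03$), while the initial populations are an assumption made for illustration:

```python
# Discrete predator-prey model: one Euler step per iteration of
#   Delta x = (alpha*x - beta*x*y) * Delta t,
#   Delta y = (delta*x*y - gamma*y) * Delta t.
# Parameters from the second experiment in the text; the starting
# populations (1, 1) are an illustrative assumption.
alpha, beta, gamma, delta = 3.0, 2.0, 3.0, 1.0
h = 0.03                      # the time increment Delta t
x, y = 1.0, 1.0               # rabbits, foxes

trajectory = [(x, y)]
for _ in range(200):
    dx = (alpha * x - beta * x * y) * h   # rabbits' gain minus loss
    dy = (delta * x * y - gamma * y) * h  # foxes' gain minus loss
    x, y = x + dx, y + dy
    trajectory.append((x, y))
```

Plotting the pairs in `trajectory` against each other produces a closed-looking curve of the kind seen in the spreadsheet experiment.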
Indeed, they both have the same right-hand side: $$\begin{cases} \frac{\Delta x}{\Delta t} = \alpha x - \beta x y ,\\ \frac{\Delta y}{\Delta t} = \delta xy - \gamma y , \end{cases}\quad\text{and}\quad \begin{cases} \frac{dx}{dt} = \alpha x - \beta x y ,\\ \frac{dy}{dt} = \delta xy - \gamma y . \end{cases}$$ We will investigate the dynamics at all locations in the first quadrant of the $xy$-plane. First, we find the locations where $x$ has a zero derivative, i.e., $x'=0$, which is the same as the locations where the discrete model leads to no change in $x$, i.e., $\Delta x=0$. The condition is: $$\alpha x - \beta x y=0.$$ We solve the equation: $$x=0 \text{ or } y=\alpha/\beta.$$ We discover that, first, $x=0$ is a solution, and, second, the horizontal line $y=\alpha/\beta$ is crossed vertically by the solutions. In other words, first, the foxes are dying out with no rabbits and, second, there may be a reversal in the dynamics of the rabbits at a certain number of foxes. To find out, solve the inequality: $$x'>0 \text{ or } \Delta x>0\ \Longrightarrow\ \alpha x - \beta x y>0 \ \Longrightarrow\ y<\alpha/\beta.$$ It follows that, indeed, the number of rabbits increases when the number of foxes is below $\alpha/\beta$, otherwise it decreases. Second, we find the locations where $y$ has a zero derivative, i.e., $y'=0$, which is the same as the locations where the discrete model leads to no change in $y$, i.e., $\Delta y=0$. The condition is: $$\delta xy - \gamma y=0.$$ We solve the equation: $$y=0 \text{ or } x=\gamma/\delta.$$ We discover that, first, $y=0$ is a solution, and, second, the vertical line $x=\gamma/\delta$ is crossed horizontally by the solutions. In other words, first, the rabbits thrive with no foxes and, second, there may be a reversal in the dynamics of the foxes at a certain number of rabbits.
To find out, solve the inequality: $$y'>0 \text{ or } \Delta y>0\ \Longrightarrow\ \delta xy - \gamma y>0 \ \Longrightarrow\ x>\gamma/\delta.$$ It follows that, indeed, the number of foxes increases when the number of rabbits is above $\gamma/\delta$, otherwise it decreases. Now we put these two parts together. We have four sectors in the first quadrant determined by the four different choices of the signs of $x'$ and $y'$, or $\Delta x$ and $\Delta y$: In either case, the result is a rough description of the dynamics on the local level: if this is the current state, then this is the direction it is going. It is a vector field! We visualize this vector field with the same parameters as before: The arrows aren't meant yet to be connected into curves to produce solutions. The only four distinct solutions we know for sure are the following: the decline of the foxes in the absence of rabbits -- on the $y$-axis; the explosion of the rabbits in the absence of foxes -- on the $x$-axis; the freezing of both rabbits and foxes at the special levels -- in the middle; and also the freezing of both rabbits and foxes at the zero level. Either of the last two is a constant solution called an equilibrium. The main, non-zero, equilibrium is: $$x(t)=\gamma/\delta,\ y(t)=\alpha/\beta.$$ What about the rest of the solutions? In order to confirm that the solutions circle the main equilibrium, we need a more precise analysis. In each of the four sectors, the monotonicity of the solutions has been proven. However, it is still possible that some solutions will stay within the sector when one or both of $x$ and $y$ behave asymptotically: $$x(t)\to a \text{ and/or } y(t)\to b \text{ as } t\to \infty.$$ Since both functions are monotonic, this implies that $$x'(t)\to 0 \text{ and/or } y'(t)\to 0 \text{ as } t\to \infty,$$ and the same for $\Delta x$ and $\Delta y$. We can show that this is impossible.
For example, suppose we start in the bottom right sector, i.e., the initial conditions are: $x(0)=p>\gamma/\delta$; $y(0)=q<\alpha/\beta$. Then, for as long as the solution is in this sector, we have $x'>0 \ \Longrightarrow\ x>p$; $y'>0 \ \Longrightarrow\ y>q$. Therefore, $$y'=y(\delta x - \gamma)>q(\delta p-\gamma)>0.$$ This number is a gap between $y'$ and $0$. Therefore, $y'$ cannot diminish to $0$, and the same is true for $\Delta y$. It follows that the solution will reach the line $y=\alpha/\beta$ "in finite time". Exercise. Prove the analogous facts about the three remaining sectors. We have demonstrated that a solution will go around the main equilibrium, but when it comes back, will it be closer to the center, farther, or will it come to the same location? The first option is indicated by our spreadsheet result. Next, we set the discrete model aside and concentrate on solving our system of ODEs analytically to answer the question: is this a cycle or a spiral? Solving the Lotka–Volterra equations We would like to eliminate time from the equations ($x>0,\ y>0$): $$\begin{cases} \frac{dx}{dt} = \alpha x - \beta x y ,\\ \frac{dy}{dt} = \delta xy - \gamma y . \end{cases}$$ This step is made possible by the following trick. We interpret these derivatives in terms of the differential forms: $$\begin{array}{llll} dx = (\alpha x - \beta x y )dt&\Longrightarrow & dt=\frac{dx}{\alpha x - \beta x y },\\ dy = (\delta xy - \gamma y)dt&\Longrightarrow & dt=\frac{dy}{\delta xy - \gamma y }. \end{array}$$ Therefore, $$dt=\frac{dx}{\alpha x - \beta x y }=\frac{dy}{\delta xy - \gamma y }.$$ We next separate variables: $$\frac{\delta x - \gamma }{x}dx=\frac{\alpha - \beta y}{y}dy.$$ We integrate: $$\int \left(\delta - \frac{\gamma }{x} \right)dx=\int \left(\frac{\alpha}{y} - \beta \right)dy,$$ and we have: $$\delta x - \gamma \ln x = \alpha \ln y - \beta y +C.$$ The system is solved!.. But what does it mean?
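Before interpreting this relation, we can test it numerically: along any solution of the system, the combination $\delta x - \gamma \ln x - \alpha \ln y + \beta y$ should remain constant. Here is a Python sketch; the parameters are the ones used above, while the starting point $(4,1)$ and the classical fourth-order Runge–Kutta integrator are choices made for illustration:

```python
import math

# Check numerically that solutions of the Lotka-Volterra system keep
#   G(x, y) = delta*x - gamma*ln(x) - alpha*ln(y) + beta*y
# constant. Parameters match the text's example; the starting point
# and the RK4 integrator are illustrative assumptions.
alpha, beta, gamma, delta = 3.0, 2.0, 3.0, 1.0

def f(x, y):
    """Right-hand side of the system."""
    return alpha * x - beta * x * y, delta * x * y - gamma * y

def G(x, y):
    return delta * x - gamma * math.log(x) - alpha * math.log(y) + beta * y

def rk4_step(x, y, h):
    """One classical Runge-Kutta step of size h."""
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + h / 2 * k1x, y + h / 2 * k1y)
    k3x, k3y = f(x + h / 2 * k2x, y + h / 2 * k2y)
    k4x, k4y = f(x + h * k3x, y + h * k3y)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

x, y = 4.0, 1.0
g0 = G(x, y)
drift = 0.0
h = 0.001
for _ in range(10000):              # integrate to t = 10: several cycles
    x, y = rk4_step(x, y, h)
    drift = max(drift, abs(G(x, y) - g0))
# drift stays tiny: the trajectory runs along a level set of G
```

The recorded `drift` stays many orders of magnitude below the size of the quantity itself, which is numerical evidence that the trajectories follow its level curves.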
Every solution $x=x(t)$ and $y=y(t)$, when substituted into the function $$G(x,y)=\delta x - \gamma \ln x - \alpha \ln y + \beta y,$$ produces a constant. In other words, this parametric curve is a level curve of $z=G(x,y)$. Once we have no derivatives left, we declare the system solved even though only implicitly. Even though we don't have explicit formulas for $x=x(t)$ or $y=y(t)$, we can use what we have to further study the qualitative behavior of the system. For example, the fact that this is a level curve already suggests that the parametric curve is not a spiral. Just try to imagine a surface whose level curves are spirals: We turn instead to the actual function. First, we plot it with a spreadsheet ($\alpha=3$, $\beta=2$, $\gamma=3$, $\delta=1$): The level curves are visible. Some of them are clearly circular and others aren't. The reason is that they aren't shown all the way to the axes because the value of $G$ rises so quickly (in fact, asymptotically). As expected, the surface seems to have a single minimum point. Let's prove that algebraically: $$\begin{array}{lllll} \frac{\partial G}{\partial x}(x,y)&=\delta - \gamma /x,\\ \frac{\partial G}{\partial y}(x,y)&= - \alpha /y + \beta. \end{array}$$ We next find the extreme points of $G$. We set the two derivatives equal to zero and solve for $x$ and $y$: $$\begin{array}{lllll} \frac{\partial G}{\partial x}(x,y)&=\delta - \gamma /x&=0&\Longrightarrow &x=\frac{\gamma}{\delta},\\ \frac{\partial G}{\partial y}(x,y)&= - \alpha /y + \beta&=0&\Longrightarrow &y=\frac{\alpha}{\beta}. \end{array}$$ This point is indeed our main equilibrium point. The surface here has a horizontal tangent plane. We have also demonstrated that there are no other points like that! But could this be a maximum point? Just as $y=x^3$ crosses the $x$-axis at $0$ degrees, a surface can cross its tangent plane.
We compute the second derivatives: $$\begin{array}{lllll} \frac{\partial^2 G}{\partial x^2}(x,y)&=\gamma \frac{1}{x^2}>0,\\ \frac{\partial^2 G}{\partial y^2}(x,y)&=\alpha \frac{1}{y^2}>0 . \end{array}$$ Both are positive throughout; therefore, each of the cross-sections of the surface along the axes has a minimum point here and has to stay on one side of the tangent plane. However, the surface might still cross the plane in other directions. For example, this still might be a saddle point! We invoke the Second Derivative Test from Chapter 9 to resolve this. We consider the Hessian matrix (discussed in Chapter 18) of $G$. It is the $2\times 2$ matrix of the four second partial derivatives of $G$: $$H(x,y) = \begin{pmatrix}\frac{\partial^2 G}{\partial x^2} &\frac{\partial^2 G}{\partial x \partial y}\\ \frac{\partial^2 G}{\partial y \partial x} &\frac{\partial^2 G}{\partial y^2}\end{pmatrix}=\begin{pmatrix} \gamma \frac{1}{x^2} &0\\ 0& \alpha \frac{1}{y^2}\end{pmatrix}.$$ Here, in addition, we have the mixed second derivatives: $$\frac{\partial^2 G}{\partial x \partial y}(x,y)=\frac{\partial^2 G}{\partial y \partial x}(x,y)=0 .$$ Next, we look at the determinant of the Hessian: $$D(x,y)=\det(H(x,y)) = \left( \gamma \frac{1}{x^2}\right)\cdot \left( \alpha \frac{1}{y^2}\right)= \frac{\alpha \gamma}{x^2y^2}>0.$$ It's positive! Therefore, the point is a minimum. We conclude that every level curve of $G$, i.e., a solution of the system, goes around the equilibrium and comes back to create a cycle. An easier, but more ad hoc, way to reach this conclusion is to imagine that a solution starts on, say, the line $y=\alpha/\beta$ at $x=x_0$ to the right of the equilibrium and then comes back to the line at $x=x_1$. Since this is a level curve, we have $G(x_0,\alpha/\beta)=G(x_1,\alpha/\beta)$. If $x_0\ne x_1$, then, according to Rolle's Theorem from Chapter 9, we would have $\frac{\partial G}{\partial x}=0$ somewhere between them, contradicting the fact that $\frac{\partial G}{\partial x}>0$ along this line. Therefore, $x_1=x_0$ and the trajectory is closed. Plotted with the same parameters, this is what these curves look like: Exercise.
Prove that the predator-prey discrete model, i.e., Euler's method for this system of ODEs, produces solutions that spiral away from the equilibrium. Hint: image below. Vector fields and systems of ODEs Numerous processes are modeled by systems of ODEs. For example, if little flags are placed on the lawn, then their directions taken together represent a system of ODEs, while the (invisible) air flow is the solutions of this system. A similar idea is used to model a fluid flow. The dynamics of each particle is governed by the velocity of the flow, at each location, the same at every moment of time. To solve such a system would require tracing the path of every particle of the liquid. Let's review the discrete model of a flow: given a flow on a plane, trace a single particle of this stream. For both coordinates, $x$ and $y$, the following table is being built. The initial time $t_0$ and the initial location $p_0$ are placed in the first row of the spreadsheet. As we progress in time and space, new numbers are placed in the next row of our spreadsheet: $$t_n,\ v_n,\ p_n,\ n=1,2,3,...$$ The following recursive formulas are used: $t_{n+1}=t_n+\Delta t$. $p_{n+1}=p_n+v_{n+1}\cdot \Delta t$. The result is a growing table of values: $$\begin{array}{c|c|c|c|c|c} &\text{iteration } n&\text{time }t_n&&\text{velocity }v_n&\text{location }p_n\\ \hline \text{initial:}&0&3.5&&--&22\\ &1&3.6&&33&25.3\\ &...&...&&...&...\\ &1000&103.5&&4&336\\ &...&...&&...&...\\ \end{array}$$ So, instead of two (velocity -- location, as before), there will be four main columns when the motion is two-dimensional and six when it is three-dimensional: $$\begin{array}{c|c|cc|cc|c} \text{}&\text{time}&\text{horiz.}&\text{horiz.}&\text{vert.}&\text{vert.}&...\\ \text{} n&\text{}t&\text{vel. }x'&\text{loc. }x&\text{vel. }y'&\text{loc. 
}y&...\\ \hline 0&3.5&--&22&--&3&...\\ 1&3.6&33&25.3&4&3.5&...\\ ...&...&...&...&...&...&...\\ 1000&103.5&4&336&66&4&...\\ ...&...&...&...&...&...&...\\ \end{array}$$ Example. Recall the examples of such flows. If the velocity of the flow is proportional to the location: $$v_{n+1}=.2\cdot p_n,$$ for both horizontal and vertical, the result is particles flying away from the center, faster and faster: If the horizontal velocity is proportional to the vertical location and the vertical velocity proportional to the negative of the horizontal location, the result resembles rotation: Now, suppose the velocity comes from explicit formulas as functions of the location: $$u=f(x,y),\ v=g(x,y),$$ are there explicit formulas for the location as a function of time: $$x=x(t),\ y=y(t)?$$ We assume that there is a version of our recursive relation, $$p_{n+1}=p_n+v_{n+1}\cdot \Delta t,$$ for every $\Delta t>0$ small enough. Then our two functions have to satisfy for $x$: $$v_n=f(p_n) \text{ and } p_n=x(t_n),$$ and for $y$: $$v_n=g(p_n) \text{ and } p_n=y(t_n).$$ We substitute these two, as well as $t=t_n$, into the recursive formulas for $p_{n+1}$ for $x$: $$x(t+\Delta t)=x(t)+f(x(t+\Delta t),y(t+\Delta t))\cdot \Delta t,$$ and for $y$: $$y(t+\Delta t)=y(t)+g(x(t+\Delta t),y(t+\Delta t))\cdot \Delta t.$$ Then, we have for $x$: $$\frac{x(t+\Delta t)-x(t)}{\Delta t}=f(x(t+\Delta t),y(t+\Delta t)),$$ and for $y$: $$\frac{y(t+\Delta t)-y(t)}{\Delta t}=g(x(t+\Delta t),y(t+\Delta t)).$$ Taking the limit over $\Delta t\to 0$ gives us the following system: $$\begin{cases} x'(t)&=f(x(t),y(t)),\\ y'(t)&=g(x(t),y(t)), \end{cases}$$ provided $x=x(t),\ y=y(t)$ are differentiable at $t$ and $u=f(x,y),\ v=g(x,y)$ are continuous at $(x(t),y(t))$.
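The recursion above is easy to run outside of a spreadsheet too. Here is a minimal Python sketch (our own code, not from the text) that traces one particle of the rotation-like flow from the example, with horizontal velocity $u=y$ and vertical velocity $v=-x$:

```python
# Trace one particle of a planar flow with the recursion from the text:
# x_{n+1} = x_n + u * dt,  y_{n+1} = y_n + v * dt,
# where the velocity (u, v) is read off the vector field at the old location.
def trace(f, g, x0, y0, dt, n):
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        x, y = x + f(x, y) * dt, y + g(x, y) * dt
        path.append((x, y))
    return path

# The rotation-like flow: u = y, v = -x, starting at (1, 0).
path = trace(lambda x, y: y, lambda x, y: -x, 1.0, 0.0, 0.01, 1000)
```

The particle does go around the origin, but computing $x^2+y^2$ along the path shows the radius slowly growing: the traced curve is a spiral rather than a circle, which foreshadows the later discussion of Euler's method.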
Now, let's recall from Chapter 21 that a vector field supplies a direction to every location, i.e., there is a vector attached to each point of the plane: $${\rm point} \mapsto {\rm vector}.$$ A vector field in dimension $2$ is in fact any function: $$F : {\bf R}^2 \to {\bf R}^2 ,$$ given by two functions of two variables: $$F(x,y)=<f(x,y),g(x,y)>.$$ Then, the vector field defines a system of two (time-independent) ODEs. Definition. A solution of a system of ODEs is a pair of functions $x=x(t)$ and $y=y(t)$ (a parametric curve), each differentiable on an open interval $I$, such that for every $t$ in $I$ we have: $$\begin{cases} x'(t)&=f(x(t),y(t)),\\ y'(t)&=g(x(t),y(t)), \end{cases}$$ or abbreviated: $$\begin{cases} x'=f(x,y),\\ y'=g(x,y). \end{cases}$$ The vector field of this system plays the role of the slope field of a single ODE. How do we visualize the solutions of such a system? With a single ODE, $$y'=f(t)\ \Longrightarrow\ y=y(t),$$ we simply plot their graphs, i.e., the collections of points $(t,y(t))$, of some of them on the $ty$-plane. This time, the solutions are parametric curves! Their graphs, i.e., the collections of $(t,x(t),y(t))$ lie in the $3$-dimensional $txy$-space. That is why we instead plot their images, i.e., the collections of points $(x(t),y(t))$ on the $xy$-plane. In the theory of ODEs, they are also known as trajectories, or paths. Then the vectors of the vector field are tangent to these trajectories. Since the vector field is independent of $t$, such a representation is often sufficient as explained by the following result. Theorem. If $x=x(t),\ y=y(t)$ is a solution of the system of ODEs then so is $x=x(t+s),\ y=y(t+s)$ for any real $s$. It is also important to be aware of the fact that the theory of systems of ODEs "includes" the theory of single ODEs. Recall, first, how the graph of every function can be represented by the trajectory of a parametric curve: $$y=r(x)\ \longrightarrow\ \begin{cases} x=t,\\ y=r(t).
\end{cases}$$ Similarly, the solutions of every time-independent ODE can be represented by the trajectories of a system of two ODEs: $$y'=g(y) \ \longrightarrow\ \begin{cases} x'&=1,\\ y'&=g(y). \end{cases}$$ Definition. For a given system of ODEs and a given triple $(t_0,x_0,y_0)$, the initial value problem, or an IVP, is $$\begin{cases} x'&=f(x,y),\\ y'&=g(x,y); \end{cases}\quad \begin{cases} x(t_0)&=x_0,\\ y(t_0)&=y_0; \end{cases}$$ and its solution is a solution of the ODE that satisfies the initial condition above. By the last theorem, the value of $t_0$ doesn't matter; it can always be chosen to be $0$. Definition. We say that a system of ODEs satisfies the existence property at a point $(t_0,x_0,y_0)$ when the IVP above has a solution. If your model of a real-life process doesn't satisfy existence, it reflects limitations of your model. It is as if the process starts but never continues. Definition. We say that a system of ODEs satisfies the uniqueness property at a point $(t_0,x_0,y_0)$ if every pair of solutions, $(x_1,y_1),\ (x_2,y_2)$, of the IVP above are equal, $$x_1(t)=x_2(t), \ y_1(t)=y_2(t),$$ for every $t$ in some open interval that contains $t_0$. If your model of a real-life process doesn't satisfy uniqueness, it reflects limitations of your model. It's as if you have all the data but can't predict even the nearest future. Thus systems of ODEs produce families of curves as the sets of their solutions. Conversely, if a family of curves is given by an equation with a single parameter, we may be able to find a system of ODEs for it. Example. The family of vertically shifted graphs, $$y=x^2+C,$$ creates an ODE if we differentiate (implicitly) with respect to $t$: $$y'=2xx'.$$ Since these are just functions of $x$, we can choose $x=t$. This is a possible vector field for this family: $$\begin{cases} x'&=1,\\ y'&=2x. \end{cases}$$ $\square$ Example.
The family of stretched exponential graphs, $$y=Ce^x,$$ creates an ODE: $$y'=Ce^xx'=yx'.$$ This is a possible vector field for this family: $$\begin{cases} x'&=1,\\ y'&=y. \end{cases}$$ $\square$ Example. What about these concentric circles? (In the case when $C=0$, we have the origin.) They are given by $$x^2+y^2=C\ge 0.$$ We differentiate (implicitly) with respect to $t$: $$2xx'+2yy'=0.$$ We choose what $x',\ y'$ might be equal to in order for the two terms to cancel. This is a possible vector field for this family: $$\begin{cases} x'&=-y,\\ y'&=x. \end{cases}$$ $\square$ Example. These hyperbolas are given by these equations: $$xy=C.$$ (In the case when $C=0$, we have the two axes.) Then, we have an ODE: $$x'y+xy'=0.$$ This is a possible vector field for this family: $$\begin{cases} x'&=x,\\ y'&=-y. \end{cases}$$ $\square$ None of the examples have problems with either existence or uniqueness -- in contrast to the corresponding ODEs. The proofs of the following important theorems lie outside the scope of this book. Theorem (Existence). Suppose $(x_0,y_0)$ is a point on the $xy$-plane and suppose $H$ is an open interval that contains $x_0$, and $G$ is an open interval that contains $y_0$. Suppose also that functions $z=f(x,y)$ and $z=g(x,y)$ of two variables are continuous with respect to $x$ and $y$ on $H\times G$. Then the system of ODEs, $$\begin{cases} x'&=f(x,y),\\ y'&=g(x,y). \end{cases}$$ satisfies the existence property at $(t_0,x_0,y_0)$ for any $t_0$. Theorem (Uniqueness). Suppose $(x_0,y_0)$ is a point on the $xy$-plane and suppose $H$ is an open interval that contains $x_0$, and $G$ is an open interval that contains $y_0$. Suppose also that functions $z=f(x,y)$ and $z=g(x,y)$ of two variables are differentiable with respect to $x$ and $y$ on $H\times G$. Then the system of ODEs, $$\begin{cases} x'&=f(x,y),\\ y'&=g(x,y). \end{cases}$$ satisfies the uniqueness property at $(t_0,x_0,y_0)$ for any $t_0$. Discrete systems of ODEs Discrete ODEs approximate and are approximated by continuous ODEs. The same is true for systems.
For example, the discrete system for the predator-prey model produces this almost exactly cyclic path: In other words, Euler's method is capable of tracing solutions very close to the ones of the ODE it came from. Just as in the $1$-dimensional case, the IVP tells us: where we are (the initial condition), and the direction we are going (the ODE). Just as before, the unknown solution is replaced with its best linear approximation. Example. Let's consider again these concentric circles: They are the solutions of the ODEs: $$\begin{cases} x'&=y,\\ y'&=-x; \end{cases}$$ We choose the increment of $t$: $$\Delta t=1.$$ We start with this initial condition: $$t_0=0, \quad x_0=0,\quad y_0=2.$$ We substitute these two numbers into the equations: $$\begin{cases} x'&=2,\\ y'&=0; \end{cases}$$ This is the direction we will follow. The increments are $$\begin{cases} \Delta x=2\cdot \Delta t=2\cdot 1 =2,\\ \Delta y=0\cdot \Delta t=0\cdot 1 =0; \end{cases}$$ Our next location on the $xy$-plane is then: $$\begin{cases} x_1=x_0+\Delta x=0+2=2,\\ y_1=y_0+\Delta y=2+0=2. \end{cases}$$ A new initial condition appears: $$x_1=2,\quad y_1=2.$$ We again substitute these two numbers into the equations: $$\begin{cases} x'&=2,\\ y'&=-2; \end{cases}$$ producing the direction we will follow. The increments are $$\begin{cases} \Delta x=2\cdot \Delta t=2\cdot 1 =2,\\ \Delta y=-2\cdot \Delta t=-2\cdot 1 =-2; \end{cases}$$ Our next location on the $xy$-plane is then: $$\begin{cases} x_2=x_1+\Delta x=2+2=4,\\ y_2=y_1+\Delta y=2+(-2)=0. \end{cases}$$ One more IVP: $$x_2=4,\ y_2=0.$$ The increments are $$\begin{cases} \Delta x=0\cdot \Delta t=0\cdot 1 =0,\\ \Delta y=-4\cdot \Delta t=-4\cdot 1 =-4; \end{cases}$$ Our next location on the $xy$-plane is then: $$\begin{cases} x_3=x_2+\Delta x=4+0=4,\\ y_3=y_2+\Delta y=0-4=-4. \end{cases}$$ These four points form a very crude approximation of one of our circular solutions: They are clearly spiraling away from the origin.
$\square$ In terms of motion, at our current location and current time, we examine the ODE to find the velocity and then move accordingly to the next location. Definition. The Euler solution with increment $\Delta t>0$ of the IVP $$\begin{cases} x'&=f(x,y),\\ y'&=g(x,y); \end{cases}\quad \begin{cases} x(t_0)&=x_0,\\ y(t_0)&=y_0; \end{cases}$$ is the two sequences $\{x_n\}$ and $\{y_n\}$ of real numbers given by: $$\begin{cases} x_{n+1}&=x_n+f(x_n,y_n)\cdot \Delta t,\\ y_{n+1}&=y_n+g(x_n,y_n)\cdot \Delta t; \end{cases}$$ where $t_{n+1}=t_n+\Delta t$. Once again, if we derived our ODEs from a discrete model (via $\Delta t\to 0$), Euler's method will bring us right back to it: $$\newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{cccccc} &&\text{ODEs}\\ &\nearrow&&\searrow\\ \text{discrete model}&&\la{same!}&&\text{Euler's method} \end{array}$$ Example. Let's now carry out this procedure with a spreadsheet. The formulas for $x_n$ and $y_n$ are respectively: $$\texttt{=R[-1]C+R[-1]C[1]*R3C1},$$ $$\texttt{=R[-1]C-R[-1]C[-1]*R3C1}.$$ These are the results: In contrast to the case of a single ODE, the approximations do not behave erratically close to the $x$-axis. The reason is that there is no division by $y$ anymore. $\square$ Example. Let's consider again these hyperbolas: $$xy=C.$$ They are the solutions of the system: $$\begin{cases} x'&=x,\\ y'&=-y; \end{cases}$$ An Euler solution is shown below: However, is this asymptotic convergence toward the $x$-axis or do they merge? $\square$ Qualitative analysis of systems of ODEs Euler's method depends on the value of $h$, the increment of $t$, and, even with smaller and smaller values of $h$, the result remains a mere approximation. Meanwhile, qualitative analysis collects information about the solutions without solving the system -- either analytically or numerically. The result is fully accurate but very broad descriptions of the solutions. Example.
Let's review an example of a single ODE from last section: $$y'=-\tan (y).$$ This is what we conclude about the strip $[-\pi/2,\pi/2]$: for $-\pi/2<y<0$, we have $y'=-\tan y>0$ and, therefore, $y\nearrow$; for $y=0$, we have $y'=-\tan y=0$ and, therefore, $y$ is a constant solution; for $0<y<\pi/2$, we have $y'=-\tan y<0$ and, therefore, $y\searrow$. In fact, every solution $y$ is decreasing (or increasing) throughout its domain. The conclusions are confirmed with Euler's method: We can match this ODE with a system: $$\begin{cases} x'=1,\\ y'=-\tan y. \end{cases}$$ The graphs above are the trajectories of its solutions. $\square$ The directions of a parametric curve are its tangent vectors. Therefore, the directions of the solutions of the system of ODEs: $$\begin{cases} x'=f(x,y),\\ y'=g(x,y), \end{cases}$$ are derived from the signs of the functions in the right-hand side: $$\begin{array}{c|c|c|c|c} &&x'=f(x,y)<0&x'=f(x,y)=0&x'=f(x,y)>0\\ \hline &&\leftarrow&\bullet&\to\\ \hline y'=g(x,y)<0&\downarrow&\swarrow&\downarrow&\searrow\\ \hline y'=g(x,y)=0&\bullet&\leftarrow&\bullet&\rightarrow\\ \hline y'=g(x,y)>0&\uparrow&\nwarrow&\uparrow&\nearrow \end{array}$$ Example. Consider next: $$y'=\cos y \cdot \sqrt{|y|}.$$ We demonstrated that the monotonicity of the solutions varies with the cosine: We can see that all solutions progress forward along the $x$-axis. What if we add variability of the direction of $x$? Consider: $$\begin{cases} x'=\sin y,\\ y'=\cos y .
\end{cases}$$ We conduct the "sign analysis" for both functions in the right-hand side: $$\begin{array}{r|c|c|c} y&x'&x=x(t)&\text{path}\\ \hline 2\pi&0&&\\ &-&x\downarrow&\leftarrow\leftarrow\leftarrow\leftarrow\\ \pi&0&&\\ &+&x\uparrow&\to\to\to\to\\ 0&0&&\\ &-&x\downarrow&\leftarrow\leftarrow\leftarrow\leftarrow\\ -\pi&0&\\ \end{array} \quad \begin{array}{r|c|c|c} y&y'&y=y(t)&\text{path}\\ \hline 3\pi/2&0&\\ &-&y\downarrow&\downarrow\downarrow\downarrow\downarrow\\ \pi/2&0&\\ &+&y\uparrow&\uparrow\uparrow\uparrow\uparrow\\ -\pi/2&0&\\ \end{array}$$ Now put them together: $$\begin{array}{cccccc} \to&\to&\to&\to\\ \nearrow&\nearrow&\nearrow&\nearrow\\ \uparrow&\uparrow&\uparrow&\uparrow\\ \nwarrow&\nwarrow&\nwarrow&\nwarrow\\ \leftarrow&\leftarrow&\leftarrow&\leftarrow\\ \end{array}$$ The results are confirmed with Euler's method: Example. The next one: $$\begin{cases} x'=\sin x,\\ y'=\cos y . \end{cases}$$ We conduct the "sign analysis" of the right-hand side: $$\begin{array}{r|c|ccccccc} &x&0&&\pi&&2\pi&&3\pi\\ \hline y&y' | x'&0&+&0&-&0&+&0\\ \hline 3\pi/2&0&\bullet&\to&\bullet&\leftarrow&\bullet&\to&\bullet\\ &-&\downarrow&\searrow&\downarrow&\swarrow&\downarrow&\searrow&\downarrow\\ \pi/2&0&\bullet&\to&\bullet&\leftarrow&\bullet&\to&\bullet\\ &+&\uparrow&\nearrow&\uparrow&\nwarrow&\uparrow&\nearrow&\uparrow\\ -\pi/2&0&\bullet&\to&\bullet&\leftarrow&\bullet&\to&\bullet\\ &-&\downarrow&\searrow&\downarrow&\swarrow&\downarrow&\searrow&\downarrow\\ -3\pi/2&0&\bullet&\to&\bullet&\leftarrow&\bullet&\to&\bullet\\ \end{array}$$ The results are confirmed with Euler's method: Example. The next one is discontinuous: $$x'=x-y,\ y'=[x+y].$$ For Euler's method we use the $\texttt{FLOOR}$ function: Exercise. Confirm the plot below by analyzing this system: $$x'=y,\ y'=\sin y \cdot y.$$ Suppose the system is time-independent, $$x'=f(x,y),\ y'=g(x,y).$$ Then it is thought of as a flow: liquid in a pond or the air over a surface of the Earth.
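The "sign analysis" above is mechanical enough to delegate to a program. The following Python sketch (our own code; the function name is made up) converts the two signs into one of the direction symbols from the table and samples the system $x'=\sin y,\ y'=\cos y$ at a few levels of $y$:

```python
import math

# One of nine direction symbols from the signs of x' and y'.
def arrow(fx, fy):
    sx = (fx > 0) - (fx < 0)  # sign of x'
    sy = (fy > 0) - (fy < 0)  # sign of y'
    return {(1, 0): '→', (-1, 0): '←', (0, 1): '↑', (0, -1): '↓',
            (1, 1): '↗', (-1, 1): '↖', (1, -1): '↘', (-1, -1): '↙',
            (0, 0): '•'}[(sx, sy)]

# For x' = sin y, y' = cos y the direction depends only on y.
for y in [0.25 * math.pi, 0.75 * math.pi, 1.25 * math.pi, 1.75 * math.pi]:
    print(round(y, 2), arrow(math.sin(y), math.cos(y)))
```

Sampling on a full grid of points instead of a few levels of $y$ reproduces the arrow tables drawn by hand above.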
Then, the right-hand side is recognized as a two-dimensional vector field. In contrast to the $1$-dimensional case with only three main possibilities, we will see a wide variety of behaviors around an equilibrium when the space of locations is a plane. For such a system, the qualitative analysis is much simpler. In fact, the ones above exhibit most of the possible patterns of local behavior. We concentrate on what is going on in the vicinity of a given location $(x,y)=(a,b)$. The first main possibility is $$f(a,b)\ne 0 \text{ or } g(a,b)\ne 0,$$ which is equivalent to $$F(a,b)= <f(a,b),g(a,b)>\ne 0.$$ Then, from the continuity of $f$ and $g$, we conclude that whichever of them is non-zero at $(a,b)$ doesn't change its sign in some disk $D$ that contains $(a,b)$. Then, the solutions with paths located within $D$ proceed in about the same direction: The behavior is "generic". More interesting behaviors are seen around a zero of $F$: $F(a,b)=0\ \Longrightarrow\ (x,y)=(a,b) $ is a stationary solution (an equilibrium). Then the pattern in the vicinity of the point, i.e., an open disk $D$, depends on whether this is a maximum of $f$ or $g$, or a minimum, or neither. Some of the ideas come from dimension $1$. For example, a stable equilibrium, $$y\to a \text{ as } x\to +\infty,$$ is a sink: flow in only. An unstable equilibrium, $$y\to a \text{ as } x\to -\infty,$$ is a source: flow out only. A semi-stable equilibrium could mean that some of the solutions asymptotically approach the equilibrium and others do not. As you can see, there are many more possibilities than in dimension $1$.
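A crude numerical test of the sink/source distinction can be sketched as follows (our own code with made-up names): sample the field on a small circle around the equilibrium and record the sign of the dot product of $F(X)$ with the outward radial direction. All negative suggests flow in only (a sink), all positive flow out only (a source):

```python
import math

def radial_signs(F, a, b, r=0.1, samples=16):
    """Signs of F(X).(X-(a,b)) sampled on a small circle around (a, b)."""
    signs = set()
    for k in range(samples):
        t = 2 * math.pi * k / samples
        x, y = a + r * math.cos(t), b + r * math.sin(t)
        u, v = F(x, y)
        d = u * (x - a) + v * (y - b)
        signs.add((d > 0) - (d < 0))
    return signs

print(radial_signs(lambda x, y: (-x, -y), 0, 0))  # field points inward: a sink
print(radial_signs(lambda x, y: (x, y), 0, 0))    # field points outward: a source
```

A mixed set of signs, or all zeros as for the rotation field $F(x,y)=(y,-x)$, indicates one of the other patterns.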
The vector notation and linear systems We can combine the two variables to form a point, i.e., a location on the plane: $$X=(x,y).$$ We can also use the column-vector notation: $$X=\left[\begin{array}{cc}x \\ y\end{array}\right].$$ Next, the same happens to their derivatives as vectors: $$\frac{\Delta X}{\Delta t}=\left<\frac{\Delta x}{\Delta t}, \frac{\Delta y}{\Delta t}\right>=\left[\begin{array}{cc}\frac{\Delta x}{\Delta t}\\\frac{\Delta y}{\Delta t}\end{array}\right].$$ Then the setup we have been using for real-valued functions reappears: $$\frac{\Delta X}{\Delta t}=F(X),\ X(t_0)=X_0.$$ In this section, our main concern will be the ODEs stated in terms of the derivative: $$X'=<x',y'>=\left[\begin{array}{cc}x'\\y'\end{array}\right],$$ and $$X'=F(X),\ X(t_0)=X_0.$$ All the plotting, however, is done with the discrete ODEs. The "phase space" ${\bf R}^2$ is the space of all possible locations. Then the position of a given particle is a function $X:{\bf R}^+\to {\bf R}^2$ of time $t\ge 0$. Meanwhile, the dynamics of the particle is governed by the velocity of the flow, at each location, the same at every moment of time: the velocity of a particle if it happens to be at point $X$ is $F(X)$. Then either the next position is predicted to be $X+F(X)\cdot \Delta t$ -- that's a discrete model -- or $F(X)$ is just a tangent vector of the trajectory -- that's an ODE. A vector field supplies a direction to every location, i.e., there is a vector attached to each point of the plane: $${\rm point} \mapsto {\rm vector}.$$ A vector field in dimension $2$ is any function: $$F : {\bf R}^2 \to {\bf R}^2 .$$ Furthermore, one can think of a vector field as a time-independent ODE on the plane: $$X'=F(X).$$ The corresponding IVP adds an initial condition: $$X(t_0)=X_0.$$ Definition. A solution of a system of ODEs is a function $X=X(t)$ differentiable on an open interval $I$ such that for every $t$ in $I$ we have: $$X'(t)=F(X(t)).$$ Definition.
For a given system of ODEs and a given $(t_0,X_0)$, the initial value problem, or an IVP, is $$X'=F(X),\quad X(t_0)=X_0.$$ and its solution is a solution of the ODE that satisfies the initial condition above. Definition. The Euler solution with increment $\Delta t>0$ of the IVP: $$X'=F(X),\quad X(t_0)=X_0;$$ is a sequence $\{X_n\}$ of points on the plane given by: $$X_{n+1}=X_n+F(X_n)\cdot \Delta t,$$ where $t_{n+1}=t_n+\Delta t$. All the definitions above remain valid if we think of $X$ as a location in an $N$-dimensional space ${\bf R}^N$. We now concentrate on linear systems (that may be acquired from non-linear ones via linearization). All the functions involved are linear and, therefore, differentiable; both existence and uniqueness are satisfied! This is the case of a linear function $F$. As such, it is given by a matrix and is evaluated via matrix multiplication: $$F(X)=FX=\left[ \begin{array}{ll}a&b\\c&d\end{array}\right] \left[ \begin{array}{ll}x\\y\end{array}\right]=\left[ \begin{array}{ll}ax+by\\cx+dy\end{array}\right].$$ It is written simply as $$X'=FX.$$ The characteristics of this matrix -- the determinant, the trace, and the eigenvalues -- will help us classify such a system. However, the first observation is very simple: $X=0$ is the equilibrium of the system. In fact, what we've learned about systems of linear equations tells us the following. Theorem. When $\det F\ne 0$, $X=0$ is the only equilibrium of the system $X'=FX$; otherwise, there are infinitely many such points. In other words, what matters is whether $F$, as a function, is or is not one-to-one. Example (degenerate). The latter is the "degenerate" case such as the following. Let's consider this very simple system of ODEs: $$\begin{cases}x'&=2x\\ y'&=0\end{cases} \qquad\begin{array}{ll}\Longrightarrow\\ \Longrightarrow\end{array} \begin{cases}x&=Ce^{2t}\\ y&=K\end{cases}.$$ It is easy to solve, one equation at a time. We have exponential growth on the $x$-axis and constant solutions on the $y$-axis.
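This degenerate example can be checked numerically with Euler's method in the vector form just defined. A Python sketch (ours, not from the text), with the matrix $F$ kept as a plain list of rows:

```python
import math

# Euler's method for a linear system X' = FX:
# X_{n+1} = X_n + (F X_n) * dt, with F a 2x2 matrix given as rows.
def euler_linear(F, X0, dt, n):
    x, y = X0
    for _ in range(n):
        u = F[0][0] * x + F[0][1] * y
        v = F[1][0] * x + F[1][1] * y
        x, y = x + u * dt, y + v * dt
    return x, y

F = [[2, 0], [0, 0]]                             # x' = 2x, y' = 0
x, y = euler_linear(F, (1.0, 3.0), 0.001, 1000)  # approximate t = 1
print(x, y, math.e ** 2)                         # x is close to e^2; y stays 3
```

With $\Delta t=0.001$, the computed $x$ lands within a fraction of a percent of the exact value $x(1)=e^2$, while $y$ stays constant, as the formulas $x=Ce^{2t}$, $y=K$ predict.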
Example (saddle). Let's consider this system of ODEs: $$\begin{cases}x'&=-x\\ y'&=4y\end{cases} \qquad\begin{array}{ll}\Longrightarrow\\ \Longrightarrow\end{array} \begin{cases}x&=Ce^{-t}\\ y&=Ke^{4t}\end{cases}.$$ We solve it instantly because the two variables are fully separated. We can think of either of these two solutions of the two ODEs as a solution of the whole system that lives entirely within one of the two axes: We have exponential decay on the $x$-axis and exponential growth on the $y$-axis. The rest of the solutions are seen to tend toward one of these. Since not all of the solutions go toward the origin, it is unstable. This pattern is called a "saddle" because the curves look like the level curves of a function of two variables around a saddle point. Here, the matrix of $F$ is diagonal: $$F=\left[ \begin{array}{ll}-1&0\\0&4\end{array}\right].$$ Algebraically, we have: $$X=\left[ \begin{array}{ll}x\\y\end{array}\right]=\left[ \begin{array}{ll}Ce^{-t}\\Ke^{4t}\end{array}\right]=Ce^{-t}\left[ \begin{array}{ll}1\\0\end{array}\right]+Ke^{4t}\left[ \begin{array}{ll}0\\1\end{array}\right].$$ We have expressed the general solution as a linear combination of the two basis vectors! $\square$ Example (node). A slightly different system is: $$\begin{cases}x'&=2x\\ y'&=4y\end{cases} \qquad\begin{array}{ll}\Longrightarrow\\ \Longrightarrow\end{array} \begin{cases}x&=Ce^{2t}\\ y&=Ke^{4t}\end{cases}.$$ Here, $$F=\left[ \begin{array}{ll}2&0\\0&4\end{array}\right].$$ Once again, either of these two solutions of the two ODEs is a solution of the whole system that lives entirely within one of the two axes (exponential growth on both of the axes) and the rest of the solutions are seen to tend toward one of these.
The slight change to the system produces a very different pattern: Algebraically, we have: $$X=\left[ \begin{array}{ll}x\\y\end{array}\right]=\left[ \begin{array}{ll}Ce^{2t}\\Ke^{4t}\end{array}\right]=Ce^{2t}\left[ \begin{array}{ll}1\\0\end{array}\right]+Ke^{4t}\left[ \begin{array}{ll}0\\1\end{array}\right],$$ a linear combination of the two basis vectors with time-dependent weights. The equilibrium is unstable but changing $2$ and $4$ to $-2$ and $-4$ will reverse the directions of the curves and make it stable. The exponential growth is faster along the $y$-axis; that is why the solutions appear to be tangent to the $x$-axis. In fact, eliminating $t$ gives us $y=x^2$ and similar graphs. When the two coefficients are equal, the growth is identical and the solutions are simply straight lines: What if the two variables aren't separated? The insight is that they can be -- along the eigenvectors of the matrix. Indeed, the basis vectors are the eigenvectors of these two diagonal matrices. Now just imagine that the two pictures are skewed: Let's look at them one at a time. The idea is uncomplicated: the system within the eigenspace is $1$-dimensional. In other words, it is a single ODE and can be solved the usual way. This is how we find solutions. Every solution $X$ that lies within the eigenspace, which is a line, is a $t$-dependent multiple of the eigenvector $V$: $$\begin{array}{lll} X=rV& \Longrightarrow\ X'=(rV)'=r'V \text{ and } F(X)=F(rV)=rFV=r\lambda V\\ &\Longrightarrow\ r'V=\lambda rV\\ &\Longrightarrow\ r'=\lambda r\\ &\Longrightarrow\ r=Ce^{\lambda t}. \end{array}$$ Theorem (Eigenspace solutions). If $\lambda$ is an eigenvalue and $V$ a corresponding eigenvector of a matrix $F$, then $$X=e^{\lambda t}V$$ is a solution of the linear system $X'=FX$. Proof.
To verify, we substitute into the equation and use linearity (of both matrix multiplication and differentiation): $$\begin{array}{lll} X=e^{\lambda t}V& \Longrightarrow \\ \text{left-hand side:}&X'=(e^{\lambda t}V)'=(e^{\lambda t})'V=\lambda e^{\lambda t}V;\\ \text{right-hand side:}&FX=F(e^{\lambda t}V)=e^{\lambda t}FV= e^{\lambda t}\lambda V. \end{array}$$ $\square$ The eigenvalue can be complex! The second idea is to try to express the solution of the general linear system as a linear combination of two solutions found this way. Theorem (Representation). Suppose $V_1$ and $V_2$ are two eigenvectors of a matrix $F$ that correspond to two eigenvalues $\lambda_1$ and $\lambda_2$. Suppose also that the eigenvectors aren't multiples of each other. Then all solutions of the linear system $X'=FX$ are given as linear combinations of non-trivial solutions within the eigenspaces: $$X=Ce^{\lambda_1 t}V_1+Ke^{\lambda_2 t}V_2,$$ with real coefficients $C$ and $K$. Proof. Since these solutions cover the whole plane, the conclusion follows from the uniqueness property. $\blacksquare$ Definition. For a given eigenvector $V$ with eigenvalue $\lambda$, we will call $e^{\lambda t}V$ a characteristic solution. Exercise. Show that when all eigenvectors are multiples of each other, the formula won't give us all the solutions. Classification of linear systems We consider a few systems with non-diagonal matrices. The computations of the eigenvalues and eigenvectors come from Chapter 24. Example (degenerate). Let's consider a more general system of ODEs: $$\begin{cases}x'&=x&+2y,\\ y'&=2x&+4y,\end{cases} \ \Longrightarrow\ F=\left[ \begin{array}{cc}1&2\\2&4\end{array}\right].$$ Euler's method shows the following solutions: It appears that the system has (exponential) growth in one direction and constant in another. What are those directions? Linear algebra helps. 
First, the determinant is zero: $$\det F=\det\left[ \begin{array}{cc}1&2\\2&4\end{array}\right]=1\cdot 4-2\cdot 2=0.$$ That's why there is a whole line of points $X$ with $FX=0$. These are stationary points. To find them, we solve this equation: $$\begin{cases}x&+2y&=0,\\ 2x&+4y&=0,\end{cases} \ \Longrightarrow\ x=-2y.$$ We have, then, eigenvectors corresponding to the zero eigenvalue $\lambda _1=0$: $$V_1=\left[\begin{array}{cc}2\\-1\end{array}\right]\ \Longrightarrow\ FV_1=0 .$$ So, second, there is only one non-zero eigenvalue: $$\det (F-\lambda I)=\det\left[ \begin{array}{cc}1-\lambda&2\\2&4-\lambda\end{array}\right]=\lambda^2-5\lambda=\lambda(\lambda-5).$$ Let's find the eigenvectors for $\lambda_2=5$. We solve the equation: $$FV=\lambda_2 V,$$ as follows: $$FV=\left[ \begin{array}{cc}1&2\\2&4\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=5\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}x&+2y&=5x,\\ 2x&+4y&=5y,\end{cases} \ \Longrightarrow\ \begin{cases}-4x&+2y&=0,\\ 2x&-y&=0,\end{cases} \ \Longrightarrow\ y=2x.$$ This line is the eigenspace. We choose the eigenvector to be: $$V_2=\left[\begin{array}{cc}1\\2\end{array}\right].$$ Every solution starts (asymptotically) at the line $y=-x/2$ of stationary points and continues along this vector. It is a linear combination of the two eigenvectors: $$X=CV_1+Ke^{5t}V_2=C\left[\begin{array}{cc}2\\-1\end{array}\right]+Ke^{5t}\left[\begin{array}{cc}1\\2\end{array}\right].$$ $\square$ Exercise. Find the line of stationary solutions. Example (saddle). Let's consider this system of ODEs: $$\begin{cases}x'&=x&+2y,\\ y'&=3x&+2y.\end{cases} $$ Here, the matrix of $F$ is not diagonal: $$F=\left[ \begin{array}{cc}1&2\\3&2\end{array}\right].$$ Euler's method shows the following: The two lines the solutions appear to converge to are the eigenspaces.
Let's find them: $$\det (F-\lambda I)=\det \left[ \begin{array}{cc}1-\lambda&2\\3&2-\lambda\end{array}\right]=\lambda^2-3\lambda-4.$$ Therefore, the eigenvalues are $$\lambda_1=-1,\ \lambda_2=4.$$ Now we find the eigenvectors. We solve the two equations: $$FV_k=\lambda_k V_k,\ k=1,2.$$ The first: $$FV_1=\left[ \begin{array}{cc}1&2\\3&2\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=-1\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}x&+2y&=-x,\\ 3x&+2y&=-y,\end{cases} \ \Longrightarrow\ \begin{cases}2x&+2y&=0,\\ 3x&+3y&=0,\end{cases} \ \Longrightarrow\ x=-y.$$ We choose $$V_1=\left[\begin{array}{cc}1\\-1\end{array}\right].$$ Every solution within this eigenspace (the line $y=-x$) is a multiple of this characteristic solution: $$X_1=e^{\lambda_1t}V_1=e^{-t}\left[\begin{array}{cc}1\\-1\end{array}\right].$$ The second eigenvalue: $$FV_2=\left[ \begin{array}{cc}1&2\\3&2\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=4\left[\begin{array}{cc}x\\y\end{array}\right].$$ We have the following system: $$\begin{cases}x&+2y&=4x,\\ 3x&+2y&=4y,\end{cases} \ \Longrightarrow\ \begin{cases}-3x&+2y&=0,\\ 3x&-2y&=0,\end{cases}\ \Longrightarrow\ x=2y/3.$$ We choose $$V_2=\left[\begin{array}{cc}1\\3/2\end{array}\right].$$ Every solution within this eigenspace (the line $y=3x/2$) is a multiple of this characteristic solution: $$X_2=e^{\lambda_2t}V_2=e^{4t}\left[\begin{array}{cc}1\\3/2\end{array}\right].$$ The two solutions $X_1$ and $X_2$, as well as $-X_1$ and $-X_2$, are shown below: The general solution is a linear combination of these two basic solutions: $$X=Ce^{\lambda_1t}V_1+Ke^{\lambda_2t}V_2=Ce^{-t}\left[\begin{array}{cc}1\\-1\end{array}\right]+Ke^{4t}\left[\begin{array}{cc}1\\3/2\end{array}\right]=\left[\begin{array}{cc}Ce^{-t}+Ke^{4t}\\-Ce^{-t}+3/2Ke^{4t}\end{array}\right],$$ i.e., $$\begin{cases}x&=Ce^{-t}&+Ke^{4t},\\ y&=-Ce^{-t}&+3/2Ke^{4t}.\end{cases} $$ The equilibrium is
unstable. $\square$ Example (node). Let's consider this system of ODEs: $$\begin{cases}x'&=-x&-2y,\\ y'&=x&-4y.\end{cases} $$ Here, the matrix of $F$ is not diagonal: $$F=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right].$$ Euler's method shows the following: The analysis starts with the characteristic polynomial: $$\det (F-\lambda I)=\det \left[ \begin{array}{cc}-1-\lambda&-2\\1&-4-\lambda\end{array}\right]=\lambda^2+5\lambda+6.$$ Therefore, the eigenvalues are $$\lambda_1=-3,\ \lambda_2=-2.$$ To find the eigenvectors, we solve the two equations: $$FV_k=\lambda_k V_k,\ k=1,2.$$ The first: $$FV_1=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=-3\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}-x&-2y&=-3x,\\ x&-4y&=-3y,\end{cases} \ \Longrightarrow\ \begin{cases}2x&-2y&=0,\\ x&-y&=0,\end{cases} \ \Longrightarrow\ x=y.$$ We choose $$V_1=\left[\begin{array}{cc}1\\1\end{array}\right].$$ Every solution within this eigenspace (the line $y=x$) is a multiple of this characteristic solution: $$X_1=e^{\lambda_1t}V_1=e^{-3t}\left[\begin{array}{cc}1\\1\end{array}\right].$$ The second eigenvalue: $$FV_2=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=-2\left[\begin{array}{cc}x\\y\end{array}\right].$$ We have the following system: $$\begin{cases}-x&-2y&=-2x,\\ x&-4y&=-2y,\end{cases} \ \Longrightarrow\ \begin{cases}x&-2y&=0,\\ x&-2y&=0,\end{cases}\ \Longrightarrow\ x=2y.$$ We choose $$V_2=\left[\begin{array}{cc}2\\1\end{array}\right].$$ Every solution within this eigenspace (the line $y=x/2$) is a multiple of this characteristic solution: $$X_2=e^{\lambda_2t}V_2=e^{-2t}\left[\begin{array}{cc}2\\1\end{array}\right].$$ The general solution is a linear combination of these two basic solutions:
$$X=Ce^{\lambda_1t}V_1+Ke^{\lambda_2t}V_2=Ce^{-3t}\left[\begin{array}{cc}1\\1\end{array}\right]+Ke^{-2t}\left[\begin{array}{cc}2\\1\end{array}\right].$$ The equilibrium is stable. $\square$ Definition. For a linear system $X'=FX$, the equilibrium solution $X_0=0$ is called a stable node if every other solution $X$ satisfies: $$X(t)\to 0\text{ as } t\to +\infty \text{ and }||X(t)||\to \infty \text{ as } t\to -\infty ;$$ and an unstable node if $$X(t)\to 0\text{ as } t\to -\infty \text{ and }||X(t)||\to \infty \text{ as } t\to +\infty ;$$ provided no $X$ makes a full rotation around $0$. Definition. For a linear system $X'=FX$, the equilibrium solution $X_0=0$ is called a saddle if it has solutions $X$ that satisfy: $$||X(t)||\to \infty \text{ as } t\to \pm\infty .$$ Theorem (Classification of linear systems I). Suppose matrix $F$ has two real eigenvalues $\lambda _1$ and $\lambda_2$. Then, if $\lambda _1$ and $\lambda_2$ have the same sign, the system $X'=FX$ has a node, stable when this sign is negative and unstable when this sign is positive; if $\lambda _1$ and $\lambda_2$ have the opposite signs, the system $X'=FX$ has a saddle. Proof. The stability is seen in either of the two characteristic solutions, as $t\to +\infty$: $$||X||=||e^{\lambda t}V||=e^{\lambda t}\cdot ||V||\to\begin{cases}\infty&\text{if }\lambda>0,\\0&\text{if }\lambda<0.\end{cases}$$ According to the last theorem, we have a linear combination of the two characteristic solutions. Then, in the former case, we have one or the other pattern, and in the latter, both. There can be no rotation because no solution can intersect an eigenspace, according to the uniqueness property. $\blacksquare$ Classification of linear systems, continued What if the eigenvalues are complex? 
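The eigenvalue computations in the two examples above, and the sign test in the classification theorem, can be double-checked numerically. A minimal sketch in Python with numpy (an illustration, not part of the original text); it assumes both eigenvalues are real:

```python
import numpy as np

def classify_real_eigenvalues(F):
    """Classify the equilibrium of X' = FX when both eigenvalues are real."""
    lam = np.sort(np.linalg.eigvals(F).real)
    if lam[0] < 0 < lam[1]:
        return "saddle"
    return "stable node" if lam[1] < 0 else "unstable node"

# Saddle example: F has eigenvalues -1 and 4 (opposite signs)
F1 = np.array([[1.0, 2.0], [3.0, 2.0]])
assert np.allclose(np.sort(np.linalg.eigvals(F1).real), [-1.0, 4.0])
print(classify_real_eigenvalues(F1))  # saddle

# Node example: F has eigenvalues -3 and -2 (both negative)
F2 = np.array([[-1.0, -2.0], [1.0, -4.0]])
assert np.allclose(np.sort(np.linalg.eigvals(F2).real), [-3.0, -2.0])
print(classify_real_eigenvalues(F2))  # stable node
```

The asserts confirm the hand computations of the characteristic polynomials above.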
Recall that the characteristic polynomial of matrix $F$ is $$\chi(\lambda)=\det (F-\lambda I)=\lambda^2-\operatorname{tr} F\cdot\lambda+\det F.$$ The discriminant of this quadratic polynomial is $$D=(\operatorname{tr} F)^2-4\det F.$$ When $D>0$, we have two distinct real eigenvalues, the case addressed in the last section. We are faced with complex eigenvalues whenever $D<0$. The transitional case is $D=0$. Example (improper node). In contrast to the last example, a node may be produced by a matrix with repeated (and, therefore, real) eigenvalues: $$F=\left[ \begin{array}{cc}-1&2\\0&-1\end{array}\right].$$ Euler's method shows the following: The analysis starts with the characteristic polynomial: $$\det (F-\lambda I)=\det \left[ \begin{array}{cc}-1-\lambda&2\\0&-1-\lambda\end{array}\right]=(-1-\lambda)^2.$$ Therefore, the eigenvalues are $$\lambda_1=\lambda_2=-1.$$ The only eigenvectors are horizontal. The solution is given by $$X=Ce^{-t}\left[\begin{array}{cc}1\\0\end{array}\right]+K\left( te^{-t}\left[\begin{array}{cc}1\\0\end{array}\right]+e^{-t}\left[\begin{array}{cc}?\\1\end{array}\right] \right).$$ $\square$ Exercise. Finish the computation in the example. When $D<0$, the eigenvalues are complex! Therefore, there are no eigenvectors (not real ones anyway). Does the system $X'=FX$ even have solutions? The theorem about characteristic solutions says yes, they are certain exponential functions... Example (center). Consider $$\begin{cases}x'&=y,\\ y'&=-x.\end{cases} $$ We already know that the solution is found by substitution: $$x''=(x')'=y'=-x.$$ Therefore the solutions are the linear combinations of $\sin t$ and $\cos t$. The result is confirmed with Euler's method (with a limited number of steps to prevent the approximations from spiraling out): According to the theory above, the solutions are supposed to be exponential rather than trigonometric. But the latter are just exponential functions with imaginary exponents.
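The spiraling of the Euler approximations mentioned above can be quantified: for this system one forward Euler step multiplies the squared norm by exactly $1+h^2$, since $(x+hy)^2+(y-hx)^2=(1+h^2)(x^2+y^2)$. A short sketch (the step size and step count are illustrative choices, not from the text):

```python
import math

def euler_step(x, y, h):
    # One forward Euler step for x' = y, y' = -x
    return x + h * y, y - h * x

h, n = 0.1, 100
x, y = 1.0, 0.0  # start on the unit circle
for _ in range(n):
    x, y = euler_step(x, y, h)
r = math.hypot(x, y)
print(r)  # about (1 + h^2)^(n/2): the approximation spirals outward,
          # while the true trajectory stays on the unit circle
```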
Let's make this specific; we have $$F=\left[ \begin{array}{cc}0&1\\-1&0\end{array}\right],$$ and the characteristic polynomial, $$\chi(\lambda)=\lambda^2+1,$$ has these complex roots: $\lambda_{1,2}=\pm i$. To find the first eigenvector, we solve: $$FV_1=\left[ \begin{array}{cc}0&1\\-1&0\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=i\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}&y&=ix\\ -x&&=iy\end{cases} \ \Longrightarrow\ y=ix.$$ We choose a complex eigenvector: $$V_1=\left[\begin{array}{cc}1\\i\end{array}\right],$$ and similarly: $$V_2=\left[\begin{array}{cc}1\\-i\end{array}\right].$$ The general solution is a linear combination -- over the complex numbers -- of these two characteristic solutions: $$X=Ce^{\lambda_1t}V_1+Ke^{\lambda_2t}V_2=Ce^{it}\left[\begin{array}{cc}1\\i\end{array}\right]+Ke^{-it}\left[\begin{array}{cc}1\\-i\end{array}\right].$$ The problem is solved! ...in the complex domain. What is the real part? Let $K=0$. Then the solution is: $$X=Ce^{it}\left[\begin{array}{cc}1\\i\end{array}\right]=C\left[\begin{array}{cc}e^{it}\\ie^{it}\end{array}\right]=C\left[\begin{array}{cc}\cos t+i\sin t\\i(\cos t+i\sin t)\end{array}\right]=C\left[\begin{array}{cc}\cos t+i\sin t\\ -\sin t+i\cos t\end{array}\right].$$ Its real part is: $$\operatorname{Re} X=C\left[\begin{array}{cc}\cos t\\ -\sin t\end{array}\right].$$ These are all circles, one for each value of $C$. $\square$ Example (focus). Let's consider a more complex system of ODEs: $$\begin{cases}x'&=3x&-13y,\\ y'&=5x&+y.\end{cases} $$ Here, the matrix of $F$ is not diagonal: $$F=\left[ \begin{array}{cc}3&-13\\5&1\end{array}\right].$$ Euler's method shows the following: The analysis starts with the characteristic polynomial: $$\chi(\lambda)=\det (F-\lambda I)=\det \left[ \begin{array}{cc}3-\lambda&-13\\5&1-\lambda\end{array}\right]=\lambda^2-4\lambda+68.$$ Therefore, the eigenvalues are $$\lambda_{1,2}=2\pm 8i.$$ Now we find the eigenvectors.
We solve the two equations: $$FV_k=\lambda_k V_k,\ k=1,2.$$ The first: $$FV_1=\left[ \begin{array}{cc}3&-13\\5&1\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=(2+8i)\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}3x&-13y&=(2+8i)x\\ 5x&+y&=(2+8i)y\end{cases} \ \Longrightarrow\ \begin{cases}(1-8i)x&-13y&=0\\ 5x&+(-1-8i)y&=0\end{cases} \ \Longrightarrow\ x=\frac{1+8i}{5}y.$$ We choose $$V_1=\left[\begin{array}{cc}1+8i\\5\end{array}\right].$$ The second eigenvalue: $$FV_2=\left[ \begin{array}{cc}3&-13\\5&1\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=(2-8i)\left[\begin{array}{cc}x\\y\end{array}\right].$$ We have the following system: $$\begin{cases}3x&-13y&=(2-8i)x\\ 5x&+y&=(2-8i)y\end{cases} \ \Longrightarrow\ \begin{cases}(1+8i)x&-13y&=0\\ 5x&+(-1+8i)y&=0\end{cases}\ \Longrightarrow\ x=\frac{1-8i}{5}y.$$ We choose $$V_2=\left[\begin{array}{cc}1-8i\\5\end{array}\right].$$ The general complex solution is a linear combination of the two characteristic solutions: $$Z=Ce^{\lambda_1t}V_1+Ke^{\lambda_2t}V_2=Ce^{(2+8i)t}\left[\begin{array}{cc}1+8i\\5\end{array}\right]+Ke^{(2-8i)t}\left[\begin{array}{cc}1-8i\\5\end{array}\right].$$ Let's now examine a simple real solution. We let $C=1$ and $K=0$: $$\begin{array}{lll} X=\operatorname{Re} Z&= \operatorname{Re}e^{(2+8i)t}\left[\begin{array}{cc}1+8i\\5\end{array}\right]\\ &=e^{2t}\operatorname{Re}e^{8it}\left[\begin{array}{cc}1+8i\\5\end{array}\right]\\ &=e^{2t}\operatorname{Re}(\cos 8t+i\sin 8t)\left[\begin{array}{cc}1+8i\\5\end{array}\right]\\ &=e^{2t}\operatorname{Re}\left[\begin{array}{cc}(\cos 8t+i\sin 8t)(1+8i)\\(\cos 8t+i\sin 8t)5\end{array}\right]\\ &=e^{2t}\operatorname{Re}\left[\begin{array}{cc}\cos 8t+i\sin 8t+8i\cos 8t-8\sin 8t\\5\cos 8t+i5\sin 8t\end{array}\right]\\ &=e^{2t}\left[\begin{array}{cc}\cos 8t-8\sin 8t\\5\cos 8t\end{array}\right].
\end{array}$$ Plotting this parametric curve confirms Euler's method result: Definition. For a linear system $X'=FX$, the equilibrium solution $X_0=0$ is called a stable focus if every other solution $X$ satisfies: $$X(t)\to 0\text{ as } t\to +\infty \text{ and }||X(t)||\to \infty \text{ as } t\to -\infty ;$$ and an unstable focus if $$X(t)\to 0\text{ as } t\to -\infty \text{ and }||X(t)||\to \infty \text{ as } t\to +\infty ;$$ provided every such $X$ makes a full rotation around $0$. Definition. For a linear system $X'=FX$, the equilibrium solution $X_0=0$ is called a center if all solutions are cycles. Theorem (Classification of linear systems II). Suppose matrix $F$ has two complex conjugate eigenvalues $\lambda _1$ and $\lambda_2$. Then, if the real part of $\lambda _1$ and $\lambda_2$ is non-zero, the system $X'=FX$ has a focus, stable when the sign of this number is negative and unstable when it is positive; if the real part of $\lambda _1$ and $\lambda_2$ is zero, the system $X'=FX$ has a center. Proof. The stability is seen in either of the two characteristic solutions, as $t\to +\infty$: $$||X||=||e^{\lambda t}V||=||e^{(a+bi) t}V||=e^{at} |\cos bt+i\sin bt|\cdot ||V||=e^{at}||V||\to\begin{cases}\infty&\text{if }a>0,\\0&\text{if }a<0.\end{cases}$$ $\blacksquare$ The combination of the two classification theorems is illustrated below: Thereby we complete the sequence: elementary algebra $\longrightarrow$ matrix algebra $\longrightarrow$ linear differential equations. To summarize, in order to classify a system of linear ODEs $X'=FX$, where $F$ is a $2\times 2$ matrix and $X$ is a vector on the plane, we classify $F$ according to its eigenvalues and visualize how the locations of these two numbers in the complex plane indicate very different behaviors of the trajectories. (The missing patterns are better illustrated dynamically, as the exact "moments" when one pattern transitions into another.) Exercise.
Point out on the complex plane the locations of the center and the improper node. Exercise. How likely is a given system to fall into each of these five categories? What about the center and the improper node? Exercise. What parameters determine the clockwise vs. counter-clockwise behavior? Example (predator-prey). Let's classify the equilibria of the predator-prey model -- via linearization. Our non-linear system is given by the following: $$\begin{cases} x' = \alpha x &- \beta x y ,\\ y' = \delta xy &- \gamma y , \end{cases}$$ with non-negative coefficients $\alpha,\ \beta,\ \delta,\ \gamma$. In other words, we have $$X'=G(X),\text{ with } G(x,y)=(\alpha x - \beta x y ,\delta xy - \gamma y).$$ The Jacobian of $G$ is $$G'(x,y)= \left[\begin{array}{cc} \frac{\partial}{\partial x}(\alpha x - \beta x y)&\frac{\partial}{\partial y}(\alpha x - \beta x y)\\ \frac{\partial}{\partial x}(\delta xy - \gamma y)&\frac{\partial}{\partial y}(\delta xy - \gamma y) \end{array}\right]=\left[\begin{array}{cc} \alpha - \beta y&-\beta x \\ \delta y &\delta x - \gamma \end{array}\right].$$ The matrix depends on $(x,y)$ because the system is non-linear. By fixing locations $X=A$, we create linear vector ODEs: $$X'=G'(A)X.$$ First, we consider the zero equilibrium, $$x=0,\ y=0.$$ Here, $$F=G'(0,0)=\left[\begin{array}{cc} \alpha - \beta \cdot0&-\beta \cdot0 \\ \delta \cdot0 &\delta\cdot 0 - \gamma \end{array} \right] =\left[\begin{array}{cc} \alpha &0 \\ 0& - \gamma \end{array} \right].$$ The eigenvalues are found by solving the following equation: $$\det \left[\begin{array}{cc} \alpha-\lambda &0 \\ 0& - \gamma-\lambda \end{array} \right]=(\alpha-\lambda)(- \gamma-\lambda)=0.$$ Therefore, $$\lambda_1=\alpha,\ \lambda_2=-\gamma.$$ We have two real eigenvalues of opposite signs. This is a saddle! Indeed, around this point the foxes decline while the rabbits increase in numbers.
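This saddle conclusion at the origin can be spot-checked numerically; the coefficient values below ($\alpha=2/3$, $\beta=4/3$, $\delta=\gamma=1$) are illustrative choices, not from the text:

```python
import numpy as np

# Illustrative coefficient values (not from the text)
alpha, beta, delta, gamma = 2 / 3, 4 / 3, 1.0, 1.0

def jacobian(x, y):
    # Jacobian of G(x, y) = (alpha*x - beta*x*y, delta*x*y - gamma*y)
    return np.array([[alpha - beta * y, -beta * x],
                     [delta * y,        delta * x - gamma]])

lam = np.sort(np.linalg.eigvals(jacobian(0.0, 0.0)).real)
print(lam)  # the eigenvalues -gamma < 0 and alpha > 0: a saddle
assert lam[0] < 0 < lam[1]
```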
The main equilibrium is $$x=\frac{\gamma}{\delta},\ y=\frac{\alpha}{\beta}.$$ Here, $$F=G'\left(\frac{\gamma}{\delta}, \frac{\alpha}{\beta}\right)=\left[\begin{array}{cc} \alpha - \beta \cdot \frac{\alpha}{\beta} &-\beta \cdot \frac{\gamma}{\delta} \\ \delta \cdot \frac{\alpha}{\beta} &\delta\cdot \frac{\gamma}{\delta} - \gamma \end{array}\right] =\left[\begin{array}{cc} 0 &- \frac{\beta\gamma}{\delta} \\ \frac{\delta\alpha}{\beta} &0 \end{array}\right] .$$ The eigenvalues are found by solving the following equation: $$\det \left[\begin{array}{cc} -\lambda &- \frac{\beta\gamma}{\delta} \\ \frac{\delta\alpha}{\beta} &-\lambda \end{array}\right]=\lambda^2+ \alpha\gamma=0.$$ Therefore, $$\lambda_1=\sqrt{\alpha\gamma}\,i,\ \lambda_2=-\sqrt{\alpha\gamma}\,i.$$ We have two purely imaginary eigenvalues. This is a center! Indeed, around this point we have a cyclic behavior. The results match our previous analysis. $\square$ Retrieved from "https://calculus123.com/index.php?title=Systems_of_ODEs&oldid=437"
Removal of complexed mercury by dithiocarbamate grafted on mesoporous silica Authors: K. Venkatesan, T. Srinivasan, and P. Vasudeva Rao Mesoporous silica (MCM-41) with d (100) interplanar distance of 38 Å was prepared by a room temperature process through a low-surfactant templation technique. The surface of MCM-41 was functionalized with a dithiocarbamate (dtc) ligand, named MCM-41-dtc, which was characterized by X-ray diffraction, BET surface area, particle size analysis, ²⁹Si MAS NMR spectra and sulphur analysis. The sorption of mercury from 0.1M HCl solution by MCM-41-dtc was studied as a function of pH, [Hg²⁺], time and temperature. The sorption data obtained at various initial concentrations of mercury were fitted to the Langmuir adsorption model. Mercury speciation in solution and the sorption capacity measurements indicated possible formation of a 1 : 1 square planar complex in the solid phase. A very rapid sorption of mercury was observed in the initial stages of equilibration, which can be attributed to the large surface area, wide porosity and fine particle size of MCM-41-dtc, facilitating facile accessibility of mercury into the inner pores of the sorbent. The enthalpy change accompanying the sorption of mercury was found to decrease from 83.7 to 6.2 kJ/mol when the initial concentration of mercury was increased from 5×10⁻⁴ M to 1.5×10⁻³ M. Pyrolysis study of a hydride-sol-gel silica Part I. Chemical aspects Authors: R. Campostrini, A. Sicurelli, M. Ischia, and G. Carturan A homogeneous silica gel sample bearing ≡Si-H groups was prepared, via the sol-gel method, by hydrolysis of trimethoxysilane under acidic conditions in tetrahydrofuran.
Preliminary NMR experiments in liquid phase indicated an immediate and complete hydrolysis of Si-OCH3, followed by a slower condensation of the Si-OH groups, with maintenance of Si-H bonds. The crude-gel, and samples heated to various temperatures, were characterized by different instrumental methods, including FTIR, density, porosity, and specific surface area. These data indicate that the crude-gel was a dense material which, on heating, increases porosity and surface area up to ca 500°C. The thermal behavior was studied in inert atmosphere by means of coupled thermogravimetric, gas chromatographic, mass spectrometric analyses. The pyrolysis process was described by the fundamental chemical reactions occurring among the siloxane chains of the gel network and by the qualitative and semiquantitative chemical analysis of the compounds released in gas-phase. The proposed pyrolysis mechanism was discussed and interpreted in agreement with the change of the morphological properties of the gel. The pyrolysis data and the mass balance between the compounds released in gas-phase and the solid residue at 1000°C allowed the determination of a nominal chemical formula to describe the crude-gel composition. Kinetic characterization of the reduction of silica supported cobalt catalysts Authors: Y. Wan, J. Li, and D. Chen Abstract The reduction process of silica supported cobalt catalyst was studied by thermal analysis technique. The reduction of the catalyst proceeds in two steps: $$\mathrm{Co_3O_4} + \mathrm{H_2} \to 3\mathrm{CoO} + \mathrm{H_2O},\qquad 3\mathrm{CoO} + 3\mathrm{H_2} \to 3\mathrm{Co} + 3\mathrm{H_2O},$$ which was validated by the TPR and in-situ XRD experiments.
The kinetic parameters of the reduction process were obtained with a comparative method. For the first step, the activation energy, E_a, and the pre-exponential factor, A, were found to be 104.35 kJ mol⁻¹ and 1.18×10⁶∼2.45×10⁹ s⁻¹, respectively. The kinetic model was random nucleation and growth, and the most probable mechanism function was found to be f(α)=(3/2)(1−α)[−ln(1−α)]^{1/3}, or in the integral form: g(α)=[−ln(1−α)]^{2/3}. For the second step, the activation energy, E_a, and the pre-exponential factor, A, were found to be 118.20 kJ mol⁻¹ and 1.75×10⁷∼2.45×10⁹ s⁻¹, respectively. The kinetic model was a second order reaction, and the probable mechanism function was f(α)=(1−α)^{2}, or in the integral form: g(α)=[1−α]^{−1}−1. Part II. Kinetic aspects The thermal behaviour of a sol-gel prepared hydride silica gel (HSiO sample) in the 20–1000°C interval was studied by coupled thermogravimetric-mass spectrometric (TG-MS) analyses carried out at various heating rates. Thermogravimetric curve elaboration allowed the determination of the flex temperatures, corresponding to the maximum release rate of gas-evolved compounds, and to calculate the activation energy of the overall process. The mass spectrometric data, registered in the TG-MS measurements, were treated to discriminate the single reactions accounting for the release of each compound, among which water, dihydrofuran and various silane- and siloxane-derived species. These results were used to calculate the comprehensive activation energy and also those of each of the released species. Different methods of data processing were used to achieve better reliability of calculated activation energies. The discussion focuses on the high extension of kinetic information arising from MS data processing and on the advantage of identifying the contribution of single reactions, although they occur simultaneously during the heating process.
In this respect, good agreement was found between the activation energies of the overall process calculated by separately processing TG and MS data. By processing MS data, the same agreement was observed in the comparison between the activation energy calculated for the overall thermal process and the sum of the weighted activation energies of the reaction of each released compound. Study by thermal analysis of mortars belonging to wall paintings corresponding to some historical buildings of Sevillian art Authors: A. Duran, M. Robador, M. Jimenez de Haro, and Veronica Ramirez-Valle Mortars taken from the walls of three historical buildings in Seville: the Pond of Patio de las Doncellas in the Real Alcazar of Seville, the Monastery of Santa Maria de las Cuevas and the Church of El Salvador were investigated. The techniques employed were thermogravimetry (TG), differential thermal analysis (DTA), XRD, FTIR, SEM with EDAX, Bernard calcimeter, granulometry, mercury intrusion porosimetry and mechanical strength tests. The majority of the studied mortars consist of calcite and silica. Gypsum was detected in samples of four mortars from the Santa Maria de las Cuevas Monastery and two from the El Salvador Church, whose samples were taken from the upper layers of the walls, but gypsum was not detected in the internal mortar layers. Only in two of the samples of the Monastery was the presence of cellulosic material as an organic additive detected. All the studied mortars could be regarded as hydraulic, judging by the ratios between the mass losses due to CO2 and H2O, the hydraulic module, and compressive strength assays. The values obtained by these three techniques are related, showing good agreement between them. These results give useful information that aids in understanding the technology of historic mortars, and how to plan the restoration of these wall paintings.
Selective separation behavior of silica gel for zirconium in simulated high level radioactive liquid waste Authors: Y. Zhang, R. Wang, C. Lin, and X. Zhang Zirconium in simulated high level radioactive liquid waste (HLLW) was selectively adsorbed and separated by a self-made high adsorption activity silica gel. The selective adsorption mechanism was analyzed according to the structural character of the self-made silica gel and the behavior of zirconium in acidic simulated HLLW. The results show that the adsorption selectivity of the self-made silica gel for zirconium is strong, because zirconium has a higher positive charge and the zirconium ion hydrolyzes easily. The distribution coefficient of the self-made silica gel for zirconium is 53.5 ml/g. There are 6.5 (OH)/nm² on the surface of the self-made silica gel, which provide more adsorption-active sites; thus the self-made silica gel has a higher adsorption capacity for zirconium (31.4 mg/g). The elution rate of the adsorbed zirconium from the self-made silica gel by 0.2 mol/l H2C2O4 is more than 99%. The solubility of the self-made silica gel in nitric acid is low, and its chemical stability is very strong. Homogeneously doped silica matrices for trace element standards in neutron activation analysis Volume 39: Issue 1-2 Authors: J. Mitchell, L. Blitzer, T. Kometani, T. Gills, and L. Clark A convenient method for introducing desired trace elements into a silica matrix is described in order to have an ideal standard for instrumental neutron activation analysis.
Studies of mechanism of silica polymerization reactions in the combination of silica sol and potassium sodium waterglass via isothermal heat conduction microcalorimetry Authors: Hebao Tian, Chaocan Zhang, Lili Wu, and Yanjun Chen Isothermal heat conduction microcalorimetry was adopted as a novel characterization method to investigate the polymerization processes of silica when the combination of silica sol and potassium sodium silicate was stirred at 25.0, 35.0, and 45.0 °C. Thermodynamic and kinetic parameters were simultaneously obtained. The enthalpy change was greater at each higher temperature. The reaction orders (m, n) instantaneously varied, up and down in an alternate manner. At 25.0, 35.0, and 45.0 °C, the rate constants were different; the maximum rate constant occurred at 25.0 °C. These phenomena reflect a two-stage oligomeric mechanism of silica monomers. The measurements of particle size showed the complex chemical composition of aqueous silicates, which can be qualitatively designated by the particle size distribution in two parts. The results further indicate that the colloidal particles in the mixed silica sol and silicates first dissolved. Then the "active" silica in the silicates redeposited to make a distinct particle size distribution influenced by K+ and Na+ ions as well as by temperature. Mesoporous micelle templated silica with incorporated C8 and C18 phase Authors: J. Goworek, W. Stefaniak, Agnieszka Kierys, and Mariola Iwan Mesoporous silica material of MCM-41 type was synthesized by co-condensation of highly concentrated octyltriethoxysilane (OTEOS), octadecyltriethoxysilane (ODTEOS) and tetraethoxysilane (TEOS). The obtained hybrid materials were characterized using XRD, TG-DSC and low temperature adsorption/desorption of nitrogen. It was shown that the applied method of synthesis allows to obtain silica of MCM-41 type with a high degree of hydrocarbon saturation. 
Improvement of thermal conductivity of poly(dimethyl siloxane) using silica-coated multi-walled carbon nanotube Authors: Jinho Hong, Jeongwoo Lee, Chang Hong, and Sang Shim In order to enhance the thermal conductivity of MWCNT filled poly(dimethyl siloxane) (PDMS) composites, the MWCNT was coated with a silica layer by a three-step reaction. The composites filled with raw and silica-coated MWCNTs were prepared, and the properties were investigated in terms of the curing characteristics, mechanical properties, and thermal conductivity. Due to the poor compatibility between raw MWCNT and PDMS, raw MWCNT showed poor dispersion uniformity and wettability in PDMS. On the other hand, due to the chemical affinity between silica/MWCNT and PDMS through hydrogen bonding, the silica-coated MWCNT filled PDMS showed improved mechanical properties in terms of tensile strength and 100% modulus, and better interfacial compatibility than raw-MWCNT-incorporated PDMS. Finally, the good wettability of silica/MWCNT in PDMS resulted in higher thermal conductivity, caused by facile phonon movement at the interface, even at smaller MWCNT contents.
The Berry-Esseen bounds of wavelet estimator for regression model whose errors form a linear process with a ρ-mixing Liwang Ding1 & Yongming Li2 In this paper, we consider the nonparametric regression model \(Y_{ni}=g(t_{i})+\varepsilon_{i}\) (\(1\leq i\leq n\)), where \(\varepsilon_{i}=\sum_{j=-\infty}^{\infty}a_{j}e_{i-j}\) and the \(e_{i}\) form an identically distributed ρ-mixing sequence. We obtain the Berry-Esseen bounds of the wavelet estimator of \(g(\cdot)\); the rates of the normal approximation are shown to be \(O(n^{-1/6})\) under certain conditions. The Berry-Esseen theorem concerns the rate at which the distribution of a statistic converges to a limiting distribution, measured by the supremum of the absolute distance between the two distribution functions; bounding this distance as sharply as possible is the central problem. In recent years, Berry-Esseen bounds have been investigated extensively. For instance, Xue [1] discussed the Berry-Esseen bound of an estimator for the variance in a semi-parametric regression model under some mild conditions, Liang and Li [2] studied the asymptotic normality and the Berry-Esseen type bound of the estimator with linear process error, Li et al. [3] derived the Berry-Esseen bounds of the wavelet estimator for a nonparametric regression model with linear process errors generated by φ-mixing sequences, and Li et al. [4] investigated the Berry-Esseen bounds of the wavelet estimator in a semi-parametric regression model with linear process errors. We investigate the estimation of the fixed design nonparametric regression model with a regression function \(g(\cdot)\) defined on \([0,1]\): $$ Y_{i}=g(t_{i})+\varepsilon_{i} \quad (1\leq i\leq n), $$ where \(\{t_{i}\}\) are known fixed design points, which we assume to be ordered \(0\leq t_{1}\leq\cdots\leq t_{n}\leq1\), and \(\{\varepsilon_{i}\}\) are random errors.
It is well known that regression function estimation is an important method in data analysis, with a wide range of applications in filtering and prediction in communication and control systems, in pattern recognition and classification, and in econometrics. The model (1.1) has therefore been studied widely, applied to many practical problems, and various estimation methods have been used to obtain estimators of \(g(\cdot)\). For model (1.1), the following wavelet estimator of \(g(\cdot)\) will be considered: $$ \hat{g}_{n}(t)=\sum_{i=1} ^{n}Y_{i} \int_{A_{i}}E_{m}(t,s)\,ds. $$ The wavelet kernel \(E_{m}(t,s)\) is defined as follows: \(E_{m}(t,s)=2^{m}E_{0}(2^{m}t,2^{m}s)=2^{m}\sum_{k\in Z}\phi(2^{m}t-k)\phi(2^{m}s-k)\), where \(\phi(\cdot)\) is a scaling function, the smoothing parameter \(m=m(n)>0\) depends only on n, and \(A_{i}=[s_{i-1},s_{i}]\) is a partition of the interval \([0,1]\), with \(s_{i}=(1/2)(t_{i}+t_{i+1})\) and \(t_{i}\in A_{i}\), \(1\leq i\leq n\). Wavelets have been used widely in many engineering and technological fields, especially in image processing, and since the 1990s, in order to meet practical demands, some authors began to apply wavelet methods in statistics. Let \(\{X_{i}: i=1,2,\ldots\}\) be a sequence of random variables. Write the ρ-mixing coefficient $$\rho(n)=\sup_{k\in N}\sup_{X\in L_{2}(\mathfrak {F}^{k}_{1}),Y\in L_{2}(\mathfrak{F}^{\infty}_{k+n})} \frac {|\mathrm{E}(X-\mathrm{E}X)(Y-\mathrm{E}Y)|}{\sqrt{\operatorname{Var} X \operatorname{Var} Y}}, $$ where \(\mathfrak{F}^{b}_{a}:=\sigma\{\{X_{i}:a\leq i\leq b\}\}\), and \(L_{2}(\mathfrak{F}^{b}_{a})\) is the set of square integrable random variables measurable with respect to \(\mathfrak{F}^{b}_{a}\). A sequence of random variables \(\{X_{i}: i=1,2,\ldots\}\) is ρ-mixing if \(\rho(n)\to0\) as \(n\to\infty\). Kolmogorov and Rozanov [5] introduced the notion of ρ-mixing sequences.
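To make the estimator (1.2) concrete, here is a small numerical sketch (our illustration, not from the paper) using the Haar scaling function φ = 1 on [0,1); Haar does not satisfy the smoothness assumptions imposed below, but it makes the kernel trivial: E_m(t,s) = 2^m when t and s lie in the same dyadic cell of width 2^{-m}, and 0 otherwise. The choices of g, n, and m are hypothetical:

```python
import math
import random

def haar_weight(t, a, b, m):
    """Integral over s in [a, b] of the Haar kernel E_m(t, s), which equals
    2^m when floor(2^m t) == floor(2^m s) and 0 otherwise."""
    k = math.floor(2 ** m * t)
    lo, hi = k / 2 ** m, (k + 1) / 2 ** m  # dyadic cell containing t
    return 2 ** m * max(0.0, min(b, hi) - max(a, lo))

def wavelet_estimate(t, ts, ys, m):
    # A_i = [s_{i-1}, s_i], with s_i the midpoints between design points
    n = len(ts)
    s = [0.0] + [(ts[i] + ts[i + 1]) / 2 for i in range(n - 1)] + [1.0]
    return sum(ys[i] * haar_weight(t, s[i], s[i + 1], m) for i in range(n))

random.seed(0)
n, m = 500, 4  # illustrative sample size and resolution level
g = lambda t: math.sin(2 * math.pi * t)
ts = [(i + 0.5) / n for i in range(n)]
ys = [g(t) + random.gauss(0.0, 0.1) for t in ts]
print(abs(wavelet_estimate(0.3, ts, ys, m) - g(0.3)))  # small estimation error
```

With these choices the weights ∫_{A_i} E_m(t,s) ds sum to 1, so the estimator is a local average of the Y_i over the dyadic cell containing t.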
Owing to the wide use of ρ-mixing in science, technology, and economics, many scholars have investigated ρ-mixing sequences and obtained many meaningful results: for instance, the central limit theorem, the law of large numbers, the strong and weak invariance principles, and the complete convergence theorem for ρ-mixing sequences; see Shao's work [6, 7]. Recently, Jiang [8] discussed the convergence rates in the law of the logarithm for ρ-mixing random variables, obtaining a sufficient condition for the law of the logarithm and the convergence rates in the law of the iterated logarithm. Chen and Liu [9] obtained necessary and sufficient conditions for complete moment convergence of a sequence of identically distributed ρ-mixing random variables. Zhou and Lin [10] investigated estimation problems in partially linear models for longitudinal data with ρ-mixing error structures: they established the strong consistency of the least squares estimator of the parametric component, and the strong and uniform consistency of the estimator of the nonparametric function under some mild conditions. Tan and Wang [11] studied complete convergence for weighted sums of sequences of non-identically distributed ρ-mixing random variables, and gave a Marcinkiewicz-Zygmund type strong law of large numbers. Assumptions and main results First, we give some basic assumptions as follows: (A1) \(\{\varepsilon_{j}\}_{j\in Z}\) has a linear representation \(\varepsilon_{j}=\sum_{k=-\infty}^{\infty}a_{k}e_{k-j}\), where \(\{a_{k}\}\) is a sequence of real numbers with \(\sum_{k=-\infty}^{\infty}|a_{k}|<\infty\), \(\{e_{j}\}\) are identically distributed, ρ-mixing random variables with \(\mathrm{E}e_{j}=0\), \(\mathrm{E}|e_{j}|^{r}<\infty\) for some \(r>2\), and \(\rho(n)=O(n^{-\lambda})\) for \(\lambda>2\).
(A2) The spectral density function \(f(\omega)\) of \(\{\varepsilon_{i}\}\) satisfies \(0< c_{1}\leq f(\omega)\leq c_{2}<\infty\) for all \(\omega\in(-\pi, \pi]\). (A3) \(\phi(\cdot)\) is σ-regular (\(\phi \in S_{\sigma}\), \(\sigma\in N\)), i.e., for any \(\kappa\leq\sigma\) and any integer l one has \(|d^{\kappa}\phi/dx^{\kappa}|\leq C_{l}(1+|x|)^{-l}\), where \(C_{l}\) depends only on l; \(\phi(\cdot)\) satisfies the Lipschitz condition of order 1 and \(|\hat{\phi}(\xi)-1|=O(\xi)\) as \(\xi\to0\), where ϕ̂ is the Fourier transform of ϕ. (A4) \(g(\cdot)\) satisfies the Lipschitz condition of order 1 and \(g(\cdot)\in H^{\mu}\), \(\mu>1/2\), where \(H^{\mu}\) (\(\mu \in R\)) denotes the Sobolev space of order μ, i.e., \(h\in H^{\mu}\) if \(\int|\hat{h}(w)|^{2}(1+w^{2})^{\mu}\, dw<\infty\), where ĥ is the Fourier transform of h. (A5) \(\max_{1\leq i\leq n}|s_{i}-s_{i-1}-n^{-1}|=o(n^{-1})\). (A6) Set \(p:=p(n)\) and \(q:=q(n)\), and write \(k:=[3n/(p+q)]\), such that \(p+q\leq 3n\), \(qp^{-1}\to0\), and \(\zeta_{in}\to0\), \(i=1,2,3,4\), where \(\zeta _{1n}=qp^{-1} 2^{m}\), \(\zeta_{2n}=p \frac{2^{m}}{n}\), \(\zeta_{3n}=n(\sum_{|j|>n}|a_{j}|)^{2}\), \(\zeta_{4n}= k\rho(q)\). Remark 2.1 (A1) collects the usual conditions on a ρ-mixing sequence, as in Shao [6, 7], and is weaker than the corresponding assumption of Li et al. [3]; (A2)-(A5) are mild regularity conditions for wavelet estimation that are standard in the recent literature, such as Li et al. [3, 4, 12] and Liang and Qi [13]. In (A6), p, q and \(2^{m}\) can be taken to be increasing sequences, and the conditions \(\zeta_{in}\to0\), \(i=1,2,3,4\), are easily satisfied if p, q and \(2^{m}\) are chosen reasonably; see e.g. Liang and Li [2] and Li et al. [3, 4, 12].
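As a numerical illustration of Remark 2.1 (our own check, not from the paper), take \(p=[n^{\tau}]\), \(q=[n^{2\tau-1}]\) and \(2^{m}\approx n^{1-\theta}\) (so that \(2^{m}/n=O(n^{-\theta})\)) with \(\tau<\theta\) and \(\rho(n)=O(n^{-\lambda})\); then \(\zeta_{1n}\), \(\zeta_{2n}\) and \(\zeta_{4n}\) all tend to zero (\(\zeta_{3n}\) depends on the tail of \(\{a_{j}\}\) and is omitted here):

```python
def zetas(n, tau=0.6, theta=0.8, lam=3.0):
    # p = [n^tau], q = [n^(2tau-1)], 2^m ~ n^(1-theta), rho(q) ~ q^(-lam)
    p = int(n ** tau)
    q = max(1, int(n ** (2 * tau - 1)))
    two_m = n ** (1 - theta)
    k = 3 * n // (p + q)
    z1 = (q / p) * two_m       # zeta_1n = q p^{-1} 2^m   ~ n^(tau - theta)
    z2 = p * two_m / n         # zeta_2n = p 2^m / n      ~ n^(tau - theta)
    z4 = k * q ** (-lam)       # zeta_4n = k rho(q)
    return z1, z2, z4

for n in (10 ** 3, 10 ** 5, 10 ** 7):
    print(n, ["%.3g" % z for z in zetas(n)])
```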
In order to facilitate the discussion, write \(\sigma_{n}^{2}:=\sigma _{n}^{2}(t)={\operatorname{Var}}(\hat{g}_{n}(t))\), \(S_{n}:=S_{n}(t)=\sigma_{n}^{-1} \{\hat{g}_{n}(t)-\mathrm{E}\hat{g}_{n}(t)\}\), \(u(n)=\sum_{j=n}^{\infty} \rho(j)\), \(\| X \|_{\beta}= (\mathrm{E}|X|^{\beta})^{1/\beta} \), \(a\wedge b=\min\{a, b\}\). Next, we give the main results as follows. Theorem 2.1 Suppose that (A1)-(A6) hold. Then for each \(t\in[0,1]\) we have $$\sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert \leq C \bigl\{ \zeta_{1n}^{1/3}+ \zeta_{2n}^{1/3}+\zeta _{2n}^{\delta/2}+ \zeta_{3n}^{1/3}+ \zeta_{4n}^{1/4} + u(q) \bigr\} . $$ Corollary 2.1 Under the assumptions of Theorem 2.1, $$\sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert =o(1). $$ Theorem 2.2 Suppose that (A1)-(A6) hold, and assume that $$\frac{2^{m}}{n}=O(n^{-\theta})\quad \textit{and}\quad \sup_{n\geq 1} \bigl(n^{\frac{6\lambda\theta+3\lambda+3\theta+4}{2(6\lambda +7)}}\bigr)\sum_{|j|>n}|a_{j}| < \infty $$ for some \(\frac{2}{9-6\lambda}<\theta\leq1\) and some \(\lambda> 2\). Then $$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert \leq C \bigl\{ \zeta_{1n}^{1/3}+ \zeta_{2n}^{1/3}+ \zeta _{3n}^{1/3}+ \zeta_{4n}^{1/4} + u(q) \bigr\} , \\& \sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda+7}} \bigr). \end{aligned}$$ Observe that, taking \(\theta\approx1\) and letting \(\lambda\to\infty\), it follows that \(\sup_{u}\vert \mathrm{P}(S_{n}(t)\leq u) -\Phi (u)\vert =O(n^{-1/6})\). Some lemmas From (1.2), we can see that $$\begin{aligned} S_{n} = & \sigma_{n}^{-1}\sum _{i=1}^{n} \varepsilon_{i} \int_{A_{i}} E_{m}(t,s) \,ds \\ =&\sigma_{n}^{-1}\sum_{i=1}^{n} \int_{A_{i}} E_{m}(t,s) \,ds \sum _{j=-n}^{n}a_{j}e_{i-j} \\ &{}+ \sigma_{n}^{-1}\sum_{i=1}^{n} \int_{A_{i}} E_{m}(t,s) \,ds \sum _{|j|>n}a_{j}e_{i-j} \\ :=& S_{1n}+S_{2n}.
\end{aligned}$$ Writing $$S_{1n}=\sum_{l=1-n}^{2n} \sigma_{n}^{-1} \Biggl(\sum_{i=\max\{1,l-n\}}^{\min\{n,l+n\}}a_{i-l} \int_{A_{i}} E_{m}(t,s) \,ds \Biggr)e_{l} := \sum_{l=1-n}^{2n}W_{nl}, $$ we set \(S_{1n}=S_{1n}^{\prime}+S_{1n}^{\prime\prime}+S_{1n}^{\prime\prime \prime}\), where \(S_{1n}^{\prime}=\sum_{v=1}^{k}y_{nv}\), \(S_{1n}^{\prime\prime}=\sum_{v=1}^{k}y_{nv}^{\prime}\), \(S_{1n}^{\prime\prime\prime}=y^{\prime}_{n k+1}\), $$\begin{aligned}& y_{nv}=\sum_{i=k_{v}}^{k_{v}+p-1}W_{ni}, \qquad y_{nv}^{\prime}=\sum_{i=l_{v}}^{l_{v}+q-1}W_{ni}, \qquad y_{nk+1}^{\prime}=\sum_{i=k(p+q)-n+1}^{2n}W_{ni}, \\& k_{v}=(v-1) (p+q)+1-n, \qquad l_{v}=(v-1) (p+q)+p+1-n, \qquad v=1,\ldots,k, \end{aligned}$$ so that $$S_{n}=S_{1n}^{\prime}+S_{1n}^{\prime\prime}+S_{1n}^{\prime\prime \prime}+S_{2n}. $$ Next, we give the main lemmas as follows. Lemma 3.1 Let \(\{X_{i}: i=1,2,\ldots\}\) be a ρ-mixing sequence, let \(p_{1}\), \(p_{2}\) be two positive integers, and let \(\eta_{l}:=\sum^{(l-1)(p_{1}+p_{2})+p_{1}}_{(l-1)(p_{1}+p_{2})+1}X_{i}\) for \(1\leq l\leq k\). If \(r>1\), \(s>1\), and \(1/r+1/s=1\), then $$\Biggl\vert \mathrm{E}\exp \Biggl(it\sum_{l=1}^{k} \eta_{l} \Biggr)-\prod_{l=1} ^{k} \mathrm{E}\exp(it\eta_{l})\Biggr\vert \leq C|t|\rho^{1/s}(p_{2}) \sum_{l=1}^{k}\|\eta_{l} \|_{r}. $$ Proof of Lemma 3.1 We can easily see that $$\begin{aligned} I_{0}: =&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m}}(t_{1}, \ldots,t_{m})-\varphi_{\xi _{1}}(t_{1})\cdots \varphi_{\xi_{m}}(t_{m})\bigr\vert \\ \leq&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m}}(t_{1},\ldots,t_{m})- \varphi_{\xi _{1},\ldots,\xi_{m-1}}(t_{1},\ldots,t_{m-1}) \varphi_{\xi_{m}}(t_{m})\bigr\vert \\ &{}+\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-1}}(t_{1},\ldots ,t_{m-1})-\varphi_{\xi_{1}}(t_{1})\cdots \varphi_{\xi _{m-1}}(t_{m-1})\bigr\vert =:I_{1}+I_{2}.
\end{aligned}$$ As \(\exp(ix) = \cos(x)+i \sin(x)\), \(\sin(x+y) = \sin(x) \cos(y)+\cos(x) \sin(y)\), and \(\cos(x+y) = \cos(x) \cos(y)-\sin(x) \sin(y)\), we can get $$\begin{aligned} I_{1} =&\Biggl\vert \mathrm{E} \exp \Biggl(i\sum _{l=1}^{m}t_{l}\xi_{l} \Biggr)-\mathrm{E}\exp \Biggl(i\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr)\mathrm{E} \exp(it_{m}\xi_{m})\Biggr\vert \\ \leq&\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum _{l=1}^{m-1}t_{l}\xi_{l} \Biggr), \cos(t_{m}\xi_{m}) \Biggr)\Biggr\vert +\Biggl\vert \operatorname{Cov} \Biggl(\sin \Biggl(\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr), \sin(t_{m}\xi_{m}) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(\sin \Biggl(\sum _{l=1}^{m-1}t_{l}\xi_{l} \Biggr), \cos(t_{m}\xi_{m}) \Biggr)\Biggr\vert +\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr), \sin(t_{m}\xi_{m}) \Biggr)\Biggr\vert \\ =:&I_{11}+I_{12}+I_{13}+I_{14}. \end{aligned}$$ It follows from Lemma 3.2 and \(|\sin(x)|\leq|x|\) that $$ \begin{aligned} &I_{12}\leq C\rho^{1/s}(1)\bigl\Vert \sin(t_{m} \xi_{m})\bigr\Vert _{r}\leq C\rho^{1/s}(1)|t_{m}| \Vert \xi_{m}\Vert _{r}, \\ &I_{14}\leq C\rho^{1/s}(1)|t_{m}|\Vert \xi_{m}\Vert _{r}. \end{aligned} $$ Noticing that \(\cos(2x) = 1-2\sin^{2}(x)\), one has $$\begin{aligned} I_{11} =&\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr), 1-2\sin^{2}(t_{m} \xi_{m}/2) \Biggr)\Biggr\vert \\ =&2\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum _{l=1}^{m-1}t_{l}\xi_{l} \Biggr), \sin^{2}(t_{m}\xi_{m}/2) \Biggr)\Biggr\vert \leq C\rho^{1/s}(1)E^{1/s}\bigl\vert \sin(t_{m}\xi_{m}/2)\bigr\vert ^{2r} \\ \leq&C\rho^{1/s}(1)E^{1/s}\bigl\vert \sin(t_{m} \xi_{m}/2)\bigr\vert ^{r} \leq C\rho^{1/s}(1)|t_{m}| \| \xi_{m}\|_{r}. \end{aligned}$$ Similarly, $$ I_{13}\leq C\rho^{1/s}(1)|t_{m}|\| \xi_{m}\|_{r}. $$ Therefore, we can obtain $$ I_{1}\leq C\rho^{1/s}(1)|t_{m}|\| \xi_{m}\|_{r}.
$$ Thus, it follows from (3.1) and (3.6) that $$ I_{0}=\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m}}(t_{1}, \ldots,t_{m})-\varphi_{\xi _{1}}(t_{1})\cdots \varphi_{\xi_{m}}(t_{m})\bigr\vert \leq C\rho^{1/s}(1)|t_{m}| \| \xi _{m}\|_{r}+I_{2}. $$ For \(I_{2}\) in (3.7), using the same decomposition as in (3.1) above, it can be found that $$\begin{aligned} I_{2} :=&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-1}}(t_{1}, \ldots,t_{m-1})-\varphi _{\xi_{1}}(t_{1})\cdots \varphi_{\xi_{m-1}}(t_{m-1})\bigr\vert \\ \leq&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-1}}(t_{1},\ldots,t_{m-1})- \varphi _{\xi_{1},\ldots,\xi_{m-2}}(t_{1},\ldots,t_{m-2}) \varphi_{\xi_{m-1}}(t_{m-1})\bigr\vert \\ &{}+\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-2}}(t_{1},\ldots ,t_{m-2})-\varphi_{\xi_{1}}(t_{1})\cdots \varphi_{\xi _{m-2}}(t_{m-2})\bigr\vert =:I_{3}+I_{4}, \end{aligned}$$ and similarly to the proof for \(I_{1}\), we can get $$I_{3}\leq C\rho^{1/s}(1)|t_{m-1}|\| \xi_{m-1}\|_{r}. $$ Then we obtain $$ I_{2}\leq C\rho^{1/s}(1)|t_{m-1}|\| \xi_{m-1}\|_{r}+I_{4}. $$ Combining (3.7)-(3.9) and iterating this argument completes the proof of Lemma 3.1. □ Lemma 3.2 Suppose that (A1)-(A5) hold. Then $$\sigma_{n}^{2}(t)\geq C 2^{m}n^{-1} \quad \textit{and} \quad \sigma _{n}^{-2}(t)\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds\biggr\vert \leq C. $$ Proof From (A1) and (1.2), we can get $$\begin{aligned} \sigma_{n}^{2} =&\sigma^{2}\sum _{i=1}^{n}\biggl( \int_{A_{i}} E_{m}(t,s) \,ds\biggr)^{2}+2\sum _{1\leq i< j\leq n}\mathrm{E}(\varepsilon_{i} \varepsilon_{j}) \int _{A_{i}} E_{m}(t,s) \,ds \int_{A_{j}} E_{m}(t,s) \,ds \\ =& \sigma^{2}\sum_{i=1}^{n} \biggl( \int_{A_{i}} E_{m}(t,s) \,ds\biggr)^{2}+I_{3}.
\end{aligned}$$ By Lemma A.1, we obtain $$\begin{aligned} I_{3} \leq&10\sum_{1\leq i< j\leq n}\rho(j-i)\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds \int_{A_{j}} E_{m}(t,s) \,ds\biggr\vert \| \varepsilon_{i}\|_{2}\|\varepsilon_{j} \|_{2} \\ =& 10\sum_{k=1}^{n-1}\rho(k)\sum _{i=1}^{n-k}\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds \int_{A_{k+i}} E_{m}(t,s) \,ds\biggr\vert \| \varepsilon_{i}\|_{2}\|\varepsilon_{k+i} \|_{2} \\ \leq& 5\sigma^{2}\sum_{k=1}^{n-1} \rho(k)\sum_{i=1}^{n-k} \biggl[ \biggl( \int_{A_{i}} E_{m}(t,s) \,ds \biggr)^{2}+ \biggl( \int_{A_{k+i}} E_{m}(t,s) \,ds \biggr)^{2} \biggr] \\ \leq& 10\sigma^{2}\sum_{k=1}^{n} \rho(k)\sum_{i=1}^{n} \biggl( \int _{A_{i}} E_{m}(t,s) \,ds \biggr)^{2}. \end{aligned}$$ Therefore, applying (A1) and Lemma A.5, we obtain $$\sigma_{n}^{2}\leq\sigma^{2} \Biggl(1+20\sum _{k=1}^{n}\rho(k) \Biggr)\sum _{i=1}^{n} \biggl( \int_{A_{i}} E_{m}(t,s) \,ds \biggr)^{2}\leq C2^{m}n^{-1}. $$ A similar bound was derived by Liang and Qi [13], and by (A2), (A4), and (A5) we have $$\sigma_{n}^{2}(t)\geq C 2^{m}n^{-1} \quad \mbox{and} \quad \sigma _{n}^{-2}(t)\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds\biggr\vert \leq C. $$ Lemma 3.3 Assume that (A1)-(A6) hold. Then (1) \(\mathrm{E}(S_{1n}^{\prime\prime})^{2}\leq C\zeta_{1n}\), \(\mathrm{E}(S_{1n}^{\prime\prime\prime})^{2}\leq C\zeta_{2n}\), \(\mathrm{E}(S_{2n})^{2}\leq C\zeta_{3n}\); (2) \(\mathrm{P}(|S''_{1n}|\geq\zeta_{1n}^{1/3}) \leq C \zeta_{1n}^{1/3}\), \(\mathrm{P}(|S'''_{1n}|\geq\zeta_{2n}^{1/3}) \leq C \zeta_{2n}^{1/3}\), \(\mathrm{P}(|S_{2n}|\geq\zeta_{3n}^{1/3}) \leq C \zeta_{3n}^{1/3}\). Proof Let (A1)-(A6) be satisfied; applying Lemma 3.2 and Lemma A.3(i) in the Appendix, the proof proceeds along the lines of that of Lemma 3.1 in Li et al. [3]. □ Lemma 3.4 Assume that (A1)-(A6) hold, and let \(s_{n}^{2}=\sum_{v=1}^{k}{ \operatorname{Var}(y_{nv})}\). Then $$\bigl\vert s_{n}^{2}-1\bigr\vert \leq C\bigl( \zeta_{1n}^{1/2}+\zeta_{2n}^{1/2}+ \zeta_{3n}^{1/2}+ u(q)\bigr).
$$ Let \(\{\eta_{nv}:v=1, \ldots, k\}\) be independent random variables with \(\eta_{nv}\stackrel{\mathcal{D}}{=}y_{nv}\), \(v=1, \ldots, k\), and set \(T_{n}=\sum_{v=1}^{k}{\eta_{nv}}\). Proof of Lemma 3.4 Let \(\Delta _{n}=\sum_{1\leq i< j\leq k} {\operatorname{Cov}(y_{ni},y_{nj})}\); then \(s_{n}^{2}=\mathrm{E}(S_{1n}^{\prime})^{2}-2\Delta_{n}\). By \(\mathrm{E}(S_{n})^{2}=1\), Lemma 3.3(1), the \(C_{r}\)-inequality, and the Cauchy-Schwarz inequality, we have $$\begin{aligned}& \mathrm{E}\bigl(S^{\prime}_{1n}\bigr)^{2}=\mathrm{E} \bigl[S_{n}-\bigl(S^{\prime\prime}_{1n} +S^{\prime\prime\prime}_{1n}+S_{2n} \bigr)\bigr]^{2}=1 +\mathrm{E}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime }_{1n}+S_{2n} \bigr)^{2}-2\mathrm{E}\bigl[S_{n}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime }_{1n}+S_{2n} \bigr)\bigr], \\& \mathrm{E}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime}_{1n}+S_{2n} \bigr)^{2}\leq 2\bigl[\mathrm{E}\bigl(S^{\prime\prime}_{1n} \bigr)^{2}+\mathrm{E}\bigl(S^{\prime\prime\prime }_{1n} \bigr)^{2}+\mathrm{E}(S_{2n})^{2}\bigr]\leq C( \zeta_{1n}+\zeta_{2n}+\zeta_{3n}), \\& \mathrm{E}\bigl[S_{n}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime }_{1n}+S_{2n} \bigr)\bigr]\leq \mathrm{E}^{{1}/{2}}\bigl(S^{2}_{n} \bigr)\mathrm{E}^{{1}/{2}}\bigl(S^{\prime\prime }_{1n}+S^{\prime\prime\prime}_{1n}+S_{2n} \bigr)^{2}\leq C\bigl(\zeta ^{{1}/{2}}_{1n}+ \zeta^{{1}/{2}}_{2n}+\zeta^{{1}/{2}}_{3n}\bigr). \end{aligned}$$ It follows that $$\begin{aligned} \bigl\vert \mathrm{E}\bigl(S_{1n}^{\prime} \bigr)^{2}-1\bigr\vert =& \bigl\vert \mathrm{E} \bigl(S_{1n}^{\prime \prime} +S_{1n}^{\prime\prime\prime}+S_{2n} \bigr)^{2} - 2{\mathrm{E}}\bigl\{ S_{n} \bigl(S_{1n}^{\prime\prime}+S_{1n}^{\prime\prime\prime}+S_{2n} \bigr)\bigr\} \bigr\vert \\ \leq& C\bigl(\zeta_{1n}^{1/2}+\zeta_{2n}^{1/2}+ \zeta_{3n}^{1/2}\bigr).
\end{aligned}$$ On the other hand, from the basic definition of ρ-mixing, Lemmas 3.2, A.5(iv), and (A1), we can prove that $$\begin{aligned} |\Delta_{n}| \leq& \sum_{1\leq i< j\leq k} \sum_{s_{1}=k_{i}}^{k_{i}+p-1} \sum _{t_{1}=k_{j}}^{k_{j}+p-1}{\bigl|{\operatorname{Cov}}(W_{ns_{1}}, W_{nt_{1}})\bigr|} \\ \leq& \sum_{1\leq i< j\leq k}\sum_{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{t_{1}=k_{j}}^{k_{j}+p-1} \sum _{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \sum_{v=\max\{1,t_{1}-n\}}^{\min\{n,t_{1}+n\}} \sigma _{n}^{-2}\biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds \int_{A_{v}} E_{m}(t,s) \,ds \biggr\vert \\ &{}\cdot|a_{u-s_{1}}a_{v-t_{1}}| \bigl\vert { \operatorname{Cov}}(e_{s_{1}},e_{t_{1}})\bigr\vert \\ \leq& C \sum_{1\leq i< j\leq k}\sum _{s_{1}=k_{i}}^{k_{i}+p-1}\sum_{t_{1}=k_{j}}^{k_{j}+p-1} \sum_{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \sum _{v=\max\{1,t_{1}-n\}}^{\min\{n,t_{1}+n\}} \biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}a_{v-t_{1}}| \\ &{}\cdot\rho(t_{1}-s_{1})\sqrt{\operatorname{Var}(e_{s_{1}}) \operatorname{Var}(e_{t_{1}})} \\ \leq& C \sum_{i=1}^{k-1}\sum _{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \biggl\vert \int _{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}| \sum_{j=i+1}^{k}\sum _{t_{1}=k_{j}}^{k_{j}+p-1}\rho (t_{1}-s_{1}) \\ &{}\cdot \sum_{v=\max\{1,t_{1}-n\}}^{\min\{n,t_{1}+n\}}|a_{v-t_{1}}| \\ \leq& C \sum_{i=1}^{k-1}\sum _{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \biggl\vert \int _{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}|\sum_{j=q} ^{\infty}\rho(j) \\ \leq& C u(q)\sum_{i=1}^{k-1}\sum _{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{u=1}^{n} \biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}| \\ \leq& C u(q)\sum_{u=1}^{n}\biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds\biggr\vert \Biggl(\sum _{i=1}^{k-1}\sum_{s_{1}=k_{i}}^{k_{i}+p-1} |a_{u-s_{1}}| \Biggr) \leq C u(q). 
\end{aligned}$$ Hence, combining (3.10) with (3.11), we can see that $$\bigl\vert s_{n}^{2}-1\bigr\vert \leq\bigl\vert \mathrm{E} \bigl(S_{1n}^{\prime}\bigr)^{2}-1\bigr\vert +2\vert \Delta_{n}\vert \leq C\bigl\{ \zeta_{1n}^{1/2} +\zeta_{2n}^{1/2}+\zeta_{3n}^{1/2}+ u(q) \bigr\} . $$ Lemma 3.5 Assume that (A1)-(A6) hold. Then, with \(T_{n}\) and \(s_{n}\) as in Lemma 3.4, $$\sup_{u}\bigl\vert \mathrm{P} (T_{n}/s_{n} \leq u )-\Phi(u)\bigr\vert \leq C\zeta _{2n}^{\delta/2}. $$ Proof It follows from the Berry-Esseen inequality (Petrov [14]) that $$ \sup_{u}\bigl\vert \mathrm{P} (T_{n}/s_{n} \leq u )-\Phi(u)\bigr\vert \leq C \frac{\sum_{v=1}^{k} \mathrm{E}|y_{nv}|^{r}}{s_{n}^{r}} \quad \mbox{for } r\geq2. $$ By Definition 1.2, \(\sum_{j=0}^{[\log p]}\rho^{2/r}(2^{j})=o(\log p)\), and hence \(\exp (C_{1}\sum_{j=0}^{[\log p]}\rho^{2/r}(2^{j}) )=o(p^{\iota})\) for any \(C_{1}>0\) and any \(\iota>0\) (however small). According to Lemma A.5(i) and Lemma A.2, we can get $$\begin{aligned} \sum_{v=1}^{k} \mathrm{E}|y_{nv}|^{r} =& \sum _{v=1}^{k} \mathrm{E}\Biggl\vert \sum _{j=k_{v}}^{k_{v}+p-1} \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} \sigma_{n}^{-1}a_{i-j} \int_{A_{i}}E_{m}(t,s) \,ds e_{j}\Biggr\vert ^{r} \\ \leq& C\sum_{v=1}^{k}p^{r/2}\exp \Biggl(C_{1}\sum_{j=0}^{[\log p]}\rho \bigl(2^{j}\bigr) \Biggr) \\ &{}\cdot\max_{1\leq j\leq p} \Biggl(\mathrm{E} \Biggl\vert \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} \sigma_{n}^{-1}a_{i-j} \int_{A_{i}}E_{m}(t,s) \,ds e_{j}\Biggr\vert ^{2} \Biggr)^{r/2} \\ &{}+C\sum_{v=1}^{k}p\exp \Biggl(C_{1}\sum_{j=0}^{[\log p]}\rho ^{2/r}\bigl(2^{j}\bigr) \Biggr) \\ &{}\cdot\max_{1\leq j\leq p} \mathrm{E}\Biggl\vert \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} \sigma_{n}^{-1}a_{i-j} \int_{A_{i}}E_{m}(t,s) \,ds e_{j}\Biggr\vert ^{r} \\ \leq& C\sigma_{n}^{-r} \Biggl(\sum _{j=-\infty}^{\infty}|a_{j}| \Biggr)^{r} \sum_{v=1}^{k} \biggl(p^{r/2+\iota} \biggl(\frac {2^{m}}{n}\biggr)^{r}+p^{1+\iota}\biggl( \frac{2^{m}}{n}\biggr)^{r} \biggr) \\ \leq&
Ckp^{r/2+\iota}\biggl(\frac{2^{m}}{n}\biggr)^{r/2}\leq Cnp^{r/2-1}\biggl(\frac {2^{m}}{n}\biggr)^{r/2}\leq Cn\biggl(p \frac{2^{m}}{n}\biggr)^{r/2}=Cn \zeta_{2n}^{r/2}. \end{aligned}$$ Hence, by Lemma 3.4 and combining (3.12) with (3.13), we can get the result. □ Lemma 3.6 Assume that (A1)-(A6) hold. Then $$\sup_{u}\bigl\vert \mathrm{P}\bigl(S_{1n}^{\prime} \leq u\bigr)-\mathrm{P}(T_{n}\leq u)\bigr\vert \leq C \bigl\{ \zeta_{2n}^{\delta/2}+ \zeta_{4n}^{1/4} \bigr\} . $$ Proof Let \(\phi_{1} (t)\) and \(\psi_{1} (t)\) be the characteristic functions of \(S_{1n}^{\prime}\) and \(T_{n}\), respectively. It follows from Lemmas 3.1, 3.2, A.5, and (A1) that $$\begin{aligned} \bigl\vert \phi_{1} (t)-\psi_{1} (t)\bigr\vert =& \Biggl\vert \mathrm{E}\exp\Biggl( {\mathbf{i}}t \sum _{v=1}^{k} y_{nv}\Biggr)-\prod _{v=1}^{k} \mathrm{E}\exp({\mathbf{i}}t y_{nv})\Biggr\vert \\ \leq& C |t| \rho^{1/2}(q) \sum _{v=1}^{k}\|y_{nv}\|_{2} \\ \leq& C |t| \rho^{1/2}(q) \sum_{v=1}^{k} \Biggl\{ \mathrm{E} \Biggl( \sum_{i=k_{v}}^{k_{v}+p-1} \sigma_{n}^{-1} \sum_{j=\max\{1,i-n\}}^{\min\{n,i+n\}} a_{j-i} \int_{A_{j}}E_{m}(t,s) \,ds |e_{i}| \Biggr)^{2} \Biggr\} ^{1/2} \\ \leq& C |t| \rho^{1/2}(q) \Biggl(\sum_{l=-\infty}^{\infty} |a_{l}| \Biggr) \Biggl\{ k\sum_{v=1}^{k} \sum_{i=k_{v}}^{k_{v}+p-1} \biggl\vert \int_{A_{j}}E_{m}(t,s) \,ds\biggr\vert \Biggr\} ^{1/2} \\ \leq& C |t| \bigl(k\rho(q)\bigr)^{1/2} \leq C |t| \zeta^{1/2}_{4n}, \end{aligned}$$ and hence $$ \int_{-T}^{T} \biggl\vert \frac{\phi_{1} (t)-\psi_{1} (t)}{t}\biggr\vert \, dt \leq C \zeta^{1/2}_{4n}T. $$ Note that $$\mathrm{P}(T_{n}\leq u)=\mathrm{P}({T_{n}}/{s_{n}} \leq{u}/{s_{n}}).
$$ Consequently, from Lemma 3.5, it follows that $$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P}(T_{n}\leq u+y)- \mathrm{P}(T_{n}\leq u)\bigr\vert \\& \quad =\sup_{u}\bigl\vert \mathrm{P}\bigl({T_{n}}/{s_{n}} \leq({u+y})/{s_{n}}\bigr)-\mathrm{P}\bigl({T_{n}}/{s_{n}}\leq {u}/{s_{n}}\bigr)\bigr\vert \\& \quad \leq\sup_{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq ({u+y})/{s_{n}})-\Phi(({u+y})/{s_{n}})\bigr\vert +\sup _{u}\bigl\vert \Phi (({u+y})/{s_{n}})- \Phi({u}/{s_{n}})\bigr\vert \\& \qquad {}+\sup_{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq {u}/{s_{n}})-\Phi({u}/{s_{n}})\bigr\vert \\& \quad \leq2\sup_{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq {u}/{s_{n}})-\Phi({u}/{s_{n}})\bigr\vert +\sup_{u}\bigl\vert \Phi(({u+y})/{s_{n}})-\Phi({u}/{s_{n}})\bigr\vert \\& \quad \leq C\bigl\{ \zeta_{2n}^{\delta/2}+{\vert y\vert }/{s_{n}}\bigr\} \leq C\bigl\{ \zeta_{2n}^{\delta/2}+ \vert y\vert \bigr\} . \end{aligned}$$ Therefore $$ T \sup_{u} \int_{|y|\leq c/T} \bigl\vert \mathrm{P}(T_{n} \leq u+y)- \mathrm{P}(T_{n} \leq u )\bigr\vert \, dy \leq C\bigl\{ \zeta_{2n}^{\delta/2}+1/T\bigr\} . $$ Thus, combining (3.14) with (3.15) and taking \(T= \zeta_{4n}^{-1/4} \), we obtain $$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P}\bigl(S_{1n}^{\prime} \leq u\bigr)-\mathrm{P}(T_{n}\leq u)\bigr\vert \\& \quad \leq \int_{-T}^{T}{\biggl\vert \frac{\phi_{1} (t)-\psi_{1} (t)}{t} \biggr\vert \, dt} + T \sup_{u} \int_{|y|\leq c/T} \bigl\vert \mathrm{P}(T_{n}\leq u+y) - \mathrm{P}(T_{n}\leq u)\bigr\vert \, dy \\& \quad \leq C\bigl\{ \zeta^{1/2}_{4n} T + \zeta_{2n}^{\delta/2}+1/T\bigr\} = C\bigl\{ \zeta_{2n}^{\delta/2} + \zeta_{4n}^{1/4} \bigr\} .
\end{aligned}$$ Proofs of the main results Proof of Theorem 2.1 $$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P} \bigl(S^{\prime}_{1n}\leq u\bigr)-\Phi(u)\bigr\vert \\& \quad \leq \sup_{u}\bigl\vert \mathrm{P} \bigl(S^{\prime}_{1n}\leq u\bigr)-\mathrm{P}(T_{n}\leq u)\bigr\vert +\sup_{u}\bigl\vert \mathrm{P}(T_{n} \leq u)-\Phi({u}/{s_{n}})\bigr\vert +\sup_{u}\bigl\vert \Phi({u}/{s_{n}})-\Phi(u)\bigr\vert \\& \quad := J_{1n}+J_{2n}+J_{3n}. \end{aligned}$$ According to Lemma 3.6, Lemma 3.5, and Lemma 3.4, it follows that $$\begin{aligned}& J_{1n}\leq C\bigl\{ \zeta_{2n}^{\delta/2} + \zeta_{4n}^{1/4}\bigr\} , \end{aligned}$$ $$\begin{aligned}& J_{2n}=\sup_{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq {u}/{s_{n}})-\Phi({u}/{s_{n}})\bigr\vert =\sup _{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq u)-\Phi(u)\bigr\vert \leq C\zeta_{2n}^{\delta/2}, \end{aligned}$$ $$\begin{aligned}& J_{3n}\leq C\bigl\vert s_{n}^{2}-1\bigr\vert \leq C\bigl\{ \zeta_{1n}^{1/2} +\zeta_{2n}^{1/2}+ \zeta_{3n}^{1/2}+ u(q)\bigr\} . \end{aligned}$$ Hence, by (4.2)-(4.4) and combining with (4.1), we have $$ \sup_{u}\bigl\vert {\mathrm{P}}\bigl(S_{1n}^{\prime} \leq u\bigr)-\Phi(u)\bigr\vert \leq C \bigl\{ \zeta_{1n}^{1/2}+ \zeta_{2n}^{1/2}+\zeta _{2n}^{\delta/2}+ \zeta_{3n}^{1/2}+ \zeta_{4n}^{1/4} + u(q) \bigr\} . 
$$ Thus, by Lemma A.4, Lemma 3.3(2), and (4.5), we obtain $$\begin{aligned}& \sup_{u}\bigl\vert {\mathrm{P}}(S_{n}\leq u) - \Phi(u)\bigr\vert \\& \quad \leq C \Biggl\{ \sup_{u}\bigl\vert {\mathrm{P}} \bigl(S_{1n}^{\prime}\leq u\bigr)-\Phi (u)\bigr\vert +\sum _{i=1}^{3}\zeta_{in}^{1/3}+{ \mathrm{P}}\bigl(\bigl\vert S_{1n}^{\prime\prime}\bigr\vert \geq \zeta_{1n}^{1/3}\bigr) \\& \qquad {}+{ \mathrm{P}}\bigl(\bigl\vert S_{1n}^{\prime\prime\prime}\bigr\vert \geq\zeta_{2n}^{1/3} \bigr) +{ \mathrm{P}}\bigl(\vert S_{2n}\vert \geq \zeta_{3n}^{1/3}\bigr) \Biggr\} \\& \quad \leq C \bigl\{ \zeta_{1n}^{1/3}+\zeta_{2n}^{1/3}+ \zeta _{2n}^{\delta/2}+ \zeta_{3n}^{1/3}+ \zeta_{4n}^{1/4} + u(q) \bigr\} , \end{aligned}$$ which completes the proof of Theorem 2.1. □ Proof of Corollary 2.1 By (A1), \(\sum_{j=1}^{\infty} \rho(j)<\infty\), so \(u(q)\to0\), and Corollary 2.1 follows from Theorem 2.1. □ Proof of Theorem 2.2 Let \(p=[n^{\tau}]\), \(q=[n^{2\tau-1}]\), and take \(\tau=\frac{1}{2}+\frac{8\theta-1}{2(6\lambda+7)}\) with \(\frac{2}{9-6\lambda}<\theta\leq1\), so that \(\tau<\theta\). Consequently, $$\begin{aligned}& \zeta_{1n}^{1/3}=\zeta _{2n}^{1/3}=O \bigl(n^{-\frac{\theta-\tau}{3}} \bigr) =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} \bigr), \\& \zeta^{1/3}_{3n}=n^{1/3} \biggl(\sum_{|j|>n}|a_{j}| \biggr)^{2/3} =n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda+7}} \biggl(n^{\frac{6\lambda\theta+3\lambda+3\theta+4}{2(6\lambda +7)}} \sum_{|j|>n}|a_{j}| \biggr)^{2/3} =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} \bigr), \\& \zeta_{4n}^{1/4}=O \bigl(n^{-\frac{\tau+\lambda(2\tau -1)-1}{4}} \bigr) =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} \bigr), \\& u(q)= O \Biggl(\sum_{i=q}^{\infty} i^{-\lambda} \Biggr) = O \bigl( q^{-\lambda+1} \bigr) = O \bigl( n^{-(2\tau-1)(\lambda-1)} \bigr)=O \bigl(n^{-\frac {(8\theta-1)(\lambda-1)}{6\lambda+7}} \bigr).
\end{aligned}$$ Finally, since \(\theta>\frac{2}{9-6\lambda}\) implies \(\frac{(8\theta-1)(\lambda-1)}{6\lambda+7}>\frac{\lambda(2\theta -1)+(\theta-1)}{6\lambda+7}\), we have \(u(q)=O (n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} )\), and the desired result follows immediately. □ References 1. Xue, LG: Berry-Esseen bound of an estimate of error variance in a semiparametric regression model. Acta Math. Sin. 48(1), 157-170 (2005) 2. Liang, HY, Li, YY: A Berry-Esseen type bound of regression estimator based on linear process errors. J. Korean Math. Soc. 45(6), 1753-1767 (2008) 3. Li, YM, Wei, CD, Xin, GD: Berry-Esseen bounds of wavelet estimator in a regression with linear process errors. Stat. Probab. Lett. 81(1), 103-111 (2011) 4. Li, YM, Guo, JH, Yang, SC: The Berry-Esseen bounds of wavelet estimators for semiparametric regression model whose errors form a linear process with mixing innovations. Acta Math. Appl. Sin. 36(6), 1021-1036 (2013) 5. Kolmogorov, AN, Rozanov, UA: On the strong mixing conditions for stationary Gaussian processes. Theory Probab. Appl. 5(2), 204-208 (1960) 6. Shao, QM: Almost sure convergence properties of ρ-mixing sequences. Acta Math. Sin. 32(3), 377-393 (1989) 7. Shao, QM: Maximal inequalities for partial sums of ρ-mixing sequences. Ann. Probab. 23, 948-965 (1995) 8. Jiang, DY: On the convergence rates in the law of iterated logarithm of ρ-mixing sequences. Math. Appl. 15(3), 32-37 (2002) 9. Chen, PY, Liu, XD: Complete moment convergence for sequences of identically distributed ρ-mixing random variables. Acta Math. Sin. 51(2), 281-290 (2008) 10. Zhou, XC, Lin, JG: Strong consistency of estimators in partially linear models for longitudinal data with mixing-dependent structure. J. Inequal. Appl. 2011, 112 (2011) 11. Tan, XL, Wang, M: Strong convergence results for weighted sums of ρ-mixing random variables sequences. J. Jilin Univ. Sci. Ed.
52(5), 927-932 (2014) 12. Li, YM, Yin, CM, Wei, CD: The asymptotic normality for φ-mixing dependent errors of wavelet regression function estimator. Acta Math. Appl. Sin. 31(6), 1016-1055 (2008) 13. Liang, HY, Qi, YY: Asymptotic normality of wavelet estimator of regression function under NA assumptions. Bull. Korean Math. Soc. 44(2), 247-257 (2007) 14. Petrov, VV: Limit Theorems of Probability Theory. Oxford University Press, New York (1995) 15. Yang, SC: Moment inequality for mixing sequences and nonparametric estimation. Acta Math. Sin. 40(2), 271-279 (1997) 16. Yang, SC: Uniformly asymptotic normality of the regression weighted estimator for negatively associated samples. Stat. Probab. Lett. 62, 101-110 (2003) Acknowledgements This project is supported by the National Natural Science Foundation of China (11461057). Author information School of Information and Statistics, Guangxi University of Finance and Economics, Nanning, 530003, P.R. China Liwang Ding School of Mathematics and Computer Science, Shangrao Normal University, Shangrao, 334001, P.R. China Yongming Li Correspondence to Liwang Ding. The two authors contributed equally and significantly to writing this article. Both authors read and approved the final manuscript. Liwang Ding (1985-), male, lecturer, graduate, majoring in probability theory and mathematical statistics. Appendix Lemma A.1 (Shao [7]) Let \(\{X_{i}: i\geq1\}\) be a ρ-mixing sequence, \(s, t>1\), and \(1/s+1/t=1\). If \(X\in L_{s}(\mathcal{F}^{k}_{1})\), \(Y\in L_{t}(\mathcal{F}^{\infty}_{k+n})\), then $$|\mathrm{E}XY-\mathrm{E}X\mathrm{E}Y|\leq10\rho^{2(\frac{1}{s}\wedge\frac{1}{t})}(n)\| X\| _{s}\| Y\|_{t}. $$ Lemma A.2 (Shao [7]) Assume that \(\mathrm{E}X_{i}=0\) and \(\| X_{i}\|_{q}<\infty\) for some \(q\geq2\).
Then there exists a positive constant \(K=K(q,\rho(\cdot))\), depending only on q and \(\rho(\cdot)\), such that for any \(k\geq0\), \(n\geq1\), $$\begin{aligned} \mathrm{E}\max_{1\leq i\leq n}\bigl\vert S_{k}(i)\bigr\vert ^{q} \leq& Kn^{q/2}\exp \Biggl(K\sum _{i=0}^{[\log n]}\rho\bigl(2^{i}\bigr) \Biggr) \max_{k\leq i\leq k+n}\| X_{i}\|_{2}^{q} \\ &{}+nK\exp \Biggl(K\sum_{i=0}^{[\log n]} \rho^{2/q}\bigl(2^{i}\bigr) \Biggr)\max_{k\leq i\leq k+n} \| X_{i}\|_{q}^{q}. \end{aligned}$$ Lemma A.3 (Yang [15]) Let \(\{X_{i}: i=1,2,\ldots\}\) be a ρ-mixing sequence such that \(\rho(n)=O(n^{-\lambda})\) for some \(\lambda>0\), with \(\mathrm{E}X_{i}=0\) and \(\mathrm{E}|X_{i}|^{r}<\infty\) (\(r>1\)). Then for any integer \(m\geq1\) there exists a positive constant \(C(m)\) such that: (i) for \(1< r\leq2\), $$\mathrm{E}\Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{r}\leq C(m)n^{\beta(m)}\sum _{i=1}^{n}\mathrm{E}\vert X_{i}\vert ^{r}; $$ (ii) for \(r>2\), $$\mathrm{E}\Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{r}\leq C(m)n^{\beta(m)} \Biggl\{ \sum _{i=1}^{n}\mathrm{E}\vert X_{i}\vert ^{r}+ \Biggl(\sum_{i=1}^{n} \mathrm{E}X_{i}^{2} \Biggr)^{r/2} \Biggr\} . $$ Here \(\beta(m)=(r-1)\omega^{m}\) with \(0<\omega<1\). Lemma A.4 Suppose that \(\{\zeta_{n}: n\geq1\}\), \(\{\eta_{n}: n\geq1\}\), and \(\{\xi_{n}: n\geq 1\}\) are three random variable sequences, and \(\{\gamma_{n}: n\geq1\}\) is a positive constant sequence with \(\gamma_{n}\to0\). If \(\sup_{u}|F_{\zeta_{n}}(u)-\Phi(u)|\leq C\gamma_{n}\), then for any \(\varepsilon_{1}>0\) and \(\varepsilon_{2}>0\), $$\sup_{u}\bigl\vert F_{\zeta_{n}+\eta_{n}+\xi_{n}}(u)-\Phi(u)\bigr\vert \leq C\bigl\{ \gamma _{n}+\varepsilon_{1}+ \varepsilon_{2}+\mathrm{P}\bigl(\vert \eta_{n}\vert \geq \varepsilon_{1}\bigr)+\mathrm{P}\bigl(\vert \xi _{n}\vert \geq\varepsilon_{2}\bigr)\bigr\} . $$ Lemma A.5 (Li et al.
[12]) Under assumptions (A2)-(A5), we have: (i) \(|\int_{A_{i}} E_{m}(t,s) \,ds|=O(\frac{2^{m}}{n})\), \(i=1,2,\ldots,n\); (ii) \(\sum_{i=1}^{n}(\int_{A_{i}} E_{m}(t,s) \,ds)^{2}=O(\frac{2^{m}}{n})\); (iii) \(\sup_{m}\int_{0}^{1} |E_{m}(t,s)| \,ds\leq C\); (iv) \(\sum_{i=1}^{n}|\int_{A_{i}} E_{m}(t,s) \,ds|\leq C\). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Ding, L., Li, Y.: The Berry-Esseen bounds of wavelet estimator for regression model whose errors form a linear process with ρ-mixing. J. Inequal. Appl. 2016, 107 (2016). https://doi.org/10.1186/s13660-016-1036-x Received: 20 October 2015. Keywords: wavelet estimator; ρ-mixing; Berry-Esseen bound; linear process. Collection: Recent Advances in Inequalities and Applications.
Gold Dissolution from Ore with Iodide-Oxidising Bacteria San Yee Khaing, Yuichi Sugai & Kyuro Sasaki. Scientific Reports, volume 9, Article number: 4178 (2019). Published: 12 March 2019. Subjects: Element cycles. Gold leaching from ore using iodide-iodine mixtures is an alternative to gold cyanidation. This study evaluated the ability of iodide-oxidising bacteria to solubilise gold from ore that was mainly composed of gold, pyrite, galena, and chalcopyrite. Eight bacterial strains were successfully isolated from brine. Those strains were incubated in a liquid culture medium containing ore with a gold content of 0.26 wt.% and a pulp density of 3.3 w/v% to evaluate their ability to mediate the dissolution of gold. The gold was solubilised completely within 30 days of incubation in the iodine-iodide lixiviant solution generated by three of the bacterial strains. One strain in particular completed the dissolution of gold within 5 days of incubation and was identified as a member of the genus Roseovarius. Thus, the possibility of bacterial gold leaching using iodide-oxidising bacteria was successfully demonstrated. Bioleaching gold with iodide would likely be more environmentally sustainable than traditional cyanide leaching; further research is required to evaluate the techno-economic feasibility of this approach. Gold cyanidation remains the major gold-leaching technology in most gold mining operations around the world because of its highly effective gold recovery and economic efficiency. However, a major challenge associated with gold cyanidation is the environmental problems caused by the extremely toxic nature of cyanide. Several gold-leaching agents alternative to cyanide have been discovered and suggested by several researchers1,2,3,4,5.
Of the several proposed alternatives, halides (chloride, bromide and iodide) are best known for high gold-leaching efficiency and low environmental impact compared with other agents6. Among these halogens, iodine forms the most stable complex ions with gold in aqueous solution, because gold-iodide complex ions have lower redox potentials than the other halogenated gold complex ions [\({{\rm{E}}}_{({{\rm{A}}{\rm{u}}}^{+}/{\rm{A}}{\rm{u}})}^{0}=1.83\,{\rm{V}}\), \({{{\rm{E}}}^{0}}_{([{{\rm{A}}{\rm{u}}{\rm{C}}{\rm{l}}}_{2}{]}^{-}/{\rm{A}}{\rm{u}})}=1.15\,{\rm{V}}\), \({{{\rm{E}}}^{0}}_{([{{\rm{A}}{\rm{u}}{\rm{B}}{\rm{r}}}_{2}{]}^{-}/{\rm{A}}{\rm{u}})}=0.96\,{\rm{V}}\), \({{{\rm{E}}}^{0}}_{([{{\rm{A}}{\rm{u}}{\rm{I}}}_{2}{]}^{-}/{\rm{A}}{\rm{u}})}=0.576\,{\rm{V}}\), \({{{\rm{E}}}^{0}}_{([{{\rm{A}}{\rm{u}}{\rm{I}}}_{4}{]}^{-}/{\rm{A}}{\rm{u}})}=0.57\,{\rm{V}}\) (vs. SHE at 25 °C)]6,7. The fundamental aspects of gold leaching using iodide-iodine solutions have been described and discussed in the literature7,8,9,10. In an iodide-iodine mixture, iodine (I2) reacts with iodide (I−) in aqueous solution to form triiodide (I3−) according to reaction (1): $${{\rm{I}}}^{-}+{{\rm{I}}}_{2}\to {{{\rm{I}}}_{3}}^{-}$$ Gold can be oxidised in an iodide-iodine mixture to gold (I) diiodide and/or gold (III) tetraiodide according to reactions (2) and (3): $$2{\rm{Au}}+{{{\rm{I}}}_{3}}^{-}+{{\rm{I}}}^{-}\to 2{[{{\rm{AuI}}}_{2}]}^{-}$$ $$2{\rm{Au}}+3{{{\rm{I}}}_{3}}^{-}\to 2{[{{\rm{AuI}}}_{4}]}^{-}+{{\rm{I}}}^{-}$$ The dissolution rate of gold is directly proportional to the concentrations of iodine and iodide and is unaffected by changes in pH between 2 and 109,10, in contrast to gold cyanidation, which is carried out within a specific alkaline pH range between 10 and 11. Microorganisms are used in the extraction of base and precious metals from primary ores and concentrates through bioleaching and biooxidation.
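The ranking implied by the standard potentials quoted above can be checked directly. The following sketch (ours, not from the paper) orders the gold couples by \(E^{0}\) and confirms that the iodide complexes sit lowest, i.e. the dissolved gold-iodide state is relatively the most stable:

```python
# Standard reduction potentials (V vs. SHE at 25 °C) as quoted in the text
E0 = {
    "Au+/Au": 1.83,
    "[AuCl2]-/Au": 1.15,
    "[AuBr2]-/Au": 0.96,
    "[AuI2]-/Au": 0.576,
    "[AuI4]-/Au": 0.57,
}

# A lower reduction potential of the gold couple means the complexed (dissolved)
# state is relatively more stable, i.e. gold is easier to oxidise in that medium
ranked = sorted(E0, key=E0.get)
print(ranked)  # iodide couples come first, bare Au+ last
```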
Bioleaching refers to microbially catalysed solubilisation of metals from solid materials11. Bioleaching has been successfully applied in the commercial extraction of valuable metals such as copper12,13,14,15,16,17, uranium18,19 and zinc20 from low-grade ores. Biooxidation has been used as a pre-treatment to dissolve sulphide minerals from refractory gold ores before cyanide leaching21,22,23. A number of bacteria have been shown to oxidise I− and help to regenerate the iodide-iodine lixiviant by oxidising I− to I224. A heterotrophic, Gram-negative bacterium, "P. iodooxidans", can oxidise I− to I225 through an extracellular peroxidase with hydrogen peroxide (H2O2) as an electron acceptor26,27:

$$\mathrm{H}_2\mathrm{O}_2 + 2\mathrm{I}^- + 2\mathrm{H}^+ \to \mathrm{I}_2 + 2\mathrm{H}_2\mathrm{O} \quad (4)$$

Recent studies24,28,29 have indicated that certain bacterial strains, such as Roseovarius tolerans, Rhodothalassium salexigens and Roseovarius spp., oxidise I− to I2 through an extracellular oxidase that requires oxygen24:

$$4\mathrm{I}^- + \mathrm{O}_2 + 4\mathrm{H}^+ \to 2\mathrm{I}_2 + 2\mathrm{H}_2\mathrm{O} \quad (5)$$

Although the oxidation of I− with oxygen as the electron acceptor is energetically favourable, the extracellular nature of the enzyme implies that energy conservation by this reaction is not possible24. Iodide-oxidising bacteria (IOB) seem to prefer iodide-rich environments24. The natural gas brine samples used in this study were collected from a natural gas field in Chiba Prefecture, Japan. The brine samples contained not only natural gas but also iodide at high concentrations of around 120 ppm, more than 1,500 times the iodide concentration of seawater. The presence of IOB in iodide-rich brine water collected from natural gas fields has been reported24,29.
Thus, IOB can be isolated from brine water and are capable of oxidising I− into I2, which in turn forms I3− in the culture solution via chemical reactions (1) to (3). Kaksonen et al.30 have proposed the use of biogenic iodide-iodine as a lixiviant solution for gold leaching and the regeneration of the lixiviant with iodide-oxidising microorganisms. Although bioleaching of gold with a biogenic iodine-iodide lixiviant has been suggested, it has yet to be practically demonstrated. To the best of our knowledge, this is the first report examining the bioleaching of gold from gold ores using IOB. The principal objective of the present study is to examine the feasibility of bacterial leaching of gold ores using IOB. This study addresses the isolation of IOB from the natural environment, screening of effective IOB for gold dissolution, and demonstration of gold dissolution from gold ore using IOB-generated iodine-iodide lixiviant.

Isolation of IOB

After incubation of brine samples for 7 days on a solid culture medium, purple colouration was observed around some colonies, as shown in Fig. 1a. This colouration was caused by a chemical reaction between starch and triiodide. Triiodide formed through a chemical reaction between iodide and iodine, as shown by chemical reaction (1), and iodine formed through the oxidation of iodide by IOB. Among those colonies, eight bacterial strains were isolated from the solid culture media based on differences in shape, size, colour and texture. These strains were named a-1, a-2, e-1, e-2, f-1, f-2, j-1 and j-2.

Agar plates for isolating IOB and liquid culture experiments for screening IOB that can grow under high-iodide-concentration conditions. (a) The inoculated solid culture medium, consisting of marine broth, agar, potassium iodide and starch, after incubation for a week at 30 °C.
The colour of the culture medium around the colonies of IOB became purple because of the iodine-starch reaction; colonies of IOB can therefore be found easily. (b) The inoculated liquid culture solution, including marine broth, potassium iodide and starch, after incubation of colonies of IOB isolated from the solid medium shown in (a) for 30 days at 30 °C. The colour of the culture solution became purple because of the iodine-starch reaction, whereas the colour of the negative control (without inoculation of IOB) did not change.

First screening of IOB capable of growing and oxidising iodide in an environment with high iodide concentration

The eight bacterial strains, isolated as described above, were incubated in a liquid culture medium with a high iodide concentration (>20,000 ppm) to select competent IOB that could grow and oxidise iodide in such an environment. Figure 1b shows photographs of the culture solutions after incubation at 30 °C for 30 days. All of these solutions except the non-inoculated control had changed colour to purple because of the reaction between starch and triiodide. The initial bacterial cell population was 3 × 10⁶ cells/mL, and cell counting after incubation indicated that the populations of all eight strains had increased to more than 1 × 10⁸ cells/mL. This result shows that all eight bacterial strains could grow and oxidise iodide in a high-iodide-concentration environment. Therefore, all of these strains were used in the next screening step.

Second screening of competent IOB capable of solubilising gold from gold ore

Next, the eight bacterial strains were incubated in a liquid culture medium containing marine broth, potassium iodide and a sulphide gold ore sample to evaluate their abilities to extract gold from a real gold ore. X-ray diffraction (XRD) analysis indicated that the gold ore was mainly composed of quartz, galena, pyrite, chalcopyrite and gold.
The chemical compositions of oxides and elements in the original gold ore and in the solid residue collected from the culture solution after incubation were determined by X-ray fluorescence (XRF) analysis. The results are shown in Table 1. The original gold content of the ore was approximately 0.26 wt.%. A decrease in gold content in the solid residue was observed for all culture solutions in which the eight strains were incubated for 30 days at 30 °C, whereas no decrease was observed in the negative control. In particular, the gold contents in the solid residues from the culture solutions of three strains (a-1, e-2 and f-1) were reduced to zero.

Table 1 Chemical compositions (wt.%) of oxides and elements obtained by XRF analysis of the original gold ore (before incubation) and the solid residue (after incubation).

Figure 2 shows the bacterial cell numbers and the leaching yield (%), which was calculated from the results of the XRF analysis as follows:

$$\text{Leaching yield} = \frac{\text{original mass of gold in the ore sample} - \text{final mass of gold in the ore sample}}{\text{original mass of gold in the ore sample}} \times 100\ (\%) \quad (6)$$

Bacterial cell numbers and leaching yield in the ore sample after 30 days. The bacterial cell number (blue bar) is indicated in cells/mL on the left vertical axis with a logarithmic scale and error bars. The leaching yield in the ore sample (red bar) is indicated as a percentage on the right vertical axis with a linear scale and error bars. NC stands for the negative control without inoculation of bacteria. The pulp density used in this study was 3.3 w/v%.
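The leaching-yield formula is simple enough to express as a one-line function; the sketch below uses hypothetical masses (not the paper's raw data), taking the 1.3 mg of gold in a 0.5 g ore charge at 0.26 wt.% as the starting point:

```python
def leaching_yield(original_au_mg: float, final_au_mg: float) -> float:
    """Leaching yield (%): fraction of gold removed from the ore, per the XRF-based formula."""
    return (original_au_mg - final_au_mg) / original_au_mg * 100.0

# 0.5 g ore at 0.26 wt.% Au holds 1.3 mg gold; assume 0.65 mg remains in the residue
print(leaching_yield(1.3, 0.65))   # 50.0
print(leaching_yield(1.3, 0.0))    # 100.0 (gold completely leached)
```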
The leaching yield was calculated by equation (6) using the results of XRF analysis of the solid residue collected from the culture solution after 30 days of incubation. The bacterial cell numbers of all eight strains increased by over an order of magnitude in the liquid culture medium during incubation (Fig. 2). In particular, the cell number of strain a-1 increased to 5 × 10⁸ cells/mL, the largest increase in this study. A decrease in residual gold content was observed for all bacterial strains except strain f-2. In particular, the gold in the ore sample was reduced to zero by strains a-1, e-2 and f-1. As no decrease in residual gold content was observed for the negative control, it can be inferred that the activities of the IOB caused the decrease. ICP-MS analysis of dissolved gold in the culture solution was carried out to further confirm the leaching yield. The ICP-MS analysis indicated that the gold concentrations in the culture solutions of strains a-1, e-2 and f-1 were 83 ppm, 86 ppm and 90 ppm, respectively. The leaching yield of gold was calculated from the gold mass in both the culture solutions and the original ore sample. The leaching yields for the culture solutions of strains a-1, e-2 and f-1 were 92.1%, 95.5% and 99.9%, respectively. The leaching yields calculated from the XRF analysis of the solid residues were 100%, as shown in Fig. 2; the solution and solid-residue analyses were therefore highly consistent. These results suggest that the leaching yields calculated from the XRF analysis of the solid residues in this study are accurate. Figure 3 shows photographic images of the negative control and the culture solution of strain a-1 taken 0 days, 15 days and 30 days after the start of the experiments.
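The solution-side yields reported above can be roughly cross-checked from the ICP-MS gold concentrations, assuming the nominal 15 mL culture volume and the 1.3 mg of gold initially present in 0.5 g of ore; this is our reconstruction, and the published figures of 92.1-99.9% presumably account for sampling volumes and losses not restated here:

```python
# Cross-check of leaching yield from dissolved gold (ICP-MS values), illustrative only.
initial_au_mg = 0.5 * 0.0026 * 1000   # 1.3 mg gold in the 0.5 g ore charge
volume_l = 0.015                      # assumed nominal culture volume (15 mL)

for strain, ppm in [("a-1", 83), ("e-2", 86), ("f-1", 90)]:
    dissolved_mg = ppm * volume_l     # ppm ~ mg/L for a dilute aqueous solution
    yield_pct = dissolved_mg / initial_au_mg * 100
    print(f"{strain}: ~{yield_pct:.0f}% of the gold accounted for in solution")
```

With these nominal inputs the three strains land near 100% (f-1 slightly above it), suggesting the published yields fold in actual solution volumes or sampling losses; the order of magnitude is nonetheless consistent with complete leaching.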
The colour of the culture solution changed from clear and colourless to deep yellow as the incubation period proceeded, whereas the colour of the negative control did not change. This colouration suggests that I2 was generated by IOB in the culture solution, because the colour of triiodide generated by chemical reaction (1) is yellow24. Consequently, gold was dissolved into the culture solution from the ore sample. Also, the bacterial cell number of strain a-1 (Fig. 3b) increased to 5 × 10⁸ cells/mL after 30 days. On the basis of these results, the three bacterial strains that successfully decreased the gold content of the ore sample to zero, strains a-1, e-2 and f-1, were selected as potential bacterial strains for leaching gold from the ore sample and were subjected to the next screening step.

Liquid culture experiments for the second screening of IOB that extracted gold from the ore sample in the culture solution. (a) Marine broth liquid medium with potassium iodide just after inoculation with IOB from the pre-culture solution, with 0.5 g of sterile ground ore sample, at 30 °C. (b) The same culture solution after 15 days and 30 days of incubation. The colour of the culture solution became yellow and deep yellow after 15 days and 30 days, respectively; however, the colour of the negative control (without inoculation of IOB) did not change. During incubation, the IOB oxidised iodide (I−) into iodine (I2), which in turn formed triiodide (I3−) in the culture solution by chemical reaction (1). Subsequently, triiodide in the culture solution extracted gold from the ore sample according to chemical reactions (2) and (3). The pulp density used in this experiment was 3.3 w/v%.

Finally, Table 2 summarises the enrichment cultures and screening results of the leaching experiments for the eight bacterial strains, showing their morphology, leaching yield and similarity to identified bacteria.
The eight bacterial strains were identified by analysing the sequences of their 16S rRNA genes. The results indicate that all eight strains belong to the genus Roseovarius, with sequence similarities of 92% to 94%.

Table 2 Summary of characteristics of the eight bacterial strains isolated as IOB through the enrichment cultures and subjected to the first and second screening experiments.

Third screening of competent IOB effective for fast gold leaching

Strains a-1, e-2 and f-1 were incubated in the same liquid culture medium under the same conditions to compare their leaching yields of gold from the ore sample. Figure 4 shows the temporal changes of bacterial cell number and leaching yield during the incubation experiments. Strain a-1 grew immediately after the start of the incubation experiment, and its cell number reached a maximum after 10 days of incubation, then plateaued until 30 days. The bacterial cell number of strain a-1 was initially 3 × 10⁶ cells/mL, increased to 5 × 10⁷ cells/mL after 5 days, and remained unchanged over 10-30 days of incubation. Likewise, the cell number of strain e-2 started from 3 × 10⁶ cells/mL, increased to 7 × 10⁷ cells/mL after 5 days, and stabilised at a maximum of 4 × 10⁸ cells/mL over 10-30 days of incubation. In the case of strain f-1, the cell number reached more than 2 × 10⁸ cells/mL after 5 days of incubation and changed insignificantly until 30 days.

Dynamic changes of bacterial cell number and leaching yield of gold from the ore sample during the incubation experiments. Temporal changes of bacterial cell number (blue plots) and Au leaching yield in the ore sample (red plots) in the culture solutions of bacterial strains (a) a-1, (b) e-2 and (c) f-1. The bacterial cell number is indicated on the left vertical axis in cells/mL with a logarithmic scale.
The leaching yield is indicated on the right vertical axis as a percentage with a linear scale.

Because the incubation experiment was carried out in a closed (batch) system, the growth rate of each strain was calculated using the experimental results obtained before its cell number reached a plateau. Accordingly, the growth rates of strains a-1 and e-2 were calculated from the results obtained before 10 days, and that of strain f-1 from the results obtained before 5 days. The growth rates of a-1, e-2 and f-1 were 0.393 day⁻¹, 0.445 day⁻¹ and 0.804 day⁻¹, respectively. Strain f-1 grew the fastest of the three strains. However, the leaching yield increased faster in the culture solutions of strains a-1 and e-2 than in that of strain f-1. The leaching yield in the culture solution of strain a-1 increased to 100% after 5 days, whereas those of e-2 and f-1 reached 100% after 20 days and 30 days, respectively. After 30 days of incubation, no traces of gold were detected by XRF in the solid residues collected from the culture solutions of a-1, e-2 and f-1. Thus, three bacterial strains that generated an iodine-iodide lixiviant solution that completely dissolved the gold from the ore sample were successfully isolated in this study. In particular, strain a-1 was selected as the most promising bacterial strain for gold leaching because it showed the highest capability for leaching gold quickly, as shown in Fig. 4. These results demonstrate the potential for bacterial leaching of gold ore using IOB. Iodine is not the only effective lixiviant for gold leaching; triiodide is as well. The behaviour of triiodide generation was therefore investigated in this study.
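The batch growth rates quoted above are presumably specific growth rates fitted over the exponential phase, μ = ln(N₂/N₁)/Δt. A sketch using the rounded cell counts reported for strain f-1 reproduces the stated value to within rounding; this is our reconstruction, not the authors' stated method:

```python
import math

def specific_growth_rate(n0: float, n1: float, days: float) -> float:
    """Specific growth rate mu (day^-1), assuming exponential growth N(t) = N0 * exp(mu * t)."""
    return math.log(n1 / n0) / days

# strain f-1: ~3e6 cells/mL at day 0 grew to ~2e8 cells/mL by day 5
mu = specific_growth_rate(3e6, 2e8, 5)
print(f"mu ~ {mu:.2f} per day")  # close to the reported 0.804 day^-1
```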
The two bacterial strains a-1 and f-1, the most effective strain and a slightly inferior strain respectively, were incubated in a culture medium that did not contain the ore sample to observe the behaviour of triiodide generation. The ore sample was omitted in this experiment to prevent consumption of triiodide through its reaction with gold, so that the behaviour of triiodide generation could be observed directly. The concentration of triiodide in the culture solutions of both strains increased to 220-240 ppm, as shown in Fig. 5, high enough for gold to be completely leached from the ore sample used in this study. In particular, the triiodide concentration in the culture solution of strain a-1 reached a plateau faster than that of strain f-1. Because the actual rate of triiodide generation is useful for comparing the performance of the selected strains, we compared the rate for each strain. As the incubation experiment was performed in a closed system, the rate was calculated using the results obtained before the triiodide concentration reached a plateau: for strains a-1 and f-1, the results obtained before 13 days and 17 days after the start of the experiment, respectively. The actual rates of triiodide generation by strains a-1 and f-1 were 18.5 ppm day⁻¹ and 13.2 ppm day⁻¹, respectively. The behaviour of triiodide generation was consistent with the behaviour of the leaching yield; that is, the higher the rate of triiodide generation, the greater the leaching yield (%) of gold. Accordingly, a primary step in experimentally demonstrating the possibility of gold leaching using IOB is to quantitatively assess the behaviour of triiodide generation.

Variation of triiodide concentrations with incubation time.
The generation of triiodide was evaluated by measuring the absorbance at 351 nm48 by UV-visible spectrophotometry; the absorbance was converted to triiodide concentration using a calibration curve prepared from a triiodide standard solution. Strains a-1 and f-1 were the two effective bacterial strains that completely leached gold from the ore sample in the leaching experiment and were used in the determination of leaching yield with respect to time.

To understand the reactions and process variables of the IOB gold-leaching mechanism, the pH and redox potential of the liquid medium were measured. The initial pH of the medium was approximately 8.2, and the pH varied from 8.0 to 8.8 during the experiments. The redox potential of the culture solution ranged from 498 mV to 547 mV, as shown in Fig. 6. The values of pH and redox potential continued to increase during the incubation experiments, with increasing trends that differed depending on the growth of the IOB and the iodine-iodide reaction in the culture solution. Both the pH and the redox potential of the culture solution of strain a-1 increased continuously during the first 5 days of incubation: the pH increased from 8.0, the original pH of the culture medium, to 8.8, and the redox potential increased from 522 mV, the original redox potential of the culture medium, to 547 mV. Thereafter, both pH and redox potential remained approximately constant.

pH value and redox potential of the culture solution after 30 days. The pH value (blue bar) is indicated on the left vertical axis with an error bar. The redox potential (red bar) is indicated in mV on the right vertical axis with an error bar. NC stands for the negative control without inoculation of bacteria.

In this study, eight bacterial strains were successfully isolated from brine.
Those strains were incubated in a liquid medium containing ground ore with 0.26 wt.% Au at a pulp density of 3.3 w/v%, and their abilities to leach gold from the ore were evaluated. A decrease in residual gold content was observed for all bacterial strains except strain f-2. All eight bacterial strains isolated as IOB in this study were identified as the same genus, Roseovarius, by analysing 16S rRNA gene sequences, as shown in Table 2. This genus has previously been reported as a kind of IOB capable of oxidising iodide to iodine24. Although all eight bacterial strains belonged to the same genus, the results of the incubation experiments differed between the strains. These eight strains may belong to different species, which could be identified through full sequencing analysis; the differences in their abilities to dissolve gold from the ore sample are therefore attributed to differences between species or strains. In particular, three strains (a-1, e-2 and f-1) showed a high capacity to generate an iodine-iodide lixiviant solution that completely solubilised the gold from the ore sample within 30 days of incubation. Strain a-1 grew until 5 days of incubation, and its cell number remained approximately constant after that. Based on the similarity in the increasing trends of pH, redox potential and bacterial growth, the increase in bacterial activity caused the concomitant rise in pH and redox potential. Similar results were also obtained in the experiments incubating strains e-2 and f-1. Because gold dissolution is insensitive to pH over the range of pH 2-10 (ref. 31) and the dissolution rate depends on the concentrations of iodide and iodine9,32,33, microbial activities played a critical role in accelerating the dissolution of gold from the ore in bioleaching using IOB.
On the basis of the pH and redox potential conditions, with near-neutral pH and low redox potential values, the gold species in solution can be designated as [AuI2]−32,33. The gold concentration in the culture solution of strain a-1 was quantified after 30 days of incubation to evaluate the mass balance of gold in the culture system. On the basis of the ICP-MS and XRF analyses, the increase in the concentration of gold in the solution balanced the decrease in the gold content of the ore sample. The growth of strain a-1 stabilised after 5-10 days of cultivation (Fig. 4). Iodine can therefore be assumed to be generated by IOB from the logarithmic growth phase to the early stage of the stationary phase. The concentration of triiodide was still increasing after the 5th day, whereas the gold had been completely leached into the culture solution by the 5th day. The amount of potassium iodide added in this experiment can therefore be assumed to be excessive, and the optimum amount of iodide should be chosen according to the gold grade of the ore. The leaching rate decreased in the order a-1 > e-2 > f-1, whereas the growth rate decreased in the order f-1 > e-2 > a-1. Strain a-1 is assumed to have a higher iodide-oxidation capability than the other two strains; iodine was therefore generated at a higher rate in its culture solution, and the leaching rate was higher. However, the growth of a-1 might have been inhibited by the iodine generated by the strain itself at an early stage of the incubation experiment. Zhao et al. investigated the negative influence of iodine on the growth of three IOB strains, all belonging to Roseovarius spp., through incubation experiments using marine broth medium containing molecular iodine34.
They showed that the growth of IOB strains exposed to 1-2 ppm, 5 ppm and 10 ppm of iodine was suppressed to 70%, 50% and 20%, respectively, compared with incubation without molecular iodine. They determined the iodine concentration of the culture solution according to the following equilibrium equation:

$$[\mathrm{I}_2] = \frac{[\mathrm{I}_3^-]}{K_{\mathrm{C}}[\mathrm{I}^-]}$$

where [I−], [I2] and [I3−] are the equilibrium concentrations of iodide, molecular iodine and triiodide, respectively, and KC is the equilibrium constant. The iodine concentration in the culture solution of a-1 in this study can be calculated using the same equilibrium equation with the KC for pure water, 626 L/mol at 30 °C, which is in close agreement with the value reported by Palmer et al.35. Using the concentrations of iodide and triiodide at 15 days after the start of the incubation of strain a-1, the iodine concentration was calculated as approximately 2.3 ppm. According to the report of Zhao et al. described above, the growth of strain a-1 can be assumed to have been suppressed to 70% at this iodine concentration. Because of this negative impact of iodine on strain a-1, its growth rate was the lowest even though its leaching rate was the highest. In contrast, f-1 had a lower iodide-oxidation capability and generated iodine at a lower rate; its growth was therefore less inhibited, so its growth rate was higher but its leaching rate lower. According to the XRF analysis of the ore samples before and after incubation, the contents of other trace elements such as Al, Pb and Ag in the ore samples also decreased after incubation, indicating that these trace elements were also dissolved into the culture solution from the ore sample. Such elements often affect the growth and metabolism of bacteria.
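Returning to the iodine equilibrium discussed above, the expression [I2] = [I3−]/(KC[I−]) can be evaluated directly. The sketch below uses illustrative concentrations (iodide from the 21.8 g/L KI in the medium and a triiodide level within the 220-240 ppm plateau); the paper's exact day-15 inputs are not restated here, but the result lands on the same order as the ~2.3 ppm quoted in the text:

```python
# Free iodine from the triiodide equilibrium, [I2] = [I3-] / (Kc * [I-]).
# Concentrations below are illustrative, not the paper's measured day-15 values.
KC = 626.0            # L/mol at 30 degC, the equilibrium constant cited in the text
M_I = 126.90          # g/mol, iodine atom
M_KI = 166.00         # g/mol, potassium iodide
M_I2 = 2 * M_I        # molecular iodine
M_I3 = 3 * M_I        # triiodide ion

iodide_mol = 21.8 / M_KI          # mol/L iodide from the added KI
triiodide_mol = 0.230 / M_I3      # mol/L for an assumed 230 ppm triiodide

i2_mol = triiodide_mol / (KC * iodide_mol)
i2_ppm = i2_mol * M_I2 * 1000     # mg/L, i.e. ppm in dilute solution
print(f"[I2] ~ {i2_ppm:.1f} ppm")
```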
Although the adverse effects of those elements on the activities of IOB were not investigated in this study, the activities of f-2 during the second screening might have been affected by those trace elements36,37,38. An experiment on cyanide leaching of finely disseminated gold ore (104 µm grain size, pH ≥ 10.5) indicated that high gold dissolution (97.5%) occurred after about 40 hours of leaching39. Another bioleaching study indicated that the cyanogenic bacterium Pseudomonas plecoglossicida solubilised gold from shredded printed circuit boards with 69% dissolution in 80 hours40. A two-step bioleaching process for gold from electronic waste indicated that Chromobacterium violaceum was capable of leaching 69% of the gold and that a mixture of Chromobacterium violaceum and Pseudomonas aeruginosa leached 73% within 7 days41. The present study demonstrated that gold can be solubilised at a greater yield (95-100% leaching yield) from high-grade ground ore (average particle size of 75 µm) within 5-30 days of bioleaching using the IOB-generated iodine-iodide lixiviant. The iodide-iodine system does not normally oxidise metal sulphides; excessive reagent consumption can therefore be avoided, and the process is suitable for sulphide ores42. By contrast, metal sulphides are oxidised during cyanidation, and cyanide is consumed by reaction with them, so more cyanide is used during leaching. Because iodine and iodide are not consumed by the oxidation of metal sulphides, the iodine-iodide system has an advantage over the cyanidation process. Leaching of gold by the iodine-iodide system can be carried out over a wide pH range between 2 and 10 (ref. 31), in contrast to cyanide leaching, which is normally carried out under relatively restricted alkaline conditions (pH 10 to 11).
Iodine can act as an oxidant, so no other oxidant is required in this bioleaching experiment32. IOB were found to be much more abundant in iodide-rich brine waters than in natural seawater, which indicates that they can be active in slightly alkaline conditions; IOB growth can occur in a pH range of 4.5 to 8.543. In addition, iodine can leach gold from ore at low concentrations, penetrates rock particularly well, and does not adsorb on gangue particles to any great extent, allowing excellent recovery of the reagent and reducing the cost of the process44. Bioleaching of gold ore using IOB may be considered an environmentally friendly process because iodine is far less toxic than conventional cyanide. However, the leaching (contact) time may not be as short as that of direct chemical leaching by cyanide, which requires contact times of 24-72 hours for gold ore32. In practice, the costs of the reagent (potassium iodide) and nutrients (marine broth) may be high, which is a cost factor to consider for a bioleaching operation using IOB. The possibility of bacterial gold leaching using IOB was successfully demonstrated in the present study. A direct comparison cannot be made between the results of the present study and those of other chemical- and bioleaching studies because of differences in the nature of the treatments, growth media, microorganisms and ore compositions. Nevertheless, bacterial leaching in this study is shown to be effective and offers conditions with considerable advantages over biocyanidation and conventional cyanidation processes.
This study could be improved by more intensive experiments using a column and/or bioreactor to examine the relationship between microbial community function and leaching behaviour, controlling the microbial generation of iodine and the leaching mechanisms, optimising the iodine-iodide molar ratio and concentrations, using oxidants to minimise the loss of iodine during leaching to find the best practical conditions for effective gold dissolution, and regenerating the lixiviants for economic efficiency. Although this work leached gold successfully from high-grade gold ore, it is hoped that with further research the technique could also be adopted to treat low-grade gold ore. Studies on these parameters should therefore be continued in future work to improve the technique and optimise conditions for effective gold leaching.

Enrichment and Isolation

Sampling of the ore and brine

A gold ore rock sample was collected from the Modi Taung gold mine in central Myanmar. Two natural gas brine water samples were collected from Chiba Prefecture, Japan. These samples were kept at room temperature in sterile plastic bottles until use. Iodide concentrations in the brine water samples were measured at more than 120 ppm by ion chromatography45.

Enrichment and isolation of IOB

A solid culture medium was used for the isolation of IOB from the brine. A liquid solution composed of Difco™ marine broth 2216 (37.4 g/L), agar powder (20 g/L), soluble starch (1.2 g/L) and potassium iodide (1.0 g/L) was autoclaved at 121 °C for 20 minutes. Next, 15 mL of the sterilised solution was poured into a sterilised petri dish. After the culture medium had solidified by cooling, 100 µL of the brine was spread onto the surface of the solid culture medium using a sterilised spreader. The culture medium was incubated at 30 °C under aerobic conditions and observed daily.
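The isolation-medium recipe above can be kept as data and scaled to an arbitrary batch volume; a minimal sketch, with the grams-per-litre figures taken from the text and the helper function itself being our own:

```python
# Solid isolation medium, g/L (composition from the text); scale to any batch volume.
ISOLATION_MEDIUM_G_PER_L = {
    "Difco marine broth 2216": 37.4,
    "agar powder": 20.0,
    "soluble starch": 1.2,
    "potassium iodide": 1.0,
}

def batch_amounts(volume_ml: float) -> dict:
    """Grams of each component needed for a batch of the given volume (mL)."""
    return {name: g_per_l * volume_ml / 1000
            for name, g_per_l in ISOLATION_MEDIUM_G_PER_L.items()}

print(batch_amounts(250))  # quarter-litre batch
```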
Colonies of bacteria could be found on the solid culture medium after 5-10 days of incubation. The colonies of IOB could be clearly distinguished because the colour of the culture medium at the periphery of the colonies changed to purple. Single colonies of IOB were picked and streaked onto another identical solid culture medium to obtain pure isolates of IOB. This work was repeated twice to obtain pure isolates.

Incubation experiment using liquid culture media

Two kinds of liquid culture media were used for the first and second screenings of effective isolates of IOB. The liquid culture medium used for the first screening was composed of Difco™ marine broth 2216 (37.4 g/L), soluble starch (1.2 g/L) and potassium iodide (21.8 g/L). A high KI concentration was added to the medium to successfully isolate and enrich IOB, because they prefer a high-iodide-concentration environment24. This liquid culture medium was autoclaved at 121 °C for 20 minutes. After the medium had cooled, single colonies of pure isolates obtained from the previous isolation were inoculated into the liquid culture medium and incubated at 30 °C under aerobic conditions. A non-inoculated control was also prepared. After 4 weeks of incubation, the capabilities of the colonies for growth under high iodide concentrations (>20,000 ppm) and for oxidising iodide were evaluated. IOB growth was evaluated by counting bacterial cell numbers in the culture solution. Iodide-oxidation capability was evaluated by observing the change in the colour of the culture solution to purple, caused by the iodine-starch reaction46. Isolates evaluated as capable of growth and iodide oxidation were subjected to the second screening. The liquid culture medium used for the second screening contained Difco™ marine broth 2216 and potassium iodide at the same concentrations as in the first-screening medium.
Fifteen millilitres of the solution was poured into a glass tube and autoclaved at 121 °C for 20 minutes. The IOB isolates were pre-cultured in a liquid culture medium containing only marine broth, at the same concentration as described above, at 30 °C under aerobic conditions. This pre-cultured solution of each isolate was inoculated into the culture medium used for the second screening. In addition, the gold ore powder sample was wrapped in aluminium foil and sterilised in a furnace at 140 °C for 4 hours. After cooling, 0.5 g of the sterilised ore was put into 15 mL of the culture medium. In the third screening, twelve bottles with the same culture medium were prepared: six of the bottles were inoculated and incubated, and the other six were used as non-inoculated controls. The bottles were incubated at 30 °C for 30 days, and one inoculated bottle and one non-inoculated bottle were sampled every 5 days. The ore and solution were separated by filtration using a membrane filter with a pore size of 0.2 µm, and the ore was washed with pure water, dried at 50 °C and subjected to XRF analysis (ZSX Primus II, Rigaku Corporation, Tokyo, Japan) to evaluate the gold content. The pH and redox potential of the filtrate were measured at room temperature using a handheld ion/pH meter (IM-32P, DKK-TOA Corporation, Tokyo, Japan) with an ion-selective electrode; the reference electrode was a silver-silver chloride electrode in a 3.3 mol/L solution of potassium chloride. The gold concentration in the solution was quantified by ICP-MS after the solution was pre-treated as follows. The solution was thermally decomposed with sulphuric acid and nitric acid [47], and the decomposition product was then dissolved completely in aqua regia. After the post-aqua regia solution was filtered through a 0.45 µm filter, it was subjected to ICP-MS analysis. Extra pure water was added to the solution to maintain a constant volume.
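Since the residual gold in the ore is tracked by XRF, the leaching yield at each sampling point can be estimated from the drop in the ore's gold grade. A minimal sketch (the function name and the assumption that the bulk ore mass is essentially unchanged by leaching are ours):

```python
def leaching_yield_percent(au_initial_ppm, au_residual_ppm):
    """Gold leaching yield (%) from the XRF gold grade of the ore
    before and after incubation, assuming the bulk ore mass is
    unchanged by the leaching step."""
    if au_initial_ppm <= 0:
        raise ValueError("initial gold grade must be positive")
    return 100.0 * (au_initial_ppm - au_residual_ppm) / au_initial_ppm

# Example: an ore grading 80 ppm Au that retains 20 ppm after leaching
# has yielded 75% of its gold to the solution.
```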
Subsequently, the concentration of gold in the solution was quantified through ICP-MS analysis. The methodology of the incubation experiment for the third screening was the same as that for the second screening; however, in the third screening, the data were obtained every 5 days. Solid samples were analysed at each time point (5, 10, 15, 20, 25 and 30 days) using replications of the leaching experiment. Cells were counted using a Petroff-Hausser counting chamber and a phase-contrast microscope (EVOS XL Core Cell Imaging System, Thermo Fisher Scientific Inc., Waltham, MA, USA). Replicate counts were made in accordance with the instructions of the counting chamber. The isolates of IOB were incubated at 30 °C under aerobic conditions for 30 days.

XRF and ICP-MS analysis

The gold ore rock was crushed and ground to an average size of 75 µm or less using a CMT Vibrating Sample Mill (TI-100), and the ground ore sample was used for the incubation experiments. The mineral composition and chemical composition were determined using XRD and XRF analysis, respectively. The mineral composition was analysed using a Rigaku RINT-2100 diffractometer with Cu Kα radiation at 40 kV and 20 mA; count data for the random powder mounts were collected from 2° to 65° two-theta at a scanning rate of 2° per minute. The chemical composition of the gold ore was measured from pressed pellets using a ZSX Primus II X-ray fluorescence spectrometer operated at an accelerating potential of 15 kV, a beam current of about 10 mA and a beam diameter of 3 µm. The soluble Au in the solution was measured by ICP-MS analysis. However, because of financial limitations, we could measure the Au content in only three solution samples, in which the residual gold in the ore was not detected after 30 days of incubation.
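The Petroff-Hausser counts can be turned into cell densities from the chamber geometry. The sketch below assumes the standard 0.02 mm chamber depth and 1/400 mm² small squares (which give the usual 2×10⁷ multiplier); these defaults should be checked against the chamber actually used:

```python
def cells_per_ml(square_counts, dilution_factor=1.0,
                 depth_mm=0.02, square_area_mm2=1.0 / 400):
    """Cell density (cells/mL) from replicate small-square counts of a
    Petroff-Hausser counting chamber."""
    mean_count = sum(square_counts) / len(square_counts)
    # Volume over one small square: area (mm^2) * depth (mm), in mm^3;
    # 1 mm^3 = 1e-3 mL, so the default geometry gives 5e-8 mL per square.
    square_volume_ml = square_area_mm2 * depth_mm * 1e-3
    return mean_count * dilution_factor / square_volume_ml

# With the default geometry each counted cell represents 2e7 cells/mL,
# so a mean of 10 cells per small square corresponds to 2e8 cells/mL.
```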
Because a good balance between the gold concentration in the solution and the gold content in the original ore sample could be obtained, the leaching yield could be evaluated by measuring the residual gold in the ore sample.

Identification of microbial species

Microbial species isolated from the formation water were identified through 16S rRNA gene sequence analysis. Each isolated strain of IOB was incubated on the solid culture medium as described above. Bacterial DNA was extracted from one colony of each IOB on the solid medium using an UltraClean Microbial DNA Isolation Kit (MO BIO Laboratories, Inc., USA) according to the manufacturer's instructions. Extracted DNA was amplified by PCR targeting the 16S rRNA genes using Premix Taq™ Hot Start Version (TAKARA BIO Inc., Japan). Almost the entire eubacterial 16S rRNA gene was amplified using universal primers (forward EU27f [5′-AGAGTTTGATCCTGGCTCAG-3′], reverse EU1525r [5′-AAAGGAGGTGATCCAGCC-3′]). The thermal cycle profile for the PCR was as follows: initial denaturation at 94 °C for 2 min; 30 cycles of denaturation at 94 °C for 1 min, annealing at 54 °C for 2 min and extension at 72 °C for 2 min; final extension at 72 °C for 7 min; and cooling at 4 °C. The DNA in the PCR product was purified using a QIAquick PCR Purification Kit (QIAGEN, the Netherlands). The purified DNA was sequenced using a MicroSeq 500 16S rRNA Gene Sequencing Kit (Applied Biosystems Inc., USA), and the purified sequencing reaction products were electrophoresed using an ABI Prism 3130 Genetic Analyzer (Applied Biosystems Inc., USA). Homology searches of the rRNA gene sequences were performed against the GenBank (www.ncbi.nlm.nih.gov) DNA database with the BLAST program provided online by the DNA Data Bank of Japan.

Triiodide analysis

The generation of triiodide was investigated based on measurement of absorbance. The behaviour of triiodide generation was estimated qualitatively from the variation of absorbance at 351 nm.
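The absorbance at 351 nm can be converted into a triiodide concentration with the Beer-Lambert law, and the formation equilibrium I₂ + I⁻ ⇌ I₃⁻ then links it to the iodine-iodide product. The molar absorptivity used below (≈26,400 L mol⁻¹ cm⁻¹ near the I₃⁻ peak) and the equilibrium constant K ≈ 698 L/mol at 25 °C (Palmer et al., ref. 35) are literature values treated here as assumptions; in practice the spectrophotometer should be calibrated against standards:

```python
EPSILON_351 = 26400.0  # L/(mol*cm); assumed molar absorptivity of I3-
K_TRIIODIDE = 698.0    # L/mol at 25 C (Palmer et al., ref. 35)

def triiodide_molar(absorbance, path_cm=1.0):
    """[I3-] in mol/L from absorbance at ~351 nm (Beer-Lambert law);
    valid only within the linear range of the instrument."""
    return absorbance / (EPSILON_351 * path_cm)

def iodine_iodide_product(absorbance, path_cm=1.0):
    """[I2]*[I-] in mol^2/L^2 consistent with the measured [I3-],
    from K = [I3-] / ([I2][I-]) at equilibrium."""
    return triiodide_molar(absorbance, path_cm) / K_TRIIODIDE
```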
Absorbance measurements were carried out using a UV-2450 UV-visible spectrophotometer.

References

1. Kuzugudenli, O. E. & Kantar, C. Alternatives to gold recovery by cyanide leaching. Erc. Univ. Fen Bil. Derg. 15(1–2), 119–127 (1999).
2. McNulty, T. Cyanide substitutes. Mining Magazine 184(5), 256–261 (2001).
3. Hilson, G. & Monhemius, A. J. Alternatives to cyanide in the gold mining industry: what prospects for the future? J. Clean. Prod. 14, 1158–1167 (2006).
4. Gökelma, M., Birich, A., Stopic, S. & Friedrich, B. A review on alternative gold recovery reagents to cyanide. J. Mater. Sci. Chem. Eng. 4, 8–17 (2016). doi: 10.4236/msce.2016.48002.
5. Aylmore, M. Alternative lixiviants to cyanide for leaching gold ores, in Gold Ore Processing: Project Development and Operations (ed. Adams, M.), 447–484 (Elsevier, Amsterdam, 2016).
6. Bard, A. J., Parsons, R. & Jordan, J. Standard Potentials in Aqueous Solutions (Dekker, New York, 1985).
7. Wan, R. Y., Le Vier, M. & Miller, J. D. Research and development activities for the recovery of gold from non-cyanide solutions, in Hydrometallurgy: Fundamentals, Technology and Innovations (eds Hiskey, J. B. & Warren, G. W.), 415–436 (SME, Littleton, 1993).
8. Kelsall, G. H., Welham, N. J. & Diaz, M. A. Thermodynamics of Cl–H2O, Br–H2O, I–H2O, Au–Cl–H2O, Au–Br–H2O, and Au–I–H2O systems at 298 K. J. Electroanal. Chem. 361, 13–24 (1993).
9. Davis, A. & Tran, T. Gold dissolution in iodide electrolytes. Hydrometallurgy 26, 163–177 (1991).
10. Davis, A., Tran, T. & Young, D. Solution chemistry of iodide leaching of gold. Hydrometallurgy 32, 143–159 (1993).
11. Watling, H. Microbiological advances in biohydrometallurgy. Minerals 6, 49 (2016). doi: 10.3390/min6020049.
12. Watling, H. R. et al. Bioleaching of a low-grade copper ore, linking leach chemistry and microbiology. Miner. Eng. 56, 35–44 (2014).
13. Rear, B., Taylor, A. & Busby, P. Chalcocite ore leaching at Girilambone Copper Mine, in Proceedings of Biomine '94: Applications of Biotechnology to the Minerals Industry (Perth, W.A.), 600–625 (Australian Mineral Foundation, Glenside, 1994).
14. Gericke, M. & Pinches, A. Bioleaching of copper sulphide concentrate using extreme thermophilic bacteria. Miner. Eng. 12, 893–904 (1999).
15. Gericke, M., Pinches, A. & van Rooyen, J. V. Bioleaching of a chalcopyrite concentrate using an extremely thermophilic culture. Int. J. Miner. Process. 62, 243–255 (2001).
16. Pradhan, N., Nathsarma, K. C., Srinivasa Rao, K., Sukla, L. B. & Mishra, B. K. Heap bioleaching of chalcopyrite: A review. Miner. Eng. 21, 355–365 (2008).
17. Dong, Y., Lin, H., Xu, X. & Zhou, S. Bioleaching of different copper sulfides by Acidithiobacillus ferrooxidans and its adsorption on minerals. Hydrometallurgy 140, 42–47 (2013).
18. Munoz, J. A., Gonzalez, F., Blazquez, M. L. & Ballester, A. A study of the bioleaching of a Spanish uranium ore. Part I: A review of the bacterial leaching in the treatment of uranium ores. Hydrometallurgy 38, 39–57 (1995).
19. Garcia, O. Jr. Bacterial leaching of uranium ore from Figueira-PR, Brazil, at laboratory and pilot scale. FEMS Microbiol. Rev. 11(1–3), 237–242 (1993).
20. Fowler, T. A. & Crundwell, F. K. Leaching of zinc sulfide by Thiobacillus ferrooxidans: bacterial oxidation of the sulfur product layer increases the rate of zinc sulfide dissolution at high concentrations of ferrous ions. Appl. Environ. Microbiol. 65, 5285–5292 (1999).
21. Amankwah, R. K., Yen, W. T. & Ramsay, J. A. A two-stage bacterial pretreatment process for double refractory gold ores. Miner. Eng. 18, 103–108 (2005).
22. Lindstrom, E. B., Gunneriusson, E. & Tuovinen, O. H. Bacterial oxidation of refractory sulfide ores for gold recovery. Crit. Rev. Biotechnol. 12(1–2), 133–155 (2008).
23. Olson, G. J. Microbial oxidation of gold ores and gold bioleaching. FEMS Microbiol. Lett. 119, 1–6 (1994).
24. Amachi, S. et al. Isolation of iodide-oxidizing bacteria from iodide-rich natural gas brines and seawaters. Microb. Ecol. 49, 547–557 (2005).
25. Gozlan, R. S. Isolation of iodine-producing bacteria from aquaria. Antonie Van Leeuwenhoek 34, 226 (1968).
26. Gozlan, R. S. & Margalith, P. Iodide oxidation by a marine bacterium. J. Appl. Bacteriol. 36, 407–417 (1973).
27. Gozlan, R. S. & Margalith, P. Iodide oxidation by Pseudomonas iodooxidans. J. Appl. Bacteriol. 37, 493–499 (1974).
28. Fuse, H., Inoue, H., Murakami, K., Takimura, O. & Yamaoka, Y. Production of free and organic iodine by Roseovarius spp. FEMS Microbiol. Lett. 229, 189–194 (2003).
29. Amachi, S. Microbial contribution to global iodine cycling: volatilization, accumulation, reduction, oxidation, and sorption of iodine. Microb. Environ. 23, 269–276 (2008).
30. Kaksonen, A. H., Mudunuru, B. M. & Hackl, R. The role of microorganisms in gold processing and recovery – a review. Hydrometallurgy 142, 70–83 (2014).
31. Qi, P. H. & Hiskey, J. B. Dissolution kinetics of gold in iodide solution. Hydrometallurgy 27, 47–62 (1991).
32. Angelidis, T. N., Kydros, K. A. & Matis, K. A. A fundamental rotating disk study of gold dissolution in iodine-iodide solutions. Hydrometallurgy 34, 49–64 (1993).
33. Baghalha, M. The leaching kinetics of an oxide gold ore with iodide/iodine solution. Hydrometallurgy 113–114, 42–50 (2012).
34. Zhao, D., Lim, C. P., Miyanaga, K. & Tanji, Y. Iodine from bacterial iodide oxidization by Roseovarius spp. inhibits the growth of other bacteria. Appl. Microbiol. Biotechnol. 97, 2173–2182 (2013).
35. Palmer, D. A. et al. Triiodide ion formation equilibrium and activity coefficients in aqueous solution. J. Solution Chem. 13, 673–683 (1984).
36. Wang, H. et al. Inhibition of bacterial oxidation of ferrous iron by lead nitrate in sulfate-rich systems. J. Hazard. Mater. 244–245, 718–725 (2013).
37. Zhang, L. et al. Effects of lead and mercury on sulfate-reducing bacterial activity in a biological process for flue gas desulfurization wastewater treatment. Sci. Rep. 6, 30455 (2016). doi: 10.1038/srep30455.
38. Liau, S. Y., Read, D. C., Pugh, W. J., Furr, J. R. & Russell, A. D. Interaction of silver nitrate with readily identifiable groups: relationship to the antibacterial action of silver ions. Lett. Appl. Microbiol. 25, 279–283 (1997).
39. Gonen, N. Leaching of finely disseminated gold ore with cyanide and thiourea solutions. Hydrometallurgy 69, 169–176 (2003).
40. Reith, F., Lengke, M. F., Falconer, D., Craw, D. & Southam, G. The geomicrobiology of gold. ISME J. 1, 567–584 (2007).
41. Pradhan, J. K. & Kumar, S. Metals bioleaching from electronic waste by Chromobacterium violaceum and Pseudomonads sp. Waste Manag. Res. 30, 1151–1159 (2012).
42. Hiskey, J. B. & Qi, P. H. Leaching behaviour of gold in iodide solutions, in World Gold '91, 115–120 (Aus.I.M.M., Melbourne, 1991).
43. Iino, T., Ohkuma, M., Kamagata, Y. & Amachi, S. Iodidimonas muriae gen. nov., sp. nov., an aerobic iodide-oxidizing bacterium isolated from brine of a natural gas and iodine recovery facility, and proposals of Iodidimonadaceae fam. nov., Iodidimonadales ord. nov., Emcibacteraceae fam. nov. and Emcibacterales ord. nov. Int. J. Syst. Evol. Microbiol. 66, 5016–5022 (2016).
44. Jacobson, R. & Murphy, J. The future of solution and in-situ leaching. SME-AIME Conf. (Denver, Colo.) (1987).
45. Bichsel, Y. & von Gunten, U. Determination of iodide and iodate by ion chromatography with postcolumn reaction and UV/visible detection. Anal. Chem. 71(1), 34–38 (1999).
46. Dhar, N. R. The starch-iodine reaction. J. Phys. Chem. 28(2), 125–130 (1924).
47. Iino, S., Mogi, S. & Miyawaki, K. Metal resource analytical methods for environment and smelting fields: test methods in Notification No. 19 of the Ministry of the Environment, provisional analytical methods for precious and rare metals in electric and electronic products, and matte fusion method. J. Mater. Cycles Waste 27, 176–187 (2016).
48. Jung, S. H., Yeon, J. W., Kang, Y. & Song, K. Determination of triiodide ion concentration using UV-visible spectrophotometry. Asian J. Chem. 26(13), 4084–408 (2014).
Acknowledgements

This study was financially supported by a grant entitled 'Engineering Research for Pioneering of a New Field' provided by the Faculty of Engineering, Kyushu University, Japan. We thank N. Naung and W. Phyo for assistance in collecting the ore sample. We also thank Sara J. Mason for editing a draft of this manuscript.

Author information

San Yee Khaing: Department of Earth Resources Engineering, Graduate School of Engineering, Kyushu University, 819-0395 Fukuoka, Japan. Yuichi Sugai and Kyuro Sasaki: Department of Earth Resources Engineering, Faculty of Engineering, Kyushu University, 819-0395 Fukuoka, Japan.

Author contributions

S.Y.K. and Y.S. designed the study and performed the experiments. Y.S. and K.S. helped with measurement and analysis of the data, and provided valuable discussions. S.Y.K. and Y.S. wrote the manuscript. All authors reviewed the manuscript.

Correspondence to Yuichi Sugai. The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
This article is cited by: Consideration of Influential Factors on Bioleaching of Gold Ore Using Iodide-Oxidizing Bacteria, Yuichi Sugai, Kyuro Sasaki & Myo Min Tun, Minerals (2019).
On Fourier multipliers in function spaces with partial Hölder condition and their application to the linearized Cahn-Hilliard equation with dynamic boundary conditions

Sergey P. Degtyarev, Institute for Applied Mathematics and Mechanics NASU, R. Luxenburg Str., 74, Donetsk, 83114, Ukraine. Received March 2015; revised October 2015; published November 2015.

We give relatively simple sufficient conditions on a Fourier multiplier so that it maps functions with the Hölder property with respect to a part of the variables to functions with the Hölder property with respect to all variables. Using these sufficient conditions, we prove solvability in Hölder classes of the initial-boundary value problems for the linearized Cahn-Hilliard equation with dynamic boundary conditions of two types. In addition, Schauder estimates are derived for the solutions of the problem under study.

Keywords: Cahn-Hilliard equation, Hölder spaces, partial regularity, dynamic boundary conditions, Fourier multipliers. Mathematics Subject Classification: Primary: 42B15, 42B37; Secondary: 35G15, 35G16, 35R3.

Citation: Sergey P. Degtyarev. On Fourier multipliers in function spaces with partial Hölder condition and their application to the linearized Cahn-Hilliard equation with dynamic boundary conditions. Evolution Equations & Control Theory, 2015, 4(4): 391–429. doi: 10.3934/eect.2015.4.391.
Asymptotic behavior of a Cahn-Hilliard equation with Wentzell boundary conditions and mass conservation. Discrete & Continuous Dynamical Systems - A, 2008, 22 (4) : 1041-1063. doi: 10.3934/dcds.2008.22.1041 Gisèle Ruiz Goldstein, Alain Miranville. A Cahn-Hilliard-Gurtin model with dynamic boundary conditions. Discrete & Continuous Dynamical Systems - S, 2013, 6 (2) : 387-400. doi: 10.3934/dcdss.2013.6.387 Laurence Cherfils, Madalina Petcu. On the viscous Cahn-Hilliard-Navier-Stokes equations with dynamic boundary conditions. Communications on Pure & Applied Analysis, 2016, 15 (4) : 1419-1449. doi: 10.3934/cpaa.2016.15.1419 Rafael De La Llave, R. Obaya. Regularity of the composition operator in spaces of Hölder functions. Discrete & Continuous Dynamical Systems - A, 1999, 5 (1) : 157-184. doi: 10.3934/dcds.1999.5.157 Ciprian G. Gal. Robust exponential attractors for a conserved Cahn-Hilliard model with singularly perturbed boundary conditions. Communications on Pure & Applied Analysis, 2008, 7 (4) : 819-836. doi: 10.3934/cpaa.2008.7.819 Anna Kostianko, Sergey Zelik. Inertial manifolds for the 3D Cahn-Hilliard equations with periodic boundary conditions. Communications on Pure & Applied Analysis, 2015, 14 (5) : 2069-2094. doi: 10.3934/cpaa.2015.14.2069 Angelo Favini, Rabah Labbas, Stéphane Maingot, Hiroki Tanabe, Atsushi Yagi. Necessary and sufficient conditions for maximal regularity in the study of elliptic differential equations in Hölder spaces. Discrete & Continuous Dynamical Systems - A, 2008, 22 (4) : 973-987. doi: 10.3934/dcds.2008.22.973 Desheng Li, Xuewei Ju. On dynamical behavior of viscous Cahn-Hilliard equation. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2207-2221. doi: 10.3934/dcds.2012.32.2207 Laurence Cherfils, Alain Miranville, Sergey Zelik. On a generalized Cahn-Hilliard equation with biological applications. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 2013-2026. 
doi: 10.3934/dcdsb.2014.19.2013 Álvaro Hernández, Michał Kowalczyk. Rotationally symmetric solutions to the Cahn-Hilliard equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (2) : 801-827. doi: 10.3934/dcds.2017033 Kelong Cheng, Cheng Wang, Steven M. Wise, Zixia Yuan. Global-in-time Gevrey regularity solutions for the functionalized Cahn-Hilliard equation. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020186 Sergey Zelik, Jon Pennant. Global well-posedness in uniformly local spaces for the Cahn-Hilliard equation in $\mathbb{R}^3$. Communications on Pure & Applied Analysis, 2013, 12 (1) : 461-480. doi: 10.3934/cpaa.2013.12.461 Sergey P. Degtyarev
CommonCrawl
Abelian non-cyclic orbit codes and multishot subspace codes

Gustavo Terra Bastos (Department of Mathematics and Statistics, Federal University of São João del-Rei, Praça Frei Orlando, 170, Centro, São João del-Rei - MG, 36307-352, Brazil), Reginaldo Palazzo Júnior (Department of Communications, FEEC, State University of Campinas, Cidade Universitária Zeferino Vaz - Barão Geraldo, Campinas - SP, 13083-852, Brazil), and Marinês Guerreiro (Department of Mathematics, Federal University of Viçosa, Avenida Peter Henry Rolfs, s/n, Viçosa - MG, 36570-900, Brazil)

Received November 2018; revised June 2019; published November 2019. The first author was supported by CAPES and CNPq PhD scholarships.

In this paper we characterize orbit codes as geometrically uniform codes. This characterization is based on a description of all isometries over a projective geometry. In addition, Abelian orbit codes are defined and a construction of Abelian non-cyclic orbit codes is presented. To analyze their structure, the concept of geometrically uniform partitions has to be reinterpreted. As a consequence, a substantial reduction in the number of computations needed to obtain the minimum subspace distance of these codes is achieved. An application of orbit codes to multishot subspace codes, obtained according to a multilevel construction, is provided.

Keywords: Geometrically uniform codes, Abelian orbit codes, multishot subspace codes, geometrically uniform partitions.

Mathematics Subject Classification: Primary: 11T71, 94B60; Secondary: 51E99.

Citation: Gustavo Terra Bastos, Reginaldo Palazzo Júnior, Marinês Guerreiro. Abelian non-cyclic orbit codes and multishot subspace codes. Advances in Mathematics of Communications, doi: 10.3934/amc.2020035
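The minimum subspace distance mentioned in the abstract reduces to the standard pairwise formula $d_S(U,V) = \dim U + \dim V - 2\dim(U \cap V) = 2\dim(U+V) - \dim U - \dim V$. A minimal sketch of this computation (not taken from the paper; it fixes $q = 2$ and encodes generator rows as integer bitmasks purely for illustration):

```python
def rank_gf2(rows):
    """Rank of a binary matrix whose rows are given as integer bitmasks,
    computed by Gaussian elimination over GF(2)."""
    rows = [r for r in rows if r]
    rank = 0
    while rows:
        pivot = max(rows)                 # row with the highest leading bit
        rows.remove(pivot)
        rank += 1
        msb = pivot.bit_length() - 1
        # clear the pivot's leading bit from every remaining row
        rows = [r ^ pivot if (r >> msb) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def subspace_distance(U, V):
    """d_S(U, V) = 2 dim(U + V) - dim U - dim V for subspaces given by
    spanning sets of GF(2) row vectors (integer bitmasks)."""
    return 2 * rank_gf2(list(U) + list(V)) - rank_gf2(U) - rank_gf2(V)

# two 2-dimensional subspaces of GF(2)^4 sharing a 1-dimensional intersection
print(subspace_distance([0b1000, 0b0100], [0b0100, 0b0010]))  # 2
```

The minimum distance of a code is then the minimum of this quantity over all pairs of distinct codewords; the geometrically uniform structure characterized in the paper is precisely what reduces how many such pairs must actually be evaluated.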
Table 1. Interdistance sets $ D\left( {\left\{ V \right\},{C_H}\left( {{\alpha ^i}V} \right)} \right)$, for $ 1 \leq i \leq 4 $

$d_S(\cdot,\cdot)$   $\alpha^1$   $\alpha^{10}$   $\alpha^{19}$   $\alpha^{28}$   $\alpha^{37}$   $\alpha^{46}$   $\alpha^{55}$
$\alpha^0$           4            4               6               6               6               4               6
Limb, joint and pelvic kinematic control in the quail coping with steps upwards and downwards

Emanuel Andrada, Oliver Mothes, Heiko Stark, Matthew C. Tresch, Joachim Denzler, Martin S. Fischer & Reinhard Blickhan

Scientific Reports volume 12, Article number: 15901 (2022)

Small cursorial birds display remarkable walking skills and can negotiate complex and unstructured terrains with ease. The neuromechanical control strategies necessary to adapt to these challenging terrains are still not well understood.
Here, we analyzed the 2D and 3D pelvic and leg kinematic strategies employed by the common quail to negotiate visible steps (upwards and downwards) of about 10% and 50% of their leg length. We used biplanar fluoroscopy to accurately describe joint positions in three dimensions and performed semi-automatic landmark localization using deep learning. Quails negotiated the vertical obstacles without major problems and rapidly regained steady-state locomotion. When coping with steps upwards, the quail mostly adapted the trailing limb to permit the leading leg to step on the elevated substrate much as it did during level locomotion. When negotiating steps downwards, both legs showed significant adaptations. For those small and moderate step heights that did not induce aerial running, the quail kept the kinematic pattern of the distal joints largely unchanged during uneven locomotion, and most changes occurred in proximal joints. The hip regulated leg length, while the distal joints maintained the spring-damped limb patterns. However, to negotiate the largest visible steps, more dramatic kinematic alterations were observed. There, all joints contributed to leg lengthening/shortening in the trailing leg, and both the trailing and leading legs stepped more vertically and less abducted. In addition, locomotion speed decreased. We hypothesize a shift from a dynamic walking program to more goal-directed motions that might be focused on maximizing safety.

Encompassing almost ten thousand species, birds (clade Aves) are the most successful bipeds. Despite their flying abilities, they also represent a valuable study group for understanding adaptations to terrestrial locomotion. For example, there are bird species that combine remarkable flying and walking abilities (e.g., waders1,2). Other species evolved to live on the ground, partially or completely losing their ability to fly.
Within the latter group, which encompasses about sixty species, the quail (Coturnix coturnix) is representative of small cursorial birds. Like most of this group, the quail prefers grounded running (a running gait without aerial phases) during unrestricted level locomotion3,4. In the wild, however, the quail must navigate over complex and unstructured terrains. Locomotion might become non-periodic, altering the kinematic and mechanical demands placed on the neuromechanical control system as compared to level locomotion. Despite important progress in recent years, our understanding of how animals' neuromechanical control strategies adapt to these changing demands remains limited. It is believed that animals combine the intrinsic stability of their body mechanics with their neuronal control to negotiate rough terrains. The assumption is that anticipatory (feedforward) mechanisms pre-adjust limb kinematics and impedance before the leg contacts the ground, reducing the need for reactive (feedback) responses to readapt posture during stance5,6,7,8,9. In recent years, two-dimensional neuromechanical studies have tried to shed light on the adaptive mechanisms underlying bipedal uneven locomotion. Their results indicate that humans mainly adjust leg stiffness to maintain dynamic stability10,11,12,13, whereas birds seem to rely on leg actuation and kinematic control14. Birds use anticipatory maneuvers to vault upwards in order to avoid excessively crouched postures on an obstacle14,15. Similar to humans16, birds use leg retraction in late swing to regulate landing conditions14,17, to minimize fluctuations in leg loading during uneven locomotion18, and to prevent falls19,20. Late-swing retraction is known to increase the stability of locomotion, as it changes the angle of attack of the leg at touch down (TD) according to obstacle height16.
In small birds, the retraction of the leading leg can be a consequence of the leg placement strategy called the fixed aperture angle4. In this strategy, the angle between the leg about to contact the ground (usually termed the leading leg) and the supporting leg (usually termed the trailing leg) is fixed before TD. The retraction of the leading leg is thus automatically adapted to locomotion speed4,21,22. The aperture angle strategy has not yet been tested in birds facing perturbations, although there is some evidence for its use by humans during uneven locomotion23. Simple leg control strategies like the aperture angle might reduce, to some extent, the need for anticipatory maneuvers when negotiating mildly uneven terrain. The robustness of avian level locomotion was also assessed using a simple model comprising an effective leg (the segment spanning from the hip to the toe, Fig. 1F) and a trunk22. The model produced self-stable gaits and was able to cope with steps over obstacles or sudden drops without the need for feedback control or even for tuning feedforward strategies22,24.

Experimental setup and 2D/3D global and joint limb kinematics. The quail negotiated visible step-up (A) and step-down (B) steps of 1 cm (green), 2.5 cm (red), and 5 cm (blue) height. Body and hindlimb kinematics were captured using biplanar fluoroscopy. (C) Analyzed body segments. (D) 3D kinematics of the pelvis relative to the global coordinate system, and rotation of the whole leg relative to the pelvis. The latter estimates the three-dimensional rotations occurring at the hip joint. The whole leg is a plane formed by the hip (e.g., hl), the knee (e.g., kl) and the distal marker of the tarsometatarsus (tmtdist. l). Coordinate systems for the pelvis and the leg can be seen in D1; see methods for further explanations. (E) Joint kinematics (INT intertarsal joint, TMP tarsometatarsal–phalangeal joint). (F) The effective leg is the distance between the tip of the middle toe (Mto) and the hip.
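The effective-leg quantities used throughout (leg length l, leg angle α, and the aperture angle ϕ between the two effective legs) follow directly from hip and toe coordinates. A minimal sagittal-plane sketch (the coordinate convention and the single shared hip point are simplifying assumptions, not the authors' implementation):

```python
import math

def effective_leg(hip, toe):
    """Effective leg: the segment from the hip to the tip of the middle toe,
    here in the sagittal (x: forward, y: up) plane. The returned angle is
    measured from the forward horizontal, so ~90 deg is a vertical leg and
    values > 90 deg mean the leg is retracted (toe behind the hip)."""
    dx = toe[0] - hip[0]        # forward offset of the toe relative to the hip
    dy = hip[1] - toe[1]        # height of the hip above the toe
    length = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))
    return length, angle

def aperture_angle(hip, toe_trailing, toe_leading):
    """Aperture angle: the angle between the trailing and leading effective
    legs; using one shared hip point for both legs is a simplification."""
    _, a_trail = effective_leg(hip, toe_trailing)
    _, a_lead = effective_leg(hip, toe_leading)
    return a_trail - a_lead
```

With this convention, a protracted leading leg at TD yields an angle below 90 degrees and a retracted trailing leg at TO an angle above 90 degrees, matching the values reported in the Results.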
To our knowledge, there is no previous literature on three-dimensional analyses of avian locomotion over uneven surfaces. Even for level locomotion, three-dimensional analyses of avian locomotion are uncommon (e.g.,25,26,27,28). However, a three-dimensional analysis is mandatory to address topics like lateral stabilization or navigation control, especially on uneven terrains. Finally, to obtain a more general picture of the strategies used to negotiate uneven terrains, it is important to link the kinematic control occurring at different levels of abstraction (e.g., simple model representations of the leg vs. joint kinematics). Simple model representations like the effective leg help to understand basic strategies for stability or economy of locomotion (e.g.,4,5,21,29,30,31) and can be used as global goals for the control of limb joints32. During unrestricted locomotion, there is evidence of an interplay between the effective leg and limb segmental angles. In humans, Japanese macaques and the quail, limb segmental angles (thigh, shank, and foot) measured in the sagittal plane covary such that they form a planar loop in three-dimensional space33,34,35,36,37. This result indicates that intersegmental coordination might reduce the number of degrees of freedom needed to control the leg from three (i.e., joint angles) to two (i.e., effective leg length and angle). Due to the redundant nature of the segmented leg, different combinations of joint kinematics can lead to the same effective leg length and angle before TD, but to differing leg responses later during stance. Thus, we can expect that their combined analysis helps to infer quail motor control goals on rough terrains. In this study, we aimed to uncover pelvic, leg, and joint kinematic adaptations to visible steps (upwards and downwards, Fig. 1), and how these adaptations influence the leg response after TD. We searched for relationships between simple model representations of the leg and joint kinematics.
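The planar covariation of thigh, shank and foot angles cited above is commonly quantified with principal component analysis: the loop is planar when the first two principal components capture nearly all of the variance. A sketch of that check on synthetic data (the generating coefficients are arbitrary; this is not the study's pipeline):

```python
import numpy as np

def planarity_index(angles):
    """angles: (n_samples, 3) array of thigh, shank and foot segment angles.
    Returns the fraction of variance captured by the two largest principal
    components of the angle covariance; values near 1 indicate that the
    segment angles trace a planar loop."""
    centered = angles - angles.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))  # ascending order
    return eigvals[1:].sum() / eigvals.sum()

# synthetic stride: the foot angle is a fixed linear combination of the other
# two, so the three angles lie exactly in a plane (coefficients are arbitrary)
t = np.linspace(0.0, 2.0 * np.pi, 200)
thigh = 20.0 * np.sin(t)
shank = 35.0 * np.sin(t + 1.0)
foot = 0.6 * thigh - 0.4 * shank + 5.0
angles = np.column_stack([thigh, shank, foot])
print(planarity_index(angles))  # ~1.0 for perfectly planar data
```

The normal of the best-fit plane (the eigenvector of the smallest eigenvalue) is what studies of intersegmental coordination typically compare across conditions.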
In our experiments, we used biplanar fluoroscopy to accurately describe joint positions in three dimensions (Fig. 1A,B). Because of our constrained field of view, we focused our analysis on preadaptation strategies, i.e., from the stride before the vertical shift in terrain (termed i-1) to the stride after it (termed i). We expected step-type (up vs. down) and step-height related changes in leg kinematics, as animals preadapt and redirect the body when negotiating a visible vertical step. While kinematics cannot predict dynamics, we anticipated that knowledge of the interaction between kinematics and dynamics during level locomotion discussed previously3,4,14,15,18,21,22,24,38 could help us to deduce joint-related pre-/post-adaptations and thus to infer the main goals of the neuromechanical strategies used by animals to cope with vertical steps.

Quails negotiated vertical steps ranging from ca. 10–50% of their effective leg length without major problems. None of the subjects visibly lost stability or stumbled because of the challenges. Furthermore, based on inspection of the live videos, they appeared to recover from the vertical shifts after one or two steps. To overcome 1 cm vertical steps, quails usually switched to aerial running for both step-up and step-down conditions. To negotiate 2.5 cm and 5 cm steps, quails relied on double support phases, except for 5 cm drops, where they sometimes switched to aerial running after the vertical shift. On average, locomotion speed measured while coping with the uneven substrate decreased with step height (Table 1). Contact and swing times showed a less clear trend. During step-up experiments, quails increased contact and swing times from 1 to 2.5 cm but decreased them from 2.5 to 5 cm step heights. For 2.5 cm and 5 cm, contact times were longer for the leading leg, indicating a reduction in kinetic energy after the vertical shift.
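Whether a stride involves double support or an aerial phase, as reported above for the different step heights, can be read off the duty factor, i.e., the stance fraction of the stride. A simplified sketch of this bookkeeping for a symmetric bipedal gait (threshold logic is the textbook case; the example numbers are illustrative, not measured values from the study):

```python
def duty_factor(contact_time, stride_time):
    """Fraction of the stride during which the foot is on the ground."""
    return contact_time / stride_time

def classify_bipedal_stride(contact_time, stride_time):
    """Textbook rule for a symmetric bipedal gait: a duty factor above 0.5
    implies overlapping stance phases (double support, as in grounded
    running); below 0.5 it implies whole-body aerial phases between
    stances (aerial running)."""
    if duty_factor(contact_time, stride_time) > 0.5:
        return "double support"
    return "aerial phase"

# illustrative numbers only
print(classify_bipedal_stride(0.22, 0.40))  # double support
```

Real gait classification additionally needs the phase shift between the legs; for asymmetric strides, such as those around the vertical shift, stance overlap must be computed from both legs' TD and TO events directly.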
During step-down locomotion, the quail increased trailing limb contact times with step height and varied swing times similarly to the step-up condition described above. For the leading limb, contact and swing times increased with step height from 1 to 2.5 cm but did not vary between 2.5 and 5 cm.

Table 1. Spatiotemporal parameters.

In the following, only selected significant differences are presented; please refer to the tables for further information on significance values.

Analysis of effective leg kinematics

Stepping up, trailing leg

Overall patterns of the effective leg length for the trailing limb were similar for level and step-up locomotion. After TD, the effective trailing leg is compressed, then slightly extended until toe-off (TO). During the swing, the leg shortened and rapidly extended until the next TD. The extension decelerated in late swing (see Fig. 2A). However, some differences can be observed between level and step-up locomotion. Quails prepare step-up TD with longer effective trailing legs than observed during level locomotion. During stance, step-up conditions caused increased trailing leg extension (e.g., l = 0.094 m at TO during level locomotion versus l = 0.139 m at TO for the 5 cm step-up condition) and significantly reduced leg retraction compared to level locomotion (αTO ≈ 108°, αTO ≈ 89°, αTO ≈ 96°, and αTO ≈ 100° for level, 1 cm, 2.5 cm, 5 cm, respectively; see Fig. 2C; Table S1).

Effective leg kinematics during level and step locomotion. Level vs step-up (above): (A,B) effective leg length (l), effective leg axial velocity (\(\dot{l}\)). (C,D) effective leg angle (\(\alpha\)), effective leg angular velocity (\(\dot{\alpha }\)). (E) aperture angle between effective legs (\(\phi\)) and aperture angle velocity (\(\dot{\phi }\)). Level vs drop (below): (F,G) effective leg length (l), effective leg axial velocity (\(\dot{l}\)).
(H,I) effective leg angle (\(\alpha\)), effective leg angular velocity (\(\dot{\alpha }\)). (J) aperture angle between effective legs (\(\phi\)) and aperture angle velocity (\(\dot{\phi }\)). Left: trailing leg stepping before the step upwards/downwards (stride i-1); right: leading leg stepping after the step upwards/downwards (stride i). Level (black) and step locomotion (1 cm: green, 2.5 cm: red, 5 cm: blue) in the quail. Solid lines: length/angle; dotted lines: length/angle velocities. Curves display mean values. Black, blue, red, and green dashed lines indicate toe-off (TO), while solid lines indicate touch down (TD). Cyan solid lines indicate 15% and 85% of the stride. Due to the constrained field of view in the X-ray fluoroscope, hip data were often missing at the beginning and end of the stride cycles, and average values are less reliable there (shown faded).

Stepping up, leading leg

In general, the effective kinematics of the leading leg during step-up locomotion were similar to those observed during level locomotion. However, the effective leg length at TD (l0) was significantly longer during step-up locomotion than during level locomotion (l0-level ≈ 0.13 m, while l0–1 cm, l0–2.5 cm, and l0–5 cm > 0.14 m; in all cases p < 0.0001). The trajectory of the effective leg angle (α) on the step was not substantially altered compared to level locomotion, but differences were observed (Fig. 2D). For example, the leading leg starts the swing phase more vertically oriented and contacts the elevated substrate with a slightly less vertical angle compared to level locomotion (at TD, α0 ≈ 43°, α0 ≈ 38°, α0 ≈ 39°, and α0 ≈ 36° for level, 1 cm, 2.5 cm, 5 cm, respectively; in all cases p < 0.0001). Retraction speed at TD was significantly slower for 1 cm and 2.5 cm step heights compared to level locomotion (\({\dot{\alpha }}_{0}\)-level ≈ 300° s−1, \({\dot{\alpha }}_{0}\)-1 cm ≈ 256° s−1, \({\dot{\alpha }}_{0}\)-2.5 cm ≈ 235° s−1; p < 0.012 and p < 0.001, respectively, see Table S1b).
Like the trailing leg, the leading leg was significantly (p < 0.0001) less retracted during stance compared to level locomotion (α85% ≈ 89°, α85% ≈ 86°, α85% ≈ 86°, and α85% ≈ 85° for level, 1 cm, 2.5 cm, 5 cm, respectively). Differences between step heights were not significant (Fig. 2D; Table S2). The aperture angle between leading and trailing legs at TD (ϕ0) was generally not affected by step height and did not differ significantly from the mean value (ϕ0 ≈ 53°) obtained during level locomotion (p > 0.05, see Fig. 2E; Table S1). Stepping down, trailing leg Step-related strategies were observed for the trailing leg at the level of the effective leg. Birds negotiating 1 cm drops displayed a compression–extension pattern that diverged from the pattern they exhibited during level locomotion and from the monotonic compression displayed when they faced 2.5 cm and 5 cm steps (Fig. 2F). Stance time was significantly increased (p < 0.05) for drop heights of ca. 25% and 50% of leg length (level ≈ 0.22 s, 1 cm ≈ 0.22 s, 2.5 cm ≈ 0.29 s, 5 cm ≈ 0.33 s). Leg compression at TO was significantly larger for 5 cm drops (p < 0.0001) when compared to level locomotion and the other drop conditions (l5cm ≈ 0.08 m vs. l2.5 cm ≈ 0.107 m, l1cm ≈ 0.104 m, llevel ≈ 0.094 m). The trailing leg's angle in early stance (α) was not related to the height of the step-down, and it was similar to the α0 observed for level locomotion (α15% ≈ 54°, see Fig. 2H; Table S1). For moderate drop heights, the effective leg angle was substantially (p < 0.0001) less retracted at TO (αTO ≈ 108°, αTO ≈ 104°, αTO ≈ 83°, and αTO ≈ 106° for level, 1 cm, 2.5 cm, 5 cm, respectively; see Fig. 2H, Table S2). After TO, the effective leg angle returned to the values observed during level locomotion. Stepping down, leading leg There were clear adaptations in effective leg kinematics for the leg that stepped on the lowered substrate.
The effective leg length at TD for 5 cm steps (l5cm ≈ 0.14 m) was significantly shorter than the leg length at TD for 1 cm (l1cm ≈ 0.15 m) and 2.5 cm (l2.5cm ≈ 0.15 m) drops (in both cases p < 0.0001, see Table S1). However, all three were significantly longer (p < 0.0001) than the leg length observed during level locomotion (llevel ≈ 0.13 m). Leg compression speed at TD was also higher for the largest drops but not significantly different among step conditions (Fig. 2G, Table S1b). During stance, the effective leg was compressed until TO. However, at 85% of the stance, for all drop conditions, the effective leg length was significantly larger (p < 0.0001) than the leg length measured during level locomotion (l85% ≈ 0.091 m, l85% ≈ 0.122 m, l85% ≈ 0.11 m, and l85% ≈ 0.097 m for level, 1 cm, 2.5 cm, 5 cm, respectively). Similarly, effective leg angles were altered during step-down locomotion for the leading leg. At TO (elevated substrate), the angle of the effective leg stepping onto the lowered substrate was steeper than during level locomotion (level: αTO ≈ 108°, from Table S2; 2.5 cm: αTO ≈ 89°, 5 cm: αTO ≈ 87°, from Fig. 2I). At TD, the angle of attack (α0) was unrelated to drop height but significantly more retracted after a drop compared to level locomotion (α0-level ≈ 42°, α0-1 cm ≈ 50°, α0-2.5 cm ≈ 54°, and α0-5 cm ≈ 53°; in all cases p < 0.001; see Table S1). Retraction speed at TD (\({\dot{\alpha }}_{0}\)) was significantly reduced for drops of 25% and 50% of leg length (\({\dot{\alpha }}_{0}\)-level ≈ 300° s−1, \({\dot{\alpha }}_{0}\)-2.5 cm ≈ 243° s−1, \({\dot{\alpha }}_{0}\)-5 cm ≈ 200° s−1; p < 0.01 and p < 0.0001, respectively). The aperture angle (\(\phi\)0) between leading and trailing legs was adapted to the drop height (Fig. 2J). For the 1 cm step, the aperture angle increased before TD, especially after the level height was crossed.
Conversely, for 2.5 cm and 5 cm drops, the aperture angle was on average significantly below the mean value obtained during level locomotion (\(\phi\)0-level ≈ 53°, \(\phi\)0-1 cm ≈ 53°, \(\phi\)0–2.5 cm ≈ 35°, \(\phi\)0-5 cm ≈ 44°; p < 0.0001 and p < 0.01 for 2.5 cm and 5 cm, respectively; Table S1). Note that the quails adapted the angle between legs after the point at which level height was crossed (Fig. 2J). Joint angles The previous section described how effective leg kinematics were altered during uneven locomotion. In this section, we describe how the kinematics of the individual, elemental joints were altered. Quail joint angles during level locomotion were previously published3. Here, we have included pertinent values from that study to permit the comparison between step and level locomotion. The influence of the disturbances on the hip angle will be described in the section on 3D hip angles. Stepping up, trailing limb (Fig. 3A,C,E) Joint angles. Level vs step-up (above): (A,B) knee; (C,D) intertarsal (INT); and (E,F) tarsometatarsal-phalangeal (TMP). Level vs drop (below): (G,H) knee; (I,J) INT; and (K,L) TMP. Left: trailing leg stepping before the step up/downwards (stride i-1), right: leading leg stepping after the step up/downwards (stride i). Curves display mean values of joint angles during level (black) and step locomotion (1 cm: green, 2.5 cm: red, 5 cm: blue) in the quail. Black, blue, red, green dashed lines indicate toe-off (TO), while solid lines indicate touch down (TD). Cyan solid lines indicate 15% and 85% of the stride. Due to the constrained field of view in the X-ray fluoroscope, hip data was often missing at the beginning and at the end of the stride cycles, and average values might be less reliable (shown faded). To negotiate 1 cm steps, quails used significantly more flexed knee (Fig. 3A) and INT angles (Fig. 3C) in the early stance compared to level locomotion (around 15% stance: knee1cm ≈ 85° vs. kneelevel ≈ 98°, p < 0.0001; INT1cm ≈ 99°, INTlevel ≈ 112°).
Around TO, the knee was more extended while the INT remained more flexed compared to level locomotion (knee1cm ≈ 64° vs. kneelevel ≈ 60°, p = 0.04; INT1cm ≈ 81°, INTlevel ≈ 112°, p < 0.0001). 2.5 cm steps induced less substantial but still significant changes in knee and TMP joint kinematics (Fig. 3E). Both the knee and the TMP were more flexed in the early stance (around 15% stance: knee2.5 cm ≈ 91° vs. kneelevel ≈ 98°, p < 0.0001; TMP2.5 cm ≈ 135° vs TMPlevel ≈ 143°, p < 0.01). Around TO, the knee was significantly more extended (knee2.5 cm ≈ 73° vs. kneelevel ≈ 60°, p < 0.0001). To negotiate 5 cm steps, the knee and the INT joints were significantly more extended, and the TMP was more flexed during early stance. Around 15% stance: knee5cm ≈ 113° vs. kneelevel ≈ 98°, p < 0.0001; INT5cm ≈ 139°, INTlevel ≈ 112°, p < 0.0001; TMP5cm ≈ 135° vs TMPlevel ≈ 143°, p < 0.01. Around TO: knee5cm ≈ 103° vs. kneelevel ≈ 60°, p < 0.0001; INT5cm ≈ 143°, INTlevel ≈ 112°, p < 0.0001; TMP5cm ≈ 142°, TMPlevel ≈ 142°, p > 0.05 (see Fig. 3A,C,E, Tables S3, S4). After TO, the knee was kept more extended during the early swing phase. Note that the bouncing behavior observed in the INT almost vanished when facing 5 cm steps upwards (Fig. 3C). Stepping up, leading limb (Fig. 3B,D,F) On the elevated substrate, the quails displayed on average a more flexed knee and INT at TD for all experimental conditions (Fig. 3B,D; kneelevel ≈ 120°, knee1cm ≈ 106°, knee2.5 cm ≈ 112°, knee5cm ≈ 110°, all p < 0.0001; INTlevel ≈ 125°, INT1cm ≈ 112°, INT2.5 cm ≈ 114°, INT5cm ≈ 121°; for 1 cm and 2.5 cm p < 0.0001, for 5 cm p = 0.058, see Table S3). During stance on the step, the joint patterns for 1 cm and 2.5 cm steps displayed a more flexed INT (Fig. 3D), together with a more extended TMP (Fig. 3F), compared to the patterns observed for 5 cm steps. Stepping down, trailing limb (Fig. 3G,I,K) When negotiating 1 cm steps, the flexion–extension pattern of the TMP changed (Fig. 3K).
Note that during stance, there was a larger flexion up to midstance, followed by an extension in late stance. After TO, a second, more marked flexion–extension was exhibited. For 2.5 cm drops, quails displayed less flexion–extension in the INT (Fig. 3I). More marked differences in all joints were observed for 5 cm steps. Under this test condition, knee (Fig. 3G) and INT joints exhibited significantly larger flexion at TD and during stance (around 15% stance: knee5cm ≈ 90° vs. kneelevel ≈ 98°; INT5cm ≈ 76°, INTlevel ≈ 112°, both p < 0.0001; around TO: knee5cm ≈ 48° vs. kneelevel ≈ 60°; INT5cm ≈ 81°, INTlevel ≈ 112°, both p < 0.0001). After TO, knee and INT were kept more flexed (see Fig. 3G,I). Stepping down, leading limb (Fig. 3H,J,L) The leg that stepped on the lowered substrate displayed step-related adaptations before and after TD. Before TD, changes were observed mainly in the distal joints. 1 cm drops increased joint flexion in the first half of the swing phase but did not induce significant changes at TD relative to level locomotion. 2.5 cm and 5 cm drops did not substantially influence joint swing patterns but affected joint angles at TD (relative to level locomotion, significantly more extended for the knee and INT: kneelevel ≈ 120°, knee2.5 cm ≈ 128°, knee5cm ≈ 131°, both p < 0.0001; INTlevel ≈ 125°, INT2.5 cm ≈ 146°, INT5cm ≈ 148°, both p < 0.0001, see Fig. 3H,J, Table S3; and significantly more flexed for the TMP: TMPlevel ≈ 158°, TMP2.5 cm ≈ 133°, TMP5cm ≈ 134°, both p < 0.0001, see Fig. 3L and Table S3). After TD, the INT was further flexed for 1 cm and 2.5 cm drops until TO (Fig. 3J). The INT for 5 cm and the TMP for 1 cm drops displayed a rebound behavior (flexion–extension pattern, see Fig. 3J). For 2.5 cm and 5 cm drops, TMP patterns were similar to those observed for level locomotion, but the joints were kept more flexed until late stance (Fig. 3L).
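The effective-leg quantities reported above (length l, angle α, aperture angle ϕ) are purely geometric and can be recovered from hip and toe coordinates. The following is a minimal sketch in the sagittal plane, using hypothetical coordinates (not data from this study) and, for simplicity, a single hip point for both legs:

```python
import math

def effective_leg(hip, toe):
    """Effective leg: the vector from hip to toe.
    Returns length l (m) and angle alpha (deg) measured from the
    forward horizontal, so alpha grows as the leg retracts."""
    dx = toe[0] - hip[0]   # forward (+x is the direction of travel)
    dz = toe[1] - hip[1]   # vertical (+z is up); the toe lies below the hip
    l = math.hypot(dx, dz)
    alpha = math.degrees(math.atan2(-dz, dx))  # downward leg vector -> positive angle
    return l, alpha

def aperture(hip, toe_trailing, toe_leading):
    """Aperture angle phi between the two effective legs (deg)."""
    _, a_trail = effective_leg(hip, toe_trailing)
    _, a_lead = effective_leg(hip, toe_leading)
    return abs(a_trail - a_lead)

# hypothetical touch-down configuration (units: m)
hip = (0.00, 0.15)
toe_lead = (0.11, 0.0)    # leading toe ahead of the hip
toe_trail = (-0.06, 0.0)  # trailing toe behind the hip
l0, alpha0 = effective_leg(hip, toe_lead)   # leg length and angle of attack at TD
phi0 = aperture(hip, toe_trail, toe_lead)   # aperture angle at TD
```

With this convention a leg in front of the hip has α < 90° (angle of attack at TD) and a leg behind the hip has α > 90° (retraction at TO), matching the values reported in the text.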
3D-kinematics of the whole leg This section describes the three-dimensional kinematics of the whole leg relative to the pelvis during level and step locomotion (see Fig. 4). Under the assumption that both knee and intertarsal joints work as revolute joints, the whole leg approximates three-dimensional hip kinematics. Note that because the z-axis was aligned with the segment from hip to knee, rotation about the y-axis (βh) reflects flexion/extension between femur and pelvis, rotation about the z-axis (γh) reflects hip ab-adduction, while rotation about the x-axis (αh) reflects femoral axial rotation, resulting in the mediolateral rotation of the whole leg. αh = βh = γh = 0° indicates that the whole leg and the pelvis coordinate systems are aligned. However, in this zero-pose, the pelvis and femur are orthogonal to each other in the sagittal plane. Therefore, we used βh + 90° to represent hip flexion/extension in Fig. 4A,B,G,H and Tables S5 and S6. In the following, level locomotion is first described in detail. Step locomotion is discussed when there is a difference from level locomotion. Whole leg three-dimensional rotations in the quail. Motions were measured relative to the pelvis. Level (black) and step locomotion (1 cm: green, 2.5 cm: red, 5 cm: blue). Accepting that the knee, the intertarsal, and the tarsometatarsal–phalangeal joints work mainly as revolute joints, the plane describing the whole leg displays the three-dimensional hip control. Level vs step-up (above): (A,B) hip flexion–extension (βh), negative values indicate flexion. (C,D) Hip mediolateral rotation (αh); positive values indicate that the distal point of the whole leg moves laterally with respect to the hip. (E,F) Hip ab-adduction (γh). Level vs drop (below): (G,H) hip flexion–extension; (I,J) mediolateral rotation; and (K,L) hip ab-adduction. Curves display mean values.
Left: trailing leg stepping before the step up/downwards (stride i-1), right: leading leg stepping after the step up/downwards (stride i). Black, blue, red, green dashed lines indicate toe-off (TO), while solid lines indicate touch down (TD). Cyan solid lines indicate 15% and 85% of the stride. Due to the constrained field of view in the X-ray fluoroscope, hip data was often missing at the beginning and at the end of the stride cycles, and average values might be less reliable (shown faded). Level locomotion, hip flexion–extension (βh) At TD, the hip joint is flexed about 42°. After a small flexion due to weight transfer, the hip joint extends 17° until TO. After TO, the leg protracts, flexing the hip joint up to 85% of swing. In the late swing phase, the whole leg retracts until TD (see Fig. 4A, black line). Level locomotion, mediolateral control of the whole leg (αh) At TD, the whole leg was medially oriented (α ≈ − 14°). During stance, the leg was rotated laterally until TO to an angle of approx. α = 11°. During swing, the distal point of the whole leg was rapidly rotated medially (see Fig. 4C, black line). Level locomotion, whole leg (femoral) ab-adduction (γh) Hip ab-adduction curves show a half-sine pattern. At TD, the whole leg was abducted about 36°. Abduction was reduced during stance to 18° at TO. After TO, the leg was abducted up to TD (see Fig. 4E, black line). Stepping up, trailing limb (Fig. 4A,C,E; Tables S5, S6) Step height had a significant influence on hip flexion–extension. At TD, quails facing step-ups exhibited significantly larger hip extension (Fig. 4A). As the stance phase progressed, the hip joint was significantly more extended during stepping up than during level locomotion (around 15% stance: \(\beta\)h-level ≈ 41°, \(\beta\)h-1 cm ≈ 46°, \(\beta\)h-2.5 cm ≈ 49°, \(\beta\)h-5 cm ≈ 62°; p values = 0.0042, 0.00003, and < 0.0001 for 1 cm, 2.5 cm, and 5 cm, respectively).
However, 1 cm and 2.5 cm steps induced, on average, similar hip extension patterns (p > 0.05) that differed significantly from 5 cm (i.e., quails displayed a two-step strategy to negotiate the different step-up conditions). Mediolateral hip control was also influenced by step height (Fig. 4C). At TD and early stance, 2.5 cm and 5 cm step-ups induced a more vertical orientation of the whole leg (around 15% stance: \(\alpha\)h-level ≈ − 6°, \(\alpha\)h-1 cm ≈ − 4°, \(\alpha\)h-2.5 cm ≈ − 2°, \(\alpha\)h-5 cm ≈ 6°; p values < 0.0001 for 2.5 cm and 5 cm), and at TO the whole leg was less laterally oriented than during level locomotion (around TO: \(\alpha\)h-level ≈ 11°, \(\alpha\)h-1 cm ≈ 5°, \(\alpha\)h-2.5 cm ≈ 5°, \(\alpha\)h-5 cm ≈ 9°; p values < 0.0001 for 1 cm and 2.5 cm). During step-up locomotion, the whole leg was on average less abducted (Fig. 4E). While quails facing 5 cm steps decreased abduction in a similar way as when they negotiated 2.5 cm steps, for coping with 1 cm steps they kept abduction similar to that observed during level locomotion (around 15% stance: \(\gamma\)h-level ≈ 29°, \(\gamma\)h-1 cm ≈ 27°, \(\gamma\)h-2.5 cm ≈ 21°, \(\gamma\)h-5 cm ≈ 20°; p values < 0.0001 for 2.5 cm and 5 cm; around TO: \(\gamma\)h-level ≈ 18°, \(\gamma\)h-1 cm ≈ 22°, \(\gamma\)h-2.5 cm ≈ 14°, \(\gamma\)h-5 cm ≈ 10°; p values < 0.0001 for 2.5 cm and 5 cm). After TO, quails facing 2.5 cm and 5 cm steps increased abduction, approaching values observed during level locomotion. However, for 5 cm steps, quails maintained a persistent hip adduction in the late swing. Stepping up, leading limb (Fig. 4B,D,F and Tables S5 and S6) Flexion–extension patterns on the elevated step are similar in shape to those observed for level locomotion (Fig. 4B).
However, the quail stepped with a more flexed hip after negotiating 1 cm and 5 cm steps (around TD: \(\beta\)h-level ≈ 42°, \(\beta\)h-1 cm ≈ 37°, \(\beta\)h-2.5 cm ≈ 42°, \(\beta\)h-5 cm ≈ 38°; p values < 0.0001 for 1 cm and 5 cm). After TD, the quail exhibited comparatively larger hip extension than during level locomotion (at late stance, around 85%: \(\beta\)h-level ≈ 56°, \(\beta\)h-1 cm ≈ 62°, \(\beta\)h-2.5 cm ≈ 68°, \(\beta\)h-5 cm ≈ 66°; p values < 0.0001 for 1 cm, 2.5 cm and 5 cm). In contrast, the quail reduced both mediolateral rotations (Fig. 4D) and ab-adduction (Fig. 4F) during the swing phase before stepping on the elevated substrate. At TD on the elevated substrate, the leading whole leg was significantly more vertically oriented and less abducted compared to level locomotion (around TD: \(\alpha\)h-level ≈ − 15°, \(\alpha\)h-1 cm ≈ − 7°, \(\alpha\)h-2.5 cm ≈ − 8°, \(\alpha\)h-5 cm ≈ − 9°, for all cases p < 0.0001; \(\gamma\)h-level ≈ 37°, \(\gamma\)h-1 cm ≈ 25°, \(\gamma\)h-2.5 cm ≈ 30°, \(\gamma\)h-5 cm ≈ 25°, all p values < 0.0001). After the early stance phase, mediolateral motion differences between step and level locomotion lessened (around 85% stance: \(\alpha\)h-level ≈ 8°, \(\alpha\)h-1 cm ≈ 5°, \(\alpha\)h-2.5 cm ≈ 5°, \(\alpha\)h-5 cm ≈ 3°; p < 0.0001 for 5 cm). For 1 cm steps, the abduction of the whole leg stayed around γ = 20° (Fig. 4F). Stepping down, trailing limb (Fig. 4G,I,K, and Tables S5 and S6) Quails facing 1 cm visible drops displayed larger hip extension after midstance (Fig. 4G). This can be explained by the tendency of the subjects to switch to aerial running when negotiating this step height. 2.5 cm drops did not induce major changes in the flexion–extension patterns of the hip.
When negotiating 5 cm drops, the hip joint was significantly more flexed than during level locomotion (around 15% stance: \(\beta\)h-level ≈ 41°, \(\beta\)h-5 cm ≈ 36°, p < 0.0001; around TO: \(\beta\)h-level ≈ 57°, \(\beta\)h-5 cm ≈ 52°, p < 0.001). The response of the mediolateral hip control for 1 cm and 2.5 cm drops was similar to that observed during stepping upwards (Fig. 4I). For 5 cm drops, the leg was more medially oriented at TD than observed during level locomotion (around 15% stance: \(\alpha\)h-level ≈ − 6°, \(\alpha\)h-5 cm ≈ − 11°, p < 0.0001), and the straightening of the leg during stance was more gradual. The abduction of the leg (Fig. 4K) increased with drop height (p < 0.001). When quails faced 1 cm steps, abduction of the whole leg was reduced with respect to level locomotion. When they negotiated 2.5 cm steps, abduction was on average similar to the patterns exhibited during level locomotion, while for 5 cm drops, the whole leg was kept more abducted until late stance (around 15% stance: \(\gamma\)h-level ≈ 29°, \(\gamma\)h-1 cm ≈ 16°, \(\gamma\)h-2.5 cm ≈ 28°, \(\gamma\)h-5 cm ≈ 34°; p values < 0.0001 for 1 cm and 5 cm; around TO: \(\gamma\)h-level ≈ 18°, \(\gamma\)h-1 cm ≈ 14°, \(\gamma\)h-2.5 cm ≈ 17°, \(\gamma\)h-5 cm ≈ 16°; p < 0.0001 for 1 cm, and p = 0.02 for 5 cm). Stepping down, leading limb (Fig. 4H,J,L, and Tables S5 and S6) Quails started the swing phase using a more extended hip to approach 1 cm drops, and a more flexed one for 2.5 cm and 5 cm drops (see Fig. 4H). At TD on the lowered substrate, the hip was more extended for 1 cm, 2.5 cm and 5 cm drops (around TD: \(\beta\)h-level ≈ 42°, \(\beta\)h-1 cm ≈ 44°, \(\beta\)h-2.5 cm ≈ 52°, \(\beta\)h-5 cm ≈ 47°; p values < 0.0001 for 2.5 cm and 5 cm). Whole leg medial rotations (femoral outer rotations, Fig. 4J) were constrained when negotiating 2.5 cm and 5 cm drops (around TD: \(\alpha\)h-level ≈ − 15°, \(\alpha\)h-2.5 cm ≈ 3°, \(\alpha\)h-5 cm ≈ − 1°, in both cases p < 0.0001).
This permitted the quail to step onto the lowered substrate with an almost vertically oriented whole leg. Hip adduction was also reduced during the swing phase (Fig. 4L). After a 1 cm drop, the quail kept the hip more adducted during stance (around TD: \(\gamma\)h-level ≈ 37°, \(\gamma\)h-1 cm ≈ 23°, p < 0.0001), but shortly before TO, the hip joint was abducted. After 2.5 cm and 5 cm drops, hip adduction followed the patterns observed for level locomotion (around TD: \(\gamma\)h-2.5 cm ≈ \(\gamma\)h-5 cm ≈ 34°, p > 0.05). Because in Aves the pelvis and the trunk are fused39, the three-dimensional kinematics of the pelvis inform about the spatial motion of the trunk as well. Pelvic pitch (\(\beta\)p) oscillation frequency was twice the step frequency across all locomotion conditions. Compared to level locomotion, pelvic retroversion increased significantly (p < 0.0001) when the quail negotiated step-up conditions: the pelvis was retroverted about 10° during level and up to 28° during step-up locomotion (Fig. 5A). For visible drops, the picture was less clear and inconsistent across different drop heights (Fig. 5D). Relative to the values obtained for level locomotion, when quails faced 1 cm drops, they increased and then decreased pelvic retroversion after the TD on the lowered substrate. When they faced 2.5 cm drops, they significantly increased pelvic retroversion (mean values oscillated about \(\beta\)p-2.5 cm ≈ − 20°, with p < 0.001 at all three measured timepoints), and when quails negotiated 5 cm drops, they significantly decreased pelvic retroversion relative to level locomotion (mean values were: at 15% stride, \(\beta\)p-5 cm ≈ − 6°, p < 0.05; at TD of the leading limb, \(\beta\)p-5 cm ≈ − 9°, p < 0.01; at TO of the trailing limb, \(\beta\)p-5 cm ≈ − 7°, p < 0.0001). For more information see Table S7. Pelvic three-dimensional rotations during level (black) and step locomotion in the quail. Curves display mean values.
Left: step-up locomotion, right: step-down locomotion. For better understanding, we transformed the data to ensure that the trailing limb is always the left leg and the leading leg the right one (see methods). (A,D) pelvic pitch (βp), negative values indicate retroversion (the trunk is more vertically oriented). (B,E) pelvic roll (αp), positive values indicate that the trunk tilts towards the right. (C,F) pelvic yaw (γp), positive values indicate that the body is directed towards the left. Black, blue, red, green dashed lines indicate toe-off of the contralateral leg (TO), while solid lines indicate touch down (TD). Dot-dashed lines indicate when the leg crossed the level line during drops. Cyan solid lines indicate 15% and 85% of the stride. TL trailing limb, LL leading limb. Due to the constrained field of view in the X-ray fluoroscope, hip data was often missing at the beginning and at the end of the stride cycles and average values might be less reliable. Lateral tilt (\(\alpha\)p, roll) was cyclic and was counteracted by the leg in contact with the substrate (Fig. 5B,E). Step-related differences were found for 2.5 cm and 5 cm after mid-stride (p < 0.0001) for both steps upwards and downwards. Relative to level locomotion, significant differences existed only for 5 cm upwards around the double support phase (p < 0.01, see Fig. 5B and Table S8). Pelvic yaw amplitudes (\(\gamma\)p) were small. While during level locomotion \(\gamma\)p oscillated around zero, when negotiating the highest step-up condition the quail rotated the pelvis towards the contralateral leg during the trailing support phase. This change was significant (p < 0.001, see Fig. 5C; Table S9). For drops of about 25% and 50% of leg length, the quail rotated the pelvis about \(\gamma\)p ≈ 8° towards the direction of the leg in contact with the ground. Compared to level locomotion, those changes were significant (p < 0.001).
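Pelvic pitch, roll, and yaw are Cardan angles extracted from the rotation of the pelvis frame relative to the global frame. The decomposition depends on the rotation sequence chosen, which this excerpt does not specify; the sketch below assumes an intrinsic z-y-x (yaw-pitch-roll) sequence purely for illustration, with hypothetical angle values:

```python
import math

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix for intrinsic z-y-x Cardan angles (radians)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
        [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
        [-sp,   cp*sr,            cp*cr],
    ]

def cardan_zyx(R):
    """Recover (yaw, pitch, roll) in radians from a z-y-x rotation matrix
    (valid away from the pitch = +/-90 deg gimbal singularity)."""
    pitch = -math.asin(R[2][0])
    yaw = math.atan2(R[1][0], R[0][0])
    roll = math.atan2(R[2][1], R[2][2])
    return yaw, pitch, roll

# hypothetical pelvic pose: 8 deg yaw, -10 deg pitch (retroversion), 3 deg roll
angles = [math.radians(a) for a in (8.0, -10.0, 3.0)]
recovered = cardan_zyx(rot_zyx(*angles))
```

Libraries such as SciPy's `Rotation.as_euler` implement the same extraction for arbitrary sequences; the key point is that reported angle values are only comparable between studies when the sequence and sign conventions match.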
To facilitate negotiating larger visible drops, the pelvis (and the trunk) was rotated towards the trailing limb (yaw) and tilted (roll) towards the leading leg. After TD on the lowered substrate, the pelvis (trunk) was reoriented in the direction of motion. To understand the control strategies implemented by any system, it is necessary to characterize how the system responds to external perturbations. In the present work, we analyzed the kinematic strategies employed by the common quail to negotiate visible step-up and step-down conditions of about 10%, 25%, and 50% of the average value of their effective leg length during stance. Our main goal was to uncover leg kinematic changes at different levels of abstraction and how they relate to each other. The highest level of abstraction in our work is found in the effective leg (Fig. 1F). The kinematic analysis of the effective leg characterizes global control goals such as leg length, angle of attack at TD, aperture angle, and retraction speed. Note that the effective leg has two main functions if the dynamics are taken into consideration: (a) the axial leg function, which is a time-dependent force function (e.g., spring-damper), and (b) the tangential or rotational leg function, which is a time-dependent torque that controls the leg and balances the trunk (e.g., virtual pivot point (VPP) control22,40). Two- and three-dimensional joint kinematics (Fig. 1E,D) are representations with a lower level of abstraction. Because different combinations of joint kinematics can lead to the same effective leg lengths, we expected that their combined analysis would help to infer quail motor control goals on uneven terrains. Thus, we compared the (a) effective leg kinematics, (b) joint kinematics and (c) whole leg (represents hip 3D kinematics, see Fig.
4) and pelvic kinematics for the quail negotiating step-up and step-down conditions with our previously collected data on quail level-ground running22, which is freely available at https://datadryad.org/stash/dataset/doi:10.5061/dryad.jh5h4. Our results display a complex picture of kinematic strategies before and after TD. In the next sections, we analyze that complex picture by linking our results with the existing knowledge about the interactions between kinematics, dynamics, and muscle activation during level/uneven locomotion. This combined analysis is used to unravel anticipatory and reactive strategies for the negotiation of uneven terrain, and to discuss whether those strategies may be governed by simple control goals. Trailing limb (stride i-1) The trailing effective leg was significantly longer at TD for the step-up conditions than observed during level grounded running. Moreover, the effective leg length increased significantly with step height. The angle of attack at TD was steeper as step height increased. The differences in effective leg length between level locomotion and step locomotion at TD might be explained by the fact that data for level and step locomotion belonged to different quail cohorts. Animals were of similar age, but the quail facing steps were heavier. However, the longer effective leg length at TD and steeper angle of attack at TD might also indicate a "pre-programmed" control strategy at the global level to negotiate upward steps, perhaps producing a shift in the operating locomotion program towards "mixed gaits"24, a periodic change between walking and grounded running steps that might permit birds to adjust their leg to vault towards the elevated substrate14. A more extended leg at TD would also agree with observations in running humans, who adapt their center of mass (CoM) height by about 50% of step height in anticipation of stepping onto a visible step41,42.
Note that because of neuromuscular delays, vertebrates preset muscle force before TD using posture-dependent control3,14,15,17,43. During stance, the quail also fine-tuned leg length and leg retraction of the trailing effective leg according to step height (see Fig. 2A,C). This adjustment indicates that visual perception of the upcoming obstacle induced anticipatory changes in leg loading during stance. One can hypothesize that the goal of this sensory-driven adaptation was to adjust the trajectory of the CoM to reduce the necessity of compensation in the following step. How was the effective trailing leg length adjusted at the joint level in the step before the vertical shift? Our results suggest that the quail used two distinct strategies, depending on the height of the step. For step heights up to 25% of effective leg length, the extension of the hip joint lengthened the leg, while knee and intertarsal joints displayed patterns similar to those observed during level locomotion. For the 5 cm step height (about 50% of effective leg length), both knee and intertarsal joints were extended, while the hip joint extended even more. Note that during quail level locomotion, the spring-like leg behavior is mostly produced in the INT, while the active flexion of the knee joint controls leg retraction3,44. However, to negotiate 5 cm steps, the extension of both knee and INT turned the crouched quail leg into a more vertical one. In this leg configuration, the retraction of the leg is produced by hip extension. Thus, to vault the CoM onto the obstacle, the avian leg was controlled similarly to the legs of humans and animals that have a stiffer and more extended leg design12,21,41,43,45,46,47. The "zig-zag" configuration of the femur, the tibiotarsus, and the tarsometatarsus is thus abandoned to negotiate larger steps (see the trailing limb configuration superimposed on the X-ray picture in Fig. 3).
The enclosed joints are spanned by mono- and bi-articular muscles, with the latter enforcing a parallel mechanism, the so-called pantograph leg48,49. Gordon et al.9 reported significantly larger activation of the muscles M. flexor cruris lateralis pelvica (FCLP, hip extensor, knee flexor, possible hip abductor), M. gastrocnemius pars lateralis (GL, ankle extensor, knee flexor), M. gastrocnemius pars medialis (GM, ankle extensor, knee flexor/extensor), M. flexor perforatus digiti III (FPPD3, ankle extensor, digital flexor), and M. femorotibialis lateralis (FTL, mono-articular knee extensor) in the step prior to a step-up condition. These activation profiles are consistent with the control of the extension of the hip, knee, and INT joints in the quail. In addition, the larger activation of the FCLP also correlates with the reduced hip adduction in the quail when negotiating 5 cm steps upwards. At the neuronal level, this shift in leg behavior might be induced by changed muscle synergies via higher locomotor center signals based on visual perception. Leading leg towards and on the elevated substrate (stride i) When the leading limb was swung towards the elevated substrate, the quail controlled the aperture angle between the legs as described for level locomotion4. In the late swing, the aperture angle was kept constant at ϕ ≈ 53° regardless of step height. Thus, the late swing retraction and the angle of attack of the leading leg were mainly controlled by the retraction of the trailing leg, as hypothesized. When the leading leg stepped on the elevated substrate, the effective leg length and the angle of attack were similar to those observed in level locomotion. After TD, the effective leading leg kinematics did not markedly differ from those observed during level locomotion. Adaptations of the trailing limb thus permitted the leading limb to touch down on the step in a manner similar to level locomotion.
This strategy might help to rapidly dissipate the changes in state variables produced by the vertical step. Empirical evidence has shown that running animals recover steady-state behavior two to three steps after an unexpected perturbation15,50,51. Our observations from live videos suggest that the quail recovered from a visible step upwards or drop mostly in the second step after it, similarly as described previously for other birds9,14,15. Despite the significant extension of the trailing leg, the leading leg touched down with joints more flexed than during level locomotion. After TD, the hip was rapidly extended beyond the values observed during level locomotion, and the behavior of the INT shifted from a spring-like mode to an energy supplier (joint extended beyond its angle at TD) as step height increased. Note that at TD, the knee was not used to extend the leg, possibly because larger extensor torques about this joint would increase the horizontal GRF, braking the retraction of the leg. Even so, the flexion of the knee was controlled during stance when negotiating the largest step heights, so that the knee-joint angle returned slowly to the value exhibited during unrestricted locomotion. The increased extensor activity of the FTL muscle, observed after the guinea fowl stepped on an elevated substrate9, might be consistent with our observations. In summary, the trailing leg extension might have reduced the necessity of reactive control. Whether changes in leading leg loading are necessary to compensate for the more flexed joints at TD must be investigated in further studies. When the quail negotiated drops of about 10% of effective leg length, they used aerial phases to rapidly overcome the challenge. To introduce aerial phases, the operation of the trailing leg was shifted towards spring-like behavior (more marked rebound, see Fig. 2F).
At the effective leg level, this change can be produced by reducing effective leg damping and/or inducing an axial extension of the effective leg in the late stance. In both cases, the pronograde virtual pivot point model [PVPP22] predicts that the axial energy of the system increases. This makes aerial phases more likely to occur. But how are those changes produced at the joint level? As observed before3 and for step-up conditions, hip extension seems to control effective leg extension if legs are kept crouched (cp. Figs. 2A, 4A). Knee and INT joint kinematics did not display sudden changes compared to level locomotion (Fig. 3G,I, green lines). This seems to indicate, following3, that the retraction angle was not adapted to negotiate the lowest drop height. Indeed, the trajectories for the retraction angle did not deviate from those observed during level locomotion (see Fig. 2H). As explained before, the INT seems to control the spring-like behavior of the avian leg3. Taking this into account, the shape of the curve in Fig. 3G might indicate that the spring-like behavior was conserved in the INT (note the rebound behavior compared to level locomotion). However, this hypothesis and the relationship between stiffness changes in the INT and their influence on the effective leg stiffness need further analysis. When compared with the patterns obtained during level locomotion, the TMP joint displayed a change to a more spring-like function (see Fig. 3K). Because this joint was previously related to the damping behavior of the leg during level locomotion3, we can speculate, based on the PVPP model, that the combined action of the hip and the TMP joints might control gait changes between grounded and aerial running, as they regulate, respectively, the effective leg length and the damping ratio during stance. To cope with drops of about 25% to 50% of leg length, the quail approached the step more slowly than for the 1 cm drop and relied on double support.
Animals' strategies to negotiate drops of 25% and 50% leg length differed. When negotiating visible drops of 25% leg length, the quail displayed rather subtle changes in the trailing leg, even though its effective length was longer than during level locomotion. This observation is supported by the slightly more extended hip and knee joints during stance, and a stiffer INT joint (less flexion–extension than during level locomotion for assumed similar ground reaction forces), which might also have induced a vaulting descending motion of the CoM towards the lowered substrate. To cope with visible drops of 50% leg length, the trailing leg displayed a more crouched configuration and was less retracted than during level locomotion (Fig. 2H). The shorter effective leg was produced by significantly more flexed hip, INT, and knee joints. Leg retraction displayed a trade-off between flexion of the hip, which protracted the leg, and of the knee, which in turn induced the opposite motion. Thus, the quail used a large hip extension to extend the effective leg during stance but did not use a larger hip flexion to shorten it. This can be explained by the fact that hip extensor torque must be sufficient to stabilize a pronograde trunk and the overall locomotion22,40,47. At TD and during later stance, the trailing whole leg was nearly vertically oriented for 25% and 50% visible drops. Such a leg orientation may help to prevent a collapse of the leg. For the largest drop, the hip was significantly more abducted (see Fig. 4K). The described leg placement permitted the pelvis to be rotated towards the trailing leg (yaw motion) and tilted towards the leading leg (roll motion) while descending towards the lowered substrate (Fig. 4E,F).

Leading limb (stride i)

The leading effective leg touched down significantly later when stepping down, if compared to the same event during level locomotion. The angle of attack (α0) was steeper but did not vary with drop height.
At the same time, the retraction of the trailing limb in the late stance was related to step height. This indicates that leading leg retraction was decoupled from the trailing leg after crossing to the ground level, as observed in the change of the aperture angle (ϕ0) (see Fig. 2J). This result suggests that the angle of attack, and not the aperture angle, is a target control parameter for leg placement when negotiating visible drops. During 1 cm drops, the effective leg lengthening during swing is explained by hip extension, but especially by the significant extension of the TMP joint before TD. This shaped the subsequent behavior of the leg during stance. We think that the more extended TMP joint at TD shifted spring-like behavior from the INT to the TMP joint (cp. Fig. 3J,L, green lines after TD). Note that this behavior is similar to the "KEh-mode" observed during an unexpected drop, in which the drop energy is converted to horizontal kinetic energy, see44. Gordon and colleagues showed that the guinea fowl displayed significantly higher activation of the M. flexor perforatus digiti III before and after their leg touched down in a sunken substrate8. We speculate that, by preloading the tendons spanning the TMP joint during swing, the quail changed the viscoelastic properties of the joint (i.e., they shifted from a more damped joint behavior dominated by muscle properties to a more spring-like behavior dominated by elastic tissues, as observed in running humans52 and turkeys53). The consequences of this change for joint control goals like the minimization of joint work54 need further investigation. As was observed for drops of 10% leg length, the quail used a more extended leading leg (stride i) to negotiate drops of 25% leg length compared to level locomotion or 5 cm drops (see Table S1). However, the source of the leading leg lengthening was different from that described for drops of 10% leg length.
The quail extended the INT joint instead of the TMP joint during swing (see Fig. 3J,L, red curves). This simple change effected a dampened leg response after the drop. Focusing on the joint level, the TMP joint abandoned the spring-like behavior during stance displayed during 10% drops and exhibited the dampened pattern described for level locomotion3. It seems that the extension of the INT joint during swing permits muscular work to control leg compression and thus the energy dissipation after a visible drop. EMG data from the guinea fowl negotiating slow drops showed that the M. gastrocnemius pars lateralis was recruited earlier than the M. flexor perforatus digiti III. This shift in the activation vanished for faster drops and level locomotion9. Perhaps the onset of the activation of these muscles is used by birds to shape the viscoelastic response of the leg. To negotiate 50% leg length drops, the aperture angle between the effective legs was similar to that for 25% leg length drops until the level line. However, after the leg crossed the level height, it was extended until TD. Note that the slope of the mean leg angle before TD was quite flat until the level line (Fig. 2J, blue line). Consequently, the retraction speed of the leading leg was only slightly adapted when level TD was lost (Fig. 2J, blue dotted line). At TD, the leading effective leg was shorter than in other drop conditions. Distal joint angles during 50% leg length drops were not significantly different from those exhibited during 2.5 cm drops. During this rather cautious drop-negotiating technique, leg shortening seems to be performed by a more flexed hip joint at TD. During stance, the INT displayed a more bounce-like behavior. With increased drop height, the whole leg was more vertically oriented in the frontal plane and less abducted in the lowered substrate compared to unrestricted locomotion.
This leg placement strategy prevented leg collapse and might have permitted the reorientation of the pelvis, and thus the trunk, in the direction of motion.

To negotiate visible steps, the quail reconfigured leg and joint kinematics related to step type (upwards vs. downwards) and height via different anticipatory strategies during swing and/or reactive control after TD. However, dramatic changes were observed only in the trailing limb for step heights of 50% of leg length. Leg and joint adaptations permitted the quail to regain steady-state locomotion already after one or two steps. When coping with steps upwards, the quail adapted the trailing limb to permit the leading leg to step on the elevated substrate in the same way as it does during level locomotion. This strategy may have reduced the need for a reactive (feedback) response to readapt posture during the leading leg's stance. The quail kept the kinematic patterns of the distal joints to a large extent unchanged during uneven locomotion, and most changes were accomplished in the proximal joints. Up to intermediate step heights, hip extension was mainly used to lengthen the leg, or in combination with a more spring-like TMP joint to change to aerial running. However, to negotiate the largest visible step upwards and drop heights, all joints contributed to leg lengthening/shortening in the trailing leg, and both the trailing and leading legs stepped more vertically and less abducted. This indicates a sudden change in the leg motor-control program. Further analysis is certainly necessary to understand muscle synergies and the overall neuromechanics underlying changes between "dynamic" (spinally controlled) and "safer" (slower, goal-directed motions, perhaps controlled from higher centers) gait programs.

Nine adult common quails [Phasianidae: Coturnix coturnix (Linnaeus 1758)] with a mean body mass of 315 ± 30 g were used for our analysis (see Table 2).
The birds were housed at the Institute of Zoology and Evolutionary Research in Jena with access to food and water ad libitum. Housing, care, and all experimental procedures were approved by the Committee for Animal Research of the State of Thuringia (registry number 02-054/14). Animal keeping and experiments were performed in strict accordance with the approved guidelines. Table 2 Animals and strides. For information about the level locomotion experiments, please refer to3. In the step-up/step-down experiments, the quails moved across a 3 m long walking track at their preferred speeds. In the middle of the walking track, the birds negotiated visible drop/step-up conditions of 1.0 cm, 2.5 cm, and 5 cm. Those challenges were created by elevating the first (for drops) or the last (for step-up) half of the walking track. The track was covered with fine sheet rubber to reduce slipping. Body and limb kinematics were collected using a biplanar X-ray fluoroscope (Neurostar, Siemens, Erlangen, Germany) at the facility of the Institute of Zoology and Evolutionary Research, Germany. X-ray sources were set to obtain recordings from the laterolateral and ventrodorsal projections. In addition, two synchronized standard light high-speed cameras (SpeedCam Visario g2, Weinberger, Erlangen, Germany) were used to cover both frontal and lateral perspectives of the track. The X-ray machine was set to 40 kV and 53 mA, with a sampling frequency of 500 Hz. Raw video data were first undistorted using a modified version of the freely available MATLAB (The MathWorks, Natick, MA, USA) routine batchUndistort (www.xromm.org) provided by Brown University (Providence, RI, USA). As a basis for the Automatic Anatomical Landmark Localization using Deep Features (see below), manual digitization of the joints and other landmarks [following3] was performed using SimiMotion software (SimiMotion Systems, Unterschleißheim, Germany) on no more than ten randomly distributed frames per trial.
Note that 5–10 annotated frame pairs are sufficient to train a model with the same annotation performance as manual landmark labeling55,56.

Automatic anatomical landmark localization in multi-view sequences using deep features

In the following, the automatic multi-view landmark localization technique for the locomotion sequences is described, which was originally published in57. Our method utilizes multi-view deep concatenated feature representations of annotated input images to train individual linear regressors for each view-based correspondent landmark pair. Based on a small number of annotated correspondent images of a multi-view sequence, the individually trained regressors locate all landmarks of the entire sequence in each view. The whole pipeline of the method is visualized in Fig. 6. Afterwards, the automatically localized 2D landmarks of the dorsoventral and lateral views are utilized to reconstruct 3D landmark coordinates. To train an individual multi-view landmark regressor \({h}_{n}\), initially, the deep features \({x}_{1},\dots ,{x}_{M}\), with \({x}_{i}=({x}_{i}^{l},{x}_{i}^{d})\), are extracted from the \(M\) annotated image pairs. Afterwards, the concatenated features of correspondent image pairs serve as input for the regressor training. The landmark positions \({y}_{n}^{*}\) of unseen image pairs of \(S\) are predicted by the resulting trained model \({h}_{n}\). This procedure is repeated for each of the \(N\) landmark pairs individually. The utilized deep features are learned representations of images extracted from a Convolutional Neural Network (CNN)58; CNNs are mainly used for supervised computer vision tasks, like image classification, object recognition, or object tracking. During the training process, the CNN learns in each of its convolutional layers several sets of individual convolutional filters based on the input images, and thereby provides powerful feature representations of the utilized image domain.
The training of CNN models usually needs a lot of data, which is not available in our application. Hence, we chose a model of the AlexNet architecture59 pre-trained on a similar task exploiting the same data domain as our application. This pre-trained model was trained for pose classification with the very same data of multi-view bipedal locomotion sequences to distinguish 10 quantized poses in each view. The semi-automatic annotation of the poses is described in57. After training the CNN on the auxiliary task of pose classification, the CNN's layer activations during inference can be exploited as deep features. In the following, we describe the regressor training process for a single two-view locomotion sequence \(S\) utilizing the deep features. The multi-view locomotion sequence \(S\) contains \(L\) correspondent image pairs from the dorsoventral and lateral views, (\({I}_{1}^{d},\dots , {I}_{L}^{d}\)) and (\({I}_{1}^{l},\dots , {I}_{L}^{l}\)). From each image pair \({I}_{i}^{d}\) and \({I}_{i}^{l}\), the deep features \({x}_{i}=({x}_{i}^{l},{x}_{i}^{d})\) are extracted from the fifth convolutional layer Conv-5 of the pre-trained CNN and concatenated. Additionally, in \(M=10\) equidistantly sampled frame pairs of both views, the correspondent \(N=22\) landmark position pairs \({y}_{1},\dots , {y}_{N}\) with \({y}_{n}=(({l}_{n,1}^{d}, {l}_{n,1}^{l}),\dots ,({l}_{n,M}^{d},{l}_{n,M}^{l}))\) are annotated, which are used for the single-regressor training. By utilizing each annotated corresponding landmark pair \({y}_{n}\), individual linear regressors \({h}_{n}\) are trained, which automatically locate the correspondent landmarks in the remaining \(L-M\) images of both views. As the linear model \({h}_{n}\), we train \(N\) single \(\epsilon\)-SV regressors60.
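The per-landmark training step can be sketched with scikit-learn's linear-kernel SVR (a minimal sketch under assumed shapes: random arrays stand in for the concatenated deep features and the annotated landmark coordinates, and one regressor is fit per landmark coordinate).

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical shapes: M annotated frame pairs, each with a D-dimensional
# concatenated deep-feature vector; N_LANDMARKS landmarks with (u, v) per view pair.
M, D, N_LANDMARKS = 10, 32, 4
X = rng.normal(size=(M, D))                 # concatenated Conv-5 features per frame pair
Y = rng.normal(size=(M, N_LANDMARKS * 2))   # annotated landmark coordinates per frame pair

# One linear epsilon-SV regressor per landmark coordinate, mirroring the
# "N single regressors" setup described in the text (hyperparameters illustrative).
regressors = [SVR(kernel="linear", C=100.0, epsilon=0.1).fit(X, Y[:, j])
              for j in range(Y.shape[1])]

# Predict landmark coordinates for the remaining (unannotated) frame pairs.
X_new = rng.normal(size=(5, D))
pred = np.column_stack([h.predict(X_new) for h in regressors])
print(pred.shape)  # (5, 8): 5 new frame pairs x (4 landmarks * 2 coordinates)
```

Fitting many small independent regressors keeps each optimization problem tiny, which matches the low-annotation regime (here, only ten labeled frame pairs per sequence).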
Each linear regression model \({h}_{n}\) uses the given training data \(\left({x}_{1}, {y}_{1}\right), \dots , \left({x}_{M},{y}_{M}\right) \subset X \times \mathbb{R}\), where \({x}_{i}\) denotes the deep features with \(X = \mathbb{R}^{D}\) and \({y}_{i}\) the position of the landmark in the \(i\)th of the \(M\) annotated frames. Hence, for each landmark position pair of both views, a single regressor \({h}_{n}\) is trained. The goal of this regression task is to find a hyperplane \(f\left(x\right)=\langle \omega ,x\rangle +b\) with a maximum deviation of \(\epsilon\) from the target values \({y}_{i}\) for all training data. Given the fact that the vector \(\omega\) is perpendicular to the hyperplane \(f\left(x\right)\), we only need to minimize the norm of \(\omega\), i.e., \({\| \omega \| }^{2}= \langle \omega ,\omega \rangle\). When working with real data, in most cases it is impossible to find a feasible solution to this convex optimization problem because of potential outliers. With the addition of the slack variables \({\xi }_{i}\) and \({\xi }_{i}^{*}\), such infeasible conditions can be handled. We formulate the problem as in51:
$$\underset{\omega ,b,\xi }{\mathrm{min}}\ {\frac{1}{2}\| \omega \| }^{2}+C{\sum }_{i=1}^{M}\left({\xi }_{i}+{\xi }_{i}^{*}\right)$$
$$\mathrm{s.t.}\ \left\{\begin{array}{l}{y}_{i}-\langle \omega ,{x}_{i}\rangle -b \le \epsilon + {\xi }_{i}\\ \langle \omega ,{x}_{i}\rangle +b- {y}_{i} \le \epsilon + {\xi }_{i}^{*}\\ {\xi }_{i},{\xi }_{i}^{*} \ge 0,\end{array}\right.$$
where \(C>0\) is a constant that weights the tolerance of deviations greater than \(\epsilon\).

Multi-view 3D reconstruction

The dorsoventral and lateral two-dimensional position data can be exploited to reconstruct these corresponding landmark points as three-dimensional points in a metric space. For this, a three-dimensional calibration cube was used to perform the 3D reconstruction process. The cube is semi-transparent and contains equidistant X-ray opaque metal spheres.
By annotating at least seven individual corresponding spheres in both views, a relationship between the annotated 2D pixel positions \((\left({u}_{i}^{d} ,{v}_{i}^{d}\right),({u}_{i}^{l} ,{v}_{i}^{l}))\) and the 3D real-world positions \(\left({X}_{i},{Y}_{i},{Z}_{i}\right)\) of the spheres can be established. For more details on how the projection matrix \(P\) is estimated, we refer to61.

Angle calculation

Joint angles were computed as explained in3, while model-related leg kinematics followed22,62. Three-dimensional kinematics (see Fig. 1D): the pelvic local coordinate system was located at the centroid of the triangle composed of both hip joints and the pelvis cranial marker (\({p}_{c}\)). It measures the absolute motion of the pelvis relative to the global coordinate system. It was defined by specifying first \({\overrightarrow{e}}_{{x-int}_{pel}}\) as an interim vector pointing from the right hip joint (\({h}_{r}\)) to the pelvis cranial marker, \({\overrightarrow{e}}_{{x-int}_{pel}}= {p}_{c}-{h}_{r}\); then \({\overrightarrow{e}}_{{y}_{pel}}\) as a vector pointing from \({h}_{r}\) to the left hip joint (\({h}_{l}\)), \({\overrightarrow{e}}_{{y}_{pel}}={h}_{l}-{h}_{r}\); and \({\overrightarrow{e}}_{{z}_{pel}}\) and \({\overrightarrow{e}}_{{x}_{pel}}\) via cross products as \({\overrightarrow{e}}_{{z}_{pel}}= {\overrightarrow{e}}_{{x-int}_{pel}}\times {\overrightarrow{e}}_{{y}_{pel}}\) and \({\overrightarrow{e}}_{{x}_{pel}}= {\overrightarrow{e}}_{{y}_{pel}}\times {\overrightarrow{e}}_{{z}_{pel}}\). The whole-leg coordinate system measures the rotation of the whole leg relative to the pelvis (it estimates the three-dimensional rotations occurring at the hip joint). It was constructed as follows: \({\overrightarrow{e}}_{{z}_{leg\_i}}\) extends from the knee joint (\({k}_{i}\)) to the hip joint \({h}_{i}\) (right leg: i = r; left leg: i = l), e.g., \({\overrightarrow{e}}_{{z}_{leg\_i}}= {h}_{i}-{k}_{i}\).
Then \({\overrightarrow{e}}_{{x-int}_{leg\_i}}\) is an interim vector directed from the TMP-distal marker (\({tmp}_{dist\_i}\)) to \({k}_{i}\), e.g., \({\overrightarrow{e}}_{{x-int}_{leg\_i}}= {k}_{i}-{tmp}_{dist\_i}\). \({\overrightarrow{e}}_{{y}_{leg\_i}}\) was then obtained as \({\overrightarrow{e}}_{{y}_{leg\_i}}={\overrightarrow{e}}_{{z}_{leg\_i}}\times {\overrightarrow{e}}_{{x-int}_{leg\_i}}\); \({\overrightarrow{e}}_{{y}_{leg\_i}}\) is hence perpendicular to the plane defined by the hip joint, the knee joint, and the TMP-distal marker and points to the left (towards medial for the right leg and lateral for the left leg). Finally, \({\overrightarrow{e}}_{{x}_{leg\_i}}={\overrightarrow{e}}_{{y}_{leg\_i}}\times {\overrightarrow{e}}_{{z}_{leg\_i}}\). The whole-leg coordinate system was located in the middle of the femur (the segment between hip and knee). To compute three-dimensional angles, we used the Cardan rotation sequence z–x–y. The left leg was used as reference. Thus, positive rotations around the x, y, and z axes represent, respectively, the inner rotation of the femur (whole leg rotates laterally), femoral retraction (hip extension), and femoral abduction. To build the mean using both legs, rotations around the z and x axes for the right leg were multiplied by −1. Because of the constrained field of view, the events defining a stride were selected differently for the leading and the trailing limbs. For the trailing limb, the stride was defined between TD on the level plate and TD on the vertically shifted plate; for the leading limb, between TO on the level plate and TO on the vertically shifted plate. For comparison, strides were afterwards interpolated to 100 points. Kinematics were computed using a custom-written script in MATLAB 2017 (The MathWorks Inc., Natick, MA, USA).
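The coordinate-system construction and the Cardan z–x–y decomposition described above can be sketched in Python. This is a minimal numpy sketch with hypothetical marker positions (the original analysis was done in MATLAB); it builds the pelvis and right-leg frames exactly via the cross-product recipe in the text and then extracts z–x–y Cardan angles from the relative rotation, assuming the convention \(R = R_z R_x R_y\).

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def cardan_zxy(R):
    """Cardan angles for the z-x-y sequence, assuming R = Rz(c) @ Rx(a) @ Ry(b)."""
    a = np.arcsin(np.clip(R[2, 1], -1.0, 1.0))   # rotation about x
    c = np.arctan2(-R[0, 1], R[1, 1])            # rotation about z
    b = np.arctan2(-R[2, 0], R[2, 2])            # rotation about y
    return c, a, b

# Hypothetical 3D marker positions (arbitrary units, illustrative only).
h_r   = np.array([0.00, -0.02, 0.10])   # right hip joint
h_l   = np.array([0.00,  0.02, 0.10])   # left hip joint
p_c   = np.array([0.05,  0.00, 0.11])   # cranial pelvis marker
k_r   = np.array([0.01, -0.03, 0.04])   # right knee joint
tmp_r = np.array([0.03, -0.03, 0.00])   # right TMP-distal marker

# Pelvis frame: interim x towards the cranial marker, y towards the left hip,
# z and x via cross products, as in the text.
x_int = p_c - h_r
e_y = normalize(h_l - h_r)
e_z = normalize(np.cross(x_int, e_y))
e_x = np.cross(e_y, e_z)
R_pelvis = np.column_stack([e_x, e_y, e_z])

# Whole-leg frame: z from knee to hip, interim x from TMP-distal to knee,
# y = z x x_int, x = y x z.
l_z = normalize(h_r - k_r)
l_xint = k_r - tmp_r
l_y = normalize(np.cross(l_z, l_xint))
l_x = np.cross(l_y, l_z)
R_leg = np.column_stack([l_x, l_y, l_z])

# Rotation of the leg relative to the pelvis, then Cardan z-x-y angles in degrees.
R_rel = R_pelvis.T @ R_leg
print(np.degrees(cardan_zxy(R_rel)))
```

Both frames are right-handed and orthonormal by construction, so `R_rel` is a proper rotation matrix and the three extracted angles correspond to the femoral rotation, retraction, and abduction components described in the text.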
The following kinematic variables were defined as dependent variables: global parameters such as the angle of attack (α0), the aperture angle (ϕ0), the leg length (l), the effective leg axial velocity (\(\dot{l}\)), the effective leg angular velocity (\(\dot{\alpha}\)), the aperture angle velocity (\(\dot{\phi}\)), and all joint angles and Cardan angles for the pelvis and hip joint (relative angles between pelvis and leg, see Figs. 1, 2, 3, 4 and 5). For the trailing limb, we analyzed the early stance (15%, because at TD data was absent in most cases) and TO events. For the leading limb, we analyzed the TD and the late stance (85%). In our analysis, we also included the four preceding and the four following points relative to the selected event (event ± 4% of the stride). Step-locomotion comparisons are paired measures (same individuals, comparison between the i−1 and i events), while step vs. level locomotion (grounded running) comparisons are unpaired [level locomotion was collected in a different study3]. For step locomotion, repeated-measures ANOVA was used to assess the influence of step height and direction (up vs. drop) on the dependent variables. To test for significant differences between each step condition and level locomotion, we performed single multivariate ANOVAs (e.g., 2.5 cm step upwards vs. level). Post-hoc tests were afterwards performed to assess the influence of each treatment. Based on the homogeneity of the variances (Levene test), we selected between Tukey HSD and Games–Howell tests. Statistical analysis was implemented in R (version 3.5.3). We used the following libraries: R.matlab, data.table, stats, rstatix, and car. To generate R code, we used the program "master" (freely downloadable at https://starkrats.de). All experiments were approved by and carried out in strict accordance with the German Animal Welfare guidelines and regulations of the state of Thuringia (TLV 02-054/14). We confirm that we complied with the ARRIVE guidelines. Kilbourne, B. M., Andrada, E., Fischer, M. S. & Nyakatura, J.
A. Morphology and motion: Hindlimb proportions and swing phase kinematics in terrestrially locomoting charadriiform birds. J. Exp. Biol. 219, 1405–1416 (2016). Nyakatura, J. A., Andrada, E., Grimm, N., Weise, H. & Fischer, M. S. Kinematics and center of mass mechanics during terrestrial locomotion in Northern Lapwings (Vanellus vanellus, Charadriiformes). J. Exp. Zool. Part A Ecol. Genet. Physiol. 317, 580–594. https://doi.org/10.1002/jez.1750 (2012). Andrada, E., Nyakatura, J. A., Bergmann, F. & Blickhan, R. Adjustments of global and local hindlimb properties during terrestrial locomotion of the common quail (Coturnix coturnix). J. Exp. Biol. 216, 3906–3916 (2013). Andrada, E., Rode, C. & Blickhan, R. Grounded running in quails: Simulations indicate benefits of observed fixed aperture angle between legs before touch-down. J. Theor. Biol. 335, 97–107 (2013). Blickhan, R. et al. Intelligence by mechanics. Philos. Trans. A Math. Phys. Eng. Sci. 365, 199–220. https://doi.org/10.1098/rsta.2006.1911 (2007). Gordon, M. S., Blickhan, R., Dabiri, J. O. & Videler, J. J. Animal Locomotion: Physical Principles and Adaptations (CRC Press, 2017). Dickinson, M. H. et al. How animals move: An integrative view. Science 288, 100–106 (2000). Nishikawa, K. et al. Neuromechanics: An integrative approach for understanding motor control. Integr. Comp. Biol. 47, 16–54. https://doi.org/10.1093/icb/icm024 (2007). Gordon, J. C., Rankin, J. W. & Daley, M. A. How do treadmill speed and terrain visibility influence neuromuscular control of guinea fowl locomotion?. J. Exp. Biol. 218, 3010–3022 (2015). Farley, C. T., Houdijk, H. H. P., Van Strien, C. & Louie, M. Mechanism of leg stiffness adjustment for hopping on surfaces of different stiffnesses. J. Appl. Physiol. 85, 1044–1055 (1998). Ferris, D. P., Liang, K. & Farley, C. T.
Runners adjust leg stiffness for their first step on a new running surface. J. Biomech. 32, 787–794. https://doi.org/10.1016/S0021-9290(99)00078-0 (1999). Muller, R. & Blickhan, R. Running on uneven ground: Leg adjustments to altered ground level. Hum. Mov. Sci. 29, 578–589. https://doi.org/10.1016/j.humov.2010.04.007 (2010). Müller, R., Tschiesche, K. & Blickhan, R. Kinetic and kinematic adjustments during perturbed walking across visible and camouflaged drops in ground level. J. Biomech. 47, 2286–2291 (2014). Birn-Jeffery, A. V. & Daley, M. A. Birds achieve high robustness in uneven terrain through active control of landing conditions. J. Exp. Biol. 215, 2117–2127. https://doi.org/10.1242/jeb.065557 (2012). Birn-Jeffery, A. V. et al. Don't break a leg: Running birds from quail to ostrich prioritise leg safety and economy on uneven terrain. J. Exp. Biol. 217, 3786–3796 (2014). Seyfarth, A., Geyer, H. & Herr, H. Swing-leg retraction: A simple control model for stable running. J. Exp. Biol. 206, 2547–2555 (2003). Daley, M. A., Usherwood, J. R., Felix, G. & Biewener, A. A. Running over rough terrain: Guinea fowl maintain dynamic stability despite a large unexpected change in substrate height. J. Exp. Biol. 209, 171–187. https://doi.org/10.1242/jeb.01986 (2006). Blum, Y. et al. Swing-leg trajectory of running guinea fowl suggests task-level priority of force regulation rather than disturbance rejection. PLoS ONE 9, e100399 (2014). Blum, Y., Birn-Jeffery, A., Daley, M. A. & Seyfarth, A. Does a crouched leg posture enhance running stability and robustness?. J. Theor. Biol. 281, 97–106. https://doi.org/10.1016/j.jtbi.2011.04.029 (2011). Daley, M. A. & Usherwood, J. R. Two explanations for the compliant running paradox: Reduced work of bouncing viscera and increased stability in uneven terrain. Biol. Lett. 6, 418–421. https://doi.org/10.1098/rsbl.2010.0175 (2010). Andrada, E., Blickhan, R., Ogihara, N. & Rode, C.
Low leg compliance permits grounded running at speeds where the inverted pendulum model gets airborne. J. Theoret. Biol. 110227, 25 (2020). Andrada, E., Rode, C., Sutedja, Y., Nyakatura, J. A. & Blickhan, R. Trunk orientation causes asymmetries in leg function in small bird terrestrial locomotion. Proc R. Soc. B Biol. Sci. https://doi.org/10.1098/rspb.2014.1405 (2014). Müller, R. & Andrada, E. Skipping on uneven ground: Trailing leg adjustments simplify control and enhance robustness. R. Soc. Open Sci. 5, 172114 (2018). Andrada, E. et al. Mixed gaits in small avian terrestrial locomotion. Sci. Rep. 5, 13636. https://doi.org/10.1038/srep13636 (2015). Abourachid, A. et al. Bird terrestrial locomotion as revealed by 3D kinematics. Zoology 114, 360–368. https://doi.org/10.1016/j.zool.2011.07.002 (2011). Kambic, R. E., Roberts, T. J. & Gatesy, S. M. Long-axis rotation: A missing degree of freedom in avian bipedal locomotion. J. Exp. Biol. 217, 2770–2782. https://doi.org/10.1242/jeb.101428 (2014). Rubenson, J., Lloyd, D. G., Besier, T. F., Heliams, D. B. & Fournier, P. A. Running in ostriches (Struthio camelus): Three-dimensional joint axes alignment and joint kinematics. J. Exp. Biol. 210, 2548–2562 (2007). Kambic, R. E., Roberts, T. J. & Gatesy, S. M. Guineafowl with a twist: Asymmetric limb control in steady bipedal locomotion. J. Exp. Biol. 218, 3836–3844 (2015). Blickhan, R. The spring-mass model for running and hopping. J. Biomech. 22, 1217–1227. https://doi.org/10.1016/0021-9290(89)90224-8 (1989). Ruina, A., Bertram, J. E. & Srinivasan, M. A collisional model of the energetic cost of support work qualitatively explains leg sequencing in walking and galloping, pseudo-elastic leg behavior in running and the walk-to-run transition. J. Theor. Biol. 237, 170–192 (2005). Srinivasan, M. & Ruina, A. Computer optimization of a minimal biped model discovers walking and running. Nature 439, 72–75 (2006). Full, R. J. & Koditschek, D. E. 
Templates and anchors: Neuromechanical hypotheses of legged locomotion on land. J. Exp. Biol. 202, 3325–3332 (1999). Ogihara, N., Kikuchi, T., Ishiguro, Y., Makishima, H. & Nakatsukasa, M. Planar covariation of limb elevation angles during bipedal walking in the Japanese macaque. J. R. Soc. Interface 9, 2181–2190 (2012). Ogihara, N. et al. Planar covariation of limb elevation angles during bipedal locomotion in common quails (Coturnix coturnix). J. Exp. Biol. 217, 3968–3973 (2014). Ivanenko, Y. P., Cappellini, G., Dominici, N., Poppele, R. E. & Lacquaniti, F. Modular control of limb movements during human locomotion. J. Neurosci. 27, 11149–11161 (2007). Ivanenko, Y. P., d'Avella, A., Poppele, R. E. & Lacquaniti, F. On the origin of planar covariation of elevation angles during human locomotion. J. Neurophysiol. 99, 1890–1898 (2008). Borghese, N., Bianchi, L. & Lacquaniti, F. Kinematic determinants of human locomotion. J. Physiol. 494, 863 (1996). Daley, M. A. In 9th International Symposium on Adaptive Motion of Animals and Machines (AMAM 2019). Beebe, W. The Bird: Its Form and Function Vol. 1 (Henry Holt and Company, 1906). Maus, H. M., Lipfert, S. W., Gross, M., Rummel, J. & Seyfarth, A. Upright human gait did not provide a major mechanical challenge for our ancestors. Nat. Commun. 1, 70. https://doi.org/10.1038/ncomms1073 (2010). Blickhan, R., Ernst, M., Koch, M. & Müller, R. Coping with disturbances. Hum. Mov. Sci. 32, 971–983 (2013). Ernst, M., Götze, M., Müller, R. & Blickhan, R. Vertical adaptation of the center of mass in human running on uneven ground. Hum. Mov. Sci. 38, 293–304 (2014). Müller, R., Ernst, M. & Blickhan, R. Leg adjustments during running across visible and camouflaged incidental changes in ground level. J. Exp. Biol. 215, 3072–3079. https://doi.org/10.1242/jeb.072314 (2012). Daley, M. A., Felix, G. & Biewener, A. A. Running stability is enhanced by a proximo-distal gradient in joint neuromechanical control. J. Exp. Biol. 210, 383–394. 
https://doi.org/10.1242/jeb.02668 (2007). Blickhan, R., Andrada, E., Müller, R., Rode, C. & Ogihara, N. Positioning the hip with respect to the COM: Consequences for leg operation. J. Theor. Biol. 382, 187–197. https://doi.org/10.1016/j.jtbi.2015.06.036 (2015). Gunther, M., Keppler, V., Seyfarth, A. & Blickhan, R. Human leg design: Optimal axial alignment under constraints. J. Math. Biol. 48, 623–646. https://doi.org/10.1007/s00285-004-0269-3 (2004). Shen, Z. H. & Seipel, J. E. A fundamental mechanism of legged locomotion with hip torque and leg damping. Bioinspir. Biomim. 7, 046010 (2012). Witte, H. et al. In Proceedings of CLAWAR'2001–4th International Conference on Climbing and Walking Robots. 63–68. Witte, H. et al. In International Symposium on Adaptive Motion of Animals and Machines. Daley, M. A. & Biewener, A. A. Leg muscles that mediate stability: Mechanics and control of two distal extensor muscles during obstacle negotiation in the guinea fowl. Philos. Trans. R. Soc. B Biol. Sci. 366, 1580–1591 (2011). Jindrich, D. L. & Full, R. J. Dynamic stabilization of rapid hexapedal locomotion. J. Exp. Biol. 205, 2803–2823 (2002). Farris, D. J. & Sawicki, G. S. Human medial gastrocnemius force–velocity behavior shifts with locomotion speed and gait. Proc. Natl. Acad. Sci. 109, 977–982 (2012). Roberts, T. J., Marsh, R. L., Weyand, P. G. & Taylor, C. R. Muscular force in running turkeys: The economy of minimizing work. Science 275, 1113–1115 (1997). Rode, C., Sutedja, Y., Kilbourne, B. M., Blickhan, R. & Andrada, E. Minimizing the cost of locomotion with inclined trunk predicts crouched leg kinematics of small birds at realistic levels of elastic recoil. J. Exp. Biol. 219, 485–490 (2016). Haase, D., Nyakatura, J. & Denzler, J. Comparative large-scale evaluation of human and active appearance model based tracking performance of anatomical landmarks in X-ray locomotion sequences. Pattern Recogn. Image Anal. 24, 86–92 (2014).
Haase, D., Nyakatura, J. A. & Denzler, J. In Joint Pattern Recognition Symposium. 11–20 (Springer). Mothes, O. & Denzler, J. In International Conference on Pattern Recognition (ICPR)—VAIB workshop (2018). Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016). Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. Adv. Neural. Inf. Process. Syst. 25, 1097–1105 (2012). Vapnik, V. The Nature of Statistical Learning Theory (Springer, 1999). Gonzalez, R. C. & Woods, R. E. Digital Image Processing 4th edn. (Pearson, 2018). Blickhan, R., Andrada, E., Hirasaki, E. & Ogihara, N. Global dynamics of bipedal macaques during grounded and aerial running. J. Exp. Biol. 221, 58 (2018). We would like to thank Lisa Dargel for animal training and animal guidance during the experiments. Rommy Petersohn and Yefta Sutedja for their technical assistance during the experiments. Ben Witt (formerly known as Ben Derwel) together with students worked hard to digitalize landmarks from the X-ray images for the semi-automatic identification. Open Access funding enabled and organized by Projekt DEAL. The study was supported by the German Research Foundation DFG-grants (De 735/8-1/3, Bl 236/22-1/3, Fi 410/15-1/3, AN 1286/2-1) to DJ, RB, MSF and EA, respectively. This work was also supported by DFG FI 410/16-1 and NSF (DBI-2015317) as part of the NSF/CIHR/DFG/FRQ/UKRI-MRC Next Generation Networks for Neuroscience Program. Institute of Zoology and Evolutionary Research, Friedrich-Schiller-University Jena, Jena, Germany Emanuel Andrada, Heiko Stark & Martin S. Fischer Computer Vision Group, Friedrich-Schiller-University Jena, Jena, Germany Oliver Mothes & Joachim Denzler Department of Physiology, Northwestern University, Chicago, IL, USA Matthew C. Tresch Science of Motion, Friedrich-Schiller-University Jena, Jena, Germany Reinhard Blickhan Emanuel Andrada Oliver Mothes Heiko Stark Joachim Denzler Martin S. 
Fischer E.A., M.S.F., and R.B. conceived the study. E.A. and M.S.F. supervised the experiments. J.D. and O.M. developed and O.M. performed the semi-automatic landmark identification, E.A. analyzed experimental data inclusive 2D and 3D kinematics, H.S. performed the statistics, E.A., M.S.F., D.J., M.T. and R.B. grants acquisition. E.A. drafted the manuscript. All authors contributed to the interpretation of the results and revised the manuscript. Correspondence to Emanuel Andrada. Supplementary Information. Andrada, E., Mothes, O., Stark, H. et al. Limb, joint and pelvic kinematic control in the quail coping with steps upwards and downwards. Sci Rep 12, 15901 (2022). https://doi.org/10.1038/s41598-022-20247-y
Estimating health service utilization potential using the supply-concentric demand-accumulation spatial availability index: a pulmonary rehabilitation case study

Kevin A. Matthews (ORCID: 0000-0001-8193-2202), Anne H. Gaglioti, James B. Holt, Anne G. Wheaton & Janet B. Croft

The potential for a population at a given location to utilize a health service can be estimated using a newly developed measure called the supply-concentric demand-accumulation (SCDA) spatial availability index. Spatial availability is the amount of demand at the given location that can be satisfied by the supply of services at a facility, after discounting the intervening demand among other populations that are located nearer to a facility location than the given population location. This differs from spatial accessibility measures, which treat absolute distance or travel time as the factor that impedes utilization. The SCDA is illustrated using pulmonary rehabilitation (PR), which is a treatment for people with chronic obstructive pulmonary disease (COPD). The spatial availability of PR was estimated for each Census block group in Georgia using the 1105 residents who utilized one of 45 PR facilities located in or around Georgia. Data were provided by the Centers for Medicare & Medicaid Services. The geographic patterns of the SCDA spatial availability index and the two-step floating catchment area (2SFCA) spatial accessibility index were compared with the observed PR utilization rate using bivariate local indicators of spatial association. The SCDA index was more associated with PR utilization (Moran's I = 0.607, P < 0.001) than was the 2SFCA (Moran's I = 0.321, P < 0.001). These results suggest that measures of spatial availability may be a better way to estimate health care utilization potential than measures of spatial accessibility. The potential for a population at a given location to utilize a health service can be estimated as the spatial availability of a service.
Spatial availability is the amount of demand at the given location that can be satisfied by the supply of services at a facility, after discounting the intervening demand among other populations that are located nearer to a facility location than the given population location. This differs from spatial accessibility measures, which treat absolute distance or travel time as the primary factor that impedes people from using a health care service [1,2,3,4,5,6]. While distances or travel times from demand locations to supply locations are a common way of measuring impedance [7,8,9], the demand for the service among populations that reside closer to the available health care facilities has never been investigated as a source of impedance. Formally, we define the spatial availability of a health care service at a given population location(i) as the amount of demand at that location that can be satisfied by the supply of services at a facility(j), after discounting the intervening demand among other populations (ii) that are located nearer to a facility location(j) than the given population location(i). Here, we introduce the supply-concentric demand-accumulation (SCDA) spatial availability index as a new approach for estimating utilization potential. We illustrate the SCDA using pulmonary rehabilitation (PR) in Georgia. PR is an effective treatment for chronic obstructive pulmonary disease (COPD), which is an irreversible respiratory disease that worsens over time. Improving the availability of PR can potentially improve the lives of over 15 million Americans with COPD [10]. We chose Georgia because it is within a region of the United States that has significantly higher COPD prevalence, Medicare hospitalizations for COPD, and COPD-related mortality than other areas in the United States [10, 11].
PR is a multi-modal intervention; a typical session may include breathing exercises, education on disease processes and physiology, psychological support, nutrition counseling, peer support, and exercise training [12]. Patients with COPD who participate in PR have better exercise outcomes, fewer chronic comorbidities, and a higher quality of life [13]. PR programs usually last from 8 to 12 weeks, with 2 or 3 sessions per week [14]. Given the time intensity and frequency of this treatment, adherence to a prescribed regimen may be hindered or facilitated by the amount of demand for PR among a population with COPD that can be satisfied by the number of treatments that are available at their nearby PR facilities. In this study, we compare the geographic pattern of spatial availability of PR using the SCDA spatial availability index with the geographic pattern of a contemporary measure of spatial accessibility called the two-step floating catchment area (2SFCA) spatial accessibility index [15]. Then we compared both measures of health service utilization potential with the geographic pattern of observed PR utilization. While we used PR to illustrate the SCDA spatial availability index, this method could be used to estimate the utilization potential of any specific procedure in a healthcare utilization database that contains locational information about each health care facility and the number of services they provide. The Centers for Medicare & Medicaid Services (CMS) annually publishes 100% Medicare Limited Data Set (LDS)–Outpatient Files [16]. This data set contains all Fee-for-Service (FFS) claims submitted by institutional outpatient facilities. The analytic cohort consists of Medicare FFS beneficiaries aged ≥ 65 years who resided in Georgia in 2014 and were treated for COPD with PR using Healthcare Common Procedure Coding System (HCPCS) code G0424. 
A medical diagnosis of a chronic respiratory condition including chronic bronchitis (ICD-9-CM codes 491.0–491.1), obstructive chronic bronchitis, without exacerbation (ICD-9-CM code 491.20), other chronic bronchitis (ICD-9-CM code 491.8), other emphysema (ICD-9-CM code 492.8), or chronic airway obstruction, not elsewhere classified (ICD-9-CM code 496) is required for reimbursement under this HCPCS code. Since PR typically requires several treatments to be fully effective, each patient receives multiple PR treatments. PR facilities were defined as any facility used by the analytic cohort. One important characteristic of the LDS data is that the geographic detail for the beneficiaries is low (e.g., county of residence), but the geographic detail about the provider is high (e.g., street address of practice location). That is, the LDS data provides the National Provider Identifier (NPI) number of each provider which, when matched to the publicly available NPI database, contains the full street address of their practice location. Any facility located within a state that borders Georgia was included if it billed Medicare for services provided to a Georgia resident. Multiple providers can practice at a single facility and multiple facilities can be located within a single ZIP Code. The supply locations in this study were the geometric centroids of the ZIP Code tabulation area (ZCTA) corresponding to the ZIP Code of the practice location in each provider's NPI record. Then, the numbers of services for the providers and facilities were summed together if they had the same ZIP Code. For example, if ten providers with the same ZIP Code each performed ten services, the total supply at the ZCTA would be equal to 100. Calculating the estimated demand field One necessary input for calculating the SCDA spatial availability index is the estimated demand field, which is a pre-computed estimate of demand for PR at each population location [17].
We estimated demand for PR among Medicare Fee-for-Service (FFS) beneficiaries aged ≥ 65 years who were diagnosed with COPD at each Census block group. The geographic and population data were collected by the US Census Bureau as part of the 2010 decennial Census. Geographic and age-specific population data for each Census 2010 block group were downloaded from the National Historical Geographic Information System database [18]. Demand estimates were needed because the demand for PR is higher than the observed utilization among the analytic cohort. That is, not all people who needed PR (e.g., persons with COPD) used it. For this study, demand for PR was estimated for each Census block group in Georgia (n = 5529) and in block groups located within the counties of other states that border Georgia (n = 5576). Two publicly available datasets published by CMS were used to create the estimated demand field. The first is a county-level dataset containing the prevalence of selected chronic conditions (including COPD) for Medicare beneficiaries enrolled in the Fee-for-Service (FFS) program [19]. Beneficiaries with COPD were identified if a patient had at least one inpatient, skilled nursing facility, home health agency, or two carrier claims with any International Classification of Diseases, 9th edition, Clinical Modification (ICD-9-CM) codes 490–492 or 496 present on any claim within a 1-year reference period beginning in 2014 [20]. The second is a county-level dataset containing the percent of Medicare beneficiaries who were FFS beneficiaries, which was necessary given that only 66% of the US population aged ≥ 65 years are FFS enrollees; this percentage varies substantially across the United States [21].
Equation 1 shows that the estimated demand (Ei) for block group (i) was calculated by multiplying the 2010 Census population of persons aged ≥ 65 years (Pi) at block group(i) by the county-level percentage of that population who were Fee-for-Service beneficiaries (FFSci) and then by the county-level percentage of those FFS beneficiaries who were diagnosed with COPD (COPDci). $$E_{i} = P_{i} * COPD_{ci} * FFS_{ci}$$ where, i = index of block groups, c = index of counties in state of Georgia, Ei = the estimated number of Medicare FFS beneficiaries aged ≥ 65 years diagnosed with COPD at block group i, Pi = the number of people aged ≥ 65 years residing within block group i, FFSci = County-level percentage of population aged ≥ 65 years who were Medicare Fee-for-Service enrollees in 2014, COPDci = County-level percentage of Medicare FFS beneficiaries aged ≥ 65 years diagnosed with chronic obstructive pulmonary disease (COPD) in county c. Overview of the supply-concentric demand accumulation (SCDA) spatial availability index The most commonly used contemporary measure of spatial accessibility is called the two-step floating catchment area (2SFCA) spatial accessibility index [22]. The 2SFCA spatial accessibility index uses two types of floating catchments. Floating catchments are areas drawn around a location and have been defined in a number of ways, such as by a fixed Euclidean distance, [23,24,25] or travel time from a population location to a facility location [15, 26]. An alternate approach is for all catchments to vary in size according to some threshold value, such as the number of people needed to support the facility [5, 17, 27]. The first type of catchment is centered on the facilities where the supply of a service is located. A provider-to-population (P2P) ratio is calculated for each facility using the number of providers at the facility as the numerator and the number of people who reside within the facility's catchment as the denominator. 
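Eq. 1 can be expressed directly in code. The following is a minimal Python sketch of the demand estimate for one block group; the input values are illustrative, not taken from the study data.

```python
def estimated_demand(p_i, ffs_ci, copd_ci):
    """Eq. 1: estimated number of Medicare FFS beneficiaries aged >= 65
    with COPD in block group i.

    p_i     -- population aged >= 65 in block group i (2010 Census)
    ffs_ci  -- county-level fraction of that population enrolled in FFS
    copd_ci -- county-level fraction of FFS beneficiaries diagnosed with COPD
    """
    return p_i * ffs_ci * copd_ci

# Illustrative (made-up) inputs: 800 residents aged >= 65,
# 66% FFS enrollment, 12% COPD prevalence among FFS beneficiaries.
print(estimated_demand(800, 0.66, 0.12))  # ~63.36 beneficiaries with COPD
```

Because the two county-level percentages simply scale the block group population, fractional demand estimates such as 63.36 are expected and are carried through unrounded.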
The second type of catchment is centered on each population location. The final 2SFCA measure for a given population location is calculated in the second step by summing the P2P ratios for all health care facilities located within the floating catchment of that given population location. Another important advancement was the creation of the enhanced 2SFCA (E2SFCA) spatial accessibility index, which uses discrete distance zones to account for the decreasing service utilization potential among the population of an area as their distance or travel time from facilities increased [15]. However, this distance decay parameter can also be estimated continuously using a variety of functions; in this study we used a Gaussian function, thus removing the need for discrete zones [4]. The SCDA spatial availability index also uses floating catchments but uses them in an entirely different way than the 2SFCA. The SCDA only produces floating catchment areas around facility locations, but it produces as many catchment areas as there are population locations (or as many as are located within a threshold distance or travel time, if a threshold is imposed by the researcher). The "supply-concentric" component refers to the concentric catchment areas that surround each facility as the distance or travel time from the facility to each population location increases. These concentric catchments will be circular if based on Euclidean distance and oddly shaped if based on travel time along a road network. The Network Analyst extension of ArcGIS 10.5.1 (ESRI, Redlands, CA) and ESRI Streetmap data were used to create an origin(i)-destination(j) matrix of travel time in minutes from each PR facility to each population-weighted Census block group centroid. Unique facility(j)-population location(i) dyads are created and denoted as SCDAji when the facility(j) catchment intersects each population location(i).
The "demand accumulation" component refers to the intervening demand for the health service that accumulates as the distance or travel time from the facility to each population location increases. The SCDA spatial availability index (SCDAi) requires two general steps. The first step is to calculate the SCDA ratio for each SCDAji dyad. Equation 2 shows that the numerator for a given dyad is the number of services observed at the facility location(j) and the accumulated demand at each population location(i) is the denominator. The accumulated demand at a given population location(i) is the estimated demand (Eq. 1) at that location plus the sum of the estimated demand at all population locations (ii) that were located nearer to a facility(j). An SCDAji ratio > 1 indicates that the observed number of services at facility(j) exceeds the accumulated demand at population location(i) and that the supply at facility(j) can fully satisfy the accumulated demand at each population location(i). An SCDA ratio < 1 indicates that the accumulated demand at population location(i) exceeds the supply at facility(j). $$SCDA_{ji} Ratio = \frac{{O_{j} }}{{\sum E_{i} \in \left( {d_{jii} \le d_{ji} } \right) }}$$ where, SCDAji = Facility(j)-specific SCDA ratio at population location(i), j = index of facility location, i = index of population locations, Oj = observed number of procedures at facility location j, Ei = the estimated number of Medicare FFS beneficiaries aged ≥ 65 years diagnosed with COPD at block group i, dji = Travel time from facility location(j) to population location(i), djii = Travel time from facility location(j) to population location (ii). The SCDA spatial availability index for each population location(i) is a summary measure of the SCDA ratios for all facilities (j) within a threshold distance (d0) from the population location(i). Eq. 
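The accumulated-demand denominator of Eq. 2 can be sketched in Python. This is a minimal illustration under stated assumptions: the facility, block group names, service counts, and travel times below are all hypothetical, and demand values stand in for Eq. 1 estimates.

```python
def scda_ratio(o_j, i, demand, travel_time_j):
    """Eq. 2 sketch: SCDA ratio for one facility(j)-population location(i) dyad.

    o_j           -- observed number of services at facility j
    i             -- the population location of interest
    demand        -- dict: population location -> estimated demand (Eq. 1)
    travel_time_j -- dict: population location -> travel time from facility j
    """
    d_ji = travel_time_j[i]
    # Accumulated demand: demand at i plus the intervening demand at every
    # location ii that is at least as near to facility j as i is.
    accumulated = sum(e_ii for ii, e_ii in demand.items()
                      if travel_time_j[ii] <= d_ji)
    return o_j / accumulated

# Hypothetical facility with 120 observed services and three block groups.
demand = {"bg1": 40.0, "bg2": 25.0, "bg3": 60.0}
minutes = {"bg1": 10.0, "bg2": 20.0, "bg3": 35.0}
print(scda_ratio(120, "bg2", demand, minutes))  # 120 / (40 + 25) ≈ 1.85
```

In this toy example the nearest block group ("bg1") sees a ratio of 120/40 = 3 (supply fully satisfies its demand), while the farthest ("bg3") sees 120/125 < 1 because the intervening demand of the two nearer block groups accumulates in the denominator.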
3 shows that the SCDA availability index for each population location(i) is the gravity-weighted mean of all SCDAji ratios of facilities within a threshold distance (d0) from the population location(i). The numerator is the observed number of services provided at each facility(j) weighted by a distance decay weight G(dji,d0) presented in Eq. 4 [4]. This parameter is used to account for the decay in utilization potential as travel time increases. The denominator attributed to each population location is the accumulated demand (Ei) for PR divided by the number of PR facilities within a 60-min travel time (Nj); the denominator is not gravity weighted because the demand for PR from a person with COPD is independent of whether or not they are able to access the service. Logarithmic transformation aids in interpretation of the SCDA spatial availability index. An SCDA spatial availability index > 1, or log(SCDA index) > 0, at a population location(i) indicates the number of services at all facilities within 60 min can fully satisfy the accumulated demand at population location(i). An SCDA index < 1, or log(SCDA index) < 0, at a population location(i) indicates that the supply of the services at all facilities within 60 min is unable to satisfy the accumulated demand at population location(i). $$SCDA_{i} = \log \left( \frac{\sum_{j} O_{j} * G\left( d_{ji}, d_{0} \right)}{\frac{\sum E_{i} \in \left( d_{jii} \le d_{ji} \right)}{N_{j}}} \right)$$ where, i = index of population locations, j = index of facility location, Cji = Facility(j)-specific SCDA ratios at population location(i), sorted by dji.
dji = Travel time from facility location(j) to population location(i), djii = Travel time from facility location(j) to population location(ii), Ci = The sum of the facility(j)-specific SCDA ratios of all facilities at block group(i) within a 60-min travel time, d0 = Threshold travel time (60 min), G(dji, d0) = Distance decay weight based on the Gaussian function, Nj = number of facilities within threshold travel time. $$G\left( d_{ji}, d_{0} \right) = \begin{cases} \dfrac{e^{-\frac{1}{2}\left( d_{ji}/d_{0} \right)^{2}} - e^{-1/2}}{1 - e^{-1/2}} & d_{ji} \le d_{0} \\ 0 & d_{ji} > d_{0} \end{cases}$$ where, dji = Travel time from facility location(j) to population location(i), G(dji, d0) = Distance decay weight based on the Gaussian function, d0 = Threshold travel time. To ensure comparability between the SCDA and 2SFCA, we used the same demand estimates from Eq. 1, the same number of PR treatments observed at each facility location(j), and the same Gaussian distance decay function. We also used travel time along a road network and imposed a 60-min travel time limit (d0). We used the equation for the two-step floating catchment area (2SFCA) spatial accessibility index, which can be found elsewhere [15]. Block groups > 60 min from a facility were symbolized as their own map class, but they were assigned the minimum value of the block group with the longest travel time that was within 60 min. We used the bivariate local Moran's I statistic to evaluate the spatial association of the SCDA spatial availability index and the 2SFCA spatial accessibility index; this statistic measures the degree of positive or negative linear association between the value for one variable at a given location and the average of another variable at neighboring locations [28].
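The Gaussian decay weight of Eq. 4 can be sketched in a few lines. This assumes the common normalization in which the weight equals 1 at the facility and falls to 0 at the threshold, i.e. the Gaussian form cited from Dai [4]; the 60-min default mirrors the threshold used in this study.

```python
import math

def gaussian_weight(d_ji, d_0=60.0):
    """Eq. 4 sketch: Gaussian distance-decay weight.
    Equals 1 at d_ji = 0, declines to 0 at the threshold d_0, and is 0 beyond it."""
    if d_ji > d_0:
        return 0.0
    return ((math.exp(-0.5 * (d_ji / d_0) ** 2) - math.exp(-0.5))
            / (1.0 - math.exp(-0.5)))

print(gaussian_weight(0))    # 1.0  (full weight at the facility)
print(gaussian_weight(60))   # 0.0  (weight vanishes at the threshold)
print(gaussian_weight(90))   # 0.0  (beyond the 60-min threshold)
```

Subtracting and renormalizing by e^{-1/2} is what forces the weight to reach exactly zero at d0, rather than the small positive tail a raw Gaussian would leave.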
We also used Pearson's R correlation to measure the association between the two measures of utilization potential and the observed PR utilization rate for all block groups in Georgia. The PR utilization rate for each county used the total number of PR procedures observed in the county in the numerator and the total number of beneficiaries who received PR as the denominator. However, the SCDA index and the 2SFCA index were measured at the block group level while the PR utilization rate was a county-level measure because the most detailed level of geography for beneficiaries in the Medicare Limited Data Set (LDS)–Outpatient Files is county. Calculating measures of association under these conditions is generally known as the change of spatial support problem (COSP) where spatial support refers to the shape, size, and orientation of the geographic units into which spatial measurements are taken [29]. We addressed the COSP by transforming the PR utilization variable from a county-level variable to a block group-level using a process called downscaling. We downscaled each county's PR utilization rate by assigning its value to all block groups nested within that county. We also calculated these correlation coefficients stratified by metropolitan status using the 2013 NCHS Urban–Rural Classifications Scheme for Counties [30]. This scheme breaks counties in the United States into 6 classes, which we further collapsed into the following three categories: (1) large central and fringe metropolitan, (2) medium and small metropolitan, and (3) micropolitan and noncore. We used ArcGIS 10.5 for spatial data handling and cartography. We used the ArcGIS Network Analyst to create an origin–destination matrix of travel times along a street network consisting of 45 PR facilities(j) in and around the state of Georgia to the 11,305 block groups(i) in Georgia and counties that neighbor Georgia for a total of 508,725 unique facility(j)-population location(i) dyads. 
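The county-to-block-group downscaling described above can be sketched as follows. The FIPS codes and rates are made up for illustration; the join assumes block group identifiers begin with the 5-digit county FIPS code, as Census GEOIDs do.

```python
# Hypothetical county-level PR utilization rates (treatments per PR
# beneficiary), keyed by 5-digit county FIPS code.
county_rate = {"13121": 12.4, "13089": 0.0}

# Census block group GEOIDs nest the county FIPS code as their first 5 digits.
block_groups = ["131210001001", "131210001002", "130890203001"]

# Downscaling: every block group inherits the rate of its containing county.
downscaled = {bg: county_rate[bg[:5]] for bg in block_groups}
print(downscaled)
# {'131210001001': 12.4, '131210001002': 12.4, '130890203001': 0.0}
```

Every block group in the same county receives an identical value, which is why the resulting correlations are less refined than they would be with true block group-level utilization data, as the limitations section notes.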
However, we only show results for the 5529 block groups within Georgia. We used custom scripts written for STATA 15/SE to calculate the SCDA index, the Two-Step Floating Catchment Area (2SFCA) index, the PR utilization rate, and Pearson's R correlation coefficients. Geoda 1.14.0 was used to calculate the bivariate local Moran's I statistic for each block group. In 2014, 1105 Medicare FFS beneficiaries aged ≥ 65 years who resided in Georgia received a total of 18,166 PR treatments at the 33 PR facilities practicing in Georgia or at one of the 12 PR facilities located outside the state. PR facilities were located in only 18.9% (n = 30) of the 159 counties in Georgia. Almost half of all counties (n = 80) had at least one beneficiary who obtained PR in 2014. According to the 2010 Census, 985,137 Georgia residents were aged ≥ 65 years. Of the 5529 block groups in Georgia, only 3.4% were more than 60 min away from a PR provider, representing 3.6% of the population aged ≥ 65 years. The county-level PR utilization rate ranged from 0 to 56 treatments per PR beneficiary (Fig. 1). Most counties that had a PR utilization rate > 0 were in the northern half of Georgia, which is where most of the PR facilities were located. Pulmonary rehabilitation utilization rate and locations of facilities used by Georgia beneficiaries, 2014 The association between the geographic pattern of the PR utilization rate (Fig. 1) and the SCDA index (Fig. 2a) was relatively high using both the Pearson's R (aspatial) and local Moran's I (spatial) measures of association (R = 0.692 and I = 0.607, P < 0.001). The association of the PR utilization rate and the 2SFCA index (Fig. 2b) was much lower (R = 0.268 and I = 0.321, P < 0.001). The SCDA index was more strongly associated with the PR utilization rate than the 2SFCA index even when stratified by rural–urban status. Note that we did not conduct the local Moran's I tests by urban–rural status because the observations need to be spatially contiguous.
In the large central and fringe metropolitan areas, the SCDA was more associated with PR utilization (R = 0.589, P < 0.001) than was the 2SFCA (R = 0.442, P < 0.001). The association between the SCDA index and PR utilization was highest in the medium and small metropolitan counties (R = 0.690, P < 0.001), but this urban–rural category exhibited the lowest association for the 2SFCA (R = 0.264, P < 0.001). PR utilization in the nonmetropolitan counties was more correlated with the SCDA spatial availability index (R = 0.431, P < 0.001) than with the 2SFCA spatial accessibility index (R = 0.327, P < 0.001). a The spatial availability of pulmonary rehabilitation (PR) and b the spatial accessibility of PR in Georgia, 2014 The association between the geographic pattern of the SCDA index (Fig. 2a) and the 2SFCA index (Fig. 2b) was relatively low using both the Pearson's R (aspatial) and local Moran's I (spatial) measures of association (R = 0.433 and I = 0.524, P < 0.001). The association was strongest in the large central and fringe metropolitan counties (R = 0.910, P < 0.001) and in the non-metropolitan counties (R = 0.846, P < 0.001), but was much smaller in the small and medium metropolitan areas (R = 0.143, P < 0.001). The spatial availability map (Fig. 2a) shows the geographic distribution of PR availability. Areas in red indicate that the supply of the services at all facilities within 60 min is potentially able to satisfy the demand for PR services, while areas in blue have insufficient levels of supply to satisfy the demand. The spatial accessibility map (Fig. 2b) shows the sum of the facility-to-population ratios for providers located within 60 min of a block group. Despite the general pattern of high spatial availability and accessibility in block groups surrounding the PR facility locations, the geographic patterns of the two measures of potential utilization are quite different.
For example, the SCDA map shows relatively few block groups around Albany with enough availability to satisfy the demand. But, according to the 2SFCA map, spatial accessibility to PR remains relatively high even at a great distance from Albany. Likewise, the SCDA measure is much more concentrated around the provider locations than the 2SFCA measure. For example, Fig. 2b shows a large swath of relatively high spatial accessibility to PR in the middle of the state (around Macon) and another around Valdosta, but these large areas do not appear in Fig. 2a. Figure 3 highlights where the two measures are most similar and where they are different. Overall, it highlights that the high values of spatial accessibility do not dissipate as quickly with distance as the spatial availability measure, suggesting that the spatial accessibility measure may overestimate the utilization potential of those populations. For example, the swath of red surrounding Atlanta shows where accessibility and availability are both relatively high, but there is a large fringe of green where there is high accessibility but low availability. This phenomenon also occurs near Macon, Valdosta, and the block groups on the northern border near Chattanooga, TN. This suggests that the availability is exhausted at much shorter travel times. The purple areas represent places with high availability but low accessibility. These areas tend to be more rural, so a relatively small supply at a location can potentially satisfy a relatively large rural area. Co-location of the Supply-Concentric Demand Accumulation (SCDA) spatial availability index and the Two-Step Floating Catchment Area (2SFCA) spatial accessibility index In this study, we introduced the SCDA spatial availability index as a new measure of health service utilization potential and compared its results with those of the commonly used 2SFCA spatial accessibility index.
We found that the spatial availability of pulmonary rehabilitation (PR) in Georgia was significantly associated with the spatial accessibility of PR, but the spatial accessibility index appears to overestimate PR utilization potential. This overestimation may also explain why the SCDA measure of PR spatial availability is more highly correlated with PR utilization than the measure of PR spatial accessibility. While we measured the spatial availability of PR facilities to Medicare beneficiaries in the state of Georgia, the method can be generalized to any given health service in any part of the world that has data about the location and supply of that service and a population who needs it. Like the 2SFCA and other measures of spatial accessibility, the SCDA spatial availability index also models supply and demand simultaneously, which is considered an essential property of any measure of spatial utilization potential [7,8,9]. However, this study also highlights the importance of intervening demand as the primary factor impeding utilization of a health service at a given population location(i). The SCDA method presented here also builds upon the field-based framework for creating spatially adaptive floating catchment (SAFC) areas, which relies on pre-computed estimates of demand for a health service at each population location [17]. It is important to note the relationship between the concentric catchments described in this paper and the SAFCs; while each facility has several SCDAs, the one where the supply first exceeds the demand is the SAFC for a given facility. One of the strengths of the SCDA index compared to the 2SFCA is its interpretability. While the value of the 2SFCA continually increases from its minimum value to suggest greater accessibility, it does not detect whether an area has an adequate supply of a service.
The SCDA values center around an equilibrium point where demand equals supply, thus explicitly describing which areas have enough supply to satisfy demand (SCDA > 0) and which areas do not (SCDA < 0). Another strength of this study is that we were able to measure spatial availability and accessibility along the border of Georgia by using any facility used by Georgia beneficiaries, including those located outside the state, and the field-based framework of pre-computed estimates of demand for a health service at each population location. Another strength of the SCDA index is that it is sensitive to the number of procedures that were provided rather than the number of clinicians providing the service. This is important because estimates of availability may be biased if they are based only on types of providers or facilities at a location [31]. This approach also addresses a phenomenon where a majority of physicians practice at multiple facility sites [32], because the supply is based on the number of patients seen at a facility rather than on which provider specifically treated the patient. There are also a few limitations that are specific to this study. First, the LDS data only contained 1 year of data about beneficiaries aged ≥ 65 years who were enrolled in Fee-for-Service programs and used an institutional facility for PR. This means that we had a relatively small number of beneficiaries with which to calculate utilization rates and the observed number of PR services for each facility. Furthermore, the CMS data did not have information about beneficiaries who obtained PR using managed care plans, Medicare Advantage programs, or at a Veterans Affairs clinic or hospital. This limitation does not affect the SCDA method or the comparisons we made within the context of the data that were available, but it is possible that the data are not representative of the full landscape of PR utilization in Georgia.
Second, the highest level of geographic detail for Medicare beneficiaries is their county of residence, which required us to downscale the observed PR utilization rates to the census block group level. While the correlation coefficients are not as robust as they would be if all measures were originally at the block group level, these findings still suggest that the spatial availability measure is potentially a better estimate of utilization potential than the 2SFCA method. However, there is another limitation of the SCDA that does not apply to the 2SFCA. One property of the SCDA is that the geographic reach of the supply at two or more individual facilities located within the same geographic unit will be much smaller than the reach of a single location with the supply data aggregated together. Our solution was to aggregate the supply data for two or more facilities located within the same ZIP Code and to reposition the facility locations to one statistically central location. In this respect the 2SFCA has an advantage over the SCDA: it yields a more geographically refined measure of utilization potential, because its supply data do not need to be aggregated and repositioned.

The SCDA method has wide application beyond one region of the United States or the therapeutic procedure known as pulmonary rehabilitation. While we used pulmonary rehabilitation to illustrate the SCDA spatial availability index, the method could be used to measure any specific procedure in a healthcare utilization database that contains locational information about each health care facility and the number of services it provided to a defined population. National health systems that maintain complete records of each patient, the type of care they sought, and the location where care was delivered are best positioned to take full advantage of this method.
However, even health systems that do not maintain robust datasets can still take advantage of this method as long as they have data about the locations of the available service facilities and the locations of the populations that potentially need the service.

References

1. Khan AA, Bhardwaj SM. Access to health care. A conceptual framework and its relevance to health care planning. Eval Health Prof. 1994;17(1):60–76.
2. Luo W, Wang F. Measures of spatial accessibility to health care in a GIS environment: synthesis and a case study in the Chicago region. Environ Plann B Plann Des. 2003;30(6):865–84.
3. Guagliardo MF. Spatial accessibility of primary care: concepts, methods and challenges. Int J Health Geogr. 2004;3(1):3.
4. Dai D. Black residential segregation, disparities in spatial access to health care facilities, and late-stage breast cancer diagnosis in metropolitan Detroit. Health Place. 2010;16(5):1038–52.
5. McGrail MR. Spatial accessibility of primary health care utilising the two step floating catchment area method: an assessment of recent improvements. Int J Health Geogr. 2012;11(1):50.
6. Delamater PL. Spatial accessibility in suboptimally configured health care systems: a modified two-step floating catchment area (M2SFCA) metric. Health Place. 2013;24:30–43.
7. Joseph AE, Phillips DR. Accessibility and utilization: geographical perspectives on health care delivery. NY: Sage; 1984.
8. Radke J, Mu L. Spatial decompositions, modeling and mapping service regions to predict access to social programs. Geographic Inform Sci. 2000;6(2):105–12.
9. Higgs G. A literature review of the use of GIS-based measures of access to health care services. Health Serv Outcomes Res Methodol. 2004;5(2):119–39.
10. Croft JB, et al. Urban-rural and state differences in COPD prevalence, Medicare hospitalizations, and mortality–United States, 2015. MMWR Morb Mortal Wkly Rep. 2018;67(7):205.
11. Ford ES, et al. COPD surveillance—United States, 1999–2011. Chest. 2013;144(1):284–305.
12. NHLBI. Pulmonary Rehabilitation. [cited February 7, 2020]. https://www.nhlbi.nih.gov/health-topics/pulmonary-rehabilitation.
13. Ries A, et al. Pulmonary rehabilitation executive summary. Chest. 2007;131(5):1S–3S.
14. Spruit M, et al. An official American Thoracic Society/European Respiratory Society statement: key concepts and advances in pulmonary rehabilitation. Am J Respir Crit Care Med. 2013;188(8):e13–64.
15. Luo W, Qi Y. An enhanced two-step floating catchment area (E2SFCA) method for measuring spatial accessibility to primary care physicians. Health Place. 2009;15(4):1100–7.
16. Centers for Medicare and Medicaid Services. Medicare Limited Data Set–100% Outpatient Files. 2014; https://www.ccwdata.org/web/guest/condition-categories.
17. Matthews KA, Gaglioti AH, Holt JB, Wheaton AG, Croft JB. Using spatially adaptive floating catchments to measure the geographic availability of a health care service: pulmonary rehabilitation in the southeastern United States. Health Place. 2019;56:165–73.
18. Manson S, et al. IPUMS National Historical Geographic Information System: Version 12.0 [Database]. Minneapolis: University of Minnesota; 2017.
19. Centers for Medicare and Medicaid Services. Chronic Conditions—Prevalence State/County Level: All Beneficiaries by Age, 2007–2017. 2014; https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Chronic-Conditions/CC_Main.html.
20. Centers for Medicare and Medicaid Services. Condition Categories. 2014; https://www.ccwdata.org/web/guest/condition-categories.
21. Centers for Medicare and Medicaid Services. Medicare Geographic Variation Public Use File: State/County Tables–All Beneficiaries. 2014 [cited June 5, 2019]. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Geographic-Variation/GV_PUF.html. Accessed 1 July 2017.
22. Luo W. Using a GIS-based floating catchment method to assess areas with shortage of physicians. Health Place. 2004;10(1):1–11.
23. Lu H, et al. Population-based geographic access to endocrinologists in the United States, 2012. BMC Health Serv Res. 2015;15(1):541.
24. Lian M, Struthers J, Schootman M. Comparing GIS-based measures in access to mammography and their validity in predicting neighborhood risk of late-stage breast cancer. PLoS ONE. 2012;7(8):e43000.
25. Wang F, Luo L, McLafferty SL. Healthcare access, socioeconomic factors and late-stage cancer diagnosis: an exploratory spatial analysis and public policy implication. Int J Public Pol. 2010;5(2–3):237–58.
26. McGrail MR, Humphreys JS. Measuring spatial accessibility to primary care in rural areas: improving the effectiveness of the two-step floating catchment area method. Appl Geogr. 2009;29(4):533–41.
27. Luo W, Whippo T. Variable catchment sizes for the two-step floating catchment area (2SFCA) method. Health Place. 2012;18(4):789–95.
28. Anselin L, Syabri I, Smirnov O. Visualizing multivariate spatial correlation with dynamically linked windows. In: Proceedings, CSISS Workshop on New Tools for Spatial Data Analysis, Santa Barbara, CA; 2002.
29. Gotway CA, Young LJ. Combining incompatible spatial data. J Am Stat Assoc. 2002;97(458):632–48.
30. Ingram DD, Franco SJ. NCHS urban-rural classification scheme for counties. Vital Health Stat. Series 2, Data Evaluation and Methods Research. 2012;(154):1–65.
31. Josey MJ, et al. Should measures of health care availability be based on the providers or the procedures? A case study with implications for rural colorectal cancer disparities. J Rural Health. 2019;35(2):236–43.
32. Xierali IM. Physician multisite practicing: impact on access to care. JABFM. 2018;31(2):260–9.

Author information: Kevin A. Matthews, James B. Holt, Anne G. Wheaton, and Janet B. Croft are with the Centers for Disease Control and Prevention, Atlanta, GA, USA. Anne H. Gaglioti is with the National Center for Primary Care, Morehouse School of Medicine, Atlanta, GA, USA.

Author contributions: KM conceived of the SCDA, performed all analyses, and created all maps. AG drafted the manuscript and provided analytical feedback. JH drafted the manuscript and provided analytical feedback. AW provided subject matter expertise on pulmonary rehabilitation. JC provided subject matter expertise on pulmonary rehabilitation. All authors read and approved the final manuscript. Correspondence to Kevin A. Matthews.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Citation: Matthews KA, Gaglioti AH, Holt JB, et al. Estimating health service utilization potential using the supply-concentric demand-accumulation spatial availability index: a pulmonary rehabilitation case study. Int J Health Geogr. 2020;19:30. https://doi.org/10.1186/s12942-020-00224-2. Received: 18 May 2020.
Intermediate Algebra: Functions and Graphs
Katherine Yoshiwara
Section 5.1 Functions

Subsection 5.1.1 Definitions and Notation

We often want to predict values of one variable from the values of a related variable. For example, when a physician prescribes a drug in a certain dosage, she needs to know how long the dose will remain in the bloodstream. A sales manager needs to know how the price of his product will affect its sales. A function is a special type of relationship between variables that allows us to make such predictions.

We have already seen some examples of functions. For instance, suppose it costs $800 for flying lessons, plus $30 per hour to rent a plane. If we let \(C\) represent the total cost for \(t\) hours of flying lessons, then
\begin{equation*} C=800+30t ~~~~ (t\ge 0) \end{equation*}
The variable \(t\) is called the input variable, and \(C\) is the output variable. Given a value of the input, we can calculate the corresponding output value using the formula for the function. Thus, for example,

when \(t=0\text{,}\) \(C=800+30(0)=800\)
when \(t=10\text{,}\) \(C=800+30(10)=1100\)

We can display the relationship between two variables by a table or by ordered pairs. The input variable is the first component of the ordered pair, and the output variable is the second component. For the example above we have:

\(t\)     \(C\)       \((t,C)\)
\(0\)     \(800\)     \((0, 800)\)
\(10\)    \(1100\)    \((10,1100)\)

Note that there can be only one value of \(C\) for each value of \(t\text{.}\) We say that "\(C\) is a function of \(t\text{.}\)"

Definition 5.1.1. Definition of Function.
A function is a relationship between two variables for which exactly one value of the output variable is determined by each value of the input variable.

Checkpoint 5.1.2. QuickCheck 1. What distinguishes a function from other variable relationships?
A) There cannot be two output values for a single input value.
B) We can display the variables as ordered pairs.
C) The variables are related by a formula.
D) The values of the input and output variables must be different.
Answer: A) There cannot be two output values for a single input value.

Example 5.1.3.
The distance, \(d\text{,}\) traveled by a car in 2 hours is a function of its speed, \(r\text{.}\) If we know the speed of the car, we can determine the distance it travels by the formula \(d = r \cdot 2\text{.}\)
The cost of a fill-up with unleaded gasoline is a function of the number of gallons purchased. The gas pump represents the function by displaying the corresponding values of the input variable (number of gallons) and the output variable (cost).
Score on the Scholastic Aptitude Test (SAT) is not a function of score on an IQ test, because two people with the same score on an IQ test may score differently on the SAT; that is, a person's score on the SAT is not uniquely determined by his or her score on an IQ test.

Checkpoint 5.1.4. Practice 1.
As part of a project to improve the success rate of freshmen, the counseling department studied the grades earned by a group of students in English and algebra. Do you think that a student's grade in algebra is a function of his or her grade in English? Explain why or why not.
A) Each value of \(x\) has exactly one value of \(y\) associated with it.
B) Two students with the same grade in English can have different grades in algebra.
C) Two students with the same grade in math will also have the same grade in English.
D) Two students with the same grade in math can have different grades in English.
Phatburger features a soda bar, where you can serve your own soft drinks in any size. Do you think that the number of calories in a serving of Zap Kola is a function of the number of fluid ounces?
A) The number of calories is proportional to the number of fluid ounces.
B) Two servings with the same calories will have different fluid ounces.
C) Two servings with the same fluid ounces will have different calories.
Answer: No, students with the same grade in English can have different grades in algebra. Yes, the number of calories is proportional to the number of fluid ounces.

A function can be described in several different ways. In the following examples, we consider functions defined by tables, by graphs, and by equations.

Subsection 5.1.2 Functions Defined by Tables

When we use a table to describe a function, the first variable in the table (the left column of a vertical table or the top row of a horizontal table) is the input variable, and the second variable is the output. We say that the output variable is a function of the input.

Example 5.1.5.
The table below shows data on sales compiled over several years by the accounting office for Eau Claire Auto Parts, a division of Major Motors. In this example, the year is the input variable, and total sales is the output. We say that total sales, \(S\text{,}\) is a function of \(t\text{.}\)

Year \((t)\)    Total sales \((S)\)

The table below gives the cost of sending a letter by first-class mail in 2020.

Weight in ounces \((w)\)    Postage \((P)\)
\(0 \lt w \le 1\)           $0.50

If we know the weight of the article being mailed, we can find the postage from the table. For instance, a catalog weighing 4.5 ounces would require $1.10 in postage. In this example, \(w\) is the input variable and \(p\) is the output variable.
We say that \(p\) is a function of \(w\text{.}\)

The table below records the age and cholesterol count for 20 patients tested in a hospital survey.

Age    Cholesterol count    Age    Cholesterol count
53     217                  51     209
51     227                  57     208

According to these data, cholesterol count is not a function of age, because several patients who are the same age have different cholesterol levels. For example, three different patients are 51 years old but have cholesterol counts of 227, 209, and 216, respectively. Thus, we cannot determine a unique value of the output variable (cholesterol count) from the value of the input variable (age). Other factors besides age must influence a person's cholesterol count.

Note 5.1.6. Note that several different inputs for a function can have the same output. For example, the inputs 4.5 and 4.25 in part (b) of the Example above have output $1.10. However, a single input cannot have more than one output, as illustrated in part (c) of the Example.

Decide whether each table describes \(y\) as a function of \(x\text{.}\) Explain your choice.

\(x\)    \(3.5\)    \(2.0\)    \(2.5\)    \(3.5\)    \(2.5\)    \(4.0\)    \(2.5\)    \(3.0\)
\(y\)    \(2.5\)    \(3.0\)    \(2.5\)    \(4.0\)    \(3.5\)    \(4.0\)    \(2.0\)    \(2.5\)

Is \(y\) a function of \(x\text{?}\) No: for example, \(x=3.5\) corresponds both to \(y=2.5\) and also to \(y=4.0\text{.}\)

\(x\)    \(-3\)    \(-2\)    \(-1\)    \(0\)    \(1\)    \(2\)    \(3\)
\(y\)    \(17\)    \(3\)     \(0\)     \(-1\)   \(0\)    \(3\)    \(17\)

Here \(y\) is a function of \(x\text{:}\) each value of \(x\) has exactly one value of \(y\) associated with it. Several inputs share an output, for example \(y=3\) corresponds both to \(x=-2\) and to \(x=2\text{,}\) but that does not violate the definition.

How would you know if a table of values does not come from a function?
A) The output values are all the same.
B) The input values are not evenly spaced.
C) Two different input values have the same output value.
D) Two different output values have the same input value.
Answer: D) Two different output values have the same input value.

Subsection 5.1.3 Functions Defined by Graphs

We can also use a graph to define a function. The input variable is displayed on the horizontal axis, and the output variable on the vertical axis.

The graph shows the number of hours, \(H\text{,}\) that the sun is above the horizon in Peoria, Illinois, on day \(t\text{,}\) where \(t = 0\) on January 1.
Which variable is the input, and which is the output?
How many hours of sunlight are there in Peoria on day 150?
On which days are there 12 hours of sunlight?
What are the maximum and minimum values of \(H\text{,}\) and when do these values occur?

The input variable, \(t\text{,}\) appears on the horizontal axis. The number of daylight hours, \(H\text{,}\) is a function of the date. The output variable appears on the vertical axis.
The point on the curve where \(t = 150\) has \(H \approx 14.1\text{,}\) so Peoria gets about 14.1 hours of daylight when \(t = 150\text{,}\) which is at the end of May.
\(H = 12\) at the two points where \(t \approx 85\) (in late March) and \(t \approx 270\) (late September).
The maximum value of 14.4 hours occurs on the longest day of the year, when \(t \approx 170\text{,}\) about three weeks into June. The minimum of 9.6 hours occurs on the shortest day, when \(t \approx 355\text{,}\) about three weeks into December.

Checkpoint 5.1.10. Practice 3.
The graph shows the elevation in feet, \(a\text{,}\) of the Los Angeles Marathon course at a distance \(d\) miles into the race. (Source: Los Angeles Times, March 3, 2005)
Which is the input variable, and which is the output?
A) The input variable is \(d\text{,}\) and the output variable is \(a\text{.}\)
B) The input variable is \(a\text{,}\) and the output variable is \(d\text{.}\)
What is the elevation at mile 20, in feet?
At what distances is the elevation 150 feet?
Give the relevant distances (to the nearest half-mile), separated by commas.
What are the maximum and minimum values of \(a\text{,}\) and when do these values occur?
The runners pass by the Los Angeles Coliseum at about 4.2 miles into the race. What is the elevation there (within 5 feet)?

Answers: The input variable is \(d\) and the output is \(a\text{.}\) The elevation at mile 20 is approximately 210 feet. The elevation is 150 feet approximately where \(d\approx 5\text{,}\) \(d\approx 11\text{,}\) \(d\approx 12\text{,}\) \(d\approx 16\text{,}\) \(d\approx 17.5\text{,}\) and \(d\approx 18\text{.}\) The maximum value of 300 feet occurs at the start, when \(d = 0\text{;}\) the minimum of 85 feet occurs when \(d\approx 15\text{.}\) The elevation at the Coliseum is approximately 210 feet.

Subsection 5.1.4 Functions Defined by Equations

Example 5.1.11 illustrates a function defined by an equation.

Example 5.1.11.
As of 2020, One World Trade Center in New York City is the nation's tallest building, at 1776 feet. If an algebra book is dropped from the top of One World Trade Center, its height above the ground after \(t\) seconds is given by the equation
\begin{equation*} h = 1776 - 16t^2 \end{equation*}
Thus, after \(1\) second the book's height is
\begin{equation*} h = 1776 - 16(1)^2 = 1760 \text{ feet} \end{equation*}
After \(2\) seconds its height is
\begin{equation*} h = 1776 - 16(2)^2 = 1712 \text{ feet} \end{equation*}
For this function, \(t\) is the input variable and \(h\) is the output variable. For any value of \(t\text{,}\) a unique value of \(h\) can be determined from the equation for \(h\text{.}\) We say that \(h\) is a function of \(t\text{.}\)

Checkpoint 5.1.12. Practice 4.
Write an equation that gives the volume, \(V\text{,}\) of a sphere as a function of its radius, \(r\text{.}\)
Answer: \(V=\dfrac{4}{3}\pi r^3\)

Checkpoint 5.1.13. QuickCheck 3. Name three ways to describe a function.
A) By inputs, outputs, or evaluation
B) By tables, equations, or graphs
C) By the intercepts, the slope, or the vertex
D) By numbers, letters, or diagrams
Answer: B) By tables, equations, or graphs

Subsection 5.1.5 Function Notation

There is a convenient notation for discussing functions. First, we choose a letter, such as \(f\text{,}\) \(g\text{,}\) or \(h\) (or \(F\text{,}\) \(G\text{,}\) or \(H\)), to name a particular function. (We can use any letter, but these are the most common choices.)

For instance, in Example 5.1.11, the height, \(h\text{,}\) of a falling algebra book is a function of the elapsed time, \(t\text{.}\) We might call this function \(f\text{.}\) In other words, \(f\) is the name of the relationship between the variables \(h\) and \(t\text{.}\) We write
\begin{equation*} h = f (t) \end{equation*}
which means "\(h\) is a function of \(t\text{,}\) and \(f\) is the name of the function."

Caution 5.1.14. The new symbol \(f(t)\text{,}\) read "\(f\) of \(t\text{,}\)" is another name for the height, \(h\text{.}\) The parentheses in the symbol \(f(t)\) do not indicate multiplication. (It would not make sense to multiply the name of a function by a variable.) Think of the symbol \(f(t)\) as a single variable that represents the output value of the function.

With this new notation we may write
\begin{equation*} h = f (t) = 1776 - 16t^2 \end{equation*}
or simply
\begin{equation*} f (t) = 1776 - 16t^2 \end{equation*}
to describe the function.

Note 5.1.15.
Perhaps it seems complicated to introduce a new symbol for \(h\text{,}\) but the notation \(f(t)\) is very useful for showing the correspondence between specific values of the variables \(h\) and \(t\text{.}\) In Example 5.1.11, the height of an algebra book dropped from the top of One World Trade Center is given by the equation \(h = 1776 - 16t^2\text{,}\) and we found that

when \(t=1\text{,}\) \(h=1760\)
when \(t=2\text{,}\) \(h=1712\)

Using function notation, these relationships can be expressed more concisely as \(f(1)=1760\) and \(f(2)=1712\text{,}\) which we read as "\(f\) of 1 equals 1760" and "\(f\) of 2 equals 1712." The values for the input variable, \(t\text{,}\) appear inside the parentheses, and the values for the output variable, \(h\text{,}\) appear on the other side of the equation. Remember that when we write \(y = f(x)\text{,}\) the symbol \(f(x)\) is just another name for the output variable.

Function Notation. True or False.
The notation \(f(t)\) indicates the product of \(f\) and \(t\text{.}\)
If \(y=f(x)\text{,}\) then \(f(x)\) gives the value of the input variable.
If \(Q\) is a function of \(M\text{,}\) we may write \(M=f(Q)\text{.}\)
In the equation \(d=g(n)\text{,}\) the letters \(d\text{,}\) \(g\text{,}\) and \(n\) are variables.
Answer: each statement is False.

Let \(F\) be the name of the function defined by the graph in Example 5.1.9, the number of hours of daylight in Peoria \(t\) days after January 1.
Use function notation to state that \(H\) is a function of \(t\text{.}\)
A) \(F=H(t)\)
B) \(H=F(t)\)
C) \(t=F(H)\)
D) \(H=t(F)\)
What does the statement \(F(15) = 9.7\) mean in the context of the problem?
A) The sun is 9.7 degrees above the horizon in Peoria on January 15.
B) The sun is above the horizon in Peoria for 15 hours on January 10.
C) The sun is above the horizon in Peoria for 9.7 hours on January 16.
Answers: \(H = F(t)\text{;}\) the sun is above the horizon in Peoria for 9.7 hours on January 16.
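The correspondence between inputs and outputs that function notation expresses can be mirrored directly in code. As a small illustration (Python, not part of the original text), the falling-book function of Example 5.1.11 becomes:

```python
# The falling-book height function from Example 5.1.11:
# h = f(t) = 1776 - 16 t^2, with t in seconds and h in feet.

def f(t):
    """Height (feet) of the book t seconds after it is dropped."""
    return 1776 - 16 * t**2

# Evaluating the function at specific inputs mirrors the notation
# f(1) = 1760 and f(2) = 1712 used in the text.
print(f(1))   # 1760
print(f(2))   # 1712
```

Just as in the notation \(f(1)=1760\text{,}\) the input value goes inside the parentheses and the function returns the corresponding output value.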
Use function notation to write the statement "\(L\) defines \(w\) as a function of \(p\text{.}\)"
A) \(L=w(p)\)
B) \(w=L(p)\)
C) \(p=L(w)\)
D) \(L=p(w)\)
Answer: \(w=L(p)\)

Subsection 5.1.6 Using Function Notation

Finding the value of the output variable that corresponds to a particular value of the input variable is called evaluating the function.

Let \(g\) be the name of the postage function defined by the table in Example 5.1.5 b. Find \(g(1)\text{,}\) \(g(3)\text{,}\) and \(g(6.75)\text{.}\)
According to the table,
when \(w=1\text{,}\) \(p=0.50\text{,}\) so \(g(1)=0.50\)
when \(w=3\text{,}\) \(p=0.80\text{,}\) so \(g(3)=0.80\)
when \(w=6.75\text{,}\) \(p=1.40\text{,}\) so \(g(6.75)=1.40\)
Thus, a letter weighing 1 ounce costs $0.50 to mail, a letter weighing 3 ounces costs $0.80, and a letter weighing 6.75 ounces costs $1.40.

We can also find the input (or inputs) corresponding to a given output. For example, if \(p=g(w)\) is the postage function, we solve the equation \(g(w)=0.65\) by finding all input values, \(w\text{,}\) that correspond to the output $0.65. According to the table in Example 5.1.5 b, any value of \(w\) greater than 1 but less than or equal to 2 is a solution.

When you exercise, your heart rate should increase until it reaches your target heart rate. The table shows target heart rate, \(r = f(a)\text{,}\) as a function of age.

\(a\)    20    25    30    35    40    45    50    55    60    65    70
\(r\)    150   146   142   139   135   131   127   124   120   116   112

Find \(f(25)\) and \(f(50)\text{.}\)
Find a value of \(a\) for which \(f(a) = 135\text{.}\)
Answers: \(f(25) = 146\text{,}\) \(f(50) = 127\text{;}\) \(a = 40\)

If \(n=f(a)\text{,}\) what are the input and output variables?
A) \(f\) is the output and \(n\) is the input
B) \(a\) is the input and \(f\) is the output
C) \(a\) is the input and \(n\) is the output
D) \(f(a)\) is the input and \(n\) is the output

To evaluate a function described by an equation, we simply substitute the given input value into the equation to find the corresponding output, or function value.
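A function defined by a table, such as the target-heart-rate table above, can be modeled as a simple lookup. The sketch below (an illustration in Python, not part of the original text) evaluates the function and also solves \(f(a)=135\) by searching for the matching input:

```python
# Target heart rate r = f(a) from the table above, stored as a dict.
# A dict is a natural model of a table-defined function: each input
# (age) maps to exactly one output (rate).
target_rate = {20: 150, 25: 146, 30: 142, 35: 139, 40: 135, 45: 131,
               50: 127, 55: 124, 60: 120, 65: 116, 70: 112}

# Evaluating the function: f(25) and f(50).
print(target_rate[25])   # 146
print(target_rate[50])   # 127

# Solving f(a) = 135: collect every input whose output is 135.
ages = [a for a, r in target_rate.items() if r == 135]
print(ages)              # [40]
```

Note that solving for an input may produce several answers (or none), while evaluating at an input always produces exactly one output, which is precisely the defining property of a function.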
The function \(H\) is defined by \(~H=f(s) = \dfrac{\sqrt{s+3}}{s}\text{.}~~\) Evaluate the function at the following values.
\(s=6\text{:}\) \(f(6)=\dfrac{\sqrt{6+3}}{6}= \dfrac{\sqrt{9}}{6}=\dfrac{3}{6}=\dfrac{1}{2}\text{.}\) Thus, \(f(6)=\dfrac{1}{2}\text{.}\)
\(s=-1\text{:}\) \(f(-1)=\dfrac{\sqrt{-1+3}}{-1}= \dfrac{\sqrt{2}}{-1}=-\sqrt{2}\text{.}\) Thus, \(f(-1)=-\sqrt{2}\text{.}\)

Complete the table displaying ordered pairs for the function \(f(x) = 5 - x^3\text{.}\) Evaluate the function to find the corresponding \(f(x)\)-value for each value of \(x\text{.}\)

\(x\)     \(f(x)\)
\(-2\)    \(f(-2)=5-(-2)^3=13\)
\(0\)     \(f(0)=5-0^3=5\)
\(3\)     \(f(3)=5-3^3=-22\)

Exercises 5.1.7 Problem Set 5.1

Exercise Group. For Problems 1–4, evaluate.
1. \(2x-x^2~~~\) for \(x=-4\)
2. \(\dfrac{2z-3}{z+2}~~~\) for \(z=\dfrac{1}{2}\)
3. \(\sqrt{36-(r+1)^2}~~~\) for \(r=3\)   Answer: \(\sqrt{20}\)
4. \(-t^3+3t^2~~~\) for \(t=-2\)

For Problems 5–8, solve.
5. \(4-5x-2x^2=1\)   Answer: \(-3, \dfrac{1}{2}\)
6. \(6(2x-8)^2=20\)
7. \(\dfrac{1}{2x-9}=3\)   Answer: \(\dfrac{14}{3}\)
8. \(5\sqrt{8+x}=20\)

9. \(x=h(v)=2v^2-3v+1\)
Evaluate \(h(-2)\text{.}\)
Solve \(h(v)=6\text{.}\)
Answers: input: \(v\text{,}\) output: \(x\text{;}\) \(v=-1, \dfrac{5}{2}\)

10. \(A=g(r)=750(1+r)^2\)
Evaluate \(g(0.04)\text{.}\)
Solve \(g(r)=874.80\text{.}\)

For Problems 11 and 12, evaluate the function.
11. \(F(x)=\dfrac{1-x}{2x-3}\)
\(F(0)\text{,}\) \(F(-3)\text{,}\) \(F\left(\dfrac{5}{2}\right)\text{,}\) \(F(9.8)\)
Answers: \(F(0)=\dfrac{-1}{3}\text{;}\) \(F(9.8)\approx -0.530\)
12. \(E(t)=\sqrt{t-4}\)
\(E(16)\text{,}\) \(E(4)\text{,}\) \(E(4.2)\)

13. Which of the following tables define the second variable as a function of the first variable? Explain why or why not.
\(x\) \(t\) \(-1\) \(2\) \(1\) \(-2\) \(y\) \(w\) \(1\) \(12\) \(x\) \(y\) \(s\) \(t\)

(e)
\(r\)    \(-4\)    \(-2\)    \(0\)    \(2\)    \(4\)
\(v\)    \(6\)     \(6\)     \(3\)    \(6\)    \(8\)

(f)
\(p\)    \(-5\)    \(-4\)    \(-3\)    \(-2\)    \(-1\)
\(d\)    \(-5\)    \(-4\)    \(-3\)    \(-2\)    \(-1\)

Answer: (b), (c), (e), and (f)

14.
(a)
Pressure (\(p\))    Volume (\(v\))
\(15\)              \(100.0\)
\(20\)              \(75.0\)
(b)
Frequency (\(f\))    Wavelength (\(w\))
\(5\)                \(60.0\)
\(40\)               \(7.5\)
(c)
Date      Temperature (\(T\))    Humidity (\(h\))
Jan. 1    \(34^{\circ}\)F        \(42\%\)
(d)
Year    Inflation rate (\(I\))    Unemployment rate (\(U\))
1972    \(5.6\%\)                 \(5.1\%\)
1974    \(10.1\%\)                \(4.9\%\)

15. The function described in Problem 14(a) is called \(g\text{,}\) so that \(v = g(p)\text{.}\) Find the following:
\(g(25)\)
\(x\) so that \(g(x) = 50\)

16. The function described in Problem 14(b) is called \(h\text{,}\) so that \(w = h(f)\text{.}\) Find the following:
\(h(20)\)
\(x\) so that \(h(x) = 10\)

For Problems 17–24, use the graph of the function to answer the questions.

17. The graph shows \(C=h(t)\text{,}\) where \(C\) stands for the number of customers (in thousands) signed up for a new movie streaming service, measured in months after their advertising campaign at \(t=0\) in January.
When did the service have 2000 customers? Write your answer with function notation.
How long did it take that number to double? How long did it take for the number to double again?
How many customers signed up between March and April (months 2 and 3)?
Answer: \(h(1)=2\)

18. The graph shows \(P\) as a function of \(t\text{.}\) \(P\) is the number of houses in Cedar Grove that have had solar panels installed \(t\) years after 2000.
When did 3500 houses have solar panels? Write your answer using function notation.
How many houses had solar panels in 2005? Write your answer using function notation.
The number of houses with solar panels in Cedar Grove seems to be leveling off at what number?
How many houses had solar panels installed between 2001 and 2004?
The graph shows the revenue, \(R=f(d)\text{,}\) a movie theater collects as a function of the price, \(d\text{,}\) it charges for a ticket. Estimate the revenue if the theater charges $12.00 for a ticket. What should the theater charge for a ticket in order to collect $1500 in revenue? Write your answers to parts (a) and (b) using function notation. For what values of \(d\) is \(R \gt 1800\text{?}\) Approximately $1920 $5 or $15 \(\displaystyle f(12) \approx 1920;~~ f(5)=1500,~f(15)=1500\) \(\displaystyle 7\lt d\lt 13\) The graph shows \(S=g(w)\text{.}\) \(S\) represents the weekly sales of a best-selling book, in thousands of dollars, \(w\) weeks after it is released. In which weeks were sales over $7000? In which week did sales fall below $5000 on their way down? For what values of \(w\) is \(S\gt 3.4\text{?}\) The graph shows the U.S. unemployment rate, \(U=F(t)\text{,}\) where \(t\) represents years. Give your answers to the questions below in function notation. (Source: U.S. Bureau of Labor Statistics) When did the unemployment rate reach its highest value, and what was its highest value? When did the unemployment rate fall to its lowest value, and what was its lowest value? Give two years in which the unemployment rate was 4.5%. \(\displaystyle F(1992) = 7.5\%\) \(\displaystyle F(2000) = 4\%\) \(\displaystyle F(1998) = 4.5\%,~ F(2001) = 4.5\%\) The graph shows the federal minimum wage, \(M\text{,}\) over the past five decades, adjusted for inflation to reflect its buying power in 2004 dollars. (Source: www.infoplease.com) Is \(M\) a function of \(t\text{?}\) Support your answer. What is the largest function value on the graph? Write your answer with function notation, and explain what it means in this problem. Give two years in which the minimum wage was worth $8 in 2004 dollars. Does this fact mean that \(M\) is not a function of \(t\text{?}\) Why or why not?
The bar graph shows the percent of Earth's surface that lies at various altitudes or depths below the surface of the oceans. (Depths are given as negative altitudes.) (Source: Open University) Read the graph and complete the table. Altitude (km) Percent of Earth's surface \(-7\) to \(-6\) \(\) \(-1\) to \(0\) \(\) \(0\) to \(1\) \(\) What is the most common altitude? What is the second most common altitude? Approximately what percent of the Earth's surface is below sea level? The height of Mt. Everest is 8.85 kilometers. Can you think of a reason why it is not included in the graph? Energy is necessary to raise the temperature of a substance, and it is also needed to melt a solid substance to a liquid. The table shows data from heating a solid sample of stearic acid. Heat was applied at a constant rate throughout the experiment. Time (minutes) \(0\) \(0.5\) \(1.5\) \(2\) \(2.5\) \(3\) \(4\) \(5\) \(6\) \(7\) \(8\) \(8.5\) \(9\) \(9.5\) \(10\) Temperature (\(\degree\)C) \(19\) \(29\) \(40\) \(48\) \(53\) \(55\) \(55\) \(55\) \(55\) \(55\) \(55\) \(64\) \(70\) \(73\) \(74\) Did the temperature rise at a constant rate? Describe the temperature as a function of time. Graph the temperature as a function of time. What is the melting point of stearic acid? How long did it take the sample to melt? The number of compact cars that a large dealership can sell at price \(p\) is given by \begin{equation*} N(p) = \dfrac{12,000,000}{p} \end{equation*} Evaluate \(N(6000)\) and explain what it means. As \(p\) increases, does \(N(p)\) increase or decrease? Support your answer with calculations. Solve the equation \(N(p)=400\text{,}\) and explain what it means. \(N(6000) = 2000\text{:}\) 2000 cars will be sold at a price of $6000. 30,000. At a price of $30,000, they will sell 400 cars.
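The dealership problem is directly checkable by computation: evaluating \(N(6000)\), comparing values to see the direction of change, and solving \(N(p)=400\) by rearranging to \(p = 12{,}000{,}000/400\). A quick numerical check (Python is used here for illustration only):

```python
# Demand function from the problem: N(p) = 12,000,000 / p
def N(p):
    return 12_000_000 / p

print(N(6000))            # 2000.0 cars sold at a price of $6000
print(N(8000) < N(6000))  # True: N(p) decreases as p increases
# Solving N(p) = 400 amounts to p = 12,000,000 / 400:
print(12_000_000 / 400)   # 30000.0
```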
The distance, \(d\text{,}\) in miles that a person can see on a clear day from a height, \(h\text{,}\) in feet is given by \begin{equation*} d = G(h) = 1.22\sqrt{h} \end{equation*} Evaluate \(G(20,320)\) and explain what it means. As \(h\) increases, does \(d\) increase or decrease? Support your answer with calculations. Estimate the height you need in order to see 100 miles. Write your answer with function notation.
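As with the previous problem, the answers can be checked numerically; inverting \(d = 1.22\sqrt{h}\) for a given distance gives \(h = (d/1.22)^2\). A short check (Python, for illustration):

```python
import math

# Viewing distance in miles from a height of h feet: G(h) = 1.22 * sqrt(h)
def G(h):
    return 1.22 * math.sqrt(h)

print(round(G(20320), 1))        # 173.9 miles from 20,320 ft
# Inverting d = 1.22 * sqrt(h) for d = 100 gives h = (100 / 1.22) ** 2:
print(round((100 / 1.22) ** 2))  # 6719 feet
```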
All of the discussion around this question and in the comments below this answer, about the local effects of "the expansion of space" (Metric expansion of space) or Cosmological redshift or just simply the Hubble constant has got me wondering about the feasibility of detecting or even measuring the effect directly using spacecraft as a probe of distance. Most (if not all; see edit below) of the data so far come from interpreting measured shifts of spectral features. These include atomic and molecular emission and absorption lines, as well as broader features including Blackbody radiation and ad-hoc averages of stellar populations. They all have one thing in common - they are interpreted measurements of one-way light from things really, really far away. Since we can now actually put complex instruments on the order of ten billion kilometers away and still exchange data with them and perform measurements on them, it's not unreasonable to start thinking of a controlled experimental measurement of the effect. For example, this might be done by Doppler ranging using reflection, or better yet reception and simultaneous rebroadcast by a spacecraft. My question is - have there been institutionally proposed, or peer-reviewed and published discussions of the detection or measurement of the Metric expansion of space using spacecraft as a probe of distance? edit: I've changed the title to be plural - I don't want to exclude things like (just for example) compensation for the gradient in local gravity field for want of an "s". (imagine six spacecraft exploring +/- x, y, z) Note - I am not asking if you think it can or can't be done. I'd like to read serious discussions or proposals on the subject that quantitatively address feasibility. update: This was shared with me in this answer, and it's a good example of something quantitative, though it's data rather than explanation.
The main focus of the paper is experimental testing of the Equivalence Principle (EP; gravitational mass vs inertial mass) using the Sun, Earth and Moon. It seems to be in the negative as far as the present question is concerned. Expansion seems to be quite readily "measurable", and their results seem to show it to be very small and consistent with zero locally. It reports on Laser ranging of the retro-reflector arrays on the moon. In 34 years, the distance between the earth and moon has followed the best predictions to within a scatter of about 2 cm. If my arithmetic is correct, expansion of space at the Hubble constant rate would be about 82 cm over 34 years. While this doesn't help me understand why, it certainly suggests in a peer-reviewed, quantitative way that the expansion might also not be seen by ranging spacecraft in distant orbit around the sun. APOLLO 11: GAO Laser Ranging Facility at the Goddard Spaceflight Center $\begingroup$ Well, you apparently don't want my answer, so I'll just put it in this comment. It can't be done. The expansion is observable only on scales rather larger than galaxies, since it is otherwise completely swamped by the gravitational interactions within galaxies, or even between galaxies in clusters. $\endgroup$ – Mark Adler $\begingroup$ @MarkAdler can you help find some place where this conclusion has been worked out and discussed quantitatively? Sometimes "can't be done"s are really "don't know how yet"s in disguise. $\endgroup$ $\begingroup$ I doubt that you'll find the concept of a direct spacecraft measurement of expansion discussed anywhere, since it can be dismissed within the first few seconds of discussion. $\endgroup$ $\begingroup$ @uhoh I can only second MarkAdler's opinion. This is a question about astrophysics, not engineering. You'd only feel the additional Hubble-acceleration measurably at a significant distance from our galaxy.
And even then it would be challenging to distinguish this effect from the regular galactic potential and the dark matter halo of the galaxy. This is also why there won't be any serious case study on such a concept. $\endgroup$ – AtmosphericPrisonEscape $\begingroup$ @uhoh: Space does not expand locally. You can see this, for instance, in the fact that our Galaxy is held together by its own gravity. So there will be no measurable effect of space-time expansion. "Measurable" here means that you can distinguish the signal you search for from other signal sources that accelerate your spacecraft. And as basic GR tells us, there is no acceleration due to metric expansion. This is why the Hubble law was discovered relatively late and why we need distant objects to measure it at all. $\endgroup$ Here are a couple of papers on this topic: Cooperstock, http://arxiv.org/abs/astro-ph/9803097v1 Carrera, http://arxiv.org/abs/0810.2712 I also have a brief treatment that may be more accessible (but still requires knowledge of general relativity) in my GR book, section 8.2.10. The basic result for a system orbiting with angular frequency $\omega_0$ is that cosmological expansion causes the orbit to expand according to the equation $\frac{\dot{r}}{r_0} = \omega_0^{-2}\frac{d}{dt}\left(\frac{\ddot{a}}{a}\right)$, where $a$ is the cosmological scale factor, and dots indicate time derivatives. Note that the effect depends on the acceleration of cosmological expansion, so it's not something you can just interpret naively as if space expands and therefore the system gets bigger. For purposes of order-of-magnitude estimation, we can take the second factor on the right-hand side to be the cube of the Hubble constant. For a small system like the earth-moon system, $\omega_0$ is high, and therefore the cumulative effect is incredibly small.
Plugging in numbers, I find that over the period of time since the Apollo missions, the contribution of cosmological expansion to the recession of the moon is about $10^{-22}$ cm. Cooperstock has a picturesque way of describing the smallness of such effects, which is that the expansion of the earth's orbit since the age of the dinosaurs is less than the diameter of an atomic nucleus. Even if lunar laser ranging (LLR) had the kind of precision needed in order to measure this effect, it would never see it, because the distance between the earth and the moon is growing for reasons of tidal dynamics. This is the effect that really is seen in LLR, and it's many, many orders of magnitude greater than the cosmological effect. $\begingroup$ The tidal effects are discussed in the paper and are part of the model. The residual is what's left over after the best estimates for tidal effects are taken into account. The final result is essentially zero, with an uncertainty of a few centimeters, which is - I believe - in agreement with what you are saying. Thanks for your answer! $\endgroup$ $\begingroup$ OK your answer has stood the test of time - almost a week! Thanks. $\endgroup$
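The two figures quoted in this thread (the question's naive ~82 cm stretch over 34 years, and the answer's ~$10^{-22}$ cm dynamically consistent estimate) can be reproduced to order of magnitude from a few standard constants. A rough sketch; the constant values are standard reference numbers I am supplying, not taken from the thread:

```python
import math

# Standard reference values (my assumptions, not from the thread):
H0 = 70e3 / 3.086e22    # Hubble constant: 70 km/s/Mpc converted to 1/s
r_moon = 3.844e8        # mean Earth-Moon distance in metres
T = 34 * 3.156e7        # 34 years in seconds

# Naive estimate: treat the Earth-Moon distance as stretching at rate H0 * d.
naive = H0 * r_moon * T                  # metres
print(naive)                             # ~0.9 m, same ballpark as the ~82 cm above

# Dynamically consistent estimate from the answer's formula,
# rdot/r0 ~ H0**3 / omega0**2 (taking d/dt(addot/a) ~ H0**3):
omega0 = 2 * math.pi / (27.3 * 86400)    # lunar orbital angular frequency, 1/s
consistent = (H0**3 / omega0**2) * r_moon * T   # metres
print(consistent)                        # ~1e-24 m, i.e. of order 1e-22 cm
```

The twenty-plus orders of magnitude between the two numbers is the whole point of the answer: naively multiplying a distance by the Hubble rate vastly overstates the effect on a gravitationally bound orbit.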
Effects of Nutritional State on Physiological Responses and Heat Production During Exercise of the Animal - a Review Kasa, I Wayan 331 This review was conducted to analyse the effect of nutrition on physiological responses and heat production of domestic animals during exercise. Overall, it can be concluded that the major factors likely to affect heat production in domestic animals during exercise (including work load) are body weight, speed, the gradients attempted, feed intake, ambient conditions (including temperature and solar radiation) and altitude. On nutrition-exercise interactions, for example, it has been concluded that animals on better quality diets produce more heat than those on poorer quality ones, and that glucose as well as acetate are metabolized as energy sources during both rest and exercise. Feed Intake, Nutrient Utilization and Growth Rate of Jamunapari Goats Fed Sundried Leucaena leucocephala Srivastava, S.N.L.;Sharma, K. 337 In a feeding trial, Jamunapari male kids (18) of about 4 months of age were equally divided into two groups of nine animals each. Goats in the experimental group were fed sun-dried pelleted Leucaena leucocephala leaves and those in the control group were offered a conventional diet without Leucaena leaves as per Kearl (1982) recommendations for a period of 6 months. Daily dry matter intake (DMI)/100 kg BW was $3.13{\pm}0.04kg$ in the Leucaena group and $3.30{\pm}0.05kg$ in the control. There were significant (p < 0.01) differences in the apparent digestibilities of DM, OM, CP, EE, CF and NFE, which were lower in the Leucaena group. Contents of digestible crude protein (DCP) and total digestible nutrients (TDN) were 11.40 and 52.20%, respectively, in the Leucaena group and 14.04 and 66.10%, respectively, in the control. The nitrogen in the Leucaena group was not as well utilized as in the control, though kids were in positive nitrogen balance in both the groups.
The average daily weight gain of kids on pelleted Leucaena was $29.95{\pm}2.60g$ as against $42.09{\pm}3.24g$ observed in the control. The mean DMI/kg LW gain was significantly (p < 0.01) higher in the Leucaena group ($14.70{\pm}0.78kg$) as compared to the control ($11.55{\pm}0.46kg$). The Hb, BUN, SGOT and SGPT concentrations were statistically similar in both the groups. Histopathological examination of thyroid gland of goats sacrificed at the end of experiment did not reveal any signs of colloidal goitre associated with mimosine toxicity. No significant pathological alterations were observed in vital organs irrespective of dietary treatment. Sundried, pelleted Leucaena foliage appears to be a promising potential feed for growing goats without any significant deleterious effect. Feeding Behaviour and Forage Nutrient Utilization by Goats on a Semi-Arid Reconstituted Silvipasture Sharma, K.;Saini, A.L.;Singh, Nawab;Ogra, J.L. 344 Seasonal variations in the feeding behaviour of Jamunapari and Barbari goat breeds and their utilization of browse and grass nutrients were evaluated in a promising 3-tier (Leucaena leucocephala-Dichrostachys nutan-Cenchrus ciliaris) reconstituted pasture during summer, rainy and winter seasons of the years 1987 and 1988. A distinct diurnal pattern of feeding was observed with both the breeds. Jamunapari goats spent significantly more time foraging during winter season (352.0 min) followed by summer (306.0 min) and least in rainy season (277.0 min). Though no significant difference was observed in the relative time spent by Barbari goats on grazing activities during summer and winter seasons, they spent significantly more (p < 0.05) time during the rainy season as compared to the other two seasons. The preference of grazing goats for certain plant species in relation to others was evident with distinct seasonal and breed variations. DM intake (g/kg $BW^{0.75}$) varied significantly (p < 0.05) from season to season. Among the browse, L.
leucocephala was preferred over D. nutan irrespective of breed over the seasons. There was no breed difference in DM intake, proximate composition or nutrient digestibility of ingested herbage. The available nutrient content of ingested forage was found sufficient to meet the nutrient requirements of adult goats for maintenance (NRC, 1981). The reconstituted 3-tier pasture dominated by plant species like L. leucocephala and Cenchrus species appears to have great potential to sustain the nutrient requirement of goats without adverse seasonal fluctuations in pasture quality. A Comparison of Ammonia and Preformed Protein as a Source of Nitrogen for Microbial Growth in the Rumen of Sheep Given Oaten Chaff Kanjanapruthipong, J.;Leng, R.A. 351 Microbial growth efficiency in the rumen was studied in sheep given hourly 31.25 g oaten chaff with either 0.31 and 0.88 g urea or 1.88 and 5.63 g casein (exp. 1) and 33.33 g oaten chaff with 1.04 g casein or 0.3, 0.6 and 0.9 g urea or the mixture of the casein and urea (exp. 2). Concentrations of ruminal fluid ammonia increased with increasing nitrogenous supplements. Organic matter digestibility in sacco in the rumen was not different irrespective of N sources. Isoacids and valeric acid increased with increasing ingested casein but decreased with increasing urea intake. Peptide and amino acid pools in ruminal fluid increased with increasing ammonia concentrations (exp. 2), suggesting that proteolytic activity and transportation of peptides and amino acids across the microbial membrane of rumen microbes may be regulated by the metabolite mechanism (intracellular amino acids and $NH_4{^+}$, respectively). Densities of total viable and cellulolytic bacteria in ruminal fluid increased with increasing ammonia levels but that of small Entodinia decreased. The density of fungal sporangia growing on oat leaf blades decreased with increasing ammonia concentrations but appeared to remain constant in the presence of casein.
Efficiency of net microbial cell synthesis was 15-28% higher when ammonia concentrations increased from 100 to above 200 mg N/l regardless of N sources. In conclusion, supplementation of preformed protein had no effect on rumen digestion and microbial growth efficiency. This could not be accounted for by its effect on ruminal fluid ammonia. Increased microbial growth efficiency with increasing ammonia levels may be due to a reduction in the turnover of microbial cells within the rumen. Genetic Trend for Growth in a Closed Indian Herd of Landrace × Desi Crossbreds Gaur, G.K.;Ahlawat, S.P.S.;Chhabra, A.K.;Paul, Satya 363 The objective of this study was to estimate the genetic and phenotypic trend for growth in a closed herd of Landrace $\times$ desi crossbreds. The possibility of early selection of boars was also investigated in order to reduce the generation interval and thus enhance response per year in selection programmes. The data originated from Livestock Production Research (Pigs), Indian Veterinary Research Institute (IVRI), Izatnagar (UP), India - a unit of the All India Coordinated Research Project on Pigs (AICRP on Pigs). Data consisted of 891 crossbred piglets, progeny of 29 boars. The piglets were born in 132 parities of 72 sows over 8 years from 1987 to 1994. Records on weight at birth, at 2-week intervals up to 8 weeks of age (W1, W2, ${\cdots}$ W8) and at the 16th week (W16) were used in this investigation. BLUP estimates of the sires were computed. The breeding value of each sire was estimated as twice the sire and sire group solutions. Phenotypic trend was estimated as the regression of weight performance on year. Genetic trend was computed by estimating the regression of breeding value of sires on time. Average body weights ranged from 0.92 kg (W1) to 18.95 kg (W16) and showed a continuous increase over age. Heritabilities of the weights at the 4th and 6th week were medium (0.29 and 0.14). The remaining weights were highly heritable.
Both product-moment and rank correlations between breeding values for W6 and W16 were high (0.68 and 0.70). This shows that sire selection for W6 can be successfully implemented in order to achieve sufficient genetic improvement in growth. Phenotypic trend was positive at all ages. The phenotypic regression coefficient ranged from 0.02 kg at birth to 0.40 kg at 16 weeks. Genetic trend was also positive. The regression coefficients of average breeding value of sires on time showed a range of 1.471 kg (0.021 to 1.492 kg) for different weights. These coefficients were significant and higher than their corresponding phenotypic regression coefficients. Plasma Hormones, Blood Metabolites, Milk Yield and Composition in Early Lactation of Buffaloes Treated with Bromocryptine Saha, A.;Singh, M. 368 The study was conducted on six multiparous Murrah buffaloes which were earlier artificially induced into lactation. During the experimental period of 15 days, buffaloes were managed in a loose housing system. All the buffaloes were administered a single injection of bromocryptine (@ $100{\mu}g/kg$ body weight) subcutaneously in the neck region at 08:30 A.M., 50 days postpartum (early lactation). Blood samples were collected from four buffaloes for a period of 5 days before the administration of bromocryptine, i.e. on days -5, -4, -3, -2, -1, on the day of treatment (day 0) and thereafter daily for a period of 9 days, i.e. 1, 2, 3, 4, 5, 6, 7, 8 and 9, to determine the hormones and blood metabolites. Homogeneous milk samples from all the buffaloes were collected at morning and evening milkings on days coinciding with the days of blood sampling for analysis of milk constituents. Administration of bromocryptine resulted in a significant inhibition of plasma prolactin within 24 hrs of treatment, but the response in all the buffaloes was not uniform. The effect of bromocryptine on plasma prolactin lasted for 1-4 days but cortisol concentrations were not altered.
Administration of bromocryptine affected neither blood glucose nor plasma non-esterified fatty acid concentrations. Irrespective of the level of milk production from different buffaloes, there was no effect of bromocryptine on milk yield, which indicated that prolactin is not required for milk secretion during early lactation in buffaloes. Milk constituents such as fat, protein and lactose were also not affected, possibly because bromocryptine had no effect on milk yield. Effects of Nutritional Level on Digestive Enzyme Activities in the Pancreas and Small Intestine of Calves Slaughtered at Same Body Weight Wang, X.B.;Ogawa, T.;Suda, S.;Taniguchi, K.;Uike, H.;Kumagai, H.;Mitani, K. 375 Six Holstein heifer calves weaned at 45 days of age were randomly allocated into high daily gain (1.1 kg/d, HDG) and low daily gain (0.56 kg/d, LDG) groups, and were slaughtered at 170 kg of live weight. Energy intake level in the feeding period was 2.4 $\times$ maintenance in 105 days for HDG and 1.4 $\times$ maintenance in 216 days for LDG calves. Total length of the small intestine was identical between groups, but both weights of the pancreas and of the small intestinal mucosa were greater (p < 0.01) for HDG calves. Alpha-amylase, lipase, proteinase, and trypsin activities of the whole pancreas were higher (p < 0.05) in HDG calves. Disaccharidase activity of the whole small intestinal mucosa was also higher (p < 0.10) for HDG than for LDG calves. However, the enzymatic activities, expressed per gram or per protein of the pancreas and the small intestinal mucosa, were not affected (p > 0.10) by the plane of nutrition. These results suggest that the digestive enzyme activity in the small intestine varies primarily with the weight of tissues synthesizing the enzyme. Effects of a Mineral-Salt Laxative in Lactation Diets for Primiparous Sows and Their Litters Kim, I.H.;Hancock, J.D.;Kim, C.S.
381 Twenty-three crossbred (Yorkshire $\times$ Duroc $\times$ Hampshire $\times$ Chester White) primiparous sows were used to evaluate the effects of the mineral-salt laxative in lactation diets on sow and litter performance. The sows were fed a sorghum-extruded soybean-based diet with .85% lysine, .90% Ca, .80% P, and 3.2 Mcal ME/kg. Sow body weight (p > .54) and backfat loss (p > .61), average daily feed intake (p > .42), and litter weight gain (p > .74) were not affected by the mineral-salt laxative in the diet. However, survivability of piglets was greater (p < .06) for sows with the mineral-salt laxative in their diet and, thus, number of pigs weaned was increased. As expected, fecal moisture was increased (p < .09) in sows fed the mineral-salt laxative. Apparent digestibilities of DM, N, and GE were not affected by treatment (p > .26). After weaning, stomachs were collected and scored for ulcers and keratinization using a scoring system of 0 = normal to 3 = severe. Severity of ulceration and keratinization was not significantly affected by treatment (row mean scores differ test p > .25), but scores for sows fed the diet containing the mineral-salt laxative were numerically lower than sows fed the control diet. Thus, our data indicate that sows fed the mineral-salt laxative during lactation had improved piglet survivability, greater fecal moisture, and tended to have fewer lesions in the mucosa of the stomach. Effect of Replacing Til Oil Cake by Poultry Excreta on Growth and Nutrient Utilization in Growing Bull Calves Khan, M.J.;Shahjalal, M.;Rashid, M.M. 385 An experiment was conducted for 90 days using 9 growing bull calves (initial LW 71.5 kg) to investigate the effect of replacing til oil cake by poultry excreta on growth performance and nutrient utilization. The animals were randomly divided into three groups. 
The control group A was fed a conventional concentrate mixture containing til oil cake, rice bran, wheat bran, bone meal and common salt, and the groups B and C were offered diets in which 50 and 100 percent of the til oil cake of diet A were replaced by dried poultry excreta. All the animals were fed urea-soaked rice straw ad libitum and concentrate mixture was given at the rate of 10 g per kg LW. Towards the end of the growth trial a conventional digestibility trial was conducted. Average daily live weight gain was 216, 211 and 188 g for animals fed diets A, B and C, respectively. Average daily dry matter intake in groups A, B and C was 3.42, 3.37 and 3.30 kg per 100 kg LW, respectively. The daily live weight gain and dry matter intake did not differ significantly (p > 0.05) among the dietary groups. The digestibility coefficient for DM or NFE was almost similar but that for OM, CP, CF and EE was significantly different (p < 0.01) among the dietary groups. TDN percent in diets A, B and C was 57.3, 53.3 and 50.8, respectively, and the difference was significant (p < 0.01). Animals in all the groups were in a state of positive nitrogen balance. The results indicated that til oil cake can be replaced by dried poultry excreta in bull calf rations. Effects of Dietary Protein and Energy on Growth Performance and Muscle Composition in Broilers Treated with Clenbuterol Hamano, Y.;Hamada, Y.;Miyahara, M.;Kobayashi, S.;Terashima, Y. 391 The present study was conducted to examine the effects of dietary protein (20, 22, 24%) with a constant protein-to-energy ratio on clenbuterol-induced performance in broilers. The protein-to-energy ratio was based on the adequate level (22% protein, 3,100 kcal of energy). Female broiler chickens were used for a $3{\times}2$ factorial arrangement and fed diets with or without 1 ppm clenbuterol from 14 to 32 days of age. Feed efficiency improved with increasing dietary protein level, regardless of clenbuterol treatment.
The dietary clenbuterol increased weights of breast and leg muscles (gastrocnemius and peroneus longus), and clenbuterol markedly reduced protein content of leg muscles in chickens fed the 20% protein diet, but did not in chickens fed the 22 and 24% protein diets. Feeding the 24% protein diet with clenbuterol improved the protein accretion (peroneus longus) by 8.4%. Clenbuterol decreased DNA content and increased the protein/DNA ratio in breast muscle regardless of dietary protein intake. Clenbuterol had no effect on RNA content in either breast or leg muscles. The present results demonstrated that various protein levels which retain the same protein-to-energy ratio in the diet markedly alter the protein accretion induced by ${\beta}$-agonist in broilers. Development of In Vitro Produced Buffalo (Bubalus bubalis) Embryos in Relation to Time Chauhan, M.S.;Singla, S.K.;Palta, P.;Manik, R.S.;Tomer, O.S. 398 The objective of the present study was to examine the developmental rates, and the stage of development in relation to time since fertilization, of in vitro produced buffalo embryos. Buffalo cumulus-oocyte complexes obtained from slaughterhouse ovaries were matured and fertilized in vitro. The fertilized oocytes (n = 248) were then co-cultured with buffalo oviductal epithelial cells and evaluated for the developmental stages on Days 2, 4, 6, 7, 8, 9 and 10 post-insemination. The peak of 4-cell stage embryos was observed on Day 2 (63.7%), whereas Day 4 was marked by peaks of 6-8-cell stage embryos (20.9%) and 16-cell stage embryos to early morulae (50%). On Days 6, 7, 8, 9, and 10 post-insemination, 49.5, 48.3, 38.3, 33.8 and 33.4% of embryos were found to be at morula/compact morula stages, 8.8, 12.5, 25.4, 6.0 and 1.2% at early blastocyst/blastocyst stages, 0, 6.8, 7.2, 15.3 and 2.0% at expanded blastocyst stage and 0, 1.6, 4.8, 19.3 and 38.5% at hatching/hatched blastocyst stages, respectively.
The peaks of early blastocyst/blastocyst, expanded blastocyst and hatching/hatched blastocyst stages were observed on Days 8, 9 and 10, respectively. The percentages of oocytes which initially became arrested and subsequently degenerated were 3.6, 4.8, 10.4, 14.5, 21.3 and 24.5% on Days 4, 6, 7, 8, 9 and 10 post-insemination, respectively. Influence of the Dominant Follicle on the Superovulatory Response in Cattle Manik, R.S.;Singla, S.K.;Palta, P.;Madan, M.L. 404 Nine cows were superovulated by administration of 8 injections of Folltropin each (2.5 ml/injection, 1.75 mg/ml) i.m., spread over 4 days, beginning on Day 10 of the oestrous cycle, and 30 and 20 mg prostaglandin $F_{2{\alpha}}$ were given along with the 5th and 6th injections of Folltropin, respectively, to induce luteolysis. The animals were artificially inseminated 48, 60 and 72 h after the first prostaglandin $F_{2{\alpha}}$ injection. The number of corpora lutea was recorded by palpation per rectum and by ultrasonography on Day 6 (Day 0 = day of oestrus). The ovaries were examined daily by ultrasonography on Days 3-9 of the oestrous cycle to follow the growth and regression of the largest follicle, which was considered the morphologically dominant follicle. The animals were classified into two groups depending upon the presence (n = 4) or absence (n = 5) of a dominant follicle. There was a high correlation (r = 0.97, p < 0.001) between the number of corpora lutea observed by palpation per rectum and that determined by ultrasonography. Mean (${\pm}SEM$) number of corpora lutea determined by ultrasonography ($11.20{\pm}3.71$ vs $3.25{\pm}0.75$) and by palpation per rectum ($10.40{\pm}3.91$ vs $2.25{\pm}0.75$) was significantly higher (p < 0.05) in the nondominant group compared to that in the dominant group.
There was no difference in the numbers of follicles 2-3 mm ($13.80{\pm}4.49$ vs $8.00{\pm}1.08$), 4-6 mm ($7.00{\pm}1.87$ vs $3.50{\pm}1.33$), and the total number of follicles ${\geq}2mm$ ($22.00{\pm}5.95$ vs $12.50{\pm}1.26$) between the two groups, one day prior to initiation of superovulation. There was, however, a significant (p < 0.01) positive correlation of the number of corpora lutea with the numbers of follicles 2-3 mm (r = 0.83), 4-6 mm (r = 0.80) and the total number of follicles ${\geq}2mm$ (r = 0.89) observed one day prior to initiation of superovulation. The results of this study indicate that the presence of a dominant follicle adversely affects the superovulatory response in cattle. Effect of Proportion of Recorded Cows Inseminated by Young A. I. Bulls on Genetic Improvement in Japanese Holstein Population Terawaki, Y.;Shimizu, H.;Fukui, Y. 410 The effects of the proportion of cows inseminated by young A. I. bulls on genetic improvement in the Japanese Holstein population were examined using a simulation technique. The proportion of recorded cows inseminated by young A. I. bulls was assumed to be from 10% to 100% of the total number of recorded cows. The expected total genetic improvement was estimated for all cows and for recorded and non-recorded cows. The effects of the above were remarkable in schemes in which proven sires were used to produce recorded and non-recorded cows for a limited time. Also, the increase in the rates for the expected total genetic improvement was larger when the proportion of recorded cows that were inseminated by young A. I. bulls was about 10% to 40%. When the expected total genetic improvement was estimated for the entire population, we found that the highest values were in a range of about 40 to 60% of recorded cows that were inseminated by young A. I. bulls. On the other hand, the expected total genetic improvement that was estimated only in recorded cows dramatically decreased for more than 40% of the recorded cows.
The results of this study showed that the optimal proportion of recorded cows inseminated with young A. I. bulls should be about 30% in the Japanese Holstein population. Hypophyseal and Gonadal Response to GnRH in Buffalo Heifers (Bubalus bubalis) Singh, C.;Madan, M.L. 416 The objective of this study was to investigate the responsiveness of the hypophysis and gonads to synthetic GnRH among noncycling Murrah buffalo heifers at 24 months of age. The plasma FSH, LH, estradiol and progesterone levels were measured in blood samples collected from 1 hour before and up to the 18th day subsequent to the administration of GnRH ($200 {\mu}g$) or saline (2 ml). The pretreatment levels of plasma FSH, LH, estradiol and progesterone among GnRH treated heifers (N = 6) were $11.55{\pm}0.57ng/ml$, $0.68{\pm}0.06ng/ml$, $19.84{\pm}0.82pg/ml$ and $0.45{\pm}0.07ng/ml$ respectively. A quick elevation of FSH (p < 0.01) and LH (p < 0.05) within 5 min of GnRH administration was observed in all the heifers. The peak FSH ($74.97{\pm}18.63ng/ml$) and LH ($3.09{\pm}0.54ng/ml$) levels were obtained at 30 min after GnRH administration. Elevated levels of plasma estradiol on the 5th to 18th day, FSH on the 7th to 9th day (n = 3) and progesterone on the 13th to 18th day (n = 2) after GnRH injection were observed. The study indicates that gonads of buffalo heifers at 24 months of age are responsive to GnRH-induced gonadotropin release for folliculogenesis and luteal tissue formation. DNA Polymorphisms of κ-Casein, β-Lactoglobulin, Growth Hormone and Prolactin Genes in Korean Cattle Chung, E.R.;Kim, W.T.;Lee, C.S. 422 The gene and genotypic frequencies of ${\kappa}$-casein (${\kappa}$-CN), ${\beta}$-lactoglobulin (${\beta}$-LG), growth hormone (bGH) and prolactin (bPRL) loci in Korean cattle were investigated using PCR-RFLP analyses. Genomic DNA samples were obtained from 290 cows and 30 AI bulls. 
In both cows and bulls, the most predominant genotypes of ${\kappa}$-CN, ${\beta}$-LG, bGH and bPRL loci were AB, BB, AA and AA, respectively. The frequencies of A and B alleles for the ${\kappa}$-CN locus were .612 and .388 for cows and .567 and .433 for bulls. The respective frequencies of A and B alleles for the ${\beta}$-LG locus were .153 and .847 in cows and .217 and .783 in bulls. The frequencies of A and B alleles for the bGH locus were .769 and .231 in cows and .784 and .216 in bulls, respectively. The frequencies of A and B alleles for the bPRL locus were .678 and .322 for cows and .767 and .233 for bulls. Differences in frequencies of these alleles between cows and bulls were not significant at any of the loci examined. If the DNA polymorphisms of these candidate genes are associated with economically important traits, they could serve as genetic markers for genetic improvement in future marker-assisted selection programs in Korean cattle. Effects of Chromium Picolinate on In Vitro Lipogenesis and Lipolysis in Adipose Tissue and Protein Synthesis in Liver Tissue of Pigs Choi, Y.J.;Kim, H.G.;Cho, J.S.;Chung, I.B.;Kim, Y.H.;Han, I.K. 428 The effects of chromium picolinate supplementation in pig diet were evaluated by measuring the in vitro lipogenic and lipolytic activities in adipose tissue and the protein synthetic activity in liver acinar cells in culture. Thirty-two male and thirty-two female pigs were randomly assigned to one of four dietary groups: Control, 100 ppb, 200 ppb, and 400 ppb of Cr in the form of picolinate. Chromium picolinate supplementation increased (p < 0.01) the in vitro lipolytic activity in adipose tissue of pigs, but had no effects on lipogenesis. The effect of chromium picolinate on lipolytic activity was greater in female pigs than in male pigs. The results from the studies with the liver acinar cells in culture indicated that chromium picolinate supplementation increased protein synthetic activity (p < 0.05). 
It was observed through this experiment that chromium picolinate functions not only in fat degradation but also in protein synthesis and retention. Acid-Soil and Psyllid Tolerance of Interspecific Hybrids of Leucaena in Malaysia Vadiveloo, J. 434 Seven hybrid lines of Leucaena leucocephala $\times$ L. diversifolia and two control lines of L. leucocephala were compared for their adaptation to acid soils and tolerance to damage by the psyllid, Heteropsylla cubana, at four locations over two years in Peninsular Malaysia. Primary data on leaf composition and in vitro digestibility (nutrition variables) and secondary data on plant height, stem girth and psyllid damage (agronomy variables) were the measures of performance. Cluster solutions of the nine lines were different within locations, between locations and between years for nutrition and agronomy variables. Controls and hybrids did not cluster separately. Principal component scores of the nine lines gave rank orders which were different by location and by year. No performance trend could be detected between hybrids and controls. The conclusion is that nutritional and agronomic characteristics in Leucaena are independent, soil composition and weather did not consistently affect performance, and evidence is inconclusive as to the benefits of interspecific crossing with L. diversifolia. An Analytical Approach to Sire-by-Year Interactions in Direct and Maternal Genetic Evaluation Lee, C. 441 The negative direct-maternal genetic correlation $(r_{dm})$ for weaning weight is inflated when data are analyzed with a model ignoring sire-by-year interactions (SY). An analytical study investigating the consequences of ignoring SY was undertaken. The inflation of the negative correlation could be due to a functional relationship of the design matrices for additive direct and maternal genetic effects to that for sire effects, within which SY effects were nested. 
It was proven that the maternal genetic variance was inflated by the amount of reduction for sire variance; the direct genetic variance was inflated by four times the change for maternal genetic variance; and the direct-maternal genetic covariance was deflated by twice the change for maternal genetic variance. The findings agreed with the results of previous studies. Semen Quality Assessment of Local Katjang and Cross-Bred (Katjang × German) Bucks Noran, A.M.;Mukherjee, T.K.;Abdullah, R. 445 Semen quality was compared between the local Katjang and the cross-bred (local Katjang ♀ ${\times}$ German Fawn ♂) bucks. There were no significant genotypic differences in the semen characteristics of concentration (first ejaculate: $6.19{\pm}1.30$ versus $6.33{\pm}1.40{\times}10^9/ml$; second ejaculate: $5.82{\pm}1.10$ versus $5.68{\pm}1.45{\times}10^9/ml$, for Katjang and the cross-breds, respectively), percentage live (first ejaculate: $77.61{\pm}1.33%$ versus $77.81{\pm}0.53%$; second ejaculate: $81.97{\pm}1.59%$ versus $82.74{\pm}0.96%$, for Katjang and cross-breds, respectively) and percentage of normal sperms (first ejaculate: $12.54{\pm}3.88%$ versus $26.45{\pm}3.83%$; second ejaculate: $38.68{\pm}3.65%$ versus $28.54{\pm}4.38%$, for Katjang and cross-breds, respectively), with the exception of seminal volume and sperm motility. Means of all variables were within the values reported for other goat breeds. In contrast, the differences in semen characteristics between the first and second ejaculations of both genotypes were more distinct: the second ejaculations always had more volume, more normal sperms and better sperm motility but lower sperm concentrations. Removing the seminal plasma and replacing it with tris-citrate buffer greatly prolonged the viability of sperms of both genotypes when stored at $5{^{\circ}C}$. Sperm motility seems to be a good indicator of sperm viability. 
However, the sperms of the cross-bred bucks withstood the washing process better and their swimming abilities were superior ($8.12{\pm}0.46mm/min$) when compared to those of the local Katjang breed ($5.42{\pm}0.49mm/min$). The higher content of calcium ions in their seminal plasma (first ejaculate: $10.5{\pm}0.8$ versus $10.6{\pm}0.8mg/100ml$; second ejaculate: $15.3{\pm}0.8$ versus $16.1{\pm}0.8mg/100ml$, for Katjang and cross-breds, respectively) means that in natural matings the sperms of the cross-breds would be at an advantage compared to those of the local Katjang, since calcium ions reportedly initiate acrosomal reactions. Influence of Depth of Rice Husk Litter on Broiler Performance, Litter Dampness and its Coccidial Oocyst Population During Winter Mizu, M.M.R.;Chowdhury, S.D.;Karim, M.J.;Debnath, S.C. 450 Four groups each containing 48 seven-day-old broiler chicks were reared for 7 weeks during winter on rice husk litter spread to depths of 20, 30, 40 or 50 mm. Broiler performance was evaluated in terms of weight gain, feed consumption, feed efficiency and production number. Litter dampness was determined and coccidial oocyst populations were counted at different weeks of age. The depth of litter did not significantly affect live weight gain, feed consumption, feed conversion ratio, liveability or production number. Variation in the moisture content of litter was observed, but the coccidial oocyst count per gramme of litter was within the safety level and, therefore, there was no outbreak of coccidiosis in any group. Use of rice husk litter at different depths (20 to 50 mm) did not cause any breast blisters or leg abnormalities. It was concluded that rice husk can be used as litter at depths of between 20 and 50 mm during winter to raise broilers without affecting performance characteristics and health of birds. 
Effects of Supplementary Chinese Milk Vetch Silage and Rapeseed Meal on the Performance and Rumen Fermentation of Lambs Given Ammoniated Rice Straw Based Diet Wu, Yueming;Liu, Jian Xin;Chen, Zhenming 455 This study was conducted to investigate the effects of inclusion of Chinese milk vetch silage (MVS) and rapeseed meal (RSM) on the growth and rumen fermentation of Hu-sheep. Fifty weanling lambs were randomly divided into five equal groups and offered ammoniated rice straw (ABRS) ad libitum along with 100 g concentrate (Trial 1). The animals in the $T_0$, $T_1$, $T_2$, $T_3$ and $T_4$ groups were respectively supplemented with MVS at levels of 0, 0, 7, 14 or 21% and with RSM at levels of 0, 15, 10, 5 or 0%. Daily gain of lambs was significantly (p<0.05) higher in the $T_1$, $T_2$ and $T_3$ groups than in the $T_0$ and $T_4$ groups. Feed conversion ratio was greatly reduced in supplemented groups as compared with the $T_0$ group. In Trial 2, five sheep with rumen cannulae were used in a $5{\times}5$ Latin square design. The experimental treatments were as described in Trial 1, but without concentrate. The intake of ABRS was significantly (p<0.05) lower in the $T_4$ group than in the $T_0$ group, and also significantly (p<0.05) lower than those in the $T_1$ and $T_2$ groups. Little difference among treatments was found in 48 h DM degradability of ABRS, MVS and RSM, or in rumen pH value and microbial protein concentration. Rumen concentrations of individual and total VFA tended to be higher in supplemented groups than in the $T_0$ group. These results suggest that supplementation with RSM or RSM plus MVS can effectively improve the performance of lambs without markedly influencing the rumen digestion of ABRS or the rumen environment.
Transkingdom interactions between Lactobacilli and hepatic mitochondria attenuate western diet-induced diabetes Richard R. Rodrigues, Manoj Gurung, Zhipeng Li, Manuel García-Jaramillo, Renee Greer, Christopher Gaulke, Franziska Bauchinger, Hyekyoung You, Jacob W. Pederson, Stephany Vasquez-Perez, Kimberly D. White, Briana Frink, Benjamin Philmus, Donald B. Jump, Giorgio Trinchieri, David Berry, Thomas J. Sharpton, Amiran Dzutsev, Andrey Morgun & Natalia Shulzhenko. Nature Communications volume 12, Article number: 101 (2021). Western diet (WD) is one of the major culprits of metabolic disease including type 2 diabetes (T2D) with gut microbiota playing an important role in modulating effects of the diet. Herein, we use a data-driven approach (Transkingdom Network analysis) to model host-microbiome interactions under WD to infer which members of microbiota contribute to the altered host metabolism. Interrogation of this network pointed to taxa with potential beneficial or harmful effects on host's metabolism. We then validate the functional role of the predicted bacteria in regulating metabolism and show that they act via different host pathways. Our gene expression and electron microscopy studies show that two species from Lactobacillus genus act upon mitochondria in the liver leading to the improvement of lipid metabolism. Metabolomics analyses revealed that reduced glutathione may mediate these effects. Our study identifies potential probiotic strains for T2D and provides important insights into mechanisms of their action. 
Increasing evidence underscores the importance of the microbiome in human metabolic health and disease1. One of the most prevalent metabolic diseases, type 2 diabetes (T2D), is now a global pandemic and the number of patients that will be diagnosed with this disease is expected to further increase over the next decade2. The so-called "western diet" (WD, a diet high in saturated fats and refined sugars) has been recognized as one of the major culprits of T2D, with gut microbiota playing an important role in modulating effects of diet3,4. Thus, there is an urgent need to elucidate the contributions of gut microbiota to metabolic damage caused by WD and to identify preventive approaches for T2D. On the one hand, it is believed that complex changes in the structure of gut microbial communities, resulting from interactions of hundreds of different microbes, also called dysbiosis, underlie metabolic harm to the host5. On the other hand, some reports claim that individual members of the microbial community changed by the diet might have a significant impact on the host6. Although these two points of view are not necessarily mutually exclusive, it is still unclear which hypothesis is more credible7. Herein, we used a data-driven systems biology approach (Transkingdom Network analysis) to model host–microbe interactions under WD and to investigate whether individual members of microbiota and/or their interactions contribute to altered host metabolism induced by the WD. The interrogation of the Transkingdom Network pointed to individual microbes with potential causal effects on the host's lipid and glucose metabolism. Furthermore, the analysis also enabled inference of whether microbes might elicit beneficial or harmful effects on the host. In addition, we detected associations between the frequencies of these microbes and obesity in humans. We then validated the functional role of the predicted bacteria in regulating metabolism by supplementing mice with these microbes. 
Next, gene expression, electron microscopy, and multi-omics network analysis pointed to a novel finding that these two Lactobacilli may act by boosting mitochondrial health in the liver, leading to the improvement in hepatic lipid and systemic glucose metabolism. Finally, the metabolomics analysis revealed a few metabolites (e.g., reduced glutathione; GSH) that may mediate the beneficial effects of probiotics. Transkingdom Network predicts beneficial and harmful microbes We started by inducing T2D-like metabolic disease in C57BL/6 mice by feeding them a WD, which prior work has found to yield murine phenotypes that mimic human T2D8,9,10. As expected, when compared with mice receiving a control (normal) diet, the mice fed the WD exhibited glucose intolerance and insulin resistance (Fig. 1a, Fig. S1). The observed phenotypic changes were consistent at 4 and 8 weeks, as well as between replicate experiments. These results align with previous studies showing metabolic changes in male C57BL/6J mice fed WD9,10. Concurrently, the gut (ileum and stool) microbial communities were altered by the diet (Fig. 1b). Although gut location explained the majority of the variation in the microbial communities as expected11,12, we observed robust changes in microbiota associated with feeding WD8,13. Interestingly, the overall composition of the gut microbiota was similar at 4 and 8 weeks of WD (Supplementary Data 1a). Fig. 1: Inference of gut microbes affecting glucose metabolism in the host. a The red and blue colors indicate higher and lower levels of metabolic parameters measured in mice fed normal diet (ND) or western diet (WD) at 4 and 8 weeks. Source data are provided as a Source Data file. b Principal Component Analysis of stool (triangle) and ileal (circle) microbial communities of mice on ND (blue) or WD (red). Source data are available at https://www.ncbi.nlm.nih.gov/sra/?term=PRJNA558801. 
c The microbe and host parameter nodes are represented by circles and squares, respectively, in the transkingdom (TK) network. Red and blue colors of nodes indicate increased and decreased (WD/ND) fold change, respectively, whereas the size of the circle represents the frequency of the microbe in stool of WD mice. The black and green node borders indicate that the microbes were significantly increased or decreased, respectively, in the ileum of WD mice compared with ND (Fisher's p value across experiments <0.05). The orange and black edges indicate positive and negative correlations, respectively. The degree distribution of the TK-network follows a power law. The blue line indicates the fitted line. Source data are available at https://tinyurl.com/TK-NW-Fig-1C. d The left two figures allow inference of microbial candidates that are potential improvers (left figure) or worseners (middle figure) using high values of a TK-network property (bipartite betweenness centrality (BiBC) on the x axis) and significance of change in ileal (WD vs ND) abundance of microbes (log transformed Fisher's p value across experiments on the y axis). The horizontal green line indicates a log transformed value for a Fisher's p value of 0.05. The right figure shows the keystoneness score (x axis) of the microbial nodes (y axis). Source data are provided as a Source Data file. e Ileal abundance of potential candidate and keystone microbes in ND and WD-fed mice at 8 weeks. Asterisks indicate that the change in abundance passed the statistical significance threshold (two-tail Mann–Whitney p value <0.2 in each experiment, Fisher's p value across experiments <0.05, and FDR < 10%). Each dot represents a mouse, bars represent the median of the group. Source data same as for b. Previous studies showed associations between ecological properties of the microbial community (e.g., Shannon diversity) and host metabolism14,15. 
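Shannon diversity, one of the community parameters mentioned above, is computed directly from an OTU count vector. The sketch below is a minimal illustration of that calculation, not code from the study; the counts are made up.

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over OTUs with nonzero counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# A perfectly even community maximizes H' for a given richness (here ln(4)),
# while a community dominated by one OTU scores much lower.
even = shannon_diversity([25, 25, 25, 25])
skewed = shannon_diversity([97, 1, 1, 1])
```

In practice such indices are computed per sample and then correlated against host phenotypes, which is the analysis the text describes.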
Therefore, we analyzed the association between several community parameters (Supplementary Data 1b) and host phenotypes altered by WD. However, analysis of data from two separate time points (4 and 8 weeks of WD) and microbiome results from intestinal and fecal samples did not reveal any correlations that were significant in both independent experiments (Supplementary Data 1c). Thus, it does not seem that general dysbiosis explains metabolic alterations in this experimental system. Next, we sought to identify specific microbes regulating metabolic parameters using a Transkingdom (TK) network approach; this approach has been successfully used to identify key microbiota associated with various disease states, including human disease16,17. Towards this end, we created a TK network by integrating microbial abundances with systemic measurements of host metabolic parameters changed by the WD (Fig. 1c, Supplementary Data 2). The TK-network contained 1009 edges between 226 nodes (6 metabolic parameters and 220 microbial operational taxonomic units (OTUs)). The node degree distribution of the TK-network followed a power law function (Fig. 1c), supporting the idea that the TK-network captures the cross-regulatory nature of the gut microbiota and host phenotypic ecosystem, as the power law has been shown to be a critical property of biological networks18,19. Thus, the TK-network provided an opportunity to infer microbes responsible for controlling the overall composition of the microbial community (i.e., keystone species) as well as those that may control host phenotypes. To identify microbes that likely contribute to T2D-related systemic changes in metabolism, we calculated a network property, called bipartite betweenness centrality (BiBC), that measures the frequency with which a node connects other microbe and host nodes in the graph20. We then integrated BiBC scores of each OTU with the WD-induced changes in abundance of ileal microbiota. 
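The BiBC idea can be illustrated with networkx's subset betweenness, which restricts shortest-path counting to paths running from one node set (microbes) to another (host phenotypes). The toy graph below is hypothetical and only demonstrates the network property, not the study's actual pipeline.

```python
import networkx as nx

# Toy transkingdom network: microbe nodes (m*) and host phenotype nodes (p*),
# with one 'hub' node bridging most microbe-to-phenotype shortest paths.
G = nx.Graph()
G.add_edges_from([
    ("m1", "hub"), ("m2", "hub"), ("m3", "hub"),
    ("hub", "p1"), ("hub", "p2"),
    ("m3", "p2"),  # one direct microbe-phenotype edge bypassing the hub
])

microbes = ["m1", "m2", "m3"]
phenotypes = ["p1", "p2"]

# Betweenness restricted to shortest paths from microbe nodes to phenotype
# nodes, analogous in spirit to bipartite betweenness centrality (BiBC).
bibc = nx.betweenness_centrality_subset(
    G, sources=microbes, targets=phenotypes, normalized=False
)
# 'hub' scores highest because it lies on most microbe-phenotype paths,
# while peripheral microbes like 'm1' score zero.
```

A node with a high score in this sense is a candidate bottleneck between the microbial and host sides of the network, which is how the text uses BiBC to rank OTUs.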
A microbe was considered to be potentially beneficial (T2D improver) if it had a high-BiBC score and a lower abundance in the ileum of WD-fed mice (Supplementary Data 3). Conversely, a microbe was considered to be potentially harmful (i.e., a T2D worsener) if it had a high-BiBC score and a higher abundance in the ileum of mice fed WD (Supplementary Data 4). As a result of these analyses, we identified four OTUs predicted to regulate glucose metabolism, which corresponded with high similarity to four bacterial species: Lactobacillus johnsonii, Lactobacillus gasseri, Romboutsia ilealis, and Ruminococcus gnavus (Figs. 1d, e; Supplementary Data 16). The first two microbes were considered potentially beneficial (i.e., T2D phenotype improvers). The other two (R. ilealis and R. gnavus) were predicted to be worseners. Notably, R. gnavus has been previously shown to be associated with obesity21,22. Overall, these results indicate that individual microbes and/or their interactions, and not community level dysbiosis (Fig. 1, Supplementary Data 1), could be key players in T2D. It was proposed that keystone species have significant influence on the rest of the gut microbiota and are also characterized by a high number of connections within a network23,24. Therefore, we asked whether microbes with characteristics of keystone species in our network are among microbes that are predicted to influence host metabolic parameters. Using an approach developed by Berry and Widder24, we investigated the microbial network and found one microbe with the closest match to Bacteroides pectinophilus, with a prominent keystoneness score, followed by a few other microbes that also might qualify as keystone species (Figs. 1d, e, Supplementary Data 5, Supplementary Data 16). Notably, the candidate microbes predicted to affect the host had a low keystoneness score, suggesting that microbes with potentially high effect on the host do not necessarily play a central role in regulating the microbial community (Fig. 
1d, Supplementary Data 5). Inferences from mice are validated by associations in humans To check the relevance of the candidate microbes in humans, we identified a human study of a clinical population that consumes a WD-like diet and used the data to computationally evaluate our predictions25. In agreement with inferences from mouse data, we found correlations between body mass index (BMI) and the abundance of these microbial candidates (Fig. 2) in obese humans25. Specifically, the abundance of improvers was negatively correlated with BMI, whereas the abundance of the worsener was positively correlated. Furthermore, we found R. ilealis to be present in over 80% of obese patients, suggesting that this microbe could be a prevalent pathobiont in obese humans. Although the result for R. ilealis seemed to be more robust, we observed only a trend of positive association for R. gnavus, which concurs with the much smaller BiBC score for this bacterium (Figs. 1 and S2). Altogether, these observations provide further support for the predictions resulting from our analyses in the WD-fed mouse model. Fig. 2: Computational verification of predicted microbes in human data from the literature26. Each scatterplot shows the abundance of the microbes (X axis) in stool versus the BMI of obese humans (Y axis). The dotted line indicates the fitted line. The Spearman rho correlation coefficient and one-tail p value is shown. Data retrieved from www.ebi.ac.uk/metagenomics/studies/ERP015317. Lactobacilli improve and Romboutsia worsen glucose metabolism Encouraged by the support of our inferences in human data, we proceeded to test the role of L. gasseri, L. johnsonii, and R. ilealis in in vivo experiments designed according to predicted functional effects on the host. We anticipated that potential metabolic improvers (L. gasseri, L. johnsonii) would ameliorate metabolism damaged by WD, whereas the potential pathobiont (R. ilealis) would worsen metabolism in mice fed with normal diet. 
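The human-cohort check described above rests on a Spearman rank correlation with a one-tail p value. A minimal sketch with scipy follows; the abundance and BMI numbers are illustrative, not data from the study.

```python
from scipy.stats import spearmanr

# Hypothetical data: relative abundance of a candidate "improver" microbe
# versus BMI in ten subjects (values invented for illustration).
abundance = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2, 0.1]
bmi       = [31,  32,  33,  35,  34,  36,  38,   37,  39,  41]

rho, p_two_tail = spearmanr(abundance, bmi)

# A negative rho is consistent with an "improver": more microbe, lower BMI.
# For a directional hypothesis, the one-tail p value is half the two-tail
# value when the sign of rho matches the predicted direction.
p_one_tail = p_two_tail / 2 if rho < 0 else 1 - p_two_tail / 2
```

Because Spearman operates on ranks, it captures the monotone abundance–BMI relationships shown in Fig. 2 without assuming linearity.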
As predicted, WD-fed mice administered L. gasseri or L. johnsonii showed improved glucose tolerance (AUC and 120 min glucose levels) compared with mice on WD (Figs. 3a and S3). In addition, supplementation with L. gasseri ameliorated the established glucose intolerance in mice (Figure S4). Conversely, mice supplemented with R. ilealis showed impaired glucose tolerance (15 min glucose levels in the glucose tolerance test (GTT)) and reduced fasting insulin compared with mice fed with normal diet (Figs. 3a and S3). Accordingly, homeostatic model assessment (HOMA)-B, the index that reflects pancreatic beta-cell function, was also reduced by supplementation with R. ilealis (Fig. S3). These results suggest that the worsener/pathobiont and improver/probiotic microbes modulate the host systemic phenotypes likely via different mechanisms. Indeed, although higher levels of glucose early after glucose injection are most probably explained by decreased production of insulin in R. ilealis supplemented mice, L. gasseri and L. johnsonii improve glucose tolerance without altering insulin levels. Furthermore, whereas adiposity was not altered by R. ilealis, it was reduced in mice supplemented with improvers (L. gasseri or L. johnsonii) (Fig. 3a). Fig. 3: Experimental validation of microbial candidates. a Metabolic parameters in mice given control diets and supplemented with or without the indicated microbe. Glucose tolerance test (GTT) curves show the mean and SD of blood glucose over time. Open and closed circles indicate two independent experiments; * indicates statistically significant differences in levels of the parameter between the control group (WD for Lactobacilli, ND for R. ilealis) versus those supplemented with bacteria (one-tail t test p value <0.05 with FDR < 15%). Blue, ND; red, WD; light green, WD with L. gasseri (WD + LG); dark green, WD with L. johnsonii (WD + LJ); orange, R. ilealis (ND + RI), respectively. Source data are provided as a Source Data file. 
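The HOMA indices referenced above are derived from fasting glucose and insulin alone. The sketch below uses the standard Matthews et al. formulations; it is a generic illustration of the indices, not the study's code, and assumes glucose in mmol/L and insulin in µU/mL.

```python
def homa_b(glucose_mmol_l, insulin_uU_ml):
    """HOMA-B, an index of pancreatic beta-cell function
    (Matthews et al. formulation): 20 * insulin / (glucose - 3.5)."""
    return 20.0 * insulin_uU_ml / (glucose_mmol_l - 3.5)

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA-IR, an index of insulin resistance: glucose * insulin / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

# Example: fasting glucose 5.0 mmol/L and fasting insulin 10 uU/mL.
b_cell = homa_b(5.0, 10)   # 200 / 1.5
ir = homa_ir(5.0, 10)      # 50 / 22.5
```

Lower HOMA-B at similar glucose, as reported for the R. ilealis group, indicates reduced beta-cell output rather than peripheral insulin resistance.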
b Principal Component Analysis of stool (triangle) and ileal (circle) microbial communities and Venn diagram of microbes changed in mice on ND, WD, WD + LG or WD + LJ and with >0.1% median abundance in at least one group across experiments (Fisher's p value <0.05 calculated using two-tail Mann–Whitney per experiment). For Lactobacilli supplementation experiments, n = 11 mice for ND, WD and WD + Lg groups, n = 10 mice for WD + Lj group. For R. ilealis (ND and ND + RI), n = 5 mice per group. Although many human studies did not detect significant changes in fecal microbiota after probiotic administration26,27,28, there were recent reports concerning the possible damaging effects of probiotics on the upper intestinal microbiota29,30. Therefore, we sequenced the 16S rRNA gene in ileum and fecal samples from mice supplemented with the three candidate bacteria. Very few changes were observed in the ileal and stool microbiota composition due to supplementation by these microbes (Fig. 3b, Fig. S5a, Supplementary Data 6). In hindsight, these results agree with the low keystoneness score of all three tested microbes, which indicated their limited influence on the rest of the bacterial community (Fig. 1d). Furthermore, we did not find differences for individual taxa in stool samples in mice supplemented with bacteria. In the ileum, only one bacterium, Anaerotruncus colihominis (Supplementary Data 16), was reduced owing to western diet and increased by both L. gasseri and L. johnsonii (Fig. S5b). In agreement with our result, a study of gut microbiota from the Old Order Amish sect found this microbe to be negatively correlated with BMI and serum triglycerides31. Altogether, however, the minimal alterations in microbiota induced by L. gasseri and L. johnsonii supplementation did not explain the restoration of glucose metabolism promoted by these bacteria. 
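The thresholding scheme used throughout these analyses combines a per-experiment Mann–Whitney test with Fisher's method across experiments. A minimal scipy sketch of that two-step scheme follows, with invented per-experiment abundances for a single taxon.

```python
from scipy.stats import combine_pvalues, mannwhitneyu

# Hypothetical abundances of one taxon (ND vs WD), two replicate experiments.
exp1_nd, exp1_wd = [5, 6, 7, 8, 9], [1, 2, 2, 3, 4]
exp2_nd, exp2_wd = [6, 7, 7, 9, 10], [2, 3, 4, 4, 5]

# Step 1: nonparametric two-sided test within each experiment.
p1 = mannwhitneyu(exp1_nd, exp1_wd, alternative="two-sided").pvalue
p2 = mannwhitneyu(exp2_nd, exp2_wd, alternative="two-sided").pvalue

# Step 2: Fisher's method pools the per-experiment p values:
# -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees of freedom.
stat, p_combined = combine_pvalues([p1, p2], method="fisher")
```

In the study this combined p value is then subjected to an FDR cutoff across all taxa; that multiple-testing step is omitted here for brevity.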
Lactobacilli improve hepatic mitochondria and lipid metabolism Besides identifying effective probiotics for obesity/diabetes, it is critical to establish the host pathways through which these microbes exert their effect. Therefore, we next investigated two major target organs (intestine and liver) upon which both Lactobacilli might be acting to improve systemic metabolism. For a comprehensive evaluation of these organs we first analyzed global gene expression altered by L. gasseri and L. johnsonii supplementation. To identify common mechanisms by which L. gasseri and L. johnsonii improve metabolism, we focused on the genes that responded similarly to both microbes by identifying genes differentially expressed between both L. gasseri and L. johnsonii comparing with WD. The transcriptomes of the ileum and liver showed distinct changes in response to supplementation by these bacteria (Fig. 4a). In striking contrast to the number of genes differentially expressed in the ileum (152, false discovery rate; FDR < 10%), there was a much higher number of genes differentially expressed in the liver (654, FDR < 10%) (Supplementary Data 7–8). Furthermore, the great majority (638/654) of these genes were upregulated by Lactobacilli supplementation. Fig. 4: Transcriptome analysis, liver mitochondria, and lipids after supplementation with L. gasseri or L. johnsonii. a Number of differentially expressed genes (#DEGs, two-sided t test p value <5% in each Lactobacilli, Fisher's p value <5% calculated over both Lactobacilli, and FDR < 10%) regulated by L. gasseri and L. johnsonii in the same direction comparing to western diet. b Over-represented processes in the genes of the network shown in a of mice supplemented with Lactobacilli. c A heatmap showing the median expression of genes from the respiratory chain process in the livers of mice. d Representative electron microscope images of liver cells. The blue and red arrows indicate healthy and damaged mitochondria, respectively. 
e, f Various metrics of mitochondria in the liver of mice; *statistically significant differences between control and groups supplemented with bacteria (one-sided t test p value <5%). Data are presented as mean ± s.d. (n = 40 images for WD, n = 35 images for WD + LG and n = 37 images for WD + LJ groups; n = 60 mitochondria for healthy and n = 61 for damaged mitochondria). Source data are provided as a Source Data file. g Levels of long-chain fatty acids, h expression of cholesterol metabolism genes in livers, cholesterol levels in serum and liver of mice fed WD and supplemented with or without Lactobacilli. Each symbol represents one mouse, bars are median values. Source data are provided as a Source Data file; n = 3–5 mice per group (except serum cholesterol where n = 10–11 mice per group); * indicates statistically significant differences in WD vs WD + LG or LJ (one-sided t test p value <5%); # indicates p = 0.065. Functional enrichment analysis showed that genes that were changed in the ileum were enriched for only a few categories, with the circadian rhythm function as the main one (Supplementary Data 9). Notably, one of the genes was Nfil3, which was downregulated in the ileum of L. gasseri or L. johnsonii supplemented mice as compared with the WD mice (Supplementary Data 7). In agreement with our results, the knockout of this gene in the intestinal epithelium had been shown to protect mice from obesity, insulin resistance, and glucose intolerance32. Pathway enrichment analysis in liver, however, showed that multiple categories and processes related to mitochondrial functions were over-represented among genes upregulated by L. gasseri and L. johnsonii (Figs. 4b and S6, Supplementary Data 10). In addition, further analysis demonstrated that genes belonging to all five mitochondrial complexes of the oxidative phosphorylation pathway (Fig. 4c) were upregulated in the liver of L. gasseri and L. johnsonii supplemented mice (Supplementary Data 8). 
There was also a group of genes coding for large and small subunits of mitochondrial ribosomal proteins with increased levels of expression in the L. gasseri and L. johnsonii groups. Furthermore, genes involved in mitochondrial fusion were upregulated by the Lactobacilli, including mitofusin 1 and 2 (Mfn1, Mfn2), mitoguardin 2 (Miga2), and optic atrophy 1 (Opa1) (Supplementary Data 8). Hepatic mitochondrial functions are well known to be dysregulated in T2D33,34,35. Overall, our results suggested that, in addition to mitochondrial functions, these probiotic bacteria might induce structural/morphological changes in liver mitochondria. Thus, we performed electron microscopy of the livers from mice fed WD and supplemented or not with each Lactobacilli (i.e., WD, WD + LG, WD + LJ) (Fig. 4d). Although there was no difference in the number of mitochondria, the overall area occupied by mitochondria was larger in WD group mice than in the L. gasseri or L. johnsonii groups (Fig. 4e), suggesting increased mitochondrial size in livers of WD mice as compared with mice supplemented with Lactobacilli. This result indicates that mitochondrial swelling caused by WD, a phenomenon that can perturb proper functioning of mitochondria36,37,38, was ameliorated by probiotic supplementation. Next, we undertook quantitative evaluation of mitochondrial ultrastructural changes. The current agreement in the field is that healthy and damaged mitochondria correspond to dark, electron-dense images and lucent images with fragmented cristae, respectively37,39. According to these criteria, we first identified a set of healthy and damaged mitochondria within individual images (Fig. 4f). Next, we estimated, in an unbiased manner (i.e., comparing healthy and damaged mitochondria within a given sample), which image parameters discriminated between the two types of mitochondria.
We found lower values of standard deviation, integrated density, and the density mode in healthy compared with damaged mitochondria (note that, in grayscale, white is 255 and black is 0) (Supplementary Data 11). Comparison between the three groups of mice showed significantly lower levels of these parameters in the L. gasseri and L. johnsonii groups than in WD (Fig. 4f), pointing to healthier mitochondria in the former two groups of mice. Overall, these results support the prediction derived from the gene expression data and indicate that L. gasseri and L. johnsonii supplementation prevented hepatic mitochondrial damage induced by western diet. One of the important consequences of improved mitochondrial health is a restoration of fatty acid beta-oxidation. This process decreases the build-up of detrimental fatty acids in the liver, leading to improved systemic glucose metabolism40,41. In our data, among 19 regulated genes from the beta-oxidation gene subset, 18 genes were upregulated by supplementation with the probiotic strains (Supplementary Data 12). Among the upregulated genes were those involved in fatty acid transport (Slc25a17, Slc27a2), oxidation (Acads, Acadl) and hydration (Echs1) of fatty acyl groups, representing major steps of beta-oxidation. These results pointed to a possible increase in catabolism of fatty acids upon Lactobacilli supplementation. Indeed, we found an overall reduction of total hepatic lipids, including several of the most abundant fatty acids known to have damaging effects on metabolism associated with T2D42, such as monounsaturated fatty acids and oleic and palmitic acids (Figs. 4g and S5c, Supplementary Data 13). Overall, these results are in accordance with the idea that changes in liver fat are central to the development as well as the reversal of T2D43.
Besides fatty acid metabolism, two genes with well-established functions in cholesterol metabolism were also upregulated by both Lactobacilli: Abcg8 (hepatic cholesterol efflux44) and Cyp7a1 (conversion of cholesterol into bile acids45) (Fig. 4h). Therefore, we measured cholesterol in liver and serum samples. Although there was no change in serum cholesterol, there were reduced levels of liver total cholesterol in mice supplemented with L. gasseri or L. johnsonii (Fig. 4h). These results agree with the idea that alterations in the liver might precede lipid alterations detectable in serum43. Multi-omic network infers key liver genes for effects of Lactobacilli To identify potential mechanisms by which Lactobacilli alter lipid and glucose metabolism, we created a multi-omic network by integrating the gene expression changed by Lactobacilli and the lipid profile from the liver with systemic measurements of metabolic parameters changed by the WD (Fig. 5a). The multi-omic network contained 1776 edges connecting 380 nodes. The node degree distribution of this network followed a power law function (Figure S7), a critical property of biological networks18,19. Furthermore, although over half of the differentially expressed genes made it into the multi-omic network, the enrichment analysis showed similar results, with mitochondrial translation, fusion, organization, and autophagy formation being the top enriched functions in this network (Fig. 5b). Next, we interrogated this network to infer genes regulated by Lactobacilli and potentially responsible for changing the systemic phenotypes. Specifically, we used the degree (a local network property counting the immediate neighbors) and BiBC20, a global network property that measures how frequently a node lies on paths connecting nodes of different omics types in the graph.
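The intuition behind BiBC can be illustrated on a toy graph. The sketch below is not the implementation used in the study (which follows ref. 20); it is a minimal, standard-library illustration of counting what fraction of shortest paths between two node groups pass through a candidate node. All node names are hypothetical.

```python
from collections import deque

def bfs_paths(adj, src):
    """BFS from src: return (dist, sigma), where sigma counts shortest paths."""
    dist, sigma = {src: 0}, {src: 1}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def bibc(adj, group_a, group_b, node):
    """Fraction of shortest group_a-to-group_b paths passing through `node`."""
    through = total = 0.0
    for a in group_a:
        d_a, s_a = bfs_paths(adj, a)
        for b in group_b:
            if b not in d_a:
                continue  # a and b are disconnected
            d_b, s_b = bfs_paths(adj, b)
            total += s_a[b]
            if (node not in (a, b) and node in d_a and node in d_b
                    and d_a[node] + d_b[node] == d_a[b]):
                through += s_a[node] * s_b[node]
    return through / total if total else 0.0

# Toy graph: two "gene" nodes and two "phenotype" nodes joined by a hub "m"
adj = {
    "a1": ["m"], "a2": ["m"],
    "b1": ["m"], "b2": ["m"],
    "m": ["a1", "a2", "b1", "b2"],
}
print(bibc(adj, ["a1", "a2"], ["b1", "b2"], "m"))  # every cross-group path crosses m -> 1.0
```

A node like the hub "m" here scores maximally because every gene-to-phenotype shortest path runs through it, which is the property the study uses to flag candidate mediator genes.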
Notably, we found that gene expression nodes were predominantly connected to GTT, fasting glucose and 120 min glucose, two of which were significantly decreased by Lactobacilli supplementation (Fig. 5a–c). Furthermore, Ifitm3, Usp50, Rai12 (Elp5), and Snap47, which are known to be involved in the maintenance of functional mitochondria46,47,48, were found as key genes connecting expression alterations with systemic glucose metabolism (Fig. 5c). Interestingly, epididymal fat (also decreased in mice by Lactobacilli) was highly connected to liver fatty acids and to only one gene (Mfsd3), which codes for a solute carrier previously found in association with palmitic acid levels in a genome-wide association study49. Fig. 5: Multi-omic network analysis, metabolomics in mice supplemented with Lactobacilli and validation of glutathione in vitro. a Multi-omic network integrating the expression of genes significantly regulated in the liver by Lactobacilli (circles), the liver lipid profile (diamonds), and systemic metabolic parameters (squares), with red symbols indicating upregulated and blue indicating downregulated elements in Lactobacilli supplemented mice. A green outline of a node indicates a significantly decreased lipid or phenotype; the size of a circle corresponds to the combined score of degree and bipartite betweenness centrality (BiBC) in the network. The orange and black edges indicate positive and negative correlations, respectively. Genes with top degree and BiBC are indicated. Source data are available at https://tinyurl.com/multi-omic-NW-Fig-5A. b Gene ontology biological functions over-represented in the genes of the multi-omic network. c Scatterplot showing the degree and BiBC of all nodes in the multi-omic network with genes (gray), lipids (blue), phenotypes (green). d Fold-changes of 133 serum metabolites in germ-free (GF) mice fed western diet (WD) and colonized with L. gasseri for 2 weeks in comparison with GF mice on WD (n = 2 per group).
TG, Triacylglycerol (16:0/18:2(9Z,12Z)/20:4(5Z,8Z,11Z,14Z)); MG, Monoacylglycerol; 8-iso-15-keto PGF2α, 8-iso-15-keto Prostaglandin F2α. Source data are provided in Supplementary Data 14. e Changes in 12 metabolites identified in Fig. 5d in specific-pathogen-free (SPF) mice fed WD (data of serum pools of 4–6 mice in each pool per group), in five experiments of Lactobacilli-supplemented mice; the mean fold change across five experiments and FDR (false discovery rate) are plotted. Source data are provided in Supplementary Data 14c. f Left heatmap shows the geometric mean of normalized gene expression in AML-12 cells treated with either low sugar medium (glucose 17 mM), high sugar medium (glucose and fructose at 50 mM each) or high sugar medium supplemented with 4 mM, 6 mM, or 9 mM of reduced glutathione (GSH) ethyl ester (5–6 independent experiments). The right heatmap shows the geometric mean of normalized gene expression from RNA-Seq in the liver of western diet (WD) fed mice or WD-fed mice supplemented with either L. gasseri or L. johnsonii (red, high; blue, low relative gene expression). Source data are provided as a Source Data file. Thus, the network analysis further suggested that the expression of genes responsible for mitochondrial organization and maintenance in the liver is the primary driver of improved systemic glucose metabolism. L. gasseri and L. johnsonii increase serum GSH and bilirubin Next, we applied a metabolomics approach to identify potential mechanisms responsible for the improved hepatic mitochondrial health evoked by Lactobacilli. First, we established which metabolites were specifically increased by these bacteria in the serum of mice that did not contain other microbes. For this, germ-free mice fed WD were monocolonized or not with L. gasseri for 2 weeks and mouse serum was subjected to metabolite profiling.
Out of 133 metabolites that were identified (Supplementary Data 14a), 12 were increased after monocolonization, ranging from twofold for 8-iso-15-keto-PGF2a to 48-fold for bilirubin (Fig. 5d, Supplementary Data 14b). After this pre-selection in monocolonized mice, we compared the abundance of the 12 metabolites between pools of sera of SPF mice supplemented with L. gasseri or L. johnsonii in three independent experiments (see details in Methods). We found that reduced (but not oxidized) GSH increased about four times, and bilirubin showed a trend toward increased levels (FDR = 0.12), whereas two tauro-conjugated bile acids and 3-hydroxytetradecanedioic fatty acid showed various levels of decrease in Lactobacilli supplemented SPF mice (Fig. 5e, Supplementary Data 14c). Although the mechanism of the GSH surge induced by Lactobacilli is not yet clear, this metabolite seemed to be a plausible candidate to cause hepatic mitochondrial improvement in mice, as its antioxidant functions are well-established50. To test this hypothesis, we used an AML-12 cell culture model mimicking diabetic alterations in the liver by adding high concentrations of fructose and glucose. Treatment of cells with different concentrations of GSH (in high sugar) enhanced expression of several genes with well-known mitochondrial functions such as mt-Atp6, Ndufv1, Mfn1, Opa1, Foxo3, and Gabpa, whose expression was also upregulated by Lactobacilli in the livers of mice (Fig. 5f, Supplementary Data 15a). We further tested three genes (Usp50, Ifitm3, Rai12) predicted by the network analysis (Fig. 5c) to play a key role in the control of mitochondrial health in the liver and systemic glucose metabolism, and previously shown to support mitochondrial homeostasis47,48. While we could not detect Usp50 in cell culture, the two other genes (Ifitm3, Rai12) showed increased expression at 6 and 9 mM GSH, similar to other mitochondrial genes (Fig. 5f).
Thus, altogether these results indicate that an increase in GSH in the serum of mice is likely to be one of the important mechanisms used by Lactobacilli for boosting liver mitochondrial and antioxidant function, consequently improving systemic glucose metabolism. Our work provides further support for the hypothesis that variations in the abundance of a few key (but not keystone) microbes, rather than overall changes of the microbial community, might explain microbiota-related damage caused by western diet in T2D. Indeed, administration of two bacteria (L. gasseri and L. johnsonii) decreased by western diet improved systemic glucose metabolism. The fact that this improvement could be achieved by supplementation of single bacteria, however, does not eliminate the possibility of microbe–microbe interactions playing a role in this process. Furthermore, both Lactobacilli had very low keystoneness, and accordingly we did not detect strong alterations in the gut microbiota (fecal or ileal) of mice supplemented with these two microbes. This is in agreement with several human studies that used other strains of probiotic bacteria and largely did not observe changes in taxonomic composition of fecal microbiota26,27,28. In contrast, two recent reports showed alterations in human mucosal microbiota communities by probiotics and potential adverse side effects of probiotics, especially when used after antibiotics29,30. The two species of Lactobacilli we predicted and tested in mice fed WD enhanced systemic glucose tolerance, decreased adiposity, and reduced several "bad lipids" in the liver, all of which could be a consequence of improved hepatic mitochondrial health. This thought is supported, on the one hand, by studies showing that reduction in hepatic fat in animals and humans results in recovery from T2D37,51,52. On the other hand, impairment of liver mitochondrial function has long been known as an important contributor to metabolic disease33,34,35,53.
Furthermore, it has been shown that both palmitic and oleic acids (decreased by Lactobacilli) can damage liver mitochondria54,55,56. Conversely, enhancement of mitochondrial functioning stimulates beta-oxidation, resulting in the reduction of damaging fatty acids57,58. The multi-omic network analysis in our study further supported the central role of hepatic mitochondrial health. Specifically, it pointed to several genes (Fig. 5a–c) involved in proper mitochondrial organization and mitochondrial autophagy (mitophagy) as the key players in relation to systemic glucose metabolism. Investigations performed over the last decade have reported several mechanisms whereby microbiota can affect T2D, including modulation of inflammation and immune mediators, gut hormones, mucosal permeability, and insulin production, among others59. Our present findings bring to the picture of host–microbiota interactions an intriguing link between mitochondria (regarded as mammalian endosymbionts) and the symbiotic microorganisms in the gut. Interactions between mitochondria and microbiota are an emerging direction in microbiome research and have been implicated in Parkinson's disease60, intestinal cell death by antibiotic-resistant microbiota61 and longevity of Caenorhabditis elegans62. Metabolic health is synonymous with mitochondrial health, where the ancestral mitochondrion-microbiome axis may play an important role63. Our investigation of the serum metabolome pointed to several changes caused by Lactobacilli. Although the fact that Lactobacilli supplementation can alter certain bile acid levels might not be surprising, the biological role of these alterations is uncertain. Furthermore, we were not able to follow up on the detected changes by targeted metabolomics in this work, which could be a subject of future studies. However, two metabolites, GSH and bilirubin, are known to play complementary antioxidant roles, which would improve mitochondrial respiration and other metabolic functions64,65.
More recent reports demonstrated that deletion of biliverdin reductase A, which transforms biliverdin into bilirubin, induced oxidative stress and lipid accumulation66 and that bilirubin itself protects mitochondria via scavenging O2−67. GSH, however, acts on mitochondria through somewhat different beneficial mechanisms. For example, it was shown to improve mitochondrial fusion68. Indeed, we found that both Lactobacilli in vivo and GSH in vitro increased expression of the three main GTPases (Mfn1, Mfn2, Opa1) required for this process. Unlike bilirubin, which is produced by hepatocytes, GSH is not of exclusively mammalian origin but can also be produced by many bacteria. For example, some species of Lactobacilli are known to produce GSH, which they utilize to protect themselves from bile salts, reactive oxygen species and other types of cellular damage69,70. Therefore, it is plausible that our observation of increased levels of GSH is a result of simultaneous induction of its production by host cells71 and by Lactobacilli themselves. Although further studies are warranted to identify the main source of GSH, it is highly plausible that this metabolite is one of the main mediators of the Lactobacilli effect on liver mitochondria. In agreement with our results, it was reported that another strain of L. johnsonii may improve hepatic mitochondria72. Interestingly, these mitochondrial effects may not be limited to the liver, as another species of Lactobacilli, L. paracasei, attenuated cardiac mitochondrial dysfunction in obese rats73, and a different strain of L. gasseri increased resistance to mitochondrial dysfunction in aging C. elegans74. Notably, the two strains (L. gasseri and L. johnsonii) identified and tested in our study are also promising candidates for future testing in clinical settings of T2D, as they would have minimal adverse effects on gut microbiota while improving glucose metabolism.
Other strains of these two species of Lactobacilli have been tested in clinical trials for other diseases and in mouse models of diabetes59,75 and thus might share critical mechanisms of effects on the mammalian host. In conclusion, our study demonstrates that the damaging effects of western diet on metabolism can be at least partially explained by a decrease of beneficial microbes (e.g., Lactobacilli) and an increase of pathobionts (e.g., R. ilealis) in the gut microbiota, each of them acting via different host pathways. Furthermore, it revealed potential probiotic strains for treatment of T2D as well as critical insights into the mechanisms of their action, offering an opportunity to develop targeted therapies of diabetes rather than attempting to restore "healthy" microbiota as a whole. Mice and diets Seven-week-old C57BL/6 male mice were purchased from Jackson Laboratories (Bar Harbor, Maine) and housed at the Laboratory Animal Research Center (LARC) at Oregon State University. After 1 week of acclimatization, mice were either switched to western diet (WD) D12451 containing 45% lard and 20% sucrose or to a matched normal diet D12450K (ND) produced by Research Diets (New Brunswick, NJ). Mice were on these diets for 8 weeks. Two independent experiments were performed with five mice per group in each experiment. Ethical approval for this work was obtained from the Oregon State University Institutional Animal Care and Use Committee. The study complied with all relevant ethical regulations regarding the use of research animals. L. gasseri ATCC 33323 was purchased from the American Type Culture Collection (ATCC, Manassas, VA). L. johnsonii NCC 533 was donated by the Nestlé Culture Collection (Nestec Ltd., Nestlé Research Center Lausanne, P.O. Box 44, CH-1000 Lausanne 26). Both bacteria were grown anaerobically in MRS broth for 24 h at 37 °C; colony-forming units (CFU) were determined by serial dilutions, and cultures were aliquoted in 15% glycerol stocks in cryovials and stored at −80 °C.
Before the gavage, the bacterial glycerol stocks were thawed, spun down, and resuspended in sterile phosphate-buffered saline (PBS). For the Romboutsia experiment, an active culture of R. ilealis DSM 25109 was purchased from the German Collection of Microorganisms (DSMZ). Bacterial supplementation experiments For the microbial supplementation experiments, 8-week-old C57BL/6 mice were given either ND or WD or WD + L. gasseri (gavaged 1 × 10^9 CFU/mouse every other day) or WD + L. johnsonii (gavaged 1 × 10^9 CFU/mouse every other day) for 8 weeks. As controls, both ND and WD groups were gavaged with an equal volume of PBS (0.2 ml per mouse). Two independent experiments were performed with 5–6 mice per group per experiment. For the treatment experiment, mice were fed ND or WD for 8 weeks, at which point one group of WD mice was supplemented with L. gasseri (gavaged 1 × 10^9 CFU/mouse every other day). GTT was performed at 8 weeks on WD and at 4, 9, and 12 weeks on WD + L. gasseri (n = 5 per group). For the R. ilealis supplementation experiment, after 1 week of acclimatization, all mice were switched to ND and were either given PBS or 1 × 10^9 CFU of R. ilealis every other day for 4 weeks (n = 5). Metabolic measurements were done as described below, except that in the R. ilealis experiment 1 mg/kg glucose was injected for IPGTT. For the gnotobiotic mouse experiment, germ-free mice on western diet were colonized with 1 × 10^9 CFU L. gasseri on days 0, 2, 4, and 12 and killed on day 14 (n = 2). Intraperitoneal glucose tolerance test (IPGTT) Mice were fasted for 6 h during the light phase with free access to water. A dose of 2 mg/kg glucose (Sigma-Aldrich) was injected intraperitoneally. Blood glucose was measured at 0 min (immediately before glucose injection), 15, 30, 60, and 120 min with a Freestyle Lite glucometer (Abbot Diabetes Care). Fasting insulin and fasting glucose Mice were fasted for 6 h with free access to water.
Fasting blood was collected either via submandibular bleed or from the tail vein. Insulin and glucose levels in fasting plasma or serum were measured with the Mouse Insulin ELISA Kit (Crystal Chem) and the Glucose Colorimetric Assay Kit (Cayman Chemical), respectively, according to the manufacturer's protocols. HOMA-IR and HOMA-B were calculated according to Eqs. (1) and (2), respectively: $$\mathrm{HOMA\text{-}IR}=\frac{\mathrm{Glucose}\ (\mathrm{mg/dL})\times \mathrm{Insulin}\ (\mathrm{\mu U/mL})}{405}$$ $$\mathrm{HOMA\text{-}B}=\frac{360\times \mathrm{Insulin}\ (\mathrm{\mu U/mL})}{\mathrm{Glucose}\ (\mathrm{mg/dL})-63}\,\%$$ The heatmap of results of systemic measurements was created using Morpheus (https://software.broadinstitute.org/morpheus/). Hepatic fatty acids and cholesterol Hepatic fatty acids were quantified using established protocols76. In brief, total lipid was extracted from liver in chloroform–methanol (2:1) containing 1 mM butylated hydroxytoluene. 7-Nonadecenoic acid (C19:1) was added as a recovery standard. Total protein was measured after the initial homogenization step by bicinchoninic acid assay (Bio-Rad, Hercules, CA). Fatty acids in the extracts were saponified in 80% methanol containing 0.4 M KOH. Afterward, saponified fatty acids were converted to fatty acid methyl esters in methanol containing 1% of 24 M H2SO4 and then quantified by gas chromatography. Hepatic total cholesterol in liver lipid extracts and in serum was measured using the Amplex™ Red Cholesterol Assay Kit (Thermo Fisher Scientific) according to the manufacturer's protocol. RNA preparation and gene expression analysis RNA was extracted using an OMNI Bead Ruptor and 2.8 mm ceramic beads (OMNI International) in RLT buffer, followed by Qiashredder and RNeasy kit processing on a Qiacube (Qiagen) automated extraction system according to the manufacturer's specifications.
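As a quick sanity check, the HOMA indices in Eqs. (1) and (2) above can be evaluated with a few lines of code (a sketch; the glucose and insulin values below are illustrative, not data from the study):

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR as in Eq. (1): glucose (mg/dL) x insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def homa_b(glucose_mg_dl, insulin_uU_ml):
    """HOMA-B (%) as in Eq. (2): 360 x insulin / (glucose - 63)."""
    return 360.0 * insulin_uU_ml / (glucose_mg_dl - 63.0)

# Illustrative fasting values only (hypothetical, not from the study)
print(round(homa_ir(150, 10), 2))  # 3.7
print(round(homa_b(150, 10), 1))   # 41.4
```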
Total RNA was quantified using the Quant-iT RNA Assay Kit (Thermo Fisher Scientific). Complementary DNA was prepared using the qScript reverse transcription kit (Quantabio) and qPCR was performed using Perfecta SYBR mix (Quantabio) and a StepOne Plus Real Time PCR system and software (Applied Biosystems). RNA libraries were prepared with the QuantSeq 3'mRNA-Seq Library Prep Kit (Lexogen) and sequenced using an Illumina NextSeq. Sequences were processed to remove adapter, polyA and low-quality bases by BBTools (https://jgi.doe.gov/data-and-tools/bbtools/) using bbduk parameters of k = 13, ktrim = r, forcetrimleft = 12, useshortkmers = t, mink = 5, qtrim = r, trimq = 15, minlength = 20. Reads were aligned to the mouse genome and transcriptome (ENSEMBL NCBIM37) using TopHat (v2.1.1)77 with default parameters. Reads per million for mouse genes were counted using HTSeq (v0.6.0)78 and quantile normalized. BRB-ArrayTools was used to identify genes differentially expressed in the liver and ileum upon supplementation with or without the Lactobacillus candidates. Pathway enrichment was performed using Metascape79. DNA extraction and 16S rRNA gene library preparation For microbial measurements, stool pellets were collected at T1 (4 weeks of diet), and stool pellets and terminal ileum contents were collected at T2 (8 weeks). To obtain microbial DNA, frozen fecal pellets and ileum with content were resuspended in 1.4 ml ASL buffer (Qiagen) and homogenized with 2.8 mm ceramic beads followed by 0.5 mm glass beads using an OMNI Bead Ruptor (OMNI International). DNA was extracted from the entire resulting suspension using the QiaAmp mini stool kit (Qiagen) according to the manufacturer's protocol. DNA was quantified using the Qubit broad range DNA assay (Life Technologies). The V4 region of the 16S rRNA gene was amplified using universal primers (515f and 806r) as in ref. 16.
Individual samples were barcoded, pooled to construct the sequencing library, and then sequenced using an Illumina MiSeq (Illumina, San Diego, CA) to generate paired-end 250 bp reads. 16S rRNA gene sequencing data analysis The samples were demultiplexed and forward-end fastq files were analyzed using QIIME v. 1.9.180. The default quality filter parameters from QIIME's split_libraries_fastq.py were applied to retain high-quality reads (Phred quality score ≥ 20 and minimum read length = 75% of 250 nucleotides). Closed reference OTU picking with 97% sequence similarity was performed using UCLUST81 and the Greengenes reference database v13.882,83 to cluster 16S rRNA gene sequence reads into OTUs and assign taxonomy. The reference sequence of candidate OTUs from the Greengenes database was used to obtain species-level taxonomic assignment using Megablast84 (top hit using default parameters). A threshold of 99% cumulative abundance across all samples in an experiment was used to retain abundant microbes, thus removing OTUs with less than ~0.01% abundance across all samples in that experiment. The read counts were normalized using cumulative sum scaling85 and accounted for DNA quantity, followed by quantile normalization. The principal component analysis for the 16S sequencing data was created using ClustVis86; other analyses used GraphPad Prism software (version 7) and the R packages seqtime (version 0.1.1) and igraph (version 1.2.5). Network analyses TK network reconstruction and prediction of causal microbes Spearman rank correlations were calculated between all pairs of microbes (OTUs) and metabolic parameters (phenotypes) in each group of both experiments. A combined Fisher's p value was calculated for each pair from the correlation p values from each experiment. An FDR was calculated on the combined p values separately for the following correlations: (i) within metabolic parameters, (ii) within OTUs, and (iii) between OTUs and metabolic parameters.
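The Fisher combination step described above admits a compact implementation: the statistic −2 Σ ln p follows a chi-square distribution with 2k degrees of freedom, whose survival function has a closed form for even df, so no statistics library is needed. This is a generic sketch of the method (not the study's code); the per-experiment p values are made up for illustration.

```python
import math

def fisher_combined(p_values):
    """Fisher's method: X = -2 * sum(ln p) ~ chi-square with 2k df.
    For df = 2k the chi-square survival function is
    exp(-x/2) * sum_{i<k} (x/2)^i / i!, computable in closed form."""
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Hypothetical per-experiment correlation p values for one microbe-phenotype pair
combined = fisher_combined([0.04, 0.03])
print(round(combined, 4))  # well below the 5% threshold used for combined p values
```

Two individually modest p values thus combine to a much smaller joint p value, which is then subjected to the per-category FDR correction described in the text.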
We retained edges that satisfied the following criteria: the sign of the correlation coefficients in the two experiments was consistent in stool of WD-fed mice at 4 weeks (n = 35 per expt.), the individual p value of the correlation within each experiment was <30%, the combined Fisher's p value of all experiments was <5%, and an FDR cutoff of 10% was met for within-omics edges (i and ii). Finally, the TK network was generated20,61,87,88,89 by adding microbe-phenotype edges where the microbe showed a significant change in abundance (WD vs ND) in the ileum at 8 weeks, the edges showed a consistent sign of the per-group Spearman correlation coefficient between the two experiments in the three WD-fed groups (WD-stool 4 weeks, WD-stool 8 weeks, and WD-ileum 8 weeks), and the edges satisfied principles of causality90 (i.e., had concordance between the fold change in the WD vs. ND comparison and the correlation sign between the two partners) in all three WD-fed groups. The network was visualized in Cytoscape. Identification of keystone microbes Generation of training data was accomplished as follows: 100 instances of 542 generalized Lotka-Volterra models were run to steady state and the steady-state species abundances were considered individual samples. Those individual samples consisted of 10–100 species drawn from a model-specific species pool. The size of the species pool was determined by defining similarity in species composition between samples (between 0.4 and 0.95). The individual models further varied in the following parameters: connectivity of the species interaction matrix (between 0.005 and 0.7), negative edge percentage of the species interaction matrix (0–100%), species-specific growth rates (between 0 and 1) and carrying capacities (between 0 and 100), as well as the topography of the species interaction matrix (interactions sampled from a uniform distribution or assigned according to the Klemm-Eguíluz model91). The R-package seqtime was used to generate the species interaction matrices92.
Subsequently, each species included in a model was in turn removed from the community and the Canberra distance between the original and sub-sampled community was calculated. In all, 1000 iterations of this procedure were performed per species, and the average Canberra distance induced by a species' absence was considered its keystoneness score. For model training, the data were split into a training set and a test set. The training set was used to train a linear model to predict keystoneness based on mean relative abundance and the following node parameters computed from a Spearman correlation network: sum of absolute correlation strength, node degree, relative closeness centrality, betweenness centrality, and eccentricity. With the exception of absolute correlation strength, the network parameters were calculated with the R-package igraph (http://igraph.org). This model was then used to predict keystoneness on the test set. A linear model between real and predicted keystoneness in the test set gave an adjusted R² of 0.4219, with a p value <2.2e-16. The trained linear model was subsequently applied to the OTU abundance data and the previously computed correlation network to predict keystoneness scores for each OTU. Finally, keystoneness scores were scaled between 0 and 1 to remove negative values occurring as an artifact of the linear model. Multi-omic network analysis Spearman rank correlations were calculated between all pairs of genes, lipids, and phenotypes. The phenotypic subnetwork was obtained from the TK network. For the gene subnetwork, correlations were calculated by pooling samples supplemented with the same Lactobacilli from both experiments.
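The keystoneness score defined above rests on the Canberra distance between the original community and the community re-equilibrated after removing one species. A minimal sketch of that distance (the abundance vectors are hypothetical, not model output):

```python
def canberra(x, y):
    """Canberra distance between two abundance vectors (0/0 terms are skipped)."""
    total = 0.0
    for xi, yi in zip(x, y):
        denom = abs(xi) + abs(yi)
        if denom > 0:
            total += abs(xi - yi) / denom
    return total

# Hypothetical steady-state abundances before/after removing species 3;
# its own entry is 0 in the perturbed community, and the others shift.
original = [0.5, 0.3, 0.2, 0.0]
perturbed = [0.1, 0.6, 0.0, 0.3]
print(canberra(original, perturbed))
```

Because each term is normalized by the pair's magnitude, the loss of a rare species can contribute as strongly as a shift in an abundant one, which suits a keystoneness measure where influence is not tied to abundance.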
Edges were retained if they satisfied the following criteria: the sign of the correlation coefficients in the two Lactobacilli groups was consistent, the individual p value of the correlation was <30%, the combined Fisher's p value over the two Lactobacilli groups was <5%, an FDR cutoff of 5% was met, and the edge satisfied principles of causality (i.e., the fold change relationship between the two partners in the Lactobacilli vs. WD comparison was satisfied). For the lipid subnetwork, correlations were calculated per experiment in the WD groups of the three datasets (two WD vs ND experiments, and a Lactobacilli supplementation experiment). Edges were retained if the sign of the correlation coefficients was consistent, the Fisher's p value was <5%, an FDR cutoff of 10% was met, and principles of causality were satisfied. For between-omics edges, correlations were calculated per experiment in the WD groups of the three data sets and a voting strategy was used for meta-analysis. Pairs were shortlisted if they had the same sign of correlation and p values <10% in at least two data sets. If the p value in the third data set was over the threshold, the pair was retained but the third data set was removed during calculation of the Fisher p value. The pair was kept if the p value in the third data set was under the threshold and the sign of the correlation was the same in all three data sets; otherwise, the pair was removed entirely. Edges with FDR < 10% and satisfying principles of causality were added to the network. Computational analysis using human datasets Sequence read files of 1046 humans25 were downloaded from the European Bioinformatics Institute (https://www.ebi.ac.uk/), quality filtered, and trimmed with ea-utils using default settings, except that the base removal quality threshold was set at <20. Cleaned sequence reads were binned into Greengenes (v13_8) 97% identity OTUs using the QIIME 1.9 closed reference OTU picking workflow (pick_closed_reference_otus.py).
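The voting strategy for between-omics edges described above can be sketched as a small decision function (a simplified illustration, not the study's code; the correlation/p-value triples are invented). It returns the p values that would go into the Fisher combination, or None if the edge is discarded.

```python
def vote_edge(stats, alpha=0.10):
    """stats: [(rho, p), ...] for the three datasets of one candidate edge.
    Returns the p values to pass to Fisher's combination, or None if the
    edge is discarded, following the voting rules in the text:
    - need same-sign correlation with p < alpha in at least two datasets;
    - a non-significant third dataset is dropped from the combination;
    - a significant third dataset with a discordant sign kills the edge."""
    significant = [(r, p) for r, p in stats if p < alpha]
    if len(significant) < 2:
        return None                          # support in fewer than two datasets
    if len({r > 0 for r, p in significant}) != 1:
        return None                          # significant datasets disagree in sign
    if len(significant) == 2:
        return [p for _, p in significant]   # drop the non-significant third
    return [p for _, p in stats]             # all three agree and are significant

# Hypothetical Spearman results for one gene-lipid pair across three datasets
print(vote_edge([(0.6, 0.02), (0.5, 0.08), (0.4, 0.30)]))   # -> [0.02, 0.08]
print(vote_edge([(0.6, 0.02), (0.5, 0.08), (-0.4, 0.05)]))  # -> None
```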
Spearman correlations between BMI and microbial abundance of the exact candidate OTU (or the sum of OTUs assigned to the bacterial species) were calculated in obese humans. To avoid bias from outlier samples, a sample was considered only if it had >10 reads per million for Lactobacillus OTUs and >100 reads per million for Romboutsia OTUs. Transmission electron microscopy (TEM) Frozen liver samples were prepared and fixed in 1.5% paraformaldehyde and incubated at 4 °C overnight93, after which fixed tissues were processed using a protocol based on ref. 94. Specifically, the vibratome-sectioned fixed tissues (~1 mm3) were postfixed in a solution containing 2% osmium tetroxide and 1.5% potassium ferrocyanide for 30 min at room temperature in the dark. This was followed by staining with 0.2% tannic acid in water for 10 min, fixing in 1% osmium tetroxide for 30 min, and staining in 1% thiocarbohydrazide in water for 20 min at room temperature. The samples were then incubated with 1% osmium tetroxide for 30 min at room temperature. Next, the samples were incubated with 0.5% uranyl acetate in 25% methanol overnight at 4 °C, followed by incubation in Walton's lead aspartate for 30 min at 60 °C. The samples were then dehydrated with a graded series of ethanol and infiltrated with an ethanol/epon mixture (1:1) for 1 h at room temperature and 1:2 for 1 h at room temperature. Ultramicrotomy was performed using an RMC PowerTome PC. Microscopy was done with a Helios 650 NanoLab (ThermoFisher), using scanning transmission electron microscopy mode for imaging. In all, 10–12 images were taken per sample. The images were imported into FIJI (i.e., ImageJ) software (version 2.0.0-rc-69/1.52i). Each mitochondrion in the images was outlined and different attributes were measured using the default "measure" option in the software. In order to identify image parameters that discriminate between healthy and damaged mitochondria, we used images representative of all analyzed groups.
In each image, a pair of damaged (bright, lucent) and healthy (dark, dense) mitochondria was identified according to images in the EM atlas (http://www.drjastrow.de/WAI/EM/EMAtlas.html). Next, we extracted quantitative data for 17 different image parameters (see Supplementary Data 11) and analyzed which of those differed between the two types of mitochondria. The selection was performed "blindly" (i.e., the image analyst was unaware of the treatment identity of samples). Among the parameters that significantly differed between the two types of mitochondria, we chose the less interdependent ones to compare different treatment groups. To establish whether the structure of mitochondria differs between groups supplemented or not with probiotic bacteria, we analyzed the selected image parameters in 119 TEM images from liver samples of nine mice, totaling 4709 mitochondria. Un-targeted metabolomics Serum samples used for metabolomics included the following: germ-free mice fed WD for 2 weeks (n = 2); mice monocolonized for 2 weeks with L. gasseri and fed WD (n = 2); SPF mice supplemented or not with either L. gasseri or L. johnsonii (n = 4–6 per group) and fed WD for 8 weeks in the two experiments shown in Fig. 3; and SPF mice first fed WD for 8 weeks, then supplemented (or not) with L. gasseri for an additional 12 weeks along with WD (n = 5 per group). For technical reasons, metabolomics was performed in pooled sera of each group of mice, which were run in a randomized manner as one batch. An aliquot of 30 µl of pooled serum was processed following a protocol adapted from a published study95. In brief, metabolites were extracted with four volumes of cold methanol/acetonitrile (1:1, v/v). To precipitate proteins, the samples were incubated for 1 h at −20 °C. After the samples were centrifuged at 4 °C for 15 min at 15,871 × g (13,000 rpm), the supernatant was collected and evaporated to dryness in a vacuum concentrator.
The dry extracts were then reconstituted in 90 µL of acetonitrile/H2O (1:1, v/v) containing 10 ng/mL CUDA (12-(((cyclohexylamino)carbonyl)amino)-dodecanoic acid). This standard was used as a control to monitor platform stability along the fully randomized batch analysis and to account for possible injection variability. A quality control (QC) pooled sample was prepared by combining, in a single vial, 10 µL of each sample. The pooled QC sample provided a 'mean' profile representing all analytes encountered during the analysis. To the QC sample, a methanol solution containing verapamil and verapamil-D3 (Cayman Chemical, Ann Arbor, MI) was added at a final concentration of 0.1 ppm each. The ratio of their monoisotopic peaks was used to monitor quantification stability along the fully randomized batch analysis. The supernatant was then analyzed via LC-MS/MS (liquid chromatography with tandem mass spectrometry). High-resolution mass spectrometry was performed using an Agilent 6545 Q-ToF downstream of an Agilent 1260 Infinity high-performance liquid chromatography system consisting of a degasser, quaternary pump, autosampler (maintained at 4 °C), and column heater (maintained at 30 °C). The Q-ToF instrument was operated using MassHunter software, and an analysis in positive and negative ionization mode was performed for each sample. Separation was achieved using an InfinityLab Poroshell EC-C18 column (100 × 3.0 mm, 2.7 µm, Agilent) at a flow rate of 0.4 mL/min. Line A was water with 0.1% (v/v) formic acid and line B was methanol with 0.1% (v/v) formic acid, adapted from a previously described protocol96. The column was pre-equilibrated with 1% B. After injection (3 µL of the sample), this composition was held for 1 min and then changed to 30% B over the next 10 min using a linear gradient. The composition was then changed to 100% B over the next 14 min and then held at 100% B for 5 min.
The mobile phase was then adjusted back to 1% B over 2 min, and the column was re-equilibrated for 6 min prior to the next injection. The Agilent Q-ToF mass spectrometer was equipped with an Agilent Jet Stream source operated with the following parameters: Auto MS/MS mode; gas temperature, 325 °C; drying gas, 10 L/min; nebulizer, 20 psi; sheath gas temperature, 375 °C; sheath gas flow, 12 L/min; capillary voltage (VCap), 4000 V; nozzle voltage (Expt), 600 V; fragmentor, 175 V; skimmer, 65 V; Oct 1 RF Vpp, 750 V; mass range, 100–3000 m/z; acquisition rate, 10 spectra/s; time, 100 ms/spectrum. The MS/MS spectra (mass range, 50–3000 m/z; acquisition rate, 10 spectra/s; time, 100 ms/spectrum) were obtained by isolating the precursor ion with a medium isolation width (~4 m/z) and summing spectra generated with collision energies of 15, 30, and 40 V. Blanks and QC samples were run before and after every four serum samples to ensure system equilibration. Based on the reproducibility of our QC and on the intensity of the CUDA, we can assume that the instrument was stable during the fully randomized batch, and that intensity differences are due to biological differences and not to technical variation. LC-MS/MS data processing Raw data were imported into Progenesis QI software (Version 2.3, Nonlinear Dynamics, Waters) in order to perform data normalization, feature detection, peak alignment, and peak integration97,98,99. Metabolites were confirmed by MS, MS/MS fragmentation, and isotopic distribution using the Metlin (Version 1.0.6499.51447, https://metlin.scripps.edu) and Human Metabolome Database (Version March 2020, https://hmdb.ca) databases as references100. Data were acquired in both electrospray ionization (ESI) positive and negative modes: ESI+ yielded 7100 features with only MS information and 2461 features with both MS and MS/MS information; serum ESI− gave 2141 features with only MS information and 1204 features with both MS and MS/MS information.
Thus, a total of 3665 features with both MS and MS/MS information was obtained. Next, a metabolite was retained when the difference between observed and theoretical mass was <10 ppm, and the molecular formula of matched metabolites was further verified by the isotopic distribution measurement. By doing so, the number of annotated compounds with a known identification was reduced to 133 metabolites, which had match scores >35 (range 36.1–57.8) and isotope similarity between 67.8 and 99.1%. We chose to increase the confidence of our annotations rather than increase the number of annotated compounds with a lower level of confidence. Zero values were assigned minimal values calculated as three STDEV of technical variation subtracted from the minimal measured level of a given metabolite in this study. Technical variation was defined using CUDA and corresponded to an STDEV of 0.135 and a mean of 1.02. The level of metabolite identification was 2 for all compounds based on Sumner et al.101: level two refers to putatively annotated compounds (e.g., without chemical reference standards, based upon physicochemical properties and/or spectral similarity with public/commercial spectral libraries). AML-12 (ATCC CRL-2254) cells were grown in complete growth medium (DMEM:F12 Medium (ATCC 30-2006) supplemented with 10% fetal bovine serum (FBS), 10 µg/ml insulin, 5.5 µg/ml transferrin, 5 ng/ml selenium, 40 ng/ml dexamethasone, and 1% penicillin/streptomycin) at 37 °C in 5% CO2. After reaching 80–85% confluency, 20,000 cells per well were seeded in complete growth medium in a 96-well plate for 24 h. After 24 h of incubation, the medium was replaced either with low glucose medium (5.5 mM glucose, 10% FBS; low sugar group) or a mixture of 100 mM glucose and fructose (1:1 ratio, with 10% FBS; high sugar group), alone or mixed with 4, 6, or 9 mM reduced GSH ethyl ester (GSH, Sigma-Aldrich).
After 6 h of treatment, the culture medium was removed, cells were lysed in RLT buffer (Qiagen), and RNA was extracted using the RNeasy Mini kit (Qiagen). Total RNA was quantified using the Quant-iT RNA Assay Kit (Thermo Fisher Scientific). Complementary DNA was prepared using the qScript reverse transcription kit (Quantabio), and qPCR was performed using Perfecta SYBR mix (Quantabio) and the StepOne Plus Real Time PCR system and software (Applied Biosystems). The polymerase (Polr2c) gene was used as the control gene. Primers used for qPCR are listed in Supplementary Data 15b. In total, six experiments were performed. Gene expression was normalized using the control group per experiment and per gene across the experiments, followed by log2 transformation. Control and treatment groups were compared using a paired, one-sided parametric t test. Statistics and reproducibility Overall, the data were log transformed and checked for normality, and an appropriate test was performed accordingly (i.e., parametric tests as default and non-parametric tests when the distribution did not fulfill normality criteria), followed by Benjamini–Hochberg false discovery rate correction. A two-sided test was used when there was no prior hypothesis of the expected direction of change; otherwise, a one-sided test was used. For initial experiments, to capture the strongest and most consistent signals across independent experiments (e.g., WD vs. ND), non-parametric tests were used, and the meta-analysis was performed over experiments using Fisher's meta-analysis test. To achieve statistical power in the Lactobacilli supplementation experiments, the samples were normalized within each experiment to the mean of the control group and analyzed together using parametric tests for host-derived variables. Meta-analysis was performed over the microbiome data. Gene enrichment analysis was performed using Metascape software79, which implements a hypergeometric test.
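The hypergeometric test underlying such gene-set enrichment analyses can be sketched briefly. This is a minimal illustration of the statistic, not Metascape's actual implementation; the function name and example numbers are hypothetical.

```python
# Minimal sketch of a hypergeometric gene-set enrichment test, the
# statistic that tools such as Metascape build on (illustrative only).
from scipy.stats import hypergeom

def enrichment_p(universe, set_size, selected, overlap):
    """P(drawing >= `overlap` set members when `selected` genes are
    drawn without replacement from a universe of `universe` genes
    containing `set_size` set members)."""
    # sf(k - 1) gives P(X >= k) for the hypergeometric distribution
    return hypergeom.sf(overlap - 1, universe, set_size, selected)
```

For example, observing 10 hits from a 100-gene pathway among 200 selected genes in a 20,000-gene universe (where only ~1 hit is expected by chance) yields a very small p value, whereas a single hit does not.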
For metabolomics analysis, results of five Lactobacilli supplementations from three experiments were normalized over the corresponding controls with no probiotic supplementation. Log2-transformed ratios (lacto/control) for each metabolite were compared for deviation from 0 using a parametric test. In experiments with interrelated data from two groups (e.g., the AML-12 in vitro experiment), we used a paired test. Outliers (1%) were identified using the ROUT method of GraphPad Prism 8.4.1 and removed (used only once in the whole study; one value was removed for one concentration of GSH treatment). The actual tests and cutoffs applied are indicated in each figure caption; exact p values are available in the supplementary data and source data files. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data were submitted to NCBI SRA under submission PRJNA558801 for 16S rRNA, to GEO under GSE136033, and to Metabolomics Workbench under ST001436. TK network access: https://tinyurl.com/TK-NW-Fig-1C. Multi-omic network access: https://tinyurl.com/multi-omic-NW-Fig-5A. Source data are provided with this paper. Custom codes are available at https://github.com/richrr/TransNetDemo and https://github.com/fbauchinger/keystone_species_model. Gilbert, J. A. et al. Current understanding of the human microbiome. Nat. Med. 24, 392–400 (2018). Unnikrishnan, R., Pradeepa, R., Joshi, S. R. & Mohan, V. Type 2 diabetes: demystifying the global epidemic. Diabetes 66, 1432–1442 (2017). Ussar, S. et al. Interactions between gut microbiota, host genetics and diet modulate the predisposition to obesity and metabolic syndrome. Cell Metab. 22, 516–530 (2015). Lazar, V. et al. Gut microbiota, host organism, and diet trialogue in diabetes and obesity. Front Nutr. 6, 21 (2019). Brown, K., DeCoffe, D., Molcan, E. & Gibson, D. L. Diet-induced dysbiosis of the intestinal microbiota and the effects on immunity and disease. Nutrients 4, 1095–1119 (2012).
Devkota, S. et al. Dietary-fat-induced taurocholic acid promotes pathobiont expansion and colitis in Il10-/- mice. Nature 487, 104–108 (2012). Gould, A. L. et al. Microbiome interactions shape host fitness. Proc. Natl. Acad. Sci. USA 115, E11951–E11960 (2018). Carmody, R. N. et al. Diet dominates host genotype in shaping the murine gut microbiota. Cell Host Microbe 17, 72–84 (2015). Wang, C. Y. & Liao, J. K. A mouse model of diet-induced obesity and insulin resistance. Methods Mol. Biol. 821, 421–433 (2012). Hariri, N. & Thibault, L. High-fat diet-induced obesity in animal models. Nutr. Res. Rev. 23, 270–299 (2010). Donaldson, G. P., Lee, S. M. & Mazmanian, S. K. Gut biogeography of the bacterial microbiota. Nat. Rev. Microbiol. 14, 20–32 (2016). Human Microbiome Project Consortium. Structure, function and diversity of the healthy human microbiome. Nature 486, 207–214 (2012). Turnbaugh, P. J., Backhed, F., Fulton, L. & Gordon, J. I. Diet-induced obesity is linked to marked but reversible alterations in the mouse distal gut microbiome. Cell Host Microbe 3, 213–223 (2008). Zhao, L. et al. Gut bacteria selectively promoted by dietary fibers alleviate type 2 diabetes. Science 359, 1151–1156 (2018). Le Chatelier, E. et al. Richness of human gut microbiome correlates with metabolic markers. Nature 500, 541–546 (2013). Greer, R. L. et al. Akkermansia muciniphila mediates negative effects of IFNgamma on glucose metabolism. Nat. Commun. 7, 13329 (2016). Shulzhenko, N. et al. CVID enteropathy is characterized by exceeding low mucosal IgA levels and interferon-driven inflammation possibly related to the presence of a pathobiont. Clin. Immunol. 197, 139–153 (2018). Marquet, P. A. et al. Scaling and power-laws in ecological systems. J. Exp. Biol. 208, 1749–1769 (2005). Barabasi, A. L. & Oltvai, Z. N.
Network biology: understanding the cell's functional organization. Nat. Rev. Genet. 5, 101–113 (2004). Dong, X. et al. Reverse enGENEering of regulatory networks from big data: a roadmap for biologists. Bioinform. Biol. Insights 9, 61–74 (2015). Petriz, B. A. et al. Exercise induction of gut microbiota modifications in obese, non-obese and hypertensive rats. BMC Genomics 15, 511 (2014). Zheng, X. et al. Bile acid is a significant host factor shaping the gut microbiome of diet-induced obese mice. BMC Biol. 15, 120 (2017). Banerjee, S., Schlaeppi, K. & van der Heijden, M. G. A. Keystone taxa as drivers of microbiome structure and functioning. Nat. Rev. Microbiol. 16, 567–576 (2018). Berry, D. & Widder, S. Deciphering microbial interactions and detecting keystone species with co-occurrence networks. Front. Microbiol. 5, 219 (2014). Goodrich, J. K. et al. Genetic determinants of the gut microbiome in UK Twins. Cell Host Microbe 19, 731–743 (2016). Kristensen, N. B. et al. Alterations in fecal microbiota composition by probiotic supplementation in healthy adults: a systematic review of randomized controlled trials. Genome Med. 8, 52 (2016). Simon, M. C. et al. Intake of Lactobacillus reuteri improves incretin and insulin secretion in glucose-tolerant humans: a proof of concept. Diabetes Care 38, 1827–1834 (2015). Maldonado-Gomez, M. X. et al. Stable engraftment of Bifidobacterium longum AH1206 in the human gut depends on individualized features of the resident microbiome. Cell Host Microbe 20, 515–526 (2016). Zmora, N. et al. Personalized gut mucosal colonization resistance to empiric probiotics is associated with unique host and microbiome features. Cell 174, 1388–1405 e1321 (2018). Suez, J. et al. Post-antibiotic gut mucosal microbiome reconstitution is impaired by probiotics and improved by autologous FMT. Cell 174, 1406–1423 e1416 (2018). Zupancic, M. L. et al. Analysis of the gut microbiota in the Old Order Amish and its relation to the metabolic syndrome.
PLoS ONE 7, e43052 (2012). Wang, Y. et al. The intestinal microbiota regulates body composition through NFIL3 and the circadian clock. Science 357, 912–916 (2017). Szendroedi, J., Phielix, E. & Roden, M. The role of mitochondria in insulin resistance and type 2 diabetes mellitus. Nat. Rev. Endocrinol. 8, 92–103 (2011). Morino, K., Petersen, K. F. & Shulman, G. I. Molecular mechanisms of insulin resistance in humans and their potential links with mitochondrial dysfunction. Diabetes 55, S9–S15 (2006). Kim, J. A., Wei, Y. & Sowers, J. R. Role of mitochondrial dysfunction in insulin resistance. Circ. Res. 102, 401–414 (2008). Kahle, M. et al. High fat diet-induced modifications in membrane lipid and mitochondrial-membrane protein signatures precede the development of hepatic insulin resistance in mice. Mol. Metab. 4, 39–50 (2015). Mollica, M. P. et al. Butyrate regulates liver mitochondrial function, efficiency, and dynamics in insulin-resistant obese mice. Diabetes 66, 1405–1418 (2017). Rossignol, R. et al. Energy substrate modulates mitochondrial structure and oxidative capacity in cancer cells. Cancer Res. 64, 985–993 (2004). Cheng, Z. et al. Foxo1 integrates insulin signaling with mitochondrial function in the liver. Nat. Med. 15, 1307–1311 (2009). Orellana-Gavalda, J. M. et al. Molecular therapy for obesity and diabetes based on a long-term increase in hepatic fatty-acid oxidation. Hepatology 53, 821–832 (2011). Perry, R. J., Samuel, V. T., Petersen, K. F. & Shulman, G. I. The role of hepatic lipids in hepatic insulin resistance and type 2 diabetes. Nature 510, 84–91 (2014). Ricchi, M. et al. Differential effect of oleic and palmitic acid on lipid accumulation and apoptosis in cultured hepatocytes. J. Gastroenterol. Hepatol. 24, 830–840 (2009). Taylor, R., Al-Mrabeh, A. & Sattar, N. Understanding the mechanisms of reversal of type 2 diabetes. Lancet Diabetes Endocrinol. 7, 726–736 (2019). Yu, L. Q. et al.
Disruption of Abcg5 and Abcg8 in mice reveals their crucial role in biliary cholesterol secretion. Proc. Natl. Acad. Sci. USA 99, 16237–16242 (2002). Ishibashi, S., Schwarz, M., Frykman, P. K., Herz, J. & Russell, D. W. Disruption of cholesterol 7alpha-hydroxylase gene in mice. I. Postnatal lethality reversed by bile acid and vitamin supplementation. J. Biol. Chem. 271, 18017–18023 (1996). Aoyagi, K. et al. VAMP7 regulates autophagosome formation by supporting Atg9a functions in pancreatic beta-cells from male mice. Endocrinology 159, 3674–3688 (2018). Tigano, M. et al. Elongator-dependent modification of cytoplasmic tRNALysUUU is required for mitochondrial function under stress conditions. Nucleic Acids Res. 43, 8368–8380 (2015). Cai, J. et al. Induction of deubiquitinating enzyme USP50 during erythropoiesis and its potential role in the regulation of Ku70 stability. J. Investig. Med. 66, 1–6 (2018). Palombo, V. et al. Genome-wide association study of milk fatty acid composition in Italian Simmental and Italian Holstein cows using single nucleotide polymorphism arrays. J. Dairy Sci. 101, 11004–11019 (2018). Mari, M. et al. Mitochondrial glutathione, a key survival antioxidant. Antioxid. Redox Signal. 11, 2685–2700 (2009). Taylor, R. Calorie restriction for long-term remission of type 2 diabetes. Clin. Med. (Lond.) 19, 37–42 (2019). Franko, A. et al. Bezafibrate improves insulin sensitivity and metabolic flexibility in STZ-induced diabetic mice. Diabetes 65, 2540–2552 (2016). Crescenzo, R. et al. A possible link between hepatic mitochondrial dysfunction and diet-induced insulin resistance. Eur. J. Nutr. 55, 1–6 (2016). Belosludtsev, K. N. et al. Ca(2+)-dependent permeabilization of mitochondria and liposomes by palmitic and oleic acids: a comparative study. Biochim. Biophys. Acta 1838, 2600–2606 (2014). Malhi, H., Bronk, S. F., Werneburg, N. W. & Gores, G. J. Free fatty acids induce JNK-dependent hepatocyte lipoapoptosis. J. Biol. Chem. 281, 12093–12101 (2006). 
Garcia-Ruiz, I., Solis-Munoz, P., Fernandez-Moreira, D., Munoz-Yague, T. & Solis-Herruzo, J. A. In vitro treatment of HepG2 cells with saturated fatty acids reproduces mitochondrial dysfunction found in nonalcoholic steatohepatitis. Dis. Model Mech. 8, 183–191 (2015). Yao, D. et al. Fatty acid-mediated intracellular iron translocation: a synergistic mechanism of oxidative injury. Free Radic. Biol. Med. 39, 1385–1398 (2005). Yuan, L. et al. Palmitic acid dysregulates the Hippo-YAP pathway and inhibits angiogenesis by inducing mitochondrial damage and activating the cytosolic DNA sensor cGAS-STING-IRF3 signaling mechanism. J. Biol. Chem. 292, 15002–15015 (2017). Gurung, M. et al. Role of gut microbiota in type 2 diabetes pathophysiology. EBioMedicine 51, 102590 (2020). Cardoso, S. M. & Empadinhas, N. The microbiome-mitochondria dance in prodromal Parkinson's disease. Front. Physiol. 9, 471 (2018). Morgun, A. et al. Uncovering effects of antibiotics on the host and microbiota using transkingdom gene networks. Gut 64, 1732–1743 (2015). Han, B. et al. Microbial genetic composition tunes host longevity. Cell 169, 1249–1262 e1213 (2017). Franco-Obregon, A. & Gilbert, J. A. The microbiome-mitochondrion connection: common ancestries, common mechanisms, common goals. mSystems 2, e00017–e00018 (2017). Sedlak, T. W. et al. Bilirubin and glutathione have complementary antioxidant and cytoprotective roles. Proc. Natl. Acad. Sci. USA 106, 5171–5176 (2009). Mustafa, M. G., Cowger, M. L. & King, T. E. Effects of bilirubin on mitochondrial reactions. J. Biol. Chem. 244, 6403–6414 (1969). Gordon, D. M. et al. CRISPR Cas9-mediated deletion of biliverdin reductase A (BVRA) in mouse liver cells induces oxidative stress and lipid accumulation. Arch. Biochem. Biophys. 672, 108072 (2019). Vasavda, C. et al. Bilirubin links heme metabolism to neuroprotection by scavenging superoxide. Cell Chem. Biol. 26, 1450–1460 e1457 (2019). Shutt, T., Geoffrion, M., Milne, R. & McBride, H. M. 
The intracellular redox state is a core determinant of mitochondrial fusion. EMBO Rep. 13, 909–915 (2012). Hamon, E. et al. Comparative proteomic analysis of Lactobacillus plantarum for the identification of key proteins in bile tolerance. BMC Microbiol. 11, 63 (2011). Kullisaar, T. et al. Complete glutathione system in probiotic Lactobacillus fermentum ME-3. Prikladnaia biokhimiia i mikrobiologiia 46, 527–531 (2010). Luongo, D. et al. Differential modulation of innate immunity in vitro by probiotic strains of Lactobacillus gasseri. BMC Microbiol. 13, 298 (2013). Xin, J. G. et al. Preventing non-alcoholic fatty liver disease through Lactobacillus johnsonii BS15 by attenuating inflammation and mitochondrial injury and improving gut environment in obese mice. Appl. Microbiol. Biotechnol. 98, 6817–6829 (2014). Tunapong, W. et al. Chronic treatment with prebiotics, probiotics and synbiotics attenuated cardiac dysfunction by improving cardiac mitochondrial dysfunction in male obese insulin-resistant rats. Eur. J. Nutr. 57, 2091–2104 (2018). Nakagawa, H. et al. Effects and mechanisms of prolongevity induced by Lactobacillus gasseri SBT2055 in Caenorhabditis elegans. Aging cell 15, 227–236 (2016). Panwar, H., Rashmi, H. M., Batish, V. K. & Grover, S. Probiotics as potential biotherapeutics in the management of type 2 diabetes - prospects and perspectives. Diabetes Metab. Res. Rev. 29, 103–112 (2013). Tripathy, S., Torres-Gonzalez, M. & Jump, D. B. Elevated hepatic fatty acid elongase-5 activity corrects dietary fat-induced hyperglycemia in obese C57BL/6J mice. J. Lipid Res. 51, 2642–2654 (2010). Trapnell, C., Pachter, L. & Salzberg, S. L. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics 25, 1105–1111 (2009). Anders, S., Pyl, P. T. & Huber, W. HTSeq–a Python framework to work with high-throughput sequencing data. Bioinformatics 31, 166–169 (2015). Zhou, Y. et al. Metascape provides a biologist-oriented resource for the analysis of systems-level datasets. 
Nat. Commun. 10, 1523 (2019). Caporaso, J. G. et al. QIIME allows analysis of high-throughput community sequencing data. Nat. Methods 7, 335–336 (2010). Edgar, R. C. Search and clustering orders of magnitude faster than BLAST. Bioinformatics 26, 2460–2461 (2010). DeSantis, T. Z. et al. Greengenes, a chimera-checked 16S rRNA gene database and workbench compatible with ARB. Appl. Environ. Microbiol. 72, 5069–5072 (2006). McDonald, D. et al. An improved Greengenes taxonomy with explicit ranks for ecological and evolutionary analyses of bacteria and archaea. ISME J. 6, 610–618 (2012). Morgulis, A. et al. Database indexing for production MegaBLAST searches. Bioinformatics 24, 1757–1764 (2008). Paulson, J. N., Stine, O. C., Bravo, H. C. & Pop, M. Differential abundance analysis for microbial marker-gene surveys. Nat. Methods 10, 1200–1202 (2013). Metsalu, T. & Vilo, J. ClustVis: a web tool for visualizing clustering of multivariate data using principal component analysis and heatmap. Nucleic Acids Res. 43, W566–W570 (2015). Greer, R., Dong, X., Morgun, A. & Shulzhenko, N. Investigating a holobiont: Microbiota perturbations and transkingdom networks. Gut Microbes 7, 126–135 (2016). Rodrigues, R. R. et al. Antibiotic-induced alterations in gut microbiota are associated with changes in glucose metabolism in healthy mice. Front. Microbiol. 8, 2306 (2017). Rodrigues, R. R., Shulzhenko, N. & Morgun, A. Transkingdom networks: a systems biology approach to identify causal members of host-microbiota interactions. Methods Mol. Biol. 1849, 227–242 (2018). Yambartsev, A. et al. Unexpected links reflect the noise in networks. Biol. Direct 11, 52 (2016). Klemm, K. & Eguiluz, V. M. Highly clustered scale-free networks. Phys. Rev. E Stat. Nonlin Soft Matter Phys. 65, 036123 (2002). Faust, K. et al. Signatures of ecological processes in microbial community time series.
Microbiome 6, 120 (2018). Fortunato, F., Hackert, T., Buchler, M. W. & Kroemer, G. Retrospective electron microscopy: preservation of fine structure by freezing and aldehyde fixation. Mol. Cell Oncol. 3, e1251382 (2016). Paridaen, J. T., Wilsch-Brauninger, M. & Huttner, W. B. Asymmetric inheritance of centrosome-associated primary cilium membrane directs ciliogenesis after cell division. Cell 155, 333–344 (2013). Wikoff, W. R. et al. Metabolomics analysis reveals large effects of gut microflora on mammalian blood metabolites. Proc. Natl. Acad. Sci. USA 106, 3698–3703 (2009). Kirkwood, J. S. et al. Vitamin C deficiency activates the purine nucleotide cycle in zebrafish. J. Biol. Chem. 287, 3833–3841 (2012). Housley, L. et al. Untargeted metabolomic screen reveals changes in human plasma metabolite profiles following consumption of fresh broccoli sprouts. Mol. Nutr. Food Res. 62, e1700665 (2018). Garcia-Jaramillo, M. et al. Lipidomic and transcriptomic analysis of western diet-induced nonalcoholic steatohepatitis (NASH) in female Ldlr -/- mice. PLoS ONE 14, e0214387 (2019). Tabassum, R. et al. Genetic architecture of human plasma lipidome and its link to cardiovascular disease. Nat. Commun. 10, 4329 (2019). Wishart, D. S. et al. HMDB: the human metabolome database. Nucleic Acids Res. 35, D521–D526 (2007). Sumner, L. W. et al. Proposed minimum reporting standards for chemical analysis Chemical Analysis Working Group (CAWG) Metabolomics Standards Initiative (MSI). Metabolomics 3, 211–221 (2007). We thank Teresa Sawyer from the OSU Electron microscopy core for excellent service, Drs. Jodi Nunnari, Chrissa Kioussi, Matthew Robinson for advice regarding mitochondria, Claudia S.
Maier, Director of the OSU Mass Spectrometry Center for providing access to diverse software tools and advice regarding metabolomics, and the Laboratory Animal Resource Center (LARC) and Center of Genome Research Biocomputing (CGRB) at OSU for technical support. This work was supported by NIH R01 DK103761 (N.S.), DK112360 (D.B.J.), and European Research Council starting grant FunKeyGut 741623 (D. Berry and F.B.). These authors contributed equally: Richard R. Rodrigues, Manoj Gurung, Zhipeng Li. These authors jointly supervised this work: Andrey Morgun, Natalia Shulzhenko. College of Pharmacy, Oregon State University, Corvallis, OR, USA Richard R. Rodrigues, Benjamin Philmus & Andrey Morgun Veterinary Medicine, Oregon State University, Corvallis, OR, USA Manoj Gurung, Zhipeng Li, Renee Greer, Hyekyoung You, Jacob W. Pederson, Stephany Vasquez-Perez, Kimberly D. White, Briana Frink & Natalia Shulzhenko College of Public Health and Human Sciences, Oregon State University, Corvallis, OR, USA Manuel García-Jaramillo & Donald B. Jump College of Science, Oregon State University, Corvallis, OR, USA Christopher Gaulke & Thomas J. Sharpton Department of Microbiology and Ecosystem Science, University of Vienna, Vienna, Austria Franziska Bauchinger & David Berry Cancer and Inflammation Program, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA Giorgio Trinchieri & Amiran Dzutsev Original idea and overall study design: A.M., N.S. Design of individual experiments: R.R.R., M.G., Z.L., M.G.J., R.G., B.P., D.B.J., G.T., A.D., A.M., N.S. Data generation: M.G., Z.L., M.G.J., R.G., H.Y., J.W.P., C.G., S.V.P., K.D.W., B.F., B.P., D.B.J., A.D.
Data analysis: R.R.R., M.G., Z.L., M.G.J., F.B., T.J.S., C.G., B.P., D.B.J., G.T., D.B., A.D., A.M., N.S. Drafting manuscript: R.R.R., M.G., Z.L., M.G.J., F.B., B.P., A.M., N.S. Editing manuscript: T.J.S., B.P., D.B.J., G.T., D.B., A.M., N.S. Supervision of specific sets of experiments and/or series of data analyses: T.J.S., B.P., D.B.J., G.T., D.B., A.M., N.S. Overall study leadership: A.M., N.S. Correspondence to Andrey Morgun or Natalia Shulzhenko. Peer review information Nature Communications thanks Nicholas Chia and the other anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Peer Review File Description of Additional Supplementary Files Supplementary Data 1-16 Rodrigues, R.R., Gurung, M., Li, Z. et al. Transkingdom interactions between Lactobacilli and hepatic mitochondria attenuate western diet-induced diabetes. Nat. Commun. 12, 101 (2021). https://doi.org/10.1038/s41467-020-20313-x Nature Communications (Nat Commun) ISSN 2041-1723 (online)
CommonCrawl
SpringerPlus Biological potency and characterization of antibacterial substances produced by Lactobacillus pentosus isolated from Hentak, a fermented fish product of North-East India Chirom Aarti1, Ameer Khusro1, Mariadhas Valan Arasu2, Paul Agastian1 & Naïf Abdullah Al-Dhabi2 SpringerPlus volume 5, Article number: 1743 (2016) Lactic acid bacteria (LAB) isolated from various foods are important due to their potential to inhibit microorganisms, including drug-resistant bacteria. The objectives of this investigation were to isolate and identify antibacterial substance-producing LAB from Hentak, a traditional fermented fish product of Manipur (North-East India), and to optimize the production of antagonistic substances present in cell free neutralized supernatant (CFNS) against enteric bacterial pathogens using the 'one factor at a time' (OFAT) method. Out of 10 LAB, the most potent bacterium producing antibacterial substances was isolated and identified as Lactobacillus pentosus strain LAP1 based upon morphological, biochemical and molecular characterization. MRS (de Man, Rogosa and Sharpe) medium was determined to provide better bactericidal activity (AU/ml) than other tested media against the indicator enteric bacteria, including Staphylococcus epidermidis MTCC 3615, Micrococcus luteus MTCC 106, Shigella flexneri MTCC 1457, Yersinia enterocolitica MTCC 840 and Proteus vulgaris MTCC 1771. The culture conditions (pH: 5, temperature: 30 °C and inoculum volume: 1 %) and medium components (carbon source: lactose and nitrogen source: ammonium chloride) were observed to be the most influential parameters of significant antagonistic activity of CFNS against the enteric pathogens. MRS medium supplemented with Tween20 effectively stimulated the yield of antibacterial substances.
The CFNS of strain LAP1 was sensitive to proteolytic enzyme (pepsin) treatment and to heat treatment (60 °C for 60 min, 100 °C for 30 min and 121 °C for 15 min), losing its inhibitory properties. The CFNS was active at an acidic (pH 3.0) to neutral pH (pH 7.0) but lost its antagonistic properties at an alkaline pH. The CFNS obtained from strain LAP1 scavenged DPPH (2,2-diphenyl-1-picrylhydrazyl) significantly in a concentration-dependent manner within the range of 8.8 ± 0.12–57.35 ± 0.1 %. The OFAT-based approach provided a baseline for statistical optimization, scale-up and efficient production of CFNS by L. pentosus strain LAP1, which could be used as a potential antibacterial and free radical scavenging agent. Fermented foods are among the essential constituents of the human diet. Fermented food products are considered a good source of industrially important microorganisms (Rejiniemon et al. 2015; Jagadeesh 2015; Ilavenil et al. 2015). Similar to other states in North-East India, Manipur has a rich tradition in food processing and preservation technologies. Fermented foods of aquatic origin are still widely prepared and consumed in Manipur. Hentak is a highly consumed fermented fish product in Manipur and is mainly prepared at the household level in a cost-effective manner. However, there is a lack of knowledge regarding the bacteria involved in the increased shelf life of these products and the health benefits of those bacteria in humans. Therefore, an attempt was made to isolate bacteria that produce antibacterial substances from Hentak and to characterize their inhibitory activity against human enteric pathogens.
Probiotics are non-pathogenic microorganisms known to compete with pathogens for available space by secreting lytic enzymes, organic acids and bacteriocins, and to inhibit the growth of pathogens by disrupting their virulent gene expression, attachment and cell-to-cell communication (Pena et al. 2007; Ravi et al. 2007; Verschuere et al. 2000). The term 'probiotic', although widely adopted, is not accepted by the European Food Safety Authority because it embeds a health claim that is not measurable. Lactic acid bacteria (LAB) or probiotics from fermented foods are major resources for antimicrobial biosynthesis. Gram positive and non-sporulating bacteria play a prominent role in the production of growth inhibitory substances. LAB are safe and play an important role in food fermentation and preservation. The genus Lactobacillus belongs to the lactic acid bacteria that are rod shaped, Gram positive and non-spore forming. Lactobacillus pentosus is a lactic acid bacterium commonly used as a starter culture for the fermentation process (Ruiz-Barba et al. 1994). Certain strains of L. pentosus exert probiotic properties, improve mucosal immunity and create resistance towards bacterial infections (Kotani et al. 2010; Izumo et al. 2011). Environmental factors such as pH, temperature and medium composition can influence the production of antagonistic substances from lactic acid bacteria. Several reports have discussed antibacterial components, especially bacteriocin production by LAB and its optimization by altering several physical factors and medium composition (Parente et al. 1994; Moortvedt-Abildgaard et al. 1995; De Vuyst and Vandamme 1992). Lactic acid bacteria and their specific components could be an eco-friendly antibacterial substitute for synthetic antibiotics. There is a continuous effort by worldwide researchers to optimize culture conditions and other parameters for the efficient production of antibacterial components from LAB that mitigate the growth of human pathogens.
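The 'one factor at a time' (OFAT) scheme used for such optimization can be expressed as a simple loop: each factor is varied over its candidate levels while the others are held at their current values, and the level with the highest response is retained before moving to the next factor. A minimal sketch, in which the `measure_activity` callback is a hypothetical stand-in for the wet-lab assay:

```python
def ofat_optimize(factors, baseline, measure_activity):
    """One-factor-at-a-time optimization.

    factors: dict mapping factor name -> list of candidate levels.
    baseline: dict mapping factor name -> starting level.
    measure_activity: callable taking a settings dict and returning a
        response value (e.g. antibacterial activity in AU/ml).
    Returns the settings chosen factor by factor.
    """
    best = dict(baseline)
    for name, levels in factors.items():
        scores = {}
        for level in levels:
            trial = dict(best, **{name: level})  # vary one factor only
            scores[level] = measure_activity(trial)
        best[name] = max(scores, key=scores.get)  # fix the best level
    return best
```

Note that OFAT fixes each factor greedily and so, unlike a full factorial design, cannot detect interactions between factors; the study uses it as a baseline for later statistical optimization.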
Therefore, in light of the growing demand for antibacterial substances for therapeutic applications, the present study was undertaken to investigate the influence of various culture conditions and medium components on the production of CFNS by a 'one factor at a time' (OFAT)-based approach using lactic acid bacteria isolated from Hentak. Small freshwater fish, Ngasang (Esomus danricus), were smoked and sun dried until they crumbled. The petioles of an aroid plant, Khonagu (Alocasia macrorhiza), were cut into small pieces, washed with water and sun dried for an hour. The crumbled fish powder was crushed with plant material in a 1:1 ratio using a stone mortar and pestle to make a paste. The mixture was kneaded with clean hands to produce ball-shaped pieces, and fermentation was allowed by keeping the mixture at room temperature for 5–6 days in an earthen pot containing a thin layer of banana leaves. The ball-shaped pieces were taken out from the pot and mixed with onions and mustard oil. The mixture was kneaded again using a stone mortar and pestle and made into a ball shape. The ball-shaped pieces were kept again inside the earthen pot containing banana leaves for 2–3 days. The fermented non-salted fish product, Hentak, was brought to the laboratory for the bacterial isolation process. The involvement of fish in the experiments was approved by the Government of India Ethical Committee (IAEC-LC 05/13). Isolation of lactic acid bacteria One gram of Hentak was ground with sterilized distilled water using a mortar and pestle cleaned with ethanol (95 % w/v). The mixture was centrifuged at 8000×g for 15 min in order to remove heavy particles, and the supernatant was collected.
The supernatant was serially diluted (10⁻¹–10⁻⁵) for bacterial enumeration, and 1 ml of the suspension was poured onto sterilized MRS agar (g/l—proteose peptone 10.0, beef extract 10.0, yeast extract 5.0, dextrose 20.0, polysorbate 80 1.0, ammonium citrate 2.0, sodium acetate 5.0, magnesium sulfate 0.1, manganese sulfate 0.05, dipotassium phosphate 2.0, pH 6.5, agar 18) plates. After spreading the suspension, the plates were incubated at 30 °C for 48 h. The total number of viable colonies was counted and expressed as colony forming units (CFU/ml). Based upon morphology, various colonies were selected for the isolation of pure bacterial cultures on MRS agar slants. Bacteria of interest Indicator bacterial strains (human enteric pathogens) were collected from the Department of Plant Biology and Biotechnology, Loyola College, Chennai, India. Both Gram positive (Staphylococcus epidermidis MTCC 3615, Staphylococcus aureus MTCC 96, Enterococcus faecalis MTCC 439 and Micrococcus luteus MTCC 106) and Gram negative (Shigella flexneri MTCC 1457, Yersinia enterocolitica MTCC 840, Enterobacter aerogenes MTCC 111 and Proteus vulgaris MTCC 1771) bacteria were used for the present study. The indicator bacterial cultures were sub-cultured selectively onto basal media (nutrient broth for Gram positive and Mueller–Hinton broth for Gram negative bacteria, pH 7.0) at 37 °C for further study. A 24 h old bacterial culture was used for further experiments. Assay for antibacterial substance production The isolated lactic acid bacteria were screened individually for the production of antagonistic substances. The lactic acid bacteria were inoculated individually into sterilized MRS broth and incubated for 48 h at 30 °C. The indicator microorganisms were inoculated into Nutrient broth and Mueller–Hinton broth for 24 h at 37 °C and swabbed onto Mueller–Hinton agar (MHA) plates. Agar plates were punched using a sterilized, flamed and alcohol-dipped cork borer, and 5 mm wells were created.
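The plate-count arithmetic behind the CFU/ml values reported in this study is straightforward; a minimal sketch (the function name is ours, and the 1 ml plated volume follows the protocol above):

```python
def cfu_per_ml(colonies, dilution, volume_ml=1.0):
    """Convert a colony count on one plate to CFU/ml of the original sample.

    colonies: number of colonies counted on the plate.
    dilution: dilution plated, e.g. 1e-3 for the 10^-3 tube.
    volume_ml: volume spread on the plate (1 ml in this protocol).
    """
    return colonies / (dilution * volume_ml)
```

For example, 209 colonies on a 10⁻³ plate correspond to about 2.09 × 10⁵ CFU/ml of the original suspension.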
The lactic acid bacterial cultures were centrifuged at 8000×g for 10 min, and the culture supernatant was subjected to membrane filtration (0.22 µm). The sterilized cell free supernatant was neutralized (pH 7.0) using 1 N NaOH in order to exclude the antibacterial effect of organic acids in the medium. The cell free neutralized supernatant (CFNS) was treated individually with catalase (Sigma, India; 1 mg/ml) and incubated at 37 °C for 2 h in order to eliminate the inhibitory effect of hydrogen peroxide. After catalase treatment, the CFNS obtained from lactic acid bacteria was then assayed for antibacterial activity against indicator bacteria using the agar well diffusion method. The growth inhibitory activity was expressed in arbitrary units (AU/ml). One AU was defined as the reciprocal of the highest level of dilution resulting in a clear zone of growth inhibition (Bhaskar et al. 2007). Identification and molecular characterization of the isolate The potent bacterium was identified using morphological and biochemical tests and further characterized using molecular tools. The genomic DNA of the potential isolate was isolated and purified using a QIAquick® kit (Qiagen Ltd., Crawley, UK). The amplicon sequencing was performed using universal primers 27F (5′ AGA GTT TGA TCG TGG CTC AG 3′) and 1492R (3′ GCT TAC CTT GTT ACG ACT T 5′). The 16S rRNA sequence of the isolate was subjected to BLAST, NCBI. Then, the sequence of the isolate was deposited into NCBI GenBank, and an accession number was assigned. The potential isolate was used for further experiments. Media optimization Lactobacillus pentosus strain LAP1 was inoculated individually into 250 ml conical flasks containing sterile production media (50 ml each), including Nutrient broth, Mueller–Hinton broth, Luria–Bertani broth, MRS broth and Peptone broth, to compare the production of antibacterial substances. The flasks were incubated at 30 °C for 48 h in an orbital shaker (120 rpm).
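The arbitrary-unit definition above (one AU is the reciprocal of the highest dilution still producing a clear inhibition zone) can be sketched for a two-fold serial dilution series as follows; the 50 µl well volume is an assumption of ours, since the paper does not state it:

```python
def au_per_ml(inhibition, well_volume_ul=50.0):
    """AU/ml from a two-fold serial dilution series of the CFNS.

    inhibition: list of booleans; index i holds the result for the
        2^-i dilution (index 0 = undiluted supernatant).
    well_volume_ul: volume loaded per agar well (assumed, not stated
        in the paper).
    """
    # highest dilution step that still inhibits the indicator strain
    last = max((i for i, hit in enumerate(inhibition) if hit), default=None)
    if last is None:
        return 0.0  # no inhibition at any dilution
    # reciprocal of the dilution, scaled to activity per ml
    return (2 ** last) * (1000.0 / well_volume_ul)
```

With 50 µl wells, inhibition up to the 1:8 dilution gives 8 × 20 = 160 AU/ml, on the order of the values reported in the Results.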
The CFNS was obtained, and the arbitrary units (AU/ml) were estimated as described above against the indicator bacteria. Optimization of culture conditions and medium components using the OFAT method The suitable production medium was optimized with respect to various culture conditions (pH, temperature and inoculum volume) and medium components (carbon sources and nitrogen sources) using the OFAT method in a series of experiments. The fermentation conditions and medium components were varied one at a time while keeping the other factors constant in the production medium. The antibacterial substance production by strain LAP1 was examined by adjusting the pH (4, 5, 6, 7 and 8) of the production medium using 1 N HCl and 1 N NaOH. Similarly, the production of antagonistic substances with strain LAP1 was optimized by varying the incubation temperature (20–70 °C) and inoculum volume (0.5–2 %) at the optimized pH. Likewise, the various media components such as carbon sources (maltose, fructose, sucrose, lactose, and xylose individually at 1.0 % w/v) and nitrogen sources (ammonium acetate, ammonium chloride, ammonium nitrate, ammonium sulphate and sodium nitrate individually at 0.5 % w/v) were substituted in the production medium in order to achieve maximum production of antibacterial substances. An appropriate control medium was also maintained. All of the flasks were aseptically inoculated with the isolate and kept in an orbital shaker (120 rpm) for 48 h. The CFNS was collected after centrifugation at 8000×g for 10 min, followed by membrane (0.22 μm) filtration of the supernatant and neutralization. The CFNS was then examined for antibacterial activity (AU/ml) against the most susceptible indicator bacteria as described above. Effect of supplements on antibacterial substance production The production of antibacterial substances by L. pentosus strain LAP1 was assessed under optimized culture conditions in the suitable production medium supplemented with Tween20 (1 % v/v), Tween40 (1 % v/v), Tween80 (1 % v/v) and glycerol (1 % v/v). An appropriate control medium was also maintained, and the antagonistic activity of CFNS was determined as described above using the most susceptible indicator organisms. Characterization of CFNS The CFNS from strain LAP1 was characterized with respect to pH, heat treatment and proteolytic enzymes. The stability of CFNS at different pH values (pH 3, 5, 7, 8 and 10) was tested by adjusting the pH of the supernatant with either 1 N HCl or 1 N NaOH. The adjusted supernatants were incubated for 4 h at room temperature, and the activity was calculated using indicator bacteria. The CFNS of the isolate was subjected to heat treatment at temperatures of 60 °C for 60 min, 100 °C for 30 min, and autoclaving (121 °C/15 min). CFNS and H2O2-eliminated CFS (cell-free supernatant) without any heat treatment served as a control. Aliquots of each treatment were taken after the required incubation period, and the activity of heat treated CFNS was determined against indicator bacteria as described earlier using the agar well diffusion method. Similarly, the sensitivity of inhibitory substances produced by the isolate to a proteolytic enzyme, pepsin (1 mg/ml), was determined. The reaction mixtures were then incubated at 37 °C for 1 h, and the antagonistic activity of the supernatant was determined as described above. Determination of DPPH free radical scavenging activity The DPPH (2,2-diphenyl-1-picrylhydrazyl) assay is one of the most commonly used methods to detect free radical scavenging activity. The DPPH scavenging assay for the CFNS of strain LAP1 was measured by the method of Chen et al. (2005) with some modifications. Various concentrations (100–1000 µl) of CFNS were mixed with 1 ml of 0.05 mM DPPH solution.
The reaction was incubated in the dark at room temperature for 30 min. DPPH solution was used as a control, and a combination of CFNS and methanol was used as the blank. The DPPH scavenging capacity of the CFNS of the isolate was calculated by measuring the decrease in absorbance at 517 nm compared to the control. The DPPH scavenging capacity was calculated as: $$\text{DPPH scavenging capacity}\ (\%) = \left[\frac{A_{\text{sample}} - A_{\text{blank}}}{A_{\text{control}}}\right] \times 100$$ All of the experiments were performed in triplicate, and the data were expressed as mean ± SD using MS-Excel. Countable colonies of lactic acid bacteria were observed from dilutions of 10⁻³–10⁻⁵. The lactic acid bacteria from Hentak ranged from 209.0 ± 5.03 to 85.0 ± 4.0 CFU/ml at 10⁻³–10⁻⁵ dilutions, respectively. In the present study, 10 lactic acid bacteria were isolated on MRS agar plates based upon distinct morphologies (data not shown). Screening for antibacterial substance production Screening for potential antagonistic activity of all the isolates against the indicator bacteria was performed using the agar well diffusion assay. Twenty percent of the isolates were found to be effective against most of the indicator bacteria. Based upon the diameter of the zone of inhibition shown by the catalase-treated CFNS of the most potent isolate, susceptible bacteria, including Staphylococcus epidermidis, Micrococcus luteus, Shigella flexneri, Yersinia enterocolitica and Proteus vulgaris, were selected for further optimization (data not shown). The most potent bacterium underwent morphological identification, biochemical property characterization and molecular characterization using 16S rRNA sequencing (data not shown).
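The scavenging-capacity formula given in the Methods translates directly to code; the absorbance values in the example below are hypothetical:

```python
def dpph_scavenging(a_sample, a_blank, a_control):
    """DPPH scavenging capacity (%) exactly as defined in the Methods:
    [(A_sample - A_blank) / A_control] * 100, absorbances read at 517 nm."""
    return (a_sample - a_blank) / a_control * 100.0
```

For instance, hypothetical absorbances of 0.45 (sample), 0.05 (blank) and 0.70 (control) give a scavenging capacity of about 57.1 %, comparable to the upper end of the range reported below.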
An amplicon of 1519 bp was obtained by PCR amplification and sequencing. The sequence was subjected to a multiple sequence alignment using the BLAST analysis of NCBI. The 16S rRNA sequence showed a homology of 100 % with L. pentosus. The sequence was deposited in GenBank, maintained by NCBI, USA (Accession No: KU945826), and the organism was identified as L. pentosus strain LAP1. Strain LAP1 was cultured in various media in order to ensure the maximum production of antibacterial substances. MRS broth was found to be the most favourable medium for the maximal production of antagonistic substances (98.7 ± 1.1–163.3 ± 2.13 AU/ml). The other production media resulted in lower yields of antibacterial constituents than MRS medium (Fig. 1). The antibacterial substance yield of various media was as follows: MRS broth > Mueller–Hinton broth > Nutrient broth > Luria–Bertani broth > Peptone broth. Peptone broth was found to be the least effective medium for antibacterial substance production from strain LAP1, ranging from 30.4 ± 2.33 to 12.1 ± 2.31 AU/ml against the most susceptible indicator bacteria. Effect of various media on the production of antibacterial substances by strain LAP1. MRS broth favours the increased production of antibacterial substances. Each point represents the mean ± standard error of three independent experiments Optimization of culture conditions and medium components Subsequent investigation was carried out to optimize the production of antibacterial substances (AU/ml) from strain LAP1 using the OFAT method. The culture conditions, such as pH and temperature, were optimized for maximum production of growth inhibitory substances. The production of antibacterial components was enhanced by adjusting the pH of the MRS broth. Among the tested pH values, the maximum production in terms of antagonistic activity was recorded at pH 5.0 and ranged from 166.6 ± 1.65 to 240.5 ± 3.18 AU/ml.
However, a further decrease or increase of pH was found to reduce the production of antibacterial substances significantly. The minimum production was recorded at pH 8.0 and ranged from 84.1 ± 2.08 to 121.4 ± 2.17 AU/ml against the control range (pH 7.0) of 96.7 ± 1.67 to 164.3 ± 3.08 AU/ml (Fig. 2). Effect of pH on the production of antibacterial substances by strain LAP1. An acidic pH (pH 5) favours the increased production of antibacterial substances from the isolate. Each point represents the mean ± standard error of three independent experiments Figure 3 shows the effect of incubation temperature on antibacterial substance production from strain LAP1. The maximum production of 175.6 ± 2.34 to 245.5 ± 2.41 AU/ml was recorded at 30 °C, and a temperature lower or higher than 30 °C markedly decreased the production of antibacterial substances. The minimum yield was within the range of 18.3 ± 2.08 to 22.1 ± 2.17 AU/ml at 70 °C relative to the control range. Effect of various incubation temperatures on the production of antibacterial substances by strain LAP1. The isolate showed enhanced antibacterial substance production at 30 °C. Each point represents the mean ± standard error of three independent experiments Different inoculum volumes of strain LAP1 did not show any significant effect on the antagonistic activity of CFNS obtained against the indicator enteric bacteria (Fig. 4). The antibacterial substance production was higher (168.4 ± 2.41 to 305.4 ± 2.43 AU/ml) at the 1 % inoculum level. However, no further increase in production was observed at lower (0.5 %) or higher (2 %) inoculum volumes. Effect of inoculum volume on the production of antibacterial substances by strain LAP1. Inoculum volumes lower than or greater than 1 % did not have a large effect on antibacterial substance production.
Each point represents the mean ± standard error of three independent experiments Strain LAP1 produced growth inhibitory components at a higher level (178.3 ± 2.41–310.4 ± 2.43 AU/ml) when the carbon source of MRS medium was replaced with lactose. On the other hand, the minimum antagonistic activity (41.3 ± 1.67–54.2 ± 3.08 AU/ml) was observed in xylose-supplemented medium, compared to the control MRS medium range of 174.6 ± 1.23 to 244.5 ± 2.43 AU/ml (Fig. 5). Similar to the carbon source, the nitrogen source also favoured the optimal production of antagonistic substances from strain LAP1 (Fig. 6). The production of antibacterial substances from the isolate was higher (164.3 ± 1.65–302.3 ± 3.18 AU/ml) in the presence of ammonium chloride. However, the minimum production (98.3 ± 2.34–162.3 ± 2.41 AU/ml) was obtained in ammonium nitrate-supplemented medium, compared to the control range (175.6 ± 1.1–240.5 ± 2.13 AU/ml). Effect of various substrates as carbon sources (% w/v) on the production of antibacterial substances by strain LAP1. When substituted for dextrose (the original carbon source of MRS medium; control), lactose favours the maximum production of antibacterial substances from the isolate. Each point represents the mean ± standard error of three independent experiments Effect of various nitrogen sources (% w/v) on the production of antibacterial substances by strain LAP1. Ammonium chloride addition increased antibacterial substance production when substituted for ammonium citrate (control). Each point represents the mean ± standard error of three independent experiments Effect of supplements MRS medium supplemented with Tween20, Tween40, Tween80 and glycerol markedly affected the production of antibacterial substances by the candidate bacterium. The largest amount of antibacterial components (272.2 ± 1.65–472.3 ± 3.18 AU/ml) was produced in the MRS medium supplemented with Tween20 compared to the other tested supplements.
Incorporation of Tween40, Tween80 and glycerol decreased the antagonistic activity of CFNS compared to the control range (Fig. 7). Effect of various supplements (% v/v) on the production of antibacterial substances by strain LAP1. MRS medium supplemented with Tween20 resulted in enhanced antibacterial substance production compared to control (MRS medium without any supplements). Each point represents the mean ± standard error of three independent experiments Characterization of the CFNS of strain LAP1 The stability of the catalase-treated CFNS of strain LAP1 at different pH and temperatures and in the presence of proteolytic enzymes is presented in Table 1. The antibacterial substances showed activity (AU/ml) at pH 3, 5 and 7 (control). However, elevating the pH toward alkaline conditions diminished the antagonistic activity of the CFNS against the indicator bacteria. The CFNS of the isolated strain did not show any antagonistic activity against the indicator bacteria at pH >7.0. Heating the CFNS of strain LAP1 at 60 °C for 60 min, 100 °C for 30 min, and 121 °C for 15 min completely abolished the antagonistic activity of the bacteriocin against all of the indicator bacteria tested. Likewise, all of the potential proteinaceous components present in the CFNS of strain LAP1 were completely inactivated by pepsin, resulting in the disappearance of the zone of inhibition on the agar plates inoculated with indicator bacteria. Table 1 Characterization of bacteriocin from Lactobacillus pentosus strain LAP1 against most susceptible indicator bacteria DPPH free radical scavenging activity Figure 8 shows the antioxidant activity of the CFNS of the isolate using DPPH free radicals. The scavenging potential of the cell free neutralized supernatant of the isolate increased significantly in a dose-dependent manner (100–1000 µl).
The antibacterial substance showed DPPH scavenging activity in the range of 8.8 ± 0.12–57.35 ± 0.1 % when compared to ascorbic acid (60.2 ± 0.11–92.1 ± 0.8 %). Effect of various concentrations (100–1000 µl) of the CFNS of strain LAP1 on DPPH scavenging activity. The results were compared with the free radical scavenging potential of the control (vitamin C). Each point represents the mean ± standard error of three independent experiments The LAB produce a variety of antibacterial substances, including bacteriocins and bacteriocin-like components that inhibit the growth of pathogenic bacteria (Yasmeen et al. 2015; Ekhay et al. 2013). The isolation and screening of bacteria from natural sources is a successful way to obtain strains with valuable medical applications (Yang et al. 2012). All of the isolates in the preliminary study revealed varying degrees of antagonistic activity against indicator organisms by secreting different types of antibacterial substances. The growth inhibition of indicator bacteria by catalase-treated CFNS provided evidence that the antagonistic activity might be due to the production of antibacterial components (Yasmeen et al. 2015). Previous studies have reported extensively on the dominance of LAB in fermented foods such as meat, fish, fruits, vegetables and dairy products (Grosu-Tudor et al. 2014; Hwanhlem et al. 2011). Media play a very important role in the successful isolation of lactic acid bacteria and in maximizing the production of antibacterial substances from LAB. In the present context, strain LAP1 produced the maximum amount of inhibitory components in MRS medium. Our study supports earlier reports, which suggested that MRS medium was a better medium for the growth of probiotic bacteria and the production of antibacterial substances (Yang et al. 2012; Ten Brink et al. 1994).
The low production of antibacterial substances recorded in other media suggests that the high yield of growth inhibitory components from the isolate depends upon the specific nutrients supplied in the medium for biomass production. In the present investigation, the production of antibacterial substances from strain LAP1 was enhanced by optimizing the pH of the medium (pH 5.0). Our study strongly supports the findings of Iyapparaj et al. (2013), who demonstrated the maximum production of bacteriocin from lactic acid bacteria at pH 5.0. On the other hand, our results showed partial agreement with the findings of Zamfir et al. (2000), Aasen et al. (2000), Yang and Ray (1994) and Todorov and Dicks (2005), who observed maximum bacteriocin production in the range of pH 4.5–6.0. Maximum bacteriocin production was observed at an initial pH of 5.8, while a further increase in pH decreased the antagonistic activity (Verellen et al. 1998). In another similar study, a change in pH from lower to higher decreased the production of antibacterial substances in LAB (Cheigh et al. 2002). The variation in the production of growth inhibitory components with the change in the pH of the production medium might be due to changes in the biomass of the bacteria, post-translational modification or modification of the genes responsible for antagonistic characteristics (Liu and Chung 2005). In general, the production of antibacterial substances from strain LAP1 was stimulated at pH 5.0. Likewise, incubation temperature is a very critical parameter for the production of antibacterial substances such as bacteriocin (Delgado et al. 2007; Leaes 2011). Growth temperature and antagonistic substance production from lactic acid bacteria are often correlated, as indicated in the present report. Our study supports the findings of Iyapparaj et al. (2013) and Moonchai et al. (2005), who reported that the production of antibacterial substances from LAB was maximal at 30 °C.
The present investigation was in complete agreement with the finding of Ekhay et al. (2013), who demonstrated that maximal antibacterial substance production by the bacterium correlates with the optimal cell growth temperature. However, the maximum production of growth inhibitory proteinaceous components was achieved at a temperature which was far from the incubation temperature required for cell growth (Messens and De Vuyst 2002). The inoculum volume (1 %) of strain LAP1 showed improved antagonistic activity of the CFNS against the indicator bacteria, but the rate of antibacterial substance production was not much influenced. This clearly indicated that the synthesis of growth inhibitory components from strain LAP1 was correlated with the specific cell biomass. Further extensive investigation is required to evaluate culture parameters to correlate the production of antibacterial substances and cell growth for specific strains. In the present context, lactose was found to be the most effective sole substrate that favoured the enhancement of antibacterial component production from strain LAP1 towards the indicator bacteria. In agreement with our study, Iyapparaj et al. (2013), Abo-Amer (2011) and Moreno et al. (2003) showed maximum bacteriocin yield by LAB in the presence of lactose as a source of carbon in the production medium. On the other hand, the antagonistic activity of bacteriocin was increased when glucose was added to the medium (Ekhay et al. 2013; Todorov 2008). Previous reports and the present investigation clearly indicate that a specific substrate can induce or inhibit the antagonistic activity of the CFNS in a strain-dependent manner. According to the results obtained in our study, the rate of antibacterial substance production from strain LAP1 was affected by the addition of different nitrogen sources, but the antagonistic activity was not much influenced by the addition of ammonium chloride into the production medium. 
The investigation supports the finding of Ekhay et al. (2013), who demonstrated that the incorporation of inorganic nitrogen into the medium had no effect on the increased bacteriocin production. However, the present study was not in agreement with the finding of Iyapparaj et al. (2013), who reported that the increase in antagonistic activity was attributed to an inorganic nitrogen source, such as ammonium acetate. The production of antibacterial substances, such as bacteriocin, was also found to be inhibited due to the higher concentrations of nitrogen incorporated into the medium (Callewaert and De Vuyst 2000). MRS medium supplemented with Tween20 induced the synthesis of antagonistic substances (Castro et al. 2011), as was also shown in the present study. The increased production by strain LAP1 in the presence of Tween20 in MRS broth might be because Tween20, a non-ionic surfactant, affects the bacterial cell membrane and thereby promotes the secretion of growth inhibitory components directly into the medium. In other reports, broth supplemented with glucose and Tween80 inhibited the growth of indicator bacteria over a broad range by inducing the production of antibacterial substances (Iyapparaj et al. 2013; Verellen et al. 1998). The CFNS showed stability and activity at both acidic and neutral pH (control). The results of the present study suggested that the antagonistic properties of the CFNS of the isolate against the indicator bacteria were due to the potent antibacterial substances, not because of the acidic environment. The stability of the antibacterial components at a low pH may be important in medicine as a potential antibacterial agent. These results are comparable with the reports of Messens and De Vuyst (2002), Yang et al. (2012) and Yasmeen et al. (2015), who demonstrated stability and better antagonistic activity of bacteriocins at an acidic pH.
Incubating the CFNS of strain LAP1 at different temperatures completely abolished the inhibitory properties of the antibacterial substances. These results suggest that heat-labile antibacterial substances might be responsible for the inhibitory activity of the CFNS of the isolate. The results obtained in the current study provide one more significant step towards the study of the CFNS of L. pentosus as an antibacterial agent. The sensitivity of the antibacterial substances towards proteolytic enzymes strongly established the proteinaceous nature of the CFNS obtained from L. pentosus strain LAP1. The result was in complete agreement with the findings of Bromberg et al. (2004), Sabia et al. (2014) and Yasmeen et al. (2015), who found that pepsin inhibited the antagonistic activity of most of the antibacterial substances produced by lactic acid bacterial strains. Free radicals are end products of metabolic processes, and antioxidants are known to scavenge the free radicals produced inside the body. In the current study, the CFNS of strain LAP1 showed significant antioxidant properties (8.8 ± 0.12–57.35 ± 0.1 %) compared to ascorbic acid (60.2 ± 0.11–92.1 ± 0.21 %) in a concentration-dependent manner (100–1000 µl). Similar DPPH inhibition activity by LAB cell free supernatant was observed by Uugantsetseg and Batjargal (2014), who found that the antioxidant activity of the CFNS of isolates was in the range of 26.1–38.4 %. The DPPH free radical scavenging potential of the CFNS obtained from strain LAP1 may underlie its antioxidant properties and may be directly correlated with the concentration of antibacterial substances, owing to the proteinaceous compounds and secondary metabolites present in the CFNS of the isolate. From the present investigation, it is clear that Lactobacillus pentosus strain LAP1 isolated from Hentak produced antibacterial substances with growth inhibitory properties against human enteric pathogens. 
The maximum production of antibacterial substances was obtained in MRS broth supplemented with Tween20 utilizing optimized culture conditions and medium components. Additionally, the CFNS obtained from the isolate demonstrated antioxidant activity by scavenging DPPH in a dose-dependent manner. The OFAT optimization data on antibacterial substance production provides strong preliminary information for further investigation on the statistical optimization and bio-preservative role of CFNS for cost-effective industrial applications. The antagonistic substances from L. pentosus strain LAP1 could be used not only as a barrier to the growth of enteric pathogens but also for developing food products with antioxidant properties. An extensive study needs to be performed to explore the potency of antibacterial substances as an alternative therapy against disease-causing enteric bacteria. Aasen IM, Moretro T, Katla T, Axelsson L, Storro I (2000) Influence of complex nutrients, temperature and pH on bacteriocin production by Lactobacillus sakei CCUG 42687. Appl Microbiol Biotechnol 53:159–166 Abo-Amer AE (2011) Optimization of bacteriocin production by Lactobacillus acidophilus AA11, a strain isolated from Egyptian cheese. Ann Microbiol 61:445–452 Bhaskar N, Sudeepa ES, Rashmi HN, Tamil Selvi A (2007) Partial purification and characterization of protease of Bacillus proteolyticus CFR3001 isolated from fish processing waste and its antibacterial activities. Bioresour Technol 98:2758–2764 Bromberg R, Moreno I, Zaganini C, Delboni RR, Oliveira JD (2004) Isolation of bacteriocin-producing lactic acid bacteria from meat and meat products and its spectrum of inhibitory activity. Braz J Microbiol 35:137–144 Callewaert T, De Vuyst L (2000) Bacteriocin production with Lactobacillus amylovorus DCE471 is improved and stabilized by fed-batch fermentation. 
Appl Environ Microbiol 66:606–613 Castro MP, Palavecino NZ, Herman CO (2011) Lactic acid bacteria isolated from artisanal dry sausages: characterization of antibacterial compounds and study of the factors affecting bacteriocin production. Meat Sci 87:321–329 Cheigh H, Choi H, Park S, Kim M, Kook T, Kim J, Hwang JK, Pyun Y (2002) Influence of growth conditions on the production of a nisin like bacteriocin by Lactococcus lactis sub sp. lactis A164 isolated from kimchi. J Biotechnol 95:225–235 Chen YC, Sugiyama Y, Abe N, Kuruto-Nima R, Nozawa R, Hirota A (2005) DPPH radical scavenging compounds from Dou-Chi, a soybean fermented food. Biosci Biotechnol Biochem 69:999–1006 De Vuyst L, Vandamme EJ (1992) Influence of the carbon source on nisin production in Lactococcus lactis sub sp. Lactis batch fermentations. J Gen Microbiol. 138:571–586 Delgado A, Lopez FNA, Brito D (2007) Optimum bacteriocin production by Lactobacillus plantarum17.2b requires absence of NaCl and apparently follows a mixed metabolite kinetics. J Biotechnol 130:193–201 Ekhay O, Ouhsassi M, Abdeltif EH, Idaomar M, Abrini J (2013) Optimization of bacteriocin-like production by Enterococcus durans E204 isolated from camel milk of Morocco. Curr Res Microbiol Biotechnol. 1:155–159 Grosu-Tudor SS, Stancu MM, Pelinescu D, Zamfir M (2014) Characterization of some bacteriocins produced by lactic acid bacteria isolated from fermented foods. World J Microbiol Biotechnol 30:2459–2469 Hwanhlem N, Buradaleng S, Wattanachant S, Benjakul S, Tani A, Maneerat S (2011) Isolation and screening of lactic acid bacteria from Thai traditional fermented fish (Plasom) and production of Plasom from selected strains. Food Control 22:401–407 Ilavenil S, Srigopalram S, Park HS, Choi KC (2015) Growth and metabolite profile of Pediococcus pentosaceus and Lactobacillus plantarum in different juice. 
South Ind J Biol Sci 1:1–6 Iyapparaj P, Maruthiah T, Ramasubburayan R, Prakash S, Kumar C, Immanuel G, Palavesam A (2013) Optimization of bacteriocin production by Lactobacillus sp. MSU3IR against shrimp bacterial pathogens. Aquat Biosyst 9:1–10 Izumo T, Izumi F, Nakagawa I, Kitagawa Y, Shibata H, Kiso Y (2011) Influence of Lactobacillus pentosus S-PT84 ingestion on the mucosal immunity of healthy and Salmonella typhimurium-infected mice. Biosci Microflora 30:27–35 Jagadeesh KS (2015) Lactic acid bacteria as a source of functional ingredients. South Ind J Biol Sci 1:70–71 Kotani Y, Shinkai S, Okamatsu H, Toba M, Ogawa K, Yoshida H et al (2010) Oral intake of Lactobacillus pentosus strain b240 accelerates salivary immunoglobulin A secretion in the elderly: a randomized, placebo-controlled, double-blind trial. Immun Ageing 7:11. doi:10.1186/1742-4933-7-11 Leaes FL, Sant'Anna V, Vanin NG (2011) Use of byproducts of food industry for production of antimicrobial activity by Bacillus sp. P11. Food Bioprocess Technol 4:822–828 Liu X, Chung YK, Yang ST, Yousef AE (2005) Continuous nisin production in laboratory media and whey permeate by immobilized Lactococcus lactis. Process Biochem 40:13–24 Messens W, De Vuyst L (2002) Inhibitory substances produced by Lactobacilli isolated from sourdoughs – a review. Int J Food Microbiol 72:31–43 Moonchai S, Madlhoo W, Jariyachavalit K, Shimizu H, Shioya S, Chauvatcharin S (2005) Application of a mathematical model and differential evolution algorithm approach to optimization of bacteriocin production by Lactococcus lactis C7. Bioproc Biosyst Eng 28:15–26 Moortvedt-Abildgaard CI, Nissen-Meyer J, Jelle B, Grenov B, Skaugen M, Nes IF (1995) Production and pH-dependent bactericidal activity of lactocin S, a lantibiotic from Lactobacillus sake L45. Appl Environ Microbiol 61:175–179 Moreno MRF, Rea MC, Cogan TM, De Vuyst L (2003) Applicability of a bacteriocin-producing Enterococcus faecium as a co-culture in Cheddar cheese manufacture. 
Int J Food Microbiol 81:73–84 Parente E, Ricciardi A, Addario G (1994) Influence of pH on growth and bacteriocin production by Lactococcus lactis subsp. lactis 140NWC during batch fermentation. Appl Microbiol Biotechnol 41:388–394 Pena MJ, Wang H, Johnson R, Anand S, Griffiths MW (2007) Probiotics affect virulence-related gene expression in Escherichia coli O157:h7. Appl Environ Microbiol 73:4259–4267 Ravi AV, Musthafa KS, Jegathammbal G, Kathiresan K, Pandian SK (2007) Screening and evaluation of probiotics as a biocontrol agent against pathogenic Vibrios in marine aquaculture. Let Appl Microbiol 45:219–223 Rejiniemon TS, Hussain RR, Rajamani B (2015) In-vitro functional properties of Lactobacillus plantarum isolated from fermented ragi malt. South Ind J Biol Sci. 1:15–23 Ruiz-Barba JL, Cahtcart DP, Warner PJ, Jimenez-Diaz R (1994) Use of Lactobacillus plantarum LPCO10 a bacteriocin producer, as a starter culture in Spanish-style green olive fermentations. Appl Environ Microbiol 60:2059–2064 Sabia C, Anacarso I, Bergonzini A, Gargiulo R, Sarti M, Condò C et al (2014) Detection and partial characterization of a bacteriocin-like substance produced by Lactobacillus fermentum CS57 isolated from human vaginal secretions. Anaerobe 26:41–45 Ten Brink B, Minekus M, Van der Vossen JMBM, Leer RJ, Huisin't Veld JHJ (1994) Antimicrobial activity of lactobacilli: preliminary characterization and optimization of production of Acidocin B, A novel bacteriocin produced by Lactobacillus acidophilus M46. J Appl Bacteriol 77:140–148 Todorov SD (2008) Bacteriocin production by Lactobacillus plantarum AMA-K isolated from Amasi, a zimbabwean fermented milk product and study of the adsorption of bacteriocin AMA-K to Listeria sp. Braz J Microbiol 39:178–187 Todorov SD, Dicks LMT (2005) Effect of growth medium on bacteriocin production by Lactobacillus plantarum ST194BZ, a strain isolated from Boza. 
Food Technol Biotechnol 43:165–173 Uugantsetseg E, Batjargal B (2014) Antioxidant activity of probiotic lactic acid bacteria isolated from Mongolian airag. Mongolian J Chem 15:73–78 Verellen TLJ, Bruggeman G, Van Reenen CA, Dicks LMT, Vandamme EJ (1998) Fermentation optimization of plantaricin 423, a bacteriocin produced by Lactobacillus plantarum 423. J Ferment Bioeng 86:174–179 Verschuere L, Rombaut G, Sorgeloos P, Verstraete W (2000) Probiotic bacteria as biological control agents in aquaculture. Microbiol Mol Biol Rev 64:655–671 Yang R, Ray B (1994) Factors influencing production of bacteriocins by lactic acid bacteria. Food Microbiol 11:281–291 Yang E, Fan L, Jiang Y, Doucette C, Fillmore S (2012) Antimicrobial activity of bacteriocin-producing lactic acid bacteria isolated from cheeses and yogurts. AMB Express. 2:48–59 Yasmeen YA, Elyas YY, Nuha ME, Yousif NM, Isam A, Ahmed M (2015) Screening of lactic acid bacteria from Sudanese fermented foods for bacteriocin production. J Microbiol Biotech Food Sci 4:373–378 Zamfir M, Callewaert R, Cornea PC, De Vuyst L (2000) Production kinetics of acidophilin 801, a bacteriocin produced by Lactobacillus acidophilus IBB 801. FEMS Microbiol Lett 190:305–308 CA, AK, NAAD and MVA carried out the experimental portion of the manuscript. CA, AK, MVA, NAAD and PA participated in study design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript. The authors extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for funding this Prolific Research Group (PRG-1437-28). Research Department of Plant Biology and Biotechnology, Loyola College, Nungambakkam, Chennai, Tamil Nadu, 600034, India Chirom Aarti , Ameer Khusro & Paul Agastian Department of Botany and Microbiology, Addiriyah Chair for Environmental Studies, College of Science, King Saud University, P. O. 
Box 2455, Riyadh, 11451, Saudi Arabia Mariadhas Valan Arasu & Naïf Abdullah Al-Dhabi Correspondence to Mariadhas Valan Arasu or Paul Agastian. Aarti, C., Khusro, A., Arasu, M.V. et al. Biological potency and characterization of antibacterial substances produced by Lactobacillus pentosus isolated from Hentak, a fermented fish product of North-East India. SpringerPlus 5, 1743 (2016) doi:10.1186/s40064-016-3452-2 Keywords: Antagonistic substances; Cell free neutralized supernatant; Hentak; Lactic acid bacteria; OFAT; Lactobacillus pentosus
18F-FDG PET-guided diffusion tractography reveals white matter abnormalities around the epileptic focus in medically refractory epilepsy: implications for epilepsy surgical evaluation Stefan E. Poirier ORCID: orcid.org/0000-0002-0666-79521,2, Benjamin Y. M. Kwan3, Michael T. Jurkiewicz4, Lina Samargandy4, David A. Steven5,6, Ana Suller-Marti5, Victor Lam Shin Cheung7, Ali R. Khan2,4,8, Jonathan Romsa4, Frank S. Prato1,2,4, Jorge G. Burneo5,6, Jonathan D. Thiessen1,2,4 & Udunna C. Anazodo1,2 European Journal of Hybrid Imaging volume 4, Article number: 10 (2020) Cite this article Hybrid PET/MRI can non-invasively improve localization and delineation of the epileptic focus (EF) prior to surgical resection in medically refractory epilepsy (MRE), especially when MRI is negative or equivocal. In this study, we developed a PET-guided diffusion tractography (PET/DTI) approach combining 18F-fluorodeoxyglucose PET (FDG-PET) and diffusion MRI to investigate white matter (WM) integrity in MRI-negative MRE patients and its potential impact on epilepsy surgical planning. FDG-PET and diffusion MRI of 14 MRI-negative or equivocal MRE patients were used to retrospectively pilot the PET/DTI approach. We used asymmetry index (AI) mapping of FDG-PET to detect the EF as brain areas showing the largest decrease in FDG uptake between hemispheres. Seed-based WM fiber tracking was performed on DTI images with a seed location in WM 3 mm from the EF. Fiber tractography was repeated in the contralateral brain region (opposite to EF), which served as a control for this study. WM fibers were quantified by calculating the fiber count, mean fractional anisotropy (FA), mean fiber length, and mean cross-section of each fiber bundle. WM integrity was assessed through fiber visualization and by normalizing ipsilateral fiber measurements to contralateral fiber measurements. The added value of PET/DTI in clinical decision-making was evaluated by a senior neurologist. 
In over 60% of the patient cohort, AI mapping findings were concordant with clinical reports on seizure-onset localization and lateralization. Mean FA, fiber count, and mean fiber length were decreased in 14/14 (100%), 13/14 (93%), and 12/14 (86%) patients, respectively. PET/DTI improved diagnostic confidence in 10/14 (71%) patients and indicated that surgical candidacy be reassessed in 3/6 (50%) patients who had not undergone surgery. We demonstrate here the utility of AI mapping in detecting the EF based on brain regions showing decreased FDG-PET activity and, when coupled with DTI, could be a powerful tool for detecting EF and assessing WM integrity in MRI-negative epilepsy. PET/DTI could be used to further enhance clinical decision-making in epilepsy surgery. Medically refractory epilepsy (MRE) affects approximately 30% of epilepsy patients and is defined as a chronic neurological disorder where seizures persist despite administration of anti-epileptic drugs (AEDs) (Helmstaedter et al. 2003; Richardson et al. 2004; Jiang et al. 2017). In some MRE patients, surgical resection of the epileptic focus (EF)—the brain region responsible for seizures—can alleviate seizure occurrence and improve overall quality of life (Richardson et al. 2004; Caciagli et al. 2014; Cahill et al. 2019). Positive surgical outcomes are highly dependent on accurate identification of the EF to ensure the epileptic region is safely removed without harming surrounding healthy brain tissue (Bettus et al. 2009). The current gold standard for identifying the EF is intracranial electroencephalography (IC-EEG), where either subdural or depth electrodes are used to directly locate abnormal brain activity (suspected EF) before surgical resection is performed (Knowlton 2006; Blount et al. 2008). However, about 50% of MRE patients continue to have seizures after surgery (Téllez-Zenteno et al. 2005; de Tisi et al. 2011). 
Surgery can fail to prevent seizures when the EF is not properly delineated or detected prior to resection. Additionally, poor surgical outcomes can occur due to unknown interactions between the EF and surrounding neural networks (Aparicio et al. 2016). Recent advances in medical imaging have seen the increased clinical use of magnetic resonance imaging (MRI) and positron emission tomography (PET) to non-invasively locate the EF and map out the structure and function of surrounding brain regions. Anatomical MRI can detect structural lesions responsible for seizures in about 60% of MRE patients (Burneo et al. 2015), while other advanced MRI techniques, such as diffusion tensor imaging (DTI), can be used to effectively characterize the EF and its relationships with surrounding brain regions (Aparicio et al. 2016; Jiang et al. 2017). DTI non-invasively characterizes tissue microstructure by providing a three-dimensional model of water diffusion in the brain (Basser and Jones 2002; Jones and Cercignani 2010). In addition, DTI can be used to investigate the structural connectivity of neural networks through mapping out diffusion along white matter (WM) fiber pathways (Le Bihan et al. 1986; Le Bihan 2006; Aparicio et al. 2016; Sivakanthan et al. 2016). WM pathways can be characterized using DTI-derived parameters, which are extracted from the diffusion tensor used to model water diffusion at each voxel in the brain. The most commonly used tensor-derived scalar is fractional anisotropy (FA), which is a measure of WM integrity and describes the tendency of water to preferentially diffuse along the length of the fiber bundle (Le Bihan 2006; Mori and Zhang 2006; Soares et al. 2013). Recent DTI studies have revealed that severe FA reduction in WM may correspond to widespread microstructural abnormalities in MRE (Labate et al. 2015; Jiang et al. 2017). 
To further assess tissue microstructure breakdown, WM pathways can be visualized by reconstructing WM fibers using diffusion tractography. Diffusion tractography techniques continue to be refined and adapted for neurosurgical planning and these techniques have been shown to accurately track WM fibers in temporal lobe regions essential for surgical success (Sivakanthan et al. 2016). PET, on the other hand, is the most sensitive non-invasive clinical tool for identifying the EF especially in cases where MRI is negative or equivocal (Burneo et al. 2015). 18F-Fluorodeoxyglucose PET (FDG-PET) can be used to detect the EF as brain areas showing decreased glucose uptake (glucose hypometabolism) (Sarikaya 2015; Burneo et al. 2015; Aparicio et al. 2016; Cahill et al. 2019). Glucose hypometabolic regions of interest (ROIs) are often identified by visual assessment of FDG-PET images, however, some abnormalities may be missed during this process. Therefore, semi-quantitative approaches such as asymmetry index (AI) mapping have been proposed to aid visual detection of hypometabolic PET ROIs (Henry et al. 1990; Rausch et al. 1994; Van Bogaert et al. 2000; Didelot et al. 2010; Boscolo Galazzo et al. 2016; Anazodo et al. 2018; Kamm et al. 2018; Shang et al. 2018). AI mapping investigates metabolic abnormalities by measuring the voxel-wise difference in cerebral glucose metabolism between hemispheres and has been shown to be a very sensitive biomarker for epileptogenicity (Didelot et al. 2010; Boscolo Galazzo et al. 2016). Using AI to investigate metabolic asymmetries can be useful because the process may be done on individual patients and does not require comparison to a healthy control database. Recently, it has been shown that multimodal brain imaging combining PET and MRI information may improve seizure site characterization compared to standalone IC-EEG, PET, or MRI (Burneo et al. 2015). 
Opportunely, this finding coincides with increased availability of advanced imaging systems that combine PET/MRI into an integrated system. Although researchers are starting to implement simultaneous PET/MRI in the clinical setting, the combined use of PET and DTI for presurgical evaluation of epilepsy is yet to be fully investigated. To our knowledge, only two studies to date have assessed whether cortical glucose hypometabolism seen on FDG-PET is related to WM alterations identified by DTI in the brains of MRE patients (Lippé et al. 2012; Aparicio et al. 2016). However, these studies acquired PET and MRI scans at separate timepoints which can introduce registration errors between modalities, making it difficult to accurately detect the seizure onset zone in the brain and assess relationships between PET and MRI findings. Simultaneous acquisition of PET and MRI data using a hybrid PET/MRI scanner acquires both datasets in the same imaging session with intrinsic spatial and temporal registration, potentially improving the accuracy of detecting the EF and may shed new insight into the pathophysiology of MRE. In this hybrid PET/MRI study, we developed a PET-guided diffusion tractography (PET/DTI) approach combining FDG-PET and diffusion MRI to investigate WM integrity in the brains of MRE patients. AI mapping of FDG-PET was used to guide diffusion tractography of WM tracts in MRE patients to better understand structural connectivity of WM fibers affected by glucose hypometabolic regions (suspected EF). WM fibers were also visually inspected by a neurologist to assess the potential clinical impact of PET/DTI on decision-making in epilepsy surgery. The study included 14 MRE patients (6 males and 8 females; mean age = 38 ± 14 years) from the London Health Sciences Centre epilepsy monitoring unit (EMU), diagnosed after failing two or more adequate trials of AEDs. 
Clinical assessment in the EMU included neuropsychological evaluation, prolonged scalp video-EEG, and 1.5 T MRI to localize the EF. Patient demographics and clinical profile are provided in Table 1. Mean epilepsy onset and duration was 23 ± 13 and 15 ± 15 years, respectively. The cohort consisted of 10 MRI-negative and 4 MRI-equivocal MRE patients, determined based on all available diagnostic information (clinical hypothesis, semiology, and 1.5 T MRI reports). All patients provided written informed consent. The study was approved by the University Research Ethics Board and conducted in accordance with the Declaration of Helsinki ethical standards. Table 1 Patient demographics and clinical profile Data were acquired using a 3 T hybrid PET/MRI scanner (Biograph mMR, Siemens Healthineers, Erlangen, Germany) located at the Lawson Health Research Institute. Patients fasted for at least 6 h prior to the study (fasting blood glucose = 4.3 ± 0.6 mmol/L). PET/MRI was acquired immediately after clinical PET/CT scans (net injected dose of FDG = 190 ± 17 MBq, PET/MRI post-injection time = 72 ± 5 min), and the PET/MRI data were used in this study. Serial MRI scans were performed during a 30-min list-mode PET imaging session. An isotropic (1 mm3) high resolution T1-weighted MRI and T2-weighted FLAIR MRI were acquired covering the whole brain using a three-dimensional magnetization-prepared rapid gradient-echo sequence (MPRAGE) and fast-spin echo sequence (SPACE) respectively to assess evidence of structural abnormalities (Brant-Zawadzki et al. 1992). Diffusion-weighted imaging (DWI) was acquired using a single-shot echo-planar imaging (EPI) sequence with the following parameters: 2 mm isotropic resolution, 64 contiguous slices, b values = 0, 1000 s/mm2 and 64 diffusion encoding directions. Two spin-echo images were acquired in opposite phase-encoding directions with b values = 0 s/mm2 and 6 directions to correct for inherent susceptibility-induced distortions in DWI. 
The PET data were reconstructed to one image volume (ordered subset expectation maximization algorithm; 3 iterations, 21 subsets, 2 mm full-width at half-maximum (FWHM) Gaussian filter, 2.5 zoom factor, 344 × 344 × 127 matrix and 2.09 × 2.09 × 2.03 mm3 voxels). Attenuation correction was performed using an ultrashort echo time MRI sequence and an offline MRI-based attenuation correction approach (RESOLUTE) (Ladefoged et al. 2015). DWI preprocessing Before image preprocessing, all DWI volumes were visually inspected for artifacts to ensure only good quality data were used. DWI data were preprocessed using an in-house image analysis pipeline that incorporated steps from a variety of different image processing software packages (Figure S1). Each patient's DWI images were first denoised using an optimized non-local means filter (Wiest-Daesslé et al. 2008; Coupé et al. 2008, 2010) in MATLAB (MathWorks®, Natick, MA) followed by subject motion, eddy current, and bias field corrections using FMRIB's Software Library (FSL) (Woolrich et al. 2009), MRtrix3 (Tournier et al. 2019), and ANTS (Avants et al. 2011), respectively. Tensors were fit to the data using non-linear least-squares estimation in ExploreDTI (Leemans et al. 2009) to generate an FA map. For WM fiber reconstruction, all diffusion tractography steps were performed using MRtrix3. A single fiber WM response function was estimated from the preprocessed DWI data using a spherical harmonics order of 8. The DWI data were upsampled to 1 × 1 × 1 mm3 isotropic voxels, and the fiber orientation distribution function was calculated by constrained spherical deconvolution with a spherical harmonics order of 8 and a whole-brain mask to constrain calculations to voxels within the brain. The maximas of the fiber orientation distribution function were then extracted and used to visualize the WM fibers. 
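As a minimal illustration of the FA calculation underlying the FA map described above (a sketch, not the ExploreDTI implementation used in this study), FA can be computed from the three eigenvalues of the fitted diffusion tensor:

```python
import numpy as np

def fractional_anisotropy(evals):
    """Compute FA from the three eigenvalues of a diffusion tensor.

    evals: array of shape (..., 3), tensor eigenvalues in any order.
    Returns FA in [0, 1]; purely isotropic diffusion gives 0.
    """
    evals = np.asarray(evals, dtype=float)
    md = evals.mean(axis=-1, keepdims=True)          # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum(axis=-1))  # deviation from isotropy
    den = np.sqrt((evals ** 2).sum(axis=-1))
    # Guard against division by zero in background voxels (all eigenvalues 0)
    fa = np.sqrt(1.5) * np.divide(num, den, out=np.zeros_like(den), where=den > 0)
    return fa

# Isotropic tensor -> FA = 0; an elongated "stick-like" tensor -> FA near 1
print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0
print(fractional_anisotropy([1.7e-3, 0.2e-3, 0.2e-3]))
```

Applied voxel-wise to the eigenvalues of the non-linear least-squares tensor fit, this yields the FA map used later for fiber quantification.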
PET data analysis PET preprocessing steps were completed using FSL, ANTS, and SPM12 (Wellcome Department of Cognitive Neurology, Institute of Neurology, London). For AI mapping, we used the MNI T1 1 mm isotropic image provided by FSL as a template for spatial alignment of patient FDG-PET images. To account for geometric distortions in patient anatomy between hemispheres, this template was made symmetric by flipping it about the sagittal plane and then calculating the mean image of the flipped and unflipped images. Each patient's FDG-PET data were spatially normalized to the symmetric template using a three-step registration method in ANTS that consisted of linear and non-linear warping transformations that aligned brain structures in the PET image as closely as possible to the template. A voxel-wise standardized uptake value (SUV) map was calculated using: $$ \mathrm{SUV}=\frac{C_{\mathrm{PET}}(t)\times \mathrm{BW}}{\mathrm{Dose}} $$ where CPET(t) is the activity concentration in each voxel of the spatially normalized PET image, BW is the patient's body weight, and Dose is the net injected dose of FDG. The SUV map was smoothed using a FWHM of 2 mm to account for differences in patient anatomy. Each patient's T1-weighted image was spatially normalized to the symmetric MNI template and then segmented into gray matter (GM), WM, and cerebrospinal fluid tissue probability maps. Because the EF is typically in GM focal regions, we only considered SUV values in voxels with at least 30% GM (based on segmentation of the aligned T1-weighted MRI). The GM SUV maps were then scaled by the individual mean GM SUV in the cerebellum to account for global metabolism effects in the brain (Anazodo et al. 2018). 
The relative GM SUV (SUVr) map was spatially flipped about the sagittal plane and a voxel-wise AI map was calculated using: $$ \mathrm{AI}=\frac{\mathrm{I}-\mathrm{fI}}{2\left(\mathrm{I}+\mathrm{fI}\right)}\times 100 $$ where I and fI are the unflipped and flipped SUVr images, respectively. To determine significant hypometabolic areas on PET, a Z-score AI (ZAI) map was calculated using: $$ {Z}_{\mathrm{AI}}=\frac{X-\mu }{\sigma } $$ where X is the voxel intensity in the AI map, μ is the mean AI of all GM voxels in the brain, and σ is the standard deviation of AI across all GM voxels. Because we did not know the exact distribution of AI values in our sample of patients, we scaled ZAI by the degrees of freedom (df) in our sample (Crawford and Garthwaite 2012). For our sample of 14 MRE patients, df was 13; therefore, we considered ZAI < −1.77 to represent significant hypometabolism compared to the contralateral brain region. In each ZAI map, the largest focal GM area containing voxels with ZAI < −1.77 was extracted as the hypometabolic PET ROI (suspected EF). To validate our AI mapping approach, these PET ROIs were compared against clinical findings on seizure onset area, including clinical hypothesis, scalp video-EEG, clinical reader assessment of PET SUV images, stereo-EEG (SEEG), and surgical outcome (Engel classification and ground-truth histopathology). PET/MR image reading All FDG-PET and MR images were visually inspected by two neuroradiologists (B.Y.M.K. and M.T.J.). FDG-PET was also inspected by a third reader, a nuclear medicine physician (L.S.). FDG-PET was co-registered and overlaid onto MRI. T1-weighted, T2-weighted, and SUV images were visually assessed using standard clinical imaging software (MI Neurology, SyngoVia, Siemens Healthcare, Erlangen, Germany). 
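The voxel-wise AI and Z-score steps described above can be sketched in NumPy as follows. This is an illustrative reimplementation, not the study pipeline: `asymmetry_z_map` and its argument names are hypothetical, the SUVr volume is assumed to be already aligned to the symmetric template, and the first array axis is assumed to be the left-right axis.

```python
import numpy as np

def asymmetry_z_map(suvr, gm_mask, z_thresh=-1.77):
    """Voxel-wise asymmetry index (AI) and Z-score (ZAI) maps.

    suvr:    3-D SUVr volume aligned to a left-right symmetric template
             (first axis assumed to be the left-right axis).
    gm_mask: boolean mask of voxels with >= 30% gray matter.
    Returns (ai, z_ai, hypo_mask); hypo_mask flags GM voxels with ZAI below
    the threshold, i.e. suspected hypometabolism relative to the mirror side.
    """
    flipped = suvr[::-1, ...]                     # mirror about the sagittal plane
    ai = np.zeros_like(suvr, dtype=float)
    denom = 2.0 * (suvr + flipped)
    np.divide(suvr - flipped, denom, out=ai, where=denom != 0)
    ai *= 100.0                                   # AI = (I - fI) / (2 (I + fI)) x 100

    gm_vals = ai[gm_mask]
    z_ai = (ai - gm_vals.mean()) / gm_vals.std()  # standardize over GM voxels
    hypo_mask = gm_mask & (z_ai < z_thresh)
    return ai, z_ai, hypo_mask
```

In this sketch, the largest connected component of `hypo_mask` would then be extracted as the hypometabolic PET ROI.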
To aid visual assessment of PET, semi-quantitative analysis was also included in the image reading through statistical comparison of SUV values with cerebellar normalization to an age-matched healthy control database provided by the software. PET-guided diffusion tractography (PET/DTI) We developed a PET/DTI approach by using seed-based diffusion tractography to investigate structural integrity of WM regions around the hypometabolic PET ROI (suspected EF) identified by AI mapping. The PET ROI, which was initially defined in MNI space, was inverse mapped back to the subject's diffusion space and used as a seed to initiate fiber tracking. WM fiber tracts were visualized and quantified using Fibernavigator, a novel diffusion tractography tool (Chamberland et al. 2015). In Fibernavigator, a 3 × 3 × 3 mm3 volume of interest (VOI) was placed in the GM PET ROI that was directly adjacent to the closest WM area. This VOI was dilated at incremental distances of 3, 9, and 15 mm into surrounding WM (Fig. 1). Each dilated VOI was used as a seed region to generate WM tracts at each distance from the PET ROI. Another 3 × 3 × 3 mm3 VOI was manually defined in the contralateral brain region and dilated to generate fibers for the same three distances into surrounding WM. To assess WM tract asymmetry between ipsilateral and contralateral WM fiber tracts, WM fiber quantification was performed by extracting measurements readily available in Fibernavigator, such as fiber count (number of fibers within the bundle), mean fiber length (mm), and mean fiber cross-section (CS) (mm2). In addition, the mean FA was calculated as the weighted average of all FA values along the length of the tracts. 
Normalized (ipsilateral/contralateral) fiber count, mean FA, mean fiber length, and mean CS measurements served as preliminary assessments of WM tract asymmetry and the Wilcoxon signed-rank test was then used to compare fiber measurements across the three WM distances from the PET ROI (p < 0.05 was considered significant). 2D representation of the 3D procedure for tracking WM regions around the EF in one MRE patient (patient #9). a EF (detected by AI mapping of FDG-PET) overlaid onto structural MRI. b EF overlaid onto a WM probability map. Because the EF is located in a cortical area (left hippocampus), WM tracking was performed at three distances away from the EF: 3 mm, 9 mm, and 15 mm. The colored regions around the EF represent WM areas covering the three distances. These WM regions were used as seed ROIs to initiate neural fiber bundle tracking in Fibernavigator Clinical assessment of PET/DTI findings WM fibers around the hypometabolic PET ROI for each patient were visualized by a senior neurologist with over 15 years of practice experience (J.G.B.) in order to assess the potential clinical impact of the PET/DTI approach in guiding epilepsy surgical evaluation. For each patient, the neurologist first viewed the summary of presurgical evaluation findings (clinical hypothesis, scalp video-EEG, 1.5 T MRI, PET report from PET/CT, SEEG) and then using Fibernavigator, interactively viewed the ipsilateral and contralateral WM fibers 3 mm away from the hypometabolic PET ROI identified by AI mapping. A distance of 3 mm away from the PET ROI was chosen for this assessment, as WM fibers generated from this distance pass directly adjacent to the GM PET ROI and are likely to give the best indicator of structural integrity around the epileptic zone. 
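The ipsilateral/contralateral normalization and the paired Wilcoxon comparison described above can be sketched with SciPy. The fiber-metric values below are hypothetical, constructed only to illustrate the paired test across seed distances; `normalized_metrics` is an illustrative helper, not part of Fibernavigator.

```python
import numpy as np
from scipy.stats import wilcoxon

def normalized_metrics(ipsi, contra):
    """Normalize ipsilateral fiber measurements by the contralateral side.

    ipsi, contra: dicts mapping metric name ('fiber_count', 'mean_FA',
    'mean_length', 'mean_CS') to the value extracted from each fiber bundle.
    A ratio < 1 suggests reduced integrity on the suspected epileptic side.
    """
    return {k: ipsi[k] / contra[k] for k in ipsi}

# Hypothetical per-patient mean FA ratios at two seed distances from the PET ROI
fa_3mm = np.array([0.81, 0.90, 0.76, 0.88, 0.93, 0.70, 0.85, 0.79])
fa_15mm = np.array([0.95, 0.97, 0.91, 0.99, 1.02, 0.89, 0.97, 0.95])

# Paired, non-parametric comparison of the same patients at two distances
stat, p = wilcoxon(fa_3mm, fa_15mm)
print(f"Wilcoxon statistic={stat}, p={p:.4f}")
```

With these synthetic values, every patient's ratio is lower at 3 mm than at 15 mm, so the two-sided exact p-value falls below 0.05, mirroring the kind of distance effect the analysis is designed to detect.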
For the clinical assessment of the PET/DTI approach, the neurologist determined whether the differences between ipsilateral and contralateral WM fibers around the hypometabolic PET ROI (suspected EF) were concordant with the clinical hypothesis. To assess the potential clinical impact of PET/DTI, the neurologist's confidence after viewing the WM fibers was assigned to one of two categories: unchanged or improved. If confidence was improved, the neurologist also reported whether reassessment of surgical candidacy would be beneficial in patients who had not undergone surgery.

AI mapping of FDG-PET for EF localization and lateralization in MRE

AI mapping was used to detect the EF based on regions showing significant metabolic asymmetry between the hemispheres of the brain. A visual example of the AI mapping results for one MRE patient (patient #9) is shown in Fig. 2. In this patient, AI mapping detected a clear hypometabolic region (suspected EF) in the left temporal lobe, which matched the overall clinical hypothesis.

Fig. 2: Images from a 45-year-old female MRE patient (patient #9) with a clinical hypothesis of left temporal lobe focal epilepsy. a PET SUV map. b Anatomical MRI. c PET fused with MRI. d Z-score map from computer-assisted diagnosis of PET data (Siemens Syngo Via). e Z-score map generated from AI mapping (ZAI map), which shows a clear glucose hypometabolic region (green circle) in the left temporal lobe, indicative of a potential EF. f Hypometabolic PET ROI (yellow) from AI mapping overlaid onto structural MRI.

The clinical hypothesis, scalp video-EEG findings from the EMU, FDG-PET hypometabolism reports from the three clinical readers (3 T MRI visual assessment reported in Table S1), AI mapping, SEEG, and surgical findings for our cohort of 14 MRE patients are summarized in Table 2.
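As a rough illustration of how an asymmetry-based z-map can be computed, the sketch below uses a common AI convention, 2(A − B)/(A + B) × 100, applied voxelwise between a PET volume and its midline-mirrored copy. The exact formula and thresholds used in the study are not reproduced here; `asymmetry_index`, `z_ai`, and the control-database statistics are hypothetical.

```python
import numpy as np

def asymmetry_index(pet, mirrored):
    """Voxelwise asymmetry index between a PET volume and its mirror.

    Uses the common 2*(A - B)/(A + B) * 100 convention (an assumption;
    the study's exact formula may differ). Voxels where both inputs are
    zero get an AI of 0.
    """
    ai = np.zeros_like(pet, dtype=float)
    denom = pet + mirrored
    mask = denom > 0
    ai[mask] = 2.0 * (pet[mask] - mirrored[mask]) / denom[mask] * 100.0
    return ai

def z_ai(ai, ctrl_mean, ctrl_sd):
    """Z-score an AI map against control-database mean/SD maps."""
    return (ai - ctrl_mean) / ctrl_sd

# Toy voxel: ipsilateral uptake 0.8 vs. contralateral 1.2 gives AI = -40.
ai = asymmetry_index(np.array([0.8]), np.array([1.2]))
```

Thresholding the resulting z-map (against control-database statistics) would then isolate the significantly hypometabolic regions used as PET ROIs.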
AI mapping findings were concordant with the clinical hypothesis in localizing and lateralizing the epileptic region in 12/14 (86%) and 9/14 (64%) patients, respectively. AI mapping agreed with scalp video-EEG in 13/14 (93%) patients for both EF localization and lateralization. Concordance between AI mapping and clinical PET readings was 64%/69% (average EF localization/lateralization across the three clinical readers). Five patients underwent SEEG prior to surgical resection, and EF localization/lateralization concordance with AI mapping was observed in four of them. Mean SUV, max SUV, and mean ZAI were decreased in the hypometabolic PET ROIs identified by AI mapping (see Table S2). Eight patients underwent surgical resection to remove the EF on the suspected epileptogenic side based on all available clinical information and diagnoses. After a one-year follow-up, 5/8 (62.5%) patients achieved Engel class IA (long-term seizure freedom), 2/8 (25%) achieved Engel class IIIA (significant improvement, but not completely seizure free), and 1/8 (12.5%) had Engel class IV (no improvement). AI mapping was concordant with surgical findings, where histopathology was performed to determine the ground-truth EF classification, in localizing and lateralizing the EF in six and four patients, respectively.

Table 2: EEG, PET, and surgical findings

PET/DTI: tracking WM around glucose hypometabolic regions (suspected EF)

An example of the WM fiber visualization at each distance away from the hypometabolic PET ROI (suspected EF) for one MRE patient (patient #9) is shown in Fig. 3. In this patient, visual assessment revealed noticeable differences between the ipsilateral (left) and contralateral (right) fiber bundles in WM 3 mm away from the EF. No notable differences between ipsilateral and contralateral WM fibers were observed in WM 15 mm away from the EF.
Fig. 3: PET-guided diffusion tractography in one MRE patient (patient #9) with a clinical hypothesis of left temporal lobe focal epilepsy. Ipsilateral (left) and contralateral (right) WM fibers (colored lines) are shown for the three WM distances (3, 9, and 15 mm) away from the EF (yellow) identified by AI mapping of FDG-PET. Fewer WM fibers are observed on the ipsilateral side. Differences in WM fibers between the ipsilateral and contralateral sides appear more prominent at closer distances (3 mm) to the EF. Abbreviations: L, left; R, right.

When comparing fiber values across the three distances (3, 9, and 15 mm) into the surrounding WM, normalized fiber count, mean FA, and mean fiber length were lowest at a distance of 3 mm (Fig. 4). At 3 mm, normalized mean FA, fiber count, and mean fiber length were decreased in 14/14 (100%), 13/14 (93%), and 12/14 (86%) patients, respectively. Normalized mean CS was decreased in 7/14 (50%) patients at this same distance. Analysis using the Wilcoxon signed-rank test revealed that mean FA was significantly decreased at 3 mm compared to 9 mm (p = 0.0031) and 15 mm (p = 0.0004). Fiber count was lower at 3 mm and 9 mm than at 15 mm (p < 0.01). Mean fiber length differed significantly across all three distances (p < 0.05). The same trend was observed when DTI scalar measurements were made in the WM seed regions used for tracking around the hypometabolic PET ROI, where mean FA was decreased at 3 mm compared to 9 mm and 15 mm (see Table S3).

Fig. 4: Quantification of WM fibers around the hypometabolic PET ROI (suspected EF) in 14 MRE patients. Ipsilateral fiber measurements were normalized to contralateral fiber measurements as a preliminary measure of WM tract asymmetry. Normalized values are plotted for the three distances away from the PET ROI. The Wilcoxon signed-rank test was used to compare normalized fiber measurements across the three distances into the surrounding WM (p < 0.05 was considered significant).
Fiber count, mean fiber length, and mean FA are decreased at closer distances to the PET ROI (3 mm) compared to 15 mm (p < 0.05). Abbreviations: *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001.

Table 3 summarizes the findings from the neurologist's clinical assessment of the PET/DTI approach. Eight patients had already undergone surgery. Based on clinical hypotheses, the MRE cohort consisted of seven temporal lobe, four extratemporal lobe, and three frontal lobe epilepsy patients. Upon inspection of PET/DTI, WM fiber abnormalities in the epileptic lobe were observed in 10/14 (71%) patients, and these findings were concordant with the clinical hypothesis. In all 10 patients, diagnostic confidence improved after presentation of PET/DTI. Specifically, PET/DTI was contributive in five temporal lobe, three extratemporal lobe, and two frontal lobe epilepsy patients. Most importantly, PET/DTI indicated that surgical resection could be beneficial in 3/6 (50%) patients who had not undergone surgery.

Table 3: Clinical assessment of PET-guided diffusion tractography (PET/DTI) findings

Discussion and conclusions

To our knowledge, this is the first study to simultaneously combine FDG-PET and diffusion MRI to investigate WM integrity in the brains of MRE patients. We showed that AI mapping of FDG-PET can successfully detect hypometabolic brain regions (suspected EF) that are concordant with conventional epilepsy surgical evaluation techniques (1.5 T MRI, EEG, visual PET assessment). We used AI mapping and diffusion tractography to develop a non-invasive approach that combines PET and MRI information into one integrated tool (PET/DTI). We demonstrated that our PET/DTI approach is feasible and can detect epileptic zones in the brains of MRI-negative epilepsy patients. We localized seizure-onset sites using AI mapping of FDG-PET and tracked WM fibers from these sites to the rest of the brain using diffusion tractography.
This was achieved by implementing a robust image analysis process standardized for use in each patient and by adapting readily available imaging analysis tools for ROI mask generation and subsequent fiber tracking. The potential clinical impact of PET/DTI in epilepsy surgical evaluation was also demonstrated in this study. Specifically, we showed that investigation of WM abnormalities adjacent to seizure-onset zones in the brain can improve diagnostic confidence in MRE. Furthermore, we found that PET/DTI can even indicate that surgical resection may be beneficial in some MRE patients who have not undergone surgery. Of course, the surgical candidacy of these patients would first need to be reassessed through future interdisciplinary meetings before concrete decisions to proceed with resection can be made. Nevertheless, our findings suggest that PET/DTI can potentially impact clinical decision-making in epilepsy surgery and is a promising tool for advancing epilepsy treatment and management. Numerous standalone PET and diffusion MRI studies have reported functional and structural alterations in MRE (Henry and Pennell 1998; Knowlton 2006; Focke et al. 2008; Lin et al. 2008; Thivard et al. 2011; James et al. 2015; Labate et al. 2015; Burneo et al. 2015; Sivakanthan et al. 2016; Jiang et al. 2017; Güvenç et al. 2018; Cahill et al. 2019); however, very few studies have assessed relationships between FDG-PET and diffusion MRI findings in epilepsy. Similar to our study, one previous report also found microstructural alterations (decreased FA and increased apparent diffusion coefficient) in WM adjacent to the epileptic zone identified by FDG-PET hypometabolism (Lippé et al. 2012), while another study revealed that the metabolic and structural alterations seen using FDG-PET and DTI involve similar brain regions in mesial temporal lobe epilepsy (Aparicio et al. 2016). In contrast to (Lippé et al. 2012) and (Aparicio et al.
2016), who acquired PET and DTI separately, we used a hybrid PET/MRI scanner to acquire PET and MRI simultaneously. While this may appear to be a trivial difference, it has profound implications. Patients typically undergo PET and MRI scans on different days, up to a few months apart; in our cohort, the initial 1.5 T MRI evaluation took place on average eight months before the clinically indicated PET/CT. Acquiring PET and diffusion MRI scans separately can create spatial and temporal registration problems, making it difficult to accurately identify the seizure-onset zone and map its effects on brain structure and function undergoing disease-related changes (Wang et al. 2018; Shang et al. 2018). Misalignment errors usually arise because the subject's head position differs between scans; hybrid PET/MRI significantly minimizes these errors. Co-registration of PET with MRI through multimodal imaging may therefore allow for improved diagnostic accuracy and more precise EF detection than standalone PET or MRI, especially in MRI-negative epilepsy (Boscolo Galazzo et al. 2016; Shang et al. 2018). The majority of the patients with temporal lobe epilepsy in our cohort had apparent PET/DTI WM abnormalities. This result is consistent with past studies that have illustrated the utility of diffusion tractography in revealing microstructural breakdown of WM pathways implicated in drug-resistant temporal lobe epilepsy (Ahmadi et al. 2009; Sivakanthan et al. 2016), as well as with studies reporting that FDG-PET has higher sensitivity for detecting the EF in temporal lobe epilepsy patients with good surgical outcomes (70-90%) than in those with other types of epilepsy, especially extratemporal lobe epilepsy (30-60%) (Sarikaya 2015; Burneo et al. 2015; Aparicio et al. 2016). Surgical success rates in extratemporal lobe epilepsy are also much lower than in temporal lobe epilepsy (30-40% vs.
60-70%), with the likelihood of achieving long-term seizure freedom decreasing further in MRI-negative cases (Téllez-Zenteno et al. 2005; de Tisi et al. 2011), suggesting the possible involvement of intricate neural networks extending beyond the EF in extratemporal lobe epilepsy that may be responsible for surgical failure. Interestingly, PET/DTI identified WM abnormalities around the EF in 3/4 patients with extratemporal lobe epilepsy in our MRE cohort (patients #1, #4, and #11 in Table 3), with improved diagnostic confidence observed in all three patients. While this is a very small number of patients, we argue that it provides preliminary evidence that PET/DTI may shed new light on the neural networks altered in extratemporal lobe epilepsy and is thus a promising tool for improving surgical outcomes, even in patients where the EF and its interactions with surrounding brain tissue extend beyond the temporal lobe. In our study, PET/DTI was unremarkable in four patients (see patients #6, #7, #8, and #10 in Table 3). Specifically, in patients #6, #7, and #8, all clinical findings lacked concordance, with only patient #6 becoming seizure-free after surgery (see Table 2). In patient #10, AI mapping was not concordant with the visual PET assessment from the three clinical readers, and the patient showed no improvement after surgery (Engel class IV). These findings suggest that the four PET/DTI-negative patients in our study may have had a seizure focus with underlying physiological abnormalities that were too subtle to confidently detect using neuroimaging. Further research is needed on why the functional and structural properties measured using PET and MRI are impaired in some epilepsy patients while in others they appear intact. It is well established that FDG-PET is the most sensitive functional imaging tool for indirectly identifying epileptic regions based on glucose hypometabolism (Knowlton 2006; Burneo et al. 2015; Aparicio et al. 2016).
However, glucose hypometabolic regions identified by PET can extend beyond the true EF, especially in extratemporal lobe epilepsy, and may reflect the pathophysiology of seizure propagation from the epileptic zone to surrounding neural networks (Sarikaya 2015; Aparicio et al. 2016). Recent studies have found that semi-quantitative approaches such as AI mapping, which extend beyond visual reads, can not only detect hypometabolic regions in high agreement with other clinical and electrophysiological findings but can also increase a reader's confidence in their visual assessment of PET (Didelot et al. 2010; Boscolo Galazzo et al. 2016; Shang et al. 2018). Here, we demonstrated, albeit retrospectively, the utility of AI mapping in epilepsy surgical evaluation: AI mapping was able to successfully localize and lateralize the epileptogenic focus in most MRE patients. It is possible that some of the metabolic asymmetries observed simply reflect normal physiological asymmetries in the brain, especially in patients with multi-focal hypometabolism. However, we used a standard AI mapping thresholding approach, validated by past studies (Boscolo Galazzo et al. 2016; Shang et al. 2018), to isolate significant hypometabolic brain regions, which gives us confidence that the metabolic asymmetries detected in our study are more likely associated with epileptic regions than with normal healthy brain tissue. AI mapping is thus a promising tool for guiding assessment of surgical candidacy in epilepsy, especially in MRI-negative cases. Furthermore, similar to our findings, past studies have reported FDG-PET hypometabolism in contralateral brain regions in some epilepsy patients (Aparicio et al. 2016; Cahill et al. 2019), presumably due to the spread of epileptic activity across hemispheres.
Despite these challenges with FDG-PET specificity, we were still able to show that FDG-PET can aid detection of the epileptogenic zone and assessment of surgical candidacy in epilepsy, especially when combined with DTI. The use of novel PET tracers targeted to the pathogenesis of epilepsy, such as imaging of reduced synaptic density with PET ligands targeting synaptic vesicle protein 2A (Finnema et al. 2016) or receptor imaging with PET tracers targeting serotonin and gamma-aminobutyric acid (Sarikaya 2015; Galovic and Koepp 2016), could perhaps increase the specificity of PET in detecting the true EF. In this study, we used diffusion tractography to assess structural integrity around MRI-negative epileptic zones identified by FDG-PET. Although there is no current gold standard for validating WM fibers generated using diffusion tractography techniques, a number of phantom models simulate WM pathways in healthy human brains and provide some means of evaluating tractography approaches. We empirically evaluated our diffusion MRI preprocessing and tractography approach against a computer-simulated WM phantom (Neher et al. 2014). However, this and other phantom models do not take into account any GM or WM pathologies present in epilepsy patients (Neher et al. 2014; Maier-Hein et al. 2017). As such, we opted not to compare WM fibers between epilepsy patients and a healthy control group, and instead assessed structural integrity by comparing WM fibers between hemispheres within individual patients. This individual assessment is more likely to be of clinical utility in epilepsy surgical centers, where epilepsy patients are typically evaluated on a case-by-case basis. Nevertheless, we were able to show that WM fibers appear to be affected at multiple distances away from the epileptic tissue. Interestingly, these abnormalities were most apparent in WM directly surrounding the epileptic zone.
While no other studies to date have assessed WM fiber integrity at different distances from MRI-negative EF sites using WM fiber quantification, some studies have shown that diffusion tractography can reveal widespread microstructural changes in drug-resistant epilepsy that could be responsible for surgical failure (Sivakanthan et al. 2016; Jiang et al. 2017). Our results suggest that WM directly adjacent to the epileptic zone is most prone to structural alterations. More specifically, of the three WM distances investigated, WM anomalies were most prominent at a distance of approximately 3 mm from the epileptic zone. This finding suggests that investigation of WM at this distance from epileptic tissue may better inform clinicians about whether surgery is an option and, if so, how to properly resect the EF without damaging surrounding healthy brain tissue. This is especially important to assess in WM affecting memory, language, and visual pathways in the brain, which are of prime importance in perioperative planning (Lin et al. 2008; James et al. 2015; Sivakanthan et al. 2016; Li et al. 2019). Because our AI mapping procedure detected hypometabolism (suspected EF) in cortical brain areas, we needed a method to track the surrounding WM regions closest to the EF. We sampled WM regions at three incremental distances away from the epileptic zone using a VOI placed manually in the part of the EF directly adjacent to surrounding WM. This manual implementation poses a few issues. First, because we manually defined VOIs in GM regions contralateral to the EF, there is the possibility of spatial error between ipsilateral and contralateral VOIs. Second, focal cortical dysplasias and other GM/WM pathologies may result in different amounts of WM being sampled between ipsilateral and contralateral regions.
However, any differences in WM size between ipsilateral and contralateral regions are likely small and are probably offset by the noticeable WM fiber abnormalities observed around the EF in the majority of our MRE patient cohort. The clinical potential of the proposed PET/DTI approach may be limited by the relatively small size of our heterogeneous MRE patient cohort, making it difficult to draw conclusions about which epilepsy patient groups are most likely to benefit from PET/DTI. However, the purpose of this hybrid PET/MRI study was to demonstrate the feasibility of PET/DTI and to provide a preliminary assessment of whether PET/DTI could impact clinical decision-making in epilepsy surgery, particularly in MRI-negative epilepsy, where FDG-PET could instead be used to non-invasively locate the EF. Of note, hybrid PET/MRI relies on MR-based attenuation correction (MRAC) for PET reconstruction instead of the CT-based AC used in PET/CT, which is the current clinical standard for FDG-PET imaging in epilepsy. While some studies show that traditional MRAC approaches can produce small biases in quantitative PET due to inadequate modeling of bone (Larsson et al. 2013; Andersen et al. 2014), recent reports have revealed that these MRAC biases do not significantly impact the clinical diagnosis of FDG-PET readings in epilepsy (Paldino et al. 2017; Oldan et al. 2018). Nevertheless, alternative MRAC methods have been proposed to reduce potential bias in reconstructed PET (Ladefoged et al. 2017). In our study, we used an improved, robust MRAC method (Ladefoged et al. 2015) that adequately models bone tissue to produce PET/MR images with diagnostic information comparable to PET/CT. Because the clinical assessment of our PET/DTI approach was retrospectively completed by one neurologist, potential interobserver variability could not be determined from this study.
A potential future direction of this research is a prospective study assessing the clinical utility of combined PET/DTI through interdisciplinary meetings that evaluate MRE patients both with and without our PET/DTI approach, to determine whether the approach has any impact on the final surgical decision in these patients. In general, this retrospective study demonstrated the feasibility of combining PET and DTI to investigate WM integrity in the brains of MRE patients and to further enhance clinical decision-making in epilepsy surgery. An extension of this study could combine functional MRI (fMRI) with DTI and PET to map the structure and function of brain networks in the presence of seizure-related brain abnormalities. fMRI is another non-invasive imaging modality with promising applications in neurosurgical planning. While DTI investigates structural connections, fMRI measures functional correlates between brain regions based on differences in blood flow and can be used to effectively map neural connections in the brain (Bettus et al. 2009, 2010; Fox and Greicius 2010; Moeller et al. 2011; Pittau et al. 2012). By combining structural and functional connectivity analysis, we would be able to even better characterize seizure sites in MRE surgical candidates. We plan to incorporate the PET, DTI, and fMRI modalities into an integrated software platform that would allow clinicians to non-invasively probe healthy brain tissue and areas around the epileptic zone to further improve neurosurgical planning, especially in challenging epilepsy cases where MRI and IC-EEG findings lack concordance. The integration and proper use of these non-invasive imaging modalities will help advance the field of epilepsy treatment and management and may lead to completely non-invasive epilepsy surgical planning (Knowlton 2006; Sivakanthan et al. 2016).
Data are available from the corresponding authors upon reasonable request and with permission of the Lawson Health Research Institute and Western University Health Research Ethics Board.

Abbreviations

AED: Anti-epileptic drug
AI: Asymmetry index
ANTS: Advanced normalization tools
BW: Bandwidth
df: Degrees of freedom
DTI: Diffusion tensor imaging
DWI: Diffusion-weighted imaging
EEG: Electroencephalography
EF: Epileptic focus
EMU: Epilepsy monitoring unit
EPI: Echo-planar imaging
FA: Fractional anisotropy
FDG: 18F-fluorodeoxyglucose
FLAIR: Fluid-attenuated inversion recovery
fMRI: Functional MRI
FSL: FMRIB's Software Library
FWHM: Full-width at half-maximum
GM: Gray matter
IC-EEG: Intracranial EEG
MNI: Montreal Neurological Institute
MPRAGE: Magnetization-prepared rapid gradient-echo
MRAC: MR-based attenuation correction
MRE: Medically refractory epilepsy
MRI: Magnetic resonance imaging
PET: Positron emission tomography
PET/DTI: PET-guided diffusion tractography
RESOLUTE: Region specific optimization of continuous linear attenuation coefficients based on UTE
ROI: Region of interest
SEEG: Stereo-EEG
SPACE: Sampling perfection with application optimized contrasts using different flip angle evolution
SPM: Statistical parametric mapping
SUV: Standardized uptake value
SUVr: Relative SUV
VOI: Volume of interest
WM: White matter
ZAI: Z-score AI

References

Ahmadi ME, Hagler DJ, McDonald CR, et al (2009) Side matters: diffusion tensor imaging tractography in left and right temporal lobe epilepsy. Am J Neuroradiol 30:1740–1747. https://doi.org/10.3174/ajnr.A1650
Anazodo UC, Finger E, Kwan BYM, et al (2018) Using simultaneous PET/MRI to compare the accuracy of diagnosing frontotemporal dementia by arterial spin labelling MRI and FDG-PET. Neuroimage Clin 17:405–414. https://doi.org/10.1016/j.nicl.2017.10.033
Andersen FL, Ladefoged CN, Beyer T, et al (2014) Combined PET/MR imaging in neurology: MR-based attenuation correction implies a strong spatial bias when ignoring bone. NeuroImage 84:206–216.
https://doi.org/10.1016/j.neuroimage.2013.08.042
Aparicio J, Carreño M, Bargalló N, et al (2016) Combined 18F-FDG-PET and diffusion tensor imaging in mesial temporal lobe epilepsy with hippocampal sclerosis. Neuroimage Clin 12:976–989. https://doi.org/10.1016/j.nicl.2016.05.002
Avants BB, Tustison NJ, Song G, et al (2011) A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage 54:2033–2044. https://doi.org/10.1016/j.neuroimage.2010.09.025
Basser PJ, Jones DK (2002) Diffusion-tensor MRI: theory, experimental design and data analysis – a technical review. NMR Biomed 15:456–467. https://doi.org/10.1002/nbm.783
Bettus G, Bartolomei F, Confort-Gouny S, et al (2010) Role of resting state functional connectivity MRI in presurgical investigation of mesial temporal lobe epilepsy. J Neurol Neurosurg Psychiatry 81:1147–1154. https://doi.org/10.1136/jnnp.2009.191460
Bettus G, Guedj E, Joyeux F, et al (2009) Decreased basal fMRI functional connectivity in epileptogenic networks and contralateral compensatory mechanisms. Hum Brain Mapp 30:1580–1591. https://doi.org/10.1002/hbm.20625
Blount JP, Cormier J, Kim H, et al (2008) Advances in intracranial monitoring. FOC 25:E18. https://doi.org/10.3171/FOC/2008/25/9/E18
Boscolo Galazzo I, Mattoli MV, Pizzini FB, et al (2016) Cerebral metabolism and perfusion in MR-negative individuals with refractory focal epilepsy assessed by simultaneous acquisition of 18F-FDG PET and arterial spin labeling. Neuroimage Clin 11:648–657. https://doi.org/10.1016/j.nicl.2016.04.005
Brant-Zawadzki M, Gillan GD, Nitz WR (1992) MP RAGE: a three-dimensional, T1-weighted, gradient-echo sequence--initial experience in the brain. Radiology 182:769–775.
https://doi.org/10.1148/radiology.182.3.1535892
Burneo JG, Poon R, Kellett S, Snead OC (2015) The utility of positron emission tomography in epilepsy. Can J Neurol Sci 42:360–371. https://doi.org/10.1017/cjn.2015.279
Caciagli L, Bernhardt BC, Hong SJ, et al (2014) Functional network alterations and their structural substrate in drug-resistant epilepsy. Front Neurosci 8:411. https://doi.org/10.3389/fnins.2014.00411
Cahill V, Sinclair B, Malpas CB, et al (2019) Metabolic patterns and seizure outcomes following anterior temporal lobectomy. Ann Neurol 85:241–250. https://doi.org/10.1002/ana.25405
Chamberland M, Bernier M, Fortin D, et al (2015) 3D interactive tractography-informed resting-state fMRI connectivity. Front Neurosci 9:275. https://doi.org/10.3389/fnins.2015.00275
Coupé P, Manjón JV, Gedamu E, et al (2010) Robust Rician noise estimation for MR images. Med Image Anal 14:483–493. https://doi.org/10.1016/j.media.2010.03.001
Coupé P, Yger P, Prima S, et al (2008) An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images. IEEE Trans Med Imaging 27:425–441. https://doi.org/10.1109/TMI.2007.906087
Crawford JR, Garthwaite PH (2012) Single-case research in neuropsychology: a comparison of five forms of t-test for comparing a case to controls. Cortex 48:1009–1016. https://doi.org/10.1016/j.cortex.2011.06.021
de Tisi J, Bell GS, Peacock JL, et al (2011) The long-term outcome of adult epilepsy surgery, patterns of seizure remission, and relapse: a cohort study. Lancet 378:1388–1395. https://doi.org/10.1016/S0140-6736(11)60890-8
Didelot A, Mauguiere F, Redoute J, et al (2010) Voxel-based analysis of asymmetry index maps increases the specificity of 18F-MPPF PET abnormalities for localizing the epileptogenic zone in temporal lobe epilepsies. J Nucl Med 51:1732–1739.
https://doi.org/10.2967/jnumed.109.070938
Finnema SJ, Nabulsi NB, Eid T, et al (2016) Imaging synaptic density in the living human brain. Sci Transl Med 8:348ra96. https://doi.org/10.1126/scitranslmed.aaf6667
Focke NK, Yogarajah M, Bonelli SB, et al (2008) Voxel-based diffusion tensor imaging in patients with mesial temporal lobe epilepsy and hippocampal sclerosis. NeuroImage 40:728–737. https://doi.org/10.1016/j.neuroimage.2007.12.031
Fox MD, Greicius M (2010) Clinical applications of resting state functional connectivity. Front Syst Neurosci 4:19. https://doi.org/10.3389/fnsys.2010.00019
Galovic M, Koepp M (2016) Advances of molecular imaging in epilepsy. Curr Neurol Neurosci Rep 16:58. https://doi.org/10.1007/s11910-016-0660-7
Güvenç C, Dupont P, Van den Stock J, et al (2018) Correlation of neuropsychological and metabolic changes after epilepsy surgery in patients with left mesial temporal lobe epilepsy with hippocampal sclerosis. EJNMMI Res 8:31. https://doi.org/10.1186/s13550-018-0385-5
Helmstaedter C, Kurthen M, Lux S, et al (2003) Chronic epilepsy and cognition: a longitudinal study in temporal lobe epilepsy. Ann Neurol 54:425–432. https://doi.org/10.1002/ana.10692
Henry TR, Mazziotta JC, Engel J, et al (1990) Quantifying interictal metabolic activity in human temporal lobe epilepsy. J Cereb Blood Flow Metab 10:748–757. https://doi.org/10.1038/jcbfm.1990.128
Henry TR, Pennell PB (1998) Neuropharmacological imaging in epilepsy with PET and SPECT. Q J Nuclear Med Torino 42:199–210
James JS, Radhakrishnan A, Thomas B, et al (2015) Diffusion tensor imaging tractography of Meyer's loop in planning resective surgery for drug-resistant temporal lobe epilepsy. Epilepsy Res 110:95–104.
https://doi.org/10.1016/j.eplepsyres.2014.11.020
Jiang Y, Mao L, Yan X, et al (2017) Investigation of altered microstructure in patients with drug refractory epilepsy using diffusion tensor imaging. Neuroradiology 59:597–608. https://doi.org/10.1007/s00234-017-1835-x
Jones DK, Cercignani M (2010) Twenty-five pitfalls in the analysis of diffusion MRI data. NMR Biomed 23:803–820. https://doi.org/10.1002/nbm.1543
Kamm J, Boles Ponto LL, Manzel K, et al (2018) Temporal lobe asymmetry in FDG-PET uptake predicts neuropsychological and seizure outcomes after temporal lobectomy. Epilepsy Behav 78:62–67. https://doi.org/10.1016/j.yebeh.2017.10.006
Knowlton RC (2006) The role of FDG-PET, ictal SPECT, and MEG in the epilepsy surgery evaluation. Epilepsy Behav 8:91–101. https://doi.org/10.1016/j.yebeh.2005.10.015
Labate A, Cherubini A, Tripepi G, et al (2015) White matter abnormalities differentiate severe from benign temporal lobe epilepsy. Epilepsia 56:1109–1116. https://doi.org/10.1111/epi.13027
Ladefoged CN, Benoit D, Law I, et al (2015) Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE): application to PET/MR brain imaging. Phys Med Biol 60:8047–8065. https://doi.org/10.1088/0031-9155/60/20/8047
Larsson A, Johansson A, Axelsson J, et al (2013) Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images. Magn Reson Mater Phy 26:127–136. https://doi.org/10.1007/s10334-012-0339-2
Le Bihan D (2006) Looking into the functional architecture of the brain with diffusion MRI. Int Congr Ser 1290:1–24. https://doi.org/10.1016/j.ics.2006.04.006
Le Bihan D, Breton E, Lallemand D, et al (1986) MR imaging of intravoxel incoherent motions: application to diffusion and perfusion in neurologic disorders. Radiology 161:401–407.
https://doi.org/https://doi.org/10.1148/radiology.161.2.3763909 Leemans A, Jeurissen B, Sijbers J, Jones DK (2009) ExploreDTI: a graphical toolbox for processing, analyzing, and visualizing diffusion MR data. Proc Intl Soc Mag Reson Med 17:3537 Li W, An D, Tong X, et al (2019) Different patterns of white matter changes after successful surgery of mesial temporal lobe epilepsy. Neuroimage Clin 21:101631. https://doi.org/https://doi.org/10.1016/j.nicl.2018.101631 Lin JJ, Riley JD, Juranek J, Cramer SC (2008) Vulnerability of the frontal-temporal connections in temporal lobe epilepsy. Epilepsy Res 82:162–170. https://doi.org/https://doi.org/10.1016/j.eplepsyres.2008.07.020 Lippé S, Poupon C, Cachia A, et al (2012) White matter abnormalities revealed by DTI correlate with interictal grey matter FDG-PET metabolism in focal childhood epilepsies. Epileptic Disorders 14:404–413. https://doi.org/https://doi.org/10.1684/epd.2012.0547 Maier-Hein KH, Neher PF, Houde JC, et al (2017) The challenge of mapping the human connectome based on diffusion tractography. Nat Commun 8:1349. https://doi.org/https://doi.org/10.1038/s41467-017-01285-x Moeller F, Maneshi M, Pittau F, et al (2011) Functional connectivity in patients with idiopathic generalized epilepsy. Epilepsia 52:515–522. https://doi.org/https://doi.org/10.1111/j.1528-1167.2010.02938.x Mori S, Zhang J (2006) Principles of diffusion tensor imaging and its applications to basic neuroscience research. Neuron 51:527–539. https://doi.org/https://doi.org/10.1016/j.neuron.2006.08.012 Neher PF, Laun FB, Stieltjes B, Maier-Hein KH (2014) Fiberfox: facilitating the creation of realistic white matter software phantoms: realistic white matter software phantoms. Magn Reson Med 72:1460–1470. https://doi.org/https://doi.org/10.1002/mrm.25045 Oldan JD, Shin HW, Khandani AH, et al (2018) Subsequent experience in hybrid PET-MRI for evaluation of refractory focal onset epilepsy. Seizure 61:128–134. 
https://doi.org/https://doi.org/10.1016/j.seizure.2018.07.022 Paldino MJ, Yang E, Jones JY, et al (2017) Comparison of the diagnostic accuracy of PET/MRI to PET/CT-acquired FDG brain exams for seizure focus detection: a prospective study. Pediatr Radiol 47:1500–1507. https://doi.org/https://doi.org/10.1007/s00247-017-3888-8 Pittau F, Grova C, Moeller F, et al (2012) Patterns of altered functional connectivity in mesial temporal lobe epilepsy. Epilepsia 53:1013–1023. https://doi.org/https://doi.org/10.1111/j.1528-1167.2012.03464.x Rausch R, Henry TR, Ary CM, et al (1994) Asymmetric interictal glucose hypometabolism and cognitive performance in epileptic patients. Arch Neurol 51:139–144. https://doi.org/https://doi.org/10.1001/archneur.1994.00540140045013 Richardson MP, Strange BA, Thompson PJ, et al (2004) Pre-operative verbal memory fMRI predicts post-operative memory decline after left temporal lobe resection. Brain 127:2419–2426. https://doi.org/https://doi.org/10.1093/brain/awh293 Sarikaya I (2015) PET studies in epilepsy. Am J Nucl Med Mol Imaging 5:416–430 Shang K, Wang J, Fan X, et al (2018) Clinical value of hybrid TOF-PET/MR imaging–based multiparametric imaging in localizing seizure focus in patients with MRI-negative temporal lobe epilepsy. Am J Neuroradiol 39:1791–1798. https://doi.org/https://doi.org/10.3174/ajnr.A5814 Sivakanthan S, Neal E, Murtagh R, Vale FL (2016) The evolving utility of diffusion tensor tractography in the surgical management of temporal lobe epilepsy: a review. Acta Neurochir 158:2185–2193. https://doi.org/https://doi.org/10.1007/s00701-016-2910-5 Soares J, Marques P, Alves V, Sousa N (2013) A hitchhiker's guide to diffusion tensor imaging. Front Neurosci 7:31. https://doi.org/https://doi.org/10.3389/fnins.2013.00031 Téllez-Zenteno JF, Dhar R, Wiebe S (2005) Long-term seizure outcomes following epilepsy surgery: a systematic review and meta-analysis. Brain 128:1188–1198. 
https://doi.org/https://doi.org/10.1093/brain/awh449 Thivard L, Bouilleret V, Chassoux F, et al (2011) Diffusion tensor imaging can localize the epileptogenic zone in nonlesional extra-temporal refractory epilepsies when [18F]FDG-PET is not contributive. Epilepsy Res 97:170–182. https://doi.org/https://doi.org/10.1016/j.eplepsyres.2011.08.005 Tournier JD, Smith R, Raffelt D, et al (2019) MRtrix3: a fast, flexible and open software framework for medical image processing and visualisation. NeuroImage 202:116137. https://doi.org/https://doi.org/10.1016/j.neuroimage.2019.116137 Van Bogaert P, Massager N, Tugendhaft P, et al (2000) Statistical parametric mapping of regional glucose metabolism in mesial temporal lobe epilepsy. NeuroImage 12:129–138. https://doi.org/https://doi.org/10.1006/nimg.2000.0606 Wang YH, An Y, Fan XT, et al (2018) Comparison between simultaneously acquired arterial spin labeling and 18F-FDG PET in mesial temporal lobe epilepsy assisted by a PET/MR system and SEEG. Neuroimage Clin 19:824–830. https://doi.org/https://doi.org/10.1016/j.nicl.2018.06.008 Wiest-Daesslé N, Prima S, Coupé P et al (2008) Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI. In: Metaxas D, Axel L, Fichtinger G, Székely G (eds) Medical image computing and computer-assisted intervention – MICCAI 2008. Springer, Berlin Heidelberg, Berlin, Heidelberg, pp 171–179 Chapter Google Scholar Woolrich MW, Jbabdi S, Patenaude B, et al (2009) Bayesian analysis of neuroimaging data in FSL. NeuroImage 45:S173–S186. https://doi.org/https://doi.org/10.1016/j.neuroimage.2008.10.055 The authors would like to thank John Butler and Heather Biernaski (PET/MRI technologists) for their assistance in data acquisition and Krista Doyle, the nurse navigator at the London Health Sciences Centre Epilepsy Program, for her assistance with patient recruitment. 
This study was supported by research funding from the PSI Foundation Resident Research Grant (B.Y.M.K. and J.G.B.), Mitacs Accelerate in partnership with SJHCF and MultiMagnetics Inc. (U.C.A.), London X-Ray Associates (U.C.A. and J.D.T.), and the Lawson Health Research Institute Internal Research Fund (U.C.A.).
Lawson Imaging, Lawson Health Research Institute, 268 Grosvenor St., London, Ontario, N6A 4V2, Canada: Stefan E. Poirier, Frank S. Prato, Jonathan D. Thiessen & Udunna C. Anazodo
Department of Medical Biophysics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada: Stefan E. Poirier, Ali R. Khan, Frank S. Prato, Jonathan D. Thiessen & Udunna C. Anazodo
Department of Diagnostic Radiology, Queen's University, Kingston, Ontario, Canada: Benjamin Y. M. Kwan
Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada: Michael T. Jurkiewicz, Lina Samargandy, Ali R. Khan, Jonathan Romsa, Frank S. Prato & Jonathan D. Thiessen
Epilepsy Program, Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada: David A. Steven, Ana Suller-Marti & Jorge G. Burneo
Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada: David A. Steven & Jorge G. Burneo
Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ontario, Canada: Victor Lam Shin Cheung
Imaging Research Laboratories, Robarts Research Institute, London, Ontario, Canada: Ali R. Khan
Conceived and designed the study: UCA and BYMK. Patient recruitment and PET/MRI data collection: UCA. Clinical data collection: UCA, VL, and ASM. Created image analysis pipeline: SEP, UCA, and JDT.
Analyzed data: SEP, UCA, JDT, and ARK. Interpreted data: SEP, UCA, JDT, BYMK, MTJ, LS, DAS, JGB, and JR. Manuscript preparation: SEP, UCA, JDT, and FSP. The authors read and approved the final manuscript.
Correspondence to Stefan E. Poirier or Udunna C. Anazodo.
The study was approved by the University Research Ethics Board and conducted in accordance with the Declaration of Helsinki ethical standards. All patients provided written informed consent to participate and gave written consent for publication.
Supplementary material: Diffusion MR image analysis pipeline. Table S1: EEG and MRI findings from clinical reports and visual assessment. Table S2: SUV analysis in hypometabolic PET ROIs and contralateral ROIs from AI mapping in 14 MRE patients. Table S3: Regional FA analysis in WM surrounding hypometabolic PET ROIs and contralateral ROIs detected by AI mapping of FDG-PET.
Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Poirier, S.E., Kwan, B.Y.M., Jurkiewicz, M.T. et al. 18F-FDG PET-guided diffusion tractography reveals white matter abnormalities around the epileptic focus in medically refractory epilepsy: implications for epilepsy surgical evaluation.
European J Hybrid Imaging 4, 10 (2020). https://doi.org/10.1186/s41824-020-00079-7
Received: 09 March 2020. Accepted: 12 June 2020.
Keywords: PET/MRI, fluorodeoxyglucose, diffusion tractography.
Chromophore selective multi-wavelength photoacoustic remote sensing of unstained human tissues
Saad Abbasi,1,5 Martin Le,1,5 Bazil Sonier,1 Kevan Bell,1,2 Deepak Dinakaran,2,3 Gilbert Bigras,4 John R. Mackey,3 and Parsin Haji Reza1,*
1PhotoMedicine Labs, Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
2illumiSonics, Inc., Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
3Department of Oncology, University of Alberta, Edmonton, Alberta, T6G 2V1, Canada
4Department of Laboratory Medicine and Pathology, University of Alberta, Edmonton, Alberta, T6G 2V1, Canada
5Equal contributions
*Corresponding author: [email protected] (Parsin Haji Reza, https://orcid.org/0000-0002-4928-6244)
Biomed. Opt. Express 10, 5461–5469 (2019). https://doi.org/10.1364/BOE.10.005461
Original Manuscript: June 18, 2019. Revised Manuscript: September 16, 2019.
Identifying positive surgical margins after resection of cancer often triggers re-excision and adjuvant treatments.
Incomplete initial resections result in poorer patient outcomes, psychological and financial stress to the patient, and increased healthcare costs. Surgical margins are typically assessed post-operatively using time-consuming and expensive slide-based histopathology tissue analysis. Currently, a real-time, non-contact, virtual histology-like intraoperative margin assessment tool is not available. To address this need, we have developed a non-contact, multi-wavelength, reflection-mode photoacoustic remote sensing (PARS) microscope demonstrating chromophore-selective contrast in human tissues. We show the capabilities of multi-wavelength PARS microscopy utilizing both 266 nm and 532 nm excitation wavelengths and a 1310 nm detection wavelength. Cell nuclei and hemoglobin were visualized at the cellular scale without the addition of exogenous contrast agents. This work provides a critical step towards a virtual histology tool delivering intraoperative histology-like information in living tissue. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Surgical resection remains one of the most effective primary treatments for most solid cancers [1,2]. The goal of cancer surgery is to remove all cancerous tissue along with a minimal margin of healthy tissue to ensure no cancerous cells remain. To achieve these negative surgical margins, surgical oncologists rely on visual inspection, palpation of tissue, pre-operative imaging, and clinical judgement to determine if any cancerous tissue remains. Currently, the gold standard for surgical margin assessment is post-operative histological analysis of resected tissue. Margin assessment is a multi-step process of oriented tissue selection from the resection specimen, production of high-quality histology slides, bright-field microscopic assessment, and integration of the results in three dimensions. To prepare histology slides, tissues must be fixed, sectioned, mounted, and stained.
This process requires highly skilled staff, significant resources, and may take up to two weeks to complete. When positive surgical margins are identified, patients frequently require additional surgery for revision of margins and may need more aggressive adjuvant systemic therapy or radiation therapy. These additional measures impose physical, psychological, and financial burdens on the patient. Frozen section analysis (FSA) is a commonly used method for intraoperative margin assessment that can reduce re-excision rates and improve patient outcomes in some settings. The accuracy of this process varies widely when compared to standard histopathological sections [3]. FSA requires freezing of the sample, which can introduce artifacts in the tissue, reduce the technical quality of the slides, and make interpretation difficult [4]. Furthermore, the procedure can take up to an hour to complete, increasing anesthesia and surgical risks [5]. Several techniques have been developed to improve intraoperative assessment and decrease positive margin rates. However, these techniques require the addition of exogenous dyes or optical clearing [6–8]. Photoacoustic (PA) imaging delivers contrast through optical absorption. This enables the targeting of individual endogenous chromophores such as hemoglobin, lipids, cell nuclei, cytochromes, and melanin with specific wavelengths without the addition of any exogenous contrast. Optical-resolution photoacoustic microscopy has demonstrated visualization of cellular morphology and cytochromes in ex-vivo samples [9,10]. However, this technique requires physical contact with the sample to achieve acoustic coupling, making it impractical for in-situ clinical applications. Physical contact with the surgical site increases the risk of infection, especially as post-operative infections account for 25% of all nosocomial infections [11].
Additionally, use of a contact-based imaging microscope would require extensive sterilization procedures for in-situ applications. This method would also require thin samples as it operates in transmission mode, making it unsuitable for in-situ imaging. A recently reported photoacoustic imaging modality called photoacoustic remote sensing (PARS) microscopy has demonstrated subcellular resolution in label-free, non-contact, reflection-mode operation. Unlike conventional PA methods, PARS has demonstrated a centimeter-scale working distance, removing the requirement for contact-based acoustic coupling [12,13]. Haven et al. have demonstrated PARS' capability in visualizing cultured HeLa cell nuclei and thin slices of HT1080 tumors grown in chicken embryo models by targeting the optical absorption peak of DNA at 266 nm [14]. In this paper, we present the first visualization of cellular morphology in human pancreatic and tonsil tissue samples using a multi-wavelength, label-free, all-optical, non-contact, reflection-mode microscope.
2.1 Sample preparation
Human pancreatic and tonsil tissue samples were obtained under a protocol approved by the Research Ethics Board of Alberta (Protocol ID: HREBA.CC-18-0277) and the University of Waterloo Health Research Ethics Committee (Humans: #40275 Photoacoustic Remote Sensing (PARS) Microscopy of Surgical Resection, Needle Biopsy, and Pathology Specimens). All experiments were performed in accordance with the relevant guidelines and regulations. Specimens of human tissue were obtained, with approval of the institutional ethics committee, from clinical collaborators in the Cross-Cancer Institute (Edmonton, Alberta, Canada). These samples were resected and immediately submerged in formaldehyde for fixation, dehydrated, cleared with xylene, infiltrated with hot paraffin wax, and mounted on a cassette.
To provide a comparison between PARS images and conventional histopathology images, we prepared a set of hematoxylin and eosin (H&E) stained and adjacent unstained tissue samples. The unstained slides were prepared in an identical manner apart from omitting staining with the H&E dyes. For both the H&E slides and unstained slides, 4 µm thick adjacent sections were cut from the cassettes and placed in a warm water bath, and the sections were transferred to glass slides and baked at 60°C to remove any excess paraffin. The H&E slides were baked for 30 minutes whereas the unstained slides were baked for an hour. Next, the H&E slides were stained with standard H&E staining dyes and covered with a coverslip. The unstained slides were not covered with any coverslips or any other media.
2.2 Imaging mechanism
PARS utilizes both a pulsed excitation laser and a continuous-wave detection laser in order to generate and detect initial PA pressures. In brief, an excitation pulse incident on an optically absorbing region produces thermo-elastic expansion through the photoacoustic effect, generating large initial pressures. These initial pressures modulate the local refractive index via the elasto-optic effect, modulating the scattering properties of this region. Meanwhile, a continuous-wave detection beam is co-focused with the excitation spot. The modulating scattering profile induces intensity variations in the back-scattered detection beam. Photoacoustic signals are measured as variations of this back-scattered intensity. The magnitude of these signals is proportional to the optical absorption of the absorber at the excitation wavelength [12,15,16]. Following the analytical methods described in [12,16], and using values for the optical absorption of biomolecules presented by Soltani et al., we calculate the PARS reflectivity changes for cell nuclei to compare against red blood cells (RBC) [17,18].
Cells are modeled using refractive indices of 1.358 and 1.377 for cell nuclei and cytoplasm respectively [19]. Red blood cells are modeled using refractive indices of 1.413 for the cell, and 1.33 for the blood plasma [20,21]. Since both the cell nuclei and the red blood cells are assumed to be much larger than the detection wavelength and the focal spot size, a simple planar interface is assumed. The PARS signal will then be a result of comparing the unperturbed reflection from this structure, assuming a Fresnel interface, and the perturbed reflection following photoacoustic excitation and elasto-optic modulation of the absorbing medium. This signal is then characterized as the reflectivity difference $\varDelta R$ between these two reflections. This yields reflectivity changes of $\varDelta {R_{DNA,266}} = 1.43 \times {10^{-4}}$, $\varDelta {R_{DNA,532}} = 1.88 \times {10^{-7}}$, $\varDelta {R_{RBC,266}} = 7.88 \times {10^{-6}}$, and $\varDelta {R_{RBC,532}} = 1.78 \times {10^{-5}}$, where $\varDelta {R_{DNA,\lambda}}$ is the reflectivity change for DNA in cell nuclei at $\lambda$ and $\varDelta {R_{RBC,\lambda}}$ is the reflectivity change for RBC at $\lambda$. For these calculations excitation fluences are assumed to be at the ANSI limits of 3 mJ/cm2 for 266 nm and 20 mJ/cm2 for 532 nm. This yields predicted fractional signals of $\varDelta {R_{DNA,266}}/\varDelta {R_{RBC,266}} = 18.1$ at 266 nm and $\varDelta {R_{RBC,532}}/\varDelta {R_{DNA,532}} = 94.5$ at 532 nm.
2.3 Experimental apparatus
As shown in Fig. 1, a 266 nm 0.5 nanosecond-pulsed laser (SNU-20F-10x, Teem Photonics) operating at 21.4 kHz was filtered through a 25 µm pinhole. The beam was then expanded using a fixed magnification beam expander. This excitation beam was then passed through a dichroic beam combiner where it was combined with a 1310 nm detection source. A 532 nm 3 nanosecond-pulsed laser (VGEN-G-HE-10, Spectra Physics) operating at 20 kHz was expanded by two lenses.
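The fractional signals quoted in Sect. 2.2 can be reproduced directly from the listed $\varDelta R$ values. A minimal sketch, assuming the same normal-incidence planar Fresnel model described in the text (the baseline reflectances are an illustrative by-product of that model; only the $\varDelta R$ values and refractive indices come from the paper):

```python
# Sanity check of the Sect. 2.2 fractional-signal estimates, using only
# numbers quoted in the text.

def fresnel_reflectance(n1, n2):
    """Normal-incidence power reflectance of a planar Fresnel interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Unperturbed baseline reflectances for the two planar-interface models [19-21]
R_nucleus = fresnel_reflectance(1.377, 1.358)  # cytoplasm / cell nucleus
R_rbc = fresnel_reflectance(1.330, 1.413)      # blood plasma / red blood cell

# Reflectivity changes quoted in the text (at ANSI-limit fluences)
dR_dna_266, dR_dna_532 = 1.43e-4, 1.88e-7
dR_rbc_266, dR_rbc_532 = 7.88e-6, 1.78e-5

print(f"baseline R: nucleus {R_nucleus:.2e}, RBC {R_rbc:.2e}")
print(f"266 nm DNA/RBC ratio: {dR_dna_266 / dR_rbc_266:.1f}")  # ~18.1
# ~94.7 from the rounded dR values; 94.5 in the text, from unrounded inputs
print(f"532 nm RBC/DNA ratio: {dR_rbc_532 / dR_dna_532:.1f}")
```

The ratios confirm the selectivity argument: at 266 nm the nuclear signal dominates the erythrocyte signal by roughly an order of magnitude, and the ordering reverses at 532 nm.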
It was then fiber coupled using a polarization-maintaining fiber launch (MBT621D, Thorlabs) and a polarization-maintaining fiber. The beam was then released from the fiber, passed through a collimator, and then expanded using two lenses. The excitation beam was then passed through a dichroic beam combiner (DMLP900R, Thorlabs) where it was combined with a 1310 nm detection source. A 1310 nm superluminescent continuous-wave diode (S5FC1018P, Thorlabs) was fiber coupled and passed through a fiber polarization controller. It was then collimated and expanded using two lenses. The expanded beam was passed through a polarizing beam splitter and then converted into circularly polarized light with a zero-order quarter waveplate (WPQ10M-1310, Thorlabs). The beam was then passed through a dichroic beam combiner, where it meets the 532 nm excitation source. The detection beam then passes through another dichroic beam splitter where it meets the 266 nm excitation source. The beams then moved into a 2D galvanometer scanning mirror system and were co-focused and co-scanned using a 0.3 numerical-aperture reflective objective lens (LMM-15x-UVV, Thorlabs). The pulse energies were measured to be 3 nJ and 15 nJ for the 266 nm and 532 nm pulses respectively. The back-reflected detection beam was passed back through the quarter waveplate, converting it from circular to vertical polarization, and directed by the polarizing beam splitter into a 75 MHz bandwidth InGaAs balanced photodiode (PDB425C-AC, Thorlabs). No thermal damage to the samples was observed during these experiments.
Fig. 1. Simplified schematic of multi-wavelength PARS microscope.
Component labels are defined as: pinhole (PH), neutral density filter (NDF), collimator (C), polarized beam splitter (PBS), quarter waveplate (QWP), dichroic mirror (DC), photodiode (PD), fiber launch (FL), galvanometer mirrors (GM), objective lens (OL), mirror (M).
2.4 Image formation
High-resolution small field-of-view acquisitions were acquired using a 2D galvanometer mirror system to direct the beam. A raster-like scan pattern was created using a function generator which produced two ramp waveforms with fixed frequencies of 30 Hz and 60 mHz. Positional signals from the 2D galvanometer mirror system and the mechanical stages were recorded using a 16-bit data acquisition card (CSE161G4, Gage Applied). The PARS modulation event produced a voltage signal from the balanced photodiode which was also recorded using the digitizer card. A Hilbert transform was applied to the raw time-domain signals and the absolute values of the functions were extracted. Images were then formed by taking a maximum amplitude projection of each modulation event and were Delaunay interpolated to fit a Cartesian grid. A greyscale colour map was applied to the images. Mosaic images were formed by capturing successive high-resolution small field-of-view acquisitions in a grid pattern. In-house developed software allowed for precision movement of the linear translation stages after each high-resolution acquisition. Each mosaic image was 100 µm × 100 µm in size and took four seconds to acquire at a laser repetition rate of 20 kHz. Mosaic sections were stitched together using ImageJ's Grid/Collection Stitching plugin [22], which determines optimal positioning of the sections using a cross-correlation algorithm. A color map was then applied to the respective 266 nm and 532 nm acquisitions to simulate an H&E-stained histology-like image.
2.5 Resolution characterization
To characterize the lateral resolution of the system, carbon fibers were imaged at 266 nm (Fig. 2(a)) and 532 nm (Fig.
2(c)) wavelengths. The resolution is characterized by fitting an edge spread function to a carbon fiber's edge pixel values. The lateral resolution is then defined as the width between 10% and 90% of the edge spread function. The resolution for the 266 nm beam was found to be 1.2 µm and for the 532 nm beam 1.5 µm.
Fig. 2. Resolution characterization of the multi-wavelength PARS system: (a) carbon fibers imaged with the 266 nm excitation; (b) corresponding edge-spread function measuring a resolution of 1.2 µm; (c) carbon fibers imaged with the 532 nm excitation; (d) corresponding edge-spread function measuring a resolution of 1.5 µm. Scale bar 10 µm.
3. Results and discussion
Figure 3 compares a high-resolution H&E image (Fig. 3(a)) of human epidermal tissue with a PARS image (Fig. 3(b)) of an unstained adjacent section. The PARS image is able to resolve cell nuclei similarly to the H&E section. As the images are of adjacent sections and not of the same tissue sample, the cell nuclei do not appear at the same locations in both images.
Fig. 3. (a) High-resolution H&E image of human epidermal tissue; (b) high-resolution PARS image with a 266 nm excitation of an adjacent section. Color bar represents normalized PARS signal. Scale bar 10 µm.
Using the apparatus described in Fig. 1, we imaged and compared unstained human pancreatic (Fig. 4) and tonsil tissue (Fig. 5) sections to adjacent H&E stained sections (Fig. 4(a) and Fig. 5(a)). The tissue sections were first imaged with a 266 nm excitation wavelength (Fig. 4(b) and Fig. 5(b)) followed by a 532 nm excitation wavelength (Fig. 4(c) and Fig. 5(c)). The 266 nm and 532 nm images were then superimposed as shown in Fig. 4(d) and Fig. 5(d), emulating contrast from an H&E colour map.
Fig. 4. (a) A standard H&E stained slide of a blood vessel (blue outline) within human pancreatic tissue imaged with a conventional brightfield microscope.
(b) An adjacent unstained slide of the same specimen imaged with 266 nm excitation and (c) with 532 nm excitation. (d) A superimposed image of (b) and (c) with a histology-like colormap; DNA is colored purple, hemoglobin is colored red. Scale bar 100 µm.
Fig. 5. (a) A standard H&E stained slide of venules (blue outline) embedded within human tonsil tissue imaged with a conventional brightfield microscope. (b) An adjacent unstained slide of the same specimen imaged with 266 nm excitation and (c) with 532 nm excitation. (d) A superimposed image of (b) and (c) with a histology-like colormap; DNA is colored purple, hemoglobin is colored red. Scale bar 100 µm.
The contrast provided in the Fig. 4(b) and Fig. 5(b) images highlights the optical absorption of DNA at 266 nm. As a result, cellular morphology and bulk tissue structure are distinguishable in reference to the H&E prepared section. The areas corresponding to hemoglobin (blue outlines in Fig. 4 and Fig. 5) lack signal in the cells presumed to be erythrocytes, since they are anuclear and do not have DNA to strongly absorb the 266 nm excitation. Similarly, the 532 nm excitation images shown in Fig. 4(c) and Fig. 5(c) present hemoglobin contrast comparable to their corresponding H&E prepared sections. The blood vessel outlined in Fig. 4(d) can be identified in comparison to the H&E prepared section in Fig. 4(a). The 266 nm signal (Fig. 4(b)) also shows other nucleated cells of blood in the vessel lumen, such as leukocytes, that are clearly evident against the background of the 532 nm signal from erythrocytes (Fig. 4(c)). The PARS-imaged signals from erythrocytes and nucleated cells of blood correspond well with the H&E image (Fig. 4(a)), and the overlay (Fig. 4(d)) suggests an enhanced ability of PARS to identify these nucleated cells amongst the population of erythrocytes. Similarly, the hemoglobin highlighted by the 532 nm excitation (Fig. 5(c)) in the tonsil tissue is comparable to the location of erythrocytes in Fig. 5(a).
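The H&E-like superpositions of Figs. 4(d) and 5(d) — the 266 nm (nuclear) channel rendered purple, the 532 nm (hemoglobin) channel rendered red — amount to a two-channel false-coloring step. A minimal sketch of such a step; the tint vectors and the subtractive mixing rule are illustrative assumptions, not the authors' exact color map:

```python
import numpy as np

def he_like_overlay(ch_266, ch_532,
                    dna_tint=(0.55, 0.25, 0.65),  # hematoxylin-like purple (assumed)
                    hb_tint=(0.85, 0.15, 0.20)):  # blood-like red (assumed)
    """Fuse two grayscale PARS channels into one RGB image.

    ch_266 highlights DNA (cell nuclei); ch_532 highlights hemoglobin.
    Each channel is normalized to [0, 1], tinted, and composited onto a
    white background, roughly mimicking a bright-field H&E appearance.
    """
    def norm(ch):
        ch = np.asarray(ch, dtype=float)
        rng = ch.max() - ch.min()
        return (ch - ch.min()) / rng if rng > 0 else np.zeros_like(ch)

    a = norm(ch_266)[..., None]
    b = norm(ch_532)[..., None]
    white = np.ones(a.shape[:2] + (3,))
    # Subtractive-style mixing: each channel pulls the white background
    # toward its tint in proportion to the local signal strength.
    rgb = white - a * (1.0 - np.array(dna_tint)) - b * (1.0 - np.array(hb_tint))
    return np.clip(rgb, 0.0, 1.0)

# Tiny synthetic example: a "nucleus" blob and an "erythrocyte" blob.
img_266 = np.zeros((32, 32)); img_266[8:14, 8:14] = 1.0
img_532 = np.zeros((32, 32)); img_532[20:26, 20:26] = 1.0
rgb = he_like_overlay(img_266, img_532)
print(rgb.shape)  # (32, 32, 3)
```

In practice the two stitched mosaic images would replace the synthetic blobs; overlapping signal simply darkens toward a mixed hue, as in subtractive staining.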
Although hemoglobin and DNA are assumed to be the primary absorbers at the excitation wavelengths, other chromophores such as cytochromes and collagen will produce non-zero signals at these same wavelengths. However, their contributions are considered to be negligible. Nuclear density, morphology, and organization are critical identifiers commonly used to diagnose tissue samples; the nucleus is the best cellular sub-unit by which to identify a cell. Semi-quantitative measurements such as the nucleus/cytoplasm (N/C) ratio are used to distinguish malignant from healthy cells, as malignant cells are biologically more aggressive and therefore tend to have a larger nucleus and smaller cytoplasm. As we demonstrate that PARS is capable of imaging targeted chromophores in human tissues with a multi-wavelength excitation design, additional excitation wavelengths may be added to image additional chromophores. A 422 nm excitation wavelength has previously shown efficacious visualization of the cytoplasm, emulating an eosin-like stain [10]. Furthermore, a 1197 nm excitation wavelength has been previously used to image adipose tissue, which is otherwise lost during the extensive H&E staining process [23]. These features can be useful in distinguishing among different tissue types and are some of the important histopathologic features that distinguish normal tissue from cancerous tissue. The contrast delivered by multi-wavelength excitation PARS permits regions of cancer to be identified even if they are macroscopically indistinguishable from benign tissue, which can guide more accurate surgical resection.
This work represents the first report of a non-contact multi-wavelength microscope capable of label-free visualization of cellular morphology and hemoglobin. Unstained slides of human pancreas and tonsil tissue samples were imaged and found to be comparable to conventional H&E processed slides, highlighting cellular morphology and hemoglobin.
The addition of hemoglobin presents a vital step towards clinically useful in-situ histology-like imaging. Furthermore, this report demonstrates PARS' ability to incorporate additional excitation wavelengths to selectively deliver label-free contrast for various components of tissue structure such as cytochrome, lipids or adipocytes, potentially enabling microscopic intraoperative assessment of living tissue. Natural Sciences and Engineering Research Council of Canada (DGECR-2019-00143, RGPIN-2019-06134); Canada Foundation for Innovation (JELF #38000); Mitacs Accelerate (IT13594); University of Waterloo Startup funds; Centre for Bioengineering and Biotechnology (CBB Seed fund); illumiSonics Inc (SRA #083181). DD, KB, JRM and PHR have financial interests in illumiSonics Inc. IllumiSonics partly supported this work. 1. J. M. Kurtz, R. Amalric, H. Brandone, Y. Ayme, J. Jacquemier, J.-C. Pietra, D. Hans, J.-F. Pollet, C. Bressac, and J.-M. Spitalier, "Local recurrence after breast-conserving surgery and radiotherapy. Frequency, time course, and prognosis," Cancer 63(10), 1912–1917 (1989). [CrossRef] 2. N. Martini, M. S. Bains, M. E. Burt, M. F. Zakowski, P. McCormack, V. W. Rusch, and R. J. Ginsberg, "Incidence of local recurrence and second primary tumors in resected stage I lung cancer," J. Thorac. Cardiovasc. Surg. 109(1), 120–129 (1995). [CrossRef] 3. S. R. Shi, C. Liu, L. Pootrakul, L. Tang, A. Young, R. Chen, R. J. Cote, and C. R. Taylor, "Evaluation of the value of frozen tissue section used as "gold standard" for immunohistochemistry," Am. J. Clin. Pathol. 129(3), 358–366 (2008). [CrossRef] 4. E. B. Desciak and M. E. Maloney, "Artifacts in frozen section preparation," Dermatol. Surg. 26(5), 500–504 (2000). [CrossRef] 5. G. M. Bricca, D. G. Brodland, and J. A. Zitelli, "Immunostaining Melanoma Frozen Sections: The 1-Hour Protocol," Dermatol. Surg. 30(3), 403–408 (2004). [CrossRef] 6. E. A. te Velde, T. Veerman, V. Subramaniam, and T. 
Ruers, "The use of fluorescent dyes and probes in surgical oncology," Eur. J. Surg. Oncol. 36(1), 6–15 (2010). 7. D. Zhu, K. V. Larin, Q. Luo, and V. V. Tuchin, "Recent progress in tissue optical clearing," Laser Photonics Rev. 7(5), 732–757 (2013). 8. A. M. Zysk, K. Chen, E. Gabrielson, L. Tafra, E. A. May Gonzalez, J. K. Canner, E. B. Schneider, A. J. Cittadine, P. Scott Carney, S. A. Boppart, K. Tsuchiya, K. Sawyer, and L. K. Jacobs, "Intraoperative Assessment of Final Margins with a Handheld Optical Imaging Probe During Breast-Conserving Surgery May Reduce the Reoperation Rate: Results of a Multicenter Study," Ann. Surg. Oncol. 22(10), 3356–3362 (2015). 9. T. T. W. Wong, R. Zhang, P. Hai, C. Zhang, M. A. Pleitez, R. L. Aft, D. V. Novack, and L. V. Wang, "Fast label-free multilayered histology-like imaging of human breast cancer by photoacoustic microscopy," Sci. Adv. 3(5), e1602168 (2017). 10. C. Zhang, Y. S. Zhang, D.-K. Yao, Y. Xia, and L. V. Wang, "Label-free photoacoustic microscopy of cytochromes," J. Biomed. Opt. 18(2), 020504 (2013). 11. R. L. Nichols, "Preventing surgical site infections: A surgeon's perspective," Emerg. Infect Dis. 7(2), 220–224 (2001). 12. P. Hajireza, W. Shi, K. Bell, R. J. Paproski, and R. J. Zemp, "Non-interferometric photoacoustic remote sensing microscopy," Light: Sci. Appl. 6(6), e16278 (2017). 13. P. H. Reza, K. Bell, W. Shi, J. Shapiro, and R. J. Zemp, "Deep non-contact photoacoustic initial pressure imaging," Optica 5(7), 814–820 (2018). 14. N. J. M. Haven, K. L. Bell, P. Kedarisetti, J. D. Lewis, and R. J. Zemp, "Ultraviolet photoacoustic remote sensing microscopy," Opt. Lett. 44(14), 3586–3589 (2019). 15. K. L. Bell, P. Hajireza, W. Shi, and R. J. Zemp, "Temporal evolution of low-coherence reflectrometry signals in photoacoustic remote sensing microscopy," Appl. Opt. 56(18), 5172–5181 (2017). 16. K. Bell, P.
Hajireza, and R. Zemp, "Scattering cross-sectional modulation in photoacoustic remote sensing microscopy," Opt. Lett. 43(1), 146–149 (2018). 17. S. Soltani, A. Ojaghi, and F. E. Robles, "Deep UV dispersion and absorption spectroscopy of biomolecules," Biomed. Opt. Express 10(2), 487–499 (2019). 18. J. McHowat, J. H. Jones, and M. H. Creer, "Quantitation of individual phospholipid molecular species by UV absorption measurements," J. Lipid Res. 37, 2450–2460 (1996). 19. M. Schürmann, J. Scholze, P. Müller, J. Guck, and C. J. Chan, "Cell nuclei have lower refractive index and mass density than cytoplasm," J. Biophotonics 9(10), 1068–1076 (2016). 20. G. S. Adair and M. E. Robinson, "The specific refraction increments of serum-albumin and serum-globulin," Biochem. J. 24(4), 993–1011 (1930). 21. J. Vörös, "The density and refractive index of adsorbing protein layers," Biophys. J. 87(1), 553–561 (2004). 22. S. Preibisch, S. Saalfeld, and P. Tomancak, "Globally optimal stitching of tiled 3D microscopic image acquisitions," Bioinformatics 25(11), 1463–1465 (2009). 23. R. Li, M. N. Slipchenko, P. Wang, and J.-X. Cheng, "Compact high power barium nitrite crystal-based Raman laser at 1197 nm for photoacoustic imaging of fat," J. Biomed. Opt. 18(4), 040502 (2013).
EJNMMI Physics
Fully digital PET is unaffected by any deterioration in TOF resolution and TOF image quality in the wide range of routine PET count rates
Julien Salvadori (ORCID: 0000-0002-3823-2531), Freddy Odille, Gilles Karcher, Pierre-Yves Marie & Laetitia Imbert
EJNMMI Physics volume 8, Article number: 1 (2021)
Digital PET involving silicon photomultipliers (SiPM) provides an enhanced time-of-flight (TOF) resolution as compared with photomultiplier (PMT)-based PET, but also better prevention of the count-related rises in dead time and pile-up effects, mainly due to smaller trigger domains (i.e., the detection surfaces associated with each trigger circuit). This study aimed to determine whether this latter property could help prevent deteriorations in TOF resolution and TOF image quality across the wide range of PET count rates documented in clinical routine. Variations, according to count rates, in timing resolution and in the TOF-related enhancement of the quality of phantom images were compared between the first fully digital PET (Vereos) and a PMT-based PET (Ingenuity). Single-count rate values were additionally extracted from the list-mode data of routine analog- and digital-PET exams at each 500-ms interval, in order to determine the ranges of routine PET count rates. Routine PET count rates were lower for the Vereos than for the Ingenuity. For the Ingenuity, the upper limits were estimated at approximately 21.7 and 33.2 Mcps after injection of respectively 3 and 5 MBq.kg-1 of current 18F-labeled tracers. At 5.8 Mcps, corresponding to the lower limit of the routine count rates documented with the Ingenuity, timing resolutions provided by the scatter phantom were 326 and 621 ps for Vereos and Ingenuity, respectively.
At higher count rates, timing resolution was remarkably stable for Vereos but exhibited a progressive deterioration for Ingenuity, respectively reaching 732 and 847 ps at the upper limits of 21.7 and 33.2 Mcps. The averaged TOF-related gain in signal/noise ratio was stable at approximately 2 for Vereos but decreased from 1.36 at 5.8 Mcps to 1.14 and 1.00 at respectively 21.7 and 33.2 Mcps for Ingenuity. Contrary to the Ingenuity PMT-based PET, the Vereos fully digital PET is unaffected by any deterioration in TOF resolution and, consequently, in the quality of TOF images across the wide range of routine PET count rates. This advantage is even more striking at higher count rates, for which the preferential use of digital PET should be further recommended (e.g., dynamic PET recording, higher injected activities). Three PET/CT systems using silicon photomultipliers (SiPM) are currently commercialized, namely the Philips Vereos™, the Siemens Biograph Vision™, and the GE Discovery™ MI. Their performances according to the National Electrical Manufacturers Association (NEMA) NU standard have already been described [1,2,3,4]. When compared with PMT-based PET, they provide better quality time-of-flight (TOF) images [5,6,7,8,9,10], leading to enhanced diagnostic confidence and accuracy for oncologic diseases [11,12,13,14]. To date, clinical PET/CT systems using SiPM provide the best TOF resolution—i.e., ~ 210 ps for the Biograph Vision™ [2], ~ 310 ps for the Vereos™ [15] and ~ 376 ps for the Discovery MI™ [4]. However, TOF resolution may also be enhanced by further optimizations of PMT-based detectors, as evidenced by the 435 ps TOF resolution reached by the new PoleStar™ m660 PET/CT system (SinoUnion) [16]. The better quality of TOF images is mainly due to a better localization of the emission points, thereby leading to a faster convergence of the iterative reconstruction process and thus to a better tradeoff between contrast and noise [17, 18].
The TOF resolution is the key parameter in this setting and is currently measured at rather low counting rates, around 2 Mcps, as part of the daily quality-control procedures of PET cameras [19], whereas higher counting rates are typically reached in clinical routine [2]. While both Siemens and GE PET/CT scanners use analog SiPMs (aSiPM), the Vereos PET/CT uses a detection system based on digital SiPMs (dSiPM), with the particularity that each Single Photon Avalanche Diode (SPAD) is connected to its own readout electronics [20, 21]. Currently, aSiPM and dSiPM have comparable performances with regard to TOF resolution [22], although the latter is expected to better prevent electronic- and detector-related noise [23,24,25,26]. When compared to aSiPMs, dSiPMs have active quenching/recharge mechanisms, a property that usually leads to enhanced count-rate stability and shortened dead times [23]. In addition, only dSiPMs provide the timestamp of each detected photon and thus allow fully digital processing of all available information. Moreover, the small aSiPMs or dSiPMs can be implanted in much higher numbers than the large PMTs, this being particularly the case for the Philips Vereos system, for which the dSiPMs are individually coupled with scintillator crystals. This leads to a reduction in the number of detected pulses per photodetector and per trigger circuit, thereby preventing count-related rises in dead time and pile-up effects [1, 15]. This feature is the main reason why the Vereos provides a peak noise equivalent count rate (NECR) which is 50% higher and is reached at a 2-fold higher activity concentration when compared with other systems involving comparable axial fields of view [1, 27, 28]. Although the count rates achieved during routine PET exams are lower than those corresponding to NECR peaks, small rises in dead time and in the rates of pile-up effects have already been documented at rather low levels of recorded activity on analog PET [29].
However, it is not known whether such small rises may affect the TOF image quality and TOF resolution in the range of routine PET count rates, and whether they may be definitely prevented by the SiPM systems and trigger circuits of digital PET cameras. The present study is aimed at comparatively assessing the changes in TOF resolution and in the TOF-related enhancement in image quality, according to the levels of recorded activity and counting rates, between the Vereos fully digital PET camera and the Ingenuity analog PET camera (Philips), with special focus on their respective behaviors in the high range of routine PET count rates. The Vereos and Ingenuity PET-CT systems have already been described in detail elsewhere [15, 29].
Count-rate performances and count-rate impact on TOF and energy resolutions
The experimental setup was that of the NEMA standard count-rate test. A linear 18F source was inserted into the NEMA scatter phantom [30] at the saturating activities of 2700 MBq for the Vereos PET and of 1700 MBq for the Ingenuity PET. This phantom was placed at the FOV center of each camera, with the line source off-centered at a radial distance of 45 mm from the central axis. Thereafter, 30 consecutive PET recordings were obtained during a 16-h period, with increasing recording times in order to compensate for radioactive decay. The recordings obtained with an activity higher than that associated with the peak single-count rate were excluded from this analysis.
Count-rate performances, NECR, and scatter fraction
For each recording, true (T), random (R), and scatter (S) count rates were determined according to NEMA standards [30].
The noise equivalent count rate (NECR), which can be interpreted as the true coincidence rate of an ideal system, namely one that would yield the same signal-to-noise ratio as a real system whose measurement is degraded by scattered and random coincidences, was computed with a noiseless estimate of randoms [31] as follows: $$ \mathrm{NECR}=\frac{T^2}{T+S+R} $$ where R was estimated using delayed coincidence measurements and a variance reduction method [32, 33] (Fig. 1a).
Fig. 1 Results obtained from the NEMA count-rate test for the Ingenuity (red) and the Vereos (blue) cameras, with the relationships between noise equivalent count rates and activity concentration (NECR) (a) and with the relationships between TOF (b) or energy (c) resolutions and single-count rates.
Single-count rate/activity relationship
In addition to the coincidence information, the list-mode data of both cameras comprise the prompt, delayed, and single-event rates recorded at each 500-ms interval. Single-count rates, listed at each 500-ms interval, were averaged herein for each of the consecutive recordings. Double-exponential functions were used to fit the relationship between single-count rates (S) and activity concentrations (A), as well as the reverse relationship between activity concentrations and single-count rates: $$ f_{S\to A}(S)\ \text{or}\ f_{A\to S}(A)=a\,e^{b\,(S\ \text{or}\ A)}+c\,e^{d\,(S\ \text{or}\ A)} $$ where a, b, c, and d are the coefficients providing the best fit (all R2 > 0.9999) for each camera and for each of the 2 functions (i.e., the function fS → A predicting A from S values and the function fA → S predicting S from A values) (Fig. 2). Corresponding R2 coefficients and goodness-of-fit criteria are given in Table 1.
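As a concrete illustration, the NECR formula and the double-exponential model above can be sketched in Python. The coefficient values and sample data below are hypothetical placeholders for illustration, not the fitted per-camera parameters of Table 1:

```python
import numpy as np
from scipy.optimize import curve_fit

def necr(t, s, r):
    """Noise equivalent count rate: NECR = T^2 / (T + S + R)."""
    return t ** 2 / (t + s + r)

def double_exp(x, a, b, c, d):
    """Double-exponential model, f(x) = a*e^(b*x) + c*e^(d*x)."""
    return a * np.exp(b * x) + c * np.exp(d * x)

# Hypothetical (activity concentration, single-count rate) samples,
# generated from known coefficients so the fit is well-posed; the
# paper fits real NEMA count-rate test data instead.
activity = np.linspace(1.0, 50.0, 25)             # kBq/mL
true_coeffs = (2.0, 0.02, -2.0, -0.10)
singles = double_exp(activity, *true_coeffs)      # Mcps

popt, _ = curve_fit(double_exp, activity, singles,
                    p0=(1.5, 0.03, -1.5, -0.15), maxfev=20000)
r2 = 1.0 - (np.sum((singles - double_exp(activity, *popt)) ** 2)
            / np.sum((singles - singles.mean()) ** 2))
```

With T = 100, S = 40, and R = 60 kcps, for instance, `necr` returns 50 kcps. R2 values close to 1, as reported in Table 1, indicate that the double-exponential model captures the S–A relationship well.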
Fig. 2 Relationships between single-count rates and activity concentration obtained from the NEMA count-rate test with the Ingenuity (red) and the Vereos (blue) cameras, with the double-exponential regressions predicting: a single-count rates based on activity concentration (\( {f}_{A\to S}^{Ing} \) and \( {f}_{A\to S}^{Ver} \) for respectively Ingenuity and Vereos) and b activity concentration based on single-count rates (\( {f}_{S\to A}^{Ing} \) and \( {f}_{S\to A}^{Ver} \) for respectively Ingenuity and Vereos).
Table 1 Parameters of the double-exponential function used to model the relationship between single-count rates S and activity concentration A = fS → A(S), and the inverse relationship between activity concentration A and single-count rates S = fA → S(A), both for the Ingenuity (fIng) and Vereos (fVer) cameras. Two goodness-of-fit criteria are additionally provided for each regression: the coefficients of determination (R2) and the root mean square error (RMSE).
TOF and energy resolutions
TOF resolution was determined for each of the consecutive recordings according to a method described in the NEMA NU2-2018 standard [30], based on a previous work by Wang et al. [34], considering the nearest line-of-response points from the linear source as the annihilation points of positrons (Fig. 1b). Briefly, for all true coincidences extracted from the list-mode data, the distances between these annihilation points and the corresponding points computed through TOF differences are considered to correspond to TOF errors (∆t), with the latter represented in a histogram. TOF resolution is then computed as the full width at half maximum (FWHM) of this histogram. According to the NEMA NU2-2018 standard, true coincidences were separated from random and scatter on the basis of a distance from the line source of less than 20 mm [30].
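The FWHM-based estimation of TOF resolution described above can be sketched as follows. The histogram bin width and the simulated Gaussian spread of the TOF errors ∆t are illustrative assumptions, not values prescribed by the NEMA procedure:

```python
import numpy as np

def fwhm_from_tof_errors(dt, bin_width=25.0):
    """Histogram the TOF errors and return the full width at half
    maximum (same units as dt, e.g. ps), taken as the span of the
    bins whose counts reach half the peak count."""
    bins = np.arange(dt.min(), dt.max() + bin_width, bin_width)
    counts, edges = np.histogram(dt, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = np.where(counts >= counts.max() / 2.0)[0]
    return centers[above[-1]] - centers[above[0]] + bin_width

# Simulated TOF errors: a Gaussian whose true FWHM (2.355 * sigma)
# is ~326 ps, the Vereos timing resolution reported in this study.
rng = np.random.default_rng(0)
dt = rng.normal(0.0, 326.0 / 2.355, size=200_000)  # ps
tof_resolution = fwhm_from_tof_errors(dt)
```

On this simulated data the estimate falls within a few bin widths of the true 326 ps; the NEMA procedure additionally restricts the histogram to true coincidences, selected within 20 mm of the line source.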
Energy resolution was derived for each of the consecutive recordings from the FWHM value of an energy histogram, the latter being computed from the photon energy of all true coincidences extracted from the same list-mode data (Fig. 1c).
Extraction and characterization of routine PET count rates
Single-count rates, which are given at each 500-ms interval in the list-mode data of both cameras, were extracted from (i) routine PET exams previously recorded on the Vereos or Ingenuity camera after the injection of 3 MBq.kg-1 of an 18F-labeled tracer (18F-FDG, 18F-Choline, or 18F-DOPA) and (ii) a cardiac exam recorded with a Vereos camera after the injection of 7 MBq.kg-1 of Rubidium-82 at stress. The number of selected patients as well as the number of collected samples of single rates are reported in Table 2 for each tracer type and for each camera. As seen in Table 2, the main patient characteristics were comparable between patients imaged with the Vereos and those imaged with the Ingenuity camera.
Table 2 General patient characteristics and main PET parameters recorded or computed in the different groups imaged in clinical routine with the Vereos (digital-PET) or Ingenuity (analog-PET) camera after the injection of 3 MBq.kg-1 of current 18F-labeled tracers, with an additional patient imaged after the injection of 7 MBq.kg-1 of 82Rb on the Vereos camera. Continuous variables are presented with median values [minimum – maximum values after exclusion of outliers].
A corresponding activity concentration of the NEMA phantom was computed for these single-count rates collected in patients by using the aforementioned function fS → A. The distributions of these count rates and of the corresponding activity concentrations were analyzed according to tracer type and camera type, considering all samples of 500-ms intervals recorded in all bed positions and in all patients. As depicted in Fig.
3a, b, this distribution is represented through box plots along with the display of median values and interquartile ranges, as well as the maximal and minimal values after excluding the outliers (i.e., values higher than 1.5-fold the upper limit of the 3rd quartile or lower than 1.5-fold the lower limit of the 1st quartile). These maximal and minimal values of the box plots were deemed to reflect the upper and lower limits for count rates, as well as for activity concentrations.
Fig. 3 Box-plot representation of a distributions of single-count rates collected during routine PET exams as a function of tracer type (18F-FDG, 18F-Choline, 18F-Dopa, Rubidium-82) and camera type (blue for Vereos and red for Ingenuity), b corresponding activity concentrations from the NEMA scatter phantom (i.e., obtained through the relationship displayed in Fig. 2a), with c examples of time-evolutions of the count rates recorded during whole-body PET exams with 18F-FDG and with 18F-Choline, and during dynamic exams focused on the pelvis region for 18F-DOPA and on the cardiac region for Rubidium-82.
Fig. 4 Single-count rates versus signal-to-noise ratios (SNR) of the 10 to 37-mm diameter hot spheres of the IEC phantom for the noTOF (left panels) and TOF (median panels) images from the Vereos (upper panels) and Ingenuity (lower panels) cameras, together with the corresponding TOF-related gain, i.e., the ratio between the SNR of TOF and noTOF images (right panels). The dashed and solid vertical black lines correspond to respectively the lower and upper levels of single-count rates for the PET exams performed with the injections of 3 MBq.kg-1.
The solid red vertical line corresponds to the estimated upper level of single-count rates for the PET exams performed with the injections of 5 MBq.kg-1.
TOF-related enhancement in the image quality in the upper range of routine PET count rates
Recording and reconstruction of PET images
The NEMA IEC phantom is an anthropomorphic body phantom containing 6 spheres with diameters of 10, 13, 17, 22, 28, and 37 mm, as well as a cylindrical central "lung" insert filled with a low-density material [30].
Fig. 5 Representative examples of PET slices of the IEC phantom recorded with the TOF information with the Vereos (upper panels) and the Ingenuity (lower panels) at count rates corresponding to the same levels of activity concentrations for both cameras (i.e., at background concentrations of the IEC phantom of 2, 5, 10, 20, and 40 kBq.mL-1). The values of signal-to-noise ratio (SNR) and contrast recovery coefficients (CRC) for the 10 to 37-mm diameter hot spheres are inserted in each image, together with the background relative noise (RN).
An IEC phantom, filled with an 18F concentration of 60 kBq.mL-1 in the background and 4-fold higher concentrations in the 6 spheres, was positioned at the center of each of the Vereos and Ingenuity cameras for serial 3-min PET recordings at 30-min intervals, until the background concentration fell below 1 kBq.mL-1. Images were reconstructed with and without TOF information (TOF and noTOF images, respectively), with a fixed number of 10 subsets and with the number of OSEM iterations previously shown to maximize the signal-to-noise ratio [35] of the 10-mm sphere irrespective of the level of recorded activity—i.e., 1 and 2 OSEM iterations for TOF images from the Vereos and Ingenuity, respectively, and 4 OSEM iterations for noTOF images from both cameras [9]. No image filter was applied, and the relaxation parameter, which controls the magnitude of the image change induced by each iteration, was set to 1.0.
The single-count rates, listed at each 500-ms interval, were averaged for each 3-min recording (this 3-min period corresponding to a total of 360 samples).
Image quality metrics
Signal-to-noise ratio (SNR) and contrast recovery coefficient (CRC) were computed for each sphere of the IEC phantom and for each of the consecutive PET recordings according to the following formulas [30, 35]: $$ {\mathrm{CRC}}_{\mathrm{sphere}}=\frac{\frac{S_i}{B_i}-1}{\frac{a_H}{a_c}-1}\ast 100 $$ $$ {\mathrm{SNR}}_{\mathrm{sphere}}=\frac{S_i-{B}_i}{\sigma_i} $$ where Si represents the mean voxel activity extracted from a circular 2D region-of-interest (ROI) matching the sphere of diameter i and placed on the slice passing through the sphere center; Bi and σi respectively represent the mean and standard deviation of the activities from all voxels lying within the 60 background regions-of-interest (ROIs) defined by the NEMA standard for the sphere of diameter i; and \( \frac{a_H}{a_c} \) is the actual ratio of activity concentrations between spheres and background, which was set at 4 in the present instance. Background relative noise was computed through the coefficient of variation of the activities from the 60 background 37-mm diameter ROIs, with the resulting values converted to percentages [36]: $$ \mathrm{RN}=\frac{\sigma_{37 mm}}{B_{37 mm}}\ast 100 $$
As evidenced in Fig. 2, the single-count rates from both cameras were closely linked to the activity concentrations during the NEMA count-rate test. However, this relationship was shifted to higher single-count rate values for the Ingenuity camera, due to its higher absolute count sensitivity [1, 29]. A similar upward shift was documented for the noise equivalent count rate (NECR) of the Ingenuity, as compared with that of the Vereos, but only up to the NECR peak of the Ingenuity, a level where the two curves intersect (Fig. 1a).
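The image quality metrics defined in the formulas above (CRC, SNR, and background relative noise) reduce to a few lines of code. The ROI statistics below are hypothetical numbers for illustration, not measured phantom values:

```python
def crc(sphere_mean, bkg_mean, true_ratio=4.0):
    """Contrast recovery coefficient (%) of a hot sphere, given the
    true sphere-to-background activity ratio a_H/a_c (4 here)."""
    return (sphere_mean / bkg_mean - 1.0) / (true_ratio - 1.0) * 100.0

def snr(sphere_mean, bkg_mean, bkg_std):
    """Signal-to-noise ratio of a sphere against its background ROIs."""
    return (sphere_mean - bkg_mean) / bkg_std

def relative_noise(bkg_mean, bkg_std):
    """Background relative noise (%), i.e. the coefficient of
    variation of the background ROI activities."""
    return bkg_std / bkg_mean * 100.0

# Hypothetical ROI statistics (arbitrary activity units).
s_i, b_i, sigma_i = 220.0, 60.0, 8.0
metrics = (crc(s_i, b_i), snr(s_i, b_i, sigma_i), relative_noise(b_i, sigma_i))
```

A perfectly recovered sphere in a 4:1 phantom (sphere mean exactly 4-fold the background) would give a CRC of 100%; partial-volume effects push measured CRCs below that, especially for the 10-mm sphere.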
The NECR peak of the Vereos camera was reached at a greater than 2-fold higher activity than that of the Ingenuity (54 vs. 24 kBq.mL-1) (Fig. 1a). Over the large range of single-count rates tested, up to 100 Mcps, the TOF and energy resolutions of the Vereos were remarkably stable, at approximately 326 ps and 11%, respectively (Fig. 1b, c). For the Ingenuity, by contrast, TOF and energy resolutions exhibited progressive deteriorations according to count rate. For both cameras, the single-PET count rates collected in patients were higher for 18F-choline and 18F-DOPA than for 18F-FDG (see Fig. 3a), which was mainly explained by much shorter injection-to-recording delay times (Table 2). Single-count rates were even higher at the onset of the Rubidium-82 Vereos PET exam, presumably at the time of the first cardiac pass of the tracer bolus (Fig. 3c). Due to the aforementioned higher absolute count sensitivity [1, 29], the Ingenuity exhibited higher single-count rates than the Vereos (Fig. 3a). However, this difference was much less pronounced when these count rates were replaced by the corresponding concentrations from the scatter phantom obtained from the relationships displayed in Fig. 2b for each camera (Fig. 3b). The count rates of all of the PET exams performed with 18F-labeled tracers ranged from 2.3 to 13.3 Mcps for the Vereos and from 5.8 to 21.7 Mcps for the Ingenuity, corresponding to the lower and upper limits of the box plots displayed in Fig. 3a.
Estimation of top levels of routine PET count rates
For routine protocols performed with injected doses of 5 MBq.kg-1 instead of 3 MBq.kg-1 of 18F-labeled tracers [37], the upper limits were presumptively estimated to be higher, reaching 21.6 Mcps for the Vereos and 33.2 Mcps for the Ingenuity.
These estimated count-rate values (Sestimated) were obtained from the upper limits of the count rates documented after injection of 3 MBq.kg-1 for the Vereos (S = 13.3 Mcps) and Ingenuity (S = 21.7 Mcps) systems with the following equation: $$ {S}_{\mathrm{estimated}}={f}_{A\to S}\left(C\cdot {f}_{S\to A}(S)\right) $$ where fS → A is the double-exponential function predicting the activity concentrations of the NEMA count-rate phantom based on single-count rates for a given camera (Fig. 2b), and fA → S is the double-exponential function predicting the single-count rates of the same camera based on activity concentrations (Fig. 2a). The constant C corresponds to the change in activity concentration by a factor of 5/3, corresponding to the shift in injected dose from 3 to 5 MBq.kg-1. A similar method was applied to estimate an upper limit of 39.5 Mcps for the Rubidium-82 PET exam recorded on the Ingenuity camera, based on the upper limit observed for this exam on the Vereos camera (SVer = 29.5 Mcps). For this purpose, the following equation was used: $$ {S}_{\mathrm{estimated}}^{\mathrm{Ing}}={f}_{A\to S}^{Ing}\left({f}_{S\to A}^{Ver}\left({S}_{\mathrm{Ver}}\right)\right) $$ where \( {f}_{S\to A}^{Ver} \) is the double-exponential function predicting activity concentrations based on the single-count rates of the Vereos camera (29.5 Mcps) (Fig. 2b), and \( {f}_{A\to S}^{Ing} \) is the double-exponential function predicting the single-count rates of the Ingenuity camera based on activity concentration (Fig. 2a). The validity of these estimations was strengthened by comparisons with actual measurements, although these comparisons could not be obtained with human data (no patient was injected in this instance with 5 MBq.kg-1 of tracer and no patient was investigated with Rubidium-82 on the Ingenuity camera) and were therefore performed only on the IEC phantom data obtained over a wide activity range (see Additional file 5: Appendix 1).
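The chained estimation S_estimated = f_A→S(C · f_S→A(S)) can be sketched as below. The double-exponential coefficients are hypothetical stand-ins for the fitted per-camera parameters of Table 1, and the inverse function is obtained here by numerical root finding rather than by a second fitted regression as in the paper:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical coefficients (a, b, c, d) standing in for a camera's
# fitted f_{A->S}; the paper fits one such function per camera.
A2S = (2.0, 0.02, -2.0, -0.10)

def f_a_to_s(a_conc, p=A2S):
    """Single-count rate (Mcps) predicted from activity concentration."""
    a, b, c, d = p
    return a * np.exp(b * a_conc) + c * np.exp(d * a_conc)

def f_s_to_a(s_rate, p=A2S):
    """Numerical inverse of f_a_to_s, bracketed on a plausible
    activity range (the model above is monotonic there)."""
    return brentq(lambda x: f_a_to_s(x, p) - s_rate, 1e-9, 500.0)

def estimated_rate(s_observed, dose_scale=5.0 / 3.0):
    """S_estimated = f_{A->S}(C * f_{S->A}(S)); C = 5/3 models the
    shift in injected dose from 3 to 5 MBq/kg."""
    return f_a_to_s(dose_scale * f_s_to_a(s_observed))
```

For the inter-camera estimate, the inner inverse would use the Vereos function and the outer prediction the Ingenuity function, as in the second equation above.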
TOF resolution
As mentioned above, the TOF resolution of the Vereos was stable around 326 ps irrespective of the level of recorded count rate (Fig. 1b). By contrast, the TOF resolution of the Ingenuity rose from 621 to 732 ps between 5.8 and 21.7 Mcps, corresponding to the lower and upper limits, respectively, of the count rates documented with the injection of 3 MBq.kg-1 of 18F-tracers. This TOF resolution of the Ingenuity deteriorated even further at the upper limits of count rates expected to be achieved with the injection of 5 MBq.kg-1 of 18F-tracers (847 ps at 33.2 Mcps) or of Rubidium-82 (899.5 ps at 39.5 Mcps).
TOF-related gain in SNR
The TOF-related gain in SNR was computed as the ratio between the SNR from TOF and noTOF images. As evidenced in Fig. 4, the TOF-related gain in SNR provided by the Vereos was approximately 2 on average and was clearly stable across single-count rates, although slight variations were documented according to sphere size (i.e., this gain was consistently higher for the smallest as opposed to the largest spheres). For the Ingenuity, the TOF-related gain in SNR exhibited comparable variations according to sphere size, although it was found to deteriorate with count rate. More precisely, the averaged TOF-related gain decreased from 1.36 at the lower limit of the routine PET count rates of the Ingenuity (5.8 Mcps), to 1.14 and 1.00 at the upper limits of the routine PET count rates documented for this camera with the respective injections of 3 MBq.kg-1 and 5 MBq.kg-1 (Fig. 4). This gain even fell below 1 (0.90) at the upper limit of the count rate estimated during the Rubidium-82 PET exam with the Ingenuity (Fig. 4). In addition, as seen in Fig. 4, a saturation of the SNR could be documented for the Ingenuity but not the Vereos camera, with SNR peaks of the Ingenuity reached around 30 Mcps on the noTOF images and even earlier, around 15 Mcps, on TOF images.
Consequently, the SNR achieved by TOF images in the upper range of routine PET count rates was dramatically higher for the Vereos than for the Ingenuity. In particular, the Vereos-to-Ingenuity ratio of the averaged SNR of the 6 spheres was 1.3 at the lower limits of the count rates documented in clinical routine (i.e., 2.3 Mcps for the Vereos and 5.8 Mcps for the Ingenuity). However, this ratio increased further to 2.0 and 2.6 at the upper limits of the count rates documented for each camera at 3 and 5 MBq.kg-1 of 18F-labeled tracers, respectively, and to 3.3 at the upper limits corresponding to the Rubidium-82 exams. Representative examples of TOF PET images of the IEC phantom are given in Fig. 5 for both cameras under increasing count-rate conditions. Finally, the count-related deterioration in the TOF-related gain in SNR from the Ingenuity camera was found to be in line with concomitant deteriorations in the gains of both contrast and noise (see Additional files 1 and 2: Figures S6 and S7). It may additionally be pointed out that, for the Ingenuity, the TOF-related gains in SNR and in contrast fell below 1 with the rise in count rate (see Fig. 4 and Additional file 1: Figure S6, respectively), yielding evidence of a paradoxical worsening of the quality of images reconstructed with the TOF information. Fully digital PET was previously shown to significantly improve the quality of TOF images due to an enhanced TOF resolution as compared with analog PET, and to better prevent the count-related rises in dead time and pile-up effects due to the use of digital SiPM systems with small trigger domains (i.e., small detection surfaces associated with each trigger circuit). 
The present study provides evidence that, combined, these two properties are particularly advantageous in the high range of the activities recorded by PET in clinical routine, where digital PET is better suited to prevent the deterioration in TOF resolution and, consequently, in TOF image quality. Although the majority of our data were obtained with 18F-labeled tracers, they may likely be extrapolated to all other PET tracers, depending on the level of single-count rates achieved during recording. A previous study had already shown that count rate-related deteriorations in the TOF and energy resolutions of the PMT-based Ingenuity PET camera were virtually linear and started at very low count rates [29]. To our knowledge, the present study is the first to show that this deterioration has a significant impact on image quality, especially in the upper range of routine PET count rates. The TOF resolution of PET cameras is commonly assessed as part of daily control procedures with a point source of 22Na and thus under low count rate conditions (~ 2 Mcps). In clinical routine, however, the activity recorded by PET is higher and varies according to injected doses as well as patient characteristics such as body weight or renal function. In addition, this recorded activity varies according to tracer type and recording protocol, with significant fluctuations during the course of each PET exam. Indeed, as illustrated herein in Fig. 3c, higher activities are generally recorded during the abdominal and pelvic steps of whole-body recordings of current 18F-labeled tracers, as well as during the early recording phases of dynamic PET exams. 
In the present study, the levels of single-count rates could be collected during the course of current PET exams, at each 500-ms interval, and these levels were put in correspondence with the TOF resolution and image quality metrics measured at the same count rates with the NEMA count-rate phantom and the IEC phantom, respectively. It should be emphasized, however, that the absolute count sensitivity of the Vereos camera is lower than that of the Ingenuity camera [1, 29], due to (1) a lower axial field of view (164 vs. 180 mm), (2) shorter crystals (19 vs. 22 mm), (3) a larger inter-pixel dead space, and (4) a smaller energy window (164 vs. 276 keV). This point is clearly evidenced in Fig. 2a, where the curves from the single-count rates of both cameras are put in correspondence with the activity concentrations from the NEMA count-rate phantom. The higher count sensitivity of the Ingenuity is further illustrated by the observation in Fig. 3a that this camera was also associated with higher single-count rates than the Vereos camera during the recording of routine PET exams. However, as evidenced in Fig. 3b, this difference between the 2 cameras was clearly minimized after conversion of these routine count rates into recorded activities. Such equivalence in terms of recorded activities was expected since all study patients were submitted to the same injection protocols and their anthropometric and clinical characteristics were comparable between the two cameras (Table 2). In these conditions, both the body distribution and the time-evolution of the tracer activities during the PET exams could indeed be expected to be roughly comparable between the two cameras, even if the Ingenuity was associated with higher absolute count rates than the Vereos camera. As a result, the upper limits of the routine PET count rates were reached herein with the Ingenuity but not with the Vereos camera, due to the aforementioned difference in absolute sensitivity. 
The limit attained was 21.7 Mcps for the injection of 3 MBq.kg-1 of current 18F-labeled tracers and was estimated at 33.2 Mcps for 5 MBq.kg-1 of 18F-labeled tracers and at 39.5 Mcps for 7 MBq.kg-1 of Rubidium-82. These upper limits were all reached with the Ingenuity. For the Vereos camera, both the TOF resolution and the TOF-related gain in SNR were found to be remarkably stable throughout the broad range of activities tested herein (Figs. 1b and 4). In contrast, with the Ingenuity camera, both parameters deteriorated progressively and almost linearly with count rate (see Figs. 1b and 4), with significant deteriorations already documented between the lower and upper limits of the count rates observed after the injection of only 3 MBq.kg-1 of current 18F-labeled tracers, i.e., from 621 ps to 732 ps for TOF resolution and from 1.36 to 1.14 for the TOF-related gain in SNR. Finally, at the upper limit of the count rates achieved with the injection of 3 MBq.kg-1 of 18F-labeled tracers (21.7 Mcps), the SNR achieved by the TOF images from the Vereos was, on average, 100% higher than that achieved by the TOF images from the Ingenuity. By contrast, this percentage was only 30% at the lower limit (5.8 Mcps). This superiority of the SNR achieved by the TOF images of the Vereos was even more pronounced at the upper limits of the count rates likely achieved with the injection of 5 MBq.kg-1 of 18F-labeled tracers (33.2 Mcps), as well as with the Rubidium-82 protocol (39.5 Mcps), with respective enhancements of 160% and 220% as compared with the SNR achieved by the TOF images of the Ingenuity camera. As already stated above, the dSiPM-based detection system of the Vereos camera was previously shown to be highly advantageous for its performance at high count rates [1, 15]. This point is well illustrated by the fact that the NECR peak was documented at activity levels that were two-fold higher for the Vereos than for the Ingenuity system. 
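These percentage figures are simply the Vereos-to-Ingenuity SNR ratios reported earlier (1.3, 2.0, and 2.6), restated relative to the Ingenuity baseline; a one-line conversion makes the correspondence explicit:

```python
def enhancement_pct(ratio):
    """Relative SNR enhancement (%) implied by a Vereos-to-Ingenuity SNR ratio."""
    return (ratio - 1.0) * 100.0

# Ratios at the lower routine limit and at the 3 and 5 MBq/kg upper limits:
pcts = [round(enhancement_pct(r)) for r in (1.3, 2.0, 2.6)]  # 30, 100, 160
```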
The latter also likely explains the further deterioration in the image quality of the Ingenuity camera beyond this NECR peak. However, the present study shows that the TOF resolution as well as the quality of the TOF images from the Ingenuity deteriorated at much lower count rates than those corresponding to the NECR peak. This progressive worsening in the quality of the TOF images from the Ingenuity could be directly attributed to the concomitant deterioration in TOF resolution, especially for count rates within the usual routine ranges (i.e., under the 30 Mcps level corresponding to the SNR peak on noTOF images (see Fig. 4)). Indeed, by using a method initially described by Wang et al. [34], the TOF resolution of the Ingenuity was found to deteriorate linearly as a function of count rate, even under the 30 Mcps level, whereas the TOF resolution of the Vereos was very short and stable regardless of the count rate level. An additional and paradoxical observation was that, at count rates over 10 Mcps and 30 Mcps, respectively, the averaged SNR and contrast values from the spheres were lower on TOF than on noTOF images from the Ingenuity (Fig. 4 and Additional file 1: Figure S6). This likely results from the fact that the count-related deterioration in TOF resolution may not be accurately taken into account in the reconstruction process of TOF images. This hypothesis is further strengthened by previous studies showing that the use of an incorrect Gaussian TOF kernel, due to miscalibration or count rate influence, leads to increased noise and lower lesion contrast [38,39,40]. Finally, this hypothesis was confirmed through additional "in silico" experiments with Monte-Carlo simulations, in which significant deteriorations in contrast and SNR were documented for reconstructions performed with misfit TOF kernel values (see Additional file 4: Figure 8). 
It may be mentioned that not only the width but also the shape of the TOF distribution needs to be modeled accurately. This distribution is commonly considered Gaussian, although a Laplace distribution could be more appropriate for very high timing resolutions (< 50 ps) [41]. It should be pointed out that not only the TOF resolution but also the energy resolution of the Ingenuity camera was affected at increasing values of recorded count rates (Fig. 1c). In the range of routine count rates, however, the maximal amplitude of the deterioration in energy resolution was only about one percentage point for the Ingenuity, corresponding to the difference between the level of 11.3%, at the lower limit achieved with 18F-tracers, and the level of 12.4%, at the maximal count rate estimated for 82Rb. In additional "in silico" experiments, the impact of this small deterioration in energy resolution on the image quality parameters was found to be negligible compared to that induced by the concomitant deterioration in TOF resolution (results not shown). The simulated camera was the Vereos, for which the GATE Monte Carlo model was previously validated by a direct comparison with experimental data [42]. The degradation in TOF resolution with the increase in count rate is a deleterious consequence of the increases in dead time and pile-up effect, two parameters which are inversely linked to the size of trigger domains (i.e., the detection surface associated with each trigger circuit). In this regard, the number of photodetectors is dramatically higher for the Vereos than for the Ingenuity system (23040 dSiPMs vs. 420 PMTs), with a similar difference being observed between the 2 cameras for the number of trigger circuits (5760 vs. 28). Therefore, the Vereos involves a much smaller trigger domain than the Ingenuity (0.64 cm2 vs. 132.48 cm2), leading to a drastic reduction in the data flow per trigger circuit and thus to better prevention against dead time and pile-up effects. 
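Under the simple assumption that data flow per circuit scales with the domain's detection area, the trigger-domain figures above imply roughly a 200-fold reduction for the Vereos; a quick check of the quoted numbers:

```python
# Figures quoted in the text for each camera's detection system.
vereos = {"photodetectors": 23040, "trigger_circuits": 5760, "domain_cm2": 0.64}
ingenuity = {"photodetectors": 420, "trigger_circuits": 28, "domain_cm2": 132.48}

# Per-circuit reduction in detection area (hence, roughly, in data flow):
reduction = ingenuity["domain_cm2"] / vereos["domain_cm2"]  # ~207x

# The total detection surface served by all trigger circuits is comparable;
# the difference lies in how finely that surface is subdivided:
vereos_total = vereos["trigger_circuits"] * vereos["domain_cm2"]           # 3686.4 cm^2
ingenuity_total = ingenuity["trigger_circuits"] * ingenuity["domain_cm2"]  # 3709.44 cm^2
```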
As detailed in Additional file 3: Table S3, trigger domains are smaller on the commercially available digital PET systems than on the analog PET systems, although they are far from being equivalent within each of these two categories. In particular, the PMT-based Biograph mCT PET/CT (Siemens) involves a trigger domain of only 27 cm2 (one domain per detector block), and therefore, its vulnerability to high count rates may be expected to be much lower than that of the Ingenuity (one domain per detector panel) [43,44,45]. In the present study, we analyzed the behavior of timing resolution (assessed on the NEMA count-rate phantom) and of TOF image quality (assessed on the IEC phantom) in the wide range of routine PET count rates. For this purpose, the level of single-count rates from a given camera was considered to have comparable consequences on the TOF images from phantoms and patients. However, it must be recognized that these consequences could vary according to the spatial distribution of the recorded radioactive sources. It could even be considered that local deteriorations in TOF-image quality might occur in the absence of any evident increase in total count rates, when only a few trigger domains are saturated due to radioactive sources lying in close proximity. The count rate distribution between the different trigger domains of a given camera is likely not identical when imaging phantoms or patients, nor when imaging patients with different anthropometric characteristics or injected with different tracers. These considerations constitute a limitation in the interpretation of our results. Another limitation was that no PET data were available for measuring the extreme levels of the count rates that may be achieved in clinical routine with these cameras (i.e., after the injection of 5 MBq.kg-1 of FDG or during the early vascular phase of an 82Rb exam for the Ingenuity). 
These count rates could only be estimated, and the accuracy of these estimations may be questioned, although we also observed that our estimation method yields coherent results when applied to the count rates achieved not in patients but on the IEC phantom, over a large range of recorded activities. The relative difference with regard to actual values is ≤ 2% when estimating the count rates achieved at higher activities than those prescribed in our patients, and ≤ 7% when estimating the Ingenuity count rates corresponding to the count rates recorded with the Vereos (see Additional file 5: Appendix 1). However, these limitations likely do not challenge our main observation that, contrary to PMT-based PET with large trigger domains, fully digital PET is unaffected by any deterioration in TOF resolution and TOF image quality in the wide range of the global PET count rates observed in clinical routine. Finally, we only used an OSEM image-reconstruction algorithm, but our results would likely be unchanged with other TOF-based algorithms, such as Bayesian penalized likelihood algorithms [46]. In conclusion, fully digital PET was already shown to provide significant advantages when compared with PMT-based PET, due to an enhanced TOF resolution as well as a better prevention of the count-related rises in dead time and pile-up effects. The present study provides new compelling evidence that, combined, these properties are particularly advantageous in the upper range of activities recorded by PET in clinical routine, where digital PET is unaffected by any deterioration in the TOF resolution and TOF image quality, contrary to the Ingenuity PMT-based PET. 
While this advantage is already significant in the range of routine PET count rates achieved with limited injected doses, it becomes even more prominent with higher count rates for which the preferential use of digital PET should be further recommended (i.e., dynamic PET recording, higher injected activities). The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Rausch I, Ruiz A, Valverde-Pascual I, Cal-Gonzalez J, Beyer T, Carrio I. Performance evaluation of the Philips Vereos PET/CT system according to the NEMA NU2-2012 standard. J Nucl Med. 2018;60:561–7. Reddin JS, Scheuermann JS, Bharkhada D, et al. Performance evaluation of the SiPM-based Siemens biograph vision PET/CT system. Sydney: IEEE; 2018. p. 1–5. Pan T, Einstein SA, Kappadath SC, et al. Performance evaluation of the 5-ring GE discovery MI PET/CT system using the national electrical manufacturers association NU 2-2012 standard. Med Phys. 2019:mp.13576. Vandendriessche D, Uribe J, Bertin H, De Geeter F. Performance characteristics of silicon photomultiplier based 15-cm AFOV TOF PET/CT. EJNMMI Phys. 2019;6:8. Salvadori J, Perrin M, Marie P-Y, Imbert L, Verger A. High-resolution brain 18F-FDG images provided by fully digital PET. Clin Nucl Med. 2019;44:301–2. Salvadori J, Imbert L, Perrin M, et al. Head-to-head comparison of image quality between brain 18F-FDG images recorded with a fully digital versus a last-generation analog PET camera. EJNMMI Res. 2019;9:61. Nguyen NC, Vercher-Conejero JL, Sattar A, et al. Image quality and diagnostic performance of a digital PET prototype in patients with oncologic diseases: initial experience and comparison with analog PET. J Nucl Med. 2015;56:1378–85. Gnesin S, Kieffer C, Zeimpekis K, et al. Phantom-based image quality assessment of clinical 18F-FDG protocols in digital PET/CT and comparison to conventional PMT-based PET/CT. EJNMMI Phys. 2020;7:1. Salvadori J, Odille F, Verger A, et al. 
Head-to-head comparison between digital and analog PET of human and phantom images when optimized for maximizing the signal-to-noise ratio from small lesions. EJNMMI Phys. 2020;7:11. van Sluis J, Boellaard R, Somasundaram A, et al. Image quality and semiquantitative measurements on the biograph vision PET/CT system: initial experiences and comparison with the biograph mCT. J Nucl Med. 2020;61:129–35. Wright CL, Binzel K, Zhang J, Knopp MV. Advanced functional tumor imaging and precision nuclear medicine enabled by digital PET technologies. Contrast Media Mol Imaging. 2017;2017:1–7. Fuentes-Ocampo F, López-Mora DA, Flotats A, et al. Digital vs. analog PET/CT: intra-subject comparison of the SUVmax in target lesions and reference regions. Eur J Nucl Med Mol Imaging. 2019;46:1745–50. Surti S, Viswanath V, Daube-Witherspoon ME, Conti M, Casey ME, Karp JS. Benefit of improved performance with state-of-the-art digital PET/CT for lesion detection in oncology. J Nucl Med. 2020;42:462–570. Koopman D, van Dalen JA, Stevens H, Slump CH, Knollema S, Jager PL. Performance of digital PET compared to high-resolution conventional PET in patients with cancer. J Nucl Med. 2020;61:1448–54. Zhang J, Maniawski P, Knopp MV. Performance evaluation of the next generation solid-state digital photon counting PET/CT system. EJNMMI Res. 2018;8:97. Huo L, Li N, Wu H, et al. Performance evaluation of a new high-sensitivity time-of-flight clinical PET/CT system. EJNMMI Phys. 2018;5:29. Conti M. Effect of randoms on signal-to-noise-ratio in TOF PET. IEEE Trans Nucl Sci. 2006;53:1188–93. Surti S. Update on time-of-flight PET imaging. J Nucl Med. 2015;56:98–105. Griesmer J, Laurence T, Cooke S, Karp J, Perkins A, Kolthammer J. Time-of-flight quality control for a new Philips Gemini PET/CT scanner. J Nucl Med. 2006;47:391P. Frach T, Prescher G, Degenhardt C, de Gruyter R, Schmitz A, Ballizany R. The digital silicon photomultiplier-principle of operation and intrinsic detector performance. 
Orlando; 2009. p. 2383–6. Haemisch Y, Frach T, Degenhardt C, Thon A. Fully digital arrays of silicon photomultipliers (dSiPM) – a scalable alternative to vacuum photomultiplier tubes (PMT). Phys Procedia. 2012;37:1546–60. Gundacker S, Auffray E, Jarron P, Meyer T, Lecoq P. On the comparison of analog and digital SiPM readout in terms of expected timing performance. Nucl Instrum Methods Phys Res Sect Accel Spectrometers Detect Assoc Equip. 2015;787:6–11. Schaart DR, Charbon E, Frach T, Schulz V. Advances in digital SiPMs and their application in biomedical imaging. Nucl Instrum Methods Phys Res Sect Accel Spectrometers Detect Assoc Equip. 2016;809:31–52. Lecoq P. Pushing the limits in time-of-flight PET imaging. IEEE Trans Radiat Plasma Med Sci. 2017;1:473–85. Gundacker S, Turtos RM, Auffray E, Paganoni M, Lecoq P. High-frequency SiPM readout advances measured coincidence time resolution limits in TOF-PET. Phys Med Biol. 2019;64:055012. Gundacker S, Heering A. The silicon-photomultiplier: fundamentals and applications of a modern solid-state photon detector. Phys Med Biol. 2020;65. Hsu DFC, Ilan E, Peterson WT, Uribe J, Lubberink M, Levin CS. Studies of a next-generation silicon-photomultiplier–based time-of-flight PET/CT system. J Nucl Med. 2017;58:1511–8. Surti S, Kuhn A, Werner ME, Perkins AE, Kolthammer J, Karp JS. Performance of Philips Gemini TF PET/CT scanner with special consideration for its time-of-flight imaging capabilities. J Nucl Med. 2007;48:471–80. Kolthammer JA, Su K-H, Grover A, Narayanan M, Jordan DW, Muzic RF. Performance evaluation of the ingenuity TF PET/CT scanner with a focus on high count-rate conditions. Phys Med Biol. 2014;59:3843–59. NEMA. NEMA NU 2-2018: performance measurements of positron emission tomographs; 2018. Dahlbom M, Schiepers C, Czernin J. Comparison of noise equivalent count rates and image noise. IEEE Trans Nucl Sci. 2005;52:1386–90. Casey ME, Hoffman EJ. Quantitation in positron emission computed tomography: 7. 
A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration. J Comput Assist Tomogr. 1986;10:845–50. Brasse D, Kinahan PE, Lartizien C, Comtat C, Casey M, Michel C. Correction methods for random coincidences in fully 3D whole-body PET: impact on data and image quality. J Nucl Med. 2005;46:859–67. Wang G-C, Li X, Niu X, et al. PET timing performance measurement method using NEMA NEC phantom. IEEE Trans Nucl Sci. 2016;63:1335–42. Lois C, Jakoby BW, Long MJ, et al. An assessment of the impact of incorporating time-of-flight information into clinical PET/CT imaging. J Nucl Med. 2010;51:237–45. Karp JS, Surti S, Daube-Witherspoon ME, Muehllehner G. Benefit of time-of-flight in PET: experimental and clinical results. J Nucl Med. 2008;49:462–70. Boellaard R, O'Doherty MJ, Weber WA, et al. FDG PET and PET/CT: EANM procedure guidelines for tumour PET imaging: version 1.0. Eur J Nucl Med Mol Imaging. 2010;37:181–200. Daube-Witherspoon ME, Scheuermann J, Viswanath V, Surti S, Matej S, Karp JS. Quantitative accuracy of time-of-flight PET at high count rates. Strasbourg: IEEE; 2016. p. 1–4. Clementel E, Vandenberghe S, Karp JS, Surti S. Comparison of image signal-to-noise ratio and noise equivalent counts in time-of-flight PET. Knoxville: IEEE; 2010. p. 3622–5. Daube-Witherspoon ME, Surti S, Matej S, Werner M, Jayanthi S, Karp JS. Influence of time-of-flight kernel accuracy in TOF-PET reconstruction. San Diego: IEEE; 2006. p. 1723–7. Efthimiou N, Thielemans K, Emond E, Cawthorne C, Archibald SJ, Tsoumpas C. Use of non-Gaussian time-of-flight kernels for image reconstruction of Monte Carlo simulated data of ultra-fast PET scanners. EJNMMI Phys. 2020;7:42. Salvadori J, Labour J, Odille F, et al. Monte Carlo simulation of digital photon counting PET. EJNMMI Phys. 2020;7:23. Jakoby BW, Bercier Y, Conti M, et al. Performance investigation of a time-of-flight PET/CT scanner. Dresden: IEEE; 2008. p. 3738–43. 
Rausch I, Cal-González J, Dapra D, et al. Performance evaluation of the biograph mCT flow PET/CT system according to the NEMA NU2-2012 standard. EJNMMI Phys. 2015;2:26. Michopoulou S, O'Shaughnessy E, Thomson K, Guy MJ. Discovery molecular imaging digital ready PET/CT performance evaluation according to the NEMA NU2-2012 standard. Nucl Med Commun. 2019;40:270–7. Caribé PRRV, Koole M, D'Asseler Y, Van Den Broeck B, Vandenberghe S. Noise reduction using a Bayesian penalized-likelihood reconstruction algorithm on a time-of-flight PET-CT scanner. EJNMMI Phys. 2019;6:22. The authors declare that they have no conflict of interest. Department of Nuclear Medicine and Nancyclotep Molecular Imaging Platform, CHRU-Nancy, Université de Lorraine, F54000, Nancy, France: Julien Salvadori, Freddy Odille, Gilles Karcher, Pierre-Yves Marie & Laetitia Imbert. Université de Lorraine, INSERM, UMR 1254, F54000, Nancy, France: Julien Salvadori, Freddy Odille, Gilles Karcher & Laetitia Imbert. All authors have participated in either (1) conception and design or analysis and interpretation of data, or both (JS, FO, GK, LI); (2) drafting of the manuscript or revising it critically for important intellectual content (JS, FO, PYM, LI); and/or (3) final approval of the manuscript submitted (FO, PYM, LI). Correspondence to Julien Salvadori. All procedures performed in the studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed written consent was obtained from all individual participants included in the study. 
Single count rates versus contrast recovery coefficients (CRC) of the 10- to 37-mm diameter hot spheres of the IEC phantom for the noTOF (left panels) and TOF (middle panels) images from the Vereos (upper panels) and Ingenuity (lower panels) cameras, together with the corresponding TOF-related gain, i.e., the ratio between the CRC of TOF and noTOF images (right panels). The vertical lines are defined in the legend of Figure 4. Single count rates versus background relative noise of the IEC phantom for the noTOF (left panels) and TOF (middle panels) images from the Vereos (blue) and Ingenuity (red) cameras, together with the corresponding TOF-related gain, i.e., the ratio between the background relative noise of TOF and noTOF images (right panels). The vertical lines are defined in the legend of Figure 4. Additional file 3: Table S3: Characteristics of currently commercialized PMT- and SiPM-based PET cameras with a special focus on the size of their trigger domains. Data derived from simulated images of the IEC phantom, obtained with the GATE/Geant4 platform, reconstructed with the CASToR software (OSEM, 2 iterations, 10 subsets) and confirming that an incorrect TOF kernel leads to a decrease in the contrast and SNR of TOF images. More precisely, these data show the comparative evolutions of the contrast recovery coefficients (upper panels) and SNR (middle panels) of hot spheres for increasing values of TOF resolution, between the TOF images reconstructed with TOF kernels matching exactly the increasing values of TOF resolution (left panels) and those reconstructed with misfit TOF kernels kept unchanged at 300 ps (right panels). Simulated PET slices, obtained for various TOF resolutions, are shown in the bottom panels. 
Additional file 5: Appendix 1. IEC phantom assessment of our methodology of count rate estimation in patients. Salvadori, J., Odille, F., Karcher, G. et al. Fully digital PET is unaffected by any deterioration in TOF resolution and TOF image quality in the wide range of routine PET count rates. EJNMMI Phys 8, 1 (2021). https://doi.org/10.1186/s40658-020-00344-5 Accepted: 30 November 2020. Keywords: Digital PET; Count rate; Time-of-flight
Turkish Journal of Mathematics, Vol. 41 (2017), No. 6. Unions and ideals of locally strongly porous sets. MAYA ALTINOK, OLEKSIY DOVGOSHEY, MEHMET KÜÇÜKASLAN. DOI: 10.3906/mat-1604-44. For subsets of $\mathbb R^+ = [0,\infty)$ we introduce a notion of coherently porous sets as the sets for which the upper limit in the definition of porosity at a point is attained along the same sequence. We prove that the union of two strongly porous at $0$ sets is strongly porous if and only if these sets are coherently porous. This result leads to a characteristic property of the intersection of all maximal ideals contained in the family of strongly porous at $0$ subsets of $\mathbb R^+$. It is also shown that the union of a set $A \subseteq \mathbb R^+$ with an arbitrary strongly porous at $0$ set is porous at $0$ if and only if $A$ is lower porous at $0$. ALTINOK, MAYA; DOVGOSHEY, OLEKSIY; and KÜÇÜKASLAN, MEHMET (2017) "Unions and ideals of locally strongly porous sets," Turkish Journal of Mathematics: Vol. 41: No. 6, Article 13. https://doi.org/10.3906/mat-1604-44 Available at: https://journals.tubitak.gov.tr/math/vol41/iss6/13
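For reference, the notions in the abstract above build on the standard (right) porosity at $0$; in the usual notation of the porosity literature (which may differ slightly from the paper's own), for $E \subseteq \mathbb R^+$: $$ p(E,0)=\limsup_{h\to 0^+} \frac{\lambda(E,h)}{h}, $$ where $\lambda(E,h)$ denotes the length of the largest open subinterval of $(0,h)$ containing no point of $E$. The set $E$ is porous at $0$ if $p(E,0)>0$, strongly porous at $0$ if $p(E,0)=1$, and lower porous at $0$ when the corresponding lower limit (with $\liminf$ in place of $\limsup$) is positive.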
The labor market reintegration of returned refugees in Afghanistan. Craig Loschmann & Katrin Marchand. Small Business Economics (2020). Even though Afghanistan remains one of the top origin countries of refugees around the world, a considerable number of refugees have also returned over the last three decades. This paper investigates the labor market outcomes of those returned refugees from Iran and Pakistan, motivated by the fact that their reintegration greatly depends on the ability to access sustainable income-generating activities as a basis of their livelihood. The analysis relies on cross-sectional data from an original household survey collected in five provinces of Afghanistan in 2011. The analytical approach is twofold: first, to compare returned refugees to non-migrants in regard to what influences their respective labor market outcomes; and second, to investigate the influence of the returnees' migration and return experience on those outcomes. We find evidence that returned refugees are less likely to be wage employed in comparison to non-migrants, and that factors related to socioeconomic status, including educational attainment and the strength of social networks, play an influential role in labor market outcomes. When it comes to the migration and return experience of returnees, a few key factors are found to be of particular consequence for current employment status, including employment prior to migration, time abroad, the amount of savings brought back upon return, return assistance, and intentions to re-migrate. These findings help to shed light on the reintegration process of returned refugees in Afghanistan, an issue of growing concern for policymakers in view of the recent increase in return flows. The topic of migration continues to receive considerable attention as of late, both within high-level policy circles and across popular media. 
This heightened interest is in large part due to the impression that we are living in times of unprecedented forced displacement, driven by the fact that the absolute number of people in exile both within and outside their countries of origin remains at a modern-day high (UNHCR 2018a). It is important to keep perspective, however, and consider that the relative number of refugees compared to the world's population remains small and mostly stable (de Haas 2016). Still, at the more local level, certain countries, predominantly in the "Global South", are indeed facing significant pressure to cope with refugee populations. While it is difficult to estimate just how many of today's refugees will be integrated into their host societies, an important consideration over the medium and long term is their potential return to their countries of origin. Just as the influx of refugees from elsewhere may have important development-related consequences for a local community, so too can the sudden arrival of returnees who may have spent years, if not lifetimes, abroad. Only recently has return migration begun to gain interest among academic scholars and policymakers as evidence mounts that the knowledge, skills, and savings acquired abroad and subsequently transferred upon return have the potential to contribute to positive development outcomes. For this potential to be realized, however, the manner in which returnees reintegrate into their communities, including into the labor market, is fundamental. In this regard, certain case studies on record have found that return migrants are more likely than non-migrants to be self-employed rather than employed as wage labor (Piracha and Vadean 2010; Wahba and Zenou 2012). Yet such an observation is ultimately ambiguous without a qualified understanding of the greater context under study, including the underlying causes of migration in the first place.
The majority of studies looking at labor market outcomes of returnees focus mainly on countries characterized by voluntary labor migration. Very few offer insights into the livelihood activities of returned refugees in (post-)conflict environments. With this in mind, this paper investigates the labor market outcomes of returned refugees in Afghanistan. Even though Afghans today still make up one of the largest refugee populations outside their country, Afghanistan has also experienced significant return migration at various intervals over the last three decades. Figure 1 illustrates, for example, how the Taliban's ouster in 2001 resulted in the sudden return of 2 million refugees and another 3.6 million in the immediate years following. While return flows tapered off around 2006, the yearly figure of officially returned refugees in 2016 was back up to levels not seen since that earlier period. In fact, the estimated 385,000 individuals repatriated throughout 2016 represent a more than fivefold increase relative to the year prior, and IOM (2017) believes there may have been an additional 690,000 undocumented returnees.
Fig. 1 Refugees and returned refugees in Afghanistan. Source: UNHCR 2018b. The number of "Refugees" indicates the stock of the population from Afghanistan, while the number of "Returned refugees" indicates flows within the calendar year (i.e., January–December).
This study is motivated by the fact that the reintegration of returned refugees in a (post-)conflict setting like Afghanistan greatly depends on the ability to access sustainable income-generating activities as a basis of their livelihood. The analysis relies on cross-sectional data from an original household survey collected in five provinces of Afghanistan in 2011.
The analytical approach is twofold: first, to compare returned refugees to non-migrants in regard to what influences their respective labor market outcomes; and second, to investigate the influence of the returnees' migration and return experience on those outcomes. Because we are interested in the labor market reintegration of returned refugees, we only take into consideration those returnees who originally migrated because of political or security concerns or because of an environmental disaster. And while recent reports highlight the increasingly involuntary nature of return for many Afghan refugees and asylum-seekers (see, e.g., Human Rights Watch 2017; Bjelica and Ruttig 2017), our sample is made up of returnees from Iran and Pakistan who chose to return because of perceived improvements to the political and security situation in the country or due to a variety of personal reasons (e.g., missed country, culture, or family). None returned because of work-related opportunities, helping to isolate our estimates from selection bias. The sample ultimately covers 1841 individuals, of which 461 are returned refugees. The results indicate that returned refugees in Afghanistan are less likely to be wage employed in comparison to non-migrants. Differences in labor market outcomes arise from dissimilarities in socioeconomic status, including educational attainment and the strength of social networks. As for the influence of the migration and return experience on employment status, a few key factors are found to be of consequence. First and somewhat expected, being employed prior to migrating helps raise the likelihood of being wage employed upon return. Less expected, however, given the context of forced migration, the more years spent abroad, the greater the odds of being wage employed, indicating skill acquisition while abroad.
Moreover, the amount of savings brought back upon return is positively associated with becoming self-employed in agriculture or herding (i.e., subsistence farming), while the opposite is true if the individual received assistance upon return or has intentions to re-migrate. From a scholarly perspective, this study contributes to the academic discussion in a variety of ways. For one, the empirical evidence on refugee return and reintegration into the labor market is relatively limited. Even though descriptive accounts of certain contexts provide insight (see, e.g., Mesic and Bagic 2011; ILO 2013), none to the best of our knowledge take a quantitative methodological approach. One clear reason for this is the fact that large-scale data sets covering conflict-affected environments such as Afghanistan are generally rare. That we are able to rely on relatively uncommon primary data in this context provides us with a unique opportunity to investigate the labor market reintegration of returned refugees. Furthermore, by investigating labor market outcomes, including self-employment in business, of both returned refugees and non-migrants, the study contributes more generally to the literature on labor markets in (post-)conflict settings. Such analysis is important considering the linkages that have been drawn in the literature between employment creation, economic growth, and stabilization after conflict (see, e.g., Collier 2009; Cramer 2015). The remainder of this paper is structured as follows. The next section provides a review of the relevant literature concerning return migration and the dynamics related to labor market outcomes upon return. This is followed by a more detailed account of the methodology including the empirical approach and sample. We finally present the results and conclude with a brief summary and policy discussion concerning ways to support returned refugees in Afghanistan in their labor market reintegration. 
In a (post-)conflict setting still fraught with lingering uncertainty about the future, the sustainability of return and reintegration is often a challenging process (Bascom 2005). Reintegration takes time and for some returnees is never achieved, often resulting in re-migration (Kuschminder 2013). Many factors contribute to a successful return and reintegration, including a welcoming community, security, access to basic infrastructure and services, and the chance to make a decent living. A robust local labor market providing job opportunities and livelihood possibilities therefore greatly influences whether or not a returnee chooses to settle permanently again at origin (Black and Gent 2006). At the same time, conflicts have significant impacts on labor markets and change the types of employment opportunities available (Stewart 2015). A common feature of conflict is an observed reallocation of employment, largely depending on the development of said conflict. Where infrastructure such as power plants or fuel facilities is destroyed, for example, major providers of employment disappear. Equally, trade and tourism tend to be affected by conflict and impact employment opportunities in related sectors (Cramer 2015). More generally, labor markets in developing countries often leave individuals to decide between engagement in self-employment activities, agriculture, household work, or migration due to a scarcity of wage-employment opportunities, particularly in rural regions (Nagler 2015). The role of small businesses and self-employment, especially in the informal sector, therefore has received specific attention within these discussions highlighting the importance of such activities in the context of developing countries in terms of employment and income generation (Zenou 2008). 
While self-employment in such contexts may intrinsically be subsistence-based, it is helpful to consider such an activity in relation to entrepreneurship, which more often than not is associated with positive changes such as job and wealth creation, innovation, and related welfare effects (Ács 2006; Desai 2011; Naudé 2010b). Desai (2011), for example, argues that entrepreneurship creates bottom-up activities addressing immediate and short-term problems. Naudé (2010a), on the other hand, believes that entrepreneurs drive the structural transformation of an economy away from agriculture and toward manufacturing and services. Beyond these macro-level effects, small businesses may also simply be a viable survival strategy when institutional support mechanisms are lacking (Ciarli et al. 2010). In this respect, it is necessary to make the distinction between opportunity and necessity entrepreneurship. Whereas opportunity entrepreneurs are thought to seize unique opportunities in the market, necessity entrepreneurs engage in entrepreneurial activities because it is the best or only option available (Reynolds et al. 2005). According to Margolis (2014), roughly two thirds of self-employment in developing countries is due to a lack of other alternatives for income-generation. Even though entrepreneurship based on opportunity may be preferred, the activities of necessity entrepreneurs are still important to consider in a context like that of Afghanistan, as such enterprises provide at least one livelihood and have the potential to contribute to local development (Ciarli et al. 2010). When it comes to finding a suitable activity in the labor market, three primary types of capital are essential: human, financial, and social. Human capital describes natural characteristics like intelligence and health but also skills and abilities acquired mainly through education and work experience (Bosma et al. 2004). 
Financial capital principally consists of personal savings as well as private and public loans either from friends and family, a financial institution, or the government. And social capital embodies an individual's relationships to others and the network on which one can rely (Westlund and Bolton 2003). With all three, return migrants are often believed to have a distinct advantage in comparison to their non-migrant counterparts (Black et al. 2003). Beyond the potentially innate differences regarding risk aversion and the like, returnees have often sent home or brought back substantial savings accumulated while abroad to be consumed and/or invested once back (OECD 2008). Moreover, returnees might arrive with additionally acquired education or skills useful to local markets (Cassarino 2004). Lastly, in many cases, spending time abroad exposes one to a diverse set of social networks, potentially providing a returnee with a greater number of links and therefore opportunities beyond the community once back. On the other hand, migrating in the first place may lead to a loss of contact with local networks, which may put returnees at a disadvantage with respect to local opportunities (Klagge et al. 2007). With this conceptual framework at hand, a number of empirical studies focusing on voluntary migration have made an effort to identify the labor market activities of returnees and more specifically the factors leading to self-employment and small business establishment. With regard to human capital, there is ample evidence pointing to its importance in finding employment and in small business creation by returnees. Looking at Turkish returnees from Germany, Dustmann and Kirchkamp (2002) find evidence of education as a driving factor in self-employment.
In this case, those with a higher level of educational attainment have a greater probability of opening a business compared to non-participation, likely due to expected positive returns to education increasing the likelihood of choosing such an activity. Borodak and Piracha (2011) confirm this finding when it comes to returning Moldovans, yet explain that those at a lower skill level are unable to afford being without a formal source of income, leading to a greater likelihood of wage employment. Conversely, however, Ilahi (1999) and McCormick and Wahba (2001) show that returnees with higher levels of education are more likely to be wage employed rather than self-employed in the case of Pakistan and Egypt, respectively. Still, additional evidence in the latter case suggests that the length of employment while abroad also positively influences the odds of becoming self-employed upon return, an outcome corroborated elsewhere (McCormick and Wahba 2001; Black and Castaldo 2009; Wahba and Zenou 2012). It therefore appears, as Tani and Mahuteau (2008) show in their study of returnees to North Africa, that the practical experiences and skills gained abroad play a crucial role in determining self-employment, while formal education is more likely to lead to wage employment, even if it also decreases the chance of unemployment. The most common finding concerning self-employment relates to financial capital and more specifically the role of savings accumulated abroad in the launch of a small business upon return. For instance, both Arif and Irfan (1997) and Piracha and Vadean (2010) find strong indication that return migrants are more likely to be self-employed in business in comparison to non-migrants precisely because they had the opportunity to gather start-up capital abroad.
Focusing exclusively on return migrants, Ilahi (1999), Dustmann and Kirchkamp (2002), and Mesnard (2004) arrive at a similar conclusion, showing return migrants are prone to invest savings from abroad in business ventures back home, suggesting temporary migration may at times be employed as a strategy to overcome credit constraints faced in the country of origin. While this strategy is less applicable in the context of forced migration, it may still be the case that migrants are able to accumulate savings abroad that they can indeed utilize upon return to the home country. Finally, when it comes to social capital, personal networks play a significant role in the reintegration of return migrants in the home country (Omata 2012). The role networks play in the labor market reintegration of returnees is, on the other hand, empirically unclear. Black and Castaldo (2009), for instance, find that the strength of personal linkages, measured by membership in an association in the host country and visits home, does have a positive effect on business start-ups of return migrants in both Ghana and Côte d'Ivoire. Conversely, Piracha and Vadean (2010) show in the case of Albania no evidence of social capital, proxied by the number of friends one has, having any impact on the occupational choice of return migrants, despite there being a significant effect for non-migrants. Going one step further, Wahba and Zenou (2012) model the potential trade-off between the financial and human capital accumulated while abroad against the social capital lost due to moving in the first place. In the context of Egypt, they provide evidence that gains in both financial and human capital play a significant role in the choice of self-employment upon return, whereas a loss in social capital has no impact on returnees' propensity to become entrepreneurs, even if it does for non-migrants.
In all, the role of social capital largely depends on the specific local context as well as the type of employment activity. Return migrants may have comparative advantages in sectors where foreign networks are specifically beneficial, while non-migrants may benefit from having stronger local networks where those are most important. Although at times differing, overall the existing studies indicate that the migration experience greatly influences labor market outcomes of return migrants once back in the country of origin. Still, these experiences are not uniform, as some individuals are inherently presented with greater opportunities abroad and therefore greater job prospects upon return (Arif and Irfan 1997; Gubert and Nordman 2011; Kilic et al. 2009). In a study of returnees in seven capital cities in Western Africa, for example, de Vreyer et al. (2010) show that there are significant differences in the uptake of an entrepreneurial activity upon return depending on the country of migration. In particular, they find those who returned from OECD countries in comparison to non-OECD countries are more likely to be entrepreneurs due to the better chances to accumulate financial and human capital at those destinations. Additionally, differences in the environment to which the migrant returns also play an important role. As such, it is important to better understand the labor market activities of returned refugees in particular (post-)conflict settings, in order to promote conditions that facilitate sustainable return and reintegration processes in such contexts.
Empirical approach
As indicated above, our objective is twofold: first, to compare returned refugees to non-migrants in regard to what influences their respective labor market outcomes; and second, to investigate the influence of the returnees' migration and return experience on those outcomes.
In both cases, we employ a multinomial logit model to estimate the propensity that an individual is engaged in one of three labor market activities compared to the base alternative of not working.Footnote 1 The three activities are self-employment in business, agriculture (which incorporates subsistence farming and/or animal herding), and wage employment. The model can be expressed as:
$$ \Pr\left(y_i=j\right)=\frac{e^{\beta_j x_i}}{\sum_{k=1}^{K} e^{\beta_k x_i}} $$
where $y_i$ represents activity $j$ of individual $i$. On the right-hand side of the equation, the $x_i$ vector incorporates a range of individual, household, and community characteristics, as well as migration- and return-related characteristics when looking exclusively at returnees, and $\beta_j$ represents the vector of activity-specific coefficients. Prior to estimating the model, it is important to consider the possibility of self-selection. As has been established in the literature, there is reason to believe that both migrants and returnees may be intrinsically different from non-migrants based on unobservable characteristics that are correlated with employment status. Most of the evidence in this regard pertains to labor migration and the prospect that migrants are inherently more intrepid and thus less risk averse than the non-migrant population and that return migrants may have picked up informal skills and expertise during their time abroad (Dustmann and Kirchkamp 2002; OECD 2008, 2010; Borodak and Piracha 2011). Similarly, migrants may return only when they believe that the prospects for employment have improved to their advantage (Novak 2007; Hautaniemi et al. 2013).
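To make the estimation concrete, the choice probabilities of the multinomial logit model above can be sketched numerically. The coefficient values below are purely hypothetical, not the paper's estimates; the base alternative ("not working") is normalized to a zero coefficient row, as in any multinomial logit:

```python
import numpy as np

def mnl_probabilities(X, betas):
    """Multinomial logit choice probabilities.
    X: (n, k) covariates; betas: (J, k) coefficients, one row per
    activity, with the base alternative fixed at zero."""
    scores = X @ betas.T                       # beta_j * x_i for each activity j
    exps = np.exp(scores)
    return exps / exps.sum(axis=1, keepdims=True)

# hypothetical coefficients: [intercept, returned-refugee dummy]
betas = np.array([
    [0.0, 0.0],    # not working (base category, normalized to zero)
    [0.3, -0.5],   # self-employment in business
    [0.1, 0.4],    # agriculture (farming/herding)
    [0.6, -0.9],   # wage employment
])
X = np.array([[1.0, 1.0]])                     # one returned refugee
p = mnl_probabilities(X, betas)                # each row sums to one

# relative risk ratio = exponentiated coefficient of the returnee dummy
rrr_wage = np.exp(betas[3, 1])
```

Exponentiating a coefficient yields the relative risk ratio reported in the results tables; here exp(−0.9) ≈ 0.41 would be read as a lower relative risk of wage employment for returnees, analogous in form to the 0.42 the paper reports.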
As discussed above, our sample is limited to only those returnees who originally migrated because of political or security concerns or because of an environmental disaster and who stated their return was motivated by improvements to the political and security situation of the country or a variety of personal reasons (e.g., missed country, culture, or family).Footnote 2 We believe that by excluding voluntary migrants, and the few returnees motivated by employment opportunities, our estimates are less afflicted by selection bias than would otherwise be the case. Nonetheless, even in a context of systematic insecurity, there may be inherent differences between those able to migrate, as well as those deciding to return. The estimates, therefore, may still potentially suffer from positive self-selection and should be interpreted with caution. However, under such conditions, one can assume such bias would lead to inflated estimates, which can as a result be considered upper bounds.
The data used for the analysis come from an original household survey implemented across Afghanistan in 2011 for the IS Academy "Migration & Development: A World in Motion" project.Footnote 3 Although not nationally representative due to difficulties surveying in high-risk locations, the sampling incorporated households of differing fundamental characteristics in order to increase overall representativeness. More specifically, the five provinces of Kabul, Herat, Balkh, Nangarhar, and Kandahar were selected because of their highly populated urban centers, geographical dispersion, and varied profiles of migration. Within each province, a stratification of districts was applied based on whether they were considered urban, semirural, or rural.Footnote 4 This stratification allowed for greater representation of different socioeconomic groups, and districts were chosen based on their representativeness of the province at large.
The primary sampling units were then selected at random taking into consideration a detailed list of specific sites for enumeration provided by the Afghan Central Statistics Office. In all, ten communities within an urban area and five from each of the semirural and rural areas were selected for enumeration. Within the communities, the absence of any official household listing made it necessary for the team leader to discuss the rough makeup of the community with a local leader or elder prior to enumeration. This led to a general distributional profile of the community based on current migrant, return migrant, and non-migrant households, which was then respected throughout enumeration in order to be as representative as possible. Finally, the selection of households followed a random starting point and fixed interval sampling strategy in order to meet the pre-specified quota in each community. Ultimately, the survey covered a total of 14,777 individuals within 2005 households across 100 distinct communities. After excluding individuals outside the working ages of 15–65, those inactive in the labor market, females, returnees who migrated voluntarily, and those who returned before 1992, we are left with a sample of 1841 respondents, of which 461 are returned refugees.Footnote 5 Table 1 provides the summary statistics of the sample, differentiated by migration status. We report a mean difference test in the final column, which only applies to those variables applicable to both non-migrants and returnees. When comparing non-migrants to returned refugees based on the labor market outcome variable of interest, we find little difference between the two groups. Returnees, on average, are about six percentage points more likely to be self-employed in business, whereas non-migrants are around five percentage points more likely to be wage employed, with the mean differences significant at the 10% level.
There is no statistical mean difference between not working and being engaged in agricultural activity.
Table 1 Summary statistics, comparing non-migrants to returned refugees
As for fundamental demographic characteristics, there are considerable differences in terms of household position and age, as nearly all returned refugees are the household head in comparison to around half of non-migrants, and the average difference in age between the two groups is 8 years. Likewise, returnees are more likely to be married in comparison to non-migrants, as well as have more children. Regarding educational attainment, a proxy for human capital, there is only a marginal statistical difference between the two groups, with around 15% of returnees having a secondary or higher level of education compared to 11% of non-migrants. In terms of socioeconomic status, there is no discernable difference between groups based on land ownership. Still, returned refugees are on average 12 percentage points more likely to have social capital in the form of a local social network, indicated by involvement in a community organization other than a religious group. In looking at some of the migration-related characteristics for returned refugees only, a quarter of returnees were employed prior to migrating and just over two-thirds migrated to Pakistan, while the rest went to Iran.Footnote 6 The average time abroad is around 12 years, and only 6% sent remittances during that period. In terms of the return experience, around half repatriated between the fall of the Najibullah regime in 1992 and the ouster of the Taliban regime in 2001, corresponding to the average of 10 years since return. Nearly three-fourths of returnees cited improvements in the political and/or security situation as the main reason for return, while the rest reported personal reasons (i.e., wanting to be closer to family and friends).
Looking at the financial capital of returned refugees, the average amount of savings brought back upon return is 246 USD, and 28% received support upon return in the form of financial assistance by either an international organization or the government. Lastly, only 19% of returnees have concrete intentions to re-migrate in the future.
In presenting our empirical results, we begin with a simple examination of whether being identified as a returned refugee makes an individual more likely to be involved in one of the three labor market activities in comparison to not working. In all models hereafter, we report the relative risk ratios along with robust standard errors in parentheses. And aside from the socio-demographic covariates presented in the tables, all models control for the ethnicity (i.e., Pashtun, Tajik, or otherFootnote 7) of the returnee as well as the district type (i.e., urban, semirural, or rural) and province of return. Table 2 shows that when controlling for basic socio-demographic characteristics, a returned refugee is on average less likely to be involved in agricultural activity as well as wage employment, holding all else constant. More specifically, for returned refugees relative to non-migrants, the relative risk of being wage employed is lower by a factor of 0.42. While the same relationship holds for self-employment in business, the result is only significant at the 10% level. Taking into consideration the potential for positive self-selection as previously discussed, these estimates can be considered upper bounds, meaning the negative effect may be even greater than is found here.
Table 2 Labor market activity
Expecting differences between non-migrants and returned refugees, we conduct a Chow test of the null hypothesis of equal coefficients across the two groups. The results of the test show a statistically significant chi-square value for both self-employment in business and wage employment.
This indicates that the estimated coefficients between groups are statistically different and that individual covariates in our model influence non-migrants and returnees differently for both labor market categories. The estimated coefficients for agriculture, on the other hand, are not statistically different between the two groups, suggesting return migration may not be influential for this activity. Table 3 compares non-migrants and returned refugees in regard to what influences their respective labor market activity. First, we notice statistically significant similarities in terms of basic demographic characteristics. For instance, being the head of the household and married makes an individual more likely to be employed in nearly all three categories compared to not working, for both non-migrants and returnees. Alternatively, the older an individual, the slightly less prone they are to be self-employed in business or wage employed, regardless of migration status. Only in the case of returnees are these characteristics not relevant for being involved in agriculture.
Table 3 Labor market activity, comparing non-migrants to returned refugees
As for educational attainment, the results paint a mixed picture. Non-migrants with a higher level of educational attainment (i.e., at least secondary schooling) are less likely to be engaged in agricultural work and more likely to be involved in wage labor. For returned refugees, however, statistical significance drops out for wage employment. This suggests that non-migrants with low levels of education have few options other than subsistence agricultural labor, whereas relatively higher levels of education open up opportunities for wage labor. Conversely, the prospect of wage employment for returned refugees has less to do with their level of education.
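The Chow-type test used here boils down to a likelihood-ratio comparison: fit the model pooled, fit it separately for non-migrants and returnees, and ask whether the separate fits improve the log-likelihood by more than chance. A minimal sketch with hypothetical log-likelihood values (not the paper's statistics), using the closed-form chi-square tail available when the degrees of freedom are even:

```python
import math

def chow_lr_stat(llf_pooled, llf_groups):
    """Likelihood-ratio statistic comparing one pooled model against
    separately estimated group models (the pooled model is nested)."""
    return 2.0 * (sum(llf_groups) - llf_pooled)

def chi2_sf_even_df(x, df):
    """Chi-square survival function for even df, via the Poisson-tail
    identity: sf(x) = exp(-x/2) * sum_{i < df/2} (x/2)^i / i!."""
    if df <= 0 or df % 2 != 0:
        raise ValueError("closed form requires a positive even df")
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(df // 2))

# hypothetical log-likelihoods: pooled fit vs. non-migrant and returnee fits
lr = chow_lr_stat(llf_pooled=-1520.4, llf_groups=(-910.2, -601.1))
# df = number of coefficients freed by estimating the groups separately
p_value = chi2_sf_even_df(lr, df=12)
```

A small p-value would reject equal coefficients across non-migrants and returnees, which is the pattern the paper reports for self-employment in business and wage employment.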
With respect to household socioeconomic characteristics, as to be expected, both non-migrants and returned refugees within households owning land have a higher likelihood of being engaged in an agricultural activity relative to not working. More interestingly, the strength of social networks, proxied for by involvement in a community organization, appears to be similarly relevant for both non-migrants and returned refugees across all labor market outcomes. Table 4 reports the differences across labor market activities based on the migration and return experience of returned refugees only. Nearly all of those same individual and household characteristics influential in the previous model are once again statistically significant, so as a matter of parsimony, only the migration- and return-related characteristics of interest are presented here. First, and somewhat expectedly, we find that those individuals who were employed prior to migrating have a higher likelihood of being wage employed in comparison to not working upon return. Less expected, however, given the context of forced migration, the more years spent abroad, the slightly greater the likelihood of being wage employed, indicating a degree of skill acquisition. Conversely, returnees who originally migrated to Iran compared to Pakistan are more likely to be involved in farming or herding upon return. The same is true regarding the number of years since return and the amount of savings brought back, although all are only marginally statistically significant at the 10% level. Lastly, individuals having received assistance upon return and those with concrete intentions to re-migrate are less likely to be occupied with agriculture. We believe this indicates that labor-intensive activities such as farming or herding animals may necessitate high upfront investment in productive assets like land and livestock, investment not covered by the support received and which makes future movement less desirable.
Table 4 Labor market activity for returnees, with migration and return experience
The reintegration into the local labor market is a key element of the sustainable return of refugees in (post-)conflict settings. Yet the income-generating activities of such populations upon return, and particularly the role of self-employment, are not well understood. Literature on the return of labor migrants has shown that returnees have a higher likelihood of being self-employed, in contrast to wage employment, than their non-migrant counterparts. Similar studies looking at the return of forced migrants, on the other hand, are lacking. Utilizing a unique data set, this paper therefore investigates the labor market outcomes of returned refugees in Afghanistan, a country that has been characterized by conflict and general insecurity for decades. The results of the analysis show that returned refugees are less likely to be wage employed in comparison to non-migrants, and differences in labor market outcomes seem to arise primarily from dissimilarities in socioeconomic status. For example, non-migrants with higher levels of schooling are more likely to be in waged labor, whereas labor market activities have less to do with educational attainment for returnees. As such, we can deduce that those individuals of a higher socioeconomic status are generally able to take advantage of the limited employment opportunities available, yet having left the country and since returned limits any such ability. On the other hand, having social capital within the local community, proxied for by community involvement, helps both non-migrants' and returnees' chances of being engaged in all labor market activities similarly. As for the influence of the migration and return experience on labor market outcomes, a few key factors are found to be of consequence. First, and somewhat expectedly, being employed prior to migrating helps raise the likelihood of being wage employed upon return.
Less expected, however, given the context of forced migration, is that the more years spent abroad, the greater the odds of being wage employed, pointing to skill acquisition. Moreover, and likely corresponding to the prior point about socioeconomic status, those who received financial assistance to return from either an international organization or a government program are less likely to be involved in agriculture as well as less likely to be wage employed. On the other hand, the amount of savings brought back upon return is beneficial when it comes to agriculture or herding, highlighting the importance of financial capital for engaging in such activities. Finally, individuals with concrete intentions to re-migrate are less likely to be occupied with agriculture or herding, indicating that labor-intensive activities such as farming necessitate greater investment in land and assets including livestock, making future movement less desirable. Taking a step back from our findings, it is important to consider the evolving context of migration from and return to Afghanistan since the data were collected in 2011. As Fig. 1 shows, return flows increased once again in 2016, in great part due to a changing policy environment toward Afghans in Pakistan, as well as a rise in forced returns from Europe. Therefore, even though the data used in the analysis may be relatively dated, the fundamental issues addressed are arguably just as relevant today as they were a few years ago. In terms of policy, the findings imply a number of opportunities to assist small business creation by returnees in Afghanistan with the goal of supporting sustainable return and reintegration. When considering potential interventions, however, it is necessary to emphasize proper targeting and a logical focus on areas of high return.
Programs already providing basic support to returnees (e.g., shelter assistance) currently operate in several provinces known for high rates of return, including Nangarhar, Kabul, and Laghman in the east, Kandahar in the south, and Herat in the west (MGSoG and Samuel Hall 2013). Beyond targeting, though, it is also important that assistance be meaningful to the localized context of the recipient. Unsurprisingly, individuals in rural areas are more likely to become self-employed in agriculture than in business. As such, in-kind assistance like tools, seeds, or livestock is likely to enable and support these agricultural activities, whereas assistance like business training may be more appropriate in an urban context. Given the role of social networks highlighted in our study, assistance focused on helping returnees build strategic linkages in their communities may be particularly beneficial. The capacity of return migrants, for instance, could be improved by bringing them in touch with other actors like business associations or a network of experts. Indeed, a now-outdated program run by the Dutch IntEnt Foundation providing support to Afghan return migrants from the Netherlands had an extensive network at origin willing to help newcomers by sharing knowledge, contacts, and in some cases even investments (de Haas 2006). Moreover, a similar and currently ongoing program by the German Development Cooperation has proven beneficial to return migrants wanting to open a business in several developing countries and emerging market economies, for example, Morocco, Cameroon, Ghana, Senegal, and Nigeria in Africa, or Ecuador, Colombia, and Peru in Latin America (CIM 2018). Additionally, our finding concerning the importance of savings suggests a possible credit constraint at home which earnings from abroad help to ease.
With this in mind, small grants and/or loans for the purpose of investing in a business venture may be a viable strategy if provided to a suitable recipient with practical ideas and the capacity to carry them out. Careful selection is therefore important to increase the likelihood of effective implementation, but certain conditions could be put in place to help improve the odds of success, including mandatory attendance at training sessions or membership in a business group. Above all, reintegration into the labor market is an important step in the process of sustainable return to a (post-)conflict environment like that of Afghanistan. In a context where wage employment is systematically limited, however, self-employment may simply be the best, if not the only, viable income-generating activity. Providing support to returned refugees for this specific purpose, whether for a business venture or an agricultural endeavor, has the potential not only to facilitate reintegration and improve individual welfare but also to contribute to local development. Not working refers to individuals unemployed and actively looking for work, as well as individuals unemployed and wanting a job but not actively looking. The percentage of all returnees who indicated their original migration episode was voluntary is around a quarter of the original sample, while the percentage of all returned refugees who indicated they returned for employment opportunities is less than 1%. For more information on the IS Academy project, as well as the sampling methodology in the case of Afghanistan, see: <https://www.merit.unu.edu/themes/6-migration-and-development/is-academy/>. Urban refers to those communities which are the district capital; semirural refers to those communities which share a common border with the district capital; and rural refers to those communities with no common border with the district capital.
We look at male respondents only, given that women's labor force participation in Afghanistan is systematically lower than that of men (CSO 2014). We exclude inactive individuals, for example, the retired or permanently sick/disabled. We do not consider individuals who returned prior to 1992 because of differences in the political climate prior to the fall of the Najibullah regime in that year. These individuals account for only 8% of all returnees in the original sample. Just four returnees in the original sample indicated having migrated to and returned from a country outside of Pakistan or Iran (i.e., England, UAE, Saudi Arabia, and Tajikistan). However, none of those observations are included in the final sample used for analysis following the aforementioned exclusion criteria. The original questionnaire included more ethnic groups (e.g., Uzbek, Hazara, Turkmen, and Baloch); however, the limited number of each in the sample led us to group these into one "other" category. Ács, Z. (2006). How is entrepreneurship good for economic growth? Innovations: Technology, Governance, Globalization, 1(1), 97–107. Arif, G. M., & Irfan, M. (1997). Return migration and occupational change: The case of Pakistani migrants returned from the Middle East. The Pakistan Development Review, 36(1), 1–37. Bascom, J. (2005). Last step? Reintegration of repatriates in Eritrea. Journal of Refugee Studies, 18(2), 165–180. Bjelica, J. & Ruttig, T. (2017). Voluntary and forced returns to Afghanistan in 2016/17: Trends, statistics and experiences. Afghanistan Analysts Network. Retrieved from <https://www.afghanistan-analysts.org/voluntary-and-forced-returns-to-afghanistan-in-201617-trends-statistics-and-experiences/>. Black, R., & Castaldo, A. (2009). Return migration and entrepreneurship in Ghana and Cote D'Ivoire: The role of capital transfers. Tijdschrift voor Economische en Sociale Geografie, 100(1), 44–58. Black, R., & Gent, S. (2006). Sustainable return in post-conflict contexts.
International Migration, 44(3), 15–38. Black, R., King, R., & Tiemoko, R. (2003). Migration, return and small enterprise development in Ghana: A route out of poverty? Sussex Migration working paper no. 9. Sussex Centre for Migration Research, University of Sussex. Borodak, D., & Piracha, M. (2011). Occupational choice of return migrants in Moldova. Eastern European Economics, 49(4), 24–46. Bosma, N., van Praag, M., Thurik, R., & de Wit, G. (2004). The value of human and social capital investments for the business performance of startups. Small Business Economics, 23(3), 227–236. Cassarino, J. P. (2004). Theorising return migration: The conceptual approach to return migrants revisited. International Journal of Multicultural Societies, 6(2), 253–279. Central Statistics Office (CSO). (2014). National Risk and Vulnerability Assessment 2011/2012. Afghanistan living conditions survey. Kabul: Central Statistics Office. Centrum für internationale Migration und Entwicklung (CIM). (2018). Business ideas for development. Retrieved from <https://www.cimonline.de/en/html/business-ideas.html> Ciarli, T., Parto, S., & Savona, M. (2010). Conflict and entrepreneurial activity in Afghanistan: Findings from the national risk and vulnerability assessment data. Working paper no. 2010/08. Helsinki: UNU-WIDER. Collier, P. (2009). Post-conflict recovery: How should strategies be distinctive? Journal of African Economies, 18(suppl_1), i99–i131. Cramer, C. (2015). Peace work: Labour markets, work and violence. UNDP Human Development Report Office Think Piece. New York: UNDP. de Haas, H. (2006). Engaging diasporas. How governments and development agencies can support diaspora involvement in the development of their origin countries. A study for Oxfam Novib, Oxford: International Migration Institute, University of Oxford. de Haas, H. (2016). Refugees: A small and relatively stable proportion of world migration [Blog post].
Retrieved from <http://heindehaas.blogspot.com.es/2016/08/refugees-small-and-relatively-stable.html>. de Vreyer, P., Gubert, F., & Robilliard, A. S. (2010). Are there returns to migration experience? An empirical analysis using data on return migrants and non-migrants in West Africa. Annals of Economics and Statistics, 97/98, 307–328. Desai, S. (2011). Measuring entrepreneurship in developing countries. In W. Naudé (Ed.), Entrepreneurship and economic development (pp. 94–107). Basingstoke: Palgrave Macmillan. Dustmann, C., & Kirchkamp, O. (2002). The optimal migration duration and activity choice after re-migration. Journal of Development Economics, 67(2), 351–372. Gubert, F., & Nordman, C. J. (2011). Return migration and small enterprise development in the Maghreb. In S. Plaza & D. Ratha (Eds.), Diaspora for Development in Africa (pp. 103–126). Washington D. C.: World Bank. Hautaniemi, P., Juntenen, M., & Sato, M. (2013). Return migration and vulnerability: Case studies from Somaliland and Iraqi Kurdistan. Helsinki: Interkont Books. Human Rights Watch (2017). Pakistan: Mass forced returns of afghan refugees. UN Refugee Agency Complicit in Government Coercion. Retrieved from <https://www.hrw.org/news/2017/02/13/pakistan-mass-forced-returns-afghan-refugees>. Ilahi, N. (1999). Return migration and occupational change. Review of Development Economics, 3(2), 170–186. ILO. (2013). Assessment of livelihood opportunities for the returnees/IDPs and the host communities. Geneva: ILO. IOM. (2017). Return of undocumented Afghans from Pakistan and Iran. 2016 Overview. Geneva: IOM. Kilic, T., Carletto, C., Davis, B., & Zezza, A. (2009). Investing back home: Return migration and business ownership in Albania. The Economics of Transition, 17(3), 587–623. Klagge, B., Klein-Hitpas, K., Fihel, A., Kindler, M., Matejko, E., & Okolski, M. (2007). 
High-skilled return migration and knowledge-based economic development in a regional perspective: Conceptual considerations and the example of Poland. CMR working paper 19. Centre of Migration Research, Warsaw University. Kuschminder, K. (2013). Female return migration and reintegration strategies in Ethiopia. Maastricht: MGSoG. Maastricht Graduate School of Governance (MGSoG) & Samuel Hall Consulting. (2013). Evaluation of the UNHCR shelter assistance programme: Full report. Maastricht: MGSoG. Margolis, D. N. (2014). By choice and by necessity: Entrepreneurship and self-employment in the developing world. European Journal of Development Research, 26(4), 419–436. McCormick, B., & Wahba, J. (2001). Overseas work experience, savings and entrepreneurship amongst return migrants to LDCs. Scottish Journal of Political Economy, 48(2), 164–178. Mesic, M., & Bagic, D. (2011). Minority return to Croatia: Study of an open process. Geneva: UNHCR. Mesnard, A. (2004). Temporary migration and capital market imperfections. Oxford Economic Papers, 56(2), 242–262. Nagler, P. (2015). Occupational Choice in the developing world. Maastricht: MGSoG. Naudé, W. (2010a). Entrepreneurship, developing countries, and development economics: New approaches and insights. Small Business Economics, 34(1), 1–12. Naudé, W. (2010b). Promoting entrepreneurship in developing countries. In Policy brief 4. UNU-WIDER: Helsinki. Novak, P. (2007). Place and Afghan refugees: A contribution to Turton. Journal of Refugee Studies, 20(4), 551–578. OECD. (2008). International Migration outlook. Paris: OECD. OECD. (2010). Entrepreneurship and migrants - report by the OECD working party on SMEs and entrepreneurship. Paris: OECD. Omata, N. (2012). Repatriation and integration of Liberian refugees from Ghana: The importance of personal networks in the country of origin. Journal of Refugee Studies, 26(2), 265–282. Piracha, M., & Vadean, F. (2010). Return migration and occupational choice: Evidence from Albania. 
World Development, 38(8), 1141–1155. Reynolds, P. D., Bosma, N., Autio, E., Hunt, S., De Bono, N., Servais, I., Lopez-Garcia, P., & Chin, N. (2005). Global entrepreneurship monitor: Data collection design and implementation 1998–2003. Small Business Economics, 24(3), 205–231. Stewart, F. (2015). Employment in conflict and post-conflict situations. UNDP human development report office think piece. New York: UNDP. Tani, M., & Mahuteau, S. (2008). Return migration and working choices. MIREM project analytical report 1. Robert Schuman Centre for Advanced Studies, European University Institute. UNHCR. (2018a). Global trends: Forced displacement in 2017. Geneva: UNHCR. UNHCR (2018b). UNHCR statistical online population database. Retrieved from <http://popstats.unhcr.org>. Geneva: UNHCR. Wahba, J., & Zenou, Y. (2012). Out of sight, out of mind: Migration, entrepreneurship and social capital. Regional Science and Urban Economics, 42(5), 890–903. Westlund, H., & Bolton, R. (2003). Local social capital and entrepreneurship. Small Business Economics, 21(2), 77–113. Zenou, Y. (2008). Job search and mobility in developing countries. Theory and policy implications. Journal of Development Economics, 86(2), 336–355. Maastricht Graduate School of Governance | UNU-MERIT, Maastricht University, Boschstraat 24, 6211 AX Maastricht, the Netherlands Craig Loschmann & Katrin Marchand Correspondence to Katrin Marchand. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Loschmann, C., Marchand, K. The labor market reintegration of returned refugees in Afghanistan. Small Bus Econ (2020) doi:10.1007/s11187-019-00315-w DOI: https://doi.org/10.1007/s11187-019-00315-w Return migration Reintegration
Problems in Mathematics
by Yu · Published 02/01/2017 · Last modified 07/13/2017
Quiz 3. Condition that Vectors are Linearly Dependent / Orthogonal Vectors are Linearly Independent
Problem 281
(a) For what value(s) of $a$ is the following set $S$ linearly dependent? \[ S=\left \{\,\begin{bmatrix} 1 \\ \end{bmatrix}, \begin{bmatrix} a \\ -1 \\ a^2 \\ a^3 \end{bmatrix} \, \right\}.\]
(b) Let $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ be a set of nonzero vectors in $\R^m$ such that the dot product \[\mathbf{v}_i\cdot \mathbf{v}_j=0\] when $i\neq j$. Prove that the set is linearly independent.
Find a Nonsingular Matrix Satisfying Some Relation
Determine whether there exists a nonsingular matrix $A$ if \[A^2=AB+2A,\] where $B$ is the following matrix. If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$. (a) \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 1 & 2 & -2 \end{bmatrix}\] (b) \[B=\begin{bmatrix} \end{bmatrix}.\]
Determine Conditions on Scalars so that the Set of Vectors is Linearly Dependent
Determine conditions on the scalars $a, b$ so that the following set $S$ of vectors is linearly dependent. \[S=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\},\] \[\mathbf{v}_1=\begin{bmatrix} \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} \end{bmatrix}.\]
Sylow Subgroups of a Group of Order 33 are Normal Subgroups
Prove that any $p$-Sylow subgroup of a group $G$ of order $33$ is a normal subgroup of $G$.
Determine Linearly Independent or Linearly Dependent. Express as a Linear Combination
Determine whether the following set of vectors is linearly independent or linearly dependent.
If the set is linearly dependent, express one vector in the set as a linear combination of the others. \[\left\{\, \begin{bmatrix} \end{bmatrix}, \begin{bmatrix} \end{bmatrix}\, \right\}.\]
Linear Transformation, Basis For the Range, Rank, and Nullity, Not Injective
Let $V$ be the vector space of all $2\times 2$ real matrices and let $P_3$ be the vector space of all polynomials of degree $3$ or less with real coefficients. Let $T: P_3 \to V$ be the linear transformation defined by \[T(a_0+a_1x+a_2x^2+a_3x^3)=\begin{bmatrix} a_0+a_2 & -a_0+a_3\\ a_1-a_2 & -a_1-a_3 \end{bmatrix}\] for any polynomial $a_0+a_1x+a_2x^2+a_3x^3 \in P_3$. Find a basis for the range of $T$, $\calR(T)$, and determine the rank of $T$, $\rk(T)$, and the nullity of $T$, $\nullity(T)$. Also, prove that $T$ is not injective.
The Inverse Matrix of an Upper Triangular Matrix with Variables
Let $A$ be the following $3\times 3$ upper triangular matrix. \[A=\begin{bmatrix} 1 & x & y \\ 0 &1 &z \\ 0 & 0 & 1 \end{bmatrix},\] where $x, y, z$ are some real numbers. Determine whether the matrix $A$ is invertible or not. If it is invertible, then find the inverse matrix $A^{-1}$.
The Union of Two Subspaces is Not a Subspace in a Vector Space
Let $U$ and $V$ be subspaces of the vector space $\R^n$. If neither $U$ nor $V$ is a subset of the other, then prove that the union $U \cup V$ is not a subspace of $\R^n$.
Quiz 2. The Vector Form For the General Solution / Transpose Matrices. Math 2568 Spring 2017.
(a) The given matrix is the augmented matrix for a system of linear equations. Give the vector form for the general solution. \[ \left[\begin{array}{rrrrr|r} 1 & 0 & -1 & 0 &-2 & 0 \\ 0 & 1 & 2 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ \end{array} \right].\]
(b) Let \[A=\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, B=\begin{bmatrix} \end{bmatrix}, C=\begin{bmatrix} 1 & 2\\ 0& 6 \end{bmatrix}, \mathbf{v}=\begin{bmatrix} \end{bmatrix}.\] Then compute and simplify the following expression.
\[\mathbf{v}^{\trans}\left( A^{\trans}-(A-B)^{\trans}\right)C.\]
Find All Matrices $B$ that Commute With a Given Matrix $A$: $AB=BA$
Let \[A=\begin{bmatrix} \end{bmatrix}.\] Then (a) Find all matrices \[B=\begin{bmatrix} x & y\\ z& w \end{bmatrix}\] such that $AB=BA$. (b) Use the results of part (a) to exhibit $2\times 2$ matrices $B$ and $C$ such that \[AB=BA \text{ and } AC \neq CA.\]
If a Matrix $A$ is Singular, There Exists Nonzero $B$ such that the Product $AB$ is the Zero Matrix
Let $A$ be an $n\times n$ singular matrix. Then prove that there exists a nonzero $n\times n$ matrix $B$ such that \[AB=O,\] where $O$ is the $n\times n$ zero matrix.
Prove a Given Subset is a Subspace and Find a Basis and Dimension
Let \[A=\begin{bmatrix} \end{bmatrix}\] and consider the following subset $V$ of the 2-dimensional vector space $\R^2$. \[V=\{\mathbf{x}\in \R^2 \mid A\mathbf{x}=5\mathbf{x}\}.\] (a) Prove that the subset $V$ is a subspace of $\R^2$. (b) Find a basis for $V$ and determine the dimension of $V$.
Eigenvalues of Real Skew-Symmetric Matrix are Zero or Purely Imaginary and the Rank is Even
Let $A$ be a real skew-symmetric matrix, that is, $A^{\trans}=-A$. Then prove the following statements. (a) Each eigenvalue of the real skew-symmetric matrix $A$ is either $0$ or a purely imaginary number. (b) The rank of $A$ is even.
Eckmann–Hilton Argument: Group Operation is a Group Homomorphism
Let $G$ be a group with the identity element $e$ and suppose that we have a group homomorphism $\phi$ from the direct product $G \times G$ to $G$ satisfying \[\phi(e, g)=g \text{ and } \phi(g, e)=g, \tag{*}\] for any $g\in G$. Let $\mu: G\times G \to G$ be a map defined by \[\mu(g, h)=gh.\] (That is, $\mu$ is the group operation on $G$.) Then prove that $\phi=\mu$. Also prove that the group $G$ is abelian.
Vector Form for the General Solution of a System of Linear Equations
Solve the following system of linear equations by transforming its augmented matrix to reduced echelon form (Gauss-Jordan elimination).
Find the vector form for the general solution. \begin{align*} x_1-x_3-3x_5&=1\\ 3x_1+x_2-x_3+x_4-9x_5&=3\\ x_1-x_3+x_4-2x_5&=1. \end{align*}
Invertible Matrix Satisfying a Quadratic Polynomial
Let $A$ be an $n \times n$ matrix satisfying \[A^2+c_1A+c_0I=O,\] where $c_0, c_1$ are scalars, $I$ is the $n\times n$ identity matrix, and $O$ is the $n\times n$ zero matrix. Prove that if $c_0\neq 0$, then the matrix $A$ is invertible (nonsingular). How about the converse? Namely, is it true that if $c_0=0$, then the matrix $A$ is not invertible?
Idempotent Matrices. 2007 University of Tokyo Entrance Exam Problem
For a real number $a$, consider $2\times 2$ matrices $A, P, Q$ satisfying the following five conditions: (1) $A=aP+(a+1)Q$; (2) $P^2=P$; (3) $Q^2=Q$; (4) $PQ=O$; (5) $QP=O$, where $O$ is the $2\times 2$ zero matrix. Then do the following problems. (a) Prove that $(P+Q)A=A$. (b) Suppose $a$ is a positive real number and let \[ A=\begin{bmatrix} a & 0\\ 1& a+1 \end{bmatrix}.\] Then find all matrices $P, Q$ satisfying conditions (1)-(5). (c) Let $n$ be an integer greater than $1$. For any integer $k$, $2\leq k \leq n$, we define the matrix \[A_k=\begin{bmatrix} k & 0\\ 1& k+1 \end{bmatrix}.\] Then calculate and simplify the matrix product \[A_nA_{n-1}A_{n-2}\cdots A_2.\] (Tokyo University Entrance Exam 2007)
There is Exactly One Ring Homomorphism From the Ring of Integers to Any Ring
Let $\Z$ be the ring of integers and let $R$ be a ring with unity. Determine all the ring homomorphisms from $\Z$ to $R$.
by Yu · Published 01/19/2017
If matrix product $AB$ is a square, then is $BA$ a square matrix?
Let $A$ and $B$ be matrices such that the matrix product $AB$ is defined and $AB$ is a square matrix. Is it true that the matrix product $BA$ is also defined and $BA$ is a square matrix? If it is true, then prove it. If not, find a counterexample.
Quiz 1. Gauss-Jordan Elimination / Homogeneous System. Math 2568 Spring 2017.
(a) Solve the following system by transforming the augmented matrix to reduced echelon form (Gauss-Jordan elimination). Indicate the elementary row operations you performed. \begin{align*} x_1+x_2-x_5&=1\\ x_2+2x_3+x_4+3x_5&=1\\ x_1-x_3+x_4+x_5&=0 \end{align*}
(b) Determine all possibilities for the solution set of a homogeneous system of $2$ equations in $2$ unknowns that has a solution $x_1=1, x_2=5$.
This website's goal is to encourage people to enjoy Mathematics! This website is no longer maintained by Yu. ST is the new administrator.
Problems in Mathematics © 2020. All Rights Reserved.
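Part (c) of the Tokyo University entrance-exam problem listed above asks for the product $A_nA_{n-1}\cdots A_2$ with $A_k=\begin{bmatrix} k & 0\\ 1 & k+1 \end{bmatrix}$. A quick computational check (our own sketch, not the site's posted solution) supports the closed form $\begin{bmatrix} n! & 0\\ \tfrac{(n+1)!}{2}-n! & \tfrac{(n+1)!}{2} \end{bmatrix}$:

```python
from math import factorial

def A(k):
    # The 2x2 matrix A_k from part (c) of the problem
    return [[k, 0], [1, k + 1]]

def matmul(X, Y):
    # Plain 2x2 integer matrix multiplication (exact, no overflow)
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def product(n):
    # Compute A_n A_{n-1} ... A_2, multiplying from the right
    P = A(2)
    for k in range(3, n + 1):
        P = matmul(A(k), P)
    return P

# Check the conjectured closed form for several n
for n in range(2, 10):
    f = factorial
    expected = [[f(n), 0], [f(n + 1) // 2 - f(n), f(n + 1) // 2]]
    assert product(n) == expected

print(product(5))  # [[120, 0], [240, 360]]
```

The closed form follows by induction: the bottom-left entry satisfies $c_n=(n-1)!+(n+1)c_{n-1}$, which $\tfrac{(n+1)!}{2}-n!$ obeys with base case $c_2=1$.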
Anchor voiceprint recognition in live streaming via RawNet-SA and gated recurrent unit
Jiacheng Yao1,2, Jing Zhang1,2, Jiafeng Li1,2 & Li Zhuo1,2
EURASIP Journal on Audio, Speech, and Music Processing volume 2021, Article number: 45 (2021)
With the sharp booming of online live streaming platforms, some anchors seek profits and accumulate popularity by mixing inappropriate content into live programs. After being blacklisted, these anchors even forge their identities and change platforms to continue streaming, causing great harm to the network environment. Therefore, we propose an anchor voiceprint recognition method for live streaming via RawNet-SA and a gated recurrent unit (GRU) for anchor identification on live platforms. First, the speech of the anchor is extracted from the live stream by using voice activation detection (VAD) and speech separation. Then, the feature sequence of the anchor voiceprint is generated from the speech waveform with the self-attention network RawNet-SA. Finally, the feature sequence of the anchor voiceprint is aggregated by a GRU and transformed into a deep voiceprint feature vector for anchor recognition. Experiments are conducted on the VoxCeleb, CN-Celeb, and MUSAN datasets, and the competitive results demonstrate that our method can effectively recognize the anchor voiceprint in video streaming.
With the substantial advances in computing technology, live video streaming is becoming increasingly popular. Due to the low employment threshold and acute competition among anchors, there are some issues in the online live streaming industry, such as an unreasonable content ecology and uneven anchor quality. To seek profits and accumulate popularity, some anchors mix inappropriate content into live programs. These offending anchors are usually found and banned after a period of time.
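As a rough illustration of the pipeline summarized above (a voiceprint feature sequence aggregated by a GRU into a fixed-length embedding, then compared against blacklisted anchors), here is a minimal numpy sketch. The dimensions, random weights, GRU gate convention, and the 0.7 threshold are our own assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 40, 64  # hypothetical feature / hidden sizes

# Random GRU weights; in the real system these would be learned end to end.
Wz, Uz = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H))
Wr, Ur = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H))
Wh, Uh = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_aggregate(seq):
    """Run a GRU over a (T, D) feature sequence; the last hidden state,
    L2-normalized, serves as the voiceprint embedding."""
    h = np.zeros(H)
    for x in seq:
        z = sigmoid(Wz @ x + Uz @ h)           # update gate
        r = sigmoid(Wr @ x + Ur @ h)           # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
        h = (1 - z) * h + z * h_tilde
    return h / np.linalg.norm(h)

def blacklist_hit(embedding, blacklist, threshold=0.7):
    """Cosine similarity of the embedding against unit-normalized
    blacklisted-anchor embeddings (rows of `blacklist`)."""
    return float((blacklist @ embedding).max()) >= threshold

seq = rng.normal(size=(200, D))  # stand-in for a RawNet-SA feature sequence
emb = gru_aggregate(seq)
other = rng.normal(size=H)
blacklist = np.stack([emb, other / np.linalg.norm(other)])
print(blacklist_hit(emb, blacklist))  # True: the same voice matches itself
```

When the maximum similarity exceeds the threshold, the server would interrupt the stream or trigger a manual review, as described below for the system in Fig. 1.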
However, after being blacklisted they can still stream by registering sub-accounts as other anchors or by occupying the rooms of other anchors, which has caused great harm to the network environment. Therefore, it is indispensable to apply intelligent analysis techniques to identify anchors according to the specific characteristics of live streaming, so that regulators can prevent these banned anchors from continuing to stream in various ways. The anchor is the host and guide of live streaming, who performs the show to attract viewers. In general, the anchor's voice is often relatively stable and constant because he/she needs to create a fixed impression on the audience. If the anchor does not use a voice changer, the voiceprint of the anchor can be used to recognize the anchor's identity and, furthermore, to prevent a blocked anchor from entering the online live streaming platform again. Figure 1 shows the architecture of a live streaming system working with an anchor voiceprint recognition system, consisting of three parts: camera, server, and client, of which the camera is used to capture the live stream, the server is used to encode and push video, and the client is used to decode and play video. The anchor voiceprint recognition system obtains a certain length of audio from the server through sampling and stores it in a buffer as the system input. The sampling rules are determined by the live streaming platform, usually at the beginning of or at intervals during live streaming. The voiceprint features of the audio are extracted, and the similarity between the voiceprint features of the input audio and those of blacklisted anchors is calculated and returned to the server. If the similarity is too high, the live streaming will be interrupted or a manual review will be conducted.
Fig. 1 The architecture of a live streaming system working with an anchor voiceprint recognition system
Traditional speaker recognition methods usually use handcrafted features to recognize the speaker.
For example, Reynolds et al. [1] proposed a speaker recognition method based on the Gaussian mixture model and universal background model (GMM-UBM). First, acoustic features, such as Mel-scale frequency cepstral coefficients (MFCC), are projected onto a high-dimensional space to generate a high-dimensional mean supervector, which is used to train a UBM. After that, taking the UBM as the initial model, the target GMM of the speaker is constructed by adaptive training based on the maximum posterior probability with the target speaker data. Finally, the speaker is scored by calculating the likelihood value to make a recognition judgment. Although this method can reduce the amount of speech required from the target speaker as well as speed up the GMM training process, it is greatly affected by the channel type, training duration, male/female ratio, and other factors. Dehak et al. [2] proposed the i-vector (identity vector), which uses a single space to replace the speaker space defined by the eigenvoice matrix and the channel space defined by the channel space matrix. The new space is a global difference space covering the differences between both speakers and channels, thus reducing the impact of channel type and male/female ratio, but remaining sensitive to noise. Since live streaming is usually mixed with background music, game sound, and other noise, which is intractable to remove completely by speech separation, the traditional methods are obviously not suitable for anchor voiceprint recognition. Recently, deep learning has demonstrated powerful representation ability and anti-noise ability in speech processing. By training on massive data, robust features can be obtained with a deep neural network (DNN). Consequently, a series of deep learning-based speaker recognition methods have been explored. For instance, Variani et al.
[3] took the FBank features stacked into 1-D vectors as the input of a DNN and extracted voiceprint features through consecutive fully connected layers for speaker recognition. Compared with the traditional methods, voiceprint features extracted by DNN have stronger anti-noise ability, but the fully connected layers have a large number of parameters, are hard to train, and are prone to overfitting. Snyder et al. [4] extracted voiceprint features through a time delay neural network (TDNN), which, like dilated convolution, expands the receptive field and shares network parameters, effectively reducing the number of network parameters and the training difficulty, and achieving a 4.16% equal error rate (EER) on the SITW [5] dataset. Given the significant advantages of deep convolutional neural networks (CNN) in image processing, some researchers borrow the ideas of image processing, directly regarding the acoustic features as two-dimensional images, and apply CNN to obtain voiceprint features. For example, Qian et al. [6] compared the effects of three deep models in automatic speaker verification (ASV) spoofing detection, including DNN, CNN, and bidirectional long short-term memory recurrent neural network (BLSTM-RNN), of which CNN performed best. Besides, Lavrentyeva et al. [7] proposed a CNN + bidirectional GRU (Bi-GRU) structure to extract deep voiceprint features for ASV spoof detection. Gomez-Alanis et al. [8] proposed a gated recurrent CNN (GRCNN) to extract deep voiceprint features by combining the ability of convolutional layers to extract discriminative feature sequences with the capacity of recurrent neural networks (RNN) to learn long-term dependencies. Furthermore, they proposed a light convolutional gated RNN (LC-GRNN) [9], which reduces the high complexity by using a GRU-based RNN to learn long-term dependencies. Gomez-Alanis et al.
[10] proposed an integrated neural network, composed of LC-GRNN [9, 11], TDNN, and a well-designed loss function, to generate deep features for ASV spoof detection, reaching the state of the art (SOTA). Nagrani et al. [12] directly extracted the voiceprint features using CNN after representing the acoustic features as two-dimensional images, reaching an EER of 7.8% on the VoxCeleb1 [12] dataset. Hajavi et al. [13] improved the CNN structure to produce multi-scale voiceprint features, reducing the EER on the VoxCeleb1 dataset to 4.26%. Jiang et al. [14] increased the depth of the CNN and constrained the network through a channel attention mechanism to enhance its representation ability. As a result, the EER on the VoxCeleb1 dataset was reduced to 2.91%. Although the above methods can reduce the input dimension of the neural network, the hyperparameters of the acoustic feature extraction methods may affect speaker recognition, and it is difficult to tell whether their influence is positive or negative. Similar to the idea of paraconsistent feature engineering [15], whether these handcrafted features are suitable as inputs to a neural network depends not only on the features themselves but also on the network adopted to process them. Therefore, the hyperparameters are set empirically, without a theoretical explanation. Moreover, in visual information processing tasks, the first several layers of a CNN extract low-level local features, such as edge and texture features. The subsequent convolutional layers extract higher-level features layer by layer from these local features until semantic features are obtained. In speaker recognition tasks, treating the input acoustic features as a two-dimensional image and extracting their features with a CNN yields features that are, in physical meaning, similar to such local features. Therefore, Jung et al.
[16] further proposed RawNet, which can directly generate voiceprint features from the audio waveform with a 1-D residual CNN and GRU [17], achieving a 4.0% EER on the VoxCeleb1 dataset. This method does not need to extract any acoustic features; each 1-D convolutional layer can be regarded as a series of filters, so the final deep voiceprint feature is extracted by applying a series of filters to the input audio. However, owing to the simple structure of RawNet, its speaker recognition performance with the extracted deep voiceprint features is inferior to that of methods using acoustic features as input. To improve the representation ability of RawNet, Jung et al. [18] proposed RawNet2 by adding a channel attention mechanism to the network, reducing the EER to 2.48%, outperforming the methods taking acoustic features as input while eliminating the computational overhead of acoustic feature extraction. As shown in Fig. 2, the feature sequence can be segmented by the channel dimension or the frame/temporal dimension, yet the channel attention mechanism only considers the importance of different channels and ignores the relationship between frames. In fact, the relationship between frames is an important indication of voiceprint information, and channel attention alone cannot guide the network to pay attention to more important frames and ignore less important ones. Feature sequence segmented by channel dimension or frame dimension The transformer originally proposed by Vaswani et al. [19] has been applied to speech recognition. It can guide the network to learn the long-range dependence between feature sequence frames to enhance the representation ability of the model. Now, it has been extended to CNN. For instance, India et al.
[20] proposed a voiceprint feature extraction method that utilizes a multi-head self-attention module to replace the global pooling layer, aggregating the voiceprint feature sequence and transforming it into a deep voiceprint feature vector, dropping the EER by 0.9%. Safari et al. [21] also improved the performance of the speaker recognition model by replacing the global pooling layer with a self-attention pooling layer. This shows that proper use of the self-attention structure can effectively improve the feature learning ability of a neural network and contribute to voiceprint recognition. When the raw waveform is used as the input, the output of each layer of the model retains temporal context information, which plays an important role in speaker recognition. Notably, an RNN can enhance the overall performance of the model owing to this temporal information. As a representative RNN, the GRU is a structure that replaces long short-term memory (LSTM) [22]; it removes the forget gate and uses the complement of the update gate vector to discard information. Compared with LSTM, GRU can not only make use of the temporal relationship of feature sequences but also improve the computational efficiency of long-sequence modeling, effectively improving the representation ability of the model. Through the analysis above, we choose the deep learning method for anchor voiceprint recognition and take the waveform as the input of the neural network. The self-attention mechanism is applied to the network to improve the feature learning ability of the model. Thereby, we propose an anchor voiceprint recognition method in live video streaming using RawNet-SA and GRU. The overall process of the anchor voiceprint recognition system is as follows. First, the anchor's speech is extracted from the live streaming by using voice activation detection (VAD) and speech separation.
Then, the feature sequence of the anchor voiceprint is generated from the waveform of the speech with the self-attention network RawNet-SA. RawNet-SA combines channel attention and self-attention to capture the relationships between channels and frames in the voiceprint feature sequence, so as to precisely distinguish the identity of anchors. Moreover, the input of RawNet-SA is the waveform rather than acoustic features, so the extracted deep features are not affected by the acoustic feature extraction process, and the network has better interpretability. Finally, the feature sequence of the anchor voiceprint is aggregated by a GRU and transformed into a deep voiceprint feature vector for anchor recognition. The main contributions of this paper can be summarized as follows: An effective RawNet-SA is designed to generate the feature sequence of the anchor voiceprint from the speech waveform, adding channel/self-attention to capture the relationships between channels and frames in the voiceprint feature sequence so as to precisely distinguish the identity of anchors. The input of the proposed RawNet-SA is the waveform rather than acoustic features, so the extracted deep features are not affected by the acoustic feature extraction process, and the network has better interpretability. We propose to recognize the anchor from the live streaming via deep voiceprint features, which is a scenario-specific application. The rest of this paper is organized as follows. Section 2 introduces our method in detail. Experimental results with ablation studies are presented and analyzed in Section 3. Conclusions are drawn in Section 4. The overall structure of our anchor voiceprint recognition method is shown in Fig. 3. First, the speech of the anchor is extracted from the audio in the live streaming by VAD and speech separation. Then, the feature sequence of the anchor voiceprint is generated from the speech waveform by using the self-attention network RawNet-SA, constructed based on RawNet2.
Finally, the feature sequence of the anchor voiceprint is aggregated by a GRU and transformed into a deep voiceprint feature vector for anchor recognition. Framework of the proposed anchor voiceprint recognition method, in which the GRU diagram is modified based on [23] Voice activation detection and speech separation Since the anchor in the live streaming will not be talking all the time, and music, sound effects, outdoor noise, and other information will interfere with voiceprint recognition, it is necessary to remove the silent segments of the anchor through VAD before further processing, and then separate the speech. Traditional VAD methods are usually based on energy [24], pitch [25], zero crossing rate [26], or combinations of various features, and their key problem is to judge whether there is speech in an audio segment. Since the traditional methods cannot achieve the expected results in complex environments, we adopt the lightweight network VadNet (Fig. 4) proposed by Wagner et al. [27] to realize VAD. Firstly, the feature sequence is generated by a three-layer CNN with the audio waveform as the input. Then, the feature sequence is aggregated by a two-layer GRU and transformed into a feature vector. Finally, a fully connected layer acting as a classifier estimates whether the audio segment contains speech. The structure of VadNet After removing the silent segments, we need to extract the anchor's speech from the remaining audio segments containing background sound. Spleeter [28] is open-source software developed by Deezer Research that can separate various sounds, including vocals in music, and is mainly applied to music information retrieval, music transcription, singer identification, etc. We exploit Spleeter's ability to separate a singer's voice from music to pick up the anchor's speech. Figure 5 describes the structure of U-Net [29] in Demucs (Fig. 5A) and the structure of the encoder and decoder in U-Net (Fig.
5B). In Fig. 5A, based on Demucs [30], the soft mask of each source is estimated by a 12-layer U-Net, and separation is then done from the estimated source spectrograms with soft masking or multichannel Wiener filtering. The network is composed of 6 encoders and 6 decoders, in which the feature sequence is modeled by a two-layer Bi-LSTM between the encoder and decoder. The specific structure of the encoder and decoder is shown in Fig. 5B, in which each encoder consists of two 1-D convolution layers, with ReLU and GLU as activation functions respectively. The decoder differs from the encoder in that the convolution layer activated by GLU comes before the convolution layer activated by ReLU, and the convolution layer activated by ReLU is no longer an ordinary convolution but a transposed convolution. Since we only need to separate the speech of the anchor, we use the 2-stem model provided by Spleeter, which only separates the speech from other sounds rather than producing four different types of sounds like the original Demucs model, to increase the separation speed. The structure of U-Net in Demucs and the structure of encoder and decoder in U-Net. A The structure of U-Net. B Detailed view of the layers Encoderi (upper) and Decoderi (below) Voiceprint deep feature sequence extraction with RawNet-SA During live streaming, a great deal of noise is normally present, for example, background music or noise and foreground non-human sound events. Even after pre-processing, the input audio will inevitably be mixed with some noise. Moreover, the duration and speed of each speech segment may vary depending on the content of the live streaming. However, the existing voiceprint feature extraction networks usually adopt acoustic features as input. The hyperparameters of the extracted acoustic features influence the representation ability of the voiceprint features. Thus, it is difficult to find an appropriate acoustic feature that can be adapted to the anchor's voice in all cases.
Besides that, acoustic feature extraction requires additional computational overhead. By using the audio waveform as input, RawNet2 does not need to extract acoustic features, while retaining the temporal relationship of the audio, and achieves good performance on the VoxCeleb dataset. We know that the self-attention mechanism can effectively strengthen the feature learning ability of a neural network by weighing the importance of the channels and frames of feature sequences. As a result, to avoid using acoustic features and further enhance the feature extraction ability of the network, we propose a model combining RawNet2 with a self-attention module (RawNet-SA) to generate anchor voiceprint features. The structure of RawNet-SA is shown in Table 1, in which the numbers in Conv and Sinc indicate filter length, stride, and number of filters, and the number in Maxpool indicates filter length. Table 1 The structure of RawNet-SA Since the computing cost of the self-attention layer grows sharply as the dimension of the input feature sequence increases, and the dimension of the feature sequence is relatively high in the front part of RawNet-SA, the Sinc-conv layer and the first three Resblocks of RawNet-SA follow the structure of RawNet2 to accelerate the inference speed while reducing the training difficulty. In addition, the channel attention layers of the last three Resblocks are replaced with self-attention layers to improve the feature representation ability of the model. To utilize the temporal information in the feature sequence, a GRU is used to aggregate the feature sequence and transform it into a fixed-length feature vector. Sinc is a convolution layer with interpretable convolutional filters proposed in [31]. Different from the standard convolution layer, the kernel of Sinc is defined in the form of a filter bank composed of rectangular bandpass filters, and the learnable parameters only contain the low and high cutoff frequencies.
The Sinc layer can be computed as: $$ y\left[n\right]=x\left[n\right]\ast \left(g\left[n,{f}_1,{f}_2\right]\cdot w\left[n\right]\right) $$ $$ g\left[n,{f}_1,{f}_2\right]=2{f}_2\frac{\sin \left(2\pi {f}_2n\right)}{2\pi {f}_2n}-2{f}_1\frac{\sin \left(2\pi {f}_1n\right)}{2\pi {f}_1n} $$ $$ w\left[n\right]=0.54-0.46\cdot \cos \left(\frac{2\pi n}{L}\right) $$ where x[n] is a chunk of the speech signal, g[n, f1, f2] is the filter of length L, y[n] is the filtered output, f1 and f2 represent the low and high cutoff frequencies respectively, and w[n] is the Hamming window function. The feature map scaling (FMS) layers in RawNet-SA follow the structure of the channel attention module in RawNet2. Different from the channel attention module commonly used in image processing, the vector generated by FMS is used as both the weight and the bias of the channels to improve the effect of the attention constraint. Let C = [c1, c2, …, cF] be the output feature sequence of a Resblock, F be the number of channels in the feature sequence, and cf∈ℝT (T is the length of the feature sequence), so that C∈ℝT×F. FMS can be computed as: $$ {s}_f=\mathrm{sigmoid}\left({\mathbf{c}}_f\cdot \mathbf{w}\right) $$ $$ {\mathbf{c}}_f^{\prime }={\mathbf{c}}_f\cdot {s}_f+{s}_f $$ Because FMS only considers the relationship between channels and ignores the relationship between feature sequence frames, self-attention layers are utilized to enhance the representation ability of the model. In addition, since the computational complexity of the self-attention layer increases sharply as the size of its input feature sequence grows, we only add self-attention layers in the last three Resblocks. SA in Table 1 above represents the self-attention layer. The structure of the original self-attention layer [19] for speech recognition is shown in Fig. 6A, where FC-KEY, FC-QUERY, and FC-VALUE represent fully connected layers respectively.
The feature sequence is input to FC-KEY and FC-QUERY, the outputs of FC-KEY and FC-QUERY are multiplied, and the result is then normalized to obtain the weight matrix. The new feature sequence A is finally obtained by multiplying the weight matrix by the output of FC-VALUE as follows: $$ \mathbf{A}=\mathrm{softmax}\left(\frac{{\mathbf{W}}_q\mathbf{X}{\left({\mathbf{W}}_k\mathbf{X}\right)}^{\mathrm{T}}}{\sqrt{d_k}}\right){\mathbf{W}}_v\mathbf{X} $$ where X∈ℝS×d is the matrix obtained by concatenating the input word vectors; S denotes the number of word vectors; Wq, Wk, and Wv∈ℝd×d denote the parameter matrices of FC-QUERY, FC-KEY, and FC-VALUE in Fig. 6A respectively; and dk represents the dimension of the word vectors. The structure of self-attention module. A Original self-attention module [19]. B Advanced self-attention module To apply the self-attention layer to RawNet-SA, we treat the time dimension T of the voiceprint feature sequence as the sequence dimension S of the word vector matrix, as shown in Fig. 6B. Inspired by the non-local neural network [32], the dimension of the feature sequence is compressed by FC-QUERY and FC-KEY and then restored by the fully connected layer FC-Extract before merging the residuals, and a batch normalization (BN) layer is applied to accelerate the training of the model. The residual C′ is formally obtained as follows: $$ {\mathbf{C}}^{\prime }={\mathbf{W}}_E\left(\mathrm{softmax}\left({\mathbf{W}}_q\mathbf{C}{\left({\mathbf{W}}_k\mathbf{C}\right)}^{\mathrm{T}}\right)\left({\mathbf{W}}_v\mathbf{C}\right)\right) $$ where WE∈ℝc×c denotes the parameter matrix of FC-Extract, and the BN calculation is omitted. In a nutshell, the feature sequence V∈ℝT×c is obtained from the speech waveform by 3 Resblocks with channel attention layers and 3 Resblocks with self-attention layers, at which point T = 26 and c = 256.
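The modified self-attention layer of Fig. 6B can be sketched in PyTorch as follows. This is a minimal sketch, not the paper's exact implementation: the channel squeeze ratio of 4 and the choice to leave FC-VALUE uncompressed (so that FC-Extract is c×c, matching the stated WE∈ℝc×c) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CompressedSelfAttention(nn.Module):
    """Sketch of the Fig. 6B self-attention layer: FC-QUERY and FC-KEY
    compress the channel dimension, FC-VALUE keeps it, FC-Extract restores
    the attended sequence, and BN plus a residual connection follow."""

    def __init__(self, channels=256, squeeze=4):  # squeeze ratio is an assumption
        super().__init__()
        d = channels // squeeze
        self.fc_query = nn.Linear(channels, d, bias=False)    # FC-QUERY
        self.fc_key = nn.Linear(channels, d, bias=False)      # FC-KEY
        self.fc_value = nn.Linear(channels, channels, bias=False)    # FC-VALUE
        self.fc_extract = nn.Linear(channels, channels, bias=False)  # FC-Extract
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, c):                                # c: (batch, T, channels)
        q, k, v = self.fc_query(c), self.fc_key(c), self.fc_value(c)
        w = torch.softmax(q @ k.transpose(1, 2), dim=-1)  # (batch, T, T) weights
        out = self.fc_extract(w @ v)                      # restore attended output
        out = self.bn(out.transpose(1, 2)).transpose(1, 2)  # BN over channels
        return c + out                                    # merge the residual
```

With the dimensions stated in the text (T = 26, c = 256), the layer maps a (batch, 26, 256) sequence to another (batch, 26, 256) sequence, so it can replace a channel attention layer inside a Resblock without changing surrounding shapes.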
Voiceprint deep feature aggregation by GRU Most voiceprint feature extraction networks tend to apply pooling-like methods or learnable dictionary encoding methods, such as global average pooling, global maximum pooling, NetVLAD [33], and GhostVLAD [34], to aggregate voiceprint feature sequences and transform them into deep voiceprint feature vectors. However, these methods do not consider the temporal relationship of the feature sequences and lose a lot of information. Therefore, to effectively utilize the temporal relationship of the feature sequences, a GRU is applied to aggregate the feature sequences in RawNet-SA. First, the reset gate vector rt is generated to decide how much relevant information from the past time step enters the new memory content. The Hadamard product of rt and the previous hidden state ht-1 is added to the input term to determine what information is collected from the current input. After summing, the non-linear activation function (tanh) is applied to obtain \( {\tilde{\mathbf{h}}}_t \). Secondly, the update gate saves the information of the current unit and passes it on through the network. The update gate vector zt determines what information is collected from the current memory content and from the previous time steps. Finally, the hidden state of the current unit is obtained by applying the Hadamard product to zt and \( {\tilde{\mathbf{h}}}_t \), and summing it with the Hadamard product of (1-zt) and ht-1.
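The gated aggregation described above can be sketched in PyTorch, taking the last hidden state of the GRU as the fixed-length embedding. The GRU hidden size and the output dimension of the final fully connected layer are illustrative assumptions; only the input shape (T = 26 frames, c = 256 channels) follows the RawNet-SA output described in this paper.

```python
import torch
import torch.nn as nn

# Aggregate a voiceprint feature sequence with a GRU, then project the last
# hidden state to a compact voiceprint vector via a fully connected layer.
gru = nn.GRU(input_size=256, hidden_size=1024, batch_first=True)  # sizes assumed
fc = nn.Linear(1024, 256)  # stand-in for the dimension-controlling FC layer

features = torch.randn(8, 26, 256)   # (batch, T, c) feature sequence V
_, h_last = gru(features)            # h_last: (1, batch, 1024), final hidden state
embedding = fc(h_last.squeeze(0))    # (batch, 256) deep voiceprint feature vector
```

Unlike global average or maximum pooling, the GRU's recurrence processes the frames in order, so the resulting vector depends on the temporal arrangement of the sequence, not just its frame-wise statistics.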
Let the feature sequence be V = [v1, v2,…,vT], vt∈ℝc, where c is the number of channels; then the aggregation of the feature sequence is carried out as follows: $$ {\mathbf{z}}_t=\delta \left({\mathbf{W}}_{xz}{\mathbf{v}}_t+{\mathbf{W}}_{hz}{\mathbf{h}}_{t-1}+{\mathbf{b}}_z\right) $$ $$ {\mathbf{r}}_t=\delta \left({\mathbf{W}}_{xr}{\mathbf{v}}_t+{\mathbf{W}}_{hr}{\mathbf{h}}_{t-1}+{\mathbf{b}}_r\right) $$ $$ {\tilde{\mathbf{h}}}_t=\tanh \left({\mathbf{W}}_{xh}{\mathbf{v}}_t+{\mathbf{W}}_{hh}\left({\mathbf{r}}_t\bullet {\mathbf{h}}_{t-1}\right)+{\mathbf{b}}_h\right) $$ $$ {\mathbf{h}}_t={\mathbf{z}}_t\bullet {\tilde{\mathbf{h}}}_t+\left(1-{\mathbf{z}}_t\right)\bullet {\mathbf{h}}_{t-1} $$ where vt is the input, zt is the update gate vector, rt is the reset gate vector, ht is the hidden state at time t, the W terms are the parameter matrices, the b terms are the bias vectors, and • denotes the element-wise (Hadamard) product. Finally, to remove feature redundancy and accelerate anchor voiceprint recognition, the dimension of the feature vector is controlled by the fully connected layer at the end of RawNet-SA. Anchor voiceprint recognition with deep features In this section, RawNet-SA is trained with the softmax loss function on a closed dataset, and the trained RawNet-SA then generates the deep voiceprint feature of the anchor. As a result, the identity of the anchor is determined by the similarity of the anchor voiceprint features.
The softmax loss function is calculated as: $$ L=-\sum \limits_{i=1}^m\log \frac{\exp \left({\mathbf{W}}_{y_i}^{\mathrm{T}}{\mathbf{x}}_i+{\mathbf{b}}_{y_i}\right)}{\sum_{j=1}^n\exp \left({\mathbf{W}}_{y_j}^{\mathrm{T}}{\mathbf{x}}_i+{\mathbf{b}}_{y_j}\right)} $$ where m represents the size of the mini-batch, n is the number of speakers in the dataset, xi is the ith voiceprint feature vector in the mini-batch, yi is the true category of the ith feature vector in the mini-batch, Wyi is the yith column of the parameter matrix of the fully connected layer used for classification, and byi is the yith element of its bias vector. By rewriting WyiTxi and WyjTxi in cosine form, we obtain: $$ L=-\sum \limits_{i=1}^m\log \frac{\exp \left(\left\Vert {\mathbf{W}}_{y_i}^{\mathrm{T}}\right\Vert \left\Vert {\mathbf{x}}_i\right\Vert \cos \left({\theta}_{i,i}\right)+{\mathbf{b}}_{y_i}\right)}{\sum_{j=1}^n\exp \left(\left\Vert {\mathbf{W}}_{y_j}^{\mathrm{T}}\right\Vert \left\Vert {\mathbf{x}}_i\right\Vert \cos \left({\theta}_{i,j}\right)+{\mathbf{b}}_{y_j}\right)} $$ where θi,j is the angle between the ith feature vector in the mini-batch and the jth column of the parameter matrix W. Each column of the parameter matrix W can be regarded as the central vector of its corresponding category. Therefore, training the network with the softmax loss function can be viewed as guiding the network to find a feature space in which the cosine similarity between the feature vector x and the corresponding column of the parameter matrix is as high as possible, while the cosine similarity between x and the other columns is low enough.
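A minimal PyTorch sketch of this training objective: the softmax loss is cross-entropy over a linear classification head whose weight columns play the role of the class-center vectors W. All sizes (embedding dimension, number of speakers, mini-batch size) are illustrative assumptions, and the random embeddings stand in for the output of the embedding network.

```python
import torch
import torch.nn as nn

n_speakers, emb_dim = 1000, 1024            # illustrative sizes, not the paper's
classifier = nn.Linear(emb_dim, n_speakers)  # rows of weight act as class centers
criterion = nn.CrossEntropyLoss()            # softmax loss over the mini-batch

# Stand-ins for a mini-batch of voiceprint feature vectors x_i and labels y_i.
embeddings = torch.randn(50, emb_dim, requires_grad=True)
labels = torch.randint(0, n_speakers, (50,))

loss = criterion(classifier(embeddings), labels)  # L from the formula above
loss.backward()  # gradients flow back into the embedding network during training
```

At test time the classification head is discarded; only the embedding is kept and compared by similarity.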
Accordingly, in our application, cosine similarity is used as the similarity of voiceprint feature vectors: $$ similarity\kern0.5em =\kern0.5em \frac{{\mathbf{x}}_1^{\mathrm{T}}{\mathbf{x}}_2}{\left\Vert {\mathbf{x}}_1\right\Vert \left\Vert {\mathbf{x}}_2\right\Vert } $$ where x1 and x2 respectively represent the voiceprint feature vectors from different speech signals. Experiments and discussion In this section, we evaluate the performance of the proposed anchor voiceprint recognition method for live streaming by comparing it with other SOTA speaker recognition methods. We conduct a total of seven experiments as follows: Experiment I: the overall performance comparison with SOTA methods. Experiment II: the role of the self-attention mechanism, by ablation study. Experiment III: the influence of the self-attention module on inference speed. Experiment IV: the effect of different channel squeeze ratios in the self-attention layer on voiceprint recognition. Experiment V: the influence of different feature aggregation methods on voiceprint recognition. Experiment VI: the effect of VAD and speech separation on voiceprint recognition. Experiment VII: the influence of different similarity measurement methods on voiceprint recognition. Experiment setup We choose the VoxCeleb2 [35], VoxCeleb1, CN-Celeb [36], and MUSAN [37] datasets to conduct the experiments. The VoxCeleb1 and VoxCeleb2 datasets contain 1251 and 6112 speakers, respectively, without duplication. The speakers cover different ages, genders, and accents. Audio scenes include red carpet catwalks, outdoor venues, indoor video studios, etc. The sound acquisition equipment includes professional and handheld terminals, and the background noise includes conversation, laughter, and different scenes. Using the VoxCeleb2 dataset for training and the VoxCeleb1 dataset for testing is a standard procedure for many speaker recognition methods. The CN-Celeb dataset is an unconstrained large-scale Chinese speaker recognition dataset.
The dataset contains 1000 speakers, each with recordings from at least five different scenes, with a total of about 130,000 sentences and a total duration of 274 h. It is collected from entertainment TV shows, singing, vlogs, etc. It is very similar to the live streaming environment and contains all kinds of noise, such as background music, audience applause, etc. The MUSAN dataset consists of music from several genres, speech in twelve languages, and a wide assortment of technical and non-technical noises. It is often used to generate corrupted versions of otherwise noiseless datasets. The general statistics of VoxCeleb1, VoxCeleb2, and CN-Celeb are given in Table 2. Table 2 Statistics of different datasets All models in the experiments were trained on the VoxCeleb2 dataset. In Experiments I-II, IV-V, and VII, we evaluate the methods on VoxCeleb-E and VoxCeleb-H [35] (two different test protocols of VoxCeleb1). We also use the CN-Celeb dataset for testing in Experiments I-II, IV-V, and VII to assess the effectiveness of the proposed method in scenes similar to the live streaming environment. Specifically, we illustrate the effectiveness of VAD and speech separation on the CN-Celeb test set (CN-Celeb-T), the VoxCeleb1 test set (Vox1T-O), and corrupted versions of Vox1T-O (Vox1T-N and Vox1T-M) generated using MUSAN. Our experiment platform is a PC with 16 GB RAM, a 2.40 GHz CPU, an NVIDIA 2080Ti GPU, and the Ubuntu 20.04 LTS operating system. Our framework is implemented in PyTorch, accelerated by CUDA 10.1 and cuDNN 7.6. RawNet-SA was trained on the VoxCeleb2 dataset. We set the duration of the input waveforms to 59,049 samples (≈ 3.69 s) in the training stage to facilitate mini-batch construction (if a speech segment is shorter than 3.69 s, it is repeated to reach 3.69 s). In the testing stage, we apply test time augmentation (TTA) [35] with a 20% overlap.
Different parts of the speech are intercepted to obtain multiple voiceprint feature vectors, and their average value is taken as the final voiceprint feature vector. During the training stage, an Adagrad optimizer is used, with a learning rate starting from 0.01 and decaying as follows: $$ {lr}_t=\frac{lr_{t-1}}{1+\left(d\times t\right)} $$ where lrt is the learning rate at the tth iteration, t is the number of iteration steps, and d is the decay rate of the learning rate, which is set to 0.0001. The batch size for network training is set to 50, and the total number of epochs is 35. We use the following evaluation indicators to verify the performance: EER: a metric widely used to measure the performance of voiceprint recognition. The lower the EER, the better the overall recognition performance. Let t be the threshold for judging whether two utterances come from the same speaker, and s the similarity of the two voiceprint feature vectors; when s>t, the two feature vectors are considered to come from the speech of the same speaker; otherwise, they come from the speech of different speakers. After traversing the test set, different false rejection rates (FRR) and false acceptance rates (FAR) can be calculated for different thresholds: $$ FAR=\frac{FP}{FP+ TN} $$ $$ FRR=\frac{FN}{TP+ FN} $$ where TP is the true positives, TN is the true negatives, FP denotes the false positives, and FN stands for the false negatives. When the threshold is adjusted so that FAR=FRR, then EER=FAR=FRR. Minimum detection cost function (minDCF): a metric widely used to measure the performance of voiceprint recognition. The lower the minDCF, the better the overall recognition performance.
DCF is calculated as follows: $$ DCF={C}_{\mathrm{FR}}\ast FRR\ast {P}_{\mathrm{target}}+{C}_{\mathrm{FA}}\ast FAR\ast \left(1-{P}_{\mathrm{target}}\right) $$ where CFR and CFA represent the penalty costs of FRR and FAR respectively, and Ptarget is a prior probability, which can be set according to different application environments. To give DCF a more intuitive meaning, it is normalized by dividing it by the best cost that can be obtained without processing the input data: $$ {DCF}_{\mathrm{norm}}=\frac{DCF}{\min \left[{C}_{\mathrm{FR}}\ast {P}_{\mathrm{target}},\kern1em {C}_{\mathrm{FA}}\ast \left(1-{P}_{\mathrm{target}}\right)\right]} $$ When CFR, CFA, and Ptarget are set, there is a pair of FRR and FAR values that minimizes DCF; the corresponding DCFnorm is the minDCF. Here, we use two different sets of parameters to calculate minDCF: DCF08: CFR = 10, CFA = 1, and Ptarget = 0.01, following the setting of NIST SRE 2008. DCF10: CFR = 1, CFA = 1, and Ptarget = 0.001, following the setting of NIST SRE 2010. Experiment I: comparison with state-of-the-art methods In this experiment, we present the results of the proposed method and SOTA methods on the VoxCeleb1 and CN-Celeb datasets. The method proposed by Chung et al. [35] extracts the deep voiceprint feature sequence through ResNet50 and aggregates the feature sequence using time average pooling (TAP). The ResNet50 initializes the network weights using softmax pre-training and is then trained with a contrastive loss using an offline hard negative mining strategy. The method proposed by Xie et al. [38] extracts the deep voiceprint feature sequence through Thin ResNet34 and uses GhostVLAD to aggregate the feature sequence. The network is trained by cross-entropy loss, and its performance outperforms the method in [35]. The method proposed by Nagrani et al. [39] uses the same network as the method in [38], but the network is pre-trained by cross-entropy loss and then trained by relation loss.
SpeakerNet [40], proposed by Nvidia, uses statistics pooling (SP) to aggregate the feature sequence and is trained with the AAM-Softmax loss [41]. DANet [42] generates the deep voiceprint feature sequence through the VGG-like model described in [20] and introduces double multi-head attention to aggregate the feature sequence, with the network trained using cross-entropy loss. As mentioned in Section 2.2, the self-attention module in Fig. 6A is the original version designed for speech recognition, which has more parameters and is difficult to train, while the self-attention module in Fig. 6B is an improved version that enables fast convergence of the network, with an adjustable number of parameters. In this experiment, we tested both the original version, RawNet-origin-SA* (a model using the original self-attention module of Fig. 6A), and the improved version, RawNet-SA (a model using the self-attention module of Fig. 6B). Table 3 shows that our RawNet-origin-SA* reaches the lowest EER compared with the methods using acoustic features as network input and the baseline method RawNet2. RawNet-origin-SA* achieved 2.37% EER on VoxCeleb-E, a decrease of 0.32% compared with SpeakerNet and 0.20% compared with RawNet2. On VoxCeleb-H, an EER of 4.54% was obtained, which is 0.07% and 0.35% lower than DANet and RawNet2, respectively. This is because the self-attention module makes the network focus on the relationship between feature frames, while RawNet2 only uses channel attention to attend to the channel dimension of the feature map. RawNet-SA attained 4.52% EER on VoxCeleb-H, 0.37% less than RawNet2, and 2.54% EER on VoxCeleb-E, 0.03% less than RawNet2. RawNet-SA is not as effective as RawNet-origin-SA* because its network is not initialized with the parameters of a trained RawNet2, so the actual number of training iterations of RawNet-SA is smaller than that of RawNet-origin-SA*.
Although Thin ResNet34 [38] and SpeakerNet perform better than RawNet-SA on the CN-Celeb dataset, considering performance across all datasets, the overall performance of RawNet-SA and RawNet-origin-SA* is the best. It should be noted that the number of training iterations of SpeakerNet is about six times that of RawNet-SA, and AAM-Softmax loss is used in its training. Table 3 Results of comparison to state-of-the-art method on VoxCeleb-E and VoxCeleb-H evaluation protocols To evaluate and compare performance at all operating points, we provide the detection error tradeoff (DET) curves of the baseline method RawNet2 and the proposed RawNet-origin-SA* and RawNet-SA, shown in Fig. 7A, B, and C, respectively. It can be seen that RawNet-origin-SA* performs best at all operating points of the simple test set VoxCeleb-E. On the complex test set VoxCeleb-H, RawNet-SA approximates RawNet-origin-SA*, and it exceeds RawNet-origin-SA* on the CN-Celeb dataset. DET curves of models on different datasets. A VoxCeleb-E. B VoxCeleb-H. C CN-Celeb We also include Fig. 8 to illustrate which speech pairs RawNet-SA judges to come from the same person and which from different people. We randomly selected 4 pairs of speech audios: a true-positive (TP) pair, a true-negative (TN) pair, a false-positive (FP) pair, and a false-negative (FN) pair. The audios in the true-positive pair come from the same speaker, and the similarity between their deep voiceprint features is high; accordingly, the spectrograms in the TP part of Fig. 8 are very similar. The audios in the true-negative pair come from different speakers, so the similarity between their deep voiceprint features is low; it can be seen that there are significant differences in the spectrograms in the TN part of Fig. 8.
However, the spectrograms in the FP part and FN part are similar, so it is hard to judge whether they are from the same speaker based on the deep voiceprint feature extracted with RawNet-SA. The visualization results of anchor voiceprint recognition Experiment II: ablation study of self-attention mechanism To demonstrate the role of the self-attention mechanism, we conducted an ablation study on the VoxCeleb1 and CN-Celeb datasets using EER and minDCF, as shown in Table 4, with RawNet2 as the baseline model. We can see that our RawNet-origin-SA* and RawNet-SA exceed the baseline method. More details of the experiment are described below. Table 4 The role of self-attention mechanisms on the recognition performance RawNet w/out SA* is based on RawNet2 with the channel attention layers of the last 3 ResBlocks removed; it achieves 2.44% EER on VoxCeleb-E, 0.07% higher than RawNet-origin-SA*, and 4.69% EER on VoxCeleb-H, 0.15% higher than RawNet-origin-SA*. The training of RawNet w/out SA* follows the same protocol as RawNet-origin-SA*, but its performance is still inferior to RawNet-origin-SA*, which demonstrates that the performance gain does not come from removing redundant channel attention layers in RawNet2, but that adding self-attention layers effectively enhances the representation ability of the model. RawNet-MHSA is a model that replaces the self-attention module in RawNet-SA with a multi-head version, with the number of SA heads set to 4. This means the input feature sequence is split into 4 chunks along the channel dimension, each chunk is processed separately by a self-attention module, and the results are finally concatenated to form the output of the multi-head self-attention module. RawNet-MHSA achieves 2.75% EER on VoxCeleb-E, 0.18% higher than that of RawNet2, and 4.91% EER on VoxCeleb-H, 0.02% higher than that of RawNet2. On CN-Celeb, an EER of 22.16% is obtained, 0.08% lower than that of RawNet-SA.
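The channel-wise split used in RawNet-MHSA can be sketched as follows. This is a toy NumPy illustration in which a simple dot-product attention stands in for the paper's self-attention module; the shapes and names are our assumptions:

```python
import numpy as np

def toy_self_attention(x):
    # x: (channels, frames). A toy single-head dot-product attention where
    # queries, keys, and values are all x itself, for brevity.
    scores = x.T @ x / np.sqrt(x.shape[0])           # (frames, frames)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over frames
    return x @ weights.T

def multi_head_self_attention(x, num_heads=4):
    # Split the feature map into num_heads chunks along the channel
    # dimension, attend to each chunk separately, then concatenate,
    # as in the RawNet-MHSA variant described above.
    chunks = np.split(x, num_heads, axis=0)
    return np.concatenate([toy_self_attention(c) for c in chunks], axis=0)
```

Each head sees only its own channel slice, which keeps the per-head cost low but, as the ablation shows, does not necessarily help accuracy.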
Although RawNet-MHSA performed well on the CN-Celeb dataset, its performance on the other datasets was even worse than the baseline method. RawNet-all-SA is based on RawNet2 with all six FMSs replaced by self-attention modules; it achieves 3.69% EER on VoxCeleb-E, 1.15% higher than that of RawNet-SA, and 6.61% EER on VoxCeleb-H, 2.09% higher than that of RawNet-SA. As mentioned in Section 2.2, the computing cost and parameter size of RawNet-all-SA are much larger than those of RawNet-SA; training RawNet-all-SA takes about twice as long as RawNet-SA, and it does not converge as well as RawNet-SA. RawNet-origin-SA* is a model using the self-attention module described in Fig. 6A instead of the self-attention module in Fig. 6B. Due to the difficulty of training the original self-attention module, the network is initialized with trained RawNet2 network parameters, and the VoxCeleb2 dataset is used to further fine-tune the network. From Table 4, RawNet-origin-SA* achieves 2.37% EER on VoxCeleb-E, 0.20% lower than RawNet2. On VoxCeleb-H, an EER of 4.54% is obtained, 0.35% lower than RawNet2. To ensure the fairness of the comparison, we also trained the original RawNet2 in the same way. The resulting model, RawNet2*, achieves 2.43% EER on VoxCeleb-E, 0.06% higher than RawNet-origin-SA*, and 4.60% EER on VoxCeleb-H, 0.06% higher than RawNet-origin-SA*. This indicates that the improvement of RawNet-origin-SA* is not caused by more training iterations. RawNet-SA improves the structure of the self-attention layers so that the network can converge quickly without using the parameters of a trained RawNet2 for initialization. Finally, RawNet-SA achieved an EER of 2.54% on VoxCeleb-E, 0.03% lower than RawNet2, and 4.52% on VoxCeleb-H, 0.37% lower than RawNet2. RawNet-SA also achieved 22.24% EER on the CN-Celeb dataset, lower even than the networks initialized from trained parameters, such as RawNet2* and RawNet-origin-SA*.
This shows that the improved self-attention layer can further improve the robustness of voiceprint features and make the network suitable for different data distributions. Experiment III: influence of self-attention module on inference speed To show that the inference speed of our proposed network structure is not significantly lower than that of RawNet2, we test the time cost of different network structures, as shown in Fig. 9. Inference speed of different structures Since the specific content of the input data does not affect the inference time of the model, we use randomly generated sequences instead of real-world audio as the network input and set the length of the sequences to 3.69 s to control the input length. In the experiment, we randomly generated 1000 speech samples, grouped them into batches of 100, ran 10 consecutive tests, and finally took the shortest time as the result. Figure 9 shows that RawNet-SA only consumes about 15.60 ms per speech sample, 0.43 ms more than the original RawNet2, and costs 1.02 ms less per speech sample than RawNet-origin-SA*, which indicates that the addition of self-attention layers has little influence on the inference speed of the network, and that the time consumption can be further reduced by the improved self-attention layers. Experiment IV: effect of different channel squeeze ratios on self-attention layer To investigate the effect of different channel squeeze ratios on the self-attention layer, we compare the performance of RawNet-SA under different channel squeeze ratios, as illustrated in Table 5. Let the channel squeeze ratio be r = d ÷ c, where c is the number of input channels of the self-attention layers and d is the number of output channels of FC-KEY, FC-VALUE, and FC-QUERY. The results show that r = 0.25 produces the lowest EER on VoxCeleb-E and VoxCeleb-H. The EER on VoxCeleb-E when r = 0.25 is 2.54%, 0.16% lower than r = 0.75. On VoxCeleb-H, the EER is 4.52%, 0.36% lower than that of r = 0.75.
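The effect of the channel squeeze ratio r = d ÷ c on model size can be illustrated with a small sketch (the channel count c = 128 is an assumption for illustration, not taken from the paper):

```python
def squeezed_projection_params(c, r):
    # Channel squeeze ratio r = d / c: FC-KEY, FC-VALUE, and FC-QUERY each
    # project c input channels down to d = r * c output channels, so the
    # total weight count of the three projections grows linearly with r.
    d = int(r * c)
    return 3 * d * c

# Parameter count of the three projections for several squeeze ratios.
for r in (0.25, 0.5, 0.75, 1.0):
    print(r, squeezed_projection_params(c=128, r=r))
```

This makes explicit why a smaller r yields a lighter layer, which in turn explains the faster convergence of low-ratio models under a fixed iteration budget.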
On the CN-Celeb dataset, r = 0.25 yields 22.24% EER, only 0.29% higher than r = 0.75 and 0.10% higher than r = 0.5. This is because compressing the number of channels appropriately can remove some redundancy from the model, making the features more robust and the network easier to adapt. In general, a higher channel squeeze ratio might be expected to improve the overall effect of the model, but it also increases the number of model parameters. Unfortunately, because we limit the total number of iterations during network training, the performance of RawNet-SA with a high channel squeeze ratio is worse than that with a low channel squeeze ratio due to under-fitting. Figure 10 plots the EER of RawNet-SA with different channel squeeze ratios during network training. RawNet-SA with a lower channel squeeze ratio converges faster and reaches a lower EER. When the channel squeeze ratio is 1, the network is significantly under-fitting. Table 5 The effect of different channel squeeze ratios on recognition performance EER curves during network training Experiment V: influence of different feature aggregation methods To illustrate the influence of different feature aggregation methods, we compared the performance of RawNet-SA with average pooling, max pooling, self-attentive pooling (SAP) [43], attentive statistical pooling (ASP) [44], GRU, and Bi-GRU. Table 6 shows that GRU has the lowest EER on VoxCeleb-E and VoxCeleb-H. In detail, the EER of GRU on VoxCeleb-E is 2.54%, 0.26% lower than that of Bi-GRU. On VoxCeleb-H, the EER is 4.52%, which is 0.52% lower than Bi-GRU, indicating that Bi-GRU does not improve the performance of RawNet-SA and instead makes network convergence more difficult. RawNet-SA with GhostVLAD achieves 21.32% EER on the CN-Celeb dataset, 0.92% lower than GRU. However, on VoxCeleb-E, its EER is 3.01%, 0.47% higher than GRU.
On VoxCeleb-H, 5.01% EER is obtained, 0.49% higher than GRU, which indicates that GhostVLAD cannot adapt the network to different data distributions, although it reaches the lowest EER on the CN-Celeb dataset. In this experiment, the performance of SAP and ASP is even worse than average pooling, which means that SAP and ASP are not suitable for the proposed model. Table 6 The effect of different feature aggregation methods on recognition performance Experiment VI: effect of VAD and speech separation on voiceprint recognition To illustrate the effectiveness of VAD and speech separation, we compared the performance of models on CN-Celeb-T. In this experiment, we regard CN-Celeb-T as a noisy dataset because it inherently contains a lot of noise, such as background music, audience applause, etc., and CN-Celeb-T-VAD is the same dataset processed by VAD and speech separation. Table 7 shows that RawNet-origin-SA* has 16.14% EER on CN-Celeb-T, 0.25% higher than on CN-Celeb-T-VAD, and RawNet-SA has 15.04% EER on CN-Celeb-T, 0.23% higher than on CN-Celeb-T-VAD. These results indicate that the networks generally perform better on CN-Celeb-T-VAD than on CN-Celeb-T, proving that VAD and speech separation are effective. Table 7 The effect of VAD and speech separation We also compared with a speech enhancement + speaker recognition method, VoiceID [45], on the VoxCeleb1 test set (Vox1T-O), as shown in Table 8. In this experiment, like VoiceID, we use the noise and music recordings of MUSAN to generate Vox1T-N and Vox1T-M, where Vox1T-N is mixed with noise and Vox1T-M is mixed with music. We also applied the speech separation method to the Vox1T-M dataset (Vox1T-M-S) to explore the effectiveness of Spleeter. Experimental results show that the EER of RawNet-origin-SA* is 8.35% on Vox1T-N, 1.51% less than VoiceID, and 5.75% on Vox1T-M, 3.38% less than VoiceID. The EER on Vox1T-M-S is 5.52%, which is 0.23% lower than on Vox1T-M.
The EER of RawNet-SA on Vox1T-N is 8.90%, 0.96% less than VoiceID, and its EER on Vox1T-M is 6.15%, 2.98% less than VoiceID. On Vox1T-M-S, an EER of 6.11% is obtained, 0.04% lower than on Vox1T-M. These results prove that RawNet-origin-SA* and RawNet-SA perform better than VoiceID on corrupted datasets and that speech separation is helpful for voiceprint recognition. It can also be seen that, compared with VoiceID, RawNet-SA and RawNet-origin-SA* are more sensitive to noise. This is because VoiceID uses data mixed with noise during training, while we do not use any data augmentation trick. Table 8 Anti-noise test of different models Experiment VII: the influence of different similarity measurement methods on voiceprint recognition To illustrate the influence of different similarity measurement methods, we compared the performance of RawNet2, RawNet-origin-SA*, and RawNet-SA using different similarity measurement methods (cosine similarity, probabilistic linear discriminant analysis (PLDA) [46], and b-vector [47]). The experimental results are shown in Table 9. We use the PLDA Toolkit (see footnote 1), which follows the PLDA steps in [46]. Firstly, following the suggestion of [46], we apply principal component analysis (PCA) to the extracted feature embeddings before PLDA. We use the 128 top principal components of the deep voiceprint features to train the PLDA model. These features are generated from the training set (VoxCeleb2) of the model without normalization or whitening. Then, in the inference stage, the features generated from the test sets (VoxCeleb1 and CN-Celeb) are transformed into a latent space that keeps the same dimensions as the features after PCA. Finally, we calculate the log-likelihood ratio between two features in the latent space as their similarity.
Table 9 The influence of different similarity measurements on the recognition performance The b-vector system regards speaker verification as a binary classification problem and takes combinations of the element-wise addition, subtraction, multiplication, and division of two deep features as the input of a binary classification network. Since more combinations would expand the input size of the classifier in the b-vector system and increase the computation overhead, as described in [47], we only use the concatenation of element-wise addition and multiplication in the b-vector system. The input of our b-vector system, I, is set as follows: $$ \mathbf{I}=\left[\left({\mathbf{w}}_{\mathrm{query}}\oplus {\mathbf{w}}_{\mathrm{target}}\right),\left({\mathbf{w}}_{\mathrm{query}}\otimes {\mathbf{w}}_{\mathrm{target}}\right)\right] $$ where $\mathbf{w}_{\mathrm{query}}$ and $\mathbf{w}_{\mathrm{target}}$ denote the deep voiceprint features from the banned anchors and the current anchor, respectively, and the symbol [·,·] represents the concatenation of the two vectors. The network of the b-vector system is formed by two fully connected layers of sizes [1024, 512] with leaky rectified linear unit (ReLU) activations and dropout of 50%. The similarity of the two voiceprint features is obtained by an output linear layer composed of one neuron. From Table 9, for RawNet2, the cosine similarity on VoxCeleb-E reaches 2.57% EER, 1.21% lower than PLDA and 0.82% lower than b-vector. The cosine similarity of RawNet-origin-SA* on VoxCeleb-H is 4.54% EER, 1.50% lower than PLDA and 1.06% lower than b-vector. As for RawNet-SA, the cosine similarity achieves 22.24% EER on CN-Celeb, 2.43% lower than PLDA and 0.60% lower than b-vector. These results show that the cosine similarity is superior to PLDA and b-vector under all conditions of this experiment.
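The cosine scoring and the b-vector input construction described above can be sketched as follows (our own minimal versions; the b-vector classifier network itself is omitted):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine scoring between two deep voiceprint feature vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bvector_input(w_query, w_target):
    # I = [(w_query + w_target), (w_query * w_target)]: the concatenation
    # of element-wise addition and multiplication that is fed to the
    # b-vector binary classification network.
    w_query = np.asarray(w_query, dtype=float)
    w_target = np.asarray(w_target, dtype=float)
    return np.concatenate([w_query + w_target, w_query * w_target])
```

Note that the b-vector input doubles the feature dimension, which is why adding further element-wise combinations would quickly inflate the classifier's input size.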
This may be because the PLDA and b-vector models are trained on deep voiceprint features extracted from the VoxCeleb2 dataset, and the distribution difference between the training dataset and the test datasets makes the performance of PLDA and b-vector worse than expected. With the rapid development of the online live streaming industry, we urgently need an intelligent method to identify anchors. Considering that voiceprint information is an important cue to the identity of the anchor, we propose an anchor voiceprint recognition method for live video streaming using RawNet-SA and GRU. Firstly, the speech of the anchor is extracted from the live stream using VAD and speech separation. Then, the feature sequence of the anchor voiceprint is generated from the speech waveform with the self-attention network RawNet-SA. Finally, the feature sequence of the anchor voiceprint is aggregated by GRU and transformed into a deep voiceprint feature vector for anchor recognition. EER is used as the evaluation indicator for the effectiveness of anchor voiceprint recognition. We conducted seven experiments on public datasets. Overall, we verified the effectiveness of the self-attention mechanism and GRU, and obtained 22.24% EER on the CN-Celeb dataset. Experimental results show that our method obtains good voiceprint recognition performance without substantially increasing time consumption. In the future, we plan to further optimize our model and loss function to improve the representation ability of the model. In recent years, various cross-domain methods based on generative adversarial networks (GAN) have made great progress. In future work, we will combine GAN to improve the effectiveness of the network on data from unknown distributions and make it easier to apply in practice. To meet real-time recognition requirements, speed improvement will be another important direction of our research.
Finally, to better verify the effect of deep features, we will introduce paraconsistent feature engineering to quantify the representation ability of deep features in future work. The datasets generated and/or analyzed during the current study are available in the VoxCeleb repository (https://www.robots.ox.ac.uk/~vgg/data/voxceleb/), CN-Celeb repository (http://www.openslr.org/82/), and MUSAN repository (http://www.openslr.org/17/). https://github.com/RaviSoji/plda GRU: Gated recurrent unit VAD: Voice activation detection GMM-UBM: Gaussian mixture model and universal background model MFCC: Mel-scale frequency cepstral coefficients DNN: Deep neural network TDNN: Time delay neural network EER: Equal error rate ASV: Automatic speaker verification BLSTM-RNN: Bidirectional long short-term memory recurrent neural network Bi-GRU: Bidirectional gated recurrent unit GRCNN: Gated recurrent convolutional neural network RNN: Recurrent neural network LC-GRNN: Light convolutional gated recurrent neural network SOTA: State-of-the-art LSTM: Long short-term memory FMS: Feature map scaling SA: Self-attention BN: Batch normalization TTA: Test time augmentation FRR: False rejection rate FAR: False acceptance rate TP: True positive TN: True negative FN: False negative GAN: Generative adversarial network D.A. Reynolds, T.F. Quatieri, R.B. Dunn, Speaker verification using adapted Gaussian mixture models. Digital Signal Processing 10, 19 (2000) N. Dehak, P.J. Kenny, R. Dehak, P. Dumouchel, P. Ouellet, Front-end factor analysis for speaker verification. IEEE Trans. Audio Speech Lang. Process. 19, 788 (2011) E. Variani, X. Lei, E. McDermott, I.L. Moreno, J. Gonzalez-Dominguez, Deep neural networks for small footprint text-dependent speaker verification, in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Florence, Italy, 2014), pp. 4052–4056 D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, S.
Khudanpur, X-Vectors: Robust DNN embeddings for speaker recognition, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Calgary, AB, 2018), pp. 5329–5333 M. McLaren, L. Ferrer, D. Castan, and A. Lawson, The speakers in the wild (SITW) speaker recognition database, in (2016), pp. 818–822. Y. Qian, N. Chen, H. Dinkel, Z. Wu, Deep feature engineering for noise robust spoofing detection. IEEE/ACM Trans. Audio Speech Lang. Process. 25, 1942 (2017) G. Lavrentyeva, S. Novoselov, E. Malykh, A. Kozlov, O. Kudashev, and V. Shchemelinin, Audio replay attack detection with deep learning frameworks, in Interspeech 2017 (ISCA, 2017), pp. 82–86. A. Gomez-Alanis, A.M. Peinado, J.A. Gonzalez, A.M. Gomez, A gated recurrent convolutional neural network for robust spoofing detection. IEEE/ACM Trans. Audio Speech Lang. Process. 27, 1985 (2019) A. Gomez-Alanis, A. M. Peinado, J. A. Gonzalez, and A. M. Gomez, A light convolutional GRU-RNN deep feature extractor for ASV spoofing detection, in Interspeech 2019 (ISCA, 2019), pp. 1068–1072. A. Gomez-Alanis, J. A. Gonzalez-Lopez, S. P. Dubagunta, A. M. Peinado, and M. Magimai.-Doss, On joint optimization of automatic speaker verification and anti-spoofing in the embedding space, IEEE Trans. Inform. Forensic Secur. 16, 1579 (2021). A. Gomez-Alanis, J.A. Gonzalez-Lopez, A.M. Peinado, A Kernel density estimation based loss function and its application to ASV-Spoofing Detection. IEEE Access 8, 108530 (2020) A. Nagrani, J. S. Chung, and A. Zisserman, VoxCeleb: a large-scale speaker identification dataset, in Interspeech 2017 (ISCA, 2017), pp. 2616–2620. A. Hajavi and A. Etemad, A deep neural network for short-segment speaker recognition, in Interspeech 2019 (ISCA, 2019), pp. 2878–2882. Y. Jiang, Y. Song, I. McLoughlin, Z. Gao, and L.-R. Dai, An effective deep embedding learning architecture for speaker verification, in Interspeech 2019 (ISCA, 2019), pp. 4040–4044. R.C. 
Guido, Paraconsistent feature engineering [Lecture Notes], IEEE Signal Process. Mag. 36, 154 (2019) J. Jung, H.-S. Heo, J. Kim, H. Shim, and H.-J. Yu, RawNet: Advanced end-to-end deep neural network using raw waveforms for text-independent speaker verification, in Interspeech 2019 (ISCA, 2019), pp. 1268–1272. K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio, Learning phrase representations using RNN encoder-decoder for statistical machine translation, in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (Association for Computational Linguistics, Doha, Qatar, 2014), pp. 1724–1734 J. Jung, S. Kim, H. Shim, J. Kim, and H.-J. Yu, Improved RawNet with feature map scaling for text-independent speaker verification using raw waveforms, in Interspeech 2020 (ISCA, 2020), pp. 1496–1500. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, in Proceedings of the 31st International Conference on Neural Information Processing Systems (Curran Associates Inc., Long Beach, California, USA, 2017), pp. 6000–6010 M. India, P. Safari, and J. Hernando, Self multi-head attention for speaker recognition, in Interspeech 2019 (ISCA, 2019), pp. 4305–4309. P. Safari, M. India, and J. Hernando, Self-attention encoding and pooling for speaker recognition, in Interspeech 2020 (ISCA, 2020), pp. 941–945. S. Hochreiter, J. Schmidhuber, Long short-term memory. Neural Computation 9, 1735 (1997) M. Jabreel, A. Moreno, A deep learning-based approach for multi-label emotion classification in Tweets. Applied Sciences 9, 1123 (2019) K.-H. Woo, T.-Y. Yang, K.-J. Park, C. Lee, Robust voice activity detection algorithm for estimating noise spectrum. Electron. Lett. 36, 180 (2000) R. Chengalvarayan, Robust energy normalization using speech/nonspeech discriminator for German connected digit recognition, in EUROSPEECH. Vol. 99, 61–64 (1999) A. Benyassine, E. 
Shlomot, H.- Su, D. Massaloux, C. Lamblin, and J.- Petit, ITU-T Recommendation G.729 Annex B: a silence compression scheme for use with G.729 optimized for V.70 digital simultaneous voice and data applications, IEEE Communications Magazine 35, 64 (1997). J. Wagner, D. Schiller, A. Seiderer, and E. André, Deep learning in paralinguistic recognition tasks: are hand-crafted features still relevant? in Interspeech 2018 (ISCA, 2018), pp. 147–151. R. Hennequin, A. Khlif, F. Voituret, M. Moussallam, Spleeter: A fast and efficient music source separation tool with pre-trained models. JOSS 5, 2154 (2020) O. Ronneberger, P. Fischer, and T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in medical image computing and computer-assisted intervention—MICCAI 2015, edited by N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi (Springer International Publishing, Cham, 2015), pp. 234–241. A. Défossez, N. Usunier, L. Bottou, and F. Bach, Music source separation in the waveform domain, ArXiv:1911.13254 [Cs, Eess, Stat] (2019). M. Ravanelli and Y. Bengio, Interpretable convolutional filters with SincNet, ArXiv:1811.09725 [Cs, Eess] (2019). X. Wang, R. Girshick, A. Gupta, and K. He, Non-local neural networks, in 2018 IEEE/CVF Conference on computer vision and pattern recognition (2018), pp. 7794–7803. R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, J. Sivic, NetVLAD: CNN architecture for weakly supervised place recognition. IEEE Trans. Pattern Anal. Mach. Intell. 40, 1437 (2018) Y. Zhong, R. Arandjelović, A. Zisserman, in computer vision—ACCV 2018, ed. by C. V. Jawahar, H. Li, G. Mori, K. Schindler. GhostVLAD for set-based face recognition (Springer International Publishing, Cham, 2019), pp. 35–50 J. S. Chung, A. Nagrani, and A. Zisserman, VoxCeleb2: deep speaker recognition, in Interspeech 2018 (ISCA, 2018), pp. 1086–1090. Y. Fan, J. W. Kang, L. T. Li, K. C. Li, H. L. Chen, S. T. Cheng, P. Y. Zhang, Z. Y. Zhou, Y. Q. Cai, and D. 
Wang, CN-Celeb: A challenging chinese speaker recognition dataset, in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Barcelona, Spain, 2020), pp. 7604–7608. D. Snyder, G. Chen, and D. Povey, MUSAN: a music, speech, and noise corpus, ArXiv:1510.08484 [Cs] (2015). W. Xie, A. Nagrani, J. S. Chung, and A. Zisserman, Utterance-level aggregation for speaker recognition in the wild, in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Brighton, United Kingdom, 2019), pp. 5791–5795. A. Nagrani, J.S. Chung, W. Xie, A. Zisserman, Voxceleb: large-scale speaker verification in the wild. Computer Speech & Language 60, 101027 (2020) N. R. Koluguri, J. Li, V. Lavrukhin, and B. Ginsburg, SpeakerNet: 1D depth-wise separable convolutional network for text-independent speaker recognition and verification, ArXiv:2010.12653 [Eess] (2020). J. Deng, J. Guo, J. Yang, N. Xue, I. Cotsia, and S. P. Zafeiriou, ArcFace: additive angular margin loss for deep face recognition, IEEE Trans. Pattern Anal. Mach. Intell. 1 (2021). M. India, P. Safari, and J. Hernando, Double multi-head attention for speaker verification, in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Toronto, ON, Canada, 2021), pp. 6144–6148. K. Okabe, T. Koshinaka, and K. Shinoda, Attentive statistics pooling for deep speaker embedding, in Interspeech 2018 (ISCA, 2018), pp. 2252–2256. S. Min Kye, Y. Kwon, J. Son Chung, Cross attentive pooling for speaker verification, in 2021 IEEE Spoken Language Technology Workshop (SLT) (IEEE, Shenzhen, China, 2021), pp. 294–300 S. Shon, H. Tang, and J. Glass, VoiceID Loss: Speech enhancement for speaker verification, in Interspeech 2019 (ISCA, 2019), pp. 2888–2892. S. Ioffe, in Computer Vision – ECCV 2006, ed. by A. Leonardis, H. Bischof, A. Pinz. 
Probabilistic linear discriminant analysis (Heidelberg, Springer, Berlin, 2006), pp. 531–542 H.-S. Lee, Y. Tso, Y.-F. Chang, H.-M. Wang, S.-K. Jeng, Speaker verification using kernel-based binary classifiers with binary operation derived features, in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Florence, Italy, 2014), pp. 1660–1664 The authors thank the associate editor and the anonymous reviewers for their constructive comments and useful suggestions. This research was supported by the Beijing Municipal Education Commission Cooperation Beijing Natural Science Foundation (No. KZ 201910005007), National Natural Science Foundation of China (No. 61971016). Faculty of Information Technology, Beijing University of Technology, Beijing, China Jiacheng Yao, Jing Zhang, Jiafeng Li & Li Zhuo Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing, China Jiacheng Yao Jiafeng Li Li Zhuo All the authors made significant contributions to the work. JY, JZ, JL, and LZ proposed the conception of this work and devised the algorithm. JY wrote the draft of this paper and did experiments. JZ checked experiments as well as revised this paper. LZ, JL, and JZ provide instrumentation and computing resources for this study. All authors read and approved the final manuscript. Correspondence to Jing Zhang. Yao, J., Zhang, J., Li, J. et al. Anchor voiceprint recognition in live streaming via RawNet-SA and gated recurrent unit. J AUDIO SPEECH MUSIC PROC. 2021, 45 (2021). https://doi.org/10.1186/s13636-021-00234-3 Voiceprint recognition RawNet-SA
Assessment of genetic variation for pathogen-specific mastitis resistance in Valle del Belice dairy sheep Marco Tolone1, Cristian Larrondo2, José M. Yáñez2, Scott Newman3, Maria Teresa Sardina1 & Baldassare Portolano1 Mastitis resistance is a complex and multifactorial trait, and its expression depends on both genetic and environmental factors, including infection pressure. The objective of this research was to determine the genetic basis of mastitis resistance to specific pathogens using a repeatability threshold probit animal model. The most prevalent isolated pathogens were coagulase-negative staphylococci (CNS), accounting for 39 % of records, with 77 % of the animals infected at least once over the whole study period. There was significant genetic variation only for Streptococci (STR). In addition, there was a positive genetic correlation between STR and all pathogens together (ALL) (0.36 ± 0.22), and between CNS and ALL (0.92 ± 0.04). The results of our study support the presence of significant genetic variation for mastitis caused by Streptococci and suggest the importance of discriminating between different pathogens causing mastitis, since they most likely influence different genetic traits. Low heritabilities for pathogen-specific mastitis resistance should be considered when including bacteriological status as a measure of mastitis presence to implement breeding strategies for improving udder health in dairy ewes. Mastitis is one of the most common diseases affecting dairy sheep. Mastitis leads to major economic losses, mainly due to discarded milk, reduced milk production and quality, alteration of cheese-making properties, early culling, and increased health care costs [1–8]. The alterations or reductions of the dry matter of milk and its composition have a substantial effect on the economic and industrial value of the milk, considering that almost all of it is processed into fermented products and cheeses [4, 9, 10].
Mastitis resistance is a complex and multifactorial trait, and its expression depends on both genetic and environmental factors, including infection pressure. In the broadest sense, resistance could be defined as the ability to avoid any infection and/or to recover quickly from an infection [11, 12]. It involves different factors, such as preventing entry of the pathogen into the mammary gland, inducing an immune response capable of limiting pathogen development in the udder, and recovering from the infection, as well as controlling the pathogenic effects of the infection, such as tissue damage [13]. Over 100 different micro-organisms can cause mastitis, in particular coliform bacteria, staphylococci and streptococci [14]. In dairy sheep, the most important agents involved in clinical mastitis are bacterial infections, and the most frequently isolated pathogens are coagulase-negative staphylococci (CNS), which are present on and around the udder skin [9] and differ in pathogenicity, causing clinical and subclinical mastitis [15–18]. The bacterial pathogens responsible for infection of the mammary gland may be grouped into two main categories: major and minor pathogens. Major pathogen infection generally results in clinical illness or strong inflammatory responses and reduced milk yields, whereas minor pathogen infection is usually subclinical [19]. Selection for genetic resistance to mastitis can be done directly or indirectly. Direct selection corresponds to the diagnosis of the infection: the actual trait [i.e., bacteriological examination of milk and/or observation of clinical cases of mastitis] is measured on the animal or its relatives. Indirect selection corresponds to a prediction of the bacteriological status of the udder based on traits related to the infection [e.g., inflammatory parameters]: an indicator trait for mastitis is measured on the animal itself or its relatives [20].
Simple and indirect methods based on the evaluation of the degree of inflammation or of internal mammary lesions have been widely applied [21]. Their accuracy was established using bacteriological analysis as a reference method [17]. Among the indirect methods, the one most frequently used to detect mastitis is milk somatic cell count (SCC). SCC is considered a good measure for indirect selection for mastitis resistance in cattle, especially when a direct measure of clinical mastitis incidence is not available [18, 22]. In cattle, values of SCC between 250 and 300 × 10³ cells/mL are recommended as satisfactory discrimination thresholds to distinguish between healthy and infected udders. In sheep there is no widely accepted threshold [15, 23], but some studies have suggested a critical limit of 500 × 10³ cells/mL [24]. There are few studies concerning genetic variation of mastitis in sheep according to bacteriological status [22, 25]. A genetic selection approach could be one of the strategies for controlling mastitis and has been shown to be a valid option, together with management, to prevent mastitis cases [1, 26]. Studies have reported genetic variation for resistance to mastitis in Valle del Belice dairy sheep [22, 24]. These authors defined mastitis as a binary trait distinguishing between ewes with at least one case of mastitis (1) and ewes without (0) in a defined period of lactation, which was analyzed using a linear model approach. This definition excluded alternative definitions, for example multiple cases of mastitis within lactation, and ignored the etiology of intra-mammary infections. The purpose of this study was to determine the genetic basis of pathogen-specific resistance to mastitis in Valle del Belice dairy sheep using a repeatability threshold model. Data were collected between 2006 and 2011 in five Valle del Belice flocks, with a total of 2350 ewes and 5856 animals in the pedigree.
Observations for this study included 1795 primiparous, 1285 secondiparous and 2225 multiparous dairy ewes. All ewes were milked twice daily (morning and evening), and records for milk yield (MY), bacteriological status (infected or not infected), and SCC were collected at approximately monthly intervals, following an A4 recording scheme (monthly records with two daily milkings) as defined by the International Committee for Animal Recording [27]. Milk samples were collected during routine milking, thus avoiding any harmful procedure for the animals. Consent for sample collection was obtained from the animals' owners. Moreover, sample collection, animal management and care were in agreement with Directive 2010/63/EU. The observed bacteriological colonies were identified as: Escherichia coli (ESCCL), Staphylococcus aureus (STHAU), Streptococcus dysgalactiae (STPDG), Streptococcus uberis (STPUB), Streptococcus agalactiae (STPAG), Bacillus spp. (BACIL), Corynebacterium spp. (CORLT), Pasteurella spp. (PASCL), Pseudomonas spp. (PSELT), coagulase-negative staphylococci (CNS) and Streptococcus spp. (STR). Ewes were considered infected if at least one record with a positive bacteriological test was obtained during the lactation period, and healthy if no bacteriological test gave a positive result. Ewes were measured more than once during the same lactation; thus, records are repeated both across and within lactations. Table 1 shows the average number of records per ewe within lactation. Moreover, ewes were considered infected only if more than five colony forming units (CFU) per 10 μl of milk of one species of bacteria were isolated. The response variable used in the model corresponds to the binary disease status, coded as 0 or 1 to represent uninfected or infected individuals, respectively.
Table 1 Mean, standard deviation (SD), minimum (Min) and maximum (Max) number of records per ewe within lactation Trait definition and statistical model Phenotypic observations of infection status were defined as a repeated binary trait. The binary trait distinguished between sheep with infected udder status (1) and uninfected udder status (0) within each lactation for each particular pathogen described above. SCC was also recorded and normalized through a logarithmic transformation into somatic cell score (SCS) according to the formula of Ali and Shook [28]: $$ \mathrm{SCS} = \log_2\left(\mathrm{SCC}/100{,}000\right) + 3 $$ The binary trait was analyzed using the following repeatability threshold probit animal model: $$ \Pr\left(Y_{ijklmn}\right) = \Phi\left(\mu + OP_i + MY_j + FYS_l + PE_m + A_n\right) $$ where $Y_{ijklmn}$ is the observation for the specific pathogen causing mastitis (CNS, STR, ESCCL, STHAU, STPDG, STPUB, STPAG, BACIL, CORLT, PASCL, PSELT and ALL); $\Phi$ is the standard normal cumulative distribution function; $\mu$ is the fixed effect of the overall mean; $OP_i$ is the order of parity fitted as a fixed effect (with 5 classes); $MY_j$ is the milk yield fitted as a covariate; $FYS_l$ is the flock-year-season random effect (51 classes); $PE_m$ is the random permanent environmental effect of individual $m$ across lactations (2350 levels with records); and $A_n$ is the random animal effect (5856 levels in the pedigree). The implicit residual variance on the underlying scale is 1 for the probit model (standard normal).
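The two formulas above can be sketched numerically. The following Python snippet is a minimal illustration using only the standard library; the effect values passed to the probit link are hypothetical and do not come from the paper:

```python
import math
from statistics import NormalDist

def somatic_cell_score(scc_cells_per_ml):
    """Ali-Shook transformation: SCS = log2(SCC / 100,000) + 3."""
    return math.log2(scc_cells_per_ml / 100_000) + 3

# SCC of 100,000 cells/mL maps to the baseline score of 3;
# each doubling of SCC adds one SCS unit.
print(somatic_cell_score(100_000))  # 3.0
print(somatic_cell_score(800_000))  # 6.0

def infection_probability(mu, op, my, fys, pe, a):
    """Probit repeatability model on the underlying scale:
    Pr(Y = 1) = Phi(mu + OP + MY + FYS + PE + A),
    with the residual variance fixed at 1 (standard normal)."""
    return NormalDist().cdf(mu + op + my + fys + pe + a)

# Hypothetical effect values, purely for illustration:
print(infection_probability(-0.5, 0.1, 0.05, 0.2, 0.1, 0.05))
```

Note how the probit link maps the sum of effects on the unobserved liability scale into a probability between 0 and 1; the fixed residual variance of 1 is what makes the scale identifiable.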
Parameters of the univariate threshold models were estimated using ASReml version 3.0 [29]. Heritabilities Heritabilities for resistance to the different pathogens were calculated as: $$ h^2 = \frac{\sigma_a^2}{\sigma_a^2 + \sigma_{FYS}^2 + \sigma_{PE}^2 + \sigma_e^2} $$ where $\sigma_a^2$ is the animal additive genetic variance, $\sigma_{FYS}^2$ is the variance associated with flock-year-season, $\sigma_{PE}^2$ is the variance due to permanent environment and $\sigma_e^2$ is the residual variance. For all traits, the animal effect was assumed $\sim N(0, \mathbf{A}\sigma_a^2)$, where $\mathbf{A}$ is the additive genetic relationship matrix among all animals included in the pedigree (5856). Similarly, the FYS and PE effects were assumed $\sim N(0, \mathbf{I}\sigma_{FYS}^2)$ and $\sim N(0, \mathbf{I}\sigma_{PE}^2)$, where $\mathbf{I}$ is an identity matrix of order equal to the number of FYS and PE classes (51 and 2350, respectively). Arithmetic mean and standard deviation of MY, SCC and SCS for infection status (infected and uninfected) of udders are shown in Table 2. Daily average MY values were 1275 ± 544 and 1338 ± 558 g for infected and uninfected udders, respectively. Mean SCC was 2908 ± 4926 and 1155 ± 3083 (× 10³ cells/mL) for infected and uninfected udders, respectively, whereas mean SCS was 6.21 ± 2.45 and 4.55 ± 2.13. Overall mean values across both infected and uninfected udders were 1314 ± 553 g, 1815 ± 3972 (× 10³ cells/mL), and 5.18 ± 2.3 for MY, SCC and SCS, respectively. Table 2 Arithmetic mean and SD for MY, SCC, and SCS of infected and uninfected udders There was a low prevalence of isolation of CORLT, ESCCL, PASCL, PSELT, STPDG, STPUB, STPAG, and BACIL, both in all the observations and according to the infection status of animals. Models did not converge for these pathogens, most likely due to the low incidence (zero inflation). Absolute (AF) and relative (RF) frequency distributions according to infection status (infected or uninfected) of observations within the whole records data set are shown in Table 3.
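As a numerical check, the heritability formula can be applied to the variance components quoted from Table 5 later in the text (additive, flock-year-season and permanent-environment variances), with the residual variance fixed at 1 on the underlying probit scale. A minimal Python sketch:

```python
def heritability(sigma2_a, sigma2_fys, sigma2_pe, sigma2_e=1.0):
    """h^2 = sigma_a^2 / (sigma_a^2 + sigma_FYS^2 + sigma_PE^2 + sigma_e^2)."""
    return sigma2_a / (sigma2_a + sigma2_fys + sigma2_pe + sigma2_e)

# Variance components as reported in the text for Table 5:
# (additive, flock-year-season, permanent environment)
components = {
    "CNS": (0.03, 0.18, 0.39),
    "STR": (0.15, 0.09, 0.39),
    "ALL": (0.04, 0.79, 0.40),
}
for trait, (a, fys, pe) in components.items():
    print(trait, round(heritability(a, fys, pe), 2))
# Reproduces the reported estimates: CNS 0.02, STR 0.09, ALL 0.02.
```

This also shows why the reported phenotypic variances (1.60, 1.62, 2.23) are, up to rounding, simply the sums of the four components, the residual being fixed at 1 by the probit parameterization.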
The most prevalent isolated pathogen group was CNS, with 7951 (39 %) observations, followed by STHAU (940; 5 %) and STR (541; 3 %). Considering all pathogens (ALL), 8019 (38 %) milk samples were infected. Table 3 Absolute (AF) and relative (RF) frequency distribution according to udder status of observations (n = 20,519) Table 4 shows absolute (AF) and relative (RF) frequency distributions according to udder status (infected or uninfected) of animals per pathogen. Similarly, when only animals were analyzed, the most prevalent isolated pathogens affecting ewes were CNS (1811; 77 %), followed by STHAU (513; 21.83 %) and STR (217; 9.23 %). Table 4 Absolute (AF) and relative (RF) frequency distribution according to udder status of animals (n = 2350) per pathogen Table 5 shows the components of variance due to the flock-year-season effect, permanent environment effect and additive genetic effect, the phenotypic variance, and heritabilities for resistance to CNS, STR and ALL. Due to the low frequency of isolation of ESCCL, STHAU, STPDG, STPUB, STPAG, BACIL, CORLT, PASCL, and PSELT in the total records and in the isolation of pathogens per animal (Tables 3 and 4), convergence problems were found when analyzing resistance to these bacteria. In contrast, variance components could be estimated for CNS, STR, and ALL pathogens (Table 5). Variances due to the FYS random effect were 0.18, 0.09 and 0.79, whereas those due to the permanent environment effect were 0.39, 0.39, and 0.40 for CNS, STR and ALL, respectively. Variances of additive and phenotypic effects were 0.03, 0.15, 0.04; and 1.60, 1.62, 2.23, for CNS, STR and ALL, respectively. The heritability obtained for STR (0.09) was significantly different from zero, whereas those for CNS (0.02) and ALL (0.02) were not.
Table 5 Estimates of components of variance and their standard errors for infectious status Table 6 shows phenotypic and genetic correlations for resistance to mastitis caused by STR, CNS and ALL, estimated using a multivariate repeatability linear model. All of the estimated phenotypic and genetic correlations were significantly different from zero. The phenotypic correlation between ALL and STR was low; however, the genetic correlation between these traits was moderately high, indicating a direct relationship between these traits in genetic terms. On the other hand, both phenotypic and genetic correlations between ALL and CNS were high, indicating a strong positive relationship, both phenotypic and genetic, between these traits. These results suggest that resistance to CNS is similar to resistance when it is measured as ALL. In addition, the phenotypic correlation between STR and CNS was negative and the genetic correlation between these traits was low, indicating that selection for improved STR resistance will not have an impact on CNS resistance. Table 6 Phenotypic (above diagonal) and genetic (below diagonal) correlations and standard errors for resistance to mastitis The overall infection prevalence, considering the frequency of infection across all record samples, was 37.7 %, close to the value of 42 % reported in a previous investigation in the same breed [25], and higher than the values of 26.2 and 24.6 % reported by Pengov [15] and Gonzalo et al. [16], respectively. However, in the study of Pengov [15] only one milk sample of udder halves (n = 496 samples, 251 ewes) was considered, whereas the study of Gonzalo et al. [16] was based only on subclinical mastitis prevalence. When the prevalence was assessed on all the animals it increased to 74 %, higher than any prevalence reported in previous studies.
The high SCC reported in our study are probably a consequence of inadequate preventive management, a lack of strict hygiene conditions and extensive management practices, generating a high number of subclinical mastitis cases due to environmental pathogens. Moreover, our results suggest that ewes have higher SCC than cows, and it is therefore necessary to establish an acceptable threshold in dairy sheep considering the difference in SCC between breeds and other factors [15, 18, 22]. Leitner et al. [9] suggested categories for the classification of SCC in sheep and goats related to milk quality and infection status. These researchers suggested that infection of 25, 50 and 75 % of the udders in a given herd was associated with 4.1 to 12.2 % of milk loss in sheep and 0.8 to 2.3 % in goats [9]. Mavrogenis et al. [30] suggested that an increase of 0.5 × 10⁶ cells/mL in SCC above the mean resulted in a reduction of mean individual daily milk production of 18 g. In the present study, there was a 68 g difference in mean MY between infected and non-infected ewes. For SCC, the mean value for infected animals was approximately 3-fold higher than for uninfected animals, similar to values reported in a previous study [24]. However, mean SCC for healthy animals differed among studies. Mean SCC for uninfected animals differed from the value of 89 × 10³ cells/mL reported by Pengov [15] and 311 × 10³ cells/mL reported by Leitner et al. [4], and was similar to the value of 1490 × 10³ cells/mL reported by Kern et al. [31]. These studies focused on Domestic Highland, East Friesian, and Awassi breeds, including their crosses, and the Assaf breed, respectively. Moreover, considering the whole data set, mean SCC for infected animals was similar to values reported in the literature [4, 24]. Mean SCS for uninfected animals and mean SCS of the whole data set were similar to the values reported in the Valle del Belice [24, 25] and Churra dairy sheep breeds [32]. For mean SCS of infected animals, Riggio et al.
[24] reported a value of 6.42 and Leitner et al. [33] a value of 6.32 in Israeli-Assaf and Awassi sheep, similar to the value of 6.21 obtained in our research. Another study reported lower values of mean SCS for infected animals in different breeds [32]. Our study confirms that CNS is the most prevalent etiological group of bacteria in infected dairy ewes. The frequency of isolation of CNS on records (39 %) was lower than in other studies, which range from 60 to 90 % [3, 24, 32, 34–36]. Moreover, a high percentage (77 %) of animals were found infected at least once in the period of study, showing the importance of this group of bacteria in this population. Most cases of CNS infection produce subclinical mastitis, although subclinical intramammary infections by CNS have been described as the main single factor affecting udder health and profitability in small ruminants [9]. Besides, due to the high prevalence of CNS during the ewes' lactations, subclinical cases could persist, significantly increase SCC and consequently cause clinical mastitis. This is a possible explanation of the observed differences in SCC between infected and non-infected animals and of the frequency of animals infected by CNS. Moreover, considering the opportunistic nature of CNS [17], with adequate hygiene practices, a correct milking routine and periodic revision of milking equipment, intramammary infections by CNS could be reduced. In this investigation, CNS were classified as minor pathogens. Researchers have reported that some species of CNS can cause high SCC, similar to those of major pathogens [15, 32, 37], and even clinical mastitis [32, 38]. Ariznabarreta et al. [32] described that Staphylococcus caprae and S. simulans were associated with high log SCC, 6.43 and 6.35, respectively, in contrast with other CNS species such as S. chromogenes, S. hominis, S. capitis, S. haemolyticus and S. epidermidis, ranging from 5.93 to 6.09.
Therefore, there was variation in the inflammatory response, as measured through SCC in milk, according to the CNS species involved and their pathogenicity, with SCC on average even ten times higher than in dairy cattle [15]. STHAU was the second most frequently isolated bacterium in our study (5 %), followed by STR (3 %). This order differs from that of other authors, who reported that CNS is the most prevalent group of bacteria followed by STR [15, 31, 35, 38]. For STHAU, 22 % (513) of ewes were infected at least once in the period of study. These findings differ from those reported by Riggio et al. [24], with values of 10.47 % of milk samples, and are similar to other studies ranging from 2 to 5.5 % [15, 32]. Infection due to STHAU is related to subclinical to acute clinical mastitis (gangrenous mastitis), with different clinical symptoms according to the virulence of the strains, and in severe cases leads to culling of the affected sheep [2, 17]. The high percentage of animals affected by STHAU in the period of study could be related to clinical mastitis cases and culling of ewes in this population. This is in agreement with Mavrogenis et al. [30], who identified STHAU as the most prevalent bacterium in clinical mastitis cases. In sheep, a heritability estimate of 0.09 for infection status assessed by bacteriological analyses was reported by Riggio et al. [24] and Tolone et al. [25] in the Valle del Belice breed using a threshold animal model assuming a probit link function. Gonzalo et al. [39] estimated genetic parameters of SCC in Churra sheep considering the type of mammary pathogen using a multitrait repeatability animal model. They reported that the effect related to the type of pathogen accounted for 32.5 % of the total variance in SCC, a value similar to that obtained for the residual effect (34.9 %), indicating a high relative importance of the type of pathogen in the decomposition of the variance for SCC. In addition, Holmberg et al.
[40] reported genetic variances for different pathogens in dairy cattle ranging from 0.024 to 0.188, similar to the values of the present research (0.03 to 0.15). These results show the importance of differentiating between the types of mammary bacteria assessed by bacteriological analyses in genetic mastitis studies. Variances due to permanent environment and FYS effects were high and were important factors in explaining the phenotypic variance of resistance against CNS, STR and ALL. Possible explanations of these results for the CNS group of pathogens are their nature and their high frequency of isolation in this sheep breed. The CNS group of bacteria is related to inadequate management and hygiene practices, which could differ among flocks and throughout the year. Therefore, due to the opportunistic nature of CNS, poor flock management and inadequate milking hygiene could increase the probability of occurrence of mastitis, and flocks may act as reservoirs of some CNS species. Taking into account the predominant sheep husbandry system in Sicily, based on grazing with animals kept outdoors, reductions in pasture quantity and quality through the year, as in summer (the peak of lambing in Valle del Belice sheep), could be a stress factor increasing susceptibility to ALL infection. High temperatures in summer are associated with heat stress [8], and, as occurs in dairy cattle, heat stress is recognized as a factor which increases susceptibility to mastitis. Gonzalo et al. [41] reported that month-within-flock and flock effects accounted for 44.1 % of the variance in bulk tank bacteria count, whereas Portolano et al. [8] reported that the flock-year of lambing effect explained 27 % of the variance of the time interval between lambing and the first record with mastitis. Heritabilities for pathogen-specific mastitis were in agreement with the results of De Haas [20] in dairy cattle, ranging from 0.02 to 0.10.
However, that study only included heritabilities of pathogens involved in clinical mastitis cases, estimated through threshold and linear models. For the genetic correlations, the one estimated between CNS and ALL (0.92) was positive and very high, suggesting that both are essentially the same trait. This could be explained by the high frequency of isolation of CNS in the records (77 %). Thus, a high percentage of the ALL group is explained by CNS pathogens. Furthermore, because phenotypic variation for CNS and ALL is determined primarily by an environmental component, both traits (CNS and ALL) could be controlled more effectively by applying correct management measures instead of selective breeding in this population. In the Valle del Belice breed, where current selection is mainly practiced on a "within farm" approach and based on the ewes' own performance, it is unlikely that selection for mastitis resistance would be successful, independent of the use of infection status or SCS. The results of our study support the presence of significant genetic variation for resistance to one specific pathogen causing mastitis (i.e. Streptococci). The high genetic correlation between ALL and CNS indicates that both are almost the same trait. The opportunistic nature of CNS and the high environmental influence on CNS resistance suggest that improvement of flock management and adequate milking hygiene could significantly reduce the incidence of mastitis caused by this pathogen in Valle del Belice dairy sheep.
A, animal; AF, absolute frequency; ALL, all pathogens together; BACIL, Bacillus spp.; CFU, colony forming units; CNS, coagulase-negative staphylococci; CORLT, Corynebacterium spp.; ESCCL, Escherichia coli; FYS, flock-year-season; MY, milk yield; OP, order of parity; PASCL, Pasteurella spp.; PE, permanent environmental; PSELT, Pseudomonas spp.; RF, relative frequency; SCC, somatic cell count; SCS, somatic cell score; STHAU, Staphylococcus aureus; STPAG, Streptococcus agalactiae; STPDG, Streptococcus dysgalactiae; STPUB, Streptococcus uberis; STR, Streptococci (Streptococcus spp.) Barillet F, Rupp R, Mignon-Grasteau S, Astruc JM, Jacquin M. Genetic analysis of mastitis resistance and somatic cell score in French Lacaune dairy sheep. Genet Sel Evol. 2001;33:397–415. Leitner G, Chaffer M, Zamir S, Mor T, Glickman A, Winkler M, Weisblit M, Saran A. Udder disease etiology, milk somatic cell counts and NAGase activity in Israeli Assaf sheep throughout lactation. Small Rumin Res. 2001;39:107–12. Bergonier D, Cremoux R, Rupp R, Lagriffoul G, Berthelot X. Mastitis of dairy small ruminants. Vet Res. 2003;34:689–716. Leitner G, Chaffer M, Shamay A, Shapiro F, Merin U, Ezra E, Saran A, Silanikove N. Changes in milk composition as affected by subclinical mastitis in sheep. J Dairy Sci. 2004;87:46–52. Carlén E, Schneider M, Strandberg E. Survival analysis for genetic evaluation of mastitis in dairy cattle: a simulation study. J Dairy Sci. 2005;88:797–803. Barillet F. Genetic improvement for dairy production in sheep and goats. Small Rumin Res. 2007;70:60–75. Legarra A, Ramon M, Ugarte E, Perez-Guzman MD, Arranz J. Economic weights of somatic cell score in dairy sheep. Animal. 2007;1:205–12. Portolano B, Finocchiaro R, Van Kaam J, Riggio V, Maizon DO. Time-to-event analysis of mastitis at first-lactation in Valle del Belice ewes. Livest Sci. 2007;110:273–9. Leitner G, Silanikove N, Merin U.
Estimate of milk and curd yield loss of sheep and goats with intramammary infection and its relation to somatic cell count. Small Rumin Res. 2008;74:221–5. Riggio V, Maizon D, Portolano B, Bovenhuis H, van Arendonk J. Effect of somatic cell count level on functional longevity in Valle del Belice dairy sheep assessed using survival analysis. J Dairy Sci. 2009;92:6160–6. Rupp R, Boichard D. Genetics of resistance to mastitis in dairy cattle. Vet Res. 2003;34:671–88. Rupp R, Foucras G. Genetics of mastitis in dairy ruminants. In: Breeding for disease resistance in farm animals. 2010. p. 183–212. Rupp R, Bergonier D, Dion S, Hygonenq MC, Aurel MR, Robert-Granie C, Foucras G. Response to somatic cell count-based selection for mastitis resistance in a divergent selection experiment in sheep. J Dairy Sci. 2009;92:1203–19. Smith K, Hogan J. The world of mastitis. In: 2nd International Symposium on Mastitis and Milk Quality. Vancouver, Canada; 2001. Pengov A. The role of coagulase-negative Staphylococcus spp. and associated somatic cell counts in the ovine mammary gland. J Dairy Sci. 2001;84:572–4. Gonzalo C, Ariznabarreta A, Carriedo J, San Primitivo F. Mammary pathogens and their relationship to somatic cell count and milk yield losses in dairy ewes. J Dairy Sci. 2002;85:1460–7. Contreras A, Sierra D, Sánchez A, Corrales JC, Marco JC, Paape MJ, Gonzalo C. Mastitis in small ruminants. Small Rumin Res. 2007;68:145–53. Riggio V, Portolano B. Genetic selection for reduced somatic cell counts in sheep milk: a review. Small Rumin Res. 2015;126(1):33–42. White LJ, Schukken YH, Lam TJG, Medley GF, Chappell MJ. A multispecies model for the transmission and control of mastitis in dairy cows. Epidemiol Infect. 2001;127:567–76. De Haas Y. Somatic cell count patterns: improvement of udder health by genetics and management. PhD Thesis. Wageningen, NL: Wageningen University; 2003. De la Cruz M, Serrano E, Montoro V, Marco J, Romeo M, Baselga R, Albizu I, Amorena B.
Etiology and prevalence of subclinical mastitis in the Manchega sheep at mid-late lactation. Small Rumin Res. 1994;14:2175–180. Tolone M, Riggio V, Portolano B. Estimation of genetic and phenotypic parameters for bacteriological status of the udder, somatic cell score, and milk yield in dairy sheep using a threshold animal model. Livest Sci. 2013;151:134–9. Riggio V, Pesce LL, Morreale S, Portolano B. Receiver-operating characteristic curves for somatic cell scores and California mastitis test in Valle del Belice dairy sheep. Vet J. 2013;196:528–32. Maurer J, Schaeren W. Udder health and somatic cell counts in ewes. Agrarforschung. 2007;14:162–7. Riggio V, Portolano B, Bovenhuis H, Bishop S. Genetic parameters for somatic cell score according to udder infection status in Valle del Belice dairy sheep and impact of imperfect diagnosis of infection. Genet Sel Evol. 2010;42:30. Heringstad B, Klemetsdal G, Steine T. Selection responses for clinical mastitis and protein yield in two Norwegian dairy cattle selection experiments. J Dairy Sci. 2003;86:2990–9. ICAR (International Committee for Animal Recording). International agreement of recording practices. Available online: http://www.icar.org/index.php/publications-technicalmaterials/recording-guidelines/; 2014. Gilmour AR, Gogel BJ, Cullis BR, Thompson R. ASReml User Guide Release 3.0. Hemel Hempstead, UK: VSN International Ltd; 2009. www.vsni.co.uk. Ali A, Shook G. An optimum transformation for somatic cell count in milk. J Dairy Sci. 1980;63:487–90. Mavrogenis A, Koumas A, Kakoyiannis C, Taliotis C. Use of somatic cell counts for the detection of subclinical mastitis in sheep. Small Rumin Res. 1995;17:79–84. Kern G, Traulsen I, Kemper N, Krieter J. Analysis of somatic cell counts and risk factors associated with occurrence of bacteria in ewes of different primary purposes. Livest Sci. 2013;157:597–604. Ariznabarreta A, Gonzalo C, San Primitivo F.
Microbiological quality and somatic cell count of ewe milk with special reference to staphylococci. J Dairy Sci. 2002;85:1370–5. Leitner G, Chaffer M, Caraso Y, Ezra E, Kababea D, Winkler M, Glickman A, Saran A. Udder infection and milk somatic cell count, NAGase activity and milk composition–fat, protein and lactose–in Israeli-Assaf and Awassi sheep. Small Rumin Res. 2003;49:157–64. Contreras A, Corrales JC, Sierra D, Marco JC. Prevalence and aetiology of non-clinical intramammary infection in Murciano-Granadina goats. Small Rumin Res. 1995;17:71–8. González-Rodríguez MC, Gonzalo C, San Primitivo F, Cármenes P. Relationship between somatic cell count and intramammary infection of the half udder in dairy ewes. J Dairy Sci. 1995;78:2753–9. Las Heras A, Domínguez L, Fernández-Garayzábal J. Prevalence and aetiology of subclinical mastitis in dairy ewes of the Madrid region. Small Rumin Res. 1999;32:21–9. Marco JC. Mastitis en la oveja Latxa: epidemiología, diagnóstico y control. PhD Thesis. España: Universidad de Zaragoza; 1994. Fthenakis GC. Prevalence and aetiology of subclinical mastitis in ewes of Southern Greece. Small Rumin Res. 1994;13:293. Gonzalo C, Ariznabarreta A, Othmane M, Carriedo J, De La Fuente L, San Primitivo F. Genetic parameters of somatic cell count in dairy sheep considering the type of mammary pathogen effect. J Anim Breed Genet. 2003;120:282–7. Holmberg M, Fikse WF, Andersson-Eklund L, Artursson K, Lunde A. Genetic analyses of pathogen-specific mastitis. J Anim Breed Genet. 2012;129:129–37. Gonzalo C, Carriedo J, Beneitez E, Juarez M, De La Fuente L, San Primitivo F. Short communication: bulk tank total bacterial count in dairy sheep: factors of variation and relationship with somatic cell count. J Dairy Sci. 2006;89:549–52. We acknowledge Dr. M.L. Scatassa (Istituto Zooprofilattico Sperimentale della Sicilia) for bacteriological analyses. No funding was obtained for this study.
The data and pedigree supporting our findings are publicly available online at https://figshare.com/articles/data_txt/3467942 and https://figshare.com/articles/pedigree_txt/3467945, respectively. MT and CL contributed equally to this work; they carried out the design of the study, performed the statistical analysis and drafted the manuscript. JMY participated in the statistical analysis and drafted the manuscript. SN helped to draft the manuscript and to interpret the results. MTS helped to collect the data and critically revised the manuscript. BP participated in the design of the study and gave the final approval of the version to be submitted. All authors read and approved the final manuscript. Milk samples were collected during routine milking, thus avoiding any harmful procedure for the animals. The milking procedure followed the A4 recording scheme defined by the International Committee for Animal Recording (ICAR, 2014). Consent for sample collection was obtained from the animals' owners. Moreover, sample collection, animal management and care were in agreement with Directive 2010/63/EU. Dipartimento Scienze Agrarie e Forestali, Università degli Studi di Palermo, Viale delle Scienze, Palermo, 90128, Italy Marco Tolone, Maria Teresa Sardina & Baldassare Portolano Faculty of Veterinary and Animal Sciences, University of Chile, Av. Santa Rosa, La Pintana, Santiago, 11735, Chile Cristian Larrondo & José M. Yáñez Genus plc, Hendersonville, TN, 37075, USA Scott Newman Correspondence to Marco Tolone. Tolone, M., Larrondo, C., Yáñez, J.M. et al. Assessment of genetic variation for pathogen-specific mastitis resistance in Valle del Belice dairy sheep. BMC Vet Res 12, 158 (2016). https://doi.org/10.1186/s12917-016-0781-x
Welcome to Yutaka Kano's Page! NAME Yutaka KANO TITLE Professor of Statistics, Doctor of Engineering AFFILIATION Osaka University, Graduate School of Engineering Science, Division of Mathematical Science, joint appointment with Graduate School of Human Science ADDRESS Toyonaka, Osaka 560-8531, JAPAN EMAIL kano AT sigmath.es.osaka-u.ac.jp RESEARCH INTEREST multivariate analysis; psychometrics; incomplete data analysis; structural equation modeling; graphical modeling; statistical causal inference SERVICE Associate Editor of Psychometrika (2000-) Chief Editor of Behaviormetrika (2000-2003) Associate Editor of Journal of Multivariate Analysis (2002-) Board of Trustees of the Psychometric Society (2002-2005) Associate Editor of Annals of the Institute of Statistical Mathematics (2003-) Editorial Board Member of Advances in Data Analysis and Classification (2006-) [IMPS2015 slides] [44] Shimizu, S. and Kano, Y. (2003). Examination of independence in independent component analysis. In New Developments in Psychometrics (Yanai, H. et al., Eds.), pp.665-672. Springer Verlag: Tokyo. [43] Hyvarinen, A. and Kano, Y. (2003). Independent component analysis for non-normal factor analysis. In New Developments in Psychometrics (Yanai, H. et al., Eds.), pp.649-656. Springer Verlag: Tokyo. [42] Kano, Y. and Azuma, Y. (2003). Use of SEM programs to precisely measure scale reliability. In New Developments in Psychometrics (Yanai, H. et al., Eds.), pp.141-148. Springer Verlag: Tokyo. [pdf file] It is first pointed out that the most often used reliability coefficient $\alpha$ and the one-factor-model-based reliability $\rho$ are seriously biased when unique factors are correlated. In that case, $\alpha$ is no longer a lower bound of the true reliability. Use of Bollen's formula (Bollen 1980) for reliability is highly recommended.
A web-based program called "STERA" has been developed that makes {\it stepwise}\, reliability analysis very easy with the help of factor analysis and structural equation modeling. [41] Kano, Y. (in press). Rejoinder: Use of error covariances and role of specific factors. The Japanese Journal of Behaviormetrics. (In Japanese) The author would like to express his special thanks to Dr. Toyoda of Waseda University for organizing this exciting special issue, and also to the three discussants who gave stimulating discussions of Kano (2002). In this rejoinder, special attention is paid to error covariances and specific factors in the comparison between SEM and traditional methods. When a factor analysis model fits poorly, it does not make sense to simply remove variables that are important but inconsistent with the factor analysis model, as pointed out by the discussants. It is emphasized that it is better to allow for error covariances to absorb the inconsistency than to remove such variables. The model with error covariances guarantees invariance of estimation results over item selection. The discussants pointed out that an important difference between a scale score (a sum of items) and a measurement model with effect indicators in SEM is that a scale score includes specific factors whereas a measurement model excludes them. Practitioners could use scale scores when they are interested in the effects of specific factors as well as of a common factor. It is argued, however, that the error terms of effect indicators contain information on the specific factors, and thus their use along with a common factor yields better inference than use of unidimensional scale scores, because the effects of the common factor and of each specific factor can be evaluated individually. Other related topics are also discussed. [40] Kano, Y. (in press).
Does structural equation modeling outperform traditional factor analysis, analysis of variance and path analysis? The Japanese Journal of Behaviormetrics. (In Japanese) It is well known that structural equation modeling (SEM) can represent a variety of traditional multivariate statistical models. This fact does not necessarily mean that SEM should be used in place of the traditional models. It is often said that a general model is harder to handle than a specific model developed for a particular situation. In this paper, we clarify the relative advantages of SEM over several traditional statistical models. Rather than comparing mathematical properties, we discuss how and when SEM outperforms the corresponding traditional models {\it in practical situations}. Special attention is paid to statistical analysis of a scale score, a sum of indicator variables determined by factor analysis. Concretely, we study the relative advantages between (i) confirmatory factor analysis and exploratory factor analysis, (ii) multiple indicator analysis and correlational and regression analysis of scale scores, (iii) analysis of factor means and analysis of variance of scale scores, and (iv) path analysis and multiple regression analysis. [39] Kano, Y. (2002). Variable selection for structural models. Journal of Statistical Planning and Inference, Vol.108, No.1-2, 173-187. [pdf file] A theory of variable selection for structural models that do not have clear dependent variables is developed, within the framework of the curved exponential family of distributions for the observed variables. The idea of Rao's score test is used to construct a test statistic for variable selection, and its statistical properties are examined. In particular, the test statistic is shown to have an asymptotic {\it central}\, chi-square distribution under a kind of {\it alternative}\, hypothesis.
This fact provides evidence for the excellent performance of the score statistic on real data sets. [38] Kano, Y. (2000). Towards transition of the statistical paradigm: Statisticians should make significant collaborations with applied researchers. Journal of the Japan Statistical Society, Vol.30, No.3, 305-314 (In Japanese). [37] Kano, Y. (2001). Structural equation modeling for experimental data. In Structural Equation Modeling: Present and Future [A Festschrift in honor of Karl Joreskog] (Eds., Bob Cudeck, Stephen du Toit and Dag Sorbom), pp. 381-402. SSI: Chicago. (pdf file 791KB) We first review the use of structural equation modeling (SEM) for the analysis of experimental data. Typical examples include ANOVA, ANCOVA and MANOVA with or without a covariance structure. SEM for such experimental data is a mean and covariance structure model in multiple populations with a common covariance matrix. Such analyses can be implemented under the assumption that all observed variables, including fixed-effect exogenous variables (which denote, for example, levels of factors), are normally distributed. The theoretical basis for this usage, based on conditional (likelihood) inference, is explicitly explained. A bias of path coefficient estimates, particularly in standardized solutions, is pointed out; it comes from the fact that variance estimates of dependent variables contain variation of the means. The statistical power of several testing procedures concerning mean vectors across several populations is examined, when a factor model can be assumed for the observed variables. The procedures considered here are MANOVA, a mean and covariance structure model implemented by SEM, and ANOVA of a factor score or a weighted sum of observed variables. SEM is shown to be the most powerful tool in this context. [36] Kano, Y. and Harada, A. (2000). Stepwise variable selection in factor analysis. Psychometrika, Vol.65, No.1, 7-22.
It is very important to choose appropriate variables for analysis in multivariate analysis when there are many observed variables, such as those in a questionnaire. What is actually done in scale construction with factor analysis is nothing but variable selection. In this paper, we take several goodness-of-fit statistics as measures for variable selection and develop backward elimination and forward selection procedures in exploratory factor analysis. Once factor analysis is done for a certain number $p$ of observed variables (the $p$-variable model is labeled the current model), simple formulas for fit measures such as chi-square, GFI, CFI, IFI and RMSEA are provided for models obtained by adding an external variable (so that the number of variables is $p+1$) and for those obtained by deleting an internal variable (so that the number is $p-1$), provided that the number of factors is held constant. A program {\sl SEFA}\, (Stepwise Exploratory Factor Analysis) is developed to actually obtain a list of these fit measures for all such models. The list is very useful in determining which variable should be dropped from the current model to improve its fit. It is also useful in finding a suitable variable that may be added to the current model. A model with more appropriate variables makes more stable inference in general. The criterion traditionally used for variable selection is the magnitude of communalities. This criterion gives a different choice of variables and does not improve the fit of the model in most cases. Key words: Backward elimination, forward selection, goodness-of-fit measure, Lagrange Multiplier test, likelihood ratio test, stepwise variable selection, Wald test, World Wide Web (WWW). [35] Kano, Y. (1999). Delta method approach in a certain irregular condition. Communications in Statistics, Theory and Methods, Vol.28, Nos. 3&4, 789-807.
(dvi file) (ps file) Let $\mathbf{X}_n$ be a sequence of random $p$-vectors such that $a_n(\mathbf{X}_n-\mathbf{b})\xrightarrow{\mathcal{L}}\mathbf{Z}$, where $a_n\nearrow\infty$, $\mathbf{b}\in R^p$ and $\mathbf{Z}$ is a continuously distributed random $p$-vector. Let $\mathbf{f}(\cdot)$ be a measurable mapping from a domain of $R^p$ to $R^q$, where the domain may not include $\mathbf{b}$, i.e., $\mathbf{f}(\mathbf{b})$ may not be defined. Under this setup, we study the asymptotic distribution of $\mathbf{f}(\mathbf{X}_n)$. Two theorems are developed to obtain the asymptotic distribution. Comprehensive examples are provided to show when and where such an irregular situation takes place and to illustrate the usefulness of these theorems. The examples include the problem of choosing the number of components and noniterative estimation in factor analysis. [34] Kano, Y. (1998). More higher order efficiency. Journal of Multivariate Analysis, Vol.67, 349-366. Based on the concentration probability of estimators about a true parameter, third-order asymptotic efficiency of the first-order bias-adjusted MLE within the class of first-order bias-adjusted estimators has been well established in a variety of probability models. In this paper we consider the class of second-order bias-adjusted Fisher-consistent estimators of a structural parameter vector on the basis of an i.i.d. sample drawn from a curved exponential-type distribution, and study the asymptotic concentration probability of these estimators about a true parameter vector up to the fifth order. In particular, (i) we show that third-order efficient estimators are always fourth-order efficient; (ii) a necessary and sufficient condition for fifth-order efficiency is provided; and finally (iii) the MLE is shown to be fifth-order efficient. [33] Kano, Y. (1998). Improper solutions in exploratory factor analysis: Causes and treatments. In Advances in Data Sciences and Classification (Eds Rizzi, A., Vichi, M. and Bock, H.), pp. 375-382: Springer, Berlin.
(dvi file) (ps file) (pdf file) There are many causes of improper solutions in factor analysis. Identifying the potential causes of improper solutions gives very useful information on the suitability of the model considered for a data set. This paper studies possible causes of improper solutions in exploratory factor analysis, focusing upon (A) sampling fluctuations, (B) an underidentifiable model, and (C) an unfitted model, each having several more detailed items. We then give a checklist to identify the cause of an improper solution obtained and suggest a method of reanalysis of the data set for each cause. [32] Kano, Y. (1997). Beyond third-order efficiency. Sankhya, Vol.59, Part 2, 179-197. Fifth-order (asymptotic) efficiency of the second-order bias-corrected MLE, minimizing the $n^{-3}$ term of an expansion of the quadratic risk of Fisher-consistent estimators bias-corrected similarly, is established in a general curved exponential family with a structural parameter vector. A characterization theorem of the MLE in terms of its higher-order derivatives is provided, and an alternative bias-correction factor is proposed. Both the characterization and the new bias correction play an important role in proving the fifth-order efficiency. Matrix-form symmetric tensors and higher-order derivatives, rather than the usual elementwise tensors with Einstein's convention, are utilized to derive all the results of this article. [Note: There is a technical report [101] that describes the detailed technical proofs of the propositions and lemmas of this paper.] [31] Kano, Y. (1997). Exploratory factor analysis with a common factor with two indicators. Behaviormetrika, Vol.24, No.2, 129-145. Any exploratory factor analysis model requires at least three indicators (observed variables) for each common factor to ensure model identifiability.
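A quick parameter count illustrates the three-indicator requirement. The sketch below uses the standard degrees-of-freedom formula for an exploratory factor model; it is an illustration added here, not material from the entry above:

```python
# Degrees of freedom of an m-factor model for p observed variables:
# p(p+1)/2 distinct sample moments, minus p*m loadings and p unique
# variances, plus m(m-1)/2 rotational constraints.
def fa_df(p: int, m: int) -> int:
    return p * (p + 1) // 2 - (p * m + p) + m * (m - 1) // 2

print(fa_df(2, 1))  # -1: one factor with two indicators is underidentified
print(fa_df(3, 1))  #  0: three indicators make the model just identified
```

A negative value means there are more free parameters than distinct sample moments, so the model cannot be identified from the covariance matrix alone.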
If one performs exploratory factor analysis on a data set in which one of the common factors has only two indicators in the population, one encounters difficulties such as improper solutions and nonconvergence of the iterative process for calculating estimates. In this paper, we first develop conditions for {\it partial} identifiability of the remaining factor loadings, excluding the factor loading vector that relates to the common factor with only two indicators. Two models for analyzing such data sets are then proposed with the help of confirmatory factor analysis and covariance structure analysis. The first model is an exploratory factor analysis model that permits correlation between unique factors; the second model is a kind of confirmatory factor model with equal factor loadings. Two real data sets are analyzed to illustrate the usefulness of these models. [30] Aoshima, M. and Kano, Y. (1997). A note on robustness of two-stage procedure for a multivariate compounded normal distribution. Sequential Analysis, Vol.16, No.2, 175-187. In normal populations, Healy (1956) and Takada (1988) developed two-stage methods for inference on the mean vector, with a confidence region with a specified maximum diameter or with a specified risk. The methods are shown to remain valid even under a multivariate compounded normal distribution, which includes the normal distribution as a special case. A sort of super-efficiency in the average required sample number of the procedure is observed, a phenomenon that never appears in the normal population. [29] Yuan, Ke-Hai, Bentler, P. M. and Kano, Y. (1997). On averaging variables in a confirmatory factor analysis model. Behaviormetrika, Vol.24, No.1, 71-83. The normal-theory maximum likelihood and asymptotically distribution-free methods are commonly used in covariance structure practice. When the number of observed variables is too large, neither method may give reliable inference due to bad condition numbers or unstable solutions.
The main existing solution to the problem of high dimension is to build a model based on marginal variables. This practice is inefficient because the omitted variables may still contain valuable information about the structural model. In this paper, we propose a simple method of averaging proper variables that have similar factor structures in a confirmatory factor model. The effects of averaging variables on estimators and tests are investigated. Conditions on the relative errors of the measured variables are given that determine when a model based on averaged variables gives better estimators and tests than one based on omitted variables. Our method is compared to the method of variable selection based on the mean square error of predicted factor scores. Some aspects related to averaging, such as improving the normality of the observed variables, are also discussed. [28] Kano, Y. (1996). Fourth and fifth order efficiency: Fisher information. In Probability Theory and Mathematical Statistics (Watanabe, S. et al., Eds.) pp. 193-200. World Scientific: Singapore. (dvi file) (ps file) Loss of (Fisher) information, a concept defined by Fisher, is one of the important measures of asymptotic efficiency. Based on Fisher information, this paper studies fourth- and fifth-order asymptotic efficiency of estimators in a curved exponential family of distributions with a structural parameter vector. In particular, we show that the bias-corrected MLE with a certain bias-correction factor is fourth- and fifth-order efficient in a class of Fisher-consistent estimators bias-corrected similarly. [27] Kano, Y. (1996). Third-order efficiency implies fourth-order efficiency. Journal of the Japan Statistical Society, Vol.26, No.1, 101-117. (dvi file) (ps file) Takeuchi [37], Takeuchi and Akahira [38] and Pfanzagl [27], among others, proved that {\it any\,} first-order efficient estimator is second-order efficient.
Many authors, e.g., Ghosh [15], have conjectured that {\it any} third-order efficient estimator is fourth-order efficient. Based on the concentration probability of estimators about a true parameter, this paper gives a positive answer to the conjecture in a curved exponential family with multi-structural parameters. It is seen that the choice of bias-correction factors is critical. [26] Kano, Y. (1995). An asymptotic expansion of the distribution of Hotelling's $T^2$-statistic under general distributions. American Journal of Mathematical and Management Sciences, Vol.15, No.3-4, 317-341. Distribution theory in nonnormal populations is important but not fully exploited, particularly in multivariate analysis. In this paper we derive an asymptotic expansion of the distribution of Hotelling's multivariate $T^2$-statistic under general distributions. Our general expansion specializes to the existing expansions under elliptical and normal distributions. Previous research on the robustness of $T^2$ to violation of the normality assumption, based on Monte Carlo study, concluded that nonnormality of the underlying population substantially influences the distribution of $T^2$ for small or medium samples, and that the third-order cumulants of the underlying distribution affect $T^2$ much more seriously than do the fourth-order cumulants. The derived formula is used to provide theoretical grounds for these experimental results. Matrix manipulations such as Kronecker products and symmetric tensors, rather than the usual elementwise tensors with Einstein's convention, are utilized to derive all the results. [25] Ihara, M. and Kano, Y. (1995). Identifiability of full, marginal, and conditional factor analysis. Statistics & Probability Letters, Vol.23, No.4, 343-350. Identifiability of the full factor analysis model for $\mathbf{x}=[X_1,\mathbf{x}_2^T]^T$ is discussed, when the marginal model for $\mathbf{x}_2$ and/or the conditional model for $\mathbf{x}_2$ given $X_1$ conform to factor analysis models.
Two numerical examples are given for illustrative purposes. [24] Kano, Y. (1994). Consistency property of elliptical probability density functions. Journal of Multivariate Analysis, Vol.51, 343-350. Several conditions are established under which a family of elliptical probability density functions possesses a preferable consistency property. The consistency property ensures that any marginal distribution of a random vector whose distribution belongs to a specific elliptical family also belongs to that family. Elliptical distributions with this property must be mixtures of normal distributions. [23] Berkane, M., Kano, Y. and Bentler, P. M. (1994). Pseudo maximum likelihood estimation in elliptical theory: Effects of misspecification. Computational Statistics and Data Analysis, Vol.18, 255-267. Recently, robust extensions of normal theory statistics have been proposed to permit modeling under a wider class of distributions (e.g., Taylor, 1992). Let $X$ be a $p\times 1$ random vector, $\mu$ a $p\times 1$ location parameter, and $V$ a $p\times p$ scatter matrix. Kano et al. (1993) studied inference in the elliptical class of distributions and gave a criterion for choosing a particular family within the elliptical class to best describe the data at hand when the latter exhibit serious departure from normality. In this paper, we investigate the criterion in a simple but general setup, namely, when the operating distribution is multivariate $t$ with $\nu$ degrees of freedom and the model is also a multivariate $t$-distribution with $\alpha$ degrees of freedom. We compute the exact inefficiency of the estimators of $\mu$ and $V$ based on that model and compare it to the one based on the multivariate normal model. Our results provide evidence for the choice of $\nu=4$ proposed by Lange et al. (1989).
In addition, we give numerical results showing that, for fixed $\nu$, the inflation of the variance of the pseudo maximum likelihood estimator of the scatter matrix, as a function of the hypothesized degrees of freedom $\alpha$, is increasing on its domain. [22] Kano, Y. and Ihara, M. (1994). Identification of inconsistent variates in factor analysis. Psychometrika, Vol.59, 5-20. When some of the observed variates do not conform to the model under consideration, they can have a serious effect on the results of statistical analysis. In factor analysis, a model with inconsistent variates may result in improper solutions. In this article a useful method for identifying a variate as inconsistent is proposed for factor analysis. The procedure is based on the likelihood principle. Several statistical properties, such as the effect of misspecified hypotheses, the problem of multiple comparisons, and robustness to violation of distributional assumptions, are investigated. The procedure is illustrated by some examples. [21] Kano, Y., Bentler, P. M. and Mooijaart, A. (1993). Additional information and precision of estimators in multivariate structural models. In Statistical Sciences and Data Analysis: Proceedings of the Third Pacific Area Statistical Conference, (K. Matusita, T. Hayakawa, et al., Eds.) pp. 187-196. VSP International Science Publisher: Zeist, The Netherlands. This paper investigates the effect of additional information on parameter estimation in multivariate structural models. It is shown that the asymptotic covariances of estimators based on a model with additional variables are smaller than those based on a model without additional variables, where the estimation methods employed are maximum likelihood and minimum chi-square. Some applications to moment structure models are provided. [20] Kano, Y. (1993). Asymptotic properties of statistical inference based on Fisher consistent estimators in the analysis of covariance structures.
In Proceedings of the International Workshop on Statistical Modelling and Latent Variables, (K. Haagen, D. J. Bartholomew and M. Deistler, Eds.), pp. 173-190. Elsevier Science Publisher: Amsterdam, The Netherlands. The methods of maximum likelihood (ML) and generalized least squares (GLS) under the normality assumption are often used for inference on covariance structures, and the asymptotic properties and robustness of this statistical inference have been extensively studied. In this article, we generalize these results to inference based on Fisher consistent (FC) estimators, which include simple least squares (LS) and noniterative estimation methods as well as ML and GLS. Although the LS and noniterative methods do not yield asymptotically efficient estimators under normality, for small or moderate samples they are often superior to efficient estimators in mean squared error and less often result in so-called improper solutions. This shows that there do exist cases where such inefficient inference should be preferred to ML and GLS; the extension described here is therefore important. Furthermore, a key relation derived from a property of the FC estimators makes the derivation of the asymptotics of the inference very easy and comprehensive. The asymptotic efficiency of the MLE within the class of FC estimators is proved in a situation where the fourth-order moments of the observations may not be finite. [19] Kano, Y., Berkane, M. and Bentler, P. M. (1993). Statistical inference based on pseudo maximum likelihood estimators in elliptical populations. Journal of the American Statistical Association --- Theory and Methods, Vol.88, 135-143. In this article we develop statistical inference based on the method of maximum likelihood in elliptical populations with an unknown density function.
The method assuming the multivariate normal distribution, using the sample mean and the sample covariance matrix, is basically correct even for elliptical populations under a certain kurtosis adjustment, but it is not statistically efficient, especially when the kurtosis of the population distribution is more than moderate. On the other hand, several methods of statistical inference assuming a particular family (e.g., the multivariate $t$-distribution) of elliptical distributions have been recommended as robust procedures against outliers or distributions with heavy tails. Such inference is also important for maintaining high efficiency of statistical inference in elliptical populations. In practice, however, it is very difficult to choose an appropriate family of elliptical distributions, and one may misspecify the family. Furthermore, extra parameters (i.e., other than means and covariances) may make computation heavy. Here we investigate the method of maximum likelihood assuming a particular family of elliptical distributions, with the extra parameters replaced by inexpensive estimators, when the assumed family may be misspecified. Consistency and asymptotic normality of the estimators are proved, and the asymptotic equivalence among the likelihood ratio, Wald and score test statistics, and their chi-squaredness under a constant correction, are shown. Two easy methods of estimating the extra parameters are proposed. A criterion on how to choose a family among competing elliptical families is also provided. [18] Ihara, M. and Kano, Y. (1992). Asymptotic equivalence of uniqueness estimators in marginal and conditional factor analysis models. Statistics & Probability Letters, Vol.14, 337-341. It is shown that the maximum likelihood and generalized least-squares estimators of unique variances in the conditional model are asymptotically equivalent to those in the marginal model in factor analysis.
The asymptotic covariance matrices of the estimators are expressed in matrix form. [17] Hu, Li-tze, Bentler, P. M. and Kano, Y. (1992). Can test statistics in covariance structure analysis be trusted? Psychological Bulletin, Vol.112, 351-362. Covariance structure analysis uses $\chi^2$ goodness-of-fit test statistics whose adequacy is not known. Scientific conclusions based on models may be distorted when researchers violate sample size, variate independence, and distributional assumptions. The behavior of 6 test statistics is evaluated with a Monte Carlo confirmatory factor analysis study. The tests performed drastically differently under 7 distributional conditions at 6 sample sizes. Two normal-theory tests worked well under some conditions but completely broke down under others. A test that permits homogeneous nonzero kurtoses performed variably. A test that permits heterogeneous marginal kurtoses performed better. A distribution-free test performed spectacularly badly in all conditions at all but the largest sample sizes. The Satorra-Bentler scaled test statistic performed best overall. [16] Kano, Y. (1992). Robust statistics for test-of-independence and related structural models. Statistics & Probability Letters, Vol.15, 21-26. Recent research on asymptotic robustness shows that the likelihood ratio (LR) test statistic for test-of-independence based on normal theory remains valid in a general case where only independence is assumed. In contrast, under elliptical populations the LR statistic is correct only if a kurtosis adjustment is made. Thus, the LR statistic itself is available in the first case, whereas a certain correction is needed in the second framework, which is seriously inconvenient for practitioners. In this article, we propose an alternative adjustment to the LR statistic that can be utilized for both distribution families. Theory is derived in the context of general linear latent variate models. [15] Bentler, P. M., Berkane, M.
and Kano, Y. (1991). Covariance structure analysis under a simple kurtosis model. In Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, (E. M. Keramidas and S. M. Kaufman, Eds.) pp. 463-465. Interface Foundation of North America: VA. A model for the relation between the multivariate fourth-order central moments of a set of variables and the marginal kurtoses and covariances among these variables is used to produce an estimator for covariance structure analysis that is asymptotically efficient and yields an asymptotic $\chi^2$ goodness-of-fit test of the covariance structure while substantially reducing the computations. When the kurtoses of the variables are equal, the method reduces to one based on multivariate elliptical distribution theory, and, when there is no excess kurtosis, to one based on multivariate normal distribution theory. [14] Kano, Y. (1991). The asymptotic distribution of a noniterative estimator in exploratory factor analysis. The Annals of Statistics, Vol.19, 272-282. This paper presents the asymptotic distribution of Ihara and Kano's noniterative estimator of the uniqueness in exploratory factor analysis. When the number of factors is overestimated, the estimator is not a continuous function of the sample covariance matrix and its asymptotic distribution is not normal, but consistency still holds. It is also shown that the first-order moment of the asymptotic distribution does not exist. [13] Kano, Y. (1990). Statistical inference in factor analysis: Recent Developments. The Japanese Journal of Behaviormetrics, Vol.18, 3-12. (in Japanese) Several methods of statistical inference in factor analysis are first reviewed, both under the normality assumption and in the case where no assumption on the population distribution is made. Normal theory inference, whether or not it is asymptotically efficient, is shown to be robust against a wide class of distributions that may be encountered in many practical fields.
For small or medium sample sizes, simple estimation procedures, such as simple least squares and noniterative methods, are recommended, whereas the method of maximum likelihood or other asymptotically efficient methods should be utilized for large samples. [12] Kano, Y., Berkane, M. and Bentler, P. M. (1990). Covariance structure analysis with heterogeneous kurtosis parameters. Biometrika, Vol.77, 575-585. This paper discusses the analysis of covariance structures in a wide class of multivariate distributions whose marginal distributions may have heterogeneous kurtosis parameters. Elliptical distributions, often used as a generalization of normal theory, are members of this class. It is shown that a simple adjustment of the normal-theory weight matrix, using kurtosis estimates, results in an asymptotically efficient estimator of the structural parameters within the class of estimators that minimize a general discrepancy function. Results are obtained for arbitrary covariance structures as well as those that meet a scale invariance assumption. Two real data sets are analyzed for illustrative purposes. [11] Kano, Y. (1990). Noniterative estimation and the choice of the number of factors in exploratory factor analysis. Psychometrika, Vol.55, 277-291. Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis for the rare occurrence of improper solutions and for a new method of choosing the number of factors. A comparative study of their estimator and that based on maximum likelihood is carried out by a Monte Carlo experiment. [10] Bentler, P. M. and Kano, Y. (1990). On the equivalence of factors and components.
Multivariate Behavioral Research, Vol.25, 67-74. Some advantages of the factor analysis model over component analysis are reviewed. We prove that principal components and factor analysis can yield equivalent results under a certain condition. Our proof provides a theoretical explanation for an empirical result obtained by Velicer and Jackson. Relevant results by Guttman, Harris and Kaiser are noted. [9] Kano, Y. (1990). Comparative studies of non-iterative estimators based on Ihara and Kano's method in exploratory factor analysis. Communications in Statistics Part A, Vol.19, 431-444. In a factor analysis model, the asymptotic variance of the non-iterative estimator of Ihara and Kano (1986) is first provided, and five kinds of estimators based on Ihara and Kano's method are constructed using the asymptotic result. These estimators and that based on maximum likelihood are compared both theoretically and experimentally. In conclusion, the arithmetic mean of some Ihara and Kano estimators is recommended as a uniqueness estimator, at least for small and medium sample sizes. [8] Kano, Y. (1989). A new estimation procedure using G-inverse matrix in factor analysis. Mathematica Japonica, Vol.34, 43-52. A non-iterative estimator using a g-inverse matrix is proposed for factor analysis, which generalizes Ihara and Kano's estimator. The amount of calculation for the present estimator is much less than that for traditional estimators. [7] Kano, Y. and Shapiro, A. (1987). On asymptotic variances of uniqueness estimators in factor analysis. South African Statistical Journal, Vol.21, 131-139. It is shown that the asymptotic variances of uniqueness estimators in factor analysis decrease as new observed variates are added to the model while the number of factors is held fixed. The limit form of the associated asymptotic covariance matrix is calculated. [6] Ihara, M. and Kano, Y. (1986). A new estimator of the uniqueness in factor analysis. Psychometrika, Vol.51, 563-566.
A closed-form estimator of the uniqueness (unique variances) in factor analysis is proposed. It has analytically desirable properties such as consistency, asymptotic normality and scale invariance. The estimation procedure is illustrated through application to two data sets, Emmett's data and Holzinger and Swineford's data. The new estimator is shown to yield values rather close to the maximum likelihood estimator.

[5] Kano, Y. (1986). A condition for the regression predictor to be consistent in a single common factor model. British Journal of Mathematical and Statistical Psychology, Vol. 39, 221-227.
This paper investigates the prediction of a common factor in a single common factor model with infinitely many items when the structural parameter vector (factor loadings and unique variances) is unknown. A condition in terms of the sample size $n$ and the number of items $p$ is established under which the regression predictor for a common factor, in which the parameter vector is replaced by the least squares estimator, converges to it in quadratic mean. The condition is that $p$ goes to infinity and $p^2/n$ goes to zero, under some mild assumptions.

[4] Kano, Y. (1986). Consistency conditions on the least squares estimator in single common factor analysis model. Annals of the Institute of Statistical Mathematics, Vol. 39, 57-68.
This paper is concerned with the consistency of estimators in a single common factor analysis model when the dimension of the observed vector is not fixed. In the model, several conditions on the sample size $n$ and the dimension $p$ are established for the least squares estimator (LSE) to be consistent. Under some assumptions, the condition that $p/n$ goes to zero is necessary and sufficient for the LSE to converge in probability to the true value. A sufficient condition for almost sure convergence is also given.

[3] Kano, Y. (1986). Conditions on consistency of estimators in covariance structure model.
Journal of the Japan Statistical Society, Vol. 16, 75-80.
This paper presents a condition under which estimators in a covariance structure model are weakly (strongly) consistent, which is equivalent to Shapiro's condition. The condition is composed of three parts, each of which is simpler and more easily checked. This result is applied to a proof of consistency of estimators in a factor analysis model. A population value of a factor analysis model is given which does not admit any consistent estimator. This fact suggests that the proofs of consistency by previous authors are not complete.

[2] Kano, Y. (1984). Construction of additional variables conforming to a common factor model. Statistics & Probability Letters, Vol. 2, 241-244.
It is shown that if a random $p$-vector $\mathbf{x}$ conforms to an $r$-common factor model and an external variable $Z$ is given, then there exists a family of linear combinations $X$ of $\mathbf{x}$ and $Z$ such that $[\mathbf{x}':X]'$ conforms to an $r$-common factor model. We can choose $X$ such that the revised model has an arbitrariness of factor indeterminacy smaller than any specified small value.

[1] Kano, Y. (1983). Consistency of estimators in factor analysis. Journal of the Japan Statistical Society, Vol. 13, 137-144.
In factor analysis, both the maximum likelihood estimator and the generalized least squares estimator of the structural parameters (i.e., factor loadings and unique variances) are shown to be weakly and strongly consistent under Anderson and Rubin's sufficient condition for weak identifiability.
A new six-dimensional irreducible symplectic variety
Author: Kieran G. O'Grady
Published electronically: January 14, 2003
Abstract: We construct a six-dimensional irreducible symplectic variety with $b_2=8$. Since the known examples of irreducible symplectic varieties have $b_2=7$ or $b_2=23$, our variety is in a new deformation class. The example is obtained as follows. Let $J$ be the Jacobian of a genus-two curve with its natural principal polarization: results of another paper of ours give a symplectic desingularization of the moduli space of semistable rank-two sheaves on $J$ with $c_1=0$ and $c_2=2$. Let $\mathcal {M}_{\mathbf {v}}$ be this symplectic desingularization: there is a natural locally trivial fibration $\mathcal {M}_{\mathbf {v}}\rightarrow J\times \widehat {J}$. Our example, which we denote by $\widetilde {\mathcal {M}}$, is the fiber of this map over $(0,\widehat {0})$. The main body of the paper is devoted to the proof that $\widetilde {\mathcal {M}}$ is irreducible symplectic and that $b_2(\widetilde {\mathcal {M}})=8$. Applying the generalized Lefschetz Hyperplane Theorem, we get that low-dimensional homotopy (or homology) groups of $\widetilde {\mathcal {M}}$ are represented by homotopy (or homology) groups of a subset of $\widetilde {\mathcal {M}}$ which has an explicit description. The main problem is to provide the explicit description and to extract the necessary information on homotopy or homology groups.
Kieran G. O'Grady
Affiliation: Università La Sapienza, Dipartimento di Matematica G. Castelnuovo, Piazzale A. Moro 5, 00185 Rome, Italy
Email: [email protected]
Received by editor(s): November 9, 2000
Additional Notes: Supported by Cofinanziamento MURST 1999-2001
Dedicated: Dedicato a Riccardino
Animal Biotelemetry
The scale of the whale: using video-tag data to evaluate sea-surface ice concentration from the perspective of individual Antarctic minke whales
Jacob M. J. Linsky, Nicole Wilson, David E. Cade, Jeremy A. Goldbogen, David W. Johnston & Ari S. Friedlaender
Animal Biotelemetry volume 8, Article number: 31 (2020)

Advances in biologging technology allow researchers access to previously unobservable behavioral states and movement patterns of marine animals. To relate behaviors to environmental variables, features must be evaluated at scales relevant to the animal or behavior. Remotely sensed environmental data, collected via satellites, often suffer from the effects of cloud cover and lack the spatial or temporal resolution to adequately link with individual animal behaviors or behavioral bouts. This study establishes a new method for remotely and continuously quantifying surface ice concentration (SIC) at a scale relevant to individual whales using on-animal tag video data. Motion-sensing and video-recording suction cup tags were deployed on 7 Antarctic minke whales (Balaenoptera bonaerensis) around the Antarctic Peninsula in February and March of 2018. To compare the scale of camera-tag observations with satellite imagery, the area of view was simulated using camera-tag parameters. For expected conditions, we found the maximum visible area to be ~ 100 m2, which indicates that observations occur at an equivalent or finer scale than a single pixel of high-resolution visible spectrum satellite imagery. SIC was classified into one of six bins (0%, 1–20%, 21–40%, 41–60%, 61–80%, 81–100%) by two independent observers for the initial and final surfacing between dives. In the event of a disagreement, a third independent observer was introduced, and the median of the three observers' values was used.
Initial results (n = 6) show that Antarctic minke whales in the coastal bays of the Antarctic Peninsula spend 52% of their time in open water, and only 15% of their time in water with SIC greater than 20%. Over time, we find significant variation in observed SIC, indicating that Antarctic minke whales occupy an extremely dynamic environment. Sentinel-2 satellite-based approaches to sea ice assessment were not possible because of persistent cloud cover during the study period. Tag video offers a means to evaluate ice concentration at spatial and temporal scales relevant to the individual. Combined with information on underwater behavior, our ability to quantify SIC continuously at the scale of the animal will improve upon current remote sensing methods for understanding the link between animal behavior and these dynamic environmental variables.

Advances in animal-borne tag technology enable researchers to record and analyze previously inaccessible in situ behavior and kinematics of animals [1, 2]. Beyond elucidating animal behavior, animal-borne tags have been built to collect oceanographic data during deployments on deep-diving marine mammals [3], and to monitor the spatial overlap between seabirds and fishing vessels at sea [4]. Thus, a precedent has been set for using biologging technology to record information that can be used to evaluate the relationship between an animal, its behavior, and the surrounding environment (both physical and biological). To accurately understand these relationships, data must be collected concurrently in space and time, eliminating offset between behavioral and environmental observations. Remote sensing can be used in conjunction with biologging data to make inferences about how animal distribution and behavior relate to their environment across a range of spatial and temporal scales. In the polar regions, ice is a critical feature of the environment and is commonly observed via satellite.
Sea ice concentration can be computed from satellite passive microwave radiometry, producing a single estimate of the concentration over a relatively broad area of ~ 10 km2 to ~ 2500 km2, depending on the data and methods used [5, 6]. The resolution of these data provides an excellent platform for measuring monthly and annual trends in ice concentration at large spatial scales; however, higher resolution data are required to relate ice concentration to animal behavior at the submesoscale (< 1 km). Visible spectrum satellite imagery offers a more pragmatic resolution for this purpose, with a pixel resolution of up to 10 m (area = 100 m2) [7]. In the visible frequency range, however, clouds often hamper surface observations in the Western Antarctic Peninsula (WAP) region, and any such observations additionally require daylight to resolve features. Clouds and daylight are not the only limitations of existing visible spectrum satellite data; another is the regularity with which a high-resolution image is taken at any given location. While coarse-scale passive microwave observations are available at least twice daily, high-resolution optical data at any given location are collected at a much lower cadence (e.g., Landsat scenes are collected every 16 days, and combining observations from the Landsat 8 and Sentinel-2 sensors would achieve at best a 4.5-day revisit rate [8]). Synthetic Aperture Radar (SAR) can provide similarly fine resolution estimates of ice presence and thickness independent of weather conditions [9]. While the spatial resolution may be appropriate for linking ice to animal behavior at submesoscales, the likelihood of timely satellite images coinciding with data collection is low, and in many cases tasking SAR instruments to collect synoptic imagery is expensive (e.g., RADARSAT) and may still not capture appropriate scenes if field operations are out of phase due to local conditions.
These challenges necessitate new tools to more accurately determine associations between animals and their surrounding ice environment in polar regions. In both the Arctic and Antarctic, many species have evolved life histories that are dependent on sea ice. From zooplankton like euphausiid krill that feed on under-ice algal communities, to penguins and seals that haul out on ice floes to rest, to polar bears that traverse and hunt seals on winter pack ice, sea ice is critical. Antarctic minke whales (AMW) are the largest ice-affiliated krill predator and the most numerous baleen whale in the Southern Ocean. Antarctic minke whales are intimately tied to ice, yet very little is known about their behavior, habitat use, foraging ecology, and movement patterns with respect to their environment [10, 11]. This information is critical not only to define how these whales interact with their environment, but to better understand and forecast the impacts of climate change. The Antarctic Peninsula is warming faster than nearly any other region on the planet, manifesting in decreases in the amount, duration, and extent of winter sea ice [12]. Thus, quantifying the ice-covered habitats that AMWs utilize at scales relevant to the individual animal will allow for a greater understanding of the ecological relationships between these krill predators and their environment, and of the impacts that climate change will have on the amount of available habitat.

Given the gaps in our current ability to measure the relationships between minke whale behavior and ice coverage in their environment, the goal of this paper is to develop a robust method for evaluating the amount of surface ice that AMWs encounter using animal-borne motion-sensing and video-recording tags. Using new biologging technology, this method can be used to accurately and continuously assess the ice concentration in the environment that AMWs occupy.
This method, used to quantify the ecological relationships between AMWs and their environment at scales that traditional remote sensing techniques cannot resolve, will fill a critical gap in our understanding of the complex niche of these pagophilic predators in a rapidly changing environment.

Data in this study were collected using Customized Animal Tracking Solutions (CATS) multisensory suction-cup-attached archival tags [13, 14] between 1/25/2018 and 3/6/2018. Animal-borne video was recorded during 6 tag deployments with 1280 × 720 pixel resolution cameras, and one deployment (see Table 1: deployment #1) at 1920 × 1080 resolution. All deployments occurred in coastal waters on the western side of the Antarctic Peninsula, with one deployment occurring in the Penola Strait, two in Andvord Bay, and four in Paradise Bay (Fig. 1).

Table 1 Deployment details

Areas of deployment. Red dots indicate the location of CATS video-recording and motion-sensing tag deployments on Antarctic minke whales. From southwest to northeast, deployments occurred in the Penola Strait, Paradise Bay, and Andvord Bay on the western side of the Antarctic Peninsula.

Tags were deployed via a hand-held 6-m carbon fiber pole from a rigid-hulled inflatable boat (RHIB) or Zodiac inflatable boat. All tags contained accelerometers that sampled at 400 Hz, magnetometers and gyroscopes at 50 Hz, and pressure, light, temperature and GPS sensors at 10 Hz. All data were decimated to 10 Hz, tag orientation on the animal was corrected for, and animal orientation was calculated using custom-written scripts in MATLAB 2014 [13, 15]. Animal speed was determined using the amplitude of tag vibrations [16]. Positions of the whales were calculated at 10 Hz over the duration of the deployment from georeferenced pseudo-tracks (i.e., dead-reckoned reconstructed tracks [17]), constructed from animal depth, speed, pitch, and heading, and corrected via known positions.
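The dead-reckoning step can be sketched as follows. This is a minimal Python illustration of the general technique, not the authors' MATLAB code; the function and variable names are our own:

```python
import math

def dead_reckon(speed, pitch, heading, dt=0.1):
    """Integrate horizontal whale positions from 10 Hz tag data.

    speed   : speeds (m/s) estimated from tag vibration amplitude
    pitch   : pitch angles (radians, positive = nose up)
    heading : headings (radians, clockwise from north)
    dt      : sample interval (s); 10 Hz data -> 0.1 s

    Depth comes directly from the pressure sensor, so only the
    horizontal components are integrated here. Returns east/north
    offsets (m) from the starting position.
    """
    x, y = [0.0], [0.0]
    for s, p, h in zip(speed[1:], pitch[1:], heading[1:]):
        step = s * math.cos(p) * dt          # horizontal distance this sample
        x.append(x[-1] + step * math.sin(h))
        y.append(y[-1] + step * math.cos(h))
    return x, y
```

In practice the resulting pseudo-track is re-anchored at known GPS fixes (e.g., tag-on and tag-off positions) so that dead-reckoning drift does not accumulate, which is exactly the correction step described below.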
Fast-acquiring GPS sensors were enabled on all tags to provide accurate georeferencing throughout the tag deployments when animals surfaced. For additional georeferenced positions of the tag, we used the tag-on and tag-off positions collected via hand-held or tag GPS, as well as positions collected opportunistically during focal animal follows from a small inflatable boat using GPS, estimated range, and bearing. The resulting tracks still had multi-hour gaps without additional location verification, so errors accumulated, particularly when the whale traveled below the threshold of detectable speed (~ 1 m/s) [15]. Since the local habitat was an enclosed environment with complex coastlines, this provided an additional opportunity to anchor positions when a whale's track matched the contour of the coastline but was not proximate to it. By anchoring tracks such that segments assumed to be following the coastline were in the correct location, all tracks could be corrected so that they remained in suitable areas (i.e., not on land). This process was used to generate 6 additional anchor points for deployment bb180227-45, and 3 additional anchor points for deployment bb180304-40. Because these tracks are based on estimated anchors from coastline data, they cannot be assumed to be precise, and we estimate that they are accurate to within ± 1 km of the calculated location. SENTINEL-2 (L2A, visible and infrared spectrum) satellite data were sourced from the Sentinel-Hub EO Browser [7] between January 24th 2018 and March 7th 2018 (24 h before the beginning of the first deployment and 24 h after the end of the final deployment). Images were collected for the full spatial range of all deployments, rendering a search range of 10,530 km2. Date, time, % cloud cover, and the specific region of each image were recorded for comparison to tag data.
The spatial resolution was dependent on the wavelength of light received, with a maximum possible resolution of 10 m (100 m2) and a minimum of 60 m (3600 m2).

Estimated surface area viewed from camera tag data
Area of view simulations were used to compare the scale of on-animal video observations to the pixel size of satellite imagery. Visible surface area (m2) was calculated as a function of typical depth, pitch, visibility and tag angle of view for each tag deployment using a custom MATLAB script. Camera parameters included the video resolution (number of pixels in height and width), and in-water camera calibration tests provided the vertical and horizontal angles of view. The distance from the top of the image to the center of the visible surface in the image (Ds) and from the center to the edge of the visible surface in the image (Dl) were calculated from initial values including the depth ($P$), the angle from the surface to the center of view (90° − pitch) ($\beta$), and the angle of view from camera calibration tests ($\alpha$) (Fig. 2). The angle between the surface and the depth line ($\theta$) was assumed to be 90°. Angles $\gamma$, $\delta$ and $\epsilon$ can be solved for by subtracting $\theta$ and the associated bottom angle from 180°. Ds and Dl may then be determined as follows:
$$\mathrm{Do}=\sin\left(\beta-\tfrac{1}{2}\alpha\right)\times\frac{P}{\sin\gamma},$$
$$\mathrm{Ds}+\mathrm{Do}=\sin\beta\times\frac{P}{\sin\delta},$$
$$\mathrm{Do}+\mathrm{Ds}+\mathrm{Dl}=\sin\left(\beta+\tfrac{1}{2}\alpha\right)\times\frac{P}{\sin\epsilon},$$
$$\mathrm{Ds}=\text{(result of Eq. 2)}-\text{(result of Eq. 1)},$$
$$\mathrm{Dl}=\text{(result of Eq. 3)}-\text{(result of Eq. 2)}.$$
Measurements for area of view estimates.
(above) Lateral view of area estimate measurements, where the vertical angle of view of the camera is represented by angle a, and the angle between the depth line and the center of view (as calculated from pitch) is represented as angle b. (right) Overhead view of area estimate measurements, where Wl represents the width of the camera field of view at the furthest point of visible surface from the tag, Wc represents the width at the center of the visible surface, and Ws represents the width at the closest point of visible surface to the tag (top of screen).

In instances of high pitch where Do < 0, Do is assumed to be 0, as this indicates there is no gap between the nearest point of the surface and the area within the FOV. The width of the image at the first row, the center, and the edge of the visible surface in the image was calculated using a series of right triangles, with the side furthest from the tag representing ½ of the width at each of the three points of measurement. X, Y and Z represent the hypotenuses of right triangles between the tag position below the surface, the top, center and edge of the image (respectively), and the edge of the horizontal angle at each point (Fig. 2, overhead view). The inner sides of these triangles are FOVshort, the center of view, and FOVlong, which are not visible in the overhead view (see Fig. 2, lateral view). The lengths of the inner sides were determined using:
$$\text{inner length}=\sin\theta\times\frac{P}{\sin x},$$
where $x$ is the angle ($\gamma$, $\delta$, $\epsilon$) corresponding to the inner side. Angles $\kappa$, $\lambda$ and $\mu$ are calculated by subtracting $\nu$ (the 90° angle from the center to the edge of the image) and $\iota$ (½ the horizontal field of view) from 180° (Fig. 2).
With this information the distances can be calculated as:
$$\tfrac{1}{2}\mathrm{Ws}=\sin\iota\times\frac{\mathrm{FOVshort}}{\sin\kappa},$$
$$\tfrac{1}{2}\mathrm{Wc}=\sin\iota\times\frac{\text{center of view}}{\sin\lambda},$$
$$\tfrac{1}{2}\mathrm{Wl}=\sin\iota\times\frac{\mathrm{FOVlong}}{\sin\mu}.$$
The results are then multiplied by two to yield the width at each point. For instances where FOVshort or FOVlong exceeded the visibility, the height (in number of pixels) was truncated to only represent pixels within the visible range. For our simulations, we assumed no camera distortion, as all cameras used in this study were designed for in-water filming. For other cameras and lenses (such as fisheye, in-air, or VR), distortion should be accounted for, as not all pixels may represent a similar portion of the angle of view. A fitted polynomial regression using the polyfit function in MATLAB was implemented to interpolate a smooth curve indicating the distance to the far edge of each pixel for both the vertical and horizontal measurements. The height of the pixels in each row (N) was determined as:
$$\text{Pixel height}(N)=\text{vertical polyfit}(N)-\text{vertical polyfit}(N-1).$$
The width of each pixel in each row was calculated as:
$$\text{Pixel width}(N)=\frac{\text{horizontal polyfit}(N)}{\text{width (total number of pixels)}}.$$
The height and width results were multiplied element by element to create a matrix representing the area of each pixel in the image. The sum of these elements produces the estimated area. The included simulations represent the area in a still frame, assuming the given parameters and the value on the x-axis.
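Because θ = 90°, the law-of-sines relations above collapse to simple tangent identities (e.g., Do = P·tan(β − α/2)). The surface-distance and width calculations can be sketched as follows, assuming a distortion-free camera; this is our own reimplementation for illustration, not the authors' MATLAB script:

```python
import math

def surface_distances(P, pitch_deg, vert_aov_deg):
    """Return (Do, Ds, Dl) in metres for a camera at depth P (m).

    A view ray making angle phi with the vertical depth line meets the
    surface at horizontal distance P*tan(phi), which is what Eqs. 1-3
    compute via the law of sines with theta = 90 deg. Rays at or below
    horizontal never reach the surface; the text handles those cases by
    truncating to the visibility range.
    """
    beta = math.radians(90.0 - pitch_deg)      # centre ray, from vertical
    half = math.radians(vert_aov_deg) / 2.0
    near = P * math.tan(beta - half)           # Eq. 1: Do
    mid = P * math.tan(beta)                   # Eq. 2: Do + Ds
    far = P * math.tan(beta + half)            # Eq. 3: Do + Ds + Dl
    Do = max(near, 0.0)                        # high pitch: no gap
    return Do, mid - Do, far - mid

def width_at(P, ray_from_vertical_rad, horiz_aov_deg):
    """Full image width at the surface point hit by one ray.

    The slant range (FOVshort, centre of view, or FOVlong) is
    P / cos(ray angle); half the width is that slant range times
    tan(iota), where iota is half the horizontal angle of view.
    """
    slant = P / math.cos(ray_from_vertical_rad)
    return 2.0 * slant * math.tan(math.radians(horiz_aov_deg) / 2.0)
```

For example, a tag at 5 m depth pitched 45° up with a 60° vertical angle of view sees the surface from ~1.3 m to ~18.7 m ahead; summing per-pixel areas over this footprint yields the area estimates plotted in Fig. 5.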
This information may be extrapolated to moving video using the function:
$$\text{Video area}=\text{Area estimate}+\sum_{n=2}^{\text{number of frames}}\mathrm{Wl}_{n}\times\left(\frac{\mathrm{speed}_{n}\ (\text{m/s})}{\text{frames per second}}-\left(\mathrm{depth}_{n-1}-\mathrm{depth}_{n}\right)\right),$$
where $n$ represents individual video frames. With reliable speed estimates, this method may produce a more accurate area estimate for the video assessment, particularly in instances of high speed and low pitch. For our camera parameters, roll was found to be arbitrary in the area estimates (thus all simulations assumed roll = 0). This is to be expected with a forward-facing camera, as the image is centered on the roll axis. However, roll must be accounted for in any tag in which the camera is not oriented as such. Consider also that at instances of higher roll in a deployment, the animal's body may take up more of the frame and reduce the visible surface area. For this reason, this study only includes ice observations where the roll of the tag is < ± 90°.

Visibility conditions on the peninsula are known to vary widely with productivity, and anecdotally, visibility has been estimated to be as high as 40 m. During these deployments, the depth at which the surface became visible indicated lower visibility conditions of ~ 5–15 m, depending on the time and location of the observation. The visibility values tested in our simulations are intended to reflect the conditions occurring during the deployments, as opposed to theoretical maximum/minimum conditions reported in the area.

Scoring of ice observations
Animal tag video was viewed with Behavioral Observation Research Interactive Software (BORIS) [18]. Using BORIS, an observer marked the initial and final surfacing of each surface series for evaluation (Fig. 3). Two independent observers evaluated ice concentration as a % value in one of 6 concentration categories: 0%, 1–20%, 21–40%, 41–60%, 61–80%, 81–100% (Fig.
4, Additional File 1). If scores were not in agreement for a given point, an additional evaluation was made by a third observer, and the median value was taken as the ice concentration.

Combining environmental and behavioral data. Ice observations in relation to the dive profile and behavioral information from a segment of the second deployment. Ice observations are represented by the colored dots over the dive profile in the upper graph, with the pitch, roll and heading of the animal represented in the lower graph. Ice observations collected from the same platform as the sensor data allow for temporal precision in linking the overhead ice coverage to animal behavior.

Ice categories. Example of a typical ice image for each category from the animal-borne tag.

Each evaluation took place from the first frame in which the surface is visible upon ascent until the frame in which the camera reaches the surface of the water (or when the animal initiates its descent, if the camera does not break the surface). The steepest angle to the visible surface (FOVshort) varied between pitches of 23–75 degrees, with an average pitch of 47 degrees. Continuously assessing video allowed the observers to use movement as a cue for the size and depth of larger chunks of ice, as well as to help identify ice concentration in sunny areas. The observations included in this study primarily occurred in open water or within glacial bays along the coast of the West Antarctic Peninsula. During summer in the bays, the ice coverage primarily consists of broken chunks (brash) of glacial and/or marine ice. In this study, "surface ice" refers to any ice observed in the marine environment, though not all of the ice may be of marine origin. Each ice observation in this study is synchronized with satellite GPS time prior to deployment. This allows video observations to be accurately aligned with behavioral data provided by other tag sensors (Fig. 3).
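The bin-and-consensus scoring rule can be written out explicitly. This is a sketch using our own encoding of the six bins as indices 0–5; it is not part of the BORIS workflow itself:

```python
import math

def sic_bin(percent):
    """Map an estimated surface ice concentration (%) to one of the six
    scoring bins: 0% -> bin 0, 1-20% -> 1, 21-40% -> 2, 41-60% -> 3,
    61-80% -> 4, 81-100% -> 5."""
    if not 0 <= percent <= 100:
        raise ValueError("concentration must be between 0 and 100%")
    if percent == 0:
        return 0
    # ceil so that fractional concentrations (e.g. 0.5%) fall in bin 1
    return min((math.ceil(percent) - 1) // 20 + 1, 5)

def consensus(score_a, score_b, score_c=None):
    """Two observers score each surfacing independently; on disagreement
    a third observer scores it and the median of the three bins is used."""
    if score_a == score_b:
        return score_a
    if score_c is None:
        raise ValueError("disagreement: a third observer's score is needed")
    return sorted([score_a, score_b, score_c])[1]   # median of three
```

Encoding the bins as ordered indices makes the median well defined even though the underlying categories are ranges rather than point values.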
As a result, this methodology provides extremely fine temporal resolution for comparing ice observations with any other tag-derived data. From our search criteria, the Sentinel-Hub EO Browser yielded 21 satellite scenes with an average cloud cover of 80.47%. 12 of the images in this area overlapped with tag GPS data (regardless of time), with the cloud cover in this subset averaging 79.04%. None of the satellite images were taken during periods of tag deployments, though one image was taken within a 24 h window (~ 2 h 34 min prior to the deployment on 02/28/18). Cloud cover for this image was recorded as 91.48%, rendering an inadequate view of the surface for ice analysis.

Area of view estimates
Area of view simulations (Fig. 5) show that under reasonable conditions the visible surface area from the tag can be expected to represent an area similar to or finer than a typical 10 m (100 m2) pixel of visible satellite imagery. As depth decreases (Fig. 5a, d), the AOV was found to increase as more pixels come within visible range. When all possible pixels contain visible surface, the AOV then decreases as the camera ascends towards the surface (imagine Fig. 2 as the tag travels up the depth line P). From the point at which visibility impacts the tag field of view, our simulations show a linear decline towards an area of 0 m2 as visibility is reduced (Fig. 5b, e). Moderate pitch was shown to produce the largest area estimates, with reduced area as the tag approached a vertical or horizontal position (Fig. 5c, f). To relate these simulations to our deployments, tag data for the first full view of the surface at each tag position (with tag slips determined by changes in accelerometer orientation) were recorded for 36 independent tag positions (Table 1).

Area of view simulations. Simulated area of view for different conditions of depth, pitch, and visibility.

For the first fully visible frame of surface, we can assume that FOVlong (Fig. 2) represents the visibility.
If we assume the mean depth and pitch (Table 2) and solve for the length of FOVlong, we find a typical visibility of 10.00 m. Under these parameters, the average image contains 43.46 m2 of visible surface. These findings indicate that the typical area viewed by a camera tag is approximately half the area of a single visible spectrum satellite pixel.

Table 2 Tag depth and orientation data

Ice observations
Across all 7 deployments, observers recorded ice concentration for 863 surfacings (Table 1). The tag positions and ice observations indicated in Table 1 exclude instances where the tag was positioned at a roll angle greater than ± 90 degrees from the dorsal side of the animal. For some of the deployments, this means that ice concentration was not recorded for the entirety of the video. Our results indicate that Antarctic minke whales in the coastal bays around the West Antarctic Peninsula occupy low ice content areas the majority of the time during which video was recorded, with 84.36% of observations in ≤ 20% ice cover (Table 3). Ice was present in 48.55% of observations; however, only 8.22% were recorded as > 40% ice cover.

Table 3 Observed ice concentration

Individual deployments, however, showed variability in observed ice content, depending on the location of the deployment and the behavior of the animal. The left side of Fig. 6 shows deployment bb180125-30 from the Penola Strait. The pseudo-track reveals that the animal is primarily moving in open water along a fairly straight path, indicating traveling or resting behavior. This contrasts with the deployment on the right side of Fig. 6, bb180227-45 from Andvord Bay, in which the animal demonstrates fidelity to locations with high ice concentration, indicating the occurrence of foraging behavior under ice. By linking ice observations with associated tag data (Figs. 3, 6), the relationships between location, ice cover and animal behavior can be more accurately investigated.

Ice concentration comparison.
Pseudo-tracks with ice observations (indicated by colored markers) of deployments bb180125-30 (left) and bb180227-45 (right) and accompanying ice concentration distributions Animal-borne video data offer previously inaccessible insight into how marine species, such as the Antarctic minke whale, interact with their physical environment. Previous work in the region has relied on visual sighting surveys that provide information on animal distribution at relatively coarse spatial scales and link these to satellite imagery at scales orders of magnitude greater than what we are able to measure from the animal's perspective [19, 20]. Similarly, the only published account of Antarctic minke whale tag-derived behavior in relation to surface ice used positions from Argos tags, which have error estimates of up to several kilometers, and linked behavioral state (transiting versus area-restricted search) to coarse satellite-derived sea ice concentration data [11]. Our method for describing surface ice concentration at such fine spatiotemporal scales, with continuous, reliable information on behavioral state from motion-sensing tags, will now allow greater quantification of how the behavior of this species is affected by ice in the environment. This information is critical for understanding both the ecology of the species and how it will be affected by changing ice conditions. One of the advances that our method provides relative to current satellite technology is the spatial and temporal precision in linking environmental observation to fine-scale behavioral data. Surface ice is incredibly dynamic and can change quickly depending on local weather and oceanographic conditions [12]. Thus, any difference in the timing of animal behavior and ice measurements could generate spurious results on how ice influences animal behavior. By combining behavioral observations and ice concentration on the same tag platform, we can eliminate any such offset.
This will allow future studies to compare environmental features with animal behaviors ranging from feeding rates on a dive-by-dive basis to specific kinematic strategies of a single event lasting mere seconds (Fig. 3). Our area of view analysis reveals that the spatial scale of our observations is equivalent to or finer than a single satellite pixel representing 10–60 m (100–3600 m2). These results indicate that our method offers significant increases in temporal accuracy relative to SAR and visible spectrum ice assessments, and in spatial (and, depending on the timing of satellite passes, likely temporal) accuracy relative to microwave radiometry. In this study, we chose to include sub-pixel (satellite) information in the form of ice concentration bins. This allowed us to further characterize the percentage of ice-covered environment around the animal, but inevitably introduces subjectivity into the assessments. We chose not to adopt a computer vision or machine learning approach to ice quantification due to the challenges presented by the highly variable surface conditions and backlighting. Still, observer subjectivity may be decreased by limiting the number of bins. For areas where remote data are scarcely available, or where sea ice is permanent (e.g., fast ice), even a binary (ice or no ice) evaluation may provide valuable information that is otherwise unobtainable. Though our method allows for a robust assessment of the environment proximate to the animal, it lacks the ability to accurately map the ice distribution of the greater environment beyond the tag's visual field of view. To adequately answer questions that require such knowledge, a combination of techniques may be required. Recent advances in remote sensing drone technology may facilitate the development of an approach to study the dynamics of ice distribution at a temporal scale relevant to animal behavior [21].
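A small helper makes a binning scheme of this kind explicit and reproducible. This is our illustrative sketch — the 20%-wide bin labels follow those quoted in this study (e.g. "1-20%" and "81-100%"), but the function itself is an assumption, not the authors' actual scoring tool.

```python
import math

def ice_bin(percent_cover, width=20):
    """Map an observed % surface-ice cover (0-100) to a bin label.

    0% is kept as its own "no ice" bin; positive covers fall into
    fixed-width bins such as "1-20%" or "41-60%".
    """
    if not 0 <= percent_cover <= 100:
        raise ValueError("percent cover must be within [0, 100]")
    if percent_cover == 0:
        return "0%"
    # Lower edge of the bin containing this cover value.
    lo = ((math.ceil(percent_cover) - 1) // width) * width + 1
    return f"{lo}-{lo + width - 1}%"
```

Setting width=100 collapses the scheme to the binary (ice or no ice) evaluation mentioned above, which is the limiting case of reducing the number of bins to lower observer subjectivity.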
As drones can cover areas quickly and collect high-resolution imagery below clouds, they can be used to establish a precise estimate of the ice-covered habitat available in a given location, either as a frequency distribution of percent ice cover or as some other metric of ice cover types and concentrations. This information can then be compared to tag-observed surface ice to determine whether the animal utilizes ice cover at frequencies similar to or different from what is available. This approach will allow a description of animal habitat usage without having to rely on potentially inaccurate GPS data to locate the animal within a specific photograph of ice distribution. While the main function of this technology and method is to better understand the ecology of the animals in question, tag video may also be useful in concert with remote sensing to understand sub-pixel variation in methods of ice classification based on coarse-resolution satellite data. For example, close-range tag-derived ice data may be useful for understanding sub-pixel variation in ice types within the ice coverage presented in passive microwave imagery. However, satellite validation at the submesoscale requires greater spatial coverage of animal-borne video and high-quality locational information, especially for emerging high-resolution satellite data. The ability to precisely and accurately understand the relationships between different animals and their environment is a necessary, and oftentimes lacking, piece of information that must be available in order to effectively understand the impacts of environmental change. This method sets a precedent for using tag video data to evaluate ice conditions, allowing us to link environmental and behavioral observations with spatial and temporal precision that is not currently possible using satellite remote sensing methods.
In rapidly changing polar regions, this information is particularly important for the numerous species that have evolved to rely on ice as a substrate for critical life history events. Specifically for Antarctic minke whales, this method will allow us to better understand whether and how rapid warming and changes to ice conditions around the Antarctic Peninsula will ultimately make this area unsuitable for them to inhabit. Data are available from the corresponding author upon reasonable request. Surface ice concentration AMW: Antarctic minke whale Brighton CH, Thomas ALR, Taylor GK. Terminal attack trajectories of peregrine falcons are described by the proportional navigation guidance law of missiles. Proc Natl Acad Sci USA. 2017;114(51):13495–500. Gleiss AC, Schallert RJ, Dale JJ, Wilson SG, Block BA. Direct measurement of swimming and diving kinematics of giant Atlantic bluefin tuna (Thunnus thynnus). R Soc Open Sci. 2019;6(5):190203. Boehlert G, Costa D, Crocker D, Green P. Autonomous pinniped environmental samplers: using instrumented animals as oceanographic data collectors. J Atmospheric Ocean Technol. 2001;18(11):1882–93. Weimerskirch H, Filippi DP, Collet J, Waugh SM, Patrick SC. Use of radar detectors to track attendance of albatrosses at fishing vessels. Conserv Biol. 2018;32(1):240–5. Beitsch A, Kaleschke L, Kern S. Investigating high-resolution AMSR2 sea ice concentrations during the February 2013 fracture event in the Beaufort Sea. Remote Sens. 2014;6(5):3841–56. Kern S, Lavergne T, Notz D, Pedersen LT, Tonboe RT, Saldo R, et al. Satellite Passive Microwave Sea-Ice Concentration Data Set Intercomparison: Closed Ice and Ship-Based Observations. Cryosphere Discussions. 2019:1–55. EO Browser. Available from: https://apps.sentinel-hub.com/eo-browser/. Li J, Roy D. A global analysis of sentinel-2A, sentinel-2B and Landsat-8 data revisit intervals and implications for terrestrial monitoring. Remote Sensing. 2017;9(9):902. Zakhvatkina N, Smirnov V, Bychkova I.
Satellite SAR data-based sea ice classification: an overview. Geosciences. 2019;9(4):152. Friedlaender AS, Goldbogen JA, Nowacek DP, Read AJ, Johnston D, Gales N. Feeding rates and under-ice foraging strategies of the smallest lunge filter feeder, the Antarctic minke whale (Balaenoptera bonaerensis). J Exp Biol. 2014;217(16):2851–4. Lee JF, Friedlaender AS, Oliver MJ, DeLiberty TL. Behavior of satellite-tracked Antarctic minke whales (Balaenoptera bonaerensis) in relation to environmental factors around the western Antarctic Peninsula. Animal Biotelemetry. 2017;5(1):23. Stammerjohn S, Maksym T. Gaining (and losing) Antarctic sea ice: Variability, trends and mechanisms. Sea Ice. 3rd ed. Chichester: Wiley; 2016. p. 261–289. https://doi.org/10.1002/9781118778371.ch10. Cade DE, Friedlaender AS, Calambokidis J, Goldbogen J. Kinematic diversity in rorqual whale feeding mechanisms. Curr Biol. 2016;26(19):2617–24. Goldbogen JA, Cade DE, Boersma AT, Calambokidis J, Kahane-Rapport SR, Segre PS, et al. Using digital tags with integrated video and inertial sensors to study moving morphology and associated function in large aquatic vertebrates. Anat Rec. 2017;300(11):1935–41. Cade DE, Barr KR, Calambokidis J, Friedlaender AS, Goldbogen JA. Determining forward speed from accelerometer jiggle in aquatic environments. J Exp Biol. 2018;221:2. Cade DE, Barr KR, Calambokidis J, Friedlaender AS, Goldbogen JA. Determining forward speed from accelerometer jiggle in aquatic environments. J Exp Biol. 2018;221(2):170449. Wilson RP, Liebsch N, Davies IM, Quintana F, Weimerskirch H, Storch S, et al. All at sea with animal tracks; methodological and analytical solutions for the resolution of movement. Deep-Sea Res Part II. 2007;54(3–4):193–210. Friard O, Gamba M. BORIS: a free, versatile open-source event-logging software for video/audio coding and live observations. Methods Ecol Evol. 2016;7(11):1325–30. Herr H, Kelly N, Dorschel B, Huntemann M, Kock KH, Lehnert LS, et al.
Aerial surveys for Antarctic minke whales (Balaenoptera bonaerensis) reveal sea ice dependent distribution patterns. Ecol Evol. 2019;9(10):5664–82. Williams R, Kelly N, Boebel O, Friedlaender AS, Herr H, Kock KH, et al. Counting whales in a challenging, changing environment. Sci Rep. 2014;4(1):4170. Williams GD, Fraser AD, Lucieer A, Turner D, Cougnon E. Drones in a cold climate. 2016. We thank the crew and ASC science support on the ARSV Laurence M Gould for facilitating the research. We are grateful to Doug Nowacek, Shirel Kahane-Rapport, Julian Dale, Patrick Gray, KC Beirlich, and Emma Levy for their help with field work and data collection, Angela D'Amico for thoughtful comments on our manuscript, and Ryan Reisinger for his help with the deployment maps. We are also grateful to Dr. Jenn Burns, Tim McGovern, Nature McGinn, Polly Penhale, and others at the National Science Foundation for their support of this research. This work was also supported by Chris Johnson and the World Wildlife Fund. All research was conducted under UCSC IACUC/ACUP Friea1706, NMFS Permit 14809, and ACA permit 2015–011. This research was funded by a National Science Foundation Office of Polar Programs grant to ASF, award Number: OPP-1643877. This award provided funds and logistic support for field work and data analysis. Funding was also provided by the World Wildlife Fund as part of an award to ASF to support data analysis. Jacob M. J. Linsky Present address: University of Queensland, Brisbane, QLD, Australia Institute of Marine Sciences, University of California Santa Cruz, Santa Cruz, CA, USA Jacob M. J. Linsky, Nicole Wilson, David E. Cade & Ari S. Friedlaender Hopkins Marine Station, Stanford University, Pacific Grove, CA, USA David E. Cade & Jeremy A. Goldbogen Marine Robotics and Remote Sensing Lab, Duke University Marine Laboratory, Division of Marine Science and Conservation, Nicholas School of the Environment, Duke University, Beaufort, NC, USA David W.
Johnston Nicole Wilson David E. Cade Jeremy A. Goldbogen Ari S. Friedlaender JL, NW, DC, DJ, JG, and AF designed the study. DC, JG, and AF collected data. JL, NW, and DC analyzed data. JL, NW, DC, DJ, JG, and AF interpreted the data. JL drafted the manuscript and DC, DJ, JG, NW and AF edited the manuscript. All authors read and approved the final manuscript. Correspondence to Jacob M. J. Linsky. Tag video of ice bins 1-20%, 41-60%, & 81-100% (deployment bb180227-45). Linsky, J.M.J., Wilson, N., Cade, D.E. et al. The scale of the whale: using video-tag data to evaluate sea-surface ice concentration from the perspective of individual Antarctic minke whales. Anim Biotelemetry 8, 31 (2020). https://doi.org/10.1186/s40317-020-00218-8 Received: 16 April 2020 Accepted: 24 September 2020 Tag-video Minke whale Ice concentration Biologging
Fixed-Depth Two-Qubit Circuits and the Monodromy Polytope Eric C. Peterson, Gavin E. Crooks, and Robert S. Smith Rigetti Quantum Computing, 2919 Seventh St, Berkeley, CA 94710 For a native gate set which includes all single-qubit gates, we apply results from symplectic geometry to analyze the spaces of two-qubit programs accessible within a fixed number of gates. These techniques yield an explicit description of this subspace as a convex polytope, presented by a family of linear inequalities themselves accessible via a finite calculation. We completely describe this family of inequalities in a variety of familiar example cases, and as a consequence we highlight a certain member of the ``$\mathrm{XY}$--family'' for which this subspace is particularly large, i.e., for which many two-qubit programs admit expression as low-depth circuits. Featured image: Two-qubit programs which admit circuits using 3 (red), 4 (yellow), and 5 (blue) applications of √CZ. @article{Peterson2020fixeddepthtwoqubit, doi = {10.22331/q-2020-03-26-247}, url = {https://doi.org/10.22331/q-2020-03-26-247}, title = {Fixed-{D}epth {T}wo-{Q}ubit {C}ircuits and the {M}onodromy {P}olytope}, author = {Peterson, Eric C. and Crooks, Gavin E. and Smith, Robert S.}, journal = {{Quantum}}, issn = {2521-327X}, publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}}, volume = {4}, pages = {247}, month = mar, year = {2020} }
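The polytope analysis above is phrased in terms of two-qubit gates modulo single-qubit operations. A minimal numerical probe of that equivalence is sketched below: it computes the Makhlin local invariants of a two-qubit unitary in the magic basis, following the conventions of Makhlin (Quantum Inf. Process., 2002). The specific basis matrix, helper names, and test gate are our illustrative choices, not code from the paper.

```python
import numpy as np

# Magic basis: columns are phased Bell states; conjugation by Q takes
# local (single-qubit) gates to real orthogonal matrices.
Q = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]], dtype=complex) / np.sqrt(2)

def makhlin_invariants(u):
    """Local invariants (G1 complex, G2 real) of a two-qubit unitary u.

    Gates with equal invariants are equivalent up to single-qubit
    operations (and global phase).
    """
    m = Q.conj().T @ u @ Q   # gate expressed in the magic basis
    mm = m.T @ m             # symmetric matrix carrying the invariants
    det = np.linalg.det(u)
    g1 = np.trace(mm) ** 2 / (16 * det)
    g2 = (np.trace(mm) ** 2 - np.trace(mm @ mm)) / (4 * det)
    return g1, g2.real

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
```

With these conventions the invariants distinguish CNOT (G1 = 0, G2 = 1) from the identity (G1 = 1, G2 = 3), matching the standard table of two-qubit local-equivalence classes.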
Current strategies for treatment of intervertebral disc degeneration: substitution and regeneration possibilities Sebastião van Uden ORCID: orcid.org/0000-0001-9031-23711,2,4,5, Joana Silva-Correia1,2, Joaquim Miguel Oliveira1,2,3 & Rui Luís Reis1,2,3 Biomaterials Research volume 21, Article number: 22 (2017) Intervertebral disc degeneration has an annual worldwide socioeconomic impact, masked as low back pain, of over 70 billion euros. This disease is highly prevalent in the working-age population, which raises its socioeconomic impact over the years. Acute physical trauma or prolonged intervertebral disc mistreatment triggers a negative biochemical shift in the catabolic-anabolic balance that progresses to a chronic degenerative disease. Current biomedical treatments are not only ineffective in the long run, but can also cause degeneration to spread to adjacent intervertebral discs. Regenerative strategies are desperately needed in the clinics, such as: minimally invasive nucleus pulposus or annulus fibrosus treatments, total disc replacement, and cartilaginous endplate decalcification. Herein, the state of the art of intervertebral disc regeneration strategies is reviewed from the perspective of cells, scaffolds, or constructs, including both popular and unique tissue engineering approaches. The premises for cell type and origin selection, or even the absence of cells, are explored. Choices of raw materials and scaffold fabrication methods are evaluated. Extensive studies have been developed towards full regeneration of the annulus fibrosus and nucleus pulposus, together or separately, with a long set of different rationales already reported. Recent works show promising biomaterials and processing methods applied to intervertebral disc substitutive or regenerative strategies.
Given the abundance of studies presented in the literature aiming at intervertebral disc regeneration, it is interesting to observe how the cartilaginous endplates have been extensively neglected, despite being a major source of nutrient and water supply for the whole disc. Several innovative avenues for tackling intervertebral disc degeneration are being reported – from acellular to cellular approaches – but cartilaginous endplate regeneration strategies remain unaddressed. Interestingly, patient-specific approaches show great promise in respecting patient anatomy and may thus allow quicker translation to the clinics in the near future. The intervertebral disc (IVD) can be subjected to a great range of changes throughout a person's life [1]. An underlying IVD degeneration (IDD) might develop alongside these age-related changes. IDD can be triggered by a single event of acute overloading, such as the lifting of a heavy object. It can alternatively derive from long-term, repetitive IVD mistreatment without providing the proper conditions and time for the tissue to recover. With aging, this disease can be induced by a single acute overloading event of progressively lower intensity. IDD may progress to a serious condition due to important physiological changes that can include: water loss, decreased synthesis of healthy extracellular matrix (ECM) with phenotype change, increased cell senescence, and several other modifications at the biomolecular level [2, 3]. Ultimately, such biological transformations can lead to severe morphological changes, expressed in the form of pathologies.
The biomechanical functioning of the IVD relies on a balance between the three main tissues that compose it: two cartilaginous endplates – hyaline-like tissues located at the edges of the neighbouring vertebrae; the nucleus pulposus (NP) – a gelatinous tissue at the centre of the IVD; and the outer and inner annulus fibrosus (AF) – two partially concentric, strong, elastic-like tissues surrounding the whole IVD [4]. The latter two, although slightly different from each other, will be regarded in this review as one AF when their distinction is not required, for simplicity. The AF, as a whole, behaves like a strong elastic material. This, however, could be a reductionist view of this tissue, since this behaviour could derive from a low concentration of strategically located elastic fibres that convey recoil properties to the collagen fibre bundles, returning these to their pre-stressed dimensions when relaxed. Biomechanics is strongly influenced by the NP's water concentration. Water presence is, in turn, directly related to the biochemical composition of the tissue. In this respect, proteoglycans (PGs) play a major role in this water-biochemistry relation, due to their extremely hydrophilic nature. PGs are responsible for a water concentration of up to 80% within a young and healthy NP [5]. When the IVD is compressed, the water molecules are released from the PGs, following the lines of mechanical tension that progress from the cartilaginous endplates through the NP to the outer edges of the AF. After the loading cycle is finished, water is again attracted towards the PGs at the centre of the IVD, through diffusion from the closest blood vessels. In healthy IVDs, this nearby vascularisation is located at the cartilaginous endplates, which provide hydration as well as nutrition to the whole IVD [6]. With aging, the cartilaginous endplates progressively lose permeability due to calcification, cutting off this essential water supply.
However, due to the complexity of the pathways of water, nutrients, waste and oxygen within the whole IVD, the literature is not absolutely conclusive [7]. It seems that, alongside the progressive shutdown of the endplates' pathway, the water concentration gradually decreases within the IVD. This might force water to return to the NP through the AF, promoting an unhealthy regional homeostasis with impaired water, nutrient and waste renewal. It is clear, nevertheless, that it leads to: NP ECM remodelling imbalance, IVD loss of hydration as well as height decrease, and abnormal force distribution. Ultimately, all these changes are responsible for the appearance of IDD morphological signs [8]. The fundamentals related to the biological and molecular changes arising in an IVD under degeneration are described. The set of prospective and recent studies that have been reported, ranging from biomaterials-based to cellular approaches, is also overviewed herein (Fig. 1). Image bundle of current state-of-the-art research strategies to treat IDD, described in this review, such as (1) injectable NP hydrogel, (2) minimally invasive scaffold for the AF, (3) full AF scaffold, and (4) biphasic scaffold for total IVD replacement. Permissions: 1 – images used in this scheme were adapted from two articles of Silva-Correia et al. [62, 81]; 2 – images used in this scheme were adapted from Xin et al. [93] under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/); 3 – images used in this scheme were adapted from van Uden et al. [99] © IOP Publishing. Reproduced with permission. All rights reserved; 4 – images used in this scheme were adapted from Choy et al. [95] under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) Intervertebral disc degeneration (IDD): biological and molecular changes Biological and molecular changes underlie the morphological signs of IDD.
Several cellular changes occur during IDD, namely in cell type, concentration, death, proliferation, senescence, and phenotype. All these changes adversely affect biochemical synthesis and, consequently, tissue composition, which leads to unbalanced ECM remodelling and, ultimately, to hydration loss [9]. The different tissues of the IVD differ in both cell type and cell concentration. The cartilaginous endplates, resembling hyaline cartilage, are composed of chondrocytes. Elongated fibroblast-like cells are found in the outer part of the AF, while the inner part of the AF has more rounded chondrocyte-like cells. The cell population in these tissues does not change significantly with aging. However, it can vary greatly within the NP [1]. The cartilaginous endplates and the whole AF, like most of the spinal structures, derive from the mesoderm germ layer, but the NP does not [10]. The NP originates in the endoderm germ layer, which is also the notochordal cells' origin. It is believed that the young NP tissue does not contain chondrocyte-like cells, but only notochordal cells, which disappear by the end of the first decade of life [11]. The notochordal cells are gradually replaced by chondrocyte-like cells, which are believed to migrate from the inner AF, from the cartilaginous endplates, or both [12]. There is, however, some controversy around the cell concentration changes within the NP. Zhao et al. [1] reported that cell concentration fluctuates in parallel with the change in predominant cell type: as notochordal cells start to decrease, the cellular concentration declines significantly, creating positive feedback for chondrocyte-like cells to migrate into and proliferate within the NP. These cells gain concentration along with IDD progression. Bae et al. [13], however, claim that the cellular concentration in degenerated IVDs is lower than in healthy IVDs due to gradual loss of optimal cellular environment conditions.
Both hypotheses present strong arguments, possibly indicating wide variability in how IVD aging and degeneration progress in humans. The cellular environment progressively changes due to scarcity of nutrition and waste removal pathways as the cartilaginous endplates calcify, causing initial cell death (Fig. 2) [14]. With IDD progression, blood vessels, accompanied by nerves, grow into the AF and, at a later stage, into the NP [15]. This tends to increase nutritional accessibility and waste removal rates, which in turn can provide conditions for increased cellular concentration within the NP. With this neo-vascularisation nearby, however, the oxygen concentration also rises, shifting the NP towards a normoxic biochemical environment that diverges from that of a healthy IVD [16]. NP cells' phenotype and activity are stimulated by hypoxia. Ultimately, a prolonged excessive availability of oxygen leads these cells to a senescent state, which negatively impacts long-term cellular concentration [17, 18].

Fig. 2: Cascade of events associated with IDD morphological signs, starting with an acute, or a repeated set of acute, loading forces that, as tissue aging progresses, become ever lighter yet still set off IDD. The highlighted areas divide the events by tissue or group of tissues – red: IVD; green: NP; blue: AF and cartilaginous endplates. NP: nucleus pulposus; AF: annulus fibrosus; IVD: intervertebral disc; CEP: cartilaginous endplates; IDD: IVD degeneration disease

Fig. 3: Scheme of a tissue engineering strategy applied to the IVD

Fig. 4: Micrograph of human NP cells after 3 days in culture. Scale bar: 50 μm

Fig. 5: Methacrylated gellan gum discs with a diameter of 10 mm and a height of 5 mm. Scale bar: 10 mm

Fig. 6: Micrograph of methacrylated gellan gum hydrogel with one million rabbit NP cells encapsulated, after overnight culturing. Scale bar: 200 μm

Fig. 7: Micro computed tomography image (top view) of the rabbit AF, surrounding a dark area, which is the NP. Acquisition parameters: pixel size – 13.18 μm, source – 89 kV and 112 μA. Scale bar: 250 μm

Fig. 8: Photograph of a 3D printed PCL rabbit IVD replica. Scale bar: 5 mm

A healthy AF is largely composed of collagen type I, whereas the NP is mainly composed of collagen type II and aggrecan [1]. Other molecules are also present in the NP at low concentration, namely fibronectin; collagen types I, III, V, VI, IX and XI; and other types of PGs, such as biglycan, decorin and fibromodulin [9]. Healthy IVDs also produce catabolic molecules, whose expression increases as degeneration develops. These molecules are matrix metalloproteinases and aggrecanases, responsible for breaking down the ECM to allow its natural remodelling cycle. Matrix metalloproteinases-1, 2, 3, 7, 8 and 13 are expressed in the NP. In addition, there is also an increased production of cytokines, such as tumour necrosis factor-α and interleukins-1α and -1β. These molecules also promote matrix metalloproteinases' synthesis, which in degenerated IVDs can reach levels that have a devastating effect on the ECM [19]. Zhao et al. have summarized the biochemical changes caused by IDD, which complements this brief description of matrix remodelling mediation [1]. The phenotype change undergone by NP cells is possibly the factor most responsible, within the NP, for the morphological transformations caused by IDD. This shift in cell expression directly influences the hydrophilic anabolic-catabolic ECM balance. Overall, IVD hydration decreases while more matrix is degraded than is produced, ultimately leading to IVD hardening and unhealthy biomechanical behaviour [18]. NP hydraulic permeability greatly depends on the magnitude of the compression force experienced by the IVD. Heneghan et al. [20] defined a mathematical formula describing this phenomenon, given by equation 1.
$$ k(\lambda)=1.59\times 10^{-15}\left(\frac{\lambda -0.2}{0.8}\right)^{1.13}e^{-\frac{0.02\left(\lambda^{2}-1\right)}{2}} \tag{1} $$

The IVD permeability is given by k as a function of the stretch ratio λ = h/h0, the ratio between the compressed sample's height (h) and the original uncompressed height (h0). In the apparatus used by this research group, a human IVD is subjected to a given compression strain while the inlet and outlet flows are measured. The experiment describes an exponential relation between permeability and compression (0 ≤ λ ≤ 1). Interestingly, beyond a certain magnitude of compression (λ ≤ 0.2) the tissue is not permeable, possibly implying the need for IVD loading cycles to create an optimal micro-environment [20]. However, this formula applies only while the NP is healthy: once degeneration starts, the PG content decreases and the permeability increases with it, i.e. water retention decreases and the whole IVD loses biomechanical performance. Regular loading then becomes overloading, which drives the degeneration further, which in turn reduces the mechanical capacity even more, and the cycle continues in a downward spiral [21, 22].

Prospective strategies for intervertebral disc (IVD) regeneration

Depending on the severity of the IDD, different treatment strategies can be applied, invasive or not, and pharmacological or not. Swedish massage and acupuncture, for example, are non-pharmacological, non-invasive treatments commonly used for managing low back pain, a symptom strongly related to IDD [23]. Pharmacological treatments are also used for acute, severe pain, such as lumbar muscle spasm, commonly managed with muscle relaxants.
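As a numerical aid, the permeability law of equation 1 can be coded in a few lines (a minimal sketch; the function name is our own, and the restriction to 0.2 ≤ λ ≤ 1 follows the impermeability remark above):

```python
import math

def ivd_permeability(stretch_ratio: float) -> float:
    """Hydraulic permeability k(lambda) of a healthy NP, following the
    Heneghan et al. relation (equation 1). lambda = h / h0 is the ratio
    of compressed to uncompressed sample height."""
    lam = stretch_ratio
    if not 0.2 <= lam <= 1.0:
        # Below lambda = 0.2 the tissue is reported as impermeable.
        raise ValueError("equation 1 applies for 0.2 <= lambda <= 1")
    return (1.59e-15
            * ((lam - 0.2) / 0.8) ** 1.13
            * math.exp(-0.02 * (lam ** 2 - 1) / 2))

# An uncompressed disc (lambda = 1) recovers the leading coefficient,
# k = 1.59e-15, while permeability falls to zero at lambda = 0.2.
```

Within this range, permeability decreases monotonically with compression, consistent with the water-expulsion behaviour described above.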
However, non-invasive treatments have limits and do not resolve more severe conditions, for which repair strategies such as spinal fusion or screw implantation must be applied. These are very effective in eliminating the pain, but reduce spine flexibility [24]. Additionally, due to the discrepancy in mechanical properties between the implants and the biological tissues, local trauma can produce dramatic repercussions. Even without trauma, the long-term presence of these devices in the spine progressively alters the biological properties of the adjacent tissues, possibly inducing degeneration of neighbouring IVDs [24]. Advanced regenerative strategies, currently under research, aim for total IVD regeneration by addressing all the aforementioned signs of degeneration, maintaining the treated IVD in homeostasis with its environment. Tissue engineering is a clinically innovative driving force focused on this objective. The gold-standard diagnostic tool for IDD is magnetic resonance imaging (MRI). The degeneration-associated morphological changes it reveals are more evident in the NP than in the AF or the cartilaginous endplates [25, 26]. Possibly for this reason, the initial IVD tissue engineering research efforts addressed the huge challenge of NP regeneration (Fig. 3). Cell-loaded hydrogel injection solutions were initially envisioned to regenerate the NP, in the belief that this could be the pivot for total IVD regeneration. Some cases can benefit from this minimally invasive intervention, mostly mild-to-moderate IDD conditions. In advanced states, however, the AF has lost the mechanical integrity needed to contain the full volume of the NP and its highly hydrated consistency, creating a tendency for herniation [27]. Research strategies for the combined regeneration of the AF and NP have therefore become more common. However, the results have not been as effective as predicted.
It is conceivable that the cartilaginous endplates, whose essential role has so far been ignored, are the missing piece of a successful regeneration strategy. The tissues that compose the IVD are closely related, working symbiotically to keep the whole IVD healthy and functional. If one tissue is left degenerated, the applied regenerative strategy, although demonstrating short- to medium-term positive results, will probably fail in the long run. The cartilaginous endplate nutrition and waste removal pathway needs to be re-established for a completely healthy IVD homeostasis. Nonetheless, without a vigorous hydrophilic NP matrix, water retention remains scarce, this pathway remains blocked or minimized, and the tissue's hidden regeneration potential is not stimulated as a whole – the NP needs to be addressed as well. Yet again, if the AF does not have the capability to contain the NP, the material used for regenerating the NP will herniate outside the IVD, possibly causing low back pain. With this rationale, we return to the beginning of the regeneration strategies' loop, indicating that every degenerated IVD tissue should be treated for the whole strategy to succeed. Extensive innovative efforts towards NP and AF regeneration have been reported, and the most important examples are discussed herein. Targeted regeneration of the cartilaginous endplates seems to be the missing link that will enable their full success.

Nucleus pulposus (NP)

When aiming to regenerate the NP, the water content must be addressed if the biochemical environment has been compromised. Hydrophilic materials, such as hydrogels, are used to take over the water-uptake role while newly implanted cells have time to produce native PG-based ECM. If, however, the native matrix is still moderately rich in hydrophilic molecules, a cell-based strategy alone, delivered in a non-polymerizable carrier, might be enough.
Cell-based strategies for IVD regeneration focus strictly on increasing the NP cell concentration to renew the biochemical environment through ECM synthesis. This is addressed by cell injection, not only to increase cell numbers but, more importantly, to boost the active cellular population [28, 29], since, as aforementioned, the matrix is gradually lost because the native cells change behaviour and cease to produce PGs at the same rate as the remodelling proteins break down the ECM [13]. The cells of an NP in an advanced state of degeneration are senescent and need to be replaced with resilient, vigorous ECM producers.

Cells for nucleus pulposus (NP) regeneration

Cell therapy approaches for IVD regeneration are based on injecting NP-like cells, responsible for synthesizing PGs once in situ. Therefore, the first decision is: what is the right type of cell to use? If native NP cells (Fig. 4) need to be replaced, they surely cannot be used, due to their senescence or diseased phenotype expression [30, 31]. Fortunately, this argument may be proven wrong in the near future, since Abbott et al. [32] have been working on increasing senescent human NP cells' metabolic activity, proliferation and glycosaminoglycan production, and on stimulating a non-degenerated phenotype. They believe that native cells from an NP in a severe state of degeneration have, in fact, a regenerative potential that can be explored. Exposing this type of cell to a cocktail of specific growth factors, to notochordal cells' conditioned medium, or both, constitutes a promising strategy to enhance the glycosaminoglycan-producing phenotype as a means of effective treatment. In their study, cells were retrieved from a human source and, after expansion under the three aforementioned conditions, implanted in rabbit models with needle-puncture-induced IDD.
Results demonstrated not only that cells stayed viable for up to 24 weeks, but also that this strategy delayed the degeneration progression [32]. A specific follow-up of this work would be very interesting to increase scientific knowledge of the strategy's potential. Nonetheless, work being developed by these authors – e.g. by Iatridis, Purmessur and co-workers – has been very revealing, such as the work on the effect of notochordal conditioned media and its derived factors in inhibiting vessel and nerve growth [33, 34]. Implanting notochordal cells in degenerated IVDs is also being considered as a cell-based strategy, whether of stem cell-derived allogeneic or even xenogeneic origin, or of autologous origin, based on specific differentiation protocols. However, as Arkesteijn et al. observed [35], there was no change in anabolic response within the ex vivo NP culture when xenogeneic notochordal cells were implanted (porcine cells in bovine NP). Additionally, the seeded notochordal cells did not display a native morphology by the end of the culture period – 42 days. This conclusion differs from the observations reported in other studies [36, 37], possibly, the authors suggest, because the very high number of notochordal cells used in those studies does not reflect a realistic clinical approach. A notochordal-to-native-NP-cell ratio of 20:80 instead of 50:50 might be the reason for the lack of success [35]. Clinically unrealistic approaches, although good at extending scientific knowledge, might not translate directly into patient treatments, but they can certainly provide a basis for other strategies that will. The properties of notochordal cells' conditioned medium have also been in the line of research of Bach et al.
[38], whose research led to the conclusion that notochordal cell-conditioned medium of human, canine and porcine origin had a regenerative effect on human chondrocyte-like cells (herein called NP cells for simplicity). This means that human notochordal conditioned medium is not required to achieve this effect, since the canine and porcine media perform comparably [38]. Porcine notochordal cell-conditioned medium can, therefore, be used for different treatment strategy purposes, given its potential for extensive availability. Autologous senescent NP cells could be isolated, cultured with this conditioned medium, and delivered back to the diseased IVD. Differentiated autologous stem cells could be cultured in this medium to boost their hydrophilic matrix synthesis potential before implantation. The prospect of using stem cells in cell-based strategies for IVD regeneration brings with it the concern of how to guide differentiation into fully functional NP cells. Stem cell research efforts (e.g. induced pluripotent stem cells) [39] have been helping to reduce the obstacle of stem cell availability. Concerning specifically the differentiation of stem cells into NP cells, the initial number of cells required depends significantly on the method followed: differentiation is feasible, in principle, either in vitro or in situ. The in vitro differentiation method has the advantage of assuring that the implanted cells have the optimal phenotype, though with the disadvantage that more stem cells are required. Additionally, stem cell expansion is not trivial, and many cells are lost while differentiating into NP cells [40]. The key factor in IVD regeneration is hypoxia (2% O2), since this is the environment to which NP cells are accustomed. In vivo, these cells can be up to 2–3 mm away from the closest blood vessel [17, 18].
Therefore, the most promising strategy to ensure that stem cells differentiate into metabolically active NP cells might be to culture them in a hypoxic environment. However, many stem cells tend to die from the lack of oxygen, requiring even more stem cells at the start. Fang and co-workers [40] have been studying the hypothesis of manipulating mesenchymal stem cells to make them resistant to hypoxia by adding an anti-apoptotic gene called B-cell lymphoma-2. By preventing stem cells from undergoing apoptosis at the early stage of differentiation, cells with a pre-chondrocyte-like phenotype were produced that resist low oxygen concentrations and can thus maintain their numbers under this condition. Half the number of stem cells otherwise needed was used. Nevertheless, a serious doubt remains – does this not increase the chance of cancer cell formation? Directly manipulating stem cells to decrease their capacity for apoptosis is a step towards cells that are unable to undergo apoptosis and so lose the ability to die, which would make them virtually cancer cells. Nevertheless, this work brings great promise and, if proven safe, there is no reason why this method cannot be applied as a treatment strategy for IVD regeneration. The in situ differentiation method has the advantage of allowing a standard approach to stem cell expansion. Leckie and co-workers [28] developed work following this type of cell-based method. Its purpose was to determine whether injecting human umbilical tissue-derived cells into the NP would improve the course of IDD. The injections comprised cells in phosphate buffer solution, the EVICEL®-based carrier alone, and the cell-laden EVICEL®-based carrier. Briefly, EVICEL® is a combination of a fibrinogen-based solution with a thrombin-based solution, both derived from human plasma. Follow-up was based on MRI, biomechanical and histologic findings.
The results remained significantly distant from the positive control (non-punctured IVDs) and failed to fully restore the MRI signs of non-degenerated IVDs, possibly due to the unclear choice of carrier; a different hydrogel might have produced significantly more interesting results. However, the treatment succeeded in slowing down the degeneration process and showed better results than the negative control (punctured IVDs). At 12 weeks, the MRI results showed that cells alone and cells delivered in EVICEL®-based carriers were significantly distinct from punctured values. Regarding the viscoelastic properties, the cell-free carrier and the cells in EVICEL®-based carrier were significantly closer to the positive control (non-punctured IVDs) than the cells alone. However, if the biological and biochemical conditions within this tissue are not modulated, the use of this method could stimulate the growth of blood vessels and nerve endings inside the NP. In the literature, there is evidence that the appearance of blood vessels and nerve endings inside the NP might originate from native stem cells [41]. Therefore, injecting more stem cells without a modulated differentiation pathway into a diseased environment could promote this, as well as worsen the state of the tissue by further degenerating it.

Processing of scaffolds for the nucleus pulposus (NP)

When the NP biochemical environment can no longer sustain healthy cellular activity, a highly hydrophilic material should be implanted. This material, typically a hydrogel, is meant to create suitable conditions for cells to express a healthy chondrocyte-like phenotype. With time, native hydrophilic ECM is produced while the hydrogel degrades, ideally at an even rate. Therefore, when choosing the right biomaterial to mimic a healthy NP cellular environment, several properties must be considered, as described in Table 1 [5, 42,43,44,45,46].
Table 1 Overview of the NP treatment approach from the materials engineering perspective

Considering all the properties in Table 1, hydrogels stand out as an ideal candidate material (Fig. 5). Hydrogels are polymeric networks with the capacity to absorb water from 10 up to 100 times their dry weight [47]. Several kinds of hydrogels, natural and synthetic, have been studied for IVD tissue engineering strategies [48]. There is growing interest in natural-origin hydrogels, which need to be processed but not synthesised, significantly reducing production costs. Other reasons why they are becoming more attractive include their low cytotoxicity, their wide range of possible tissue engineering applications, and remarkable biological properties such as bioactivity and bioactive degradation, remaining available for cellular remodelling and cell adherence [49,50,51]. Although hydrogels of natural origin offer a wide range of biological advantages, it might be difficult to find the desired range of physical properties. Nonetheless, know-how on these materials is widening, new materials are being studied, and new processes are being developed to tune the right properties [47, 52]. The IVD has a slow metabolism, which leads to long regeneration times; the degradation rate of the hydrogel to be implanted needs to match this time frame [53]. Some natural-origin hydrogels have been proposed in the literature. The hydrogels that seem to be most popular for NP regeneration are based on the following raw materials: alginate [54,55,56], chitosan [48, 51, 57, 58], collagen [51, 59, 60], gellan gum [48, 49, 61,62,63,64,65], and hyaluronic acid [48, 51, 57, 66] (Table 2 [48, 49, 51, 54,55,56,57,58,59,60, 66, 67]).

Table 2 Summary of hydrogels applied in IVD tissue engineering research

Alternatively, synthetic hydrogels provide predictable and reproducible chemical and physical properties that can be tuned for different tissue engineering applications, e.g.
degradation rate matched to the aimed tissue regeneration rate. Moreover, they blend easily with other polymers, broadening the range of achievable properties even further [68]. Since they are made of well-known molecules, when pure, they carry a low risk of immunogenicity, infection and toxicity [51, 69]. They lack, though, the bioactivity characteristic of natural-origin materials, and their manufacturing process is, in general, economically less attractive, two key limiting factors for a tissue engineering application. Hence, while natural-origin materials lack diversity of properties, synthetic materials lack options that are economically viable. Some examples of synthetic hydrogels being applied in NP tissue engineering strategies are polyethylene glycol [27, 70], polyvinyl alcohol [71, 72] and polyvinylpyrrolidone [51, 73, 74] (Table 2). There are, indeed, not many studies using purely synthetic materials for NP regeneration. An interesting possibility, however, is to use a natural-based material modified with synthetic polymers to combine the advantages of both types [56, 60].

Combined therapy: cell-seeded scaffolds for nucleus pulposus (NP) regeneration

A self-assembling peptide called RADA16-I [Ac-(RADA)4-CONH2], with the ability to form hydrogels, was proposed by Tao et al. [75] as a cell carrier for NP tissue engineering. An interesting predisposition of this material to form β-sheet configurations in water was reported. Due to its C-terminus, it is possible to conjugate this peptide with various bioactive short-peptide motifs. The conjugation of three different short peptides of bone morphogenetic protein-7 was evaluated [75], and a 1:1 ratio of RADA-KPSS [AC-(RADA)4-GG-KPSSAPTQLN-CONH2] with RADA16-I, named RADA-KPS, was found to be the optimal functional formulation to promote proliferation and to activate, in vitro, degenerated NP cells to express collagen II and aggrecan. Wu et al.
[76] further demonstrated that bone marrow mesenchymal stem cells encapsulated within RADA-KPS showed a time-related increase in the expression of factors typical of metabolically active NP cells – collagen II and aggrecan. The idea of using collagen II hydrogels for NP regeneration is natural, since collagen II is one of the most prevalent ECM components of the healthy NP [1]. Although this material has a very high cost, its advantages might compensate for it. Tao et al. [59] demonstrated that collagen II hydrogel formulations of higher concentration increase the tendency of initially seeded adipose-derived stem cells to express an NP cell-like phenotype. However, collagen II on its own does not seem to possess the mechanical properties required for NP tissue engineering; a cross-linker or a blending material seems to be required [77]. From the same research group, Zhou et al. [60] used N,N-(3-dimethylaminopropyl)-N'-ethyl carbodiimide together with N-hydroxysuccinimide, which according to their literature search are non-cytotoxic. Their study evaluated the proliferation and differentiation effects of this cross-linking formula on collagen II hydrogels with encapsulated adipose-derived stem cells. Cell proliferation almost doubled from 7 to 14 days of culture, as did the control (the same cell type on a tissue culture polystyrene substrate), demonstrating no cytocompatibility change due to cross-linking. The substantial positive evidence lies in the gene expression profiles: the control showed no significant difference in the typical NP cell phenotype profile between 7 and 14 days of culture, while for the cross-linked collagen II hydrogels all the analysed gene expressions increased about 20–50% over the non-cross-linked ones at both time points, with a slight drop in collagen I expression – a molecule not typical of healthy NP ECM.
Compared with the non-cross-linked collagen II hydrogels, the gene expression profile was thus more interesting while the rate of increase in expression was practically the same, demonstrating that a good differentiation profile can be maintained while achieving higher mechanical properties with this cross-linking formula. Gellan gum (Fig. 6) was first proposed by Oliveira et al. [78,79,80] for cartilage tissue regeneration. Cytocompatibility towards chondrocytes was demonstrated, as well as the ability to polymerize upon both temperature and pH change. Silva-Correia et al. [47], however, first specifically proposed its application for NP regeneration. Several studies have extensively characterized gellan gum's properties, such as rheology [63], biocompatibility [81], and non-angiogenic potential [62]. Pereira et al. [82] proposed the use of gellan gum microparticles within a gellan gum matrix with different ratios of high- or low-acyl gellan gum – an interesting rationale considering the cellular capsule configuration typically observed in the native NP [83]. Moreover, the independent reactions with glycidyl methacrylate and methyl benzoylformate, reported by this research group, demonstrated the versatility of this material, which can be tuned to acquire stronger mechanical properties and ultraviolet-light polymerization [84]. As an alternative to the methacrylation reaction, another route to increase gellan gum's mechanical properties was developed by Thorvaldsson et al., who proposed reinforcing hydrogels of the same material with electrospun polycaprolactone (PCL) nanofibers, thereby mimicking the natural molecular organization and morphology of native NP ECM. The scaffold was developed by combining PCL electrospinning with gellan gum airbrush spraying on a rotating collector partially immersed in a calcium chloride polymerizing solution [85].
It remains to be confirmed whether the non-evaporated chloroform-methanol solvent residues from the electrospinning side stay below the cytotoxicity threshold for cells loaded in the gellan gum solution when both reach the collector. In addition, gellan gum polymerizes very rapidly in ionic solution, a characteristic of interest for contemporary fabrication technologies such as bioprinting. More recent works also propose applying gellan gum in IVD regeneration [64, 65], corroborating its potential for this application. Alongside gellan gum is carboxymethylcellulose, another polysaccharide suggested for NP tissue engineering, in this case by Reza and Nicoll [86, 87]. Like gellan gum, carboxymethylcellulose can also be methacrylated, acquiring photo-polymerisation ability. However, its application for NP degeneration treatment has only been proposed in the methacrylated form [88,89,90], due to the reversible nature of conventional ionic cross-linking techniques [86]. Since 2009/10, this research team has extensively characterised the methacrylated carboxymethylcellulose material's properties, namely mechanical behaviour, swelling ratio, and diffusion, with and without seeded cells [86, 88, 90]. Reza et al. [87] analysed the ECM development and functional properties of NP cells within this hydrogel. They compared two different types of culture medium – the standard serum-containing medium formulation versus a serum-free chemically defined medium, both supplemented with transforming growth factor-β3. Glycosaminoglycan and collagen II content was significantly greater in the serum-free constructs, as was the Young's modulus, and the equilibrium weight-swelling ratio of the same constructs approached that of the native NP tissue.
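As a side note on such swelling figures: a weight-swelling ratio maps directly onto water content. The sketch below is simple mass bookkeeping of our own (not from the cited studies), using the 10- to 100-fold water uptake range quoted earlier for hydrogels:

```python
def water_mass_fraction(swelling_ratio: float) -> float:
    """Equilibrium water mass fraction of a swollen hydrogel, where
    swelling_ratio Q is the mass of absorbed water per unit dry polymer mass."""
    return swelling_ratio / (swelling_ratio + 1.0)

# A gel absorbing 10x its dry weight is about 91% water by mass; at
# 100x uptake it is about 99% water - bracketing from above the ~80%
# water content quoted earlier for a young, healthy NP.
print(water_mass_fraction(10), water_mass_fraction(100))
```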
The results showed why the use of a chemically defined medium is so relevant; its high cost, however, means that, although pertinent, it is not necessarily economically viable. More recently, the differentiation profile of mesenchymal stem cells into NP-like cells within this material was studied as a function of the rate of exposure to transforming growth factor-β3 and the cross-linking density. It was reported that low molecular weight carboxymethylcellulose with low methacrylation and monomer concentration provides the most promising results for NP tissue engineering [89]. From another perspective, continuous supplementation with transforming growth factor-β3, although stimulating cells to more interesting results, might not compensate for its significantly higher cost relative to a transient supplement. A transient exposure seems sufficient to reach the biochemical and mechanical levels of construct maturation observed in the native NP [88]. To overcome the increased cost of growth factors and chemically defined serum-free culture medium for stem cell differentiation towards the NP phenotype, as well as for overall construct maturation, Thorpe et al. [91] cultured human mesenchymal stem cells under 5% oxygen within a hydrogel of poly(N-isopropylacrylamide)-(N,N'-dimethylacrylamide)-Laponite® – abbreviated pNIPAM-DMAc-Laponite®. Results showed expression of native NP-like phenotypic markers and ECM. Therefore, by not using a chondrogenic-inducing medium or supplemented growth factors, the regeneration strategy is simplified [91] and has a lower cost.

Annulus fibrosus (AF)

An optimal treatment for the AF (Fig. 7), aiming at its full regeneration, depends largely on its state of degeneration. A mildly degenerated AF with localized tissue damage might be treated with a minimal repair intervention [92,93,94], which can be sufficient to restore the whole AF to a completely healthy condition.
However, engineering strategies reported in the literature describe that when the AF as a whole is compromised, a total IVD replacement is required, i.e. one including the NP [95,96,97]. Considering the biphasic nature of a young healthy IVD [95], it seems promising to mimic those material properties in both phases. Research on a biphasic total IVD replacement can, naturally, be developed for both tissues at the same time or be strictly focused on the AF analogue. Given the intense attention the research community has devoted to NP regeneration throughout the last decade, as aforementioned, there are very promising strategies that could be coupled into the development of a biphasic scaffold for total IVD regeneration. The AF connects both cartilaginous endplates and, together with them, surrounds the NP. Each of the cartilaginous endplates is connected to one of the two adjacent vertebrae [4]. Therefore, if the AF is going to be replaced, it needs to be replaced with a scaffold that maintains a functional structure while providing the conditions for peripheral ECM to develop across the border between the AF and the cartilaginous endplates before biomaterial degradation. The IVD's metabolism is relatively slow, probably due to its low vascularity [98]. Selecting the appropriate material to develop an AF replacement scaffold will probably have to respect that time frame to achieve a positive long-term outcome.
Cells for annulus fibrosus (AF) regeneration
Unlike for the NP, the cellular aspect of AF regeneration strategies described in the literature is not as complex or, at least, not explored to the same extent. In summary, an AF scaffold can be developed mainly via three different approaches: acellular [99], AF cells [97], or stem cells [100]. AF cells are classified as fibroblast-like [101], and fibroblasts are among the easiest primary cell types, if not the easiest, to culture in vitro.
In fact, they are usually unwanted, since they sometimes contaminate isolated cell cultures due to their higher rate of proliferation, representing the worst and most common type of autologous contamination in cell isolation procedures [102]. This proliferative advantage is, in this case, a benefit that can be exploited, whether through the ease of isolating and expanding native AF cells, which already have the desired phenotype, or by following an acellular scaffold strategy and relying on native taxis of fibroblasts or AF cells (in the case of minimal intervention) from the adjacent tissue(s). Acellular strategies reduce critical procedure steps, costs, and time significantly when compared with seeded scaffold approaches. They have the potential to be, in practical terms, off-the-shelf. Total IVD replacement strategies encompassing a cellular AF-like scaffold are required to focus on achieving a final multi-type cellular population. This includes NP and AF cells, as well as their respective progenitor cells, to maintain their concentrations within the desired regions long after the patient's intervention. Perhaps for this reason, stem cells are viewed as an interesting cell source for seeding annular scaffolds, since the same cells can be used to seed the NP scaffold while maintaining the potential to differentiate into the respective target phenotypes. Recent research indicates that the most popular stem cell types for AF regeneration are mesenchymal stem cells [94, 103] and AF-derived stem cells [100, 104]. Vadalà et al. made a very comprehensive review of stem cell sources for IVD regeneration, including muscle-derived, olfactory, induced pluripotent, hematopoietic, synovial, embryonic, and Wharton's jelly stem cells [105]; however, the application of each stem cell type to the AF or NP was not distinguished. Some, however, believe it might be interesting to explore cell injection therapy for AF regeneration. Freeman et al.
claimed, to their knowledge, to be the first to follow that strategy [106]. They started by inducing IDD in ovine models, using a method they had previously described [107], which comprises a postero-lateral annulotomy incision on the left side. This was followed by 3 days of post-operative recovery and 6 months of IDD maturation, confirmed by MRI. A postero-lateral injection of one million allogeneic bone-derived mesenchymal stem cells, isolated from the iliac crest, was then administered on the side opposite the lesion. MRI was performed after 3 and 6 months, and animals were sacrificed at the latter time point for biochemical and histological analyses of each IVD. Disc height index results showed a continued recovery until the last time point, when the index equalled that of the positive control group, which did not undergo annulotomy and received a phosphate-buffered saline injection. It is interesting to add that the negative control, i.e. untreated degenerated IVDs, kept a constant index between 6 and 12 months, meaning that all annulotomy IVDs at 6 months were at the lowest level of disc height index. This study, however, is limited to the type of IDD originated by physical damage to the AF. Additionally, it is not possible to infer whether this strategy would reverse IDD derived from cartilaginous endplate calcification [7, 14]. Nonetheless, the results demonstrate the importance of further exploring a more biological and less material-oriented approach to AF regeneration [106].
Processing of scaffolds for the annulus fibrosus (AF)
The AF varies greatly along its radius; perhaps as a consequence, part of the literature on the subject makes a distinction between inner and outer AF [82, 108]. The AF and NP are actually a biocomposite structure with no clearly evident borders. Its biochemistry seems to change gradually, yet significantly, along its radius – from the centre of the NP to the most peripheral layer of the outer AF.
It is not easy to mimic the complex arrangement of the ECM molecules, as if it were a continuous medium, to form the macro-structure of the IVD. The most promising way might be to produce a whole IVD construct by treating the IVD as a discrete assembly of a few regions. There is at least one type of ECM component whose concentration prevails in each of the three structures [5]. At the opposite extreme is the biochemical and morphological composition of the outer AF. This anisotropic tissue [109] is mostly made of collagen I, making it extremely elastic [95]. The inner AF, although having more collagen II than the other two types of tissue, is like a mixture of the other two in terms of mechanics and biology, while having a morphological organisation closer to the outer AF [110]. Several biomaterials have been developed and used as prime material for preparing scaffolds to address AF regeneration. Figure 8 shows a typical AF shape that was replicated by 3D printing PCL based on a reverse-engineered rabbit IVD micro-computed tomography (micro-CT) acquisition [99]. Other interesting biomaterials proposed in the literature are described in the form of a significant number of different composites and formulations based on the same prime materials, such as polylactic-co-glycolic acid [93], silk fibroin [111], collagen [95, 112], and PCL [97, 99, 113, 114]. As aforementioned, the IVD is an anatomical structure highly subjected to complex movements and strains [115]. If a construct without pre-maturation is envisioned for implantation, the material chosen to replace the AF must, in the short term, be able to withstand that biomechanical responsibility until the neo-ECM is able to do so. It should have a Young's modulus similar to that of the native tissue, enabling the material to sustain high forces while remaining free from plastic deformation. In the long term, it must degrade at the same rate at which cells synthesize the neo-ECM.
ECM synthesis and remodelling depend on the biochemical balance existing inside the cell construct. The piezoelectric properties (i.e. conversion of mechanical force into electrical signals, and vice versa) of the collagen molecules present within it possibly play an important role in this process, dependent on the constant cyclic loads applied to the IVD [116]. The lines of tension change dynamically according to these load cycles, resulting in stimulated degradation of the AF scaffold and synthesis of new ECM, which probably affects the piezoelectric effect inside the structure. Cells are attracted to where pressure is felt, since collagen under mechanical stimulation releases electrical stimuli that attract cells and stimulate them to produce more ECM, so that the whole tissue becomes mechanically balanced. This effect is similar to what happens in almost every tissue of the human body. Bone is a good example, since its trabeculae and their mineralization coincide with the tension lines felt by the tissue [117]. Scaffolds with piezoelectric properties, such as collagen-based scaffolds for the AF, therefore have a head start. In fact, constructs with no piezoelectric properties, when implanted, probably have a cell taxis disadvantage in relation to the surrounding native tissues. In summary, the way scaffolds degrade, and how their mechanical properties vary as they do, is important for the cell construct to be successful upon implantation. The matrix of the AF is mainly made of collagen, which is the principal reason why several research groups have proposed it for annular scaffold preparation [112, 118, 119]. Scaffolds made of collagen I or II have already been shown to stimulate IVD cells to produce large PGs and long glycosaminoglycan chains [119, 120]. Choy et al.
developed a strategy to produce biphasic scaffolds composed of mechanically tuned collagen and glycosaminoglycan molecules, using collagen-glycosaminoglycan modifications and a photochemically cross-linked collagen membrane previously developed separately by their group [121, 122]. An interesting layer-by-layer approach was established to progressively build the AF scaffold surrounding the NP-replacing collagen-glycosaminoglycan core, thereby mimicking the native AF layered morphology. Each layer is produced via the following cycle: the core is encapsulated in the modified collagen solution and then removed for photo-cross-linking; it is dehydrated by rolling it over a filter paper and rehydrated again to create a phase border with the next layer, which starts by repeating the first step [95]. PCL has been proposed for numerous biomedical applications due to its chemical versatility [123], easy processability, and the possibility of tuning its mechanical properties to the desired role [124]. In particular, PCL has a larger elastic domain than other candidate materials for the AF [125], and it has a low melting point of approximately 60°C [126]. In a recent work by our group, a 3D printed scaffold made of PCL was developed using a custom-tailored geometry of a rabbit AF, acquired by micro-CT and segmented 3D modelling [99]. The thermoplastic properties of this material allow it to be processed by fused deposition modelling 3D printing, providing all the possibilities that this technology offers. AF cells cultured with leachables extracted from those PCL scaffolds showed no cytotoxic effect [99]. A follow-up work was developed that progresses the proof-of-concept from custom-tailored scaffold biofabrication to a patient-specific type. Oner et al. [127] used PCL to 3D print a patient-specific AF using MRI data acquired within a clinical environment.
This contrasts with the previous work, which used a micro-CT system that provides significantly more data, allowing the development of a 3D model replica with correspondingly higher fidelity to the original scanned tissue. However, neither CT nor MRI standard clinical imaging systems have this level of resolution, demonstrating the need for the clinical environment to keep up with the equipment requirements of emerging treatment strategies. Wismer et al. [113] studied the ability of PCL to support AF tissue regeneration and its interaction with the tissue's native cells. It was demonstrated that AF cells could proliferate on electrospun oriented PCL sheets, and that the ECM produced was rich in glycosaminoglycans [113]. Other studies have also reported similar findings regarding the compatibility of PCL with AF cells [128, 129]. Minimally invasive treatments of AF defects, in turn, are an approach that carries fewer risks than a tissue engineering total IVD replacement approach. Their certification and application in the clinic would probably be easier, and they fit well with standard procedures such as discectomy. Facing an IVD herniation, the surgeon can decide to remove the herniated NP tissue and use a suturing approach to close the damaged region of the AF [130, 131]. Regenerative strategies are being developed to repair the AF defect and stimulate its regeneration [92, 93]. Xin et al. developed small cylindrical scaffold plugs, with a diameter of 1.8 mm, made of polylactic-co-glycolic acid containing gelatine particles with diameters ranging from 280 to 450 μm. After leaching, a scaffold is produced with an average pore size within the same diameter range as the gelatine particles. The plugs were implanted in rabbit models of AF degeneration, and a successful repair of the AF defect was verified, with an overall regression of degeneration [93]. Polytrimethylene carbonate is also generating interest for fabricating space-fillers for AF defects [92, 94, 132]. Long et al.
[92] developed a composite repair strategy that combines a conical space-filler composed of polytrimethylene carbonate, secured in place by a fibrin-genipin adhesive sealant previously tuned to match the shear properties of native AF tissue [130]. To completely ensure non-extrusion of the implant, a polyurethane membrane patch was also applied over the implant, surrounding the native tissue. The implantation was carried out using previously extracted bovine coccygeal region-of-motion IVDs, which were then submitted to biomechanical assessment. However, the membrane was not enough to eliminate the risk of implant extrusion. Interestingly, the results showed better performance for the application of fibrin-genipin alone, which decreased the re-herniation risk to low levels. It is less clear, however, once the risk of re-herniation is surpassed, whether the polytrimethylene carbonate would not achieve better tissue regeneration results in the long run.
Combined therapy: cell-seeded scaffolds for annulus fibrosus (AF) regeneration
The main advantages of acellular strategies are related to their simplicity of production, regulatory approval, storage, sterilization, and low cost. Ideally, degradation should start as soon as the scaffold is implanted. As previously mentioned, the biomechanical balance can only be compensated by ECM synthesis. Cellular strategies prevail when this goal is the main consideration, since cells can start producing matrix as soon as they are seeded and stably attached, making the taxis of migrating fibroblasts from the surrounding environment to the cell construct synergistic rather than fundamental. It is not clear, however, which approach would achieve better results in the long run, since there are advantages and disadvantages on both sides, as reported herein. Further in vivo studies of both approaches are required.
The application of urethane-based polymers to the AF is a very interesting idea, due to their highly elastic properties. Nonetheless, a series of recent studies by Li et al. used a polyether-carbonate-urethane-urea to electrospin AF scaffolds, with the promise of being biodegradable [100, 103, 104]. The work by Zhu et al., in particular, encompassed a degradation assay [100], performed following the synthesis protocol of the polymer, while demonstrating its non-cytotoxic character [133]. What is interesting, however, is the use of AF-derived stem cells as the cell source. Liu et al. demonstrated that aligned electrospun fibres increase AF cell phenotype expression in comparison with randomly oriented ones [104]. Zhu et al. tuned several types of polyether-carbonate-urethane-urea to different elastic profiles, and the effect on AF stem cells' gene expression was evaluated [100]. The authors reported that collagen I expression increased alongside the material's elasticity, while collagen II and aggrecan showed the opposite relation. A biphasic scaffold composed of lamellar regenerated silk fibroin surrounding a fibrin-hyaluronic acid NP, seeded with AF cells and chondrocytes, respectively, was reported by Park et al. [111] following a biomimetic approach. After silk fibroin dissolution, a sodium alginate solution was added and the mixture injected into cylindrical silicone moulds. The assemblies were subjected to freeze-drying and then crystallized in water for 6 hours to generate β-sheet formation within the silk fibroin. The alginate was removed from the scaffold by immersion in water for 24 hours, and the toroidal silk fibroin scaffolds were obtained by simple punching. The results showed dense lamellar structures with spaces between lamellae varying from 10 to 400 μm. Cell seeding was more efficient and homogeneous throughout the scaffold than in their previous study, in which a porous rather than lamellar silk fibroin scaffold was produced [134].
The morphological structure of the silk fibroin toroidal IVDs seemed to guide collagen deposition very efficiently [111]. Perhaps, with a culture time longer than two weeks, the cell constructs could attain the desired mechanical properties: silk fibroin is strong but not elastic, so with further deposition of collagen the cell construct could become elastic. Unfortunately, no follow-up to this work seems to exist so far, though one would be interesting. IVD tissue engineering research is already extensive and several obstacles are being surpassed, though there is one issue not typically addressed: structural compatibility, not in terms of morphology, but in terms of optimal fitting within the intervertebral space. The AF is often mimicked by a toroidal scaffold without even an IVD-like geometry. Even when the IVD-like shape is mimicked, the top and bottom surfaces are not prepared according to the endplates' surfaces, which have unique topographies in each person, especially in diseased IVDs. This creates a high potential for displacement upon cyclic spine loading once implanted. This was recently addressed by creating a fused deposition modelling 3D printed AF scaffold whose computer-aided design resulted from 3D modelling based on a real IVD imaging dataset [99]. Briefly, the micro-CT acquisition focused on the spinal motion segment. During segmentation, the osteochondral borders between the IVD and the adjacent vertebrae were carefully followed as the top and bottom edges of the IVD model, thereby creating a full replica of the original IVD and providing the potential for a precise fit within the intervertebral space. An alternative for secure implantation between adjacent vertebrae is to follow, for example, what is already done in spinal fusion, by removing the IVD and joining the two flat vertebral surfaces together, creating one continuous double vertebra [135].
Instead of joining those surfaces, however, it might also be possible to introduce a tissue-engineered spinal motion segment that possesses top and bottom osteochondral layers, leaving exposed a bone-like surface that can attach to the native sectioned vertebrae. Therefore, in line with the strategy developed by Choy et al. [95], described in the previous section, Chik et al. envisioned a scaffold that mimics an entire spinal motion segment [136]. This composite construct is a full IVD assembly of NP-, AF-, and endplate-like tissues. It employs a strategy previously described by the same research team, based on microencapsulation of mesenchymal stem cells in collagen I to produce microspheres, named naive subunits. These are cultured in two separate differentiation media – chondrogenic and osteogenic. At the end of the third week, both layers are assembled together with an additional thin layer of collagen and mesenchymal stem cells in between [137]. The NP core is also prepared following the method they previously described [121], based on collagen and glycosaminoglycans. The NP core was placed between two osteochondral units with both chondral layers facing the NP. This assembly was then processed following the same strategy described by Choy et al. [95], which allows the layer-by-layer encapsulation and polymerization of photochemically cross-linked collagen I. In this work, however, mesenchymal stem cells were added between each collagen layer, as well as in the NP core. This study served as a proof of concept for this complex strategy; naturally, with so many steps and details, some tuning is still required, since the construct is not yet able to match the maturity of a native spinal motion segment in several respects. Considering the amounts of collagen, glycosaminoglycans, and mesenchymal stem cells required to produce the whole structure, the costs of applying such a strategy as a treatment are probably extremely high.
However, it is our understanding that no one has ever replicated a native IVD in vitro so faithfully [136]. Reviewing the literature on tissue engineering strategies for IVD regeneration, it is possible to state that extensive efforts have been made to regenerate the NP alone, but the simultaneous regeneration of the NP and AF has been increasingly addressed. Two of these strategies have stirred the research community focused on IVD regeneration: in vivo and ex vivo approaches that balance the risk of adverse effects against treatment efficiency, with the first being potentially fatal but more efficient. In the tissue engineering field, the literature seems to be in consensus about using a polysaccharide as a low-cost NP-substituting material. Additionally, due to the highly avascular nature of this tissue, a tight compromise is in order – the degradable biomaterial must allow cell adhesion and proliferation while not allowing angiogenesis to occur. Gellan gum seems to fulfil these requirements, having already demonstrated its non-angiogenic potential, as well as providing an optimal environment for chondrocyte culturing, both in vitro and in vivo. Collagen II, as a more expensive alternative to polysaccharides, has shown excellent results in vitro regarding NP native-like ECM production, possibly aided by its natural piezoelectric properties under mechanically stimulated culture conditions. It would be interesting, perhaps, to assess the combination of gellan gum doped with collagen II to accomplish the positive results of both materials. Nonetheless, if a patient also requires AF regeneration, the NP-substituting hydrogel needs to be integrated into an AF scaffold. Several approaches for integrating both tissue regeneration substitutes have been reported, with integration occurring from the beginning of construct preparation or only just before application.
PCL has been very popular due to its mechanical properties, having been processed in several different ways with interesting final results. It has, in fact, also been applied within the biofabrication field, demonstrating the possibility of producing 3D printed custom-tailored AF scaffolds based on the host intervertebral geometry, thereby expanding processability into a wider range of scaffold shapes and details, at low cost and in a short time. Patient-specific therapies are becoming popular and allow the development of scaffolds or constructs that fit the patient's needs precisely. This increases the potential for construct integration into the surrounding tissues and lowers the risk of complications, such as scaffold displacement or mechanical destabilization. Although recent reports serve the great purpose of providing tools to regenerate a greater tissue volume of the IVD, few strategies have been developed to regenerate the cartilaginous endplates. The health of these plates is, in fact, one of the keys to a healthy cellular environment within the whole IVD. Regenerating the other two tissues, alone or together, when the endplates are calcified is therefore a desperately frustrating task, ultimately leading to failure in the long run. The strong bonds between the three tissues that compose the IVD, on all levels, imply that the tissue engineering path of least resistance is a strategy that encompasses all tissues together when considering late-stage IDD cases. Taking into account what has been developed for NP and AF regeneration leads to the conclusion that, once a successful cartilaginous endplate regeneration strategy has been developed, the remaining strategies will possibly achieve significantly better results when applied together. However, full IDD is not the only clinical condition of this disease.
Research on minimally invasive surgical approaches and repair/regeneration strategies that rely on the use of patient-specific implants for the AF and NP plays an important role in finding appropriate clinical solutions for less severe, but still painful, conditions of IDD.
Abbreviations
IVD: Intervertebral disc
IDD: Intervertebral disc degeneration
NP: Nucleus pulposus
AF: Annulus fibrosus
PG: Proteoglycan
MRI: Magnetic resonance imaging
PCL: Polycaprolactone
References
Zhao C-Q, Wang L-M, Jiang L-S, Dai L-Y. The cell biology of intervertebral disc aging and degeneration. Ageing Res Rev. 2007;6:247–61. doi:10.1016/j.arr.2007.08.001.
Jarman JP, Arpinar VE, Baruah D, Klein AP, Maiman DJ, Tugan Muftuler L. Intervertebral disc height loss demonstrates the threshold of major pathological changes during degeneration. Eur Spine J. 2015;24:1944–50. doi:10.1007/s00586-014-3564-8.
Binch ALA, Cole AA, Breakwell LM, Michael AL, Chiverton N, Cross AK, et al. Expression and regulation of neurotrophic and angiogenic factors during human intervertebral disc degeneration. Arthritis Res Ther. 2014;16:416. doi:10.1186/s13075-014-0416-1.
Coventry MB, Ghormley RK, Kernohan JW. The intervertebral disc: its microscopic anatomy and pathology part I. Anatomy, development, and physiology. J Bone Jt Surg. 1945;27:105–12.
Périé D, Korda D, Iatridis JC. Confined compression experiments on bovine nucleus pulposus and annulus fibrosus: sensitivity of the experiment in the determination of compressive modulus and hydraulic permeability. J Biomech. 2005;38:2164–71. doi:10.1016/j.jbiomech.2004.10.002.
Woods BI, Sowa G, Vo N, Kang JD. A change in strategy: the use of regenerative medicine and tissue engineering to augment the course of intervertebral disc degeneration. Oper Tech Orthop. 2010;20:144–53. doi:10.1053/j.oto.2009.10.009.
Naresh-Babu J, Neelima G, Reshma Begum S, Siva-Leela V. Diffusion characteristics of human annulus fibrosus—a study documenting the dependence of annulus fibrosus on end plate for diffusion. Spine J. 2016;16:1007–14. doi:10.1016/j.spinee.2016.03.046.
Silva-Correia J, Correia SI, Oliveira JM, Reis RL. Tissue engineering strategies applied in the regeneration of the human intervertebral disk. Biotechnol Adv. 2013;31:1514–31. doi:10.1016/j.biotechadv.2013.07.010.
Richardson SM, Mobasheri A, Freemont AJ, Hoyland JA. Intervertebral disc biology, degeneration and novel tissue engineering and regenerative medicine therapies. Histol Histopathol. 2007;22:1033–41.
Feng G, Yang X, Shang H, Marks IW, Shen FH, Katz A, et al. Multipotential differentiation of human anulus fibrosus cells: an in vitro study. J Bone Jt Surg. 2010;92:675–85. doi:10.2106/JBJS.H.01672.
Kim K-W, Ha K-Y, Lee J-S, Nam S-W, Woo Y-K, Lim T-H, et al. Notochordal cells stimulate migration of cartilage end plate chondrocytes of the intervertebral disc in in vitro cell migration assays. Spine J. 2009;9:323–9. doi:10.1016/j.spinee.2008.05.003.
Roberts S. Histology and pathology of the human intervertebral disc. J Bone Jt Surg. 2006;88(suppl_2):10. doi:10.2106/JBJS.F.00019.
Bae WC, Masuda K. Emerging technologies for molecular therapy for intervertebral disk degeneration. Orthop Clin North Am. 2011;42:585–601. doi:10.1016/j.ocl.2011.07.004.
DeLucca JF, Cortes DH, Jacobs NT, Vresilovic EJ, Duncan RL, Elliott DM. Human cartilage endplate permeability varies with degeneration and intervertebral disc site. J Biomech. 2016;49:550–7. doi:10.1016/j.jbiomech.2016.01.007.
Lee JM, Song JY, Baek M, Jung H-Y, Kang H, Han IB, et al. Interleukin-1β induces angiogenesis and innervation in human intervertebral disc degeneration. J Orthop Res. 2011;29:265–9. doi:10.1002/jor.21210.
Nasto LA, Robinson AR, Ngo K, Clauson CL, Dong Q, St. Croix C, et al. Mitochondrial-derived reactive oxygen species (ROS) play a causal role in aging-related intervertebral disc degeneration. J Orthop Res. 2013;31:1150–7. doi:10.1002/jor.22320.
Le Maitre CL, Freemont AJ, Hoyland JA. A preliminary in vitro study into the use of IL-1Ra gene therapy for the inhibition of intervertebral disc degeneration. Int J Exp Pathol. 2006;87:17–28. doi:10.1111/j.0959-9673.2006.00449.x.
Roberts S, Evans EH, Kletsas D, Jaffray DC, Eisenstein SM. Senescence in human intervertebral discs. Eur Spine J. 2006;15:312–6. doi:10.1007/s00586-006-0126-8.
Richardson SM, Hoyland JA. Stem cell regeneration of degenerated intervertebral discs: current status. Curr Pain Headache Rep. 2008;12:83–8.
Heneghan P, Riches PE. Determination of the strain-dependent hydraulic permeability of the compressed bovine nucleus pulposus. J Biomech. 2008;41:903–6. doi:10.1016/j.jbiomech.2007.11.014.
Chan SCW, Ferguson SJ, Gantenbein-Ritter B. The effects of dynamic loading on the intervertebral disc. Eur Spine J. 2011;20:1796–812. doi:10.1007/s00586-011-1827-1.
Vergroesen P-PA, Kingma I, Emanuel KS, Hoogendoorn RJW, Welting TJ, van Royen BJ, et al. Mechanics and biology in intervertebral disc degeneration: a vicious circle. Osteoarthr Cartil. 2015;23:1057–70. doi:10.1016/j.joca.2015.03.028.
Chou R, Deyo R, Friedly J, Skelly A, Hashimoto R, Weimer M, et al. Noninvasive treatments for low back pain. In: Comparative Effectiveness Review, vol. 169. Rockville, MD: Agency for Healthcare Research and Quality; 2016.
Buttermann GR, Beaubien BP. Biomechanical characterization of an annulus-sparing spinal disc prosthesis. Spine J. 2009;9:744–53. doi:10.1016/j.spinee.2009.04.026.
Thompson JP, Pearce RH, Schechter MT, Adams ME, Tsang IK, Bishop PB. Preliminary evaluation of a scheme for grading the gross morphology of the human intervertebral disc. Spine (Phila Pa 1976). 1990;15:411–5.
Urban JPG, Winlove CP. Pathophysiology of the intervertebral disc and the challenges for MRI. J Magn Reson Imaging. 2007;25:419–32. doi:10.1002/jmri.20874.
Schmocker A, Khoushabi A, Frauchiger DA, Gantenbein B, Schizas C, Moser C, et al. A photopolymerized composite hydrogel and surgical implanting tool for a nucleus pulposus replacement. Biomaterials. 2016;88:110–9. doi:10.1016/j.biomaterials.2016.02.015.
Leckie SK, Sowa GA, Bechara BP, Hartman RA, Coelho JP, Witt WT, et al. Injection of human umbilical tissue–derived cells into the nucleus pulposus alters the course of intervertebral disc degeneration in vivo. Spine J. 2013;13:263–72. doi:10.1016/j.spinee.2012.12.004.
Benneker LM, Andersson G, Iatridis JC, Sakai D, Härtl R, Ito K, et al. Cell therapy for intervertebral disc repair: advancing cell therapy from bench to clinics. Eur Cell Mater. 2014;27:5–11.
Jiang L, Zhang X, Zheng X, Ru A, Ni X, Wu Y, et al. Apoptosis, senescence, and autophagy in rat nucleus pulposus cells: implications for diabetic intervertebral disc degeneration. J Orthop Res. 2013;31:692–702. doi:10.1002/jor.22289.
Tang X, Jing L, Richardson WJ, Isaacs RE, Fitch RD, Brown CR, et al. Identifying molecular phenotype of nucleus pulposus cells in human intervertebral disc with aging and degeneration. J Orthop Res. 2016;34:1316–26. doi:10.1002/jor.23244.
Abbott RD, Purmessur D, Monsey RD, Iatridis JC. Regenerative potential of TGFβ3 + Dex and notochordal cell conditioned media on degenerated human intervertebral disc cells. J Orthop Res. 2012;30:482–8. doi:10.1002/jor.21534.
Purmessur D, Cornejo MC, Cho SK, Roughley PJ, Linhardt RJ, Hecht AC, et al. Intact glycosaminoglycans from intervertebral disc-derived notochordal cell-conditioned media inhibit neurite growth while maintaining neuronal cell viability. Spine J. 2015;15:1060–9. doi:10.1016/j.spinee.2015.02.003.
Cornejo MC, Cho SK, Giannarelli C, Iatridis JC, Purmessur D. Soluble factors from the notochordal-rich intervertebral disc inhibit endothelial cell invasion and vessel formation in the presence and absence of pro-inflammatory cytokines. Osteoarthr Cartil. 2015;23:487–96. doi:10.1016/j.joca.2014.12.010.
Arkesteijn I, Potier E, Ito K. The regenerative potential of notochordal cells in a nucleus pulposus explant. Glob Spine J. 2016; doi:10.1055/s-0036-1583174.
Gantenbein-Ritter B, Chan SCW. The evolutionary importance of cell ratio between notochordal and nucleus pulposus cells: an experimental 3-D co-culture study. Eur Spine J. 2012;21:819–25. doi:10.1007/s00586-011-2026-9.
Aguiar DJ, Johnson SL, Oegema TR. Notochordal cells interact with nucleus pulposus cells: regulation of proteoglycan synthesis. Exp Cell Res. 1999;246:129–37. doi:10.1006/excr.1998.4287.
Bach FC, de Vries SAH, Krouwels A, Creemers LB, Ito K, Meij BP, et al. The species-specific regenerative effects of notochordal cell-conditioned medium on chondrocyte-like cells derived from degenerated human intervertebral discs. Eur Cell Mater. 2015;30:132–46.
Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006;126:663–76. doi:10.1016/j.cell.2006.07.024.
Fang Z, Yang Q, Luo W, Li G, Xiao J, Li F, et al. Differentiation of GFP-Bcl-2-engineered mesenchymal stem cells towards a nucleus pulposus-like phenotype under hypoxia in vitro. Biochem Biophys Res Commun. 2013;432:444–50. doi:10.1016/j.bbrc.2013.01.127.
Huang Y-C, Leung VYL, Lu WW, Luk KDK. The effects of microenvironment in mesenchymal stem cell–based regeneration of intervertebral disc. Spine J. 2013;13:352–62. doi:10.1016/j.spinee.2012.12.005.
Boyd LM, Carter AJ. Injectable biomaterials and vertebral endplate treatment for repair and regeneration of the intervertebral disc. Eur Spine J. 2006;15:414–21. doi:10.1007/s00586-006-0172-2.
Wilke H-J, Heuer F, Neidlinger-Wilke C, Claes L. Is a collagen scaffold for a tissue engineered nucleus replacement capable of restoring disc height and stability in an animal model? Eur Spine J. 2006;15:433–8. doi:10.1007/s00586-006-0177-x.
Van Tomme SR, Storm G, Hennink WE. In situ gelling hydrogels for pharmaceutical and biomedical applications. Int J Pharm.
2008;355:1–18. doi:10.1016/j.ijpharm.2008.01.057. Roughley P, Hoemann C, DesRosiers E, Mwale F, Antoniou J, Alini M. The potential of chitosan-based gels containing intervertebral disc cells for nucleus pulposus supplementation. Biomaterials. 2006;27:388–96. doi:10.1016/j.biomaterials.2005.06.037. Alsberg E, Kong HJ, Hirano Y, Smith MK, Albeiruti A, Mooney DJ. Regulating bone formation via controlled scaffold degradation. J Dent Res. 2003;82:903–8. Silva-Correia J, Oliveira JM, Caridade SG, Oliveira JT, Sousa RA, Mano JF, et al. Gellan gum-based hydrogels for intervertebral disc tissue-engineering applications. J Tissue Eng Regen Med. 2011;5:e97–107. doi:10.1002/term.363. Varghese S, Elisseeff JH. Hydrogels for Musculoskeletal Tissue Engineering. In: Werner C, editor. Polymers for Regenerative Medicine. Berlin: Springer; 2006. p. 95–144. doi:10.1007/12_072. Shogren RL, Bagley EB. Natural Polymers as Advanced Materials: Some Research Needs and Directions. Biopolymers. 1999:2–11. doi:10.1021/bk-1999-0723.ch001. Malafaya PB, G a S, Reis RL. Natural–origin polymers as carriers and scaffolds for biomolecules and cell delivery in tissue engineering applications. Adv Drug Deliv Rev. 2007;59:207–33. doi:10.1016/j.addr.2007.03.012. Puppi D, Chiellini F, Piras AM, Chiellini E. Polymeric materials for bone and cartilage repair. Prog Polym Sci. 2010;35:403–40. doi:10.1016/j.progpolymsci.2010.01.006. Mano JF, Silva GA, Azevedo HS, Malafaya PB, Sousa RA, Silva SS, et al. Natural origin biodegradable systems in tissue engineering and regenerative medicine: present status and some moving trends. J R Soc Interface. 2007;4:999–1030. doi:10.1098/rsif.2007.0220. Temenoff JS, Mikos AG. Review: tissue engineering for regeneration of articular cartilage. Biomaterials. 2000;21:431–40. doi:10.1016/S0142-9612(99)00213-6. Growney Kalaf EA, Flores R, Bledsoe JG, Sell SA. Characterization of slow-gelling alginate hydrogels for intervertebral disc tissue-engineering applications. 
Mater Sci Eng C. 2016;63:198–210. doi:10.1016/j.msec.2016.02.067. Bron JL, L a V, Smit TH, Koenderink GH. Engineering alginate for intervertebral disc repair. J Mech Behav Biomed Mater. 2011;4:1196–205. doi:10.1016/j.jmbbm.2011.04.002. Sun Z, Luo B, Liu Z, Huang L, Liu B, Ma T, et al. Effect of perfluorotributylamine-enriched alginate on nucleus pulposus cell: Implications for intervertebral disc regeneration. Biomaterials. 2016;82:34–47. doi:10.1016/j.biomaterials.2015.12.013. Hayami JWS, Waldman SD, Amsden BG. Chondrocyte Generation of Cartilage-Like Tissue Following Photoencapsulation in Methacrylated Polysaccharide Solution Blends. Macromol Biosci. 2016;16:1083–95. doi:10.1002/mabi.201500465. Karimi Z, Ghorbani M, Hashemibeni B, Bahramian H. Evaluation of the proliferation and viability rates of nucleus pulposus cells of human intervertebral disk in fabricated chitosan-gelatin scaffolds by freeze drying and freeze gelation methods. Adv Biomed Res. 2015;4:251. doi:10.4103/2277-9175.170676. Tao Y, Zhou X, Liu D, Li H, Liang C, Li F, et al. Proportion of collagen type II in the extracellular matrix promotes the differentiation of human adipose-derived mesenchymal stem cells into nucleus pulposus cells. Biofactors. 2016;42:212–23. doi:10.1002/biof.1266. Zhou X, Tao Y, Wang J, Liu D, Liang C, Li H, et al. Three-dimensional scaffold of type II collagen promote the differentiation of adipose-derived stem cells into a nucleus pulposus-like phenotype. J Biomed Mater Res Part A. 2016;104:1687–93. doi:10.1002/jbm.a.35701. Coutinho DF, Sant SV, Shin H, Oliveira JT, Gomes ME, Neves NM, et al. Modified Gellan Gum hydrogels with tunable physical and mechanical properties. Biomaterials. 2010;31:7494–502. doi:10.1016/j.biomaterials.2010.06.035. Silva-Correia J, Miranda-Gonçalves V, Salgado AJ, Sousa N, Oliveira JM, Reis RM, et al. Angiogenic Potential of Gellan-Gum-Based Hydrogels for Application in Nucleus Pulposus Regeneration: In Vivo Study. Tissue Eng Part A. 
2012;18:1203–12. doi:10.1089/ten.tea.2011.0632. Silva-Correia J, Gloria A, Oliveira MB, Mano JF, Oliveira JM, Ambrosio L, et al. Rheological and mechanical properties of acellular and cell-laden methacrylated gellan gum hydrogels. J Biomed Mater Res Part A. 2013;101:3438–46. doi:10.1002/jbm.a.34650. Khang G, Lee S, Kim H, Silva-Correia J, Gomes M, Viegas C, et al. Biological evaluation of intervertebral disc cells in different formulations of gellan gum-based hydrogels. J Tissue Eng Regen Med. 2015;9:265–75. doi:10.1002/term.1625. Tsaryk R, Silva-Correia J, Oliveira JM, Unger RE, Landes C, Brochhausen C, et al. Biological performance of cell-encapsulated methacrylated gellan gum-based hydrogels for nucleus pulposus regeneration. J Tissue Eng Regen Med. 2014;4:n/a–a. doi:10.1002/term.1959. Crevensten G, Walsh AJL, Ananthakrishnan D, Page P, Wahba GM, Lotz JC, et al. Intervertebral Disc Cell Therapy for Regeneration: Mesenchymal Stem Cell Implantation in Rat Intervertebral Discs. Ann Biomed Eng. 2004;32:430–4. doi:10.1023/B:ABME.0000017545.84833.7c. Berger J, Reist M, Mayer J, Felt O, Gurny R. Structure and interactions in chitosan hydrogels formed by complexation or aggregation for biomedical applications. Eur J Pharm Biopharm. 2004;57:35–52. doi:10.1016/S0939-6411(03)00160-7. Place ES, George JH, Williams CK, Stevens MM. Synthetic polymer scaffolds for tissue engineering. Chem Soc Rev. 2009;38:1139. doi:10.1039/b811392k. Rezwan K, Chen QZ, Blaker JJ, Boccaccini AR. Biodegradable and bioactive porous polymer/inorganic composite scaffolds for bone tissue engineering. Biomaterials. 2006;27:3413–31. doi:10.1016/j.biomaterials.2006.01.039. Thomas JD, Fussell G, Sarkar S, Lowman AM, Marcolongo M. Synthesis and recovery characteristics of branched and grafted PNIPAAm–PEG hydrogels for the development of an injectable load-bearing nucleus pulposus replacement. Acta Biomater. 2010;6:1319–28. doi:10.1016/j.actbio.2009.10.024. Wang BH, Campbell G. 
Formulations of Polyvinyl Alcohol Cryogel That Mimic the Biomechanical Properties of Soft Tissues in the Natural Lumbar Intervertebral Disc. Spine (Phila Pa 1976). 2009;34:2745–53. doi:10.1097/BRS.0b013e3181b4abf5. Neo PY, Shi P, Goh JC-H, Toh SL. Characterization and mechanical performance study of silk/PVA cryogels: towards nucleus pulposus tissue engineering. Biomed Mater. 2014;9:65002. doi:10.1088/1748-6041/9/6/065002. Joshi A, Fussell G, Thomas J, Hsuan A, Lowman A, Karduna A, et al. Functional compressive mechanics of a PVA/PVP nucleus pulposus replacement. Biomaterials. 2006;27:176–84. doi:10.1016/j.biomaterials.2005.06.003. Spiller KL, Laurencin SJ, Charlton D, S a M, Lowman AM. Superporous hydrogels for cartilage repair: Evaluation of the morphological and mechanical properties. Acta Biomater. 2008;4:17–25. doi:10.1016/j.actbio.2007.09.001. Tao H, Wu Y, Li H, Wang C, Zhang Y, Li C, et al. BMP7-Based Functionalized Self-Assembling Peptides for Nucleus Pulposus Tissue Engineering. ACS Appl Mater Interfaces. 2015;7:17076–87. doi:10.1021/acsami.5b03605. Wu Y, Jia Z, Liu L, Zhao Y, Li H, Wang C, et al. Functional Self-Assembled Peptide Nanofibers for Bone Marrow Mesenchymal Stem Cell Encapsulation and Regeneration in Nucleus Pulposus. Artif Organs. 2016;40:E112–9. doi:10.1111/aor.12694. Calderon L, Collin E, Velasco-Bayon D, Murphy M, O'Halloran D, Pandit A, Type II. collagen-hyaluronan hydrogel--a step towards a scaffold for intervertebral disc tissue engineering. Eur Cell Mater. 2010;20:134–48. doi:10.1002/biof.1266. Oliveira JT, Santos TC, Martins L, Silva MA, Marques AP, Castro AG, et al. Performance of new gellan gum hydrogels combined with human articular chondrocytes for cartilage regeneration when subcutaneously implanted in nude mice. J Tissue Eng Regen Med. 2009;3:493–500. doi:10.1002/term.184. Oliveira JT, Martins L, Picciochi R, Malafaya PB, Sousa RA, Neves NM, et al. Gellan gum: A new biomaterial for cartilage tissue engineering applications. 
J Biomed Mater Res Part A. 2009;9999A:NA–A. doi:10.1002/jbm.a.32574. Oliveira JT, Santos TC, Martins L, Picciochi R, Marques AP, Castro AG, et al. Gellan Gum Injectable Hydrogels for Cartilage Tissue Engineering Applications: In Vitro Studies and Preliminary In Vivo Evaluation. Tissue Eng Part A. 2010;16:343–53. doi:10.1089/ten.tea.2009.0117. Silva-Correia J, Zavan B, Vindigni V, Silva TH, Oliveira JM, Abatangelo G, et al. Biocompatibility Evaluation of Ionic- and Photo-Crosslinked Methacrylated Gellan Gum Hydrogels: In Vitro and In Vivo Study. Adv Healthc Mater. 2013;2:568–75. doi:10.1002/adhm.201200256. Pereira DR, Silva-Correia J, Caridade SG, Oliveira JT, Sousa RA, Salgado AJ, et al. Development of Gellan Gum-Based Microparticles/Hydrogel Matrices for Application in the Intervertebral Disc Regeneration. Tissue Eng Part C Methods. 2011;17:961–72. doi:10.1089/ten.tec.2011.0115. Endres M, Abbushi A, Thomale UW, Cabraja M, Kroppenstedt SN, Morawietz L, et al. Intervertebral disc regeneration after implantation of a cell-free bioresorbable implant in a rabbit disc degeneration model. Biomaterials. 2010;31:5836–41. doi:10.1016/j.biomaterials.2010.03.078. Silva-Correia J, Oliveira J, Oliveira J, Amandi R, Reis R. Photo-crosslinked gellan gum-based hydrogels: preparation methods and uses thereof. WO Patent 2011119059. 2011;:7. Thorvaldsson A, Silva-Correia J, Oliveira JM, Reis RL, Gatenholm P, Walkenström P. Development of nanofiber-reinforced hydrogel scaffolds for nucleus pulposus regeneration by a combination of electrospinning and spraying technique. J Appl Polym Sci. 2013;128:1158–63. doi:10.1002/app.38316. Reza AT, Nicoll SB. Characterization of novel photocrosslinked carboxymethylcellulose hydrogels for encapsulation of nucleus pulposus cells. Acta Biomater. 2010;6:179–86. doi:10.1016/j.actbio.2009.06.004. Reza AT, Nicoll SB. 
Serum-free, chemically defined medium with TGF-beta(3) enhances functional properties of nucleus pulposus cell-laden carboxymethylcellulose hydrogel constructs. Biotechnol Bioeng. 2010;105:384–95. doi:10.1002/bit.22545. Gupta MS, Nicoll SB. Duration of TGF-β3 Exposure Impacts the Chondrogenic Maturation of Human MSCs in Photocrosslinked Carboxymethylcellulose Hydrogels. Ann Biomed Eng. 2015;43:1145–57. doi:10.1007/s10439-014-1179-1. Lin HA, Gupta MS, Varma DM, Gilchrist ML, Nicoll SB. Lower crosslinking density enhances functional nucleus pulposus-like matrix elaboration by human mesenchymal stem cells in carboxymethylcellulose hydrogels. J Biomed Mater Res Part A. 2016;104:165–77. doi:10.1002/jbm.a.35552. Gupta MS, Nicoll SB. Functional nucleus pulposus-like matrix assembly by human mesenchymal stromal cells is directed by macromer concentration in photocrosslinked carboxymethylcellulose hydrogels. Cell Tissue Res. 2014;358:527–39. doi:10.1007/s00441-014-1962-1. Thorpe AA, Boyes VL, Sammon C, Le Maitre CL. Thermally triggered injectable hydrogel, which induces mesenchymal stem cell differentiation to nucleus pulposus cells: Potential for regeneration of the intervertebral disc. Acta Biomater. 2016;36:99–111. doi:10.1016/j.actbio.2016.03.029. Long RG, Bürki A, Zysset P, Eglin D, Grijpma DW, Blanquer SBG, et al. Mechanical restoration and failure analyses of a hydrogel and scaffold composite strategy for annulus fibrosus repair. Acta Biomater. 2016;30:116–25. doi:10.1016/j.actbio.2015.11.015. Xin L, Zhang C, Zhong F, Fan S, Wang W, Wang Z. Minimal invasive annulotomy for induction of disc degeneration and implantation of poly (lactic-co-glycolic acid) (PLGA) plugs for annular repair in a rabbit model. Eur J Med Res. 2016;21:7. doi:10.1186/s40001-016-0202-4. Pirvu T, Blanquer SBG, Benneker LM, Grijpma DW, Richards RG, Alini M, et al. A combined biomaterial and cellular approach for annulus fibrosus rupture repair. Biomaterials. 2015;42:11–9. 
doi:10.1016/j.biomaterials.2014.11.049. Choy ATH, Chan BP. A Structurally and Functionally Biomimetic Biphasic Scaffold for Intervertebral Disc Tissue Engineering. PLoS One. 2015;10:e0131827. doi:10.1371/journal.pone.0131827. Xu B, Xu H, Wu Y, Li X, Zhang Y, Ma X, et al. Intervertebral Disc Tissue Engineering with Natural Extracellular Matrix-Derived Biphasic Composite Scaffolds. PLoS One. 2015;10:e0124774. doi:10.1371/journal.pone.0124774. Xu B, Du L, Zhang J, Zhu M, Ji S, Zhang Y, et al. Circumferentially oriented microfiber scaffold prepared by wet-spinning for tissue engineering of annulus fibrosus. RSC Adv. 2015;5:42705–13. doi:10.1039/C5RA03347K. Zhu Q, Gao X, Brown MD, Temple HT, Gu W. Simulation of water content distributions in degenerated human intervertebral discs. J Orthop Res. 2016; April 2016:1–20. doi:10.1002/jor.23284. van Uden S, Silva-Correia J, Correlo VM, Oliveira JM, Reis RL. Custom-tailored tissue engineered polycaprolactone scaffolds for total disc replacement. Biofabrication. 2015;7:15008. doi:10.1088/1758-5090/7/1/015008. Zhu C, Li J, Liu C, Zhou P, Yang H, Li B. Modulation of the gene expression of annulus fibrosus-derived stem cells using poly(ether carbonate urethane)urea scaffolds of tunable elasticity. Acta Biomater. 2016;29:228–38. doi:10.1016/j.actbio.2015.09.039. Guterl CC, See EY, Blanquer SBG, Pandit A, Ferguson SJ, Benneker LM, et al. Challenges and strategies in the repair of ruptured annulus fibrosus. Eur Cell Mater. 2013 Jan 2;25:1–21. Bangel-Ruland N, Tomczak K, Fernández Fernández E, Leier G, Leciejewski B, Rudolph C, et al. Cystic fibrosis transmembrane conductance regulator-mRNA delivery: a novel alternative for cystic fibrosis gene therapy. J Gene Med. 2013;15:414–26. doi:10.1002/jgm.2748. Guo Q, Liu C, Li J, Zhu C, Yang H, Li B. Gene expression modulation in TGF-β3-mediated rabbit bone marrow stem cells using electrospun scaffolds of various stiffness. J Cell Mol Med. 2015;19:1582–92. doi:10.1111/jcmm.12533. 
Liu C, Zhu C, Li J, Zhou P, Chen M, Yang H, et al. The effect of the fibre orientation of electrospun scaffolds on the matrix production of rabbit annulus fibrosus-derived stem cells. Bone Res. 2015;3 April:15012. doi:10.1038/boneres. 2015:12. Vadalà G, Russo F, Ambrosio L, Loppini M, Denaro V. Stem cells sources for intervertebral disc regeneration. World J Stem Cells. 2016;8:185–201. doi:10.4252/wjsc.v8.i5.185. Freeman BJC, Kuliwaba JS, Jones CF, Shu CC, Colloca CJ, Zarrinkalam MR, et al. Allogeneic Mesenchymal Stem Cells Promote Healing in Postero-Lateral Annular Lesions and Improve Indices of Lumbar Intervertebral Disc Degeneration in an Ovine Model. Spine (Phila Pa 1976). 2016; March:1. doi:doi:10.1097/BRS.0000000000001528. Freeman BJC, Walters RM, Moore RJ, Fraser RD. Does Intradiscal Electrothermal Therapy Denervate and Repair Experimentally Induced Posterolateral Annular Tears in an Animal Model? Spine (Phila Pa 1976). 2003;28:2602–8. doi:10.1097/01.BRS.0000097889.01759.05. Cassinelli EH, Hall RA, Kang JD. Biochemistry of intervertebral disc degeneration and the potential for gene therapy applications. Spine J. 2001;1:205–14. doi:10.1016/S1529-9430(01)00021-3. Sun DDN, Leong KW. A Nonlinear Hyperelastic Mixture Theory Model for Anisotropy, Transport, and Swelling of Annulus Fibrosus. Ann Biomed Eng. 2004;32:92–102. doi:10.1023/B:ABME.0000007794.87408.1e. Iatridis JC, Nicoll SB, Michalek AJ, B a W, Gupta MS. Role of biomechanics in intervertebral disc degeneration and regenerative therapies: what needs repairing in the disc and what are promising biomaterials for its repair? Spine J. 2013;13:243–62. doi:10.1016/j.spinee.2012.12.002. Park S-H, Gil ES, Cho H, Mandal BB, Tien LW, Min B, et al. Intervertebral Disk Tissue Engineering Using Biphasic Silk Composite Scaffolds. Tissue Eng Part A. 2012;18:447–58. doi:10.1089/ten.tea.2011.0195. Bowles RD, Gebhard HH, Dyke JP, Ballon DJ, Tomasino A, Cunningham ME, et al. 
Image-based tissue engineering of a total intervertebral disc implant for restoration of function to the rat lumbar spine. NMR Biomed. 2012;25:443–51. doi:10.1002/nbm.1651. Wismer N, Grad S, Fortunato G, Ferguson SJ, Alini M, Eglin D. Biodegradable Electrospun Scaffolds for Annulus Fibrosus Tissue Engineering: Effect of Scaffold Structure and Composition on Annulus Fibrosus Cells In Vitro. Tissue Eng Part A. 2014;20:140123085256009. doi:10.1089/ten.tea.2012.0679. Martin JT, Milby AH, Ikuta K, Poudel S, Pfeifer CG, Elliott DM, et al. A radiopaque electrospun scaffold for engineering fibrous musculoskeletal tissues: Scaffold characterization and in vivo applications. Acta Biomater. 2015;26:97–104. doi:10.1016/j.actbio.2015.08.001. Pattappa G, Li Z, Peroglio M, Wismer N, Alini M, Grad S. Diversity of intervertebral disc cells: phenotype and function. J Anat. 2012;221:480–96. doi:10.1111/j.1469-7580.2012.01521.x. Denning D, Paukshto MV, Habelitz S, Rodriguez BJ. Piezoelectric properties of aligned collagen membranes. J Biomed Mater Res Part B Appl Biomater. 2014;102:284–92. doi:10.1002/jbm.b.33006. Marino AA, Becker RO. Piezoelectric Effect and Growth Control in Bone. Nature. 1970;228:473–4. doi:10.1038/228473a0. Sato M, Asazuma T, Ishihara M, Kikuchi T, Masuoka K, Ichimura S, et al. An atelocollagen honeycomb-shaped scaffold with a membrane seal (ACHMS-scaffold) for the culture of annulus fibrosus cells from an intervertebral disc. J Biomed Mater Res. 2003;64A:248–56. doi:10.1002/jbm.a.10287. Saad L, Spector M. Effects of collagen type on the behavior of adult canine annulus fibrosus cells in collagen-glycosaminoglycan scaffolds. J Biomed Mater Res. 2004;71A:233–41. doi:10.1002/jbm.a.30150. Schneider TO, Mueller SM, Shortkroff S, Spector M. Expression of alpha-smooth muscle actin in canine intervertebal disc cellsin situ and in collagen-glycosaminoglycan matricesin vitro. J Orthop Res. 1999;17:192–9. doi:10.1002/jor.1100170207. Choy ATH, Leong KW, Chan BP. 
Chemical modification of collagen improves glycosaminoglycan retention of their co-precipitates. Acta Biomater. 2013;9:4661–72. doi:10.1016/j.actbio.2012.09.016. Chan BP, Hui TY, Chan OCM, So K-F, Lu W, Cheung KMC, et al. Photochemical Cross-Linking for Collagen-Based Scaffolds: A Study on Optical Properties, Mechanical Properties, Stability, and Hematocompatibility. Tissue Eng. 2007;13:73–85. doi:10.1089/ten.2006.0004. Suggs LJ, Moore SA, Mikos AG. Synthetic Biodegradable Polymers for Medical Applications. In: JE M, editor. Physical Properties of Polymers Handbook. New York, NY: Springer; 2007. p. 939–50. doi:10.1007/978-0-387-69002-5_55. Htay a S, Teoh SH, Hutmacher DW. Development of perforated microthin poly(ε-caprolactone) films as matrices for membrane tissue engineering. J Biomater Sci Polym Ed. 2004;15:683–700. doi:10.1163/156856204323046933. Hudson KD, Alimi M, Grunert P, Härtl R, Bonassar LJ. Recent advances in biological therapies for disc degeneration: tissue engineering of the annulus fibrosus, nucleus pulposus and whole intervertebral discs. Curr Opin Biotechnol. 2013;24:872–9. doi:10.1016/j.copbio.2013.04.012. Stroganov V, Al-Hussein M, Sommer J-U, Janke A, Zakharchenko S, Ionov L. Reversible Thermosensitive Biodegradable Polymeric Actuators Based on Confined Crystallization. Nano Lett. 2015;15:1786–90. doi:10.1021/nl5045023. Oner T, Cengiz I, Pitikakis M, Cesario L, Parascadolo P, Vosilla L, et al. 3D segmentation of intervertebral discs : from concept to the fabrication of patient-specific scaffolds. J 3D Print Med. 2017;1:91–101. Martin JT, Milby AH, J a C, Kim DH, Hebela NM, Smith LJ, et al. Translation of an engineered nanofibrous disc-like angle-ply structure for intervertebral disc replacement in a small animal model. Acta Biomater. 2014;10:2473–81. doi:10.1016/j.actbio.2014.02.024. Koepsell L, Zhang L, Neufeld D, Fong H, Deng Y. Electrospun Nanofibrous Polycaprolactone Scaffolds for Tissue Engineering of Annulus Fibrosus. Macromol Biosci. 
2011;11:391–9. doi:10.1002/mabi.201000352. Guterl CC, Torre OM, Purmessur D, Dave K, Likhitpanichkul M, Hecht AC, et al. Characterization of Mechanics and Cytocompatibility of Fibrin-Genipin Annulus Fibrosus Sealant with the Addition of Cell Adhesion Molecules. Tissue Eng Part A. 2014;20:2536–45. doi:10.1089/ten.tea.2012.0714. Likhitpanichkul M, Dreischarf M, Illien-Junger S, Walter BA, Nukaga T, Long RG, et al. Fibrin-genipin adhesive hydrogel for annulus fibrosus repair: performance evaluation with large animal organ culture, in situ biomechanics, and in vivo degradation tests. Eur Cell Mater. 2014;28:25–37. Blanquer SBG, Sharifi S, Grijpma DW. Development of poly(trimethylene carbonate) network implants for annulus fibrosus tissue engineering. J Appl Biomater Funct Mater. 2012;10:177–84. doi:10.5301/JABFM.2012.10354. Wang F, Li Z, Lannutti JL, Wagner WR, Guan J. Synthesis, characterization and surface modification of low moduli poly(ether carbonate urethane)ureas for soft tissue engineering. Acta Biomater. 2009;5:2901–12. doi:10.1016/j.actbio.2009.04.016. Chang G, Kim H-J, Kaplan D, Vunjak-Novakovic G, R a K. Porous silk scaffolds can be used for tissue engineering annulus fibrosus. Eur Spine J. 2007;16:1848–57. doi:10.1007/s00586-007-0364-4. Yasen M, Li X, Jiang L, Yuan W, Che W, Dong J. Effect of zoledronic acid on spinal fusion outcomes in an ovariectomized rat model of osteoporosis. J Orthop Res. 2015;33:1297–304. doi:10.1002/jor.22763. Chik TK, Chooi WH, Li YY, Ho FC, Cheng HW, Choy TH, et al. Bioengineering a Multicomponent Spinal Motion Segment Construct-A 3D Model for Complex Tissue Engineering. Adv Healthc Mater. 2015;4:99–112. doi:10.1002/adhm.201400192. Cheng H, Luk KDK, Cheung KMC, Chan BP. In vitro generation of an osteochondral interface from mesenchymal stem cell–collagen microspheres. Biomaterials. 2011;32:1526–35. doi:10.1016/j.biomaterials.2010.10.021. Schexnailder P, Schmidt G. Nanocomposite polymer hydrogels. Colloid Polym Sci. 2009;287:1–11. 
doi:10.1007/s00396-008-1949-0. Zhu J. Bioactive modification of poly(ethylene glycol) hydrogels for tissue engineering. Biomaterials. 2010;31:4639–56. doi:10.1016/j.biomaterials.2010.02.044. The authors would like to acknowledge the support provided by the Portuguese Foundation for Science and Technology (FCT) through the project EPIDisc (UTAP-EXPL/BBBECT/0050/2014), funded in the Framework of the "International Collaboratory for Emerging Technologies, CoLab", UT Austin|Portugal Program. The FCT distinctions attributed to J. Miguel Oliveira (IF/00423/2012 and IF/01285/2015) and J. Silva-Correia (IF/00115/2015) under the Investigator FCT program are also greatly acknowledged. 3B's Research Group—Biomaterials, Biodegradables and Biomimetics, University of Minho, Headquarters of the European Institute of Excellence on Tissue Engineering and Regenerative Medicine, AvePark, Parque de Ciência e Tecnologia, Zona Industrial da Gandra, 4805-017 Barco GMR, Gandra, Portugal Sebastião van Uden, Joana Silva-Correia, Joaquim Miguel Oliveira & Rui Luís Reis ICVS/3B's—PT Government Associate Laboratory, Guimarães, Braga, Portugal The Discoveries Centre for Regenerative and Precision Medicine, Headquarters at University of Minho, Avepark, 4805-017 Barco, Guimarães, Portugal Joaquim Miguel Oliveira & Rui Luís Reis Present Address: Bioengineering Laboratories Srl, Viale Brianza 8, Meda, Italy Sebastião van Uden Present Address: Politecnico di Milano, Piazza Leonardo da Vinci, 32, Milan, Italy Joana Silva-Correia Joaquim Miguel Oliveira Rui Luís Reis SvU wrote the raw manuscript closely supervised by JSC and JMO considering the backbone structure of the manuscript, who applied all revisions supplied by JSC, JMO and RLR on the several versions of the manuscript until all authors read and approved the final manuscript. Correspondence to Sebastião van Uden. van Uden, S., Silva-Correia, J., Oliveira, J.M. et al. 
Current strategies for treatment of intervertebral disc degeneration: substitution and regeneration possibilities. Biomater Res 21, 22 (2017). https://doi.org/10.1186/s40824-017-0106-6
Composite statistic of life expectancy, education, and income indices

"HDI" redirects here. For other uses, see HDI (disambiguation). For the complete ranking of countries, see List of countries by Human Development Index.

[Figure: World map representing Human Development Index categories (based on 2019 data, published in 2020): Very high (≥ 0.800), High (0.700–0.799), Medium (0.550–0.699), Low (≤ 0.549), or data unavailable.]

[Figure: World map of countries by Human Development Index in increments of 0.050, from ≥ 0.900 down to ≤ 0.399 (based on 2019 data, published in 2020).]

The Human Development Index (HDI) is a composite statistical index of life expectancy, education (mean years of schooling completed and expected years of schooling upon entering the education system), and per capita income indicators, which are used to rank countries into four tiers of human development. A country scores a higher HDI when its lifespan is longer, its education level is higher, and its gross national income (GNI, at purchasing power parity) per capita is higher. The index was developed by Pakistani economist Mahbub ul Haq and is used to measure a country's development by the United Nations Development Programme (UNDP)'s Human Development Report Office.[1][2][3]

The 2010 Human Development Report introduced an Inequality-adjusted Human Development Index (IHDI). While the simple HDI remains useful, the report stated that "the IHDI is the actual level of human development (accounting for inequality), while the HDI can be viewed as an index of 'potential' human development (or the maximum level of HDI) that could be achieved if there were no inequality."[4]

The index is based on the human development approach, developed by Mahbub ul Haq and anchored in Amartya Sen's work on human capabilities, often framed in terms of whether people are able to "be" and "do" desirable things in life.
Examples include being well fed, sheltered, and healthy (states of "being"), and working, being educated, voting, and participating in community life (forms of "doing"). The freedom of choice is central: someone choosing to be hungry (as during a religious fast) is quite different from someone who is hungry because they cannot afford to buy food, or because the country is in a famine.[5]

The index does not take into account several factors, such as the net wealth per capita or the relative quality of goods in a country. This tends to lower the ranking of some of the most advanced countries, such as the G7 members and others.[6]

History

The origins of the HDI are found in the annual Human Development Reports produced by the Human Development Report Office of the United Nations Development Programme (UNDP). These were devised and launched by Pakistani economist Mahbub ul Haq in 1990, with the explicit purpose "to shift the focus of development economics from national income accounting to people-centered policies". Haq believed that a simple composite measure of human development was needed to convince the public, academics, and politicians that they can and should evaluate development not only by economic advances but also by improvements in human well-being.
[Figure: The underlying principles behind the Human Development Index.[5]]

Dimensions and calculation

New method (2010 HDI onwards)

Published on 4 November 2010 (and updated on 10 June 2011), the 2010 Human Development Report calculated the HDI by combining three dimensions:[7][8]

- A long and healthy life: life expectancy at birth
- Education index: mean years of schooling and expected years of schooling
- A decent standard of living: GNI per capita (PPP international dollars)

In its 2010 Human Development Report, the UNDP began using a new method of calculating the HDI. The following three indices are used:

1. Life Expectancy Index: $\text{LEI} = \dfrac{\text{LE} - 20}{85 - 20}$

LEI is 1 when life expectancy at birth is 85 and 0 when life expectancy at birth is 20.

2. Education Index: $\text{EI} = \dfrac{\text{MYSI} + \text{EYSI}}{2}$ [9]

2.1 Mean Years of Schooling Index: $\text{MYSI} = \dfrac{\text{MYS}}{15}$ [10] (fifteen is the projected maximum of this indicator for 2025)

2.2 Expected Years of Schooling Index: $\text{EYSI} = \dfrac{\text{EYS}}{18}$ [11] (eighteen is equivalent to achieving a master's degree in most countries)

3. Income Index: $\text{II} = \dfrac{\ln(\text{GNIpc}) - \ln(100)}{\ln(75{,}000) - \ln(100)}$

II is 1 when GNI per capita is $75,000 and 0 when GNI per capita is $100.

Finally, the HDI is the geometric mean of the previous three normalized indices:

$$\text{HDI} = \sqrt[3]{\text{LEI} \cdot \text{EI} \cdot \text{II}}$$

where:

- LE: life expectancy at birth
- MYS: mean years of schooling (i.e. years that a person aged 25 or older has spent in formal education)
- EYS: expected years of schooling (i.e.
total expected years of schooling for children under 18 years of age)
- GNIpc: gross national income at purchasing power parity per capita

Old method (HDI before 2010)

The HDI combined three dimensions, last used in its 2009 report:

- Life expectancy at birth, as an index of population health and longevity
- Knowledge and education, as measured by the adult literacy rate (with two-thirds weighting) and the combined primary, secondary, and tertiary gross enrollment ratio (with one-third weighting)
- Standard of living, as indicated by the natural logarithm of gross domestic product per capita at purchasing power parity

[Figure: HDI trends between 1975 and 2004 by region, including Europe (not in the OECD) and the CIS.]

This methodology was used by the UNDP until their 2011 report. The formula defining the HDI is promulgated by the United Nations Development Programme (UNDP).[12] In general, to transform a raw variable, say $x$, into a unit-free index between 0 and 1 (which allows different indices to be added together), the following formula is used:

$$x_{\text{index}} = \frac{x - a}{b - a}$$

where $a$ and $b$ are the lowest and highest values the variable $x$ can attain, respectively.
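As a concrete illustration, the calculations in this section — the generic min–max normalization, the post-2010 geometric mean, and the pre-2010 equally weighted arithmetic mean given below — can be sketched in a few lines of Python. This is a minimal sketch, not the UNDP's official implementation, and the country inputs used are hypothetical, not official figures:

```python
import math

def normalize(x, a, b):
    """Generic min-max index: maps x in [a, b] onto [0, 1]."""
    return (x - a) / (b - a)

def hdi_post_2010(le, mys, eys, gni_pc):
    """Post-2010 HDI: geometric mean of the three dimension indices."""
    lei = normalize(le, 20, 85)                # Life Expectancy Index
    ei = (mys / 15 + eys / 18) / 2             # Education Index (mean of MYSI and EYSI)
    ii = normalize(math.log(gni_pc), math.log(100), math.log(75_000))  # Income Index
    return (lei * ei * ii) ** (1 / 3)

def hdi_pre_2010(le, alr, cger, gdp_pc):
    """Pre-2010 HDI: equally weighted arithmetic mean of the three indices."""
    lei = normalize(le, 25, 85)
    ei = (2 / 3) * (alr / 100) + (1 / 3) * (cger / 100)  # literacy 2/3, enrollment 1/3
    gdpi = normalize(math.log(gdp_pc), math.log(100), math.log(40_000))
    return (lei + ei + gdpi) / 3

# Hypothetical country: life expectancy 82, mean schooling 12.5 years,
# expected schooling 18 years, GNI per capita $65,000 (PPP).
print(round(hdi_post_2010(82, 12.5, 18, 65_000), 3))
```

Note that the income index is a ratio of logarithm differences, so the base of the logarithm cancels: natural log and log base 10 give identical results.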
The Human Development Index (HDI) then represents the uniformly weighted sum with 1/3 contributed by each of the following factor indices:

- Life Expectancy Index $= \frac{LE - 25}{85 - 25}$
- Education Index $= \frac{2}{3} \times ALI + \frac{1}{3} \times GEI$
  - Adult Literacy Index (ALI) $= \frac{ALR - 0}{100 - 0}$
  - Gross Enrollment Index (GEI) $= \frac{CGER - 0}{100 - 0}$
- GDP Index $= \frac{\log(GDPpc) - \log(100)}{\log(40000) - \log(100)}$

2019 Human Development Index (2020 report)

Main article: List of countries by Human Development Index

The Human Development Report 2020 by the United Nations Development Programme was released on 15 December 2020, and calculates HDI values based on data collected in 2019.[13] The list comprises countries and territories with very high human development:

= increase. = steady. = decrease.
Columns: rank; (change in rank over 5 years, from 2014)[15]; country; HDI (2019 data, 2020 report)[14]; average annual HDI growth (2010-2019)[15].

Very high human development
1 Norway 0.957 0.20%
2 (7) Ireland 0.955 0.65%
2 Switzerland 0.955 0.16%
4 (7) Hong Kong 0.949 0.54%
4 (4) Iceland 0.949 0.62%
6 (3) Germany 0.947 0.24%
7 (3) Sweden 0.945 0.41%
8 (2) Australia 0.944 0.17%
8 (1) Netherlands 0.944 0.32%
10 (6) Denmark 0.940 0.28%
11 (2) Finland 0.938 0.26%
11 Singapore 0.938 0.35%
13 United Kingdom 0.932 0.24%
14 (1) Belgium 0.931 0.25%
14 (3) New Zealand 0.931 0.30%
16 (1) Canada 0.929 0.34%
17 (3) United States 0.926 0.12%
18 Austria 0.922 0.22%
19 (1) Israel 0.919 0.29%
19 (2) Japan 0.919 0.39%
19 Liechtenstein 0.919 0.18%
22 (2) Slovenia 0.917 0.35%
23 (1) South Korea 0.916 0.33%
23 Luxembourg 0.916 0.22%
25 (1) Spain 0.904 0.40%
26 (1) France 0.901 0.28%
27 (1) Czech Republic 0.900 0.38%
28 (2) Malta 0.895 0.54%
29 (2) Estonia 0.892 0.51%
29 (1) Italy 0.892 0.16%
31 (6) United Arab Emirates 0.890 0.91%
32 (3) Greece 0.888 0.29%
33 Cyprus 0.887 0.40%
34 Lithuania 0.882 0.66%
35 Poland 0.880 0.52%
36 (4) Andorra 0.868 0.40%
37 (3) Latvia 0.866 0.55%
38 (1) Portugal 0.864 0.46%
39 (2) Slovakia 0.860 0.38%
40 (1) Hungary 0.854 0.30%
40 (4) Saudi Arabia 0.854 0.60%
42 (6) Bahrain 0.852 0.70%
43 Chile 0.851 0.65%
43 (2) Croatia 0.851 0.48%
45 Qatar 0.848 0.19%
46 (2) Argentina 0.845 0.21%
47 (6) Brunei 0.838 0.15%
48 (2) Montenegro 0.829 0.37%
49 (2) Romania 0.828 0.31%
50 (3) Palau 0.826 0.55%
51 (7) Kazakhstan 0.825 0.86%
52 (1) Russia 0.824 0.60%
53 (4) Belarus 0.823 0.39%
54 (5) Turkey 0.820 1.16%
55 (1) Uruguay 0.817 0.49%
56 (2) Bulgaria 0.816 0.39%
57 (5) Panama 0.815 0.58%
58 (3) Bahamas 0.814 0.12%
58 (6) Barbados 0.814 0.23%
60 (3) Oman 0.813 0.43%
61 (7) Georgia 0.812 0.87%
62 (3) Costa Rica 0.810 0.64%
62 (1) Malaysia 0.810 0.54%
64 (5) Kuwait 0.806 0.25%
64 (3) Serbia 0.806 0.57%
66 (2) Mauritius 0.804 0.76%

Inequality-adjusted HDI (2020 report)

Main article: List of countries by
inequality-adjusted HDI

The Inequality-adjusted Human Development Index (IHDI)[16] "equals the HDI when there is no inequality across people but is less than the HDI as inequality rises. In this sense, the IHDI is the actual level of human development (accounting for this inequality), while the HDI can be viewed as an index of 'potential' human development (or the maximum level of HDI) that could be achieved if there was no inequality. The 'loss' in potential human development due to inequality is given by the difference between the HDI and the IHDI and can be expressed as a percentage."

The list comprises countries and territories with very high and high human development:

2019 estimates (2020 report)[17][18][19] — IHDI, HDI, overall loss (%):
1 Norway 0.899 0.957 6.1 0.021
2 Iceland 0.894 0.949 5.8 0.055
3 Switzerland 0.889 0.955 6.9 0.015
4 Finland 0.888 0.938 5.3 0.040
5 Ireland 0.885 0.955 7.3 0.066
6 Denmark 0.883 0.940 6.1 0.025
7 Sweden 0.882 0.945 6.7 0.033
8 Netherlands 0.878 0.944 7.0 0.036
9 Slovenia 0.875 0.917 4.6 0.047
10 Germany 0.869 0.947 8.2 0.016
11 Australia 0.867 0.944 8.2 0.011
12 Czech Republic 0.860 0.900 4.4 0.042
13 Belgium 0.859 0.931 7.7 0.026
14 New Zealand 0.859 0.931 7.7 NA
15 Austria 0.857 0.922 7.0 0.021
16 United Kingdom 0.856 0.932 8.2 0.032
17 Canada 0.848 0.929 8.7 0.025
18 Japan 0.843 0.919 8.3 0.053[a]
19 Estonia 0.829 0.882 7.1 0.051
20 Luxembourg 0.826 0.916 9.8 0.009
21 Hong Kong 0.824 0.949 13.2 NA
22 Malta 0.823 0.895 8.0 0.033[b]
23 France 0.820 0.901 9.0 0.022
24 South Korea 0.815 0.916 11.0 0.074
25 Israel 0.814 0.919 11.4 0.031
26 Singapore 0.813 0.938 13.3 NA
26 Poland 0.813 0.880 7.6 0.063
28 United States 0.808 0.926 12.7 0.004
29 Slovakia 0.807 0.860 6.2 0.032
30 Cyprus 0.805 0.887 9.2 0.048
High human development
31 Hungary 0.791 0.854 7.4 0.032
31 Lithuania 0.791 0.882 10.3 0.055
31 Greece 0.791 0.888 10.9 0.014
34 Italy 0.783 0.892 12.2 0.010
34 Latvia 0.783 0.866 9.6 0.050
34 Croatia 0.783 0.851 8.0 0.092
34 Spain 0.783 0.904 13.4 0.004
38 Belarus 0.771 0.823 6.3 0.050
39 Kazakhstan 0.766 0.825 7.2 0.105
40 Portugal 0.761 0.850 12.7 0.031
41 Montenegro 0.749 0.829 9.7 0.026
42 Russia 0.740 0.824 10.2 0.049
43 Romania 0.730 0.828 11.8 0.022
44 Argentina 0.729 0.845 13.7 0.063
45 Ukraine 0.728 0.779 6.4 0.035
46 Bulgaria 0.721 0.816 11.6 0.022
47 Georgia 0.716 0.812 11.8 0.093
48 Uruguay 0.712 0.817 12.7 0.055
49 Chile 0.709 0.851 16.7 0.058
50 Albania 0.708 0.795 10.9 0.058
51 Oman 0.706 0.813 13.2 NA
52 Serbia 0.705 0.806 12.5 0.021

Past top countries

The list below displays the top-ranked country from each year of the Human Development Index. Norway has been ranked the highest sixteen times, Canada eight times, and Japan and Iceland twice.

In each original HDI

The year represents the time period from which the statistics for the index were derived. In parentheses is the year when the report was published.

2019 (2020): Norway
2006 (2008): Iceland
1998 (2000): Canada
???? (1994): Canada
????
(1993): Japan
1990 (1991): Japan

Geographical coverage

The HDI has extended its geographical coverage: David Hastings, of the United Nations Economic and Social Commission for Asia and the Pacific, published a report geographically extending the HDI to 230+ economies, whereas the UNDP HDI for 2009 enumerates 182 economies, and coverage for the 2010 HDI dropped to 169 countries.[20][21]

Country/region specific HDI lists

Argentinean provinces, Australian states, Baltic regions, Bolivian departments, Brazilian states, Canadian provinces and territories, Chilean regions, Chinese administrative divisions, Colombian departments, Danish regions, Dutch provinces, Ethiopian regions, Greek regions, Tamil Nadu districts, Indonesian provinces, Iranian provinces, Iraqi governorates, Japanese prefectures, Latin American countries, Mexican states, Nigerian states, Pakistani administrative units, Philippine provinces, Palestinian regions, Polish voivodeships, Russian federal subjects, South African provinces, Spanish communities, Swiss regions, UK countries and regions of England, U.S. states (American Human Development Report (AHDR)), Venezuelan states

Criticism

HDI vs. ecological footprint

The Human Development Index has been criticized on a number of grounds, including alleged lack of consideration of technological development or contributions to the human civilization, focusing exclusively on national performance and ranking, lack of attention to development from a global perspective, measurement error of the underlying statistics, and the UNDP's changes in formula, which can lead to severe misclassification in the categorisation of "low", "medium", "high" or "very high" human development countries.[22]

Sources of data error

Economists Hendrik Wolff, Howard Chong and Maximilian Auffhammer discuss the HDI from the perspective of data error in the underlying health, education and income statistics used to construct the HDI.
They identified three sources of data error which are due to (i) data updating, (ii) formula revisions and (iii) thresholds to classify a country's development status and conclude that 11%, 21% and 34% of all countries can be interpreted as currently misclassified in the development bins due to the three sources of data error, respectively. The authors suggest that the United Nations should discontinue the practice of classifying countries into development bins because: the cut-off values seem arbitrary, can provide incentives for strategic behavior in reporting official statistics, and have the potential to misguide politicians, investors, charity donors and the public who use the HDI at large.[22] In 2010, the UNDP reacted to the criticism and updated the thresholds to classify nations as low, medium, and high human development countries. In a comment to The Economist in early January 2011, the Human Development Report Office responded[23] to a 6 January 2011 article in the magazine[24] which discusses the Wolff et al. paper. The Human Development Report Office states that they undertook a systematic revision of the methods used for the calculation of the HDI, and that the new methodology directly addresses the critique by Wolff et al. in that it generates a system for continuously updating the human-development categories whenever formula or data revisions take place. In 2013, Salvatore Monni and Alessandro Spaventa emphasized that in the debate of GDP versus HDI, it is often forgotten that these are both external indicators that prioritize different benchmarks upon which the quantification of societal welfare can be predicated. 
The larger question is whether it is possible to shift the focus of policy from a battle between competing paradigms to a mechanism for eliciting information on well-being directly from the population.[25]

See also

Indices: Bhutan GNH Index, Broad measures of economic progress, Corruption Perceptions Index, Democracy Index, Fragile States Index, Gender Inequality Index, Gender-related Development Index, Genuine Progress Indicator (GPI), Global Peace Index (GPI), Green gross domestic product (Green GDP), Green national product, Gross National Well-being (GNW), Happy Planet Index (HPI), Human Poverty Index, Inequality-adjusted Human Development Index (IHDI), Legatum Prosperity Index, List of countries by Human Development Index, Living Planet Index, Multidimensional Poverty Index, OECD Better Life Index (BLI), Planetary pressures–adjusted Human Development Index (PHDI), Rule of Law Index, Social Progress Index, Where-to-be-born Index

Other: Ethics of care, Happiness economics, Human Development and Capability Association, Humanistic economics, List of countries by percentage of population living in poverty, List of countries by share of income of the richest one percent, Right to an adequate standard of living, Subjective life satisfaction

Notes and references

Since 2013.
A. Stanton, Elizabeth (February 2007). "The Human Development Index: A History". PERI Working Papers: 14–15. Archived from the original on 28 February 2019. Retrieved 28 February 2019.
"Human Development Index". Economic Times. Archived from the original on 1 December 2017. Retrieved 29 November 2017.
"The Human Development concept". UNDP. 2010. Archived from the original on 15 April 2012. Retrieved 29 July 2011.
Human Development Index, "Composite indices — HDI and beyond". Retrieved 16 January 2021.
"What is Human Development". UNDP. 2017. Archived from the original on 27 October 2017. Retrieved 27 October 2017. "... human development approach, developed by the economist Mahbub Ul Haq ..."
The Courier. Commission of the European Communities. 1994.
"Human Development Report 2010". UNDP. 4 November 2010. Archived from the original on 22 December 2015. Retrieved 15 December 2015.
"Technical notes" (PDF). UNDP. 2013. Archived (PDF) from the original on 16 June 2015. Retrieved 15 December 2015.
"New method of calculation of Human Development Index (HDI)". India Study Channel. 1 June 2011. Archived from the original on 10 November 2017. Retrieved 19 November 2017.
Mean years of schooling (of adults) (years) is a calculation of the average number of years of education received by people ages 25 and older in their lifetime, based on education attainment levels of the population converted into years of schooling based on the theoretical duration of each level of education attended. Source: Barro, R. J.; Lee, J.-W. (2010). "A New Data Set of Educational Attainment in the World, 1950–2010". NBER Working Paper No. 15902. doi:10.3386/w15902. Archived from the original on 7 August 2011. Retrieved 29 July 2011.
EYSI is a calculation of the number of years a child is expected to attend school, or university, including the years spent on repetition. It is the sum of the age-specific enrollment ratios for primary, secondary, post-secondary non-tertiary and tertiary education and is calculated assuming the prevailing patterns of age-specific enrollment rates were to stay the same throughout the child's life. Expected years of schooling is capped at 18 years. (Source: UNESCO Institute for Statistics (2010). Correspondence on education indicators. March. Montreal.)
"Definition, Calculator, etc. at UNDP site". Archived from the original on 20 December 2007. Retrieved 26 May 2020.
Human Development Report 2020 The Next Frontier: Human Development and the Anthropocene (PDF). United Nations Development Programme. 15 December 2020. pp. 343–350. ISBN 978-92-1-126442-5. Retrieved 17 December 2020.
Human Development Report 2020 The Next Frontier: Human Development and the Anthropocene (PDF). United Nations Development Programme. 15 December 2020. pp. 343–346. ISBN 978-92-1-126442-5. Retrieved 15 December 2020.
Human Development Reports. Composite indices — HDI and beyond. United Nations Development Programme. Retrieved 22 December 2020.
Human Development Report 2020 The Next Frontier: Human Development and the Anthropocene (PDF). United Nations Development Programme. 15 December 2020. pp. 351–355. ISBN 978-92-1-126442-5. Retrieved 17 December 2020.
"Inequality-adjusted HDI (IHDI)". hdr.undp.org. UNDP. Retrieved 15 December 2020.
"Human Development Report 2020 – "Human Development Indices and Indicators"". HDRO (Human Development Report Office).
Hastings, David A. (2009). "Filling Gaps in the Human Development Index". United Nations Economic and Social Commission for Asia and the Pacific, Working Paper WP/09/02. Archived from the original on 30 April 2011. Retrieved 1 December 2009.
Hastings, David A. (2011). "A "Classic" Human Development Index with 232 Countries". HumanSecurityIndex.org. Archived from the original on 3 May 2011. Retrieved 9 March 2011. Information Note linked to data.
Wolff, Hendrik; Chong, Howard; Auffhammer, Maximilian (2011). "Classification, Detection and Consequences of Data Error: Evidence from the Human Development Index". Economic Journal. 121 (553): 843–870. doi:10.1111/j.1468-0297.2010.02408.x. hdl:1813/71597. S2CID 18069132.
"UNDP Human Development Report Office's comments". The Economist. January 2011. Archived from the original on 11 February 2011. Retrieved 12 January 2011.
"The Economist (pages 60–61 in the issue of Jan 8, 2011)". 6 January 2011. Archived from the original on 13 January 2011. Retrieved 12 January 2011.
Monni, Salvatore; Spaventa, Alessandro (2013). "Beyond Gdp and HDI: Shifting the focus from Paradigms to Politics". Development. 56 (2): 227–231.
doi:10.1057/dev.2013.30. S2CID 84722678.
Symmetric Matrix and Its Eigenvalues and Eigenspaces

Let $A$ be a $4\times 4$ real symmetric matrix. Suppose that $\mathbf{v}_1=\begin{bmatrix} -1 \\ 2 \\ \end{bmatrix}$ is an eigenvector corresponding to the eigenvalue $1$ of $A$. Suppose that the eigenspace for the eigenvalue $2$ is $3$-dimensional.
(a) Find an orthonormal basis for the eigenspace of the eigenvalue $2$ of $A$.
(b) Find $A\mathbf{v}$, where \[ \mathbf{v}=\begin{bmatrix} \end{bmatrix}.\]
(The University of Tokyo Linear Algebra Exam)

Calculate $A^{10}$ for a Given Matrix $A$

Find $A^{10}$, where $A=\begin{bmatrix} 4 & 3 & 0 & 0 \\ 3 &-4 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}$.
(Harvard University Exam)

Find a Basis of the Subspace of All Vectors that are Perpendicular to the Columns of the Matrix

Find a basis for the subspace $W$ of all vectors in $\R^4$ which are perpendicular to the columns of the matrix \[A=\begin{bmatrix} 11 & 12 & 13 & 14 \\ 21 & 22 & 23 & 24 \\ 41 & 42 & 43 & 44 \end{bmatrix}\]

Given the Characteristic Polynomial of a Diagonalizable Matrix, Find the Size of the Matrix, Dimension of Eigenspace

Suppose that $A$ is a diagonalizable matrix with characteristic polynomial \[f_A(\lambda)=\lambda^2(\lambda-3)(\lambda+2)^3(\lambda-4)^3.\]
(a) Find the size of the matrix $A$.
(b) Find the dimension of $E_4$, the eigenspace corresponding to the eigenvalue $\lambda=4$.
(c) Find the dimension of the kernel (null space) of $A$.
(Stanford University Linear Algebra Exam)

If the Kernel of a Matrix $A$ is Trivial, then $A^T A$ is Invertible

Let $A$ be an $m \times n$ real matrix. Then the kernel of $A$ is defined as $\ker(A)=\{ x\in \R^n \mid Ax=0 \}$. The kernel is also called the null space of $A$. Suppose that $A$ is an $m \times n$ real matrix such that $\ker(A)=0$. Prove that $A^{\trans}A$ is invertible.

Diagonalizable Matrix with Eigenvalues 1 and -1

Suppose that $A$ is a diagonalizable $n\times n$ matrix and has only $1$ and $-1$ as eigenvalues.
Show that $A^2=I_n$, where $I_n$ is the $n\times n$ identity matrix. See below for a generalized problem.

Find a Formula for a Linear Transformation

If $L:\R^2 \to \R^3$ is a linear transformation such that \[L\left( \begin{bmatrix} \end{bmatrix}\right) =\begin{bmatrix} \end{bmatrix}, \,\,\,\, \end{bmatrix},\]
(a) find $L\left( \begin{bmatrix} \end{bmatrix}\right)$, and
(b) find the formula for $L\left( \begin{bmatrix} x \\ \end{bmatrix}\right)$.
If you think you can solve (b), then skip (a) and solve (b) first and use the result of (b) to answer (a).
(Part (a) is an exam problem of Purdue University)

Find the Rank of the Matrix $A+I$ if Eigenvalues of $A$ are $1, 2, 3, 4, 5$

Let $A$ be an $n$ by $n$ matrix with entries in the complex numbers $\C$. Its only eigenvalues are $1,2,3,4,5$, possibly with multiplicities. What is the rank of the matrix $A+I_n$, where $I_n$ is the identity $n$ by $n$ matrix?
(UCB-University of California, Berkeley, Exam)

Stochastic Matrix (Markov Matrix) and its Eigenvalues and Eigenvectors

(a) Let \[A=\begin{bmatrix} a_{11} & a_{12}\\ a_{21}& a_{22} \end{bmatrix}\] be a matrix such that $a_{11}+a_{12}=1$ and $a_{21}+a_{22}=1$. Namely, the sum of the entries in each row is $1$. (Such a matrix is called a (right) stochastic matrix (also termed probability matrix, transition matrix, substitution matrix, or Markov matrix).) Then prove that the matrix $A$ has an eigenvalue $1$.
(b) Find all the eigenvalues of the matrix \[B=\begin{bmatrix} 0.3 & 0.7\\ 0.6 & 0.4 \end{bmatrix}.\]
(c) For each eigenvalue of $B$, find the corresponding eigenvectors.

The Subspace of Matrices that are Diagonalized by a Fixed Matrix

Suppose that $S$ is a fixed invertible $3$ by $3$ matrix. This question is about all the matrices $A$ that are diagonalized by $S$, so that $S^{-1}AS$ is diagonal. Show that these matrices $A$ form a subspace of $3$ by $3$ matrix space.
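The claims in the stochastic-matrix problem above (parts (b) and (c)) can be checked numerically before writing a proof. This is only a sanity-check sketch (NumPy is assumed; it is not part of the original post): since each row of $B$ sums to $1$, the all-ones vector is an eigenvector with eigenvalue $1$, and the second eigenvalue follows from the trace.

```python
import numpy as np

# The matrix from part (b); each row sums to 1, so (1, 1)^T is an
# eigenvector with eigenvalue 1 (the statement proved in part (a)).
B = np.array([[0.3, 0.7],
              [0.6, 0.4]])

vals = np.sort(np.linalg.eigvals(B).real)
print(vals)  # approximately [-0.3, 1.0]

# Cross-check via trace and determinant of a 2x2 matrix:
# lambda_1 + lambda_2 = tr(B) = 0.7, lambda_1 * lambda_2 = det(B) = -0.3.
print(np.isclose(vals.sum(), np.trace(B)), np.isclose(vals.prod(), np.linalg.det(B)))
```

The same trace/determinant check is exactly how one would find the eigenvalues by hand: the characteristic polynomial is $\lambda^2 - 0.7\lambda - 0.3 = (\lambda - 1)(\lambda + 0.3)$.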
(MIT-Massachusetts Institute of Technology Exam)

Find all Values of $x$ such that the Given Matrix is Invertible

Let \[ A=\begin{bmatrix} 2 & 0 & 10 \\ 0 & 7+x & -3 \\ 0 & 4 & x \end{bmatrix}.\] Find all values of $x$ such that $A$ is invertible.

The Center of the Symmetric Group is Trivial if $n>2$

Show that the center $Z(S_n)$ of the symmetric group with $n \geq 3$ is trivial.

Group of Order $pq$ is Either Abelian or the Center is Trivial

Let $G$ be a group of order $|G|=pq$, where $p$ and $q$ are (not necessarily distinct) prime numbers. Then show that $G$ is either an abelian group or the center $Z(G)=1$.

Equivalent Conditions to be a Unitary Matrix

A complex matrix is called unitary if $\overline{A}^{\trans} A=I$. The inner product $(\mathbf{x}, \mathbf{y})$ of complex vectors $\mathbf{x}$, $\mathbf{y}$ is defined by $(\mathbf{x}, \mathbf{y}):=\overline{\mathbf{x}}^{\trans} \mathbf{y}$. The length of a complex vector $\mathbf{x}$ is defined to be $||\mathbf{x}||:=\sqrt{(\mathbf{x}, \mathbf{x})}$. Let $A$ be an $n \times n$ complex matrix. Prove that the following are equivalent.
(a) The matrix $A$ is unitary.
(b) $||A \mathbf{x}||=||\mathbf{x}||$ for any $n$-dimensional complex vector $\mathbf{x}$.
(c) $(A\mathbf{x}, A\mathbf{y})=(\mathbf{x}, \mathbf{y})$ for any $n$-dimensional complex vectors $\mathbf{x}, \mathbf{y}$.

Finite Order Matrix and its Trace

Let $A$ be an $n\times n$ matrix and suppose that $A^r=I_n$ for some positive integer $r$. Then show that
(a) $|\tr(A)|\leq n$.
(b) If $|\tr(A)|=n$, then $A=\zeta I_n$ for an $r$-th root of unity $\zeta$.
(c) $\tr(A)=n$ if and only if $A=I_n$.

Solve a System of Linear Equations by Gauss-Jordan Elimination

Solve the following system of linear equations using Gauss-Jordan elimination.
\begin{align*}
6x+8y+6z+3w &=-3 \\
6x-8y+6z-3w &=3\\
8y-6w &=6
\end{align*}

A Matrix is Invertible If and Only If It is Nonsingular

In this problem, we will show that the concept of non-singularity of a matrix is equivalent to the concept of invertibility.
That is, we will prove that: a matrix $A$ is nonsingular if and only if $A$ is invertible.
(a) Show that if $A$ is invertible, then $A$ is nonsingular.
(b) Let $A, B, C$ be $n\times n$ matrices such that $AB=C$. Prove that if either $A$ or $B$ is singular, then so is $C$.
(c) Show that if $A$ is nonsingular, then $A$ is invertible.

Properties of Nonsingular and Singular Matrices

An $n \times n$ matrix $A$ is called nonsingular if the only solution of the equation $A \mathbf{x}=\mathbf{0}$ is the zero vector $\mathbf{x}=\mathbf{0}$. Otherwise $A$ is called singular.
(a) Show that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Show that if $A$ is nonsingular, then the column vectors of $A$ are linearly independent.
(c) Show that an $n \times n$ matrix $A$ is nonsingular if and only if the equation $A\mathbf{x}=\mathbf{b}$ has a unique solution for any vector $\mathbf{b}\in \R^n$.
Do not use the fact that a matrix is nonsingular if and only if the matrix is invertible.

Solving a System of Linear Equations Using Gaussian Elimination

Solve the following system of linear equations using Gaussian elimination.
\begin{align*}
x+2y+3z &=4 \\
5x+6y+7z &=8\\
9x+10y+11z &=12
\end{align*}

How to Find Eigenvalues of a Specific Matrix

Find all eigenvalues of the following $n \times n$ matrix.
\[
A=\begin{bmatrix}
0 & 0 & \cdots & 0 &1 \\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \cdots & 0 &0\\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0&\cdots & 1& 0 \\
\end{bmatrix}
\]
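For the eigenvalue problem above ("How to Find Eigenvalues of a Specific Matrix"), the matrix is the $n \times n$ cyclic permutation matrix. A quick numerical experiment (NumPy assumed; not part of the original post) is consistent with the answer one derives from the characteristic polynomial $\lambda^n = 1$: the eigenvalues are the $n$-th roots of unity.

```python
import numpy as np

n = 5
# Cyclic permutation matrix: first row (0, ..., 0, 1), ones on the subdiagonal.
A = np.roll(np.eye(n), 1, axis=0)

eigs = np.linalg.eigvals(A)
roots = np.exp(2j * np.pi * np.arange(n) / n)  # n-th roots of unity

# Every n-th root of unity is (numerically) among the computed eigenvalues;
# since both sets have n distinct elements, the sets coincide.
print(all(np.abs(eigs - r).min() < 1e-8 for r in roots))  # True
```

Trying a few values of $n$ in this sketch suggests the general pattern before proving it, e.g. by expanding $\det(A - \lambda I)$ along the first row.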
115.030.10.06 Economics - Reading 12 - 6. Marginal Revenue, Marginal Cost, and Profit Maximization

d. describe the phenomenon of diminishing marginal returns;

What is Total Revenue and what is the formula?
- Total revenue (TR) is the sum of individual units sold multiplied by their respective prices
- $$TR = \sum P_i \times Q_i$$

What is the formula for Average Revenue (AR)?
$$AR = \frac{\text{total revenue}}{\text{quantity}} = \frac{TR}{Q}$$

What is marginal revenue and what is the formula?
- Marginal revenue (MR) is the change in revenue from selling one extra unit of output
- $$MR = \frac{\Delta TR}{\Delta Q}$$

Why and how does reducing price (and thus increasing sales) impact a firm's marginal revenue?
The lower price will generate more sales, but since every unit now sells at the lower price, the marginal revenue (the change in revenue from selling one extra unit of output) will always be less than the price of the good.

What is Total Fixed Cost?
The sum of the costs that do not vary with output. They will be incurred as long as a firm continues in business and the assets have alternative uses. Examples of fixed costs include rent, property taxes and insurance premiums.

What is Average Fixed Cost?
Total fixed cost divided by the number of units produced. It always declines as output increases.

What is Total Variable Cost?
The sum of those costs that rise as output increases. Total variable costs are zero if output is zero. Examples are wages paid to workers and payments for raw materials.

What is Average Variable Cost?
The total variable cost divided by the number of units produced.

What is Average Total Cost?
- Total cost divided by the number of units produced. It is sometimes called per-unit cost.
- ATC is high at low levels of output, decreases as output increases (since fixed costs are spread across more units), and then increases as the firm's maximum capacity is approached (since marginal costs increase).

What is Marginal Cost?
The change in total cost required to produce an additional unit of output.
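The $\Delta$-based definitions above ($MR = \Delta TR/\Delta Q$, $MC = \Delta TC/\Delta Q$) can be made concrete with a tiny numeric sketch. The schedule below is hypothetical, not taken from the reading; it assumes a firm facing a downward-sloping demand curve, so the price must fall to sell each extra unit.

```python
# Hypothetical quantity/price/cost schedule (illustrative numbers only).
quantities = [0, 1, 2, 3, 4]
prices     = [10, 9, 8, 7, 6]   # price must fall to sell each extra unit
total_cost = [5, 9, 12, 16, 22] # includes a fixed cost of 5 incurred even at Q = 0

total_revenue = [p * q for p, q in zip(prices, quantities)]  # TR = P x Q

# Marginal values are first differences between consecutive output levels.
marginal_revenue = [b - a for a, b in zip(total_revenue, total_revenue[1:])]  # MR
marginal_cost    = [b - a for a, b in zip(total_cost, total_cost[1:])]        # MC

print(total_revenue)     # [0, 9, 16, 21, 24]
print(marginal_revenue)  # [9, 7, 5, 3] -- below price for every unit after the first
print(marginal_cost)     # [4, 3, 4, 6]
```

Comparing the last two lists unit by unit also illustrates profit maximization: producing is worthwhile while MR exceeds MC (units 1-3 here), and stops paying off once MC rises above MR (unit 4).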
Large-scale changes in cortical dynamics triggered by repetitive somatosensory electrical stimulation

April K. Hishinuma, Tanuj Gulati, Mark J. Burish & Karunesh Ganguly (ORCID: orcid.org/0000-0002-2570-9943)

Background: Repetitive somatosensory electrical stimulation (SES) of forelimb peripheral nerves is a promising therapy; studies have shown that SES can improve motor function in stroke subjects with chronic deficits. However, little is known about how SES can directly modulate neural dynamics. Past studies using SES have primarily used noninvasive methods in human subjects. Here we used electrophysiological recordings from the rodent primary motor cortex (M1) to assess how SES affects neural dynamics at the level of single neurons as well as at the level of mesoscale dynamics.

Methods: We performed acute extracellular recordings in 7 intact adult Long Evans rats under ketamine-xylazine anesthesia while they received transcutaneous SES. We recorded single-unit spiking and local field potentials (LFP) in the M1 contralateral to the stimulated arm. We then compared neural firing rate, spike-field coherence (SFC), and power spectral density (PSD) before and after stimulation.

Results: Following SES, the firing rate of a majority of neurons changed significantly from their respective baseline values. There was, however, a diversity of responses; some neurons increased while others decreased their firing rates. Interestingly, SFC, a measure of how a neuron's firing is coupled to mesoscale oscillatory dynamics, increased specifically in the δ-band, also known as the low-frequency band (0.3–4 Hz). This increase appeared to be driven by a change in the phase-locking of broad-spiking, putative pyramidal neurons. These changes in the low-frequency range occurred without a significant change in the overall PSD.

Conclusions: Repetitive SES significantly and persistently altered the local cortical dynamics of M1 neurons, changing both firing rates as well as the SFC magnitude in the δ-band.
Thus, SES altered the neural firing and its coupling to ongoing mesoscale dynamics. Our study provides evidence that SES can directly modulate cortical dynamics.

Somatosensory input is essential for skilled movements [1,2,3]; this is particularly true for dexterous movements [1, 4,5,6]. Interestingly, the somatosensory system has been shown to experience relatively rapid bidirectional changes in organization as a result of repetitive manipulations of peripheral inputs. Consistent with this notion are seminal studies in both animals and humans which demonstrated that reductions in sensory feedback, either by denervation or ischemic nerve block, induced changes in motor representations [7, 8]. Studies have also shown that increases in afferent input by stimulating peripheral pathways (i.e. repetitive somatosensory electrical stimulation, or SES) can alter sensorimotor representations of the stimulated body part [9, 10]. One of the first studies examining this neuromodulation method found that sensory stimulation of oral structures resulted in prolonged changes in excitability as well as an increase in the area of representation determined using functional imaging [11]. Consistent with these results are studies demonstrating that altered patterns of physical contact with the fingers can also persistently reorganize sensory maps [12, 13]. Importantly, repetitive SES has also proven to be a promising therapeutic tool for motor rehabilitation [10, 14,15,16]. In both humans and rodents, SES can increase excitability as measured by responses to transcranial magnetic stimulation (TMS) pulses [9, 17]. Past studies have used non-invasive measures to examine cortical excitability, such as motor evoked potentials (MEPs) with TMS [9, 17] and cortical reorganization using blood oxygenation signals [11]. The precise mechanisms underlying these changes remain unclear.
For example, the observed change in the evoked MEPs following SES may occur without changes in brainstem electrical stimulation-evoked potentials or spinal reflexes [9, 18, 19]. This suggests the possibility that the cortex may be an important site of plasticity. While our recent study showed that SES can also modify low-frequency dynamics as measured using electroencephalogram (EEG) [20], it remains unclear if these changes are local to cortex. Invasive electrophysiology offers one method to assess if SES can directly alter local motor cortical dynamics. While the body of literature summarized above has provided important mechanistic insight, little is known about how SES interacts with ongoing cortical dynamics at the level of single neurons and groups of neurons, or neural ensembles. Single neurons are a fundamental unit of the nervous system. The coordinated firing of neural ensembles, e.g. co-firing of neurons in a temporally coupled manner, is now also recognized as an important module for information processing [21,22,23,24,25,26]. In addition, oscillations may provide a mechanism for dynamic coordination of ensembles across motor and sensory areas [21,22,23,24,25, 27]. Oscillations likely reflect synchronized rhythmic excitability linked to coordinated firing of neurons [28]. Our collective understanding of both single neuron and ensemble firing patterns has greatly improved our understanding of how neural activity patterns underlie complex sensory and motor behaviors. Similarly, it is likely that such activity may play an important role in driving neural plasticity after injury and during neuromodulation using methods such as SES. The goal of this study was to develop a model of the cortical effects of SES using high-resolution, invasive recording of neurons. We were particularly interested in understanding the diversity of single neuron responses to SES. It is unlikely that all neurons respond identically to a given perturbation. 
This may be, in part, the result of the multiple cell types in a given region and the diversity of network connectivity for single neurons [29]. We also wanted to compare changes in neural activity related to larger-scale network oscillatory activity. More specifically, we examined the effects of SES on primary motor cortex (M1) at the level of single-neuron firing rates as well as the neural coupling to ongoing spontaneous oscillations. We found that SES could independently change both the firing rate and the phase locking, i.e. the consistency of the neural firing relative to oscillatory dynamics. Together, our results provide evidence that SES can directly modulate neural dynamics in M1.

Animal and surgery preparation

All animal procedures were in accordance with protocols approved by the Institutional Animal Care and Use Committee at the San Francisco Veterans Affairs Medical Center. Adult male Long Evans rats (n = 8, 250–400 g, ~ 8 weeks old, Charles River Laboratories) were housed in a 12 h light:12 h dark cycle with lights out at 6:00 AM and were kept under controlled temperature. One animal was excluded from the study due to significant recording drift and electrical noise, thus n = 7 animals were used for the analyses shown. Animals were initially anesthetized using a ketamine/xylazine cocktail (85 mg/kg ketamine and 10 mg/kg xylazine), with supplemental ketamine (at half the induction dose) given every 40–60 min as needed to maintain a stable anesthetic level and to maintain stage III anesthesia, characterized by predominantly slow oscillations. In addition, 0.05 mg/kg of atropine was given separately to counter respiratory and cardiac depression and to decrease secretions. Animals were sacrificed at the end of the recordings.
Somatosensory electrical stimulation and electrophysiology

After anesthesia induction, transcutaneous stimulation electrodes were clipped near the forelimb peripheral nerves (median, ulnar, and radial nerves), in the configuration noted in Fig. 1a. These copper clips were wrapped around the forelimb and then connected to a Multi-Channel Systems stimulus generator (MCS STG4000 series) to deliver transcutaneous stimulation. SES current parameters were set by determining the maximum current at which no evoked movement of the forelimb was seen (typically 300–750 μA).

Fig. 1 Schematic of the experiment. a, Somatosensory electrical stimulation was applied directly to the distal forelimb while neural activity was recorded under anesthesia. b, Schematic of the stimulation paradigm. c, Averaged evoked potential in the local field potential during SES.

Following a craniotomy and durectomy, either 64-channel custom probes in a tetrode configuration (n = 5, 1 X 4/8, Neuronexus, MI) or 32-channel tungsten microwire arrays (n = 2, MEAs, Tucker-Davis Technologies or TDT, FL) were implanted using precise stereotactic measurements into layer 5 of motor cortex (1200–1500 μm deep; + 1.5 to + 2.0 anterior to bregma and + 2 to + 3.5 lateral from midline) to record extracellular neural activity. In general, tetrodes allow better isolation of single neurons; however, as our microwire recordings demonstrated identical findings, we have grouped the results together. Spike data were sampled at 24,414 Hz and LFP data at 1018 Hz. ZIF-clip-based analog headstages with unity gain and high impedance (~ 1 MΩ) were used. Unsorted multi-unit, single-unit, and LFP data were recorded for 30 min to 1 h to ensure stability of the recordings and to minimize drift during stimulation experiments. A baseline period of neural activity (~ 30–60 min) was then recorded, followed by recording of neural activity during SES.
The stimulation paradigm was 5 single pulses (square pulses, 1 ms width) at 10 Hz over 500 ms, i.e. a 1% duty cycle, immediately followed by 500 ms of no stimulation. This pattern of 10 Hz stimulation and rest was repeated at 1 Hz (30 min for n = 4 or 60 min for n = 3 animals; current magnitude 564.29 ± 57.46 μA; Fig. 1b). After SES ended, neural activity was recorded for ~ 30–60 min to assess the post-stimulation effects.

LFP and single-unit analyses

Analyses were conducted using a combination of custom-written routines in Matlab 2015a/2017b (MathWorks, Natick, MA) along with functions and routines from the Chronux toolbox (http://chronux.org/). Pre-processing of the LFP involved: removing periods of artifacts (broken channels and noisy segments of LFP identified by offline visual inspection); taking the median signal (at every time point the median across electrodes); and z-scoring this signal (i.e. removing the mean value μ of the signal X and dividing by the standard deviation σ: z-scored LFP = (X − μ)/σ). Median referencing was used to remove volume-conducted signals and thereby focus on signals local to M1. Single units were sorted using Plexon Offline Sorter (Plexon, Dallas, TX).

Single units and LFPs were used to calculate spike-field coherence (SFC) using Chronux functions. SFC measures phase synchronization between the LFP and spike times as a function of frequency; its magnitude is a function of frequency and takes values between 0 and 1 [22]. For its calculation, the pre- and post-stimulation time segments were first time-matched to the shortest recording period, then segmented into 10 s segments, and the coherency was averaged across segments. The average time series used for analysis was 46.8157 ± 6.5765 min. For the multitaper analysis, we used a time-bandwidth (TW) product of 10 with 19 tapers.
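As a rough illustration of the preprocessing and coherence steps above, here is a minimal Python sketch. It uses NumPy/SciPy rather than the Matlab/Chronux pipeline of the paper; the array shapes and the use of Welch-based `scipy.signal.coherence` in place of the multitaper estimator are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import coherence

FS = 1018  # LFP sampling rate in Hz, per the Methods

def preprocess_lfp(lfp):
    """lfp: (channels, samples) array after artifact removal.
    Median-reference across electrodes, then z-score."""
    med = np.median(lfp, axis=0)           # median signal across electrodes
    return (med - med.mean()) / med.std()  # z-scored LFP = (X - mu) / sigma

def band_coherence(x, y, fs=FS, band=(0.3, 4.0), seg_s=10):
    """Mean magnitude coherence of two signals within a frequency band,
    estimated over 10 s segments (Welch, standing in for multitaper)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=int(seg_s * fs))
    mag = np.sqrt(cxy)  # scipy returns magnitude-squared coherence
    mask = (f >= band[0]) & (f <= band[1])
    return mag[mask].mean()
```

A signal's coherence with itself is 1 at every frequency, which gives a quick sanity check on the band averaging.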
To compare coherences across groups, a z-score was calculated using the programs available in the Chronux toolbox. Coherence between activity in two regions was defined as
$$C_{xy}=\frac{|R_{xy}|}{\sqrt{|R_{xx}|}\,\sqrt{|R_{yy}|}}$$
where Rxx and Ryy are the power spectra and Rxy is the cross-spectrum. Spectral analysis was calculated in segmented time periods pre- and post-stimulation and averaged across these epochs. Mean coherence was calculated across the δ-band (0.3–4 Hz, i.e. all values in the range were averaged together), θ-band (6–10 Hz), α-band (8–15 Hz), β-band (18–25 Hz), and γ-band (30–60 Hz). For the frequency-band analysis, statistical analysis was performed on the average coherence estimates of each band's respective pre-SFC and post-SFC values (see below). We also equalized the number of spikes in the pre- and post-stimulation periods to account for changes in firing rates [30]. The power spectrum of the LFP channels used in the coherence calculation, as well as the overall LFP power change pre- and post-stimulation, was also determined using the multitaper method.

For spiking analyses, sorted spikes were binned at 50 ms. A significant change in firing was identified by calculating the mean post-stimulation firing rate and checking whether it fell outside the 95% interval of the pre-stimulation firing rate distribution. Some analyses were further restricted to high signal-to-noise ratio (SNR) units. To identify units with stable waveforms and high amplitudes, we measured SNR as
$$SNR=\frac{A}{2\,SD_{noise}}$$
where A is the peak-to-peak voltage of the averaged spike waveform and SDnoise is the standard deviation of the "noise", i.e. the baseline fluctuations in the voltage during the first 245 microseconds of the saved waveform snippet [31].

Spike width analysis

We grouped neurons based on the width of the recorded spikes.
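Such a width-based grouping can be sketched with a simple one-dimensional k-means (k = 2), standing in for the clustering used in the paper. This is a minimal illustration with made-up example widths (in μs), not the authors' implementation.

```python
import numpy as np

def classify_by_width(widths_us, n_iter=50):
    """Split spike widths (in microseconds) into two clusters with 1-D
    k-means: narrow-spiking (label 0) and broad-spiking (label 1)."""
    w = np.asarray(widths_us, dtype=float)
    centers = np.array([w.min(), w.max()])  # initialize at the extremes
    for _ in range(n_iter):
        # assign each width to the nearest cluster center
        labels = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        # recompute each center as the mean of its assigned widths
        centers = np.array([w[labels == k].mean() for k in (0, 1)])
    return labels, centers
```

For example, widths of 200–300 μs fall into the narrow cluster and widths of 650–800 μs into the broad cluster, with the two centers straddling the cutoff region between the groups.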
Spike width was calculated as the distance between the peak of the waveform and its valley. Past studies have demonstrated that spike width can distinguish putative fast-spiking interneurons from pyramidal neurons [27, 31]. To specify a cutoff, we applied k-means to the entire neuronal population. In general, our results were concordant with this previous literature. We thus used values of 100–400 μs for narrow-width, putative interneurons and 500–1000 μs for broad-width, putative pyramidal neurons.

Parametric statistics were used in this study, and each test was implemented in MATLAB. We used t-tests for comparison of power between pre- and post-SES sessions, as well as for the comparison of SFC pre- and post-SES averaged across each common frequency band used in previous literature (δ-, θ-, α-, β-, and γ-bands) [31]; we used a Bonferroni correction for multiple comparisons. We used Pearson's correlation and linear regression to evaluate trends between changes in firing rate and SFC after SES. A linear mixed-effects model (implemented using MATLAB fitlme) was used to compare the differences in SFC and firing rate for all units in Fig. 3f/g and for the broad- and narrow-width neurons in Fig. 4b. This model accounts for the fact that units, channels, or trials from the same animal are more correlated than those from different animals, and is more stringent than computing statistical significance over all units, channels, and trials.

Long Evans rats (n = 7) were implanted with either microwire (n = 2) or tetrode (n = 5) arrays in M1 (Fig. 1a). Stimulation was then applied to the distal forearm peripheral nerves (30 min for n = 4 animals, 60 min for n = 3 animals; current magnitude 564.29 ± 57.46 μA). The motor evoked response was clearly visible in the LFP and showed a large deflection during the train of pulses at 10 Hz that lasted 500 ms, i.e. a 1% duty cycle (Fig. 1c).
As expected, there was a decrement in the response within each train [32].

Firing rate changes

We first examined whether SES altered the firing rate of neurons in M1 (Fig. 2) by comparing changes in firing rate relative to a pre-stimulation baseline period. The overall population was widely distributed, and the mean change (1.791 Hz) and median change (−0.2338 Hz) were close to the baseline value of 0. Examples of both a significant increase (mean pre = 2.603 Hz, mean post = 5.472 Hz, p < 0.05) and a significant decrease (mean pre = 14.198 Hz, mean post = 7.603 Hz, p < 0.05) in firing rate are shown. In general, all animals exhibited a firing rate change in the majority of the recorded neurons after SES (i.e. greater than 50% with a net change in firing rate at 30 min post-stimulation). In example animal T54, 56% of units decreased their firing rate, while 18% increased it (Fig. 2b). At a population level (n = 214 neurons), we found that 36% of neurons exhibited an increase in firing (mean pre = 5.93 Hz, mean post = 14.93 Hz), 36% a reduction (mean pre = 8.63 Hz, mean post = 4.64 Hz), and 28% no change (mean pre = 6.77 Hz, mean post = 6.52 Hz) (Fig. 2c). Regardless of the length of the period recorded and analyzed (30–60 min), we saw a significant change relative to baseline across all animals in neurons that either significantly increased (p < 10−4) or decreased (p < 10−19) their firing rates. Together, these results indicate that SES can have persistent but diverse effects on single-neuron firing rates within M1.

Fig. 2 Changes in firing rate after SES. a, Violin plot of the firing rate changes for all neurons. The red cross represents the mean (1.7918); the green triangle, the median (−0.2338). b, Examples of a significant decrease (p < 0.05; top) and increase (p < 0.05; bottom) in firing rate after SES. Also shown are tetrode waveforms and the interspike intervals.
The dotted lines represent the mean during the pre-stimulation period. c, Percentage of neurons that significantly increased, decreased, or had no change, for one animal (top) and for all animals (n = 7; bottom).

Spike-field coherence changes

We also investigated whether SES persistently modulated the synchronization between LFP and spike times as a function of frequency, i.e. spike-field coherence or SFC (Fig. 3) [25, 33]. We recorded both single-unit spiking and LFP from the population of M1 units (Fig. 3a). SFC is a measure of how consistently a given unit fires relative to the phase of the median LFP (Fig. 3b). The only frequency band that showed a significant change after SES was the δ-band (Fig. 3c; mean change for the 0.3–4 Hz δ-band pre- vs post-stimulation, t-test with Bonferroni correction, p < 10−9). The θ-band (6–10 Hz), α-band (8–15 Hz), β-band (18–25 Hz), and γ-band (30–60 Hz) did not show any significant changes (p > 0.05).

Fig. 3 Changes in spike-field coherence (SFC) after SES. a, Schematic depicting neural spikes relative to LFP recordings from M1. b, Schematic of the relation of spiking to LFP for variations in the SFC. c, Comparison of the averaged SFC across each frequency band (see Methods) for all units before and after SES (*p < 0.001). Error bars represent the standard error of the mean (SEM). d, Percentage of neurons that significantly increased, decreased, or had no change, for all animals (n = 7). e, Violin plot of the SFC fold change relative to baseline for all neurons. A value of 1 represents a doubling of the SFC. f, Example single-neuron and all-neuron SFC plots for one animal. The grey box highlights the 0.3–4 Hz band. Error bars are SEM. g, Mean SFC plot for all animals including all neurons (n = 214, *p < 0.001); follows the convention from f.

At the single-neuron level, 64% of the units increased, 26.4% decreased, and 9.6% had no change in the δ-band SFC (Fig. 3d).
At a population level, the majority of neurons demonstrated an increase in the δ-band SFC relative to the baseline period (Fig. 3e). Figure 3f shows a representative change in the SFC in the low-frequency δ-band (0.3–4 Hz) for a single neuron; this was also evident on average for all neurons recorded in that animal. When examining all units (n = 214) from all seven animals, we again found evidence for a significant SFC increase in the lower frequency band (mixed-effects model, which takes into account that multiple neurons were recorded from the same animal; Fig. 3g, p < 10−5) [34]. This indicates that after SES, neural firing was significantly more likely to be phase-locked to low-frequency oscillatory dynamics.

Narrow and broad spiking neurons

We further investigated the differences in firing rate and SFC by classifying neurons into two distinct groups: narrow-spiking, putative interneurons (100–400 μs) and broad-spiking, putative pyramidal neurons (500–1000 μs) [27, 31]. Figure 4a shows an example animal's distribution of neuron spike widths; the color labels are based on a k-means classification. Interestingly, broad-spiking neurons demonstrated a robust increase in the SFC after SES (mixed linear model, p < 10−6); there was no change in firing based on this classification. In contrast, narrow-spiking neurons did not show significant changes in either firing rate or SFC after SES. This implies that putative pyramidal neurons might be a main driver of the increase in δ-band SFC after SES.

Fig. 4 Comparison of broad- and narrow-width spiking units. a, Example animal's distribution of neurons classified by spike widths (n = 46). The color coding is based on k-means clustering. b, Differences in the spiking activity and SFC for narrow-width (left, blue column) and broad-width (right, red column) neurons (*p < 0.001).

We also examined whether global changes to the LFP were evident.
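One way to make such a comparison is to estimate band power over matched pre- and post-stimulation epochs. The following is a minimal Python sketch using Welch's method as a stand-in for the multitaper estimate described in the Methods; the function names and array shapes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 1018  # LFP sampling rate in Hz, per the Methods

def band_power(x, fs=FS, band=(0.3, 4.0), seg_s=10):
    """Mean Welch power of signal x within a frequency band,
    estimated over 10 s segments."""
    f, pxx = welch(x, fs=fs, nperseg=int(seg_s * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def power_change(pre, post, band=(0.3, 4.0)):
    """Fold change in band power, post- vs pre-stimulation."""
    return band_power(post, band=band) / band_power(pre, band=band)
```

Because power scales with the square of amplitude, doubling a signal's amplitude yields a fourfold power change, which is a simple sanity check on the estimator.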
The LFP is widely believed to represent an aggregate mesoscale measurement of activity [21]. There was no significant change in the LFP power (Fig. 5).

Fig. 5 LFP power before and after SES. Power spectrum of the LFP prior to and after SES. There was no significant change.

Firing rate and SFC changes are independent

As shown above, SES significantly modulated both the firing rates and the δ-band SFC. While we used methods to account for changes in firing rates (see Methods), it is possible that the SFC changes were co-regulated with the change in firing rate. We thus examined the relationship between the two variables. Interestingly, the changes in firing rate and δ-band SFC were not significantly correlated with one another (Fig. 6, r = 0.1300, p > 0.05). This suggests that the effects of SES on the firing rate and the SFC were independent of each other.

Fig. 6 Comparison of changes in firing rate versus SFC. The plot shows the correlation of single-neuron changes in firing rate versus the corresponding SFC change. There was no significant relationship between the two (r = 0.13, p > 0.05). The line was generated using linear regression.

We found that SES can induce persistent M1 plasticity lasting at least 30–60 min after the end of stimulation; over half of the neural population significantly changed its firing rate in response to SES. Moreover, phase locking of firing to mesoscale oscillatory dynamics was significantly modulated in a manner that was independent of the direction of change in firing rate. The most prominent SFC increase occurred in the low-frequency range; there was no concomitant change in LFP power. Together, these findings suggest that SES can directly modulate M1 dynamics.

Relation to previous models of SES

Studies have previously shown that SES can alter both the sensorimotor representations of the stimulated body part and excitability [9, 10, 17].
Changes in sensorimotor representations have been primarily examined using functional imaging [11], which is an indirect measure of neural activity. Moreover, in both humans and rodents, SES has been shown to increase excitability as measured by responses to TMS pulses [9, 17]. The main uncertainty was whether M1 is directly affected by SES. Our results add to this body of literature by demonstrating three main points. First, SES can directly modulate the activity patterns of M1, as demonstrated by the changes in firing rates of single neurons. Second, our finding of a diversity of neural firing changes suggests a more complex neural response to SES; a better understanding of this diversity and its underlying neural basis (e.g. neural connectivity, cell types) might help improve the efficacy of SES. Third, our results suggest two possible mechanisms of SES: a change in spontaneous firing rate, and a change in coupling to mesoscale dynamics.

Somatosensory electrical stimulation and neural plasticity

SES-induced plasticity appears to be experienced differentially by the large sets of M1 neurons recorded; while a majority of the neurons experienced a change in firing rate, the extent and direction of change were variable. Moreover, the changes in firing rate appear to affect putative interneurons and pyramidal neurons equally. What potential mechanisms can account for the diversity of changes in neural firing? On a macroscopic level, SES evoked deflections in the M1 LFP during stimulation (Fig. 1c). This is consistent with past work showing that sensory inputs can directly influence motor areas [35,36,37]. The reduction in response with each pulse is also consistent with the adaptation evident during sensory stimulation [32]. It is quite likely that the observed input also triggered synchronous spiking in M1.
Thus, it is possible that the extent to which a single neuron participated in the synchronous spiking during SES could account for the observed direction of change. Repetitive stimulation of sensory inputs to an area may also result in short-term homeostatic regulation of network dynamics [38,39,40]. SES could also trigger activity-dependent synaptic plasticity [41, 42]. In general, brief periods of activity can trigger long-term potentiation and long-term depression, depending on the specific patterns of activation [38, 43]. Such activity can also increase or decrease the intrinsic excitability of presynaptic neurons [38, 44]. This mechanism might explain the diversity of plasticity evident at the level of single neurons. It is also worth noting that emerging computational methods to quantify functional network connectivity [23] might eventually be used to predict the specific plasticity effects at the single-neuron level.

Another possibility is that the observed changes in M1 firing are the result of network plasticity in the sensorimotor system. Electrical stimulation of peripheral nerves causes synchronous activation of muscle spindles and cutaneous afferents that appears to drive area-specific activation and reorganization in primary somatosensory areas [14, 45,46,47]. Moreover, SES can trigger changes in TMS-evoked MEPs [9, 17, 18]. While past work has suggested that mechanisms of plasticity below the brainstem may not account for excitability changes [9, 18, 19], it is reasonable to suppose that larger-scale network dynamics are modulated [20]. In this scenario, the observed changes in M1 could be the result of plasticity at other cortical sites. For example, given the known strong connections between sensory and motor areas [3], changes at a primary sensory area could result in spontaneous firing changes at a connected site.
Spike coupling to low frequency oscillations

The greatest change in the coupling of neural spiking to oscillatory LFP dynamics was in the δ-band, also known as low-frequency oscillations (LFO) [22, 48]. Our results further suggest that the change in coupling, or phase-locking, to mesoscale dynamics is independent of the changes in firing rate. For example, at the single-neuron level, changes in firing rate did not predict changes in SFC. Moreover, we observed a change in SFC for putative pyramidal neurons without a concomitant change in firing rate. It is unclear what might drive this change. The lack of a change in LFP power in the LFO range suggests that changes in input to M1 are not a main driver; the LFP is widely believed to be a measure of synaptic inputs [21, 28, 29]. Changes in intrinsic excitability are certainly a possible mechanism through which neurons can become more coupled to population dynamics [38]. This might also explain the previously observed changes in M1 evoked potentials after SES [9, 17]. Alternatively, changes in local synaptic connectivity [29], i.e. as distinct from synchronous inputs to M1, could drive the changes in neural coupling to population dynamics.

What might be the broader physiological consequences of SES-induced changes in LFO dynamics? In general, ketamine anesthesia is known to result in such low-frequency oscillatory activity [22, 48]. However, in rodents, non-human primates, and humans, LFOs have been observed at the level of spiking and LFP in the motor cortex during reaching tasks [22, 24, 48, 49]. It has been postulated that LFOs represent an intrinsic property of motor circuits involved in the production of fast and accurate movements. Stroke disrupts these movement-related potentials in humans, and their disruption is highly correlated with motor impairments [22, 49]. LFOs are therefore a potential biomarker of restored circuit dynamics after stroke as they relate to fast and accurate skilled reaching [20, 22].
Interestingly, our recent study found that parameters for modulation of LFOs under anesthesia generalized to the awake state [22]. It is thus possible that the locking of spiking to LFOs is a general principle of the cortical effects of SES. In other words, SES might be particularly suited for modulating the neural dynamics linked to cortical slow oscillations. Future work can examine whether SES similarly modulates movement-related spiking in the healthy or perilesional cortex; this might be one mechanism through which SES improves function in stroke patients [20, 50].

In summary, brief periods of SES induced long-lasting cortical plasticity in M1. We identified significant changes in firing rate and spike coupling to low-frequency oscillations in the majority of recorded neurons. Further tailoring of these processes to identified cortical dynamics might further improve the efficacy of SES in those with motor disabilities after stroke or other acquired brain injuries [22, 50].

Abbreviations

EEG: Electroencephalogram; LFO: Low frequency oscillation; LFP: Local field potential; M1: Primary motor cortex; MEP: Motor evoked potential; PSD: Power spectral density; SDnoise: Standard deviation of noise; SEM: Standard error of the mean; SES: Somatosensory electrical stimulation; SFC: Spike field coherence; SNR: Signal-to-noise ratio; TMS: Transcranial magnetic stimulation; TW: Time-bandwidth; α-band: Alpha-band; β-band: Beta-band; γ-band: Gamma-band; θ-band: Theta-band; δ-band: Delta-band

References

Johansson RS, Flanagan JR. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat Rev Neurosci. 2009;10(5):345–59.

Richardson AG, Attiah MA, Berman JI, Chen HI, Liu X, Zhang M, Van der Spiegel J, Lucas TH. The effects of acute cortical somatosensory deafferentation on grip force control. Cortex. 2016;74:1–8.

Kaas JH. The functional organization of somatosensory cortex in primates. Ann Anat. 1993;175(6):509–18.

Qi HX, Kaas JH, Reed JL. The reactivation of somatosensory cortex and behavioral recovery after sensory loss in mature primates. Front Syst Neurosci. 2014;8:84.
LaMotte RH, Mountcastle VB. Disorders in somesthesis following lesions of parietal lobe. J Neurophysiol. 1979;42(2):400–19.

Randolph M, Semmes J. Behavioral consequences of selective subtotal ablations in the postcentral gyrus of Macaca mulatta. Brain Res. 1974;70(1):55–70.

Cohen LG, Brasil-Neto JP, Pascual-Leone A, Hallett M. Plasticity of cortical motor output organization following deafferentation, cerebral lesions, and skill acquisition. Adv Neurol. 1993;63:187–200.

Sanes JN, Wang J, Donoghue JP. Immediate and delayed changes of rat motor cortical output representation with new forelimb configurations. Cereb Cortex. 1992;2(2):141–52.

Kaelin-Lang A, Luft AR, Sawaki L, Burstein AH, Sohn YH, Cohen LG. Modulation of human corticomotor excitability by somatosensory input. J Physiol. 2002;540(Pt 2):623–33.

Ikuno KM, Matsuo A, Shomoto K. Sensory electrical stimulation for recovery of hand and arm function in stroke patients: a review of the literature. J Novel Physiotherapies. 2012;S1:007.

Hamdy S, Rothwell JC, Aziz Q, Singh KD, Thompson DG. Long-term reorganization of human motor cortex driven by short-term sensory stimulation. Nat Neurosci. 1998;1(1):64–8.

Byl NN, Merzenich MM, Jenkins WM. A primate genesis model of focal dystonia and repetitive strain injury: I. Learning-induced dedifferentiation of the representation of the hand in the primary somatosensory cortex in adult monkeys. Neurology. 1996;47(2):508–20.

Wang X, Merzenich MM, Sameshima K, Jenkins WM. Remodelling of hand representation in adult cortex determined by timing of tactile stimulation. Nature. 1995;378(6552):71–5.

Wu CW, Seo HJ, Cohen LG. Influence of electric somatosensory stimulation on paretic-hand function in chronic stroke. Arch Phys Med Rehabil. 2006;87(3):351–7.

Conforto AB, Ferreiro KN, Tomasi C, dos Santos RL, Moreira VL, Marie SK, Baltieri SC, Scaff M, Cohen LG. Effects of somatosensory stimulation on motor function after subacute stroke. Neurorehabil Neural Repair. 2010;24(3):263–72.
Celnik P, Hummel F, Harris-Love M, Wolk R, Cohen LG. Somatosensory stimulation enhances the effects of training functional hand tasks in patients with chronic stroke. Arch Phys Med Rehabil. 2007;88(11):1369–76.

Luft AR, Kaelin-Lang A, Hauser TK, Buitrago MM, Thakor NV, Hanley DF, Cohen LG. Modulation of rodent cortical motor excitability by somatosensory input. Exp Brain Res. 2002;142(4):562–9.

Golaszewski SM, Bergmann J, Christova M, Kunz AB, Kronbichler M, Rafolt D, Gallasch E, Staffen W, Trinka E, Nardone R. Modulation of motor cortex excitability by different levels of whole-hand afferent electrical stimulation. Clin Neurophysiol. 2012;123(1):193–9.

Tinazzi M, Zarattini S, Valeriani M, Romito S, Farina S, Moretto G, Smania N, Fiaschi A, Abbruzzese G. Long-lasting modulation of human motor cortex following prolonged transcutaneous electrical nerve stimulation (TENS) of forearm muscles: evidence of reciprocal inhibition and facilitation. Exp Brain Res. 2005;161(4):457–64.

Tu-Chan AP, Natraj N, Godlove J, Abrams G, Ganguly K. Effects of somatosensory electrical stimulation on motor function and cortical oscillations. J Neuroeng Rehabil. 2017;14(1):113.

Buzsaki G. Neural syntax: cell assemblies, synapsembles, and readers. Neuron. 2010;68(3):362–85.

Ramanathan DS, Guo L, Gulati T, Davidson G, Hishinuma AK, Won SJ, Knight RT, Chang EF, Swanson RA, Ganguly K. Low-frequency cortical activity is a neuromodulatory target that tracks recovery after stroke. Nat Med. 2018;24(8):1257–67.

Sadtler PT, Quick KM, Golub MD, Chase SM, Ryu SI, Tyler-Kabara EC, Yu BM, Batista AP. Neural constraints on learning. Nature. 2014;512(7515):423–6.

Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487(7405):51–6.

Mitra PP, Pesaran B. Analysis of dynamic brain imaging data. Biophys J. 1999;76(2):691–708.

Gulati T, Guo L, Ramanathan DS, Bodepudi A, Ganguly K. Neural reactivations during sleep determine network credit assignment. Nat Neurosci. 2017;20(9):1277–84.

Vinck M, Womelsdorf T, Buffalo EA, Desimone R, Fries P. Attentional modulation of cell-class-specific gamma-band synchronization in awake monkey area v4. Neuron. 2013;80(4):1077–89.

Buzsaki G, Wang XJ. Mechanisms of gamma oscillations. Annu Rev Neurosci. 2012;35:203–25.

Okun M, Steinmetz N, Cossell L, Iacaruso MF, Ko H, Bartho P, Moore T, Hofer SB, Mrsic-Flogel TD, Carandini M, et al. Diverse coupling of neurons to populations in sensory cortex. Nature. 2015;521(7553):511–5.

Mitchell JF, Sundberg KA, Reynolds JH. Spatial attention decorrelates intrinsic activity fluctuations in macaque area V4. Neuron. 2009;63(6):879–88.

Gulati T, Won SJ, Ramanathan DS, Wong CC, Bodepudi A, Swanson RA, Ganguly K. Robust neuroprosthetic control from the stroke perilesional cortex. J Neurosci. 2015;35(22):8653–61.

Castro-Alamancos MA. Dynamics of sensory thalamocortical synaptic networks during information processing states. Prog Neurobiol. 2004;74(4):213–47.

Gulati T, Ramanathan DS, Wong CC, Ganguly K. Reactivation of emergent task-related ensembles during slow-wave sleep after neuroprosthetic learning. Nat Neurosci. 2014.

Aarts E, Verhage M, Veenvliet JV, Dolan CV, van der Sluis S. A solution to dependency: using multilevel analysis to accommodate nested data. Nat Neurosci. 2014;17(4):491–6.

Ganguly K, Kleinfeld D. Goal-directed whisking increases phase-locking between vibrissa movement and electrical activity in primary sensory cortex in rat. Proc Natl Acad Sci U S A. 2004;101(33):12348–53.

Pruszynski JA, Kurtzer I, Nashed JY, Omrani M, Brouwer B, Scott SH. Primary motor cortex underlies multi-joint integration for fast feedback control. Nature. 2011;478(7369):387–90.

Scott SH. The computational and neural basis of voluntary motor control and planning. Trends Cogn Sci. 2012;16(11):541–9.

Ganguly K, Poo MM. Activity-dependent neural plasticity from bench to bedside. Neuron. 2013;80(3):729–41.

Feldman DE. Synaptic mechanisms for plasticity in neocortex. Annu Rev Neurosci. 2009;32:33–55.

Castro-Alamancos MA, Donoghue JP, Connors BW. Different forms of synaptic plasticity in somatosensory and motor areas of the neocortex. J Neurosci. 1995;15(7 Pt 2):5324–33.

Zhang X, Poo MM. Progress in neural plasticity. Sci China Life Sci. 2010;53(3):322–9.

Francis JT, Song W. Neuroplasticity of the sensorimotor cortex during learning. Neural Plasticity. 2011;2011:310737.

Markram H, Lubke J, Frotscher M, Sakmann B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science. 1997;275(5297):213–5.

Rebesco JM, Miller LE. Altering function in cortical networks by short-latency, paired stimulation. Conf Proc IEEE Eng Med Biol Soc. 2010;2010:1674–7.

Sawaki L, Wu CW, Kaelin-Lang A, Cohen LG. Effects of somatosensory stimulation on use-dependent plasticity in chronic stroke. Stroke. 2006;37(1):246–7.

Merzenich MM, Nelson RJ, Stryker MP, Cynader MS, Schoppmann A, Zook JM. Somatosensory cortical map changes following digit amputation in adult monkeys. J Comp Neurol. 1984;224(4):591–605.

Kaas JH. Somatosensory system. In: Mai JK, Paxinos G, editors. The Human Nervous System. Academic Press; 2014.

Hall TM, de Carvalho F, Jackson A. A common structure underlies low-frequency cortical dynamics in movement, sleep, and sedation. Neuron. 2014;83(5):1185–99.

Yilmaz O, Cho W, Braun C, Birbaumer N, Ramos-Murguialday A. Movement related cortical potentials in severe chronic stroke. Conf Proc IEEE Eng Med Biol Soc. 2013;2013:2216–9.

Ganguly K, Byl NN, Abrams GM. Neurorehabilitation: motor recovery after stroke as an example. Ann Neurol. 2013;74(3):373–81.

Research reported in this publication was supported by the National Institute Of Neurological Disorders And Stroke of the National Institutes of Health under Award Number K02NS093014 and Award Number 4R00NS097620.
Research reported in this publication was also supported by the NIMH under Award Number R01MH111871 and by funds from the UCSF Department of Neurology. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Neurology & Rehabilitation Service, San Francisco Veterans Affairs Medical Center, San Francisco, CA, USA: April K. Hishinuma & Karunesh Ganguly. Department of Neurology, University of California, San Francisco, San Francisco, CA, USA: April K. Hishinuma, Tanuj Gulati, Mark J. Burish & Karunesh Ganguly. Department of Neurosurgery, The University of Texas Health Science Center at Houston, Houston, TX, USA: Mark J. Burish. Department of Biomedical Sciences and Neurology, Cedars-Sinai, Los Angeles, CA, USA: Tanuj Gulati.
AH analyzed the data. MB conducted the experiments. TG provided code and assisted with analysis. KG supervised all aspects of the experiments. AH and KG wrote and edited the manuscript. All authors read and approved the final manuscript. Correspondence to Karunesh Ganguly. KG has submitted a provisional patent application for closed-loop SES. The results presented in this manuscript are not a part of the provisional patent application. AH, MB and TG do not have any competing interests.
Hishinuma, A.K., Gulati, T., Burish, M.J. et al. Large-scale changes in cortical dynamics triggered by repetitive somatosensory electrical stimulation. J NeuroEngineering Rehabil 16, 59 (2019). https://doi.org/10.1186/s12984-019-0520-1
Keywords: Somatosensory electrical stimulation (SES), Peripheral nerve, Spiking dynamics, Motor cortex, Low frequency oscillations
Is $\arctan(t)$ an energy or power signal? Is the function $\arctan(t)$ an energy or power signal? I've tried working out the integrals for energy and power but I get stuck on both. continuous-signals signal-power signal-energy Laurent Duval MaxV The typical inverse tangent function maps the input range of $ t \in (-\infty,\infty)$ into an output range of $(-\pi/2, \pi/2)$ as in the figure below: Based on this, its values are bounded for all $t$. Yet, since the integral of its square is unbounded, it cannot be an energy signal; i.e., $$ \int_{-\infty}^{\infty} |\tan^{-1}(t)|^2 dt ~~~~~\to \infty $$ Moreover, the average power of this signal is nonzero and finite, hence this is a power signal; i.e., $$ 0 < \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} |\tan^{-1}(t)|^2 dt ~~~~~ < \infty $$ To evaluate the last integral, use the following MATLAB figure to see that the square of the inverse tangent is always less than $(\pi/2)^2$ for all $t$. Hence, to find an upper bound on the integral, we can use $|\tan^{-1}(t)|^2 < (\pi/2)^2 = K$. Then the integral becomes $$ 0 < \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} |\tan^{-1}(t)|^2 dt < \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} K dt = \lim_{T \to \infty} \frac{1}{T} K \cdot T = K ~~~~~ < \infty $$ Fat32 $\begingroup$ Can you elaborate on the second part? How did you determine that that monstrosity can't be zero or infinite without solving it? $\endgroup$ – MaxV Aug 19 '18 at 23:13 $\begingroup$ I haven't explicitly evaluated the integral, and you needn't either. However, an observation just as Matt L did helps. If you can show that $\arctan(t)$ is less than a constant for all $t$, then you can use that constant signal to find an upper bound on the integral... Let me add that.
$\endgroup$ – Fat32 Aug 20 '18 at 9:22 The fact that the signal $x(t)=\arctan(t)$ has finite power can be easily shown by noticing that $$|\arctan(t)|^2\le\frac{\pi^2}{4}\tag{1}$$ from which $$\overline{x^2(t)}=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}|\arctan(t)|^2dt\le\frac{\pi^2}{4}\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}dt=\frac{\pi^2}{4}\tag{2}$$ Matt L. The question is: is it energy/power or not? So you can hope to answer without computing the exact value (if it exists). I'd advocate that in a more generic case, it is interesting to develop a methodology. In signal processing, as in all domains of science, it is enriching to develop in the following order: simplification of the setting (with observation), intuition on the result, development of an educated guess, computations (approximate or precise). Let us first use our observational skills. The function is anti-symmetric, and energy or power use absolute values. They are thus symmetric. So we can just look at one side, for instance $[0,+\infty[$. The function is:
- continuous: so bounded integrals on $[0,+T[$ exist,
- increasing: so bounded integrals on $[0,+T[$ are increasing, hence finite or infinite,
- bounded: the graph takes its values in $[0,+\pi/2[$.
Second: intuition. So we don't have problems at zero, and only at infinity. Third: educated guess. The function $\arctan$ almost behaves (for $t$ positive and high enough) like a nonzero constant (increasingness is important). So on average the integral is likely to converge (to something positive, not infinite), but not over the whole span. So likely, the function is power, but not energy. By the way, asking for being energy or power is often a sign that the answers may differ. Fourth: computations. I won't add much to preceding answers. To tell an integral from zero or infinite, you can bound it from below and above.
$\arctan(1) = \pi/4$, thus for $T>1$, using the important increasing property: $$\int_0^1 0 \, dt + \int_1^T \pi^2/16 \, dt\le \int_0^T \arctan^2 t\, dt \le \int_0^1 \pi^2/16 \, dt + \int_1^T \pi^2/4 \, dt $$ or $$ \pi^2/16 (T-1)\le \int_0^T \arctan^2 t\, dt \le \pi^2/16+ \pi^2/4 (T-1) \,.$$ With or without dividing by $T$, you can answer your question. Fifth: there are three types of mathematicians: those who can count, and those who can't. A lazy but cumbersome approach is to check whether $\arctan$ has a useful primitive form. And it does have one (see an $\arctan$ table for instance): $$ F(t) =t \arctan t - \frac{1}{2}\ln(1+t^2) + C \,.$$ Even if you could have stopped before, because we already have the answer, I strongly believe it is always useful to improve your skills by: allowing a last verification of the above trail (it does not hurt), and going further. First, for the above primitive, setting $C=0$ is ok to evaluate $F(T)-F(0)$, and factorizing by $T$ easily confirms the above assumptions. Second, you can discover an interesting rule for primitives/integrals of inverse functions: In mathematics, integrals of inverse functions can be computed by means of a formula that expresses the antiderivatives of the inverse $f^{-1}$ of a continuous and invertible function $f$, in terms of $f^{-1}$ and an antiderivative of $f$. This formula was published in 1905 by Charles-Ange Laisant. Under some conditions: $$ \int f^{-1}(t)\,dt=tf^{-1}(t)-F\circ f^{-1}(t)+C,$$ which also has a nice "pure graphic version" (aka: proof without words).
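As a complementary check (my addition, not part of the answers above), the bounds can be confirmed numerically: a midpoint Riemann sum shows the average power approaching $(\pi/2)^2 \approx 2.467$, while the un-averaged energy integral keeps growing with $T$.

```python
import math

# Midpoint Riemann sum of arctan(t)^2 over [-T/2, T/2], divided by T.
# As T grows this approaches (pi/2)^2, confirming a finite, nonzero
# average power: arctan(t) is a power signal, not an energy signal.
def avg_power(T, steps=100_000):
    dt = T / steps
    total = sum(math.atan(-T / 2 + (k + 0.5) * dt) ** 2
                for k in range(steps)) * dt
    return total / T  # the un-divided 'total' (the energy) is unbounded in T

print(avg_power(1e5))  # close to (pi/2)**2 ~ 2.467
```

Since $\arctan^2 t < \pi^2/4$ everywhere, the numerical average stays strictly below the limit, consistent with equation (2).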
Determine and compute the elementary matrices: Linear Algebra The picture for the problem is shown in file below Linear Algebra Matrices Louise W File #1 (jpg) Alessandro Iraci The elementary matrix associated to a row operation that sends A to A' is the one such that AE = A' or EA = A' or something else entirely? It's the latter, EA=A'. Here is a nice presentation: https://www.youtube.com/watch?v=gA7m5lttIcU Ok, that's what I deduced from the rest of the text, I wanted to be sure I wasn't messing up, it's been too long since I checked the basic definitions. I assume that the elementary matrices are the ones such that $EA = A'$, that is, they have to be multiplied on the left. a) We have \[ E_1 = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \text{ as } \qquad \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 1 & 0 \\ 4 & 9 & 1 \\ 0 & 5 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 2 & 0 \\ 4 & 9 & 1 \\ 0 & 5 & 4 \end{bmatrix}, \] \[ E_2 = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \text{ as } \qquad \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 & 2 & 0 \\ 4 & 9 & 1 \\ 0 & 5 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 2 & 0 \\ 0 & 5 & 1 \\ 0 & 5 & 4 \end{bmatrix}, \]\[ E_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}, \qquad \text{ as } \qquad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}\begin{bmatrix} 2 & 2 & 0 \\ 0 & 5 & 1 \\ 0 & 5 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 2 & 0 \\ 0 & 5 & 1 \\ 0 & 0 & 3 \end{bmatrix}. 
\] b) By applying the same elementary operations to the identity matrix, we have \[ E_3 E_2 E_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 \\ -4 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0 \\ -4 & 1 & 0 \\ 4 & -1 & 1 \end{bmatrix}\] c) To compute the inverse, we apply the inverse operations in reverse order, that is \[ E_1^{-1} = \begin{bmatrix} 1/2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad E_2^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad E_3^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \] and so the product is \[ L = \begin{bmatrix} 1/2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1/2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1/2 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \] d) $L$ is already lower triangular and $U$ is already upper triangular, but the canonical decomposition requires $L$ to be unitriangular. We can do so by inserting an extra $E_1 E_1^{-1}$ in the factorisation. 
So, \[ A = \begin{bmatrix} 1/2 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 2 & 0 \\ 0 & 5 & 1 \\ 0 & 0 & 3 \end{bmatrix} = \begin{bmatrix} 1/2 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1/2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 2 & 0 \\ 0 & 5 & 1 \\ 0 & 0 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 5 & 1 \\ 0 & 0 & 3 \end{bmatrix} \] is the desired decomposition. I have to show all my work in order to receive full credit. Would writing all of this down be considered good for showing my work do you think? I honestly have no idea because I am behind in the class and therefore very lost. I think so, I didn't skip a single step. However, two remarks. First, understand that I don't know what you did in class, how you are supposed to solve these problems, and which kind of details you are supposed to provide. Second, as general advice, I strongly recommend you to try to read and understand the solution before submitting it: educators are generally smart; if you submit some work without even understanding what you wrote, they are likely to realise it. Again, as general advice, it is okay to ask for help, or even for a full solution, if you don't know how to solve a problem. It is really not okay to passively transcribe a solution that you don't understand. It's actually harmful to your own comprehension, and whoever is grading your assignment can usually tell if you know what you are doing or not, even if you submit a 100% correct solution. I've graded a bunch of assignments myself, trust me on that. I guess what I should have said was, is the first one (a) good? I just don't understand how you got E1, E2, and E3. You are completely right though. 
I plan on going back through the textbook and teaching myself everything this week because the exam is Friday. Ahhh I'm wondering if I should turn it in now or just take the 0. You have a very good point Well, E1, E2, and E3 are the matrices that, when multiplied to the left, correspond to the row operation that you apply to your matrix. I showed all the multiplications, you see, when you multiply E1 to the left to A, you are applying the operation that is to multiply the first row of A by 2. The same goes for E2 and E3, I hope you see it now. When are you supposed to submit this assignment? My advice is to go through the textbook with the solution in front of you and try to see if you can fully understand it. If you do, go on and submit it. If not, well, you can try, and maybe even get full marks, but expect some questions about it at the oral exam, if you have one. The examiner will want to check if you know what you're doing. Indeed the matrices E_1, E_2, and E_3 are obtained by applying the corresponding elementary row operations to the identity matrix. That's how you get E_1, E_2, and E_3.
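If you want to double-check the arithmetic yourself, the whole computation can be replayed in a few lines of plain Python (this is just a verification aid I'm adding, not something you'd submit):

```python
def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A  = [[1, 1, 0], [4, 9, 1], [0, 5, 4]]
E1 = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]    # 2 * R1
E2 = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]   # R2 - 2 * R1
E3 = [[1, 0, 0], [0, 1, 0], [0, -1, 1]]   # R3 - R2

U = matmul(E3, matmul(E2, matmul(E1, A)))
assert U == [[2, 2, 0], [0, 5, 1], [0, 0, 3]]

# Inverse operations in reverse order rebuild L, and L times U gives back A:
L = matmul([[0.5, 0, 0], [0, 1, 0], [0, 0, 1]],       # (1/2) * R1
           matmul([[1, 0, 0], [2, 1, 0], [0, 0, 1]],  # R2 + 2 * R1
                  [[1, 0, 0], [0, 1, 0], [0, 1, 1]])) # R3 + R2
assert matmul(L, U) == A
```

Running this silently passing both assertions confirms parts (a) through (c) above.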
What are Cryptographic Multi-linear Maps? I've encountered this term many times in the fields of Fully-Homomorphic Encryption and Obfuscation. I want to learn those subjects and Cryptographic Multilinear Maps seem to be an obstacle in the way. Can you help me with that and explain it in simple words (as much as you can)? encryption group-theory Bush $\begingroup$ Do you know what cryptographic bilinear maps are? $\;$ $\endgroup$ – user991 May 13 '14 at 16:35 Ok, I will start with a cryptographic bilinear map. Cryptographic Bilinear Map A cryptographic bilinear map $e: G_1\times G_2 \rightarrow G_T$ as the name says is a map that is linear in both components, i.e., it holds that for all $g\in G_1$ and $h\in G_2$ and all $a,b\in Z_p$ (where $p$ is the order of all groups) we have that $e(g^a,h^b)=e(g,h)^{ab}$. For cryptographic use we want a setting where the discrete logarithm problem is hard in $G_1,G_2$ and $G_T$ (typically one also requires that variants of the computational Diffie-Hellman problem are hard and the decisional Diffie-Hellman problem may be easy in $G_1$ - that depends on the type of pairing, e.g., symmetric or asymmetric, with or without efficiently computable isomorphisms from $G_2$ to $G_1$). Furthermore, we want $e$ to be efficiently computable and non-degenerate, i.e., if $g$ and $h$ generate $G_1$ and $G_2$ respectively, then $e(g,h)$ generates $G_T$ (an $e$ which maps everything to $1$ in $G_T$ is useless). For pairings there are various Diffie-Hellman like assumptions (bilinear Diffie-Hellman assumptions), e.g., in the setting of $G_1=G_2$ and $g=h$ the bilinear CDHP states that it should be hard to compute $e(g,g)^{abc}$ given $g^a$, $g^b$ and $g^c$. Groups $G_1$ and $G_2$ that we know today where we find such a pairing $e$ are (subgroups of) rational points on elliptic curves (or abelian varieties) over finite fields, and the group $G_T$ is a subgroup of a multiplicative group of a finite field. The map $e$ thereby is a variant of the Weil or Tate pairing. 
Cryptographic Multilinear Map Now, a cryptographic multilinear map for $n>2$ is an $n$-linear map $e:G_1\times \ldots \times G_n \rightarrow G_T$, i.e., a map that is linear in all $n$ components. Essentially one requires the same as above but you want to have that it be $n$-linear, which basically means that $e(g_1^{a_1},\ldots,g_n^{a_n})=e(g_1,\ldots,g_n)^{\prod_{i=1}^n a_i}$ and that it is non-degenerate. As in the bilinear case we can also define multilinear CDHP etc (see for instance here). Such cryptographic multilinear maps would be a very nice tool if they would work as in the pairing setting (as envisioned in the paper linked above). However, the recent constructions for cryptographic multilinear maps are based on tools from constructions of fully homomorphic encryption schemes (AFAIK there is a construction using ideal lattices and one over the integers) and there the encodings of the elements are noisy and thus are approximations of the ideal case, not as nice as in the pairing setting, where one works with cyclic groups. Note that some papers simply assume the existence of a multilinear map that behaves like an extension from the known bilinear map setting with the associated Diffie-Hellman assumptions (although it is not yet known if such multilinear maps exist). Worth mentioning is the first candidate construction for indistinguishability obfuscation that relies on a concept related to multilinear maps (which also yields functional encryption for circuits). DrLecter $\begingroup$ What is the current state of the art in multilinear maps? $\endgroup$ – Jus12 Aug 30 '16 at 14:52 $\begingroup$ @Jus12 That's a very dynamic field and currently it's in a "break and repair" state. Unfortunately, I am not up to date at the moment. But you may browse the ePrint archive (eprint.iacr.org/2016) to check out the very recent papers. 
$\endgroup$ – DrLecter Sep 2 '16 at 7:31 $\begingroup$ Related to @DrLecter 's comment: malb.io/are-graded-encoding-schemes-broken-yet.html (you may think Graded Encoding Schemes are almost equivalent to multilinear maps) $\endgroup$ – Hilder Vitor Lima Pereira Jan 20 '18 at 21:30 I'll add something to the previous answer. The first way to construct multilinear maps is pretty recent and was introduced by Sanjam Garg, Craig Gentry and Shai Halevi. What we want is given groups $G_1,\ldots,G_n$ and $G_T$ a map: $$e:G_1\times\cdots\times G_n\to G_T$$ that satisfies the linearity property in DrLecter's answer. It's worth noting here that $G_1,\ldots,G_n$ do not necessarily have to be distinct groups. Often, they would be the same group and we would call this a symmetric $n$-linear map. Current constructions are typically called leveled multilinear maps. In the symmetric case this can be described as follows. Assume that you have groups $G_1,\ldots,G_n$ and bilinear maps $e_{i,j}:G_i\times G_j\to G_{i+j}$ for all $i,j > 0$ that satisfy $i+j\leq n$. We can construct a symmetric $n$-linear map $e:G_1\times\cdots\times G_1\to G_n$ from this by recursively defining: $$e_2 = e_{1,1},\;\; e_n(g_1,\ldots,g_n) = e_{1,n-1}(g_1,e_{n-1}(g_2,\ldots,g_n))$$ For example if $n=3$, then we would compute: $$e_3(g_1^{a_1},g_2^{a_2},g_3^{a_3}) = e_{1,2}(g_1^{a_1},e_{1,1}(g_2^{a_2},g_3^{a_3}))=e_{1,2}(g_1,e_{1,1}(g_2,g_3)^{a_2a_3})^{a_1}=e_{1,2}(g_1,e_{1,1}(g_2,g_3))^{a_1a_2a_3},$$ which shows that $e_3$ is $3$-linear. The asymmetric case is slightly more complicated (it's a bit heavy notation-wise and the subscripts become sets instead of integers). Thus, current constructions have a bit more structure than a pure $n$-linear group. This is both good and bad in that more versatile structures can allow for more elaborate constructions, but on the other hand, if we only need $n$-linearity, then the extra structure might lead to possible attacks. 
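To make the recursion above concrete, here is a toy (and of course completely insecure) model of my own, not taken from any of the papers: an element of $G_i$ is just a pair (level, exponent) standing for $g_i^a$, and the graded pairing multiplies exponents while adding levels.

```python
# Toy model of a symmetric leveled multilinear map. Discrete logs are
# trivially easy here, so this has zero cryptographic value; it only
# illustrates why e_n(g_1,...,g_n) = e_{1,n-1}(g_1, e_{n-1}(g_2,...,g_n))
# is n-linear.
p = 101  # toy group order

def pairing(x, y):
    (i, a), (j, b) = x, y
    return (i + j, (a * b) % p)       # e_{i,j}: G_i x G_j -> G_{i+j}

def multilinear(elems):
    if len(elems) == 1:
        return elems[0]
    return pairing(elems[0], multilinear(elems[1:]))

g = lambda a: (1, a % p)              # encode g^a at level 1

assert multilinear([g(2), g(3), g(5)]) == (3, 30)   # e(g,g,g)^{2*3*5}
# n-linearity: doubling one input exponent doubles the output exponent
assert multilinear([g(4), g(3), g(5)])[1] == (2 * 30) % p
```

Real graded encoding schemes replace the exponent with a noisy encoding, which is exactly why they only approximate this ideal behaviour.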
However, in a generic group setting the leveled $n$-linear and plain $n$-linear settings are essentially equivalent, so there might not be that much danger. Edvard Fagerholm $\begingroup$ Can you add citation to the paper? $\endgroup$ – Jus12 Aug 30 '16 at 14:53 $\begingroup$ eprint.iacr.org/2012/610 $\endgroup$ – Alin Tomescu Feb 3 '17 at 16:16
A CUDA-powered method for the feature extraction and unsupervised analysis of medical images
Parallel Computing Technologies 2020
Leonardo Rundo, Andrea Tangherloni, Paolo Cazzaniga, Matteo Mistri, Simone Galimberti, Ramona Woitek, Evis Sala, Giancarlo Mauri & Marco S. Nobile
The Journal of Supercomputing volume 77, pages 8514–8531 (2021)
Image texture extraction and analysis are fundamental steps in computer vision. In particular, considering the biomedical field, quantitative imaging methods are increasingly gaining importance because they convey scientifically and clinically relevant information for prediction, prognosis, and treatment response assessment. In this context, radiomic approaches are fostering large-scale studies that can have a significant impact in the clinical practice. In this work, we present a novel method, called CHASM (Cuda, HAralick & SoM), which is accelerated on the graphics processing unit (GPU) for quantitative imaging analyses based on Haralick features and on the self-organizing map (SOM). The Haralick features extraction step relies upon the gray-level co-occurrence matrix, which is computationally burdensome on medical images characterized by a high bit depth. The downstream analyses exploit the SOM with the goal of identifying the underlying clusters of pixels in an unsupervised manner. CHASM is conceived to leverage the parallel computation capabilities of modern GPUs. 
Analyzing ovarian cancer computed tomography images, CHASM achieved up to \(\sim 19.5\times \) and \(\sim 37\times \) speed-up factors for the Haralick feature extraction and for the SOM execution, respectively, compared to the corresponding C++ coded sequential versions. Such computational results point out the potential of GPUs in the clinical research. The use of high-performance computing (HPC) is gaining ground in high-dimensional imaging data processing [16], as in the context of hyperspectral image processing [5, 35] and medical image analysis [12]. In particular, for the specific case of medical imaging, along with the acceleration of the training of deep neural networks [47], graphics processing unit (GPU)-powered implementations allowed for real-time performance in image reconstruction [46, 59], segmentation [2], as well as feature extraction [42] and classification [22]. Moreover, multi-core and many-core architectures were exploited to accelerate computationally expensive medical image enhancement and quantification tasks [41, 52, 53]. Feature extraction is the first phase in quantitative imaging as it allows us to perform fundamental tasks in computer vision, such as object detection [55] and representation [48]. Even though deep learning has recently gained ground, conventional machine learning models built on top of handcrafted texture features still play a key role in practical applications, especially relying upon the interpretability of the results [54]. With particular reference to biomedicine, quantitative imaging methods are increasingly gaining importance since they convey scientifically and clinically relevant information for prediction, prognosis, and treatment response assessment [62]. In this context, radiomic approaches are endorsing the transition towards large-scale studies with a relevant impact in the clinical practice [26]. 
Indeed, radiomics involves the extraction and the analysis of a huge amount of features mined from medical images [25]. The ultimate goal is the objective and quantitative description of tumor phenotypes [13, 26]. Assuming that radiomic features convey information about the different cancer phenotypes, their combination with genomics can enable intra- and inter-tumor heterogeneity studies [45]. Among the radiomic texture feature classes [50], Haralick features are the most well-established and interpretable [18, 19]. These second-order statistics are based on the gray-level co-occurrence matrix (GLCM) that stores the co-occurrence frequency of similar intensity levels over the region (i.e., intensity value pairs). In radiology, Haralick features allow clinicians to assess image regions characterized by heterogeneous/homogeneous areas or local intensity variations [6]. GLCM-based texture features have been extensively exploited in several medical image analysis tasks, such as breast ultrasound (US) classification [15], brain tissue and tumor segmentation on magnetic resonance (MR) images [36, 49], and volume-preserving non-rigid lung computed tomography (CT) image registration [37]. Unfortunately, the computation of these features is considerably burdensome on images characterized by a high bit depth (e.g., 16 bits), such as in the case of medical images that have to convey detailed visual information [31, 43]. As a matter of fact, with the existing computational tools, the range of intensity values of an image must be reduced and limited to achieve an efficient radiomic feature computation [63]. In addition, considering the downstream analyses of the extracted (handcrafted or learned) features, numerous machine learning models can be employed in the context of computer vision [32]. Kohonen self-organizing maps (SOMs) [24] are one of the most effective techniques that were applied to biomedical data clustering [3]. 
A SOM is a special class of artificial neural networks based on the idea of "competitive" learning, able to self-organize the weights in an unsupervised fashion, leading to a spontaneous partitioning of the dataset according to the mutual similarities of the input vectors. SOMs have been used for the analysis of medical images, especially in segmentation tasks in combination with unsupervised clustering [1, 27] or evolutionary computation techniques [36]. Several radiomics toolboxes are available, such as MaZda [51], written in C++, the Computational Environment for Radiological Research (CERR) in MATLAB [4, 10], PyRadiomics in Python [58], and Local Image Feature Extractor (LIFEx) in Java [33]. Importantly, considering 16-bit images, these tools are not suitable for the extraction of the voxel-based feature maps while preserving the initial grayscale range. This limitation is emphasized when dealing with feature extraction tasks on the whole input image, especially for image classification purposes [30]. In this work, we propose a novel GPU-powered pipeline, called CHASM, for the Haralick feature extraction and the downstream unsupervised SOM-based analysis of the feature maps computed on medical images. CHASM exploits HaraliCU [42], a GPU-enabled approach, capable of overcoming the issues of existing tools by effectively computing the feature maps for high-resolution images with their full dynamics of grayscale levels, and CUDA-SOM, a GPU-based implementation of the SOMs for the identification of clusters of pixels in the image. CHASM offloads the computations onto the cores of GPUs, thus allowing us to drastically reduce the running time of the analyses executed on central processing units (CPUs). In the experimental tests performed on ovarian cancer CT images [39], CHASM allowed us to achieve up to \(20\times \) speed-up with respect to the corresponding sequential implementation. 
The remainder of the manuscript is organized as follows: Section 2 introduces the basic concepts of Haralick feature extraction and of SOMs. Section 3 describes the proposed GPU-accelerated pipeline, and the obtained results are shown and discussed in Sect. 4. Finally, concluding remarks and future directions are provided in Sect. 5. Haralick features extraction Haralick features are GLCM-based texture descriptors that are used to analyze the textural characteristics of an image according to second-order statistics [18, 19]. In medical imaging, these features have been shown to appropriately characterize the cancer imaging phenotype [42]. For instance, the entropy feature is the most promising quantitative imaging biomarker for the analysis of the heterogeneity characterizing cancer imaging [11]. Haralick features are computed from the GLCM, which denotes the co-occurrence frequency of similar intensity levels over the analyzed region. The study conducted in [14] pointed out the existing dependencies among the Haralick features, highlighting how they can be exploited to perform calculations pertaining to other features or intermediate results [42]. Nevertheless, a quantization step (i.e., the compression of the initial intensity range) is generally applied for practical reasons [64], leading to an irreversible loss of information. Even though Brynolfsson et al. stated that quantizing the grayscale levels reduces the impact of noise, thus yielding more descriptive Haralick features in MR images, this compression could remarkably affect the discriminative power of feature-based classification tasks [20]. In any case, the grayscale compression is mostly applied to deal with the computational costs that would be required to calculate these features considering the full grayscale dynamics. In order to speed up the calculation of Haralick features, HPC solutions can be exploited.
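To illustrate how second-order features follow from the GLCM, the sketch below computes two of them, entropy and contrast, from a normalized co-occurrence matrix (the helper name is ours; the formulas are the standard Haralick definitions):

```python
import numpy as np

def haralick_entropy_contrast(glcm):
    """Entropy and contrast from a (possibly unnormalized) GLCM."""
    p = glcm / glcm.sum()                # normalize to a joint probability p(i, j)
    nz = p[p > 0]                        # skip zero entries to avoid log(0)
    entropy = -np.sum(nz * np.log2(nz))
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)  # weighs each pair by its gray-level gap
    return entropy, contrast

# a uniform GLCM over 4 gray levels: entropy is maximal, log2(16) = 4 bits
g = np.ones((4, 4))
H, C = haralick_entropy_contrast(g)
```

A uniform GLCM is the "most heterogeneous" case, which is why entropy reaches its maximum here; a GLCM concentrated on the diagonal would instead give low entropy and zero contrast.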
For instance, GPUs have been intensively leveraged, being effective computational solutions in life sciences [12, 34]. In the context of Haralick feature extraction on GPUs, different optimization strategies have been presented. For instance, a packed representation of the symmetric GLCM was proposed to store only nonzero elements [14]. By so doing, a simple lookup table, which maps the indices of the packed co-matrix, was used to calculate the features, reducing the latencies due to memory reads and increasing the overall performance. This efficient implementation allowed the Haralick features to be calculated on 12-bit intensity depth images. Another strategy to store the GLCM consists in the meta-GLCM array proposed by Tsai et al. [56], which uses an indirect encoding scheme that fully exploits the GPU memory hierarchy. The valuable amount of information conveyed by medical images, in terms of both image resolution and pixel depth, should be maintained for automated processing [43], since clinically useful pictorial content could be identified beyond naked-eye perception. For these reasons, HaraliCU [42] was developed, aiming at efficiently keeping the full dynamics of the gray levels (i.e., 16 bits in the case of biomedical images). HaraliCU was tested on brain metastatic tumor MR and ovarian cancer CT images. The Self-Organizing Map The Kohonen SOM [24] is an unsupervised machine learning approach used to perform classification tasks according to the similarity of the data. Technically, a SOM is a class of artificial neural networks able to produce a low-dimensional (traditionally, bi-dimensional) and discrete representation of the input space. One important distinction compared to other neural networks is that SOMs exploit a paradigm named competitive learning, which is radically different from classic methods relying upon the minimization of the error by means of gradient descent approaches.
Specifically, a SOM is composed of a network of K artificial neurons named units. Usually, the units are all interconnected and logically organized as an \(M \times N\) square or hexagonal grid. Additionally, an input layer composed of D artificial neurons is fully connected to all the units in the SOM, where D is equal to the length of the input samples. At the beginning of the learning phase, K random weight vectors, \({\mathbf{w }}_{k} \in {\mathbb {R}}^D\), \(k=1, \ldots , K\), are initialized and associated with the units of the network. Then, each input vector \({\mathbf{x }} \in {\mathbb {R}}^D\) in the data set is presented to all units in the SOM. The unit with the weights most similar to the input vector becomes the best matching unit (\({\text {BMU}}\)). In our implementation, we assume a similarity based on the Euclidean distance, i.e., $$\begin{aligned} {\text {BMU}} = \mathop {{\mathrm{arg\,min}}}\limits _k {|| \mathbf{x }-\mathbf{w }_k ||}. \end{aligned}$$ Once the \({\text {BMU}}\) is identified, the weights in the network are adjusted toward the input vector using Eq. (2): $$\begin{aligned} {\mathbf{w }}_k(t+1) = {\mathbf{w }}_k(t)+ \alpha (t)\varDelta _k(\text {BMU}, t)(\mathbf{x }-\mathbf{w }_k(t)), \end{aligned}$$ where \(\alpha (t)\) is the learning rate at the iteration t. In this work, we used a linearly decreasing learning rate defined as: $$\begin{aligned} \alpha (t)=\alpha (0)-(\alpha (0)-\alpha (t_\text {max}))\left( \frac{t}{t_\text {max}}\right) , \end{aligned}$$ with \(\alpha (0)=1\cdot 10^{-1}\) and \(\alpha (t_\text {max})=1\cdot 10^{-3}\). The function \(\varDelta _k(\text {BMU}, t)\) denotes the lateral interaction between the \({\text {BMU}}\) and the unit k during the iteration t. Note that only the units in the set of neighbors of the \({\text {BMU}}\) are updated using Eq. (2).
In this work, we exploited the following interaction function: $$\begin{aligned} \varDelta _k(\text {BMU}, t) = \exp \left( - \frac{|| k-\text {BMU} ||^2}{2{\sigma (t)}^2} \right) , \end{aligned}$$ where \(\sigma (t) = \sigma (0)-(\sigma (0) \frac{t}{t_\text {max}})\). In our implementation, we also used an internal heuristic that calculates the initial \(\sigma \) as: $$\begin{aligned} \sigma (0)=1+ \left( \frac{\max \{M,N\}}{3} \right) , \end{aligned}$$ and sets \(\sigma (t_\text {max})=0\). Once all weights are updated, the SOM proceeds by analyzing the next sample. The learning algorithm iterates until a stopping criterion is met. In this work, we ran the algorithm for \(t_\text {max}=100\) iterations. Relying upon this peculiar learning algorithm, which does not require the samples to be labeled since the units spontaneously self-organize to represent prototypes of the input vectors, SOMs are well suited for unsupervised learning. As a matter of fact, at the end of the learning process, the samples will be associated with their BMUs in such a way that similar input vectors (with respect to the Euclidean distance) will be placed in nearby units. There exist several implementations of SOMs, e.g., the kohonen package for R [61]; the KNLL and SOMpp libraries for C++; the MiniSom and PyCluster libraries for Python [9]. These CPU-based implementations suffer from two main limitations: (1) the computational burden associated with the SOM learning algorithm and (2) the data structures employed, resulting in a very high memory footprint, which increases along with the size of the network. Various GPU-based implementations of the SOM have been presented in the literature; for instance, in [29], the authors assessed the performance of their GPU version, which parallelizes the distance computation, the reduction operation to identify the minimum value, and the weight adjustment, allowing them to speed up the computation up to \(\sim 32\times \) with respect to the CPU.
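Putting Eqs. (1)-(4) together, the online learning loop can be sketched as follows (a plain NumPy illustration under the parameter choices above, not the CUDA-SOM source; the toy two-cluster dataset is ours):

```python
import numpy as np

def train_som(X, M=5, N=5, t_max=100, a0=0.1, a_end=0.001, seed=0):
    """Online SOM training: linear learning-rate decay (Eq. 3) and a
    Gaussian lateral interaction on an M x N grid (Eq. 4)."""
    rng = np.random.default_rng(seed)
    K, D = M * N, X.shape[1]
    W = rng.random((K, D))                          # K random weight vectors
    grid = np.array([(r, c) for r in range(M) for c in range(N)], float)
    sigma0 = 1 + max(M, N) / 3                      # heuristic sigma(0)
    for t in range(t_max):
        alpha = a0 - (a0 - a_end) * t / t_max       # Eq. (3)
        sigma = sigma0 * (1 - t / t_max)            # linearly decaying sigma(t)
        for x in X:
            bmu = np.argmin(np.linalg.norm(x - W, axis=1))  # Eq. (1)
            d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)    # grid distances
            h = np.exp(-d2 / (2 * sigma ** 2))              # Eq. (4)
            W += alpha * h[:, None] * (x - W)               # Eq. (2)
    return W

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.05, (20, 2)),
               rng.normal(1.0, 0.05, (20, 2))])
W = train_som(X)
```

Here the Gaussian term makes updates for far-away units negligible, which approximates restricting Eq. (2) to the BMU's neighborhood; after training, the unit weights act as prototypes covering the two clusters.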
In [8], the authors proposed a parallelization of both the learning and clustering algorithms and applied it to MR image segmentation, achieving up to \(90\times \) speed-up with respect to a MATLAB implementation running on the CPU. Unfortunately, both previous works present custom GPU implementations of the SOM, tailored to specific problems. We thus propose here CUDA-SOM, a general-purpose implementation of the SOM accelerated on GPUs using CUDA, which is freely and publicly available. The proposed GPU-accelerated method In this section, we first outline the main CUDA characteristics; then, our GPU implementations of the Haralick feature extraction and SOMs are described in detail. Finally, we present the CHASM framework for medical image analysis. NVIDIA CUDA is a parallel computing platform and programming model based on many-core streaming multiprocessors (SMs), which adheres to the single instruction multiple data (SIMD) architecture [28]. In CUDA, the CPU (host) offloads the parallel calculations onto one or more GPUs (devices) by using kernels, which are functions launched from the host and replicated so that each GPU thread can run the same code at the same time. In CUDA, the threads are organized into three-dimensional structures called blocks, which, in turn, compose three-dimensional grids. The CUDA scheduler assigns blocks to the different SMs, which ultimately run them. In each SM, the threads are divided into warps, which are tight groups of 32 threads executed in lockstep. Considering the CUDA execution pattern, any possible divergent path taken by some threads in a warp should be removed to avoid the serialization of the execution, which would result in a decrease in the overall performance. CUDA has a complex memory hierarchy divided into multiple memory types, which have their own advantages and drawbacks. For instance, the shared memory is very small but has very low access latency, and it is generally used for intra-block communications.
The global memory is large and characterized by high access latency; however, it is visible by all threads and can be used for inter-block communications as well as for communications between the host and the devices. Considering these peculiarities, the data structures should be carefully optimized to reach the theoretical peak performance of the GPU [34]. HaraliCU HaraliCU is a GPU-powered tool that realizes an efficient computation of the GLCM and the extraction of an exhaustive set of the Haralick features [42]. The user is granted full control over the settings of HaraliCU, i.e., the distance offset \(\delta \), the orientation \(\theta \), and the window size \(\omega \times \omega \), while the neighborhood \({\mathcal {N}}\) is defined according to \(\delta \) and \(\theta \). In addition, the user can decide the padding conditions (e.g., zero padding or symmetric padding) and the number of quantized gray levels Q. HaraliCU exploits an effective and efficient encoding to mitigate the memory requirements related to the allocation of a GLCM having \(2^{16}\) rows and columns for each sliding window, and to the size of each GLCM, which is strictly related to the number of different gray levels inside the considered sliding window. Such an encoding removes all zero elements inside the GLCM and consists in storing each GLCM in a list-based data structure where each element is a pair \(\langle {{\mathsf {GrayPair}}}, {{\mathsf {freq}}} \rangle \), with \({{\mathsf {GrayPair}}}\) being a couple \(\langle i,j \rangle \) of gray levels and \({{\mathsf {freq}}}\) the corresponding frequency within the considered sliding window. Overall, the number of elements composing the GLCM is equal to the number of pairs \(\langle {{\mathsf {reference}}}, {{\mathsf {neighbor}}} \rangle \) that can be identified inside the sliding window, considering the distance \(\delta \) (see [42] for additional information).
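The effect of this encoding can be sketched as follows (a simplified Python illustration; the actual HaraliCU data structure is a C++/CUDA list, and the names used here are ours):

```python
import numpy as np

def encoded_glcm(window, dx=1, dy=0):
    """Store only the nonzero GLCM entries as <(i, j), freq> pairs, so the
    size depends on the pairs in the window, not on the 2^16 x 2^16 grid."""
    pairs = {}
    rows, cols = window.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                key = (int(window[r, c]), int(window[r2, c2]))
                pairs[key] = pairs.get(key, 0) + 1
    return pairs

# a tiny 16-bit window: a dense GLCM would need 2^16 x 2^16 counters,
# while the encoding holds at most one entry per <reference, neighbor> pair
w = np.array([[65535, 65535], [1200, 1200]], dtype=np.uint16)
enc = encoded_glcm(w)
```

With this representation, the memory cost is bounded by the number of pixel pairs in the \(\omega \times \omega \) window rather than by the square of the gray-level range, which is what makes the full 16-bit dynamics tractable.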
The parallelization on the GPU is realized by assigning each pixel of the input image to a thread, since there are no dependencies between the sliding windows. By so doing, each thread computes all features related to its pixel, which is the center of the corresponding window. To fully exploit the GPU acceleration, HaraliCU makes use of a bi-dimensional structure for both the number of blocks and the number of threads. In particular, the number of threads is set to 16 for both components of the bi-dimensional structure, taking into account the CUDA warp size (i.e., 32 threads) and the limited number of registers, while the number of blocks is set according to the number of pixels (\(\#\text {pixels}\)) of the input image. HaraliCU is an open-source software that can be freely downloaded from GitHub at the following address: https://github.com/andrea-tango/HaraliCU. Instructions for the compilation and execution of HaraliCU are provided on the same Web page. HaraliCU requires an NVIDIA GPU along with version 8 of CUDA (or greater) and the OpenCV library [23] version 3.4.1 (or greater). CUDA-SOM CUDA-SOM is a GPU-based implementation of the self-organizing map [24], where the learning algorithm of the network is parallelized by means of specific kernels that deal with the calculations required to compute the distance between the samples and the neurons' weights and to update the network when the \(\text {BMU}\) is identified. CUDA-SOM supports two GPU-accelerated learning modalities, named online and batch: In the online mode, the weights of the network are updated after each input vector is processed; In the batch mode, the network is updated after the whole training set is analyzed. Although the batch mode is characterized by a slower convergence compared to the online mode, it allows for a higher degree of parallelization, since all calculations of the BMUs for all input vectors can be parallelized across the CUDA cores.
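Such a one-thread-per-pixel launch geometry can be derived as in the following sketch (illustrative only; the helper name is ours, and HaraliCU's actual kernel configuration may differ in details):

```python
import math

def launch_geometry(width, height, block_dim=16):
    """One thread per pixel with 16 x 16 blocks: the grid must contain
    enough blocks to cover the whole image (edge blocks partly idle)."""
    grid_x = math.ceil(width / block_dim)
    grid_y = math.ceil(height / block_dim)
    return (grid_x, grid_y), (block_dim, block_dim)

grid, block = launch_geometry(512, 512)
```

For a 512x512 image this yields a 32x32 grid of 256-thread blocks; the ceiling division guarantees coverage when the image size is not a multiple of the block dimension, at the cost of a few idle edge threads.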
For this reason, in this work we exploited the batch mode. The CUDA kernels implemented in CUDA-SOM allow us to minimize the data transfer between host and device, limiting it to the results of the computation, thus reducing the impact of moving data across the PCI-e bus. Moreover, our implementation exploits the Thrust library of CUDA for array scan and reduction, so that the computational time can be further reduced. CUDA-SOM implements numerous variants of the Kohonen maps. Moreover, even though it integrates several heuristics for the automatic configuration, the user can select a wide array of optional parameters, notably: the number of neurons; the rows and columns of the network; the initial and final learning rates; the maximum number of iterations of the learning process; the radius of the updating function; the type of distance used for the BMU (e.g., Euclidean, Manhattan, Tanimoto); the type of neighbor function (Gaussian, bubble, Mexican hat); the type of lattice (square or hexagonal); the type of boundary conditions (e.g., toroidal); linear or exponential decay, for both the radius and the learning rate; whether to perform a normalization on the input vectors or not. Moreover, CUDA-SOM gives control over some CUDA-specific settings, e.g., the GPU to be used for the calculations (in the case of multi-GPU systems), or the number of threads per block. CUDA-SOM is open source and available for downloading on GitHub at the following address: https://github.com/mistrello96/CUDA-SOM. CHASM The proposed pipeline begins by extracting the features from the input image by using HaraliCU, which exploits the GPU acceleration. Then, the features of all pixels are transferred to the CPU for further processing. For each pixel, the features are averaged across all directions and linearized. The feature vectors of all pixels are then fed to CUDA-SOM, which performs the unsupervised learning on the GPU. The information about the BMUs for each pixel is returned to the CPU, where it is clustered and mapped onto the original image.
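The averaging and linearization step of the pipeline can be sketched as follows (hypothetical shapes: a toy image of 6 pixels, 4 orientations, and 16 features; not the pipeline's actual code):

```python
import numpy as np

# hypothetical per-pixel feature maps: (pixels, directions, features),
# e.g., 4 GLCM orientations (0, 45, 90, 135 degrees) and 16 Haralick features
n_pixels, n_dirs, n_feats = 6, 4, 16
maps = np.random.default_rng(1).random((n_pixels, n_dirs, n_feats))

# average each feature across all directions: one 16-dimensional,
# rotation-averaged input vector per pixel, ready to be fed to the SOM
vectors = maps.mean(axis=1)
```

Averaging across orientations makes the per-pixel descriptor approximately rotation-invariant, so the SOM clusters pixels by texture rather than by the direction of the GLCM offset.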
The overall functioning of CHASM is schematized in Fig. 1. Scheme of CHASM's functioning. The green blocks are executed on the GPU, while the blue dashed blocks are executed on the CPU In this work, we extract the following 16 Haralick features [18, 19], which are then considered for the unsupervised learning [42]: angular second moment, autocorrelation, cluster prominence, cluster shade, contrast, difference entropy, difference variance, dissimilarity, entropy, homogeneity, inverse difference moment, maximum probability, sum of average, sum of entropy, sum of squares, sum of variance. The mathematical definitions of the features are provided in Supplementary Materials. Since all the medical images analyzed by CHASM have size \(512\times 512\) pixels, each image yields \(512 \times 512\) input vectors of 16 features each, for a total of \(16 \times 512 \times 512 = 4,194,304\) feature values. Examples of results of the unsupervised learning process: a count plot, created according to the number of input vectors assigned to each unit; b corresponding U-matrix; c result of the clustering of the units according to weight distances. The information about which pixels are assigned to each cluster is then mapped onto the original figure Once the learning process is completed and the BMUs for each input vector are identified, a count plot can be created. In this particular graphical representation, each sample is plotted and assigned to its corresponding BMU. A darker color corresponds to a higher number of samples assigned to the same BMU (see Fig. 2a). Another representation of the outcome of a learning process is the so-called U-matrix, wherein the regions with high inter-neighbor distance are represented with a lighter color. The U-matrix is helpful to provide a visual insight into the boundaries between groups of similar neurons (see Fig. 2b). By using agglomerative clustering, these groups can be identified, and the samples belonging to each unit can be automatically assigned to the proper cluster.
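As a minimal illustration of this grouping step, the weight vectors of the units of a (toy) trained SOM can be clustered with Ward-linkage agglomerative clustering in scikit-learn (the toy weights and the choice of 2 clusters are ours):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# toy unit weights from a trained 3 x 3 SOM: two groups of prototypes,
# one near the origin and one near (1, 1)
weights = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [1.0, 1.0], [0.9, 1.0], [1.0, 0.9],
                    [0.1, 0.1], [0.9, 0.9], [0.0, 0.05]])

# Ward linkage on Euclidean distances; each SOM unit (and hence every
# pixel whose BMU is that unit) receives a cluster label
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(weights)
```

Mapping these unit labels back to the pixels assigned to each BMU is what produces the tumoral habitat maps shown later.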
The outcome of this process is shown in Fig. 2c. In this work, we exploited the agglomerative clustering implemented in the scikit-learn package, using Euclidean affinity and the Ward linkage criterion [60]. As described in the previous section, we validated HaraliCU by comparing the values of the features contrast, correlation, energy, and homogeneity with those extracted using the MATLAB built-in functions graycomatrix and graycoprops. Imaging dataset and tumoral habitats For the tests presented here, we considered a medical dataset composed of axial contrast-enhanced CT series of patients with high-grade serous ovarian cancer (matrix size: \(512 \times 512\) pixels, pixel spacing: \(\sim 0.65 \times 0.65\,\hbox {mm}^2\), slice thickness: 5.0 mm). All the CT images were encoded in the Digital Imaging and Communications in Medicine (DICOM) format with an intensity depth of 16 bits. Texture features have shown the ability to evaluate intra- and inter-tumor heterogeneity [39, 57]. Only pelvic lesions were selected for this work. Figure 3 shows two examples of the input CT images along with the corresponding tumoral habitats. In order to simplify the visual interpretation of the results, we used a uniform color coding for the spurious pixels included in disconnected clusters. Notably, CHASM can find patterns that represent both intra-tumoral (Fig. 3a) and inter-tumoral heterogeneity (Fig. 3b) across disconnected lesions. Examples of CT images with the pelvic lesions outlined by the green contour.
The corresponding tumoral habitats, resulting from the unsupervised SOM-based clustering, are overimposed onto the input CT image and displayed at the bottom right of each sub-figure: a tumor composed of a single connected component; b tumor composed of two connected components Computational results The computational performance of the pipeline presented in this work was assessed by independently considering the two steps parallelized on the GPU: Haralick feature extraction and unsupervised SOM-based image pixel clustering. HaraliCU The CUDA-based version of the Haralick feature extraction, employed in our pipeline, was tested against a CPU version coded in C++, which proved extremely efficient with respect to the MATLAB version, based on the graycomatrix and graycoprops functions, used to extract Haralick features on brain metastasis MR images [42]. As a matter of fact, by varying the grayscale range from \(2^4\) to \(2^9\) levels, we achieved speed-up values of around \(50\times \) and \(200\times \), respectively. The GPU version of HaraliCU was executed on an NVIDIA GeForce GTX Titan X (3072 cores, clock 1.075 GHz, 12 GB of RAM), CUDA toolkit version 8 (driver 387.26), running on a workstation with Ubuntu 16.04 LTS, equipped with a CPU Intel Core i7-2600 (clock 3.4 GHz) and 8 GB of RAM. The CPU version was run on the same workstation, relying upon the computational power provided by the CPU Intel Core i7-2600. The CPU version was compiled by using the GNU C++ compiler (version 5.4.0) with the optimization flag -O3, while the GPU version was compiled with the CUDA Toolkit 8 by exploiting the optimization flag -O3 for both the CPU and GPU code. In order to collect statistically sound results and take into consideration the variability and heterogeneity typically characterizing medical images, we randomly selected 30 images from 3 different patients (10 per patient) affected by brain metastases and 30 images from 3 different patients affected by ovarian cancer.
We tested both the CPU and GPU versions by considering various window sizes, that is, \(\omega \in \{3, 7, 11, 15, 19, 23, 27, 31\}\), as well as two different intensity levels (i.e., \(2^8\) and \(2^{16}\)). For each combination of \(\omega \) and intensity levels, we also enabled and disabled the GLCM symmetry to evaluate how the symmetry affects the running time. The speed-up achieved by HaraliCU considering only \(2^8\) intensity levels increases almost linearly up to \(\omega = 19\) (data not shown, see [42] for details); by disabling the GLCM symmetry and using \(\omega = 31\), we obtained the highest speed-ups of \(12.74\times \) and \(12.71\times \) on brain metastasis (\(256\times 256\) pixels) and ovarian cancer images (\(512\times 512\) pixels), respectively. When the full dynamics of the grayscale levels (i.e., \(2^{16}\)) is considered, HaraliCU outperforms the sequential counterpart, achieving speed-ups up to \(15.80\times \) with \(\omega =31\) and \(19.50\times \) with \(\omega =23\), on brain metastasis and ovarian cancer images, respectively. Taking into account ovarian cancer images, when \(\omega \) is greater than 23 pixels, the speed-up decreases for two reasons. First, since a thread is launched for each pixel, it must consider more neighbor pixels, which might have very different gray-level intensities. This corresponds to an increase in the workload that each thread must perform; moreover, considering that the GPU cores have a lower clock frequency than the CPU cores, the speed-up is clearly reduced. Second, the GPU resources are saturated, as the GLCM size associated with each thread may increase due to the high full-dynamic range. In this specific situation, the total GLCM size might overwhelm the capacity of the global memory, and some threads might handle different pixels, thus computing the corresponding Haralick features sequentially.
CUDA-SOM The performance of CUDA-SOM was assessed by comparing it to a C++ version, running on a single core of the CPU, specifically developed for this work, since the available R implementation of the SOM is limited to a network size of \(150\times 150\) neurons. We first ran a batch of tests aimed at analyzing the impact of the number of samples and the size of the SOM on the computational time. We employed a machine equipped with 16 GB of RAM, a CPU Intel Core i7 4790k (clock 4.4 GHz), and an NVIDIA GeForce 1050ti (768 cores, clock 1.392 GHz, 4 GB of RAM). As reported in Table 1, the running time of the C++ version is lower in the case of small-size SOMs (i.e., \(20\times 20\) neurons), while the GPU allows us to reduce the computation time, up to \(5.75\times \), when a SOM having size \(300\times 300\) neurons is trained with 120,000 samples. Additional tests (data not shown) confirmed the observed trend, as the speed-up further increases to \(\sim 7\times \) with a SOM having size \(400\times 400\) neurons. Table 1 Running time required by the C++ and GPU versions of SOM, by varying the number of samples used to train the network and the number of neurons As a second batch of tests, we compared the performance of different NVIDIA GPUs, i.e., Titan Z (\(2 \times 2880\) cores, clock 0.876 GHz, 6 GB of RAM), Titan X (GM200, 3072 cores, clock 1.075 GHz, 12 GB of RAM), GeForce 1050ti (768 cores, clock 1.392 GHz, 4 GB of RAM), and GeForce 1080ti (3584 cores, clock 1.582 GHz, 11 GB of RAM), when executing CUDA-SOM with different SOM sizes, considering 60,000 samples and 7 features. Table 2 reports the speed-up values achieved by each GPU with respect to the C++ implementation. As expected, in the case of small-size SOMs, the CPU was more convenient than the GPUs; moreover, the GeForce 1080ti obtained the best results, owing to its highest clock frequency, achieving a \(10\times \) speed-up in the case of the SOM with \(400\times 400\) neurons.
Table 2 Speed-up achieved by CUDA-SOM using different GPUs compared to the C++ implementation Considering the analysis performed on medical images, in the case of ovarian cancer CT, the running time (including file loading) was 79 and 1020 s in the case of 100 and 1000 iterations, respectively. To understand the advantage of CUDA-SOM, consider that the running time of the same SOM algorithm, implemented with C++ and OpenMP, is 2956 s to complete 100 iterations. This reduction in the running time corresponds to a \(37 \times \) speed-up. CUDA-SOM was executed on an NVIDIA Tesla P100 (3584 cores, clock 1.329 GHz, 16 GB of RAM), CUDA toolkit version 8 (driver 440.95.01), running on a computer node of the Cambridge Service for Data Driven Discovery (CSD3) with Scientific Linux 7. Each node is equipped with a single CPU Intel Xeon E5-2650 v4 (clock 2.2 GHz), 94 GB of RAM, and up to 4 NVIDIA Tesla P100 GPUs. The CPU version was run on the same node, relying upon the computational power provided by the CPU Intel Xeon E5-2650 v4. The CPU version was compiled by using the GNU C++ compiler (version 5.4.0) with the optimization flag -O3, while the GPU version was compiled with the CUDA Toolkit 8.0 by exploiting the optimization flag -O3 for both the CPU and GPU code. Image texture extraction and analysis is playing a key role in quantitative biomedicine, leading to valuable applications in radiomics [13, 25, 26] and radiogenomics [38, 44] research, by also combining heterogeneous sources of information. Therefore, advanced computerized medical image analysis methods, specifically designed to deal with the massive amount of extracted features, as well as to discover intrinsic patterns in the analyzed data, could be beneficial for the definition of imaging biomarkers, which support clinical decision making towards precision medicine [40]. However, these large-scale studies need efficient techniques to drastically reduce the prohibitive running time that is typically required.
In this work, we presented a novel method, named CHASM, which combines two CUDA-based computationally efficient approaches capable of effectively exploiting the power of modern GPUs: (i) HaraliCU, which is used for Haralick feature extraction and allows for accelerating the GLCM computation while keeping the full dynamic range of medical images; (ii) CUDA-SOM, which is exploited for unsupervised image pixel clustering and reduces the running time by parallelizing the learning process of the network. Our pipeline was tested on a dataset composed of ovarian cancer CT images. By exploiting the GPU during the two most computationally demanding phases of the pipeline, we achieved speed-ups up to \(19.50\times \) with HaraliCU and up to \(37\times \) with CUDA-SOM, compared to the CPU version implemented in C++, on our dataset. As a future development, we plan to improve HaraliCU by exploiting the vectorization of the input image matrices for better GPU thread block management. In order to enhance the scalability of the proposed approach, the dynamic parallelism supported by CUDA could be exploited to further parallelize the computations when the workload increases (e.g., large window sizes). Moreover, even though the spatial and temporal locality are already exploited during the GLCM construction process, based on the sliding window, the usage of the GPU memory hierarchy might be optimized [17]. As regards CUDA-SOM, the main limitation of the tool is that it currently loads the whole dataset before launching the learning process. Because of that, CUDA-SOM might crash when the dataset exceeds the available GPU memory. We are therefore improving the implementation to read and stream the input vectors during the learning phase, in order to work with datasets of arbitrary size. To further accelerate the learning process, we will also extend CUDA-SOM to leverage low-latency memories (i.e., shared memory and constant memory).
Finally, all the computational steps, depicted by the blue dashed blocks in Fig. 1, are currently executed on the CPU and represent a bottleneck of CHASM. We plan to develop them in CUDA to additionally accelerate the whole pipeline. Considering the biological validation of the texture-derived tumoral habitats [7], the combination of the imaging phenotype and genotype might unravel intra-/inter-tumor heterogeneity, as well as provide valuable insights into treatment response [21, 45], by effectively exploiting advanced computational techniques in oncology [3]. Aghajari E, Chandrashekhar GD (2017) Self-organizing map based extended fuzzy c-means (SEEFC) algorithm for image segmentation. Appl Soft Comput 54:347–363. https://doi.org/10.1016/j.asoc.2017.01.003 Al-Ayyoub M, Abu-Dalo AM, Jararweh Y, Jarrah M, Al Sa'd M (2015) A GPU-based implementations of the fuzzy c-means algorithms for medical image segmentation. J Supercomput 71(8):3149–3162. https://doi.org/10.1007/s11227-015-1431-y Ali HR, Jackson HW, Zanotelli VR, Danenberg E, Fischer JR, Bardwell H et al (2020) Imaging mass cytometry and multiplatform genomics define the phenogenomic landscape of breast cancer. Nat Cancer 1(2):163–175. https://doi.org/10.1038/s43018-020-0026-6 Apte AP, Iyer A, Crispin-Ortuzar M, Pandya R, van Dijk LV, Spezi E et al (2018) Extension of CERR for computational radiomics: a comprehensive MATLAB platform for reproducible radiomics research. Med Phys 45(8):3713–3720. https://doi.org/10.1002/mp.13046 Bascoy PG, Quesada-Barriuso P, Heras DB, Argüello F, Demir B, Bruzzone L (2019) Extended attribute profiles on GPU applied to hyperspectral image classification. J Supercomput 75(3):1565–1579. https://doi.org/10.1007/s11227-018-2690-1 Brynolfsson P, Nilsson D, Torheim T, Asklund T, Karlsson CT, Trygg J, Nyholm T, Garpebring A (2017) Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters. Sci Rep 7(1):4041. 
https://doi.org/10.1038/s41598-017-04151-4 Cherezov D, Goldgof D, Hall L, Gillies R, Schabath M, Müller H, Depeursinge A (2019) Revealing tumor habitats from texture heterogeneity analysis for classification of lung cancer malignancy and aggressiveness. Sci Rep 9(1):1–9. https://doi.org/10.1038/s41598-019-38831-0 De A, Zhang Y, Guo C (2016) A parallel adaptive segmentation method based on SOM and GPU with application to MRI image processing. Neurocomputing 198:180–189. https://doi.org/10.1016/j.neucom.2015.10.129 De Hoon MJ, Imoto S, Nolan J, Miyano S (2004) Open source clustering software. Bioinformatics 20(9):1453–1454. https://doi.org/10.1093/bioinformatics/bth078 Deasy JO, Blanco AI, Clark VH (2003) CERR: a computational environment for radiotherapy research. Med Phys 30(5):979–985. https://doi.org/10.1118/1.1568978 Dercle L, Ammari S, Bateson M, Durand PB, Haspinger E, Massard C, Jaudet C, Varga A, Deutsch E, Soria JC et al (2017) Limits of radiomic-based entropy as a surrogate of tumor heterogeneity: ROI-area, acquisition protocol and tissue site exert substantial influence. Sci Rep 7(1):7952. https://doi.org/10.1038/s41598-017-08310-5 Eklund A, Dufort P, Forsberg D, LaConte SM (2013) Medical image processing on the GPU-past, present and future. Med Image Anal 17(8):1073–1094. https://doi.org/10.1016/j.media.2013.05.008 Gillies RJ, Kinahan PE, Hricak H (2015) Radiomics: images are more than pictures, they are data. Radiology 278(2):563–577. https://doi.org/10.1148/radiol.2015151169 Gipp M, Marcus G, Harder N, Suratanee A, Rohr K, König R, Männer R (2012) Haralick's texture features computation accelerated by GPUs for biological applications. Modeling simulation and optimization of complex processes. Springer, Berlin, pp 127–137. https://doi.org/10.1007/978-3-642-25707-011 Gómez W, Pereira W, Infantosi AFC (2012) Analysis of co-occurrence texture statistics as a function of gray-level quantization for classifying breast ultrasound. 
IEEE Trans Med Imag 31(10):1889–1899. https://doi.org/10.1109/TMI.2012.2206398 Gulo CA, Sementille AC, Tavares JMR (2019) Techniques of medical image processing and analysis accelerated by high-performance computing: a systematic literature review. J Real Time Image Process. https://doi.org/10.1007/s11554-017-0734-z Gupta S, Xiang P, Zhou H (2013) Analyzing locality of memory references in GPU architectures. In: Proceedings of ACM SIGPLAN Workshop on Memory Systems Performance and Correctness. ACM, p 12. https://doi.org/10.1145/2492408.2492423 Haralick RM (1979) Statistical and structural approaches to texture. Proc IEEE 67(5):786–804. https://doi.org/10.1109/PROC.1979.11328 Haralick RM, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Trans Syst Man Cybern SMC–3(6):610–621. https://doi.org/10.1109/TSMC.1973.4309314 Jen CC, Yu SS (2015) Automatic detection of abnormal mammograms in mammographic images. Expert Syst Appl 42(6):3048–3055. https://doi.org/10.1016/j.eswa.2014.11.061 Jiménez-Sánchez A, Cybulska P, Mager KL, Koplev S, Cast O, Couturier DL et al (2020) Unraveling tumor-immune heterogeneity in advanced ovarian cancer uncovers immunogenic effect of chemotherapy. Nat Genet. https://doi.org/10.1038/s41588-020-0630-5 Junior JRF, Oliveira MC, de Azevedo-Marques PM (2017) Integrating 3D image descriptors of margin sharpness and texture on a GPU-optimized similar pulmonary nodule retrieval engine. J Supercomput 73(8):3451–3467. https://doi.org/10.1007/s11227-016-1818-4 Kaehler A, Bradski G (2016) Learning OpenCV 3: computer vision in C++ with the OpenCV library, vol 1. O'Reilly Media, Inc, Sebastopol Kohonen T (1990) The self-organizing map. Proc IEEE 78(9):1464–1480. https://doi.org/10.1109/5.58325 Lambin P, Leijenaar RT, Deist TM, Peerlings J, de Jong EE, van Timmeren J et al (2017) Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol 14(12):749.
https://doi.org/10.1038/nrclinonc.2017.141 Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, van Stiphout RG, Granton P, Zegers CM, Gillies R, Boellard R, Dekker A et al (2012) Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer 48(4):441–446. https://doi.org/10.1016/j.ejca.2011.11.036 Logeswari T, Karnan M (2010) Hybrid self organizing map for improved implementation of brain MRI segmentation. In: Proceedings of International Conference on Signal Acquisition and Processing. IEEE, pp 248–252. https://doi.org/10.1109/ICSAP.2010.56 Luebke D (2008) CUDA: scalable parallel programming for high-performance scientific computing. In: Proceedings 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI). IEEE, pp 836–838. https://doi.org/10.1109/ISBI.2008.4541126 McConnell S, Sturgeon R, Henry G, Mayne A, Hurley R (2012) Scalability of self-organizing maps on a GPU cluster using OpenCL and CUDA. J Phys Conf Ser 341:012018. https://doi.org/10.1088/1742-6596/341/1/012018 Militello C, Rundo L, Minafra L, Cammarata FP, Calvaruso M, Conti V, Russo G (2020) MF2C3: Multi-feature fuzzy clustering to enhance cell colony detection in automated clonogenic assay evaluation. Symmetry 12(5):773. https://doi.org/10.3390/sym12050773 Militello C, Vitabile S, Rundo L, Russo G, Midiri M, Gilardi MC (2015) A fully automatic 2D segmentation method for uterine fibroid in MRgFUS treatment evaluation. Comput Biol Med 62:277–292. https://doi.org/10.1016/j.compbiomed.2015.04.030 Nanni L, Ghidoni S, Brahnam S (2017) Handcrafted versus non-handcrafted features for computer vision classification. Pattern Recogn 71:158–172. 
https://doi.org/10.1016/j.patcog.2017.05.025 Nioche C, Orlhac F, Boughdad S, Reuzé S, Goya-Outi J, Robert C, Pellot-Barakat C, Soussan M, Frouin F, Buvat I (2018) LIFEx: a freeware for radiomic feature calculation in multimodality imaging to accelerate advances in the characterization of tumor heterogeneity. Cancer Res 78(16):4786–4789. https://doi.org/10.1158/0008-5472.CAN-18-0125 Nobile MS, Cazzaniga P, Tangherloni A, Besozzi D (2016) Graphics processing units in bioinformatics, computational biology and systems biology. Brief Bioinform 18(5):870–885. https://doi.org/10.1093/bib/bbw058 Ordóñez Á, Argüello F, Heras DB, Demir B (2020) GPU-accelerated registration of hyperspectral images using KAZE features. J Supercomput. https://doi.org/10.1007/s11227-020-03214-0 Ortiz A, Górriz J, Ramírez J, Salas-Gonzalez D, Llamas-Elvira JM (2013) Two fully-unsupervised methods for MR brain image segmentation using SOM-based strategies. Appl Soft Comput 13(5):2668–2682. https://doi.org/10.1016/j.asoc.2012.11.020 Park S, Kim B, Lee J, Goo JM, Shin YG (2011) GGO nodule volume-preserving nonrigid lung registration using GLCM texture analysis. IEEE Trans Biomed Eng 58(10):2885–2894. https://doi.org/10.1109/TBME.2011.2162330 Pinker K, Shitano F, Sala E, Do RK, Young RJ, Wibmer AG, Hricak H, Sutton EJ, Morris EA (2018) Background, current role, and potential applications of radiogenomics. J Magn Reson Imag 47(3):604–620. https://doi.org/10.1002/jmri.25870 Rundo L, Beer L, Ursprung S, Martin-Gonzalez P, Markowetz F, Brenton JD, Crispin-Ortuzar M, Sala E, Woitek R (2020) Tissue-specific and interpretable sub-segmentation of whole tumour burden on CT images by unsupervised fuzzy clustering. Comput Biol Med. https://doi.org/10.1016/j.compbiomed.2020.103751 Rundo L, Pirrone R, Vitabile S, Sala E, Gambino O (2020) Recent advances of HCI in decision-making tasks for optimized clinical workflows and precision medicine. J Biomed Inform 108:103479. 
https://doi.org/10.1016/j.jbi.2020.103479 Rundo L, Tangherloni A, Cazzaniga P, Nobile MS, Russo G, Gilardi MC et al (2019) A novel framework for MR image segmentation and quantification by using MedGA. Comput Methods Progr Biomed 176:159–172. https://doi.org/10.1016/j.cmpb.2019.04.016 Rundo L, Tangherloni A, Galimberti S, Cazzaniga P, Woitek R, Sala E, et al. (2019) HaraliCU: GPU-powered Haralick feature extraction on medical images exploiting the full dynamics of gray-scale levels. In: Malyshkin V (ed) Proceedings of International Conference on Parallel Computing Technologies (PaCT), LNCS, vol 11657. Springer International Publishing, Cham, Switzerland, pp 304–318. https://doi.org/10.1007/978-3-030-25636-4_24 Rundo L, Tangherloni A, Nobile MS, Militello C, Besozzi D, Mauri G, Cazzaniga P (2019) MedGA: a novel evolutionary method for image enhancement in medical imaging systems. Expert Syst Appl 119:387–399. https://doi.org/10.1016/j.eswa.2018.11.013 Rutman AM, Kuo MD (2009) Radiogenomics: creating a link between molecular diagnostics and diagnostic imaging. Eur J Radiol 70(2):232–241. https://doi.org/10.1016/j.ejrad.2009.01.050 Sala E, Mema E, Himoto Y, Veeraraghavan H, Brenton JD, Snyder A, Weigelt B, Vargas HA (2017) Unravelling tumour heterogeneity using next-generation imaging: radiomics, radiogenomics, and habitat imaging. Clin Radiol 72(1):3–10. https://doi.org/10.1016/j.crad.2016.09.013 Schellmann M, Gorlatch S, Meiländer D, Kösters T, Schäfers K, Wübbeling F, Burger M (2011) Parallel medical image reconstruction: from graphics processing units (GPU) to grids. J Supercomput 57(2):151–160. https://doi.org/10.1007/s11227-010-0397-z Shen D, Wu G, Suk HI (2017) Deep learning in medical image analysis. Annu Rev Biomed Eng 19:221–248. https://doi.org/10.1146/annurev-bioeng-071516-044442 Soh LK, Tsatsoulis C (1999) Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Trans Geosci Remote Sens 37(2):780–795.
https://doi.org/10.1109/36.752194 Sompong C, Wongthanavasu S (2017) An efficient brain tumor segmentation based on cellular automata and improved tumor-cut algorithm. Expert Syst Appl 72:231–244. https://doi.org/10.1016/j.eswa.2016.10.064 Stoyanova R, Takhar M, Tschudi Y, Ford JC, Solórzano G, Erho N, Balagurunathan Y, Punnen S, Davicioni E, Gillies RJ et al (2016) Prostate cancer radiomics and the promise of radiogenomics. Transl Cancer Res 5(4):432. https://doi.org/10.21037/tcr.2016.06.20 Szczypiński PM, Strzelecki M, Materka A, Klepaczko A (2009) MaZda-a software package for image texture analysis. Comput Methods Progr Biomed 94(1):66–76. https://doi.org/10.1016/j.cmpb.2008.08.005 Tangherloni A, Spolaor S, Cazzaniga P, Besozzi D, Rundo L, Mauri G, Nobile MS (2019) Biochemical parameter estimation vs. benchmark functions: a comparative study of optimization performance and representation design. Appl Soft Comput 81:105494. https://doi.org/10.1016/j.asoc.2019.105494 Tangherloni A, Spolaor S, Rundo L, Nobile MS, Cazzaniga P, Mauri G, Liò P, Merelli I, Besozzi D (2019) GenHap: a novel computational method based on genetic algorithms for haplotype assembly. BMC Bioinform 20:172. https://doi.org/10.1186/s12859-019-2691-y Torheim T, Malinen E, Kvaal K, Lyng H, Indahl UG, Andersen EK, Futsæther CM (2014) Classification of dynamic contrast enhanced MR images of cervical cancers using texture analysis and support vector machines. IEEE Trans Med Imag 33(8):1648–1656. https://doi.org/10.1109/TMI.2014.2321024 Trivedi MM, Harlow CA, Conners RW, Goh S (1984) Object detection based on gray level cooccurrence. Comput Vis Graph Image Process 28(2):199–219. https://doi.org/10.1016/S0734-189X(84)80022-5 Tsai HY, Zhang H, Hung CL, Min G (2017) GPU-accelerated features extraction from magnetic resonance images. IEEE Access 5:22634–22646. 
https://doi.org/10.1109/ACCESS.2017.2756624 Vargas HA, Veeraraghavan H, Micco M, Nougaret S, Lakhman Y, Meier AA, Sosa R, Soslow RA, Levine DA, Weigelt B et al (2017) A novel representation of inter-site tumour heterogeneity from pre-treatment computed tomography textures classifies ovarian cancers by clinical outcome. Eur Radiol 27(9):3991–4001. https://doi.org/10.1007/s00330-017-4779-y van Griethuysen JJ, Fedorov A, Parmar C, Hosny A, Aucoin N, Narayan V, Beets-Tan RG, Fillion-Robin JC, Pieper S, Aerts HJ (2017) Computational radiomics system to decode the radiographic phenotype. Cancer Res 77(21):e104–e107. https://doi.org/10.1158/0008-5472.CAN-17-0339 Vishnevskiy V, Walheim J, Kozerke S (2020) Deep variational network for rapid 4D flow MRI reconstruction. Nat Mach Intell 2(4):228–235. https://doi.org/10.1038/s42256-020-0165-6 Ward JH Jr (1963) Hierarchical grouping to optimize an objective function. J Am Stat Assoc 58(301):236–244. https://doi.org/10.1080/01621459.1963.10500845 Wehrens R, Buydens LM et al (2007) Self- and super-organizing maps in R: the Kohonen package. J Stat Softw 21(5):1–19. https://doi.org/10.18637/jss.v021.i05 Yankeelov TE, Mankoff DA, Schwartz LH, Lieberman FS, Buatti JM, Mountz JM, Erickson BJ, Fennessy FM, Huang W, Kalpathy-Cramer J et al (2016) Quantitative imaging in cancer clinical trials. Clin Cancer Res 22(2):284–290. https://doi.org/10.1158/1078-0432.CCR-14-3336 Yip SS, Aerts HJ (2016) Applications and limitations of radiomics. Phys Med Biol 61(13):R150. https://doi.org/10.1088/0031-9155/61/13/R150 Zwanenburg A, Vallières M, Abdalah MA, Aerts HJ, Andrearczyk V, Apte A, Ashrafinia S, Bakas S, Beukinga RJ, Boellaard R et al (2020) The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping. Radiology 295(2):328–338. 
https://doi.org/10.1148/radiol.2020191145 This work was partially supported by The Mark Foundation for Cancer Research and Cancer Research UK Cambridge Centre [C9685/A25177]. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital Grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). Open Access funding provided by Università degli Studi di Milano - Bicocca. L. Rundo and A. Tangherloni have contributed equally to this work. Department of Radiology, University of Cambridge, Cambridge, UK Leonardo Rundo, Ramona Woitek & Evis Sala Cancer Research UK Cambridge Centre, Cambridge, UK Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy Leonardo Rundo, Andrea Tangherloni, Matteo Mistri, Simone Galimberti, Giancarlo Mauri & Marco S. Nobile Department of Haematology, University of Cambridge, Cambridge, UK Andrea Tangherloni Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus, Hinxton, UK Wellcome Trust – Medical Research Council Cambridge, Stem Cell Institute, Cambridge, UK Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy Paolo Cazzaniga SYSBIO/ISBE.IT Centre for Systems Biology, Milan, Italy Paolo Cazzaniga, Giancarlo Mauri & Marco S. Nobile Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria Ramona Woitek Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands Marco S. 
Nobile

Correspondence to Giancarlo Mauri.

Rundo, L., Tangherloni, A., Cazzaniga, P. et al. A CUDA-powered method for the feature extraction and unsupervised analysis of medical images. J Supercomput 77, 8514–8531 (2021). https://doi.org/10.1007/s11227-020-03565-8

Keywords: Haralick features · Self-organizing maps · Radiomics · Unsupervised learning
Time-local unraveling of non-Markovian stochastic Schrödinger equations

Antoine Tilloy, Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Straße 1, 85748 Garching, Germany

Non-Markovian stochastic Schrödinger equations (NMSSE) are important tools in quantum mechanics, from the theory of open systems to foundations. Yet, in general, they are but formal objects: their solution can be computed numerically only in some specific cases or perturbatively. This article is focused on the NMSSE themselves rather than on the open-system evolution they unravel and aims at making them less abstract. Namely, we propose to write the stochastic realizations of linear NMSSE as averages over the solutions of an auxiliary equation with an additional random field. Our method yields a non-perturbative numerical simulation algorithm for generic linear NMSSE that can be made arbitrarily accurate for reasonably short times. For isotropic complex noises, the method extends from linear to non-linear NMSSE and allows one to sample the solutions of norm-preserving NMSSE directly.

@article{Tilloy2017timelocalunraveling,
  doi = {10.22331/q-2017-09-19-29},
  url = {https://doi.org/10.22331/q-2017-09-19-29},
  title = {Time-local unraveling of non-{M}arkovian stochastic {S}chr{\"{o}}dinger equations},
  author = {Tilloy, Antoine},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {1},
  pages = {29},
  month = sep,
  year = {2017}
}
Clinic and patient variation in intermediate clinical outcomes for type 2 diabetes: a multilevel analysis

Yvonne Mei Fong Lim (ORCID: orcid.org/0000-0002-3404-0153), Swee Hung Ang, Nazrila Hairizan Nasir, Fatanah Ismail, Siti Aminah Ismail & Sheamini Sivasampu

Variation at different levels of diabetes care has not yet been quantified for low- and middle-income countries. Understanding this variation and its magnitude is important to guide policy makers in designing effective interventions. This study aims to quantify the variation in the control of glycated haemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) for type 2 diabetes (T2D) patients at the clinic and patient levels and to determine the patient and clinic factors associated with control of these outcomes in T2D.

This is a cross-sectional study within the baseline data from the impact evaluation of the Enhanced Primary Health Care (EnPHC) intervention on 40 public clinics in Malaysia. Patients aged 30 and above, diagnosed with T2D, who had a clinic visit for T2D between 01 Nov 2016 and 30 April 2017 and had at least one HbA1c, SBP and LDL-C measurement within 1 year from the date of visit were included for analysis. Multilevel linear regression adjusting for patient and clinic characteristics was used to quantify variation at the clinic and patient levels for each outcome.

Variation in intermediate clinical outcomes in T2D lies predominantly (93% and above) at the patient level. The strongest predictors of poor disease control in T2D were proxy measures of disease severity, including duration of diabetes, presence of microvascular complications, being on insulin therapy and number of antihypertensives. Among the three outcomes, HbA1c and LDL-C results provide the greatest opportunity for improvement. Clinic variation in HbA1c, SBP and LDL-C accounts for a small percentage of total variation.
Findings from this study suggest that standardised interventions need to be applied across all clinics, with a focus on customising therapy based on individual patient characteristics.

There are an estimated 424.9 million people with diabetes globally, and about 80% live in low- and middle-income countries (LMIC) [1]. Over the past decade, the prevalence of diabetes has increased most rapidly in LMICs. The epidemiological transition in LMICs is distinct from that in high-income countries because communicable diseases coexist with the rising epidemic of non-communicable diseases. Malaysia has a high prevalence of diabetes, with 17.5% of the population affected compared to the global estimate of 8.8% [1, 2]. Various strategies to improve diabetes care, such as medication adherence clinics, diabetes education, revision of the clinical practice guidelines and diabetes audits [3,4,5,6], have been implemented in Malaysia, but control of intermediate clinical outcomes, including glycated haemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C), has been suboptimal. The National Diabetes Registry, which captured data on diabetic patients from 644 public health clinics in all states of Malaysia, reported a mean HbA1c of 8.1% in 2012 [7]. Only 40.9% achieved the recommended blood pressure target of ≤130/80 mmHg and 37.8% achieved LDL-C levels of ≤2.6 mmol/L in the same year [7].

Variation in diabetes care is mainly described based on the concept that access to and quality of care are highly dependent on where patients live and seek care. Understanding how health care facilities vary in diabetes process and outcome measures not only allows for performance benchmarking but also provides potential opportunities for quality improvement and cost reduction. Although not all geographical variation is inappropriate, the aim of diabetes care should be to minimise variation and maximise evidence-based practice [8].
Studies have quantified variation in diabetes outcomes at the patient, physician, clinic and health system levels, and a majority of these were based on data from the United States of America and high-income European nations [8,9,10]. Diabetes outcomes from these countries may not necessarily be applicable to patients in countries with low- and middle-income economies because of differences in the maturity of health systems and infrastructure. To our knowledge, variation in diabetes care has not yet been quantified for low- and middle-income settings like Malaysia. Previous studies have investigated the association of facility and patient factors with intermediate clinical outcomes in diabetes [11,12,13], but few have examined how these outcomes differ within and between facilities. This concept considers the phenomenon of clustering of health outcomes within geographical locations [14]. Understanding the variation at different levels of care and its magnitude could provide useful information to guide policy makers in designing effective interventions. From a practical perspective, where diabetes outcomes are highly clustered within clinics, tailored quality improvement measures can be applied only to clinics which are poor performers. Conversely, in settings with low clustering among clinics, a single standardised intervention across all clinics would be more useful in improving overall diabetes outcomes. Diabetes quality indicators focus primarily on reducing diabetes complications through control of intermediate clinical measures of diabetes, which are primarily serum glucose, blood pressure and LDL-C [8]. Therefore, the objective of this study was to quantify the variation in the control of HbA1c, SBP and LDL-C for type 2 diabetes (T2D) patients at the clinic and patient levels. We also aimed to determine which patient and clinic factors are associated with control of these intermediate clinical outcomes in T2D.
This cross-sectional analysis was based on baseline data from a larger study entitled "Evaluation of the Enhanced Primary Healthcare (EnPHC) interventions in public health clinics" (EnPHC-Eva). EnPHC-Eva was a quasi-experimental controlled study which aimed to determine the effectiveness of a multifaceted intervention package called EnPHC on the process of care and intermediate clinical outcomes of patients with T2D and hypertension in 40 public health clinics in Malaysia. At the time of writing, EnPHC-Eva has just completed post-intervention data collection and analysis. A study protocol for the EnPHC-Eva study is currently under journal review. Ethical approval was granted by the Medical Research Ethics Committee, Ministry of Health Malaysia (NMRR-17-267-34768).

Malaysia has a dual-sector healthcare system consisting of a public and a private sector. The private sector is mainly funded by out-of-pocket payments and private insurance [15]. Health services in the public sector are heavily subsidised by general taxation, and patients pay a small fee of between US$0.30 and US$4.50 for outpatient services, depending on citizenship status [15]. Hence, the public health sector manages the bulk of chronic conditions in the country [16]. For diabetes, patients mainly sought treatment at public clinics (59.3%), followed by public hospitals (20.0%), private clinics (15.1%) and private hospitals (3.6%), while a remaining small percentage purchased medications from pharmacies or sought traditional and alternative medicine [2]. The EnPHC interventions focused on public clinics because diabetes was largely managed in this healthcare setting. The clinics involved in this study were located in two states in Malaysia: Selangor and Johor. These two states were selected based on a balance between regional representativeness, budget and implementation capacity. Each public health clinic was responsible for the care of the population residing within its assigned catchment area.
Patients with diabetes were mainly managed by medical officers, who are licensed medical doctors with basic medical training. Some practised under the guidance of a family medicine specialist (FMS), who has formal postgraduate training in primary care practice, depending on whether there was a full-time or visiting FMS at their respective clinics. Specialised diabetes education and/or a medication adherence clinic was available in some clinics. A diabetes educator provides individual or group-based education for diabetes patients on topics that include healthy diet, foot care, exercise, self-monitoring, medication usage and goal setting; this role is usually performed by a nurse who has undergone formal training modules in diabetes care. The diabetes medication adherence clinic is handled by a pharmacist and focuses on improving medication adherence and glycaemic control through counselling and education.

Sample size and sampling

The EnPHC-Eva study evaluated its outcomes for T2D using two approaches, i.e. interrupted time series (ITS) and difference-in-differences (DiD). The sample size was calculated separately for each approach. In general, the minimum number of data points required for interrupted time-series analysis is 12 time points (six before and six after the intervention) with a minimum of 50 observations per time point [17]. In the EnPHC-Eva study, we estimated a minimum of 400 cases (10 cases per clinic) per time point for eight consecutive months before and after the intervention for practical and feasibility reasons. For the second approach, the sample size for DiD was estimated based on a 28% effect size, 80% power, an alpha value of 0.05 and a cluster effect of 0.091. In total, the minimum baseline sample size required was 5200 T2D cases: 2000 for DiD and 3200 for ITS. We further adjusted the minimum required number to account for 40% potentially unavailable records.
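The paper does not spell out the formulae behind the DiD sample size, but a conventional calculation of this kind combines a two-proportion sample size with a design-effect inflation for the cluster effect (intraclass correlation) and then an allowance for unavailable records. The sketch below illustrates that pattern only; the baseline control rate (0.35) and the average cluster size (50) are illustrative assumptions, not values from the study.

```python
import math
from statistics import NormalDist


def two_proportion_n(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for detecting p1 vs p2 (two-sided, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2


def cluster_adjusted(n, cluster_size, icc):
    """Inflate a simple-random-sample size by the design effect for clustered data."""
    deff = 1 + (cluster_size - 1) * icc
    return n * deff


# Illustrative: a 28 percentage-point improvement, cluster effect (ICC) of 0.091,
# an assumed 50 sampled patients per clinic, and the 40% unavailable-record buffer.
n_simple = two_proportion_n(0.35, 0.35 + 0.28)       # per group, ignoring clustering
n_clustered = cluster_adjusted(n_simple, 50, 0.091)  # design-effect inflation
n_final = n_clustered / (1 - 0.40)                   # allow for 40% unavailable records
```

The design effect grows quickly with cluster size, which is why even a modest ICC of 0.091 inflates the required sample severalfold.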
At the time of analysis, only data from the first 6 months were available; the data for the remaining 2 months would be collected during the next phase of data collection (post-intervention) between April and May 2018 due to logistic and time-constraint issues during the first phase of data collection. The cases were sampled each month by systematic random sampling of patient medical records, and data were extracted into an electronic structured data collection form using mobile tablets. Patients aged 30 and above who were diagnosed with T2D, had a clinic visit for T2D between 01 Nov 2016 and 30 April 2017 and had at least one HbA1c, SBP and LDL-C measurement within 1 year prior to the date of visit were included in the analysis. Pregnant women with diabetes were excluded because disease management for gestational diabetes differs from that for non-pregnant patients. Outcome measures of this study were the most recent HbA1c, SBP and LDL-C values. The 2015 Malaysian Clinical Practice Guideline for T2D recommends the following treatment targets: HbA1c ≤ 7.0%, blood pressure ≤ 135/75 mmHg and LDL-C ≤ 2.6 mmol/L for most patients with T2D [18]. The following patient characteristics were included in the analysis based on the literature as predictors of control of intermediate clinical outcomes in T2D [19,20,21,22,23,24]: patient age, sex, ethnicity, body mass index (BMI), duration of T2D, presence of hypertension and hyperlipidaemia, presence of T2D complications, insulin use, antihypertensive use and statin (HMG-CoA reductase inhibitor) use. Complications of T2D were categorised into microvascular and macrovascular complications.
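As a rough illustration of the monthly record sampling described above, systematic random sampling picks every k-th record after a random start within the first interval. The record identifiers and counts below are hypothetical and are not taken from the study.

```python
import random

def systematic_sample(records, n_needed, seed=None):
    """Systematic random sampling: pick every k-th record after a random start."""
    if n_needed <= 0 or n_needed > len(records):
        raise ValueError("invalid sample size")
    k = len(records) // n_needed          # sampling interval
    rng = random.Random(seed)
    start = rng.randrange(k)              # random start within the first interval
    return [records[start + i * k] for i in range(n_needed)]

# Hypothetical month of 200 medical-record IDs, drawing 10 cases from one clinic
month_records = [f"MRN{i:04d}" for i in range(200)]
sample = systematic_sample(month_records, 10, seed=1)
```

Because the start is random but the interval is fixed, every record has the same inclusion probability while the sample stays evenly spread across the sampling frame.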
Microvascular complications included nephropathy (proteinuria or chronic kidney disease), retinopathy, cataract and neuropathy (unspecified neuropathy, erectile dysfunction, foot ulcer or amputation), while macrovascular complications were coronary heart disease (myocardial infarction, angina, acute coronary syndrome and coronary artery stenosis), heart failure, cerebrovascular disease (stroke and transient ischaemic attack) and peripheral vascular disease. Glucose-lowering medications, number of antihypertensives as well as lipid-lowering medications were included in the final regression because of their effect on HbA1c control. Angiotensin-converting enzyme inhibitors (ACEI) were found to improve insulin sensitivity [23] while statins (HMG-CoA reductase inhibitors) were reported to be associated with an increase in HbA1c [25]. To explain potential variation due to between-clinic differences, the clinic-level characteristics captured were geographical location (urban or rural), number of clinic attendances per day, availability of a full-time FMS in the clinic, availability of at least one full-time diabetes educator in the clinic and availability of diabetes medication adherence services. Continuous variables were presented as means and standard deviations while categorical variables were reported as frequencies and percentages. Statistical significance (alpha) was set at 0.05 for all comparisons. Multilevel linear regression models were constructed for each outcome. When patients are grouped within clusters such as clinics, outcomes for those within the same cluster are more similar to one another than to those of patients from other clinics because of exposure to a common contextual effect [14]. Multilevel analysis accounts for the hierarchical structure of the data, where patients (level 1) were nested within clinics (level 2), and is able to partition and quantify the amount of variation occurring at each level.
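The variance partitioning that multilevel analysis performs can be illustrated with a simple two-level simulation. A method-of-moments (one-way ANOVA) decomposition stands in here for the maximum likelihood estimation the study actually used, and all numbers are simulated, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-level structure: 40 clinics, 50 patients each (hypothetical sizes).
n_clinics, n_per = 40, 50
clinic_effects = rng.normal(0.0, 0.3, n_clinics)                        # level-2 (clinic) effects
y = clinic_effects[:, None] + rng.normal(0.0, 1.2, (n_clinics, n_per))  # level-1 (patient) noise

# Method-of-moments variance partition:
within = y.var(axis=1, ddof=1).mean()                    # pooled within-clinic variance
between = max(y.mean(axis=1).var(ddof=1) - within / n_per, 0.0)
```

With these simulation settings the within-clinic component dominates (true values 1.44 vs 0.09), mirroring the study's finding that most variation lies at the patient level.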
Hence, we were able to identify the level where the greatest variation lies for each outcome. Missing data rates ranged from 0.06 to 33%. Missing values were highest for the outcomes of interest, where 1150 (21%) and 1762 (33%) patients did not have data for HbA1c and LDL-C values respectively. The data did not contain additional auxiliary variables which could be used to impute these missing outcomes through multiple imputation; hence we conducted complete-case analysis for all models. We constructed the multilevel models by increasing complexity: first, we built an empty model with only a random intercept. Subsequently, we included the patient variables, and the final model included both patient and clinic variables. For the regression analyses, we intended to interpret the intercept (or constant) for each of the models. The intercept gives the expected mean outcome value for HbA1c, SBP and LDL-C for the study sample when all predictors, X, are equal to zero. For categorical variables, X = 0 refers to the reference category for each variable. However, zero is not a meaningful value for continuous variables such as age and BMI. Therefore, we centred all eight continuous predictors in the models on their respective means, such that the value of 0 for these centred variables now refers to the grand mean of the study sample [26]. Additionally, caterpillar plots were created to visualize the differences between adjusted clinic means for each outcome. Clinic estimates with 95% confidence intervals (95% CI) from the fully adjusted models were plotted. We calculated the intraclass correlation coefficient (ICC) to quantify the proportion of clinic variance out of the total variance for all outcomes, where $$ \mathrm{ICC}=\frac{\text{variance between clinics}}{\text{variance between clinics}+\text{variance within clinics}} $$ We used likelihood ratio tests to compare model fit between single- and multilevel models for each outcome.
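The ICC formula above, and the likelihood ratio comparison of single- versus multilevel models, can be sketched directly. The variance and deviance values below are illustrative placeholders, not the study's estimates.

```python
from math import erfc, sqrt

def icc(var_between, var_within):
    """Intraclass correlation: the share of total variance at the clinic level."""
    return var_between / (var_between + var_within)

def lrt_pvalue_df1(deviance_single, deviance_multilevel):
    """Likelihood ratio test for one extra parameter (the random intercept).
    The deviance drop is chi-square(1); for df = 1, P(X > x) = erfc(sqrt(x/2)).
    (The random-intercept variance sits on the boundary of its parameter
    space under the null, so this p-value is conservative.)"""
    stat = deviance_single - deviance_multilevel
    return stat, erfc(sqrt(stat / 2))

# Illustrative values only: an SBP-like split where ~7% of variance is between clinics
print(round(icc(18.0, 240.0), 2))           # prints 0.07
stat, p = lrt_pvalue_df1(10500.0, 10480.0)  # hypothetical deviances
```

A deviance drop of 20 on one degree of freedom gives a p-value far below 0.05, which is the kind of evidence that justifies retaining the clinic-level random intercept.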
Improvement in goodness of fit is reflected in the reduction of the 'deviance' statistic as variables were introduced consecutively into the models [27, 28]. The parameters of the multilevel regression were estimated using maximum likelihood estimation. Visual inspection of residual plots was done and no obvious deviations from homoscedasticity or normality were observed. All variables were also checked for multicollinearity and no predictor pairs were found to be collinear (variance inflation factors ranged between 1.02 and 1.64). Data analyses were conducted using R version 3.6.1 [29]. The lme4 package was used for mixed-effect modelling while ggplot2 was used to generate the caterpillar plots [30, 31]. Out of 5425 patients with T2D, we included 2960 patients who had complete data for all variables in the final regression model. Patient and clinic characteristics are presented in Table 1. The study population had a mean age of 60 years, was predominantly female (63.3%) and had a mean duration of T2D of 7.3 years. Seventy-nine percent of patients had hypertension while 52% had hyperlipidaemia. Micro- and macrovascular complications were present in 28 and 8% of patients, respectively. On pharmacological management, 31.3% of patients were on insulin therapy, 66.3% were given either ACEI or ARBs for management of hypertension and about 81.1% of patients were on statins. There was also a percentage of patients who did not receive pharmacotherapy for glucose, blood pressure or lipid lowering. Three percent of patients did not receive any glucose-lowering therapy and three quarters of these patients (75%) had HbA1c levels that were within the target range (≤ 7%). As for the 12.8% of patients who did not receive any antihypertensive agent, about 13% of them had blood pressure above the national practice guideline target of 135/75 mmHg on two separate clinic visits [18].
On average, patients were obese with a mean BMI of 28.3 kg/m2 and had a mean HbA1c of 8.4%, mean SBP of 137.7 mmHg and mean LDL-C of 3.0 mmol/L. The clinics in this study were largely located in urban areas (55%). A quarter of them had full-time family medicine specialists, 60% had permanent diabetes educators and 85% provided diabetes medication adherence services. Table 1 Patient and clinic characteristics The absolute and percentage variance attributable to patient and clinic levels are displayed for each outcome in Table 2. Results from the linear multilevel models show that variation in all three intermediate outcome measures occurs predominantly at the patient level, ranging between 93 and 98% (Table 2), after adjusting for patient and clinic characteristics. Conversely, between-clinic differences account for a small but significant percentage of the total variance in HbA1c, SBP and LDL-C values. Figures 1a, b and c show the estimates and 95% CI for each clinic for HbA1c, SBP and LDL-C respectively. The adjusted mean levels for all outcomes, denoted by the dash-dotted red line (HbA1c 8.0%, SBP 136.5 mmHg and LDL-C 2.98 mmol/L), were above the targets recommended by the national clinical practice guideline, denoted by blue solid lines in Fig. 1 [18]. Among the three, HbA1c and LDL-C are almost equally furthest from their therapeutic targets, lying on average 14 and 15% above their recommended targets. Additionally, for both measures, there were few clinics which conclusively differed from the overall mean. In contrast, bigger differences between clinics were observed for SBP, and this is reflected in the larger number of clinics which performed better and worse than average (Fig. 1b) and the higher ICC value compared to the other outcomes (ICC 0.07 vs 0.02) reported in Table 2.
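The reported gaps of 14 and 15% follow directly from the adjusted means and the guideline targets:

```python
def pct_above_target(mean_value, target):
    """Percentage by which an adjusted mean exceeds its therapeutic target."""
    return 100 * (mean_value - target) / target

print(round(pct_above_target(8.0, 7.0)))   # HbA1c: adjusted mean 8.0% vs target 7.0%  -> 14
print(round(pct_above_target(2.98, 2.6)))  # LDL-C: 2.98 vs 2.6 mmol/L                 -> 15
```

By the same measure, the SBP gap (136.5 vs 135 mmHg) is only about 1%, which is why HbA1c and LDL-C are singled out as offering the largest room for improvement.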
Table 2 Absolute and percentage of variance in HbA1c, SBP and LDL-C attributable to clinic and patient levels a Mean clinic HbA1c estimates with 95% CI after adjustment for patient and clinic characteristics. The dash-dotted line represents the mean of all clinics while the solid line represents the therapeutic target range recommended by the national clinical practice guideline. b Mean clinic SBP estimates with 95% CI after adjustment for patient and clinic characteristics. The dash-dotted line represents the mean of all clinics while the solid line represents the therapeutic target range recommended by the national clinical practice guideline. c Mean clinic LDL-C estimates with 95% CI after adjustment for patient and clinic characteristics. The dash-dotted line represents the mean of all clinics while the solid line represents the therapeutic target range recommended by the national clinical practice guideline Inclusion of patient characteristics into the empty model for HbA1c explained 14 and 26% of the variance between clinics and between patients respectively (Additional file 1: Table S1). Addition of patient characteristics into the empty model for SBP explained slightly more of the variance between clinics (16%) but less of the variance among patients within clinics (15%) (Additional file 1: Table S2). Similarly, incorporating patient variables into the linear multilevel model for LDL-C explained more of the variance occurring at the clinic level (34%) than that between patients (4%) (Additional file 1: Table S3). Overall, we found that for all three outcomes, the inclusion of clinic characteristics into the models only marginally explained the variance at both the between- and within-clinic levels (Additional file 1: Tables S1, S2, and S3). Table 3 presents the coefficients, 95% CI and statistical significance for the linear multilevel models which included patient and clinic level characteristics.
Increasing patient age is associated with lower HbA1c and LDL-C levels but higher SBP. Proxy measures for disease severity, such as duration of diabetes, microvascular complications, being on insulin and number of antihypertensives, display the strongest association with poor control of HbA1c, SBP and LDL-C. Further, there is a general correlation between all three intermediate clinical measures, where patients who are uncontrolled for one outcome are more likely to be uncontrolled for another intermediate outcome, except for the relationship between SBP and HbA1c. Interestingly, none of the clinic-level predictors, including availability of a family medicine specialist and diabetes educator, influenced control of any of the three outcome measures. Table 3 Patient and clinic determinants of HbA1c, SBP and LDL-C levels in T2D One of the aims of achieving better healthcare quality is to reduce unnecessary variation in disease management and outcomes. We found that the greatest variation in intermediate clinical outcomes for T2D lies within clinics, at the patient level. This is consistent with findings by O'Connor et al. and Charalampopoulos et al., where clinic-level variation accounted for only a small percentage of the total variance in glycaemic control [10, 32]. There were relatively few clinics which performed worse than average for all three outcomes; hence focusing interventions on only those with poor performance would not be very efficient. Despite the small variability in treatment outcomes between clinics, intervening at the clinic and health provider level may still be useful and practical because these levels are more directly accessible than individual patients [33]. Moreover, there is still a clear gap between mean performance and national therapeutic targets for HbA1c and LDL-C control. These therapeutic targets of less than or equal to 7% and 2.6 mmol/L for HbA1c and LDL-C are also consistent with those recommended by the International Diabetes Federation [34].
The results highlight an opportunity for closing this performance-target gap by improving disease management practices at the clinic level. Given the low variability in performance across clinics, our findings support the use of standard initiatives across all clinics to push disease control towards treatment targets. The homogeneity in HbA1c, SBP and LDL-C levels observed between clinics can be explained by similarities in infrastructure and resources, as the clinics are managed under a single administration, the Ministry of Health. Although each clinic may have different delivery system designs [6], the lack of differences in treatment outcomes at the clinic level suggests that uniform interventions may be applied to all clinics to shift overall outcomes to meet targets. Strategies that have been shown to improve intermediate patient outcomes include provider feedback, performance measurement, public reporting, financial incentives and benchmarking between clinics or individual providers [35, 36]. Much of the variability in HbA1c, SBP and LDL-C levels is attributable to differences between patients. After adjusting for patient and clinic characteristics, most of the unexplained variation for HbA1c, SBP and LDL-C remains at the patient level. This is potentially due to other patient determinants such as medication adherence, socioeconomic status, health beliefs and patient self-care practices that were not captured in this study. This finding has two implications. First, it is necessary for health providers to personalize therapeutic strategies based on individual patients. Second, patients need to be held accountable for their disease control. Patient-centered approaches include empowerment and engagement in treatment decision-making and self-care, use of reminder systems, self-monitoring of diabetes and promotion of diet, behavioural and lifestyle modifications [8].
Whilst we know that most differences in treatment outcomes reside at the patient level, it is the joint partnerships formed between patients and multidisciplinary providers that are most likely to effect change [32]. Among the three outcomes evaluated, HbA1c and LDL-C control offer the largest potential for improvement from the current adjusted mean levels to the clinical guideline recommended targets [18]. Yet this gap between actual performance and therapeutic targets is evident even though 97 and 83% of patients are already on pharmacotherapy to lower glucose and lipid levels. These findings suggest the importance of other components of diabetes care, such as treatment intensification, medication adherence, patients' health beliefs, weight management, dietary intake and physical activity, in improving disease control [6]. Further studies using qualitative approaches may be conducted among health providers and patients to identify other barriers to disease control and develop targeted strategies to achieve better outcomes. Optimal disease management involves a complex interaction between providers and patients. Patient self-care and shared decision-making are recognized as a crucial part of diabetes care [36], and this task of empowering patients to take charge of their disease is complicated by low health literacy and the multicultural characteristics of patients in Malaysia [37, 38]. Thus, diabetes education needs to go beyond basic knowledge of diabetes and take into account the cultural, psychosocial and family support aspects of individual patients [38, 39]. It is also known that people with diabetes in Malaysia consume diets high in carbohydrates and fat while more than half are physically inactive [6, 40]. These factors, together with overweight or obesity, contributed not only to the high prevalence of DM in the country but also to poor disease control.
In summary, health initiatives for T2D should be pursued from two perspectives: improving the way health providers manage diabetes at the clinic level, and addressing dietary and physical activity concerns from a community health perspective. We investigated the factors that could influence the outcomes by including patient and clinic characteristics in the multilevel models. Age, sex and ethnicity showed inconsistent effects for the three clinical outcomes. This finding is in agreement with a systematic review and a study by Frei et al. evaluating the impact of patient characteristics on diabetes outcome indicators [20, 41], where the authors found inconsistent impact for demographic characteristics. Despite known differences in the prevalence of diabetes by ethnicity [6], it appears that disease control does not depend on these demographic characteristics but rather on unmeasured factors related to individual health beliefs and lifestyles. The same systematic review also did not show a consistent influence of comorbidity and diabetes duration on HbA1c, SBP and LDL-C levels [20]. Contrastingly, we found that diabetes duration, presence of microvascular complications, being treated with insulin and number of antihypertensives were associated with poorer disease control. These predictors were likely a reflection of disease progression in these patients. Further, we noted that poor control of one outcome predicts poor control of another intermediate outcome for diabetes, particularly for the HbA1c and LDL-C pair. This observation is in line with a study by Jackson et al. which found a modest association between LDL-C control and HbA1c control [42]. Our findings suggest a potential synergistic effect where control of one outcome increases the likelihood of control of the other, and that simultaneous control of intermediate outcomes is more likely to be achieved when either one of the outcomes is within control.
None of the clinic-level characteristics included in the model influenced HbA1c, SBP and LDL-C control. Kahn and colleagues demonstrated that having a certified diabetes educator within the primary care team resulted in improvement in HbA1c control [43]. It is therefore interesting that neither having a diabetes educator nor medication adherence services in clinics influenced glycaemic outcomes. On the former, there are several possible reasons: (i) lack of standardised training modules for diabetes educators, (ii) lack of a pre-defined set of activities and key targets for the role of a diabetes educator, and (iii) multi-tasking, where the diabetes educator may also need to take on other roles in the provision of primary care services [6]. One approach would be to standardise the delivery of diabetes education through accreditation programs for these services in the country. As for the medication adherence service, its lack of impact on outcomes despite the availability of a standardised program [44] may be due to the small proportion of diabetes patients who received the service. Based on information from the same data as the present study, only 8% of all T2D patients had ever received the medication adherence service [unpublished data from EnPHC-Eva]. This may be attributable to a shortage of pharmacists to provide the service to a larger group of patients. More research is warranted to assess the quality of care provided by diabetes educators and pharmacists in the aspects of diabetes education and medication adherence services in primary care to identify areas for improvement. Whilst financial barriers are a known determinant of access to healthcare, they are unlikely to have had an impact on this study's results because treatment at public clinics comes at almost no cost to patients. Few studies have quantified variation in intermediate clinical outcomes for T2D and a majority of these studies were done in high-income countries [8, 32].
To our knowledge, this study is the first to evaluate clinic variation in diabetes outcomes in a middle-income nation. One of the strengths of this study is the use of multilevel models, which take into account the hierarchical structure of the data and clustering within clinics. Further, data for this analysis were collected using an application with built-in validation rules to minimise data capture errors. There were several limitations in this study. First, we were unable to adjust for adherence to treatment because this information was not measured. About 45% of the patients had missing information on the outcomes of interest and had to be omitted from the analysis. Therefore, we could not exclude the possibility of bias due to missing data. Also, there were 5 main categories of public health clinics in Malaysia (categorised based on average daily patient attendances) but only 3 clinic types were involved in the implementation of the EnPHC interventions. The categories which were not represented in this study were the smallest and largest clinic types, and this may partially explain the lack of variation found between clinics. We were also unable to disentangle provider-level variation or control for provider characteristics, as patients were not assigned to one single provider for all episodes of care but were managed by whichever provider was on duty on the visit day. Also, it is possible that the number of clinics may not be sufficient to allow detection of effects for clinic characteristics [45]. Clinic-level variation in HbA1c, SBP and LDL-C accounts for a small percentage of the total variation. More than 93% of the variation in intermediate clinical outcomes in T2D is due to differences between patients. Among the three measures evaluated, HbA1c and LDL-C offer the largest room for improvement. Interventions need to be applied across all clinics, with a focus on customizing therapy based on individual patient characteristics.
The predictors of poor control of intermediate diabetes outcomes are measures of disease progression, including duration of diabetes, microvascular complications, being on insulin and number of antihypertensives. There is also a small but significant association between the outcomes, which suggests that simultaneous control is more likely to be achieved when one of the outcomes is within therapeutic targets. Data for the current study were based on baseline information from the EnPHC evaluation study. Relevant aggregate data are presented within this paper and its supplementary information file. Due to ethical and confidentiality restrictions, individual data cannot be made publicly available. All requests for data access should be addressed to the Institute for Clinical Research at [email protected]. ACEI: Angiotensin-converting enzyme inhibitor ARB: Angiotensin-II receptor blocker DiD: Difference-in-differences EnPHC: Enhanced Primary Healthcare Intervention Package EnPHC-Eva: Enhanced Primary Healthcare Intervention Package Evaluation Study FMS: Family medicine specialist HbA1c: Glycated haemoglobin ICC: Intracluster correlation coefficient LDL-C: Low-density lipoprotein cholesterol LMIC: Low- and middle-income countries SBP: Systolic blood pressure Statin: HMG-CoA reductase inhibitor T2D: Type 2 diabetes International Diabetes Federation. IDF Diabetes Atlas. 8th ed. Brussels, Belgium: International Diabetes Federation; 2017. Institute for Public Health. National Health and Morbidity Survey 2015 (NHMS 2015). Volume II: Non-Communicable Diseases, Risk Factors & Other Health Problems. Kuala Lumpur: Institute for Public Health, National Institutes of Health, Ministry of Health Malaysia; 2015. Mafauzy M, Hussein Z, Chan SP. The status of diabetes control in Malaysia: results of DiabCare 2008. Med J Malaysia. 2011;66:175–81. Mafauzy M, Zanariah H, Nazeri A, Chan SP. DiabCare 2013: a cross-sectional study of hospital based diabetes care delivery and prevention of diabetes related complications in Malaysia.
Med J Malaysia. 2016;71:177–85. Disease Control Division, Ministry of Health Malaysia. National strategic plan for non-communicable disease. Medium term strategic plan to further strengthen the cardiovascular disease and diabetes prevention and control program in Malaysia (2010-2014). 1st ed. Putrajaya: Ministry of Health Malaysia; 2010. Hussein Z, Taher SW, Singh HKG, Swee WCS. Diabetes Care in Malaysia: problems, new models, and solutions. Ann Global Health. 2015;81:851–62. Feisul M, Azmi S. National Diabetes Registry Report (2009–2012). Kuala Lumpur: Ministry of Health Malaysia; 2013. Gamble J-M, Butalia S. Medical Practice Variations in Diabetes Mellitus. Medical Practice Variations. Boston, MA: Springer; 2016. p. 323–59. Cho YY, Sidorenkov G, Denig P. Role of patient and practice characteristics in variance of treatment quality in type 2 diabetes between general practices. PLoS One. 2016;11:e0166012. Charalampopoulos D, Amin R, Warner JT, Muniz-Terrera G, Mazarello Paes V, Viner RM, et al. Clinic variation in glycaemic control for children with type 1 diabetes in England and Wales: a population-based, multilevel analysis. Diabet Med. 2017;34:1710–8. Goudswaard AN, Stolk RP, Zuithoff P, Rutten GEHM. Patient characteristics do not predict poor Glycaemic control in type 2 diabetes patients treated in primary care. Eur J Epidemiol. 2004;19:541–5. Pringle M, Stewart-Evans C, Coupland C, Williams I, Allison S, Sterland J. Influences on control in diabetes mellitus: patient, doctor, practice, or delivery of care? BMJ. 1993;306:630–4. Khunti K, Ganguli S, Baker R, Lowy A. Features of primary care associated with variations in process and outcome of care of people with diabetes. Br J Gen Pract. 2001;51:356–60. Merlo J, Chaix B, Yang M, Lynch J, Rastam L. A brief conceptual tutorial of multilevel analysis in social epidemiology: linking the statistical concept of clustering to the idea of contextual phenomenon. J Epidemiol Community Health. 2005;59:443–9. 
World Health Organization. Regional Office for Western Pacific. Malaysia health system review. Manila: WHO Regional Office for the Western Pacific; 2012. Sivasampu S, Wahab YF, Ong SM, Ismail SA, Goh P, Jeyaindran S. National Medical Care Statistics (NMCS) 2014. Kuala Lumpur: National Clinical Research Centre; 2016. Report No.: NCRC/HSU/2016.1 Fretheim A, Zhang F, Ross-Degnan D, Oxman AD, Cheyne H, Foy R, et al. A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation. J Clin Epidemiol. 2015;68:324–33. Ministry of Health Malaysia. Clinical practice guidelines on Management of Type 2 diabetes mellitus. 5th ed. Putrajaya: Ministry of Health Malaysia; 2015. Franch-Nadal J, Mata-Cases M, Vinagre I, Patitucci F, Hermosilla E, Casellas A, et al. Differences in the Cardiometabolic control in type 2 diabetes according to gender and the presence of cardiovascular disease: results from the eControl study. Int J Endocrinol. 2014;2014:131709. Calsbeek H, Markhorst JGM, Voerman GE, Braspenning JCC. Case-mix adjustment for diabetes indicators: a systematic review. Am J Manag Care. 2016;22:e45–52. UK Prospective Diabetes Study (UKPDS) Group. Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). UK Prospective Diabetes Study (UKPDS) Group. Lancet. 1998;352:837–53. Kayar Y, Ilhan A, Kayar NB, Unver N, Coban G, Ekinci I, et al. Relationship between the poor glycemic control and risk factors, life style and complications. Biomed Res. 2017;28:1581–6. Lithell HO. Effect of antihypertensive drugs on insulin, glucose, and lipid metabolism. Diabetes Care. 1991;14:203–9. Viana LV, Leitão CB, Kramer CK, Zucatti ATN, Jezini DL, Felício J, et al. Poor glycaemic control in Brazilian patients with type 2 diabetes attending the public healthcare system: a cross-sectional study. BMJ Open. 2013;3:e003336. 
Involvement of MdWRKY40 in the defense of mycorrhizal apple against Fusarium solani Mei Wang, Weixiao Tang, Li Xiang, Xuesen Chen, Xiang Shen, Chengmiao Yin & Zhiquan Mao Apple (Malus domestica Borkh.) is an important economic crop. The pathological effects of Fusarium solani, a species complex of soilborne pathogens, on the root systems of apple plants were unknown. It was also unclear how mycorrhizal apple seedlings resist infection by F. solani. The transcriptional profiles of mycorrhizal and non-mycorrhizal plants infected by F. solani were compared using RNA-Seq. Infection with F. solani significantly reduced the dry weight of apple roots, and the roots of mycorrhizal apple plants were less damaged when the plants were infected with F. solani. They also had enhanced activity of antioxidant enzymes and a reduction in the oxidation of membrane lipids. A total of 1839 differentially expressed genes (DEGs) were obtained after mycorrhizal and non-mycorrhizal apple plants were infected with F. solani. A gene ontology (GO) analysis showed that most of the DEGs were involved in the binding of ADP and calcium ions. In addition, based on a MapMan analysis, a large number of DEGs were found to be involved in the response of mycorrhizal plants to stress. Among them, the overexpressed transcription factor MdWRKY40 significantly improved the resistance of the apple 'Orin' callus to F. solani and the expression of the resistance gene MdGLU by binding to the promoter of MdGLU. This paper outlines how apple seedling roots inoculated with arbuscular mycorrhizal fungi responded to infection with F. solani at the transcriptional level. In addition, MdWRKY40 played an important role in the resistance of mycorrhizal apple seedlings to infection with F. solani. Apple is one of the most widely produced and economically important fruit crops in temperate regions [1]. 
Apple replant disease (ARD) is a phenomenon in which plant growth is severely inhibited after replanting apple trees at the same site for many years [2]. There are many complex etiologies that cause ARD, and the possible factors vary between different regions or between orchards in the same region [3, 4]. The primary cause of ARD is biological factors, because disinfection of the soil can effectively prevent or alleviate the disease [5, 6]. Extensive studies have shown that the major pathogens that lead to ARD are Cylindrocarpon, Fusarium, Rhizoctonia, Pythium, and Phytophthora [7, 8]. Wang et al. [9] found that Fusarium spp. were abundant in the Bohai Bay region; the most abundant species was Fusarium solani, and apple rootstocks are highly sensitive to this pathogen [10, 11]. Members of the Fusarium solani species complex (FSSC) are capable of causing disease in many agriculturally important crops [12]. F. solani (MG836251.1) is one of the pathogens that has been proven to cause ARD [13]. Infection with F. solani depressed the photosystem performance of apple seedling leaves, destroyed the ROS scavenging system, caused oxidative damage, and increased the number of black and brown necrotic spots on apple roots [10, 14]. This series of physiological and biochemical reactions can lead to severe root tip necrosis and decay until plant death occurs several days or weeks after infection with F. solani [15]. Therefore, it is urgent to find an effective method to enhance the resistance of apple rootstocks to F. solani. Arbuscular mycorrhizal fungi (AMF) can infect more than two-thirds of terrestrial plants to form efficient symbiotic relationships on the roots [16]. There is an interaction among AMF, pathogens, and plants [17]. 
AMF can not only enhance the resistance of the host to environmental stresses, such as drought and salt stress, by improving the efficiency of utilization of plant nutrients [18, 19], but also significantly induce plant resistance to soilborne diseases and improve the growth of host plants [20, 21]. Previous studies have shown that AMF effectively provide biological control against soilborne pathogens, such as F. oxysporum, Sclerotium cepivorum and Pythium aphanidermatum [22]. AMF prevent and control soilborne diseases by competing with pathogens for growth sites in the root systems of host plants and producing wide networks of extraradical mycelia [23, 24]. The AMF-plant symbiotic relationship results in enhanced plant growth, which may be stimulated through several activities, including regulation of the secondary metabolic pathways of the roots of host plants, increased synthesis and secretion of terpenes and phenolic compounds, promotion of the synthesis of antimicrobial substances, and improvements in the plant defense system [25, 26]. A substantial number of genes related to hormone signaling are involved in the enhanced resistance or tolerance of mycorrhizal plants to infection by F. virguliforme [27]. The root systems of mycorrhizal plants play an active role in the disease resistance process, and plants with mycorrhizae are generally more resistant to soilborne pathogens than plants that lack them [28, 29]. The induction of defense responses in mycorrhizal plants was much stronger and more rapid than that in non-mycorrhizal plants when infected by pathogens [29]. However, it was unclear how the physiological and molecular responses of mycorrhizal and non-mycorrhizal apple seedlings differ in preventing infection with F. solani. In this study, the apple rootstock M9T337 was used as the experimental material to explore the defense mechanism of mycorrhizal apple seedlings. 
Several experimental evaluations of the pre-inoculation of AMF were conducted to help improve the resistance of apple root systems to F. solani. We hypothesized that the presence of a symbiotic apple root system enhances resistance to F. solani. Therefore, the molecular mechanism of the AMF-apple interaction in response to the infection was analyzed by transcriptome sequencing. This study provides a basis for understanding the molecular mechanism of mycorrhizal apple defense against infection by F. solani. Physiological properties of mycorrhizal apple roots The plants were harvested 10 days after infection with F. solani. No colonization was detected in the seedlings not inoculated with AMF, including the NM (non-mycorrhizal plants that had not been inoculated with F. solani) and NM-F (non-mycorrhizal plants inoculated with F. solani) treatments. In the AMF-inoculated plants, AMF and plant roots had established a symbiotic relationship with a colonization rate of 76% in the AM treatment (mycorrhizal plants that had not been inoculated with F. solani), but after infection with F. solani, the colonization rate was 49% in the AM-F treatment (mycorrhizal plants inoculated with F. solani). Infection with F. solani significantly reduced the dry weight of plants, and the presence of AMF in plants that had been inoculated with F. solani significantly reduced the damage caused by F. solani. The activity of superoxide dismutase (SOD) in the AM treatment was significantly higher than that in the NM treatment, while the activities of other antioxidant enzymes did not differ significantly between these treatments. After 5 days of infection with F. solani, the SOD and peroxidase (POD) activities of the AM-F treatment increased by 12.5 and 56.4%, respectively, compared with the NM-F treatment, while there was no significant difference in the activity of catalase (CAT) (Fig. 2A). 
The contents of malondialdehyde (MDA), hydrogen peroxide (H2O2), and superoxide (O2·−) in the AMF symbiotic plants were 19.8, 26.1, and 54.2% lower than those in the non-mycorrhizal plants, respectively (Fig. 2B). We also observed a similar trend in the contents of proline (PRO), soluble protein, and soluble sugar (SUG) in the treated plant roots that correlated with the activities of SOD and POD (Fig. 2C). DEG screening To investigate changes in the level of transcription in the roots of apple rootstock M9T337 after inoculation with AMF and F. solani, different treatments were assessed. The numbers of up- and downregulated differentially expressed genes (DEGs) among the four treatments are shown in Fig. 3. A total of 809 DEGs were identified in the AM-F vs. AM comparison, and 996 DEGs were identified in the NM-F vs. NM comparison. The AM-F vs. NM-F comparison yielded 1839 DEGs, the highest number, which were used for subsequent differential gene analysis. The PCA identified a clear separation between the AM-F and NM-F treatments (Fig. S3). GO pathway enrichment analysis In the GO enrichment analysis, the genes were annotated and classified by biological process, cellular component and molecular function. In the AM-F vs. NM-F comparison, molecular functions were enriched, including ADP binding (GO:0043531), calcium ion binding (GO:0005509), hydrolase activity (GO:0004553), vitamin binding (GO:0019842), polygalacturonase activity (GO:0004650), and thiamine pyrophosphate binding (GO:0030976). The cell periphery (GO:0071944) and cell wall (GO:0005618) were enriched. In addition, the pyridine nucleotide biosynthesis pathway (GO:0019363), carbohydrate catabolic process (GO:0016052), and protein modification (GO:0070647) were enriched (Fig. 4A). This experiment demonstrated differential expression of multiple members of several TF families, including the WRKY, AP2-ERF, bHLH, NAC, MYB, HB, and C2H2 families (Fig. 4B). 
Ethylene response factors (ERFs) are a class of TFs closely related to the ethylene signaling pathway, and the differential expression analysis also supported this. A large number of ERF family genes (MD15G1221100, MD07G1248600, MD05G1311400, MD04G1058200, MD09G1114800, MD15G1055200, MD02G1096500, MD05G1080900, MD10G1094700) were upregulated 5 days after infection with F. solani. Interestingly, the most representative response among the TFs was the induction of WRKY family genes in mycorrhizal plants. MdWRKY40, MdWRKY41, MdWRKY53, MdWRKY50, MdWRKY24, MdWRKY76, MdWRKY28, and MdWRKY6 were significantly upregulated in the AM-F vs. NM-F comparison. MapMan analysis To provide a more comprehensive understanding of the DEGs in the interactions of plants and pathogens, we performed a MapMan visual analysis of biotic stress and secondary metabolism (Fig. 5). The genes that encode TIR-NBS-LRR and NB-ARC were upregulated by infection with F. solani. A large number of genes related to hormone signaling, including auxin, brassinolide, abscisic acid, ethylene, jasmonic acid, and salicylic acid; pathogenesis-related proteins, such as PR1 and PR5; and transcription factors, such as WRKY, MYB, ERF and DOF, were enriched (Fig. 5A). MdMPK1 and MdMPK18 were upregulated in the mitogen-activated protein kinase (MAPK) signaling pathway. The biosynthesis of lignin and lignans, flavonoids, phenylpropanoids, and glucosinolates constituted the primary secondary metabolic pathways (Fig. 5B). Glucosinolate biosynthesis is thought to produce a chemical barrier against pathogen infection. Nine genes are involved in this biosynthetic process, of which only two were significantly downregulated. 
In addition, MD15G1079200, acyl transferase 9 (MD09G1169600), 4-coumarate--CoA ligase-like 5 (MD07G1073800), cinnamoyl-CoA reductase 1 (MD09G1224200), and probable cinnamyl alcohol dehydrogenase 6 (MD15G1008100) in the phenylpropanoid synthesis pathway were significantly upregulated. Only salicylic acid 3-hydroxylase (MD03G1140400) and hydroquinone glucosyltransferase (MD01G1077200) were upregulated in the flavonoid synthesis pathway, while seven DEGs were downregulated. Interestingly, all the DEGs were upregulated in both the carotenoid and the simple phenol synthesis pathways. Quantitative reverse transcription-PCR (qRT-PCR) validation To validate the RNA-Seq results, 16 upregulated genes and six downregulated genes from our transcriptome data were verified with qRT-PCR (Fig. 6A). The candidate genes were selected for their participation in the process of interaction between plants and pathogens, including a pathogen recognition receptor, signal transduction, transcription factors, and the genes involved in the synthesis of secondary metabolites (Fig. 5). These genes could play an important role in the response of mycorrhizal plants to the biotic stress caused by infection with F. solani. As shown in Fig. 6B, the trends in the levels of expression in the RNA-Seq and qRT-PCR data were similar, which highlights the reliability of the RNA-Seq. Overexpression of MdWRKY40 improves the resistance of 'Orin' to F. solani We found the highest relative level of expression of MdWRKY40 (MD15G1039500) in the qRT-PCR validation (Table S3). The open reading frame (ORF) of MdWRKY40 was 912 bp, encoding a 303-amino-acid protein with one conserved WRKY domain and a zinc-finger motif (Fig. S5). To validate the resistance conferred by MdWRKY40 against F. solani, MdWRKY40 was chosen to transform the apple callus. After the wild-type (WT) and OE-MdWRKY40 'Orin' calli had been infected with F. 
solani for 4 days, the diameter of fungal spots on the OE-MdWRKY40 calli was significantly smaller than that on the WT, and the diameter of plaque extension in the calli was decreased by 56.4% (Fig. 7B,C). To study whether MdWRKY40 regulates the expression of resistance genes, the relative levels of expression of MdPR1, MdPR4, MdPR5, MdPR8, MdECHT, and MdGLU in calli following infection with F. solani were measured by qRT-PCR (Fig. 7D). Of all the resistance genes tested, the level of expression of MdGLU in OE-MdWRKY40 was significantly higher than that in the wild type (WT) (P < 0.01). This suggests that MdWRKY40 could improve the resistance of calli to F. solani by regulating the expression of MdGLU. MdWRKY40 bound to the MdGLU promoter Previous studies showed that WRKY transcription factors usually target W-box motifs to activate or inhibit the level of expression of downstream genes [30]. The promoter of MdGLU contains three W-boxes (Fig. S5). A yeast one-hybrid (Y1H) test was used to determine whether MdWRKY40 binds to the MdGLU promoter. On media that lacked Trp and His, the optimal concentration to inhibit the expression of HIS3 in the pHIS2 vector was 120 mM 3-amino-1,2,4-triazole (3-AT) for proMdGLU-pHIS2 (Fig. 8B). In the Y1H assays, MdWRKY40 interacted with the promoter of MdGLU. When MdWRKY40 was divided into two fragments (N-terminus and C-terminus), we found that the C-terminus, which contains the WRKY domain, bound to the MdGLU promoter (Fig. 8C). An electrophoretic mobility shift assay (EMSA) confirmed that MdWRKY40 could bind to the second W-box in the MdGLU promoter but not to the other W-boxes (Fig. 8D, Fig. S6). As the concentration of the cold probe increased, the binding weakened, and the addition of the mutant cold probe did not affect the binding. To determine how MdWRKY40 regulates the level of proMdGLU activity, luciferase (LUC) reporter assays were performed. 
MdWRKY40 stimulated the expression of the LUC gene driven by the promoter of MdGLU (Fig. 8E). These results indicated that MdWRKY40 activated proMdGLU. Mycorrhizal symbiosis improved the resistance of apple seedlings to F. solani The symbiosis of AMF with plant roots is often mutually beneficial. After establishing a symbiotic system with the host plant roots, AMF will promote root development and increase root biomass [31]. A large amount of research has shown that mycorrhizal plants tend to have more developed roots to absorb essential moisture and nutrients for the host plants under adverse conditions [19, 21]. The symbiosis with AMF not only enhances the absorption of soil nutrients by host plant roots but also creates a mycorrhizal pathway for nutrient uptake by the AM fungal mycelia, and plays an important role in the management of abiotic and biotic stress by the plant [32]. In this study, the root length of M9T337 was significantly reduced after infection by F. solani. The pre-inoculation of AMF promoted root growth, indicating that after the AMF colonized the plant root system, they competed with the soilborne pathogens for the ecological niche and reduced their rate of invasion of the root epidermis (Fig. 1). Simultaneously, after the formation of mycorrhizal plants, a well-developed mycorrhizal network formed a physical barrier at the root surface, which could help the plants more effectively resist the harm caused by pathogens [33, 34]. Inoculation with AMF reduced root mortality by increasing lignification and the activity of resistance-related isozymes against F. oxysporum infection: after the AMF colonized the plant root system, the degree of lignification and the activity of resistance-related enzymes in mycorrhizal plant roots increased, thus reducing the incidence of disease [35]. Effects of different treatments on (A) plant growth, (B) plant biomass and mycorrhizal colonization, (C) root morphology of apple seedlings. 
Different letters represent significant differences at P < 0.05. F. solani, as a plant pathogenic fungus, frequently causes extensive necrosis of the roots, resulting in a decline in their photosynthetic ability and other pathological changes [10]. When pathogenic fungi infect plants, the resistance of mycorrhizal plants is often induced during the early stages of infection to reduce the damage caused by the pathogens [28]. Studies have shown that Funneliformis mosseae, Rhizophagus irregularis, and other AMF have been used to varying degrees to reduce plant diseases caused by pathogenic fungi [36, 37]. Consistent with previous studies, AMF significantly improved the resistance of plants by improving the activities of plant defense enzymes, reducing the oxidation of membrane lipids, and enhancing the plant antioxidant defense system (Fig. 2). These findings suggest that AMF symbiosis reduced the degree of membrane oxidation and improved the antioxidant activities and the content of osmotic regulatory substances to protect the apple root system. Effects of different treatments on the physiological indicators of apple seedlings. (A) defensive enzyme activity, (B) reactive oxygen species content, (C) osmotic regulator substances. Different letters represent significant differences at P < 0.05. Mycorrhizal plants respond to F. solani infection The plant-AMF-pathogen interaction is a complex network system that involves the expression of multiple genes and complex regulation [34, 38]. After pathogens invade plants, the plant will increase the expression of resistance genes and promote the transmission of signal substances and plant metabolism to respond quickly to pathogenic infections [39]. Transcriptome data provide information on gene expression and expression levels and are commonly used to identify the differential responses of plants to biotic and abiotic stresses [40, 41]. 
To further understand the biological function of the DEGs, we performed GO pathway enrichment and MapMan visualization analyses to determine the key differential genes in the main physiological and biochemical metabolic pathways and signal transduction pathways. There were significant morphological differences between the AM and NM roots after pre-inoculation with AMF (Fig. 1). In addition, there were significant differences in gene expression and the DEGs between the AM and NM treatments after infection with F. solani (Fig. 3). In the GO enrichment analysis, more DEGs were concentrated in molecular function, with the most upregulated genes found in the ADP binding (GO:0043531) and calcium ion binding (GO:0005509) pathways (Fig. 4A). Spiking of the calcium ion concentration in the cytoplasm of root epidermal cells is a central signal for plant symbiosis with AMF. Calcium-dependent protein kinases activate downstream calcium/calmodulin-dependent protein kinases to induce the expression of genes related to symbioses [42]. In the presence of pathogens, AMF constantly compete with the pathogens for invasion sites on the plant root system, and thus increase the expression of calmodulin-like protein genes, such as MdCML37 and MdCML38 (Fig. 5). When plant cells sense stress, calmodulin is activated in the stress response to protect the cells from a hyperosmotic environment [43, 44]. Numbers of differentially expressed genes. (A) the experimental design and comparisons, (B) Venn diagram, (C) Diagram illustrating the number of total, up-, and downregulated genes in the roots of mycorrhizal plants that were infected with Fusarium solani (AM-F) or not infected (AM) with F. solani and the roots of non-mycorrhizal plants infected with F. solani (NM-F) or not infected (NM) GO enrichment analysis (A), and the transcription factor (B) numbers of DEGs. The scale bar represents the log2FC (AM-F/NM-F) of DEGs. Red and blue indicate up- and downregulated genes, respectively. 
DEGs, differentially expressed genes; GO, gene ontology Impact of AMF pre-colonization on gene expression in apple roots infected with Fusarium solani. MapMan graphs of (A) biotic stress, (B) secondary metabolism. The scale bar represents the log2FC (AM-F/NM-F) of the DEGs. Red and blue indicate up- and downregulated genes, respectively During the process of the interactions of plants with pathogens, a series of defense mechanisms formed, and the metabolic routes described above were closely related to the defense responses of plants. The related genes of apple-AMF symbionts were analyzed using a MapMan analysis to more comprehensively show the changes in genes of related pathways between mycorrhizal and non-mycorrhizal plants after infection with F. solani. Cells respond to stress through signal transduction and the intracellular regulation of plant physiological and biochemical reactions, and the MAPK signal transduction pathway is at the center of the cellular signal transduction system [45]. Huang et al. [46] found that the inoculation of AMF resulted in a significant upregulation of MdMAPK, along with the functional response of MdMAPKs to plant hormone signaling and stress responses under drought stress. Plant hormone signal transduction played an important role in the interactions of mycorrhizae to protect the plants against pathogens [47]. In this study, we identified a significant upregulation of hormone signaling genes and numerous PR genes. The MAPK signaling pathway and plant hormone signal transduction were essential for the resistance of mycorrhizal plants to pathogens, but the deeper molecular mechanisms still merit further research. TFs respond in mycorrhizal plant resistance to F. solani TFs play an important role in the process of the plant response to the environment [48, 49]. 
Extensive studies have shown that transcription factor families, such as WRKY, AP2, NAC, MYB, and GRAS, are involved in the regulation of the expression of defense-related genes (Fig. 4B). In this study, a large number of AP2, WRKY and MYB transcription factor family genes were involved in the response of mycorrhizal plants against infection with F. solani. Only one WRKY transcription factor gene with diminished expression was detected in the transcriptome, and the others increased significantly. The log2 fold change of MdWRKY76, MdWRKY53, and MdWRKY40 was more than 2.5 (Fig. 6). The results of qRT-PCR were consistent with the RNA-Seq data, and MdWRKY40 was expressed at up to 3.69-fold (Table S3). qRT-PCR verification of the genes in the biotic stress process. A total of 22 genes were selected for qRT-PCR analysis from the RNA-Seq data. (A) A heatmap was generated based on the fold-change values for RT-qPCR, and the color scale for values is shown at the top. (B) Scatterplots with the correlation between the RNA-Seq and qRT-PCR data. qRT-PCR, quantitative reverse transcriptase PCR The overexpression of VvWRKY18 enhances the resistance of Arabidopsis thaliana to Botrytis cinerea through activation of the STILBENE SYNTHASE (STS) genes [50]. The overexpression of HbWRKY40 induced resistance to Colletotrichum gloeosporioides in tobacco [51]. A phylogenetic tree analysis showed that MdWRKY40 is a member of the WRKY IIa subfamily (Fig. S7). We hypothesized that MdWRKY40 played a positive role when mycorrhizal plant roots were infected by F. solani. Therefore, functional validation of MdWRKY40 was performed (Fig. 7). Its overexpression in 'Orin' calli showed that MdWRKY40 enhanced the resistance to F. solani and improved the level of transcription of MdGLU. Y1H, EMSA and LUC reporter analyses identified the binding of MdWRKY40 to the MdGLU promoter, which indicates that it is involved in the resistance of apple to infection by F. solani (Fig. 8). 
Functional characteristics of 'Orin' calli that overexpressed MdWRKY40. (A) qRT-PCR detection of the expression of MdWRKY40 in MdWRKY40-OE transgenic calli. WT represents wild-type calli. OE represents MdWRKY40-OE transgenic lines. (B) Diameters of spots in different apple calli after 4 days of infection with Fusarium solani. (C) Phenotype of WT, empty vector and MdWRKY40-overexpressing transgenic apple calli inoculated with F. solani after 4 days. (D) Transcript levels of MdPR1, MdPR4, MdPR5, MdPR8, MdCHIT and MdGLU in MdWRKY40-OE and control calli infected with F. solani. OE, overexpression; qRT-PCR, quantitative reverse transcriptase PCR. *P < 0.05. **P < 0.01 MdWRKY40 binds to the MdGLU promoter. (A) MdWRKY40 was divided into two fragments, the N-terminus (N) and C-terminus (C). (B) The optimal concentration of 3-AT was determined by cloning proMdGLU into the pHIS2 vector. (C) MdWRKY40 interacted with MdGLU promoter fragments as per the Y1H assay. (D) An EMSA analysis revealed that MdWRKY40 binds to the W-box II of the MdGLU promoter. (E) Luciferase reporter (LUC) assays showed the MdWRKY40-mediated activation of proMdGLU. EMSA, electrophoretic mobility shift assay. *P < 0.05. **P < 0.01 The inoculation of apple seedlings with AMF significantly reduced the deleterious effects of F. solani on the root system, improved the antioxidant enzyme activities, and reduced the degree of membrane lipid oxidation. Most of the DEGs involved in plant-pathogen interaction, glycolysis/gluconeogenesis, and the hormone signal transduction pathway were identified through GO analyses. Both the mycorrhizal and non-mycorrhizal apple plants were subjected to oxidative stress after inoculation with F. solani. However, compared with non-mycorrhizal plants, a large number of resistance genes in mycorrhizal plants were involved in the stress response process to reduce the amount of oxidative damage. 
MdWRKY40 improved the expression of MdGLU by binding to its promoter, thereby participating in the defense of mycorrhizal apple seedlings against infection with F. solani. Plant growth and plant infection Apple rootstock M9T337 was grown in an illuminated greenhouse at 20–30 °C (daytime) and 0–15 °C (night) with a relative humidity of 55–65%. The apple rootstock M9T337 was obtained from Shandong Horticultural Techniques Services Co., Ltd., Tai'an, Shandong, China. The 'Orin' apple callus was provided by Prof. Xue-sen Chen of the State Key Laboratory of Crop Biology (Tai'an, China). The appropriate permission was obtained for the plant collection, and its use was executed in accordance with relevant guidelines. 'Orin' apple calli (Malus domestica cv. 'Orin') were cultured in subculture medium that contained MS, 2.5 mg·L−1 2,4-D, and 0.5 mg·L−1 6-indole-3-butyric acid at room temperature in the dark, and the subculture medium was renewed every 15 days for genetic transformation. F. solani (MG836251.1) was isolated from the roots of a replanted apple tree (Fig. S1) [52]. The hyphae of F. solani were inoculated in sterilized potato dextrose broth (PDB) media and cultured in a shaker at 28 °C for one week. The hyphae were filtered out with sterile gauze, and the spores were counted. The spore suspension was adjusted to 10^5 spores·mL−1 with sterile water. The arbuscular mycorrhizal fungus used in this study was Paraglomus sp. SW1 (CGMCC No. 20744), which was provided by Shandong Agricultural University (Tai'an, China). The inoculant was a mixture of vermiculite, spores (spore density 28·g−1), hyphae, and colonized root segments. Mycorrhizal plant root systems were established over 4 weeks using Paraglomus sp. SW1. We established four treatments: (1) NM: non-mycorrhizal plants that had not been inoculated with F. solani; (2) AM: mycorrhizal plants that had not been inoculated with F. solani; (3) NM-F: non-mycorrhizal plants inoculated with F. solani; and (4) AM-F: mycorrhizal plants inoculated with F. solani. 
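Adjusting a counted spore suspension to a target concentration, as described above, is a simple dilution calculation. The sketch below is only an illustration of that arithmetic (the function name and the example stock concentration are our own, not part of the original protocol):

```python
def water_to_add(stock_conc_per_ml, target_conc_per_ml, stock_vol_ml):
    """Volume of sterile water (mL) needed to dilute a spore suspension.

    Conservation of spores:
    stock_conc * stock_vol = target_conc * (stock_vol + water_added).
    """
    if target_conc_per_ml > stock_conc_per_ml:
        raise ValueError("cannot reach a higher concentration by dilution")
    return stock_vol_ml * (stock_conc_per_ml / target_conc_per_ml - 1)

# e.g. 10 mL of stock counted at 5e5 spores/mL, diluted to 1e5 spores/mL
print(water_to_add(5e5, 1e5, 10))  # -> 40.0 (mL of sterile water)
```

The same relation can be rearranged to work from a fixed final volume instead, depending on how much inoculum is needed.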
After the apple rootstock M9T337 plantlets had grown for 4 weeks with 1% AMF inoculum, 50 mL of a suspension of F. solani spores was used to treat them. After 5 days, the apple roots were collected for transcriptome sequencing, qRT-PCR verification, and an analysis of the enzymes involved in resistance. The apple seedlings were randomly collected at 10 days post-infection to determine the apple biomass, root morphology, and mycorrhizal colonization rate. Rate of mycorrhizal colonization The rate of mycorrhizal colonization was determined as described by Giovannetti and Mosse [53]. A 1 cm root segment was digested with a solution of 10% KOH at 90 °C for 20 min, acidified with 2% HCl for 5 min, stained with 0.05% trypan blue in lactic acid-glycerin solution (lactic acid/glycerin = 1/1) at 90 °C for 30 min, and decolorized overnight with a solution of lactic acid-glycerin (lactic acid/glycerin/water = 1/1/1 [v/v/v]). The root segments were observed under a microscope. The rate of infection of each root segment was assessed from the number of mycorrhizal structures in each section and expressed as 0, 10, 20, ..., 100% of the root. The mycorrhizal colonization rate (%) was calculated as follows: $$\text{Mycorrhizal colonization rate} = \frac{0 \times n_0 + 10\% \times n_{10} + 20\% \times n_{20} + \dots + 100\% \times n_{100}}{\text{total number of root segments}},$$ where $n_i$ is the number of root segments scored as $i\%$ colonized. RNA extraction, cDNA library construction, and Illumina sequencing Total RNA was isolated using an RNA Prep Pure Plant Kit (CWBIO, Beijing, China) according to the manufacturer's instructions. Total RNA was tested with an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). 
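As a worked illustration of the colonization-rate formula above (the function name and the example scores are our own, not from the protocol), the weighted sum reduces to averaging the per-segment colonization fractions once each segment has been scored in 10% steps:

```python
def colonization_rate(segment_scores):
    """Mean mycorrhizal colonization across scored root segments.

    Each score is the fraction (0.0-1.0) of a segment occupied by
    mycorrhizal structures, recorded in 10% steps under the microscope.
    """
    if not segment_scores:
        raise ValueError("no root segments scored")
    return sum(segment_scores) / len(segment_scores)

# four segments scored 0%, 10%, 20% and 100%:
# (0 + 0.1 + 0.2 + 1.0) / 4, i.e. about 0.325 (32.5%)
print(colonization_rate([0.0, 0.1, 0.2, 1.0]))
```

Grouping identical scores and weighting by their counts, as in the formula, gives the same result as this direct average over all segments.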
The synthesis of cDNA was performed using a HiScript II 1st Strand cDNA Synthesis Kit (Vazyme, Nanjing, China). cDNA fragments that were preferentially 250–300 bp long were purified with an AMPure XP system (Beckman Coulter, Brea, CA, USA). The PCR products were purified again after PCR amplification, and the cDNA library was finally obtained [54]. The cDNA library was assessed on an Agilent Bioanalyzer 2100 system (Agilent Technologies). The libraries were sequenced on a HiSeq X Ten System (Illumina, San Diego, CA, USA) by Novogene Co., Ltd. (Beijing, China).

Transcriptomic data analysis

To ensure the quality and reliability of the data analysis, the raw data were filtered by removing reads containing adapters and reads containing N (N indicates that the nucleobase could not be determined). Low-quality reads, defined as reads in which bases with a Phred quality score ≤ 20 comprised ≥ 50% of the read length, were also removed. The Q20, Q30, and GC contents of the clean data were then calculated. All follow-up analyses were based on the high-quality clean data. The clean data obtained by filtering the raw data were aligned to the reference genome sequence of Malus × domestica (GDDH13 v. 1.1) with HISAT2 (v. 2.0.5). FeatureCounts (v. 1.5.0-p3) was used to count the reads that mapped to each gene [55]. The FPKM of each gene was then calculated from the gene length and the read count mapped to that gene. DESeq2 (v. 1.20.0) was used to analyze differential expression between the four treatments [56]. Genes with |log2(fold change)| ≥ 1 and P-value < 0.05 after multiple-testing correction were considered differentially expressed genes (DEGs) in subsequent analyses. The clusterProfiler R package (v. 3.4.4) was used for the Gene Ontology (GO) enrichment analysis of the DEGs [57].
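The DEG thresholds above amount to a simple filter over the DESeq2 results table. A sketch in Python (illustrative only, not the authors' actual R workflow; the gene names and tuple format are made up):

```python
def call_degs(results, lfc_cutoff=1.0, p_cutoff=0.05):
    """Return genes with |log2(fold change)| >= 1 and P < 0.05.

    results: iterable of (gene_id, log2_fold_change, p_value)
    tuples, e.g. exported from a DESeq2 results table.
    """
    return [gene for gene, lfc, p in results
            if abs(lfc) >= lfc_cutoff and p < p_cutoff]

table = [("MdWRKY40", 2.3, 0.001),   # up-regulated, significant
         ("geneB", 0.4, 0.001),      # fold change below cutoff
         ("geneC", -1.8, 0.20)]      # not significant
degs = call_degs(table)              # → ["MdWRKY40"]
```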
MapMan software (version 3.6.0RC1) (http://mapman.gabipd.org/web/guest/mapman) was used to generate functional assignments in the different pathways [58] for each input gene and for the visualization and interpretation of apple gene expression data. Gene annotation for the pathway analysis was prepared with the Mercator online tool on the PlabiPD website (https://www.plabipd.de/portal/mercator4) based on the DNA sequence, using the default annotation parameters.

Measurement of root physiological parameters

The root physiological parameters included the activities of SOD, CAT, and POD, and the contents of PRO, SUG, MDA, H2O2, and O2·−. Each physiological parameter was measured using kits obtained from Suzhou Keming Biotechnology Co., Ltd. (Suzhou, China). All measurements were performed three times, and the average was used for further analysis.

Gene cloning and function validation of MdWRKY40

Sequence information for MdWRKY40 (MD15G1039500) was obtained from the apple genome database (http://www.rosaceae.org). Total RNA extraction and cDNA synthesis were conducted as described in the RNA extraction, cDNA library construction, and Illumina sequencing section. A fragment of 912 bp was obtained by PCR amplification and cloned into the pLB vector (TianGen, http://www.tiangen.com/). The CDS of MdWRKY40 was then inserted into the PRI101-AN vector, which carries a green fluorescent protein (GFP) tag. The constructed overexpression vector was transformed into Agrobacterium tumefaciens LBA4404. Positive A. tumefaciens colonies were identified by PCR and used to infect 'Orin' calli. The infected calli were cultured on MS solid medium at 24 °C in the dark for 1–2 days and then transferred to a screening medium that contained 50 mg·L⁻¹ kanamycin and 250 mg·L⁻¹ carbenicillin. The overexpression of MdWRKY40 was confirmed by PCR.

Yeast one-hybrid test

The target gene fragments MdWRKY40, MdWRKY40-N (0–120 aa), and MdWRKY40-C (121–303 aa) were ligated into the pGADT7 vector.
The MdGLU promoter fragment was cloned into the pHIS2 vector. The yeast one-hybrid test was conducted according to the manufacturer's instructions for Yeast Transformation System 2 (TaKaRa, Dalian, China). The yeast one-hybrid strain was Y187 (Clontech, Takara Bio USA, San Jose, CA, USA). The Y187 strain containing the recombinant pHIS2 vector was grown on –Trp/–His (–T/–H) screening media with different 3-AT concentrations to determine the optimal concentration. To test the interactions, the Y187 strain harboring the recombinant pGADT7 and pHIS2 plasmids was spotted onto media lacking Trp, His, and Leu. The empty pGADT7 plasmid served as the control. The primers used are listed in Supplemental Table S2.

Electrophoretic mobility shift assays

The sequence of MdWRKY40 was inserted into the pET-32a(+) expression vector (Novagen, Madison, WI, USA). The recombinant MdWRKY40 protein was expressed in E. coli BL21 (DE3), and the fusion protein MdWRKY40-His was purified with a His-Tagged Protein Purification Kit (CWBIO, Beijing, China). The biotinylated probe was synthesized by Sangon Biotech Co., Ltd. (Shanghai, China). The fusion protein, probe, and binding buffer were mixed in a centrifuge tube and incubated at 24 °C for 30 min. After 50% glycerol and 5× loading buffer were added to the sample, non-denaturing acrylamide gel electrophoresis was performed, and the protein–nucleic acid complexes were transferred to a nylon membrane. After UV crosslinking, the membrane was blocked with preheated Blocking Buffer, incubated with HRP Conjugate in 20 mL of fresh Blocking Buffer (ThermoFisher, Shanghai, China) at room temperature for 15 min, washed with Washing Buffer (ThermoFisher), and developed.

LUC activity

The full-length CDS of MdWRKY40 was inserted into the pHBT-AvrRpm1 vector, and the MdGLU promoter fragments were inserted into the pFRK1-LUC-nos vector.
Both plasmids were co-transformed into apple callus protoplasts and expressed for 6 h at 24 °C. Subsequently, the protoplasts were suspended in 100 μL of cell lysate. Five μL of cell extract and 20 μL of 1 mmol·L⁻¹ 4-MUG were incubated at 37 °C for 1 h, and 100 μL of 0.2 mol·L⁻¹ sodium acetate was added to terminate the reaction. LUC activity was determined using the Luciferase Reporter Assay System (Promega, Madison, WI, USA).

Quantitative reverse transcription-PCR (qRT-PCR)

The cDNA was synthesized using a HiScript II 1st Strand cDNA Synthesis Kit (Vazyme, Nanjing, China). The reverse transcription reactions began with 500 ng of total RNA. The resulting first-strand cDNA was diluted 10-fold with ddH2O and used as the template for qRT-PCR assays. The qRT-PCR primers were designed with Primer 6.0 software (Premier Biosoft, Palo Alto, CA, USA). The internal reference gene was actin. The reaction system for each primer pair contained 5 μL SYBR Green Mix, 0.3 μL primer (10 μM), 1 μL cDNA, and 3.4 μL ddH2O. The PCR program was 50 °C for 2 min; 95 °C for 10 min; 30 cycles of 95 °C for 15 s, 65 °C for 60 s, and 72 °C; and a final 72 °C for 10 min. The primers were synthesized by Sangon Biotech Co., Ltd. Relative expression was calculated by the 2⁻ΔΔCT method [59]. Each sample had three biological replicates. The primer sequences are shown in Supplementary Table S2. The experimental data are expressed as the means and standard deviations (SD) of three biological replicates. The plant growth, mycorrhizal colonization rate, and physiological data were analyzed by Duncan's test at the 0.05 level using SPSS v. 19.0 (IBM, Inc., Armonk, NY, USA). The qRT-PCR data were analyzed by t-test. An asterisk indicates a significant difference: '*' represents P < 0.05, and '**' represents P < 0.01. Graphs were constructed using GraphPad Prism 8 (San Diego, CA, USA).
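The 2⁻ΔΔCT calculation used for relative expression is short enough to spell out (a sketch; the CT values here are made up for illustration, with actin as the reference gene as in the text):

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Relative expression by the 2^-ddCT method.

    dCT = CT(target) - CT(actin), computed for both the treated
    sample and the control; ddCT is their difference, and relative
    expression is 2 raised to -ddCT.
    """
    ddct = (ct_target - ct_actin) - (ct_target_ctrl - ct_actin_ctrl)
    return 2.0 ** (-ddct)

# Treated sample: target CT 22, actin CT 18.
# Control sample: target CT 24, actin CT 18.
fold_change = relative_expression(22, 18, 24, 18)  # → 4.0
```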
The annotations of the DEGs were based on the GO database and MapMan software. The raw sequence data are available in the NCBI Sequence Read Archive (SRA) repository under accession number PRJNA752816; the SRA Run Selector is available at https://www.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA752816. All data supporting the conclusions of this article are included in the article and its additional files.

Abbreviations

RNA-seq: RNA sequencing
GO: Gene ontology
MAPK: Mitogen-activated protein kinase
ARD: Apple replant disease
AMF: Arbuscular mycorrhizal fungi
AM: Mycorrhizal plants that had not been inoculated with F. solani
AM-F: Mycorrhizal plants inoculated with F. solani
NM: Non-mycorrhizal plants that had not been inoculated with F. solani
NM-F: Non-mycorrhizal plants inoculated with F. solani
SOD: Superoxide dismutase
POD: Peroxidase
MDA: Malondialdehyde
SUG: Soluble sugar
PCA: Principal components analysis
JA: Jasmonic acid
AOS: Allene oxide synthase
ETI: Effector-triggered immunity
qRT-PCR: Quantitative reverse transcriptase-PCR
WT: Wild-type
OE: Over-expression
LUC: Luciferase
CML: Calmodulin-like protein
EMSA: Electrophoretic mobility shift assay

References

Duan NB, Yang B, Sun H, Wang N, Ma YM, Li MJ, et al. Genome re-sequencing reveals the history of apple and supports a two-stage model for fruit enlargement. Nat Commun. 2017;8(1):249. https://doi.org/10.1038/s41467-017-00336-7.
Sharma N, Verma P, Singh DN. Causes and control measures of apple replant problem. IJBSM. 2020;11(3):246–25. https://doi.org/10.23910/1.2020.2090.
Yin CM, Wang M, Wang JY, Chen XS, Shen X, Zhang M, et al. The research advance on apple replant disease. Acta Hort Sin. 2017;44:2215–30. https://doi.org/10.16420/j.issn.0513-353x.2017-0524.
Tewoldemedhin YT, Mazzola M, Labuschagne I, McLeod A. A multi-phasic approach reveals that apple replant disease is caused by multiple biological agents, with some agents acting synergistically. Soil Biol Biochem. 2011;43:1917–27. https://doi.org/10.1016/j.soilbio.2011.05.014.
Sewell G, Roberts AL, Elsey RF.
Apple replant disease: the assessment and results of seedling bio-assays of growth responses to soil fumigation with chloropicrin. Ann Appl Biol. 1992;121(1):199–209. https://doi.org/10.1111/j.1744-7348.1992.tb04001.x.
van Schoor L, Denman S, Cook N. Characterisation of apple replant disease under South African conditions and potential biological management strategies. Sci Hortic. 2009;119:153–62. https://doi.org/10.1016/j.scienta.2008.07.032.
Tewoldemedhin YT, Mazzola M, Botha WJ, Spies CF, McLeod A. Characterization of fungi (Fusarium and Rhizoctonia) and oomycetes (Phytophthora and Pythium) associated with apple orchards in South Africa. Eur J Plant Pathol. 2011;130(2):215–29.
Strauss S, Kluepfel D. Anaerobic soil disinfestation: a chemical-independent approach to preplant control of plant pathogens. J Integr Agr. 2015;14:2309–18. https://doi.org/CNKI:SUN:ZGNX.0.2015-11-015.
Wang G, Yin C, Pan FB, Wang XB, Xiang L, Wang YF, et al. Analysis of the fungal community in apple replanted soil around Bohai gulf. Hortic Plant J. 2018;4(5):175–81. https://doi.org/CNKI:SUN:YYZW.0.2018-05-001.
Yan K, Han G, Ren C, Zhao S, Wu X, Bian T. Fusarium solani infection depressed photosystem performance by inducing foliage wilting in apple seedlings. Front Plant Sci. 2018;9:479. https://doi.org/10.3389/fpls.2018.00479.
Xiang L, Wang M, Pan FB, Wang GS, Jiang WT, Wang YF, et al. Transcriptome analysis of Malus domestica 'M9T337' root molecular responses to Fusarium solani infection. Physiol Mol Plant Pathol. 2021;113:101567. https://doi.org/10.1016/j.pmpp.2020.101567.
Coleman JJ. The Fusarium solani species complex: ubiquitous pathogens of agricultural importance. Mol Plant Pathol. 2016;17(2):146–58. https://doi.org/10.1111/mpp.12289.
Xiang L, Wang M, Huang JX, Jiang WT, Yan ZB, Chen XS, et al. MdWRKY74 is involved in resistance response to apple replant disease. Plant Growth Regul. 2022;96:145–56. https://doi.org/10.1007/s10725-021-00766-w.
Xiang L, Zhao L, Wang M, Huang J, Chen X, Yin C, et al. Physiological responses of apple rootstock M.9 to infection by Fusarium solani. HortScience. 2021;8:1–8. https://doi.org/10.21273/HORTSCI15945-21.
Xu SS, Kan W, Kong BH, Ma J, Yang GZ. First report of Fusarium oxysporum and Fusarium solani causing root rot on Malania oleifera in China. Plant Dis. 2019;104(2):584. https://doi.org/10.1094/PDIS-07-19-1426-PDN.
Zhang L, Zhou J, George TS, Limpens E, Feng G. Arbuscular mycorrhizal fungi conducting the hyphosphere bacterial orchestra. Trends Plant Sci. 2021;27(4):402–11. https://doi.org/10.1016/j.tplants.2021.10.008.
Aguilar R, Carreón-Abud Y, López-Carmona D, Larsen J. Organic fertilizers alter the composition of pathogens and arbuscular mycorrhizal fungi in maize roots. J Phytopathol. 2017;165(7–8):448–54. https://doi.org/10.1111/jph.12579.
Gu LJ, Zhao ML, Ge M, Zhu SW, Cheng BJ, Li XY. Transcriptome analysis reveals comprehensive responses to cadmium stress in maize inoculated with arbuscular mycorrhizal fungi. Ecotoxicol Environ Saf. 2019;186:109744. https://doi.org/10.1016/j.ecoenv.2019.109744.
Latef AAHA, Hashem A, Rasool S, Abd_Allah EF, Alqarawi AA, Egamberdieva D, et al. Arbuscular mycorrhizal symbiosis and abiotic stress in plants: a review. J Plant Biol. 2016;59(5):407–26. https://doi.org/10.1007/s12374-016-0237-7.
Chen Q, Wu WW, Qi SS, Cheng H, Li Q, Ran Q, et al. Arbuscular mycorrhizal fungi improve the growth and disease resistance of the invasive plant Wedelia trilobata. J Appl Microbiol. 2021;130(2):582–91. https://doi.org/10.1111/jam.14415.
de Novais CB, Sbrana C, da Conceição JE, Rouws LFM, Giovannetti M, et al. Mycorrhizal networks facilitate the colonization of legume roots by a symbiotic nitrogen-fixing bacterium. Mycorrhiza. 2020;30:389–96. https://doi.org/10.1007/s00572-020-00948-w.
Hou SW, Hu JL, Wu FY, Lin XG. The disease suppression and the related application of arbuscular mycorrhizal fungi. China J Appl Environ Biol. 2018;24(5):941–51.
https://doi.org/10.3724/SP.J.1145.2017.12018.
Vigo C, Norman JR, Hooker JE. Biocontrol of the pathogen Phytophthora parasitica by arbuscular mycorrhizal fungi is a consequence of effects on infection loci. Plant Pathol. 2001;49(4):509–14. https://doi.org/10.1046/j.1365-3059.2000.00473.x.
Pepe A, Giovannetti M, Sbrana C. Lifespan and functionality of mycorrhizal fungal mycelium are uncoupled from host plant lifespan. Sci Rep. 2018;8(1):10235. https://doi.org/10.1038/s41598-018-28354-5.
da Trindade R, Almeida L, Xavier L, Lins AL, Andrade EH, Maia JG, et al. Arbuscular mycorrhizal fungi colonization promotes changes in the volatile compounds and enzymatic activity of lipoxygenase and phenylalanine ammonia lyase in Piper nigrum L. 'Bragantina'. Plants. 2019;8(11):442. https://doi.org/10.3390/plants8110442.
Bagy HMMK, Hassan EA, Nafady NA, Dawood MFA. Efficacy of arbuscular mycorrhizal fungi and endophytic strain Epicoccum nigrum ASU11 as biocontrol agents against blackleg disease of potato caused by bacterial strain Pectobacterium carotovora subsp. atrosepticum PHY7. Biol Control. 2019;134:103–13. https://doi.org/10.1016/j.biocontrol.2019.03.005.
Marquez N, Giachero M, Gallou A, et al. Transcriptome analysis of mycorrhizal and nonmycorrhizal soybean plantlets upon infection with Fusarium virguliforme, one causal agent of sudden death syndrome. Plant Pathol. 2019;68(3):470–80. https://doi.org/10.1111/ppa.12964.
Cui L, Guo F, Zhang JL, Yang S, Meng JJ, Geng Y, et al. Arbuscular mycorrhizal fungi combined with exogenous calcium improves the growth of peanut (Arachis hypogaea L.) seedlings under continuous cropping. J Integr Agr. 2019;18(2):407–16. https://doi.org/10.1016/S2095-3119(19)62611-0.
Song YY, Chen DM, Lu K, Sun ZX, Zeng RS. Enhanced tomato disease resistance primed by arbuscular mycorrhizal fungus. Front Plant Sci. 2015;6:786. https://doi.org/10.3389/fpls.2015.00786.
Zhao XY, Qi CH, Jiang H, Zhong MS, You CX, Li YY, et al. MdHIR4 transcription and translation levels associated with disease in apple are regulated by MdWRKY31. Plant Mol Biol. 2019;101(1–2):149–62. https://doi.org/10.1007/s11103-019-00898-8.
Ma J, Martina J, Li Y, Yu X, He C. Impact of arbuscular mycorrhizal fungi (AMF) on cucumber growth and phosphorus uptake under cold stress. Funct Plant Biol. 2015;42(12):1158–67. https://doi.org/10.1071/FP15106.
Li N, Wang C, Li X, Liu M. Effects of earthworms and arbuscular mycorrhizal fungi on preventing Fusarium oxysporum infection in the strawberry plant. Plant Soil. 2019;443(1):139–53. https://doi.org/10.1007/s00572-020-00948-w.
Meng L, Zhang A, Wang F, Han X, Wang D, Li S. Arbuscular mycorrhizal fungi and rhizobium facilitate nitrogen uptake and transfer in soybean/maize intercropping system. Front Plant Sci. 2015;6:339. https://doi.org/10.3389/fpls.2015.00339.
Hou S, Hu J, Wu F, Lin X. The function and potential application of disease suppression by arbuscular mycorrhizal fungi. Chin J Appl Environ Biol. 2018;24(5):941–51. https://doi.org/10.19675/j.cnki.1006-687x.2017.12018.
Jalaluldeen AM, Sijam K, Ramadan NA. Active changes of lignification-related enzymes in chili pepper response to Glomus mosseae against Fusarium oxysporum. Aust J Basic Appl Sci. 2020;14(6):1–6. https://doi.org/10.22587/ajbas.2020.14.6.1.
Ravnskov S, Cabral C, Larsen J. Mycorrhiza induced tolerance in Cucumis sativus against root rot caused by Pythium ultimum depends on fungal species in the arbuscular mycorrhizal symbiosis. Biol Control. 2019;141:104133. https://doi.org/10.1016/j.biocontrol.2019.104133.
Trindade R, Almeida L, Xavier L, Andrade EH, Silva J. Influence on secondary metabolism of Piper nigrum L. by co-inoculation with arbuscular mycorrhizal fungi and Fusarium solani f. sp. piperis. Microorganisms. 2021;9(3):484. https://doi.org/10.3390/microorganisms9030484.
Wang X, Ding T, Li Y, Guo Y, Li Y, Duan T.
Dual inoculation of alfalfa (Medicago sativa L.) with Funneliformis mosseae and Sinorhizobium medicae can reduce Fusarium wilt. J Appl Microbiol. 2020;129. https://doi.org/10.1111/jam.14645.
Zhang WZ, Gu LJ, Duan TY. Research progress on the mechanism of AM fungi for improving plant stress resistance. Pratacultural Sci. 2018;35(3):491–507. https://doi.org/10.11829/j.issn.1001-0629.2017-0169.
Yu XM, Wang JZ, Xue XM, Chen R, Nie PX, Wang GP, et al. Preliminary studies on the mechanism of bitter pit in apple based on whole-transcriptomic sequencing analysis. Acta Phytopathol Sin. 2020;50(4):405–19. https://doi.org/10.13926/j.cnki.apps.000333.
Tian L, Chang C, Ma L, Nasir F, Tian C. Comparative study of the mycorrhizal root transcriptomes of wild and cultivated rice in response to the pathogen Magnaporthe oryzae. Rice. 2019;12(1):35. https://doi.org/10.1186/s12284-019-0287-9.
Campos-Soriano L, Gómez-Ariza J, Bonfante P, Segundo BS. A rice calcium-dependent protein kinase is expressed in cortical root cells during the pre-symbiotic phase of the arbuscular mycorrhizal symbiosis. BMC Plant Biol. 2011;11(1):90. https://doi.org/10.1186/1471-2229-11-90.
Virdi AS, Singh S, Singh P. Abiotic stress responses in plants: roles of calmodulin-regulated proteins. Front Plant Sci. 2015;6:809. https://doi.org/10.3389/fpls.2015.00809.
Hu R, Wang Z, Wu P, Tang J, Hou X. Identification and abiotic stress analysis of calmodulin-binding transcription activator/signal responsive genes in non-heading Chinese cabbage (Brassica campestris ssp. chinensis Makino). Plant Omics. 2015;8(2):141–7.
Meng X, Zhang S. MAPK cascades in plant disease resistance signaling. Annu Rev Phytopathol. 2013;51(1):245–66. https://doi.org/10.1146/annurev-phyto-082712-102314.
Huang D, Ma M, Wang Q, Zhang M, Jing G, Li C, et al. Arbuscular mycorrhizal fungi enhanced drought resistance in apple by regulating genes in the MAPK pathway. Plant Physiol Biochem. 2020;149:245–55.
https://doi.org/10.1016/j.plaphy.2020.02.020.
Li Y, Liu Z, Hou H, Lei H, Zhu X, Li X, et al. Arbuscular mycorrhizal fungi-enhanced resistance against Phytophthora sojae infection on soybean leaves is mediated by a network involving hydrogen peroxide, jasmonic acid, and the metabolism of carbon and nitrogen. Acta Physiol Plant. 2013;35(12):3465–75. https://doi.org/10.1007/s11738-013-1382-y.
Jiang JJ, Ma SH, Ye NH, Jiang M, Cao JS, Zhang J. WRKY transcription factors in plant responses to stresses. J Integr Plant Biol. 2017;59(2):86. https://doi.org/10.1111/jipb.12513.
Wang L, Gao XQ, Zhu LH, Zhou YL, Li ZK. Advances in research on function of WRKY transcription factor genes in plant resistance. J Plant Genet Resour. 2011;12(1):80–5. https://doi.org/10.13430/j.cnki.jpgr.2011.01.005.
Wang K, Li C, Lei C, Zou Y, Fang Y. Dual function of VvWRKY18 transcription factor in the β-aminobutyric acid-activated priming defense in grapes. Physiol Plantarum. 2021;172(3):1477–92. https://doi.org/10.1111/ppl.13341.
Yang J, Wang Q, Luo H, He C, An B. HbWRKY40 plays an important role in the regulation of pathogen resistance in Hevea brasiliensis. Plant Cell Rep. 2020;39:1095–107. https://doi.org/10.1007/s00299-020-02551-x.
Wang GS. Studies on fungal community in replanted soil around Bohai gulf and alleviation of apple replant disease by mixed cropping with Allium fistulosum L. Doctoral dissertation. 2019. https://doi.org/CNKI:CDMD:1.1019.013816.
Giovannetti M, Mosse B. An evaluation of techniques for measuring vesicular-arbuscular mycorrhizae in roots. New Phytol. 1980;84:489–500. https://doi.org/10.1111/j.1469-8137.1980.tb04556.x.
Parkhomchuk D, Borodina T, Amstislavskiy V, Banaru M, Hallen L, Krobitsch S, et al. Transcriptome analysis by strand-specific sequencing of complementary DNA. Nucleic Acids Res. 2009;37(18):e123. https://doi.org/10.1093/nar/gkp596.
Liao Y, Smyth GK, Shi W. featureCounts: an efficient general purpose program for assigning sequence reads to genomic features.
Bioinformatics. 2014;30(7):923–30. https://doi.org/10.1093/bioinformatics/btt656.
Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014;15(12):1–21. https://doi.org/10.1186/s13059-014-0550-8.
Goldstein LD, Cao Y, Pau G, Lawrence M, Wu TD, Seshagiri S, et al. Prediction and quantification of splice events from RNA-Seq data. PLoS One. 2016;11(5):e0156132. https://doi.org/10.1371/journal.pone.0156132.
Thimm O, Blasing O, Gibon Y, Nagel A, Meyer S, Kruger P, et al. MAPMAN: a user-driven tool to display genomics data sets onto diagrams of metabolic pathways and other biological processes. Plant J. 2004;37:914–39. https://doi.org/10.1111/j.1365-313X.2004.02016.x.
Zhang J, Xu HF, Wang N, Jiang SH, Fang HC, Zhang ZY, et al. The ethylene response factor MdERF1B regulates anthocyanin and proanthocyanidin biosynthesis in apple. Plant Mol Biol. 2018;98:205–18. https://doi.org/10.1007/s11103-018-0770-5.

Acknowledgements

The authors gratefully acknowledge Yanfang Wang, Haiyan Wang, Zhubing Yan, and other students for helping us perform the study. The research was supported by the National Natural Science Foundation of China (32072510), the China Agriculture Research System of MOF and MARA (CARS-27), the Shandong Agricultural Major Applied Technology Innovation Project (SD2019ZZ008), the Taishan Scholar Funded Project (No. ts20190923), the Qingchuang Science and Technology Support Project of Shandong Colleges and Universities (2019KJF020), the Natural Science Foundation of Shandong Province (ZR2020MC131), and the National Key Research and Development Program of China (2020YFD1000201). Mei Wang and Weixiao Tang contributed equally as first authors.
State Key Laboratory of Crop Biology / College of Horticultural Science and Engineering, Shandong Agricultural University, Tai'an, 271018, Shandong, People's Republic of China: Mei Wang, Weixiao Tang, Li Xiang, Xuesen Chen, Xiang Shen, Chengmiao Yin & Zhiquan Mao
Forestry College of Shandong Agricultural University, Tai'an, 271018, Shandong, China: Mei Wang

MW, CY, and ZM designed and performed the experiments; XC and XS provided experimental guidance; MW, WT, and LX carried out the data processing and wrote the paper. All authors read and approved the final manuscript. Correspondence to Chengmiao Yin or Zhiquan Mao.

Additional file 1: Supplementary Table S1. Sequencing data quality statistics.
Additional file 2: Supplementary Table S2. List of primers used in this research.
Additional file 3: Supplementary Table S3. FPKM values and qRT-PCR data of 22 candidate DEGs. DEGs, differentially expressed genes; FPKM, fragments per kilobase of transcript per million mapped fragments; qRT-PCR, quantitative reverse transcriptase-PCR.
Additional file 4: Supplementary Fig. S1. The morphological observation of F. solani.
Additional file 5: Supplementary Fig. S2. The development of Paraglomus sp. SW1 in M9T337 seedling roots.
Additional file 6: Supplementary Fig. S3. Principal coordinate analysis of the transcriptome.
Additional file 7: Supplementary Fig. S4. MdWRKY40 and MD15G1039500 sequence alignment.
Additional file 8: Supplementary Fig. S5. Analysis of the MdGLU promoter sequence.
Additional file 9: Supplementary Fig. S6. Electrophoretic mobility shift assay (EMSA) showing the binding of MdWRKY40 to the W-box motif in the promoter of MdGLU.
Additional file 10: Supplementary Fig. S7. Phylogenetic tree constructed from protein sequences of WRKY transcription factors. The Arabidopsis thaliana WRKYs were obtained from the TAIR database (https://www.arabidopsis.org/).

Wang, M., Tang, W., Xiang, L. et al. Involvement of MdWRKY40 in the defense of mycorrhizal apple against Fusarium solani. BMC Plant Biol 22, 385 (2022).
https://doi.org/10.1186/s12870-022-03753-z

Keywords: Fusarium solani; Biotic stress; Plant-pathogen interaction; Resistance genes
Novel neural network application for bacterial colony classification

Lei Huang1 & Tong Wu (ORCID: orcid.org/0000-0002-1838-9582)2

Theoretical Biology and Medical Modelling volume 15, Article number: 22 (2018)

Bacterial colony morphology is the first step of classifying bacterial species before sending them to a subsequent identification process with devices such as the VITEK 2 automated system or a mass spectrometry microbial identification system. It is essential as a pre-screening process because it can greatly reduce the scope of possible bacterial species, make the subsequent identification more specific, and increase work efficiency in clinical bacteriology. However, this work requires adequate clinical laboratory expertise in bacterial colony morphology, which is especially difficult for beginners to acquire. This study presents automatic programs for the bacterial colony classification task by applying deep convolutional neural networks (CNNs), a technique widely used for digital image analysis in hospitals. The 18 most common bacterial colony classes from Peking University First Hospital were used to train this framework, and images outside the training dataset were used to test the performance of the classifier. The feasibility of this framework was verified by comparing the predicted results with the standard bacterial categories. The classification accuracy over all 18 bacteria reached 73%, and the accuracy and specificity for individual bacterial classes reached as high as 90%. The supervised neural networks we use have promising classification characteristics for the bacterial colony pre-screening process, and the unsupervised network should have advantages in revealing novel characteristics from images, which can provide practical indications to clinical staff.
The rapid development of computational image processing systems has largely benefited hospitals in image analysis for diagnosis and investigation. Most of these technologies are applied to imaging facilities such as ultrasonography, CT, SPECT, and MRI [1–4], but few of them have been used directly as a computational vision method for bacterial classification, and there are some limitations to the traditional bacterial classification methods, which depend on clinical expertise in the very diverse colony structures of different bacterial species [5]. Firstly, they are time-consuming and laborious for staff [6]. Secondly, the identification requires sufficient expertise in bacterial colony morphology, which is difficult for beginners to acquire. Therefore, the development of an automatic bacterial colony morphology identification system is helpful for reducing the workload of clinical staff as well as acting as a reference for beginners. Some publications have presented useful features for identifying different bacterial species, such as shape, size, surface, border, opacity, and color [7], in combination with staining methods such as Gram staining [8]. However, the discrimination of bacterial strains among a large number of samples is still very complicated. The traditional manual discrimination, based on expert experience in classifying circular or irregular colony shape, convex or concave colony elevation, etc., requires a great deal of manpower and clinical expertise, especially in situations with large numbers of samples in a tertiary hospital. A framework based on bacterial colony morphology data can be a convenient supplement to the preliminary identification process. It is therefore necessary to build a computational vision framework that can automatically classify many bacteria to produce preliminary screening results.
These results can be used to narrow down the classification scope in subsequent, more advanced and specific identification procedures such as the VITEK 2 system [9] and MALDI-TOF mass spectrometry detection systems [10]. Deep learning is a kind of machine learning method based on learning data representations [11], and one of its applications is computational vision. For computational vision, it mimics the arrangement of neural networks in the human brain, which can learn and transport different qualities of information through multiple layers of transformation [12]. It can obtain feature representations by automatically learning from raw images without applying human experience, and it can model these abstract features by using multiple constituent non-linear transformations. The convolutional neural network is one of these machine learning methods; it mimics the connectivity pattern between neurons of the visual cortex [13]. It can extract hierarchical image feature representations based on multi-layer processing, and the extracted features can perform better than hand-crafted features because they are more general and less subjective. In this study, feature representations of colonies from 18 bacterial species were learnt to build convolutional neural networks. Commonly, a preliminary bacterial classification is given as training labels by human experience of bacterial colonies in the clinical laboratory. These labels could then be aligned with the specific features of bacterial species extracted by our networks, which could be recorded and further applied for network prediction. The building of one computational classification model depended on supervised images with standard labels, while the building of the other model relied on unsupervised images, which were used only for the neural network to extract colonial features.
The predicted labels and annotations from both neural networks are quite useful because they can aid the operation of the clinical laboratory and can offer clinical staff insights into bacterial colony classification based on the extracted features. The following image classification techniques are divided into supervised and unsupervised classifications. The supervised classification methods include a traditional convolutional neural network and a special convolutional neural network named AlexNet, designed by the SuperVision group for large-scale visual recognition [14]. Besides the input layer and output layer, both of these CNN models generally consist of four types of layers: convolutional layers, ReLU layers, pooling layers, and fully connected layers. The function of each layer and the detailed training processes are given in the following part. Then, a kind of unsupervised method named the autoencoder is introduced, and its general structure and detailed training processes are presented, to make a comparison with the supervised methods and to verify the feasibility of both methods.

Traditional convolutional neural network

General structure: Our convolutional neural network has a basic architecture built with multiple distinct layers, each having its own unique function in transforming input volumes to output volumes. Besides the input and output layers, there are hidden layers in between that play the most important part in filtering and transporting information. These are the convolutional layer, the pooling layer, and the fully connected layer. A convolutional layer has a set of learnable filters, and each filter has many receptive fields that extend through the whole input. The depth of a convolutional layer is the number of neurons in the layer, and each neuron is able to extract a specific feature detected at positions of the input.
A 2-dimensional feature map is constructed from these features and connects to neurons of the next adjacent layer through receptive fields, a spatially local connectivity pattern. The feature map thus acts as the input of the next layer, and each output volume can be viewed as a neuron output that looks at a small region of the input. In this way, spatial image information is extracted and transmitted through multiple connected layers. Because the convolutional network representation involves many parameters and a large spatial size, a pooling layer is introduced to reduce both, decreasing the computational load and helping avoid over-fitting. During pooling, the number of neurons in each layer, i.e. the depth dimension, remains unchanged; only the size of each depth slice is reduced. Finally, after several convolutional layers interspersed with pooling layers, all activations are gathered in one layer, the fully connected layer. Considering the input signal, weights and biases together, the fully connected layer generates the output value and so performs the high-level judgement for bacterial classification. Each layer: Our convolutional network classification model has seven layers in total, each with its own function; the general scheme of this connected seven-layer network is depicted in Fig. 1. The layers are an image input layer, convolutional layer, rectified linear (ReLU) layer, max pooling layer, fully connected layer, softmax layer and classification layer, respectively. The general structure of the conventional neural network The first layer, the image input layer, determines the size, i.e. the height and width, of each input image. The size of every image must be identical across all three dimensions, which represent the height, width and RGB color channels of the images, respectively.
Since our input images are in color, the color channel is set to 3, so our input size is 100-by-100-by-3. The next layer is the convolutional layer, whose parameters consist of several filters convolved across the height and width of each input image. The filter size is important because it determines the window the filter uses to scan the whole image. Here we use size 3, meaning the filters applied in the convolutional layer are 3-by-3. We choose this small size because the discriminating differences among bacterial species are subtle, such as variations in the shape of colony margins, and small filters are better at highlighting such fine variations among bacterial colonies. To represent the features of the bacterial colony images fully, we set the number of filters to 50, which is equivalent to 50 neurons that can extract, evaluate and pass 50 kinds of information to the next layer. As the input image volume passes through the convolutional layer, dot products are computed between the filter entries and the input. From the resulting dot-product map, the convolutional layer can detect specific features at particular spatial positions in the input, and these detections act as filter activations. All activated filters are combined into activation maps, which form the output volume of the convolutional layer. The convolutional layer is followed by the ReLU layer, which rectifies the convolutional output. Its processing function can be written as f(x) = x⁺ = max(0, x); it removes negative values from the convolutional output and passes positive values through linearly [15]. We place a pooling layer after the rectified linear unit.
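The rectification step can be illustrated with a small sketch (plain Python, not the authors' code; the feature-map values below are invented):

```python
def relu(feature_map):
    """Apply f(x) = max(0, x) elementwise to a 2-D feature map."""
    return [[max(0.0, x) for x in row] for row in feature_map]

# Negative responses are zeroed; positive ones pass through unchanged.
fmap = [[-1.5, 0.0, 2.0],
        [ 3.0, -0.5, 1.0]]
print(relu(fmap))  # → [[0.0, 0.0, 2.0], [3.0, 0.0, 1.0]]
```

In a real network this would run over every depth slice of the convolutional output volume.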
It is responsible for reducing the spatial dimensions of the output of the previous convolutional layer, with the aim of decreasing computational overhead and avoiding over-fitting given the many parameters to consider. The pooling method we use is max pooling, which scans the input with a rectangular region and takes the maximum element in that region. The height and width of this scanning region are 2, and we also set the stride, the number of elements by which the region moves horizontally and vertically, to 2. Because the stride equals the size of the scanning region, the scanned regions never overlap. The following fully connected layer connects to all filters, i.e. all neurons, in the previous layer. It combines all features learned by the previous layers and integrates this information for the overall classification decision. The output size of this layer is 18, meaning the fully connected layer's information integration distinguishes 18 classes. When more than two classes are considered, the output of the fully connected layer is activated by a softmax layer. Assume there are k classes in total, and denote the set of observed data by x and the set of parameters by θ. The recognized probability of each class is: $$ P(c_{r}|x,\theta)=\frac{P(x,\theta|c_{r})P(c_{r})}{{\sum\nolimits}_{j=1}^{k}P(x,\theta|c_{j})P(c_{j})} $$ Here P(c_r) is the prior probability of class r, and \({\sum \nolimits }_{j=1}^{k}P(c_{j}|x,\theta)\) equals 1 [16]. The probabilities returned by this activation function are then assigned to mutually exclusive classes by the final classification layer.
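As a sketch (not the authors' implementation), a softmax activation turns k raw class scores into probabilities that sum to 1; the scores below are made-up numbers, and subtracting the maximum score first is a standard numerical-stability trick not mentioned in the paper:

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities summing to 1."""
    m = max(scores)                       # stability: shift so the largest exponent is 0
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])          # hypothetical scores for 3 classes
print(probs)                              # the largest score gets the largest probability
print(sum(probs))
```

The classification layer then simply assigns each image to the class with the highest returned probability.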
Detailed training: The choice of training parameters and training algorithm is essential for the feasibility of model building and application. We set the training algorithm to stochastic gradient descent with momentum, the number of epochs over the whole training data to 30, and the initial learning rate to 0.0001. Given the error function calculated from the entire training data and the predicted outputs after several training iterations, the stochastic gradient descent algorithm minimizes the error function by taking steps in the direction of steepest descent [17]. Let a be one parameter of the training model, E(a) the calculated error function, and μ the learning rate; the (i+1)-th update of this parameter can then be expressed as: $$ a_{i+1}=a_{i}-\mu\nabla E(a_{i}) $$ A momentum term, which determines how much the parameter update of the previous step contributes to the current iteration, can be added to this equation to prevent the error function E(a) from oscillating across steep descent directions toward the objective [18]. With σ as the coefficient of the contribution of the previous iteration to the current one, the modified update rule is: $$ a_{i+1}=a_{i}-\mu\nabla E(a_{i})+\sigma(a_{i}-a_{i-1}) $$ This momentum term can be understood by analogy with momentum in physics: the gradient acts like a force causing acceleration, so the parameter travels in a direction that reduces oscillation [19]. A further point is that we have 18 classes and 18 datasets of observations, so the whole error function E(a) can be decomposed into the average of the error functions of each dataset, \(\frac {1}{k}\sum _{j=1}^{k}E(a_{j})\).
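The update rule above can be sketched numerically with a toy one-dimensional error function E(a) = a², so that ∇E(a) = 2a (the function, learning rate and momentum coefficient here are invented for illustration, not the paper's model):

```python
def sgd_momentum(grad, a0, lr=0.1, sigma=0.9, steps=50):
    """Iterate a_{i+1} = a_i - lr * grad(a_i) + sigma * (a_i - a_{i-1})."""
    prev, cur = a0, a0          # treat a_{-1} = a_0 so the first step has no momentum
    for _ in range(steps):
        nxt = cur - lr * grad(cur) + sigma * (cur - prev)
        prev, cur = cur, nxt
    return cur

# Toy error function E(a) = a^2 with gradient 2a; its minimum is at a = 0.
a_final = sgd_momentum(lambda a: 2.0 * a, a0=5.0, lr=0.1, sigma=0.5, steps=200)
print(abs(a_final) < 1e-3)  # the iterate has converged toward the minimum
```

The momentum term σ(a_i − a_{i−1}) carries part of the previous step forward, which damps the zig-zagging that plain gradient descent shows on steep, narrow error surfaces.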
In this way the parameters are updated by minimizing the whole error function with the stochastic gradient descent algorithm, computing the gradients with respect to the parameters at the current iteration by backpropagation. By taking these gradients as input and updating the parameters to minimize the error function again and again, the neural network training model is built to fit the entire training data in an optimal way. AlexNet neural network General structure: Because our first convolutional neural network is simple, a bigger and deeper convolutional neural network must be considered for the more complicated task of classifying 18 bacterial colony classes with similar morphologies. We therefore adopt a well-known pretrained convolutional neural network, AlexNet, for this classification problem. Since AlexNet is a convolutional neural network, its general structure also consists of an image input layer, convolutional layers, pooling layers, fully connected layers, and so on. However, AlexNet is much deeper than a common convolutional neural network: it has more channels of convolutional layers, together with layers that gather these multi-channel outputs to collect and normalize information [20]. These specific AlexNet layers are called cross-channel normalization layers. Moreover, as the distribution of convolutional layers differs, several convolutional layers from different channels are placed at the second stage of AlexNet to extract more features from images, and after a set of pooling layers there is a series of dense convolutional layers, i.e. convolutional layers stacked on top of each other without any pooling layer inserted between them [21]. Each layer and detailed training: There are 25 layers in total in this AlexNet architecture.
The first layer is the image input layer for images of dimensions 227×227×3. Next come two convolutional–ReLU–cross-channel-normalization–pooling cycles, followed by condensed convolutional layers and ReLU layers. The convolutional layer dimensions in the first two cycles are 11×11×3, 5×5×48 and 3×3×256, respectively, and the dimensions of the subsequent dense convolutional layers are 3×3×192 and 3×3×192, in order. These are followed by fully connected layers with interspersed ReLU layers and dropout layers. At the end come the softmax layer and classification layer, similar to those of our previous convolutional neural network. The dropout layer in this AlexNet architecture randomly drops half of the trained neurons, which reduces the computational load, especially for this multi-channel convolutional neural network, without discarding any parameters computed during training. The multi-channel design coupled with dropout effectively prevents over-fitting and long computation times, which is an essential improvement that lets AlexNet handle large amounts of training data. We do not train this AlexNet network from scratch but adapt it to our application. After taking half of the entire dataset as training data, we use the 20th layer of the network, the second fully connected layer array, to extract features from our training images. Because the chosen layer is a 4096-element fully connected layer, we obtain a 4096-element feature vector from each image, and we collect these into a 4096×2491 trainingfeatures matrix. Next, to build this supervised learning model, we label each of the 2491 images with its standard bacterial category. By combining the trainingfeatures matrix and the labeled categorical matrix, we build a support vector machine (SVM) classifier.
This algorithm can efficiently perform non-probabilistic binary linear classification as well as non-linear classification via the kernel trick. Based on the qualities of the AlexNet network and the SVM algorithm, we obtain an elaborate and efficient classifier suitable for large amounts of data and many classes. Unsupervised Autoencoder neural network General structure: The morphological variance of bacterial colonies is great, especially across images of colonies cultured on different agar plates, such as MacConkey agar and blood agar. To construct models that find patterns more accurately and tolerate anomalies, we use both supervised and unsupervised machine learning methods for this classification problem. For the unsupervised artificial neural network, we apply the Autoencoder neural network, which can extract features automatically from unlabeled bacterial colony images. Our Autoencoder has one input layer, one output layer, and several hidden layers connected between them. The number of nodes in the input layer equals that in the output layer, and training is feedforward and non-recurrent. Another feature distinguishing this Autoencoder from a convolutional neural network is its encoder and decoder parts [22]. Every input value x from the input space is weighted with a weight matrix W to obtain the code: $$ z=\alpha(Wx+\delta) $$ Here z is the corresponding code, i.e. the latent variable, α is the activation function for this input, and δ is the bias vector. After encoding, the decoder layer maps z to the output space \(\acute x\), which has the same dimension as the input space of x: $$ \acute x=\acute\alpha\left(\acute Wz+\acute\delta\right) $$ Through the backpropagation algorithm, each \(\acute x\) in the output space is made similar to the corresponding x in the input space, so that the network approximates the identity function.
Thus the error function of x and \(\acute x\) can be written as [23]: $$ E(x,\acute x)=\left|x-\acute\alpha(\acute W(\alpha(Wx+\delta))+\acute\delta)\right|^{2} $$ By minimizing this error function and updating the weights, the model improves its performance continuously. Each layer and detailed training: The model has several components. The first hidden layer has 800 hidden units, forcing it to learn a compressed feature representation of the input. The dimension of our input images, 50×50×3, is much larger than 800, which lets the encoder layer learn the more significant features for discriminating bacterial colonies. We set the number of training epochs for the first hidden layer, the encoder layer, to 400; the weight of each element of the input space, as described above, to 0.004; and the average output of each neuron in the hidden layer to 0.15. After encoding, the decoder layer tries to reverse the mapping to reconstruct the original input. Using the data extracted by the hidden layer of the first Autoencoder, we build a second Autoencoder in a similar way. The difference is that the second Autoencoder does not use the training data directly but instead takes the features from the first Autoencoder as its input. The input size of the second Autoencoder is even smaller, so it yields smaller representative features of the colonies. The dimension to which the second Autoencoder reduces the data is 200, and the final layer is an 18-dimensional layer that classifies these 200-dimensional vectors into 18 classes. This final layer is the softmax layer. It trains on the 200 extracted features from the second Autoencoder together with the label matrix, again for 400 epochs. Finally, we stack the above models in order to form a stacked neural network, apply our test images to it, and visualize the accuracy with a confusion matrix.
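The encode–decode–reconstruct cycle and its error function can be sketched in a toy scalar form (scalar weights and identity activations chosen for simplicity; all values are invented, not the paper's trained parameters):

```python
def encode(x, W, delta, act=lambda v: v):
    """z = act(W*x + delta) — scalar stand-in for the encoder layer."""
    return act(W * x + delta)

def decode(z, W2, delta2, act=lambda v: v):
    """x' = act(W2*z + delta2) — scalar stand-in for the decoder layer."""
    return act(W2 * z + delta2)

def reconstruction_error(x, W, delta, W2, delta2):
    """E(x, x') = |x - x'|^2 with x' = decode(encode(x))."""
    x_hat = decode(encode(x, W, delta), W2, delta2)
    return (x - x_hat) ** 2

# If the decoder exactly inverts the encoder (W2 = 1/W, delta2 = -delta/W),
# the reconstruction error is zero.
print(reconstruction_error(3.0, W=2.0, delta=1.0, W2=0.5, delta2=-0.5))  # → 0.0
```

Training drives E(x, x′) toward zero over the whole dataset by adjusting W, W′, δ and δ′ via backpropagation.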
We can also use backpropagation to fine-tune this multilayer network manually, and the statistical results show that performance then becomes much better. In this paper, three popular neural network algorithms have been used to train and test bacterial colony classification models. The feasibility of the proposed models has been evaluated against the standard classification of bacterial images from a clinical database. The dataset used for these models was collected from the clinical microbiology laboratory of Peking University First Hospital, and classes were assigned by bacterial species, such as Escherichia coli, Klebsiella pneumoniae, Staphylococcus aureus, Pseudomonas aeruginosa, etc. There are 18 classes of bacterial colonies in this dataset in total, all belonging to the most common human pathogenic bacteria. The dataset contains 4982 images across all bacterial classes, and the images from each class have been divided into training and testing sets of 50% each: 2491 images are used for training and 2491 for testing. The input images for the convolutional neural network and the AlexNet neural network are 100×100 color images, and the input images for the unsupervised Autoencoder neural network have dimensions 50×50×3, which greatly reduces that model's computational load. Examples are shown in Figs. 2 and 3, illustrating the differences among classes and the variation among individual images within each class.
Images from each bacterial class showing the interclass variations Images from the Streptococcus agalactiae class showing the intraclass variations Classification performance To evaluate the performance of these three models, we use several quantities: the true positives T+, false positives F+, true negatives T− and false negatives F−. The sensitivity, i.e. the ability of the model to correctly classify bacterial colonies into their species, can be expressed as [24]: $$ Sensitivity=\frac{1}{18}\sum_{j=1}^{18}\frac{T^{+}_{j}}{T^{+}_{j}+F^{-}_{j}} $$ The specificity, for correctly rejecting bacteria from classes to which they do not belong, is: $$ Specificity=\frac{1}{18}\sum_{j=1}^{18}\frac{T^{-}_{j}}{T^{-}_{j}+F^{+}_{j}} $$ And the precision and accuracy of each model are \(Precision=\frac {1}{18}\sum _{j=1}^{18}\frac {T^{+}_{j}}{T^{+}_{j}+F^{+}_{j}}\) and \(Accuracy=\frac {1}{18}\sum _{j=1}^{18}\frac {T^{+}_{j}+T^{-}_{j}}{T^{+}_{j}+T^{-}_{j}+F^{+}_{j}+F^{-}_{j}}\), respectively, where the index j runs over the 18 classes. The accuracy, precision, sensitivity and specificity values of the three models across the 18 bacterial species are listed in Table 1. From these statistics, the accuracy and specificity of these neural networks can reach as high as 90%, while the precision and sensitivity are much lower. The accuracy ranges from 0.912 to 0.997, a high classification accuracy compared with classification by the naked eye. The high per-species accuracy means the predicted categories are highly likely to be true, which supports the feasibility of our three classification models. The corresponding precision values, however, are low, from 0.080 to 0.960, meaning bacterial colonies from the same species have a certain probability of being classified into different categories. The species with the lowest precision value is Enterococcus faecalis, which may be due to its having the smallest number of training images.
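The macro-averaged metrics above can be computed per class in a one-vs-rest fashion; the sketch below uses invented counts for two hypothetical classes, not Table 1's data:

```python
def macro_metrics(per_class_counts):
    """per_class_counts: list of (TP, FP, TN, FN) tuples, one per class.
    Returns macro-averaged sensitivity, specificity, precision, accuracy."""
    k = len(per_class_counts)
    sens = sum(tp / (tp + fn) for tp, fp, tn, fn in per_class_counts) / k
    spec = sum(tn / (tn + fp) for tp, fp, tn, fn in per_class_counts) / k
    prec = sum(tp / (tp + fp) for tp, fp, tn, fn in per_class_counts) / k
    acc  = sum((tp + tn) / (tp + fp + tn + fn)
               for tp, fp, tn, fn in per_class_counts) / k
    return sens, spec, prec, acc

# Two hypothetical classes, each as (TP, FP, TN, FN):
counts = [(40, 10, 140, 10), (30, 5, 160, 5)]
s, sp, pr, a = macro_metrics(counts)
print(round(s, 3), round(sp, 3), round(pr, 3), round(a, 3))
```

Note how specificity and accuracy stay high whenever true negatives dominate, which mirrors the paper's pattern of high accuracy/specificity alongside low precision/sensitivity.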
A small number of training images leads to an insufficient level of learning of the colony features needed for description and discrimination, so precision and sensitivity are affected. Misclassification may also arise from sample variation, introduced by bacteria growing on different media, MacConkey agar versus blood agar. It also means the repeatability of our models' predictions is limited, perhaps because too few neurons are integrated for classification. When the number of neurons available for the judgement is insufficient, the features representing a bacterial species remain localized, making it hard to integrate enough feature weights for a comprehensive judgement. In addition, these three models produce many false negatives and few false positives, which leads to overall low sensitivity and high specificity [25]. Sensitivity ranges from 0.069 to 1.000, and the species with the lowest sensitivity is Enterococcus faecalis, as expected. The specificity of each species, however, is very high, commonly above 0.950. This may be because the number of neurons assigning the bacterial category in the fully connected layer is insufficient: the screening process is then less strict and admits many false negatives. These unexcluded false negatives enlarge the denominator and lower the sensitivity. Conversely, this less strict classification yields fewer false positives and higher specificity. The models can therefore usefully alert clinical staff when colonies are classified into dangerous species. Table 1 The statistical results of bacterial colony images classification using the three neural networks To evaluate the feasibility of each deep learning framework, the performance of the three neural networks, the conventional CNN, AlexNet and the Autoencoder, was compared.
According to the accuracy and precision results, the CNN and AlexNet methods are comparable, and both have better precision than the Autoencoder. This may be because an unsupervised neural network extracts some features that are not specific to bacterial classification, such as features of the agar background. For sensitivity and specificity, the two supervised networks also perform better, and AlexNet has higher sensitivity than the conventional CNN, perhaps due to its highly compressed convolutional layers. We can thus conclude that the supervised neural networks have more promising classification characteristics for bacterial colony applications, while the unsupervised network should have advantages in revealing novel characteristics of bacterial colony morphology [26]. Clinical bacterial colony features Figures 4 and 5 show the features learned by the conventional convolutional neural network and the Autoencoder neural network, respectively. The first-layer features of the CNN relate more to an overall vague impression, like a stain-like texture, whereas the features from the Autoencoder show shape and edge-like characteristics [27]. This difference can be attributed to whether the network training was supervised. In clinical recognition, microbiologists distinguish bacteria not only by colony morphology but also by the kind of culture medium, such as MacConkey agar or blood agar, which have blue and red background colors, respectively. The supervised convolutional neural network is therefore more likely to take the background color into consideration, and this recognized feature can result from more than one round of backpropagation adjustment.
On the other hand, the features extracted by the Autoencoder neural network give more intuitive discriminations of bacterial colonies: the shapes and edge-like information of the colonies are in the foreground of the features this network takes into consideration, and we can learn classification strategies from this for clinical bacterial colony recognition. Bacterial colonies' features extraction from the conventional convolutional neural network Bacterial colonies' features extraction from the Autoencoder neural network Performance evaluation of statistical results The statistical classification results of our unsupervised network are shown in Table 1. Because the unsupervised neural network relies on the natural morphology of colony features, we can learn something about classification from them. Classification performance differs across bacteria: Staphylococcus aureus has the highest classification accuracy in the output class, while Klebsiella oxytoca has the highest probability of misclassification. This may be because Klebsiella oxytoca can grow on both MacConkey and blood agar, showing different colony morphologies on the two media [28]. Our Klebsiella oxytoca samples included both morphologies from the same species, making it difficult for the unsupervised network to extract similar features from them and finally leading to low precision and sensitivity of classification. Several issues need to be considered regarding our three neural network models. The two critical input conditions with an objective influence on the final classification accuracy are the sample size (n = 4982) and the proportion of training to testing data. The sample size of 4982 in our experiment is large, meaning each species has on average 277 images for training and testing.
However, not every bacterial species has this number of images for simulation. In practice, we photographed colonies in our clinical laboratory over several weeks to collect all the data, so the number of images of each bacterial species differed, depending on how frequently that species was detected in the clinical laboratory of Peking University First Hospital. The more frequently a species was detected in the clinic, the more images we could obtain, and, in theory, the more precise the classification could be. This makes our classification method well suited to practical clinical application, because the most frequently occurring species are also those needing the most attention in clinical diagnosis. The numbers of images used for each bacterial species are listed in Table 2. From these numbers, we can see that Burkholderia cepacia and Enterococcus faecalis have fewer than 100 images each, and both have relatively low sensitivity values, especially Enterococcus faecalis. Only 60 Enterococcus faecalis images were available, and the homogeneity of its colonies is not very good; both factors relate to its low sensitivity. Enterococcus faecalis can form circular or elliptical colonies with blurry edges and variable color in different places, which may lead to its high probability of misclassification (Table 2). Table 2 The number of images of each bacterial species Another important input variable that may affect training performance is the ratio of training set to testing set. As described above, with a fixed number of available images, a low training proportion means a small number of training images, which may lead to poor feature description. To elucidate the effect of changing this proportion on the three neural networks, we performed a sensitivity analysis: the ratio was varied from 0.1 to 0.9, and the accuracy over all bacterial species was calculated.
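The split arithmetic behind that sensitivity analysis can be sketched as follows (`split_counts` is a hypothetical helper, not the paper's code; only the 50% row is taken from the paper):

```python
def split_counts(n_images, train_ratio):
    """Number of training and testing images for a given split ratio."""
    n_train = int(n_images * train_ratio)
    return n_train, n_images - n_train

# Sweep the training ratio from 0.1 to 0.9 as in the sensitivity analysis.
for r in [0.1, 0.3, 0.5, 0.7, 0.9]:
    n_train, n_test = split_counts(4982, r)
    print(r, n_train, n_test)
# At r = 0.5 this reproduces the paper's 2491/2491 split.
```

Each ratio would then require retraining all three networks on the corresponding training subset before measuring total accuracy.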
The trend correlating the ratio with the total accuracy is shown in Fig. 6. Sensitivity analysis of the Train/Test split From this figure we can clearly see the difference between the supervised and unsupervised methods. The CNN and AlexNet networks show an obvious increasing trend as the ratio of training set to testing set grows. In contrast, the unsupervised Autoencoder method shows no significant relationship with the ratio. This may be because, with a growing number of training images, the supervised neural networks receive more input data from which to learn the complicated, species-specific features; this process of data accumulation leads to their better performance. The Autoencoder, however, does not depend on human experience but relies entirely on its own feature extraction technique, so its classification is not a process of data accumulation and the relationship between total accuracy and the ratio is not significant. This paper presented a computational bacterial colony classification system with three supervised and unsupervised neural networks. Detailed descriptions of the general structures, the constitution of each layer, and the training processes are given. The parameters set in these neural networks can affect classification performance in different ways. Comparisons of classification performance among the networks are presented, reflecting the advantages of supervised convolutional neural networks over unsupervised ones; for providing classification feature insights, however, the unsupervised network is clearly the stronger method, and it can also supply useful bacterial colony features to clinical staff.
Our computer vision classifier offers several advantages: it can help clinical staff distinguish ambiguous bacterial colonies without extra manpower and clinical expertise, and its accuracy for each bacterial species can reach as high as 90%. In addition, the low false positive rates and high specificity of the predicted classification can alert clinical staff when dangerous bacteria appear. On these grounds, our classification networks will be of significant value with reference to clinical experience and bacterial colony features. Adibi A, Golshahi M, Sirus M, Kazemi K. Breast cancer screening: evidence of the effect of adjunct ultrasound screening in women with unilateral mammography-negative dense breasts. J Res Med Sci. 2015;20(3):228–32. Soliman A, Khalifa F, Elnakib A, Abou El-Ghar M, Dunlap N, Wang B, et al. Accurate lungs segmentation on CT chest images by adaptive appearance-guided shape modeling. IEEE Trans Med Imaging. 2017;36(1):263–76. Salas-Gonzalez D, Gorriz JM, Ramirez J, Illan IA, Padilla P, Martinez-Murcia FJ, et al. Building a FP-CIT SPECT brain template using a posterization approach. Neuroinformatics. 2015;13(4):391–402. Xiang L, Qiao Y, Nie D, et al. Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI. Neurocomputing. 2017;267:406–16. Houpikian P, Raoult D. Traditional and molecular techniques for the study of emerging bacterial diseases: one laboratory's perspective. Emerg Infect Dis. 2002;8(2):122–31. Phumudzo T, Ronald N, Khayalethu N, Fhatuwani M. Bacterial species identification getting easier. Afr J Biotechnol. 2013;12(41):5975–82. Cabeen MT, Jacobs-Wagner C. Bacterial cell shape. Nat Rev Microbiol. 2005;3(8):601–10. Bergmans L, Moisiadis P, Van Meerbeek B, Quirynen M, Lambrechts P. Microscopic observation of bacteria: review highlighting the use of environmental SEM. Int Endod J. 2005;38(11):775–88. Pincus DH.
Microbial identification using the Biomerieux Vitek2 system. Encyclopedia of rapid microbiological methods. 2017. http://www.pda.org/bookstore. Accessed 30 Dec 2017. Dubois D, Grare M, Prere MF, Segonds C, Marty N, Oswald E. Performances of the Vitek MS matrix-assisted laser desorption ionization-time of flight mass spectrometry system for rapid identification of bacteria in routine clinical microbiology. J Clin Microbiol. 2012;50(8):2568–76. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1798–828. Ji W, Dayong W, Steven CHH, et al. Deep learning for content-based image retrieval: a comprehensive study. ACM Multimedia. 2014:157–66. Matsugu M, Mori K, Mitari Y, Kaneda Y. Subject independent facial expression recognition with robust face detection using a convolutional neural network. Neural Netw. 2003;16(5–6):555–9. Quartz. The data that transformed AI research - and possibly the world. 2018. https://cacm.acm.org/news/219702-the-data-that-transformed-ai-research-and-possibly-the-world/fulltext. Accessed 16 Mar 2018. Hahnloser RHR, Sarpeshkar R, Mahowald MA, Douglas RJ, Seung HS. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature. 2000;405(6789):947–51. Bishop CM. Pattern recognition and machine learning. New York: Springer; 2006. Nielsen MA. Neural networks and deep learning. United States: Determination Press; 2015. Murphy KP. Machine learning: a probabilistic perspective. Cambridge: The MIT Press; 2012. Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323(6088):533–36. Bonnin R. Building machine learning projects with TensorFlow. Birmingham: Packt Publishing; 2016. CS231n: Convolutional neural networks for visual recognition. 2017. https://cs231n.github.io/convolutional-networks. Accessed 28 Nov 2017. Liou CY, Cheng WC, Liou JW, et al. Autoencoder for words. Neurocomputing. 2014;139:84–96.
Bengio Y. Learning Deep Architectures for AI. Found and TrendsⓇ in Mach Learn. 2009; 2:1–127. Altman DG, Bland JM. Statistics Notes - Diagnostic-Tests-1 - Sensitivity and Specificity. Brit Med J. 1994; 308(6943):1552. Powers DMW. Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation. J Mach Learn Technol. 2011; 2(1):37–63. Sathya R, Abraham A. Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification. Int J Adv Res Artif Intell. 2013; 2(2):34–38. Learning features with Sparse Auto-encoders. 2018. https://www.amolgmahurkar.com/learningfeatusingsparseAutoencoders. Accessed 16 Mar 2018. Klebsiella pneumoniae bacteria. 2017. https://www.microbiologyinpictures.com/klebsiella٪20pneumoniae.html Accessed 27 Mar 2017. We thank the Department of Clinical Laboratory for preparing our raw data. All funding for the present research were obtained from Peking University First Hospital. Raw data and neural network codes can be available from corresponding author upon request. Department of Clinical Laboratory, Peking University First Hospital, 8 Xishiku Street, Beijing, China Lei Huang Peking University Health Science Center, 38 Xueyuan Road, Beijing, China Tong Wu Search for Lei Huang in: Search for Tong Wu in: LH and TW have same contributions to this paper. LH has done the clinical classification, data collection, models building, models discussion and results interpretation. TW has done models building, models discussion, models modification and results interpretation. Both authors read and approved the final manuscript. Correspondence to Tong Wu. Huang, L., Wu, T. Novel neural network application for bacterial colony classification. Theor Biol Med Model 15, 22 (2018) doi:10.1186/s12976-018-0093-x DOI: https://doi.org/10.1186/s12976-018-0093-x Bacterial colony
What is the speed of sound in space?

Given that space is not a perfect vacuum, what is the speed of sound therein? Google was not very helpful in this regard, as the only answer I found was $300\,{\rm km}\,{\rm s}^{-1}$, from Astronomy Cafe, which is not a source I'd be willing to cite.

Tags: thermodynamics, astrophysics, acoustics, plasma-physics, interstellar-matter

– Josh Glover

Comments:

– The question is whether "sound" can even be defined in space (or a very, very low-pressure environment). – Physics_maths
– @LoveLearning The answer to that question is "We'll call it 'sound' if it can be transmitted coherently in that environment", and the condition for that is "wavelength much longer than mean free path". So, low enough frequency sounds can exist. – dmckee
– "In space, no one can hear you scream." – Gavin Coates
– With regard to hearing a scream in space, that's not possible. The highest possible sound frequency in a gaseous medium has a wavelength roughly equal to the mean free path. In interplanetary space near Earth, the mean free path is about one astronomical unit and the speed of sound is on the order of 10 to 100 km/s. That corresponds to a frequency of about one cycle per month. That is many, many octaves below the frequency of a scream. – David Hammen
– @DavidHammen - it depends on who/what is doing the screaming. :-O – Bob Jarvis

Answer:

By popular demand (considering two to be popular – thanks @Rod Vance and @Love Learning), I'll expand a bit on my comment to @Kieran Hunt's answer:

Thermal equilibrium

As I said in the comment, the notion of sound in space plays a very significant role in cosmology: When the Universe was very young, dark matter, normal ("baryonic") matter, and light (photons) were in thermal equilibrium, i.e. they shared the same (average) energy per particle, or temperature. This temperature was so high that neutral atoms couldn't form; any electron caught by a proton would soon be knocked off by a photon (or another particle). The photons themselves couldn't travel very far before hitting a free electron.

Speed of sound in the primordial soup

Everything was very smooth; no galaxies or anything like that had formed. Stuff was still slightly clumpy, though, and the clumps grew in size due to gravity. But as a clump grows, pressure from baryons and photons increases, counteracting the collapse and pushing baryons and photons outwards, while the dark matter tends to stay at the center of the overdensity, since it doesn't care about pressure. This creates oscillations, or sound waves, with tremendously long wavelengths. For a photon gas, the speed of sound is
$$ \begin{array}{rcl} c_\mathrm{s} & = & \sqrt{p/\rho} \\ & = & \sqrt{c^2/3} \\ & \simeq & 0.58c, \end{array} $$
where $c$ is the speed of light, and $p$ and $\rho$ are the pressure and density of the gas. In other words, the speed of sound at that time was more than half the speed of light (for high temperatures there is a small correction to this of order $10^{-5}$; Partovi 1994). In a non-relativistic medium, the speed of sound is $c_\mathrm{s} = \sqrt{\partial p / \partial \rho}$, which for an ideal gas reduces to the formula given by @Kieran Hunt.
Although in outer space both $p$ and $\rho$ are extremely small, there *are* particles, and hence it does make sense to talk about the speed of sound in space. Depending on the environment, it typically evaluates to many kilometers per second (i.e. much higher than on Earth, but much, much smaller than in the early Universe).

Recombination and decoupling

As the Universe expanded, it gradually cooled down. At an age of roughly 200,000 years it had reached a temperature of ~4000 K, and protons and electrons started being able to combine to form neutral atoms without immediately being ionized again. This is called the "Epoch of Recombination", though somewhat misleadingly, since the particles had never previously been combined. At ~380,000 years, when the temperature was ~3000 K, most of the Universe was neutral. With the free electrons gone, photons could now stream freely, diffusing away and relieving the overdensity of its pressure. The photons are said to decouple from the baryons.

Cosmic microwave background

The radiation that decoupled has ever since been redshifted due to the expansion of the Universe, and since the Universe has now expanded ~1100 times, we see the light (called the cosmic microwave background, or CMB) not with a temperature of 3000 K (the temperature of the Universe at the time of decoupling), but with a temperature of (3000 K)/1100 = 2.73 K, which is the temperature that @Kieran Hunt refers to in his answer.

Baryon acoustic oscillations

These overdensities, or baryon acoustic oscillations (BAOs), exist on much larger scales than galaxies, but galaxies tend to clump on these scales. The characteristic scale has ever since expanded with the Universe and is now ~100 $h^{-1}$ Mpc, or 465 million lightyears. Measuring how the inter-clump distance changes with time provides a way of understanding the expansion history, and acceleration, of the Universe, independent of other methods such as supernovae and the CMB. And beautifully, the methods all agree.
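The photon-gas value above, and the ideal-gas numbers quoted elsewhere in this thread, can be checked with a few lines. This is only a sketch: the constants are standard, and the 2.73 K and 6000 K temperatures are the ones used in Kieran Hunt's answer below.

```python
# Sketch: sound speeds discussed in this thread.
# Photon gas: c_s = c/sqrt(3); non-relativistic ideal gas: c_s = sqrt(gamma*k_B*T/m).
import math

c = 2.998e8       # speed of light, m/s
k_B = 1.381e-23   # Boltzmann constant, J/K
m_H = 1.66e-27    # hydrogen atom mass, kg

def photon_gas_sound_speed():
    """Relativistic photon gas: p = rho*c^2/3, so c_s = sqrt(c^2/3)."""
    return c / math.sqrt(3)

def ideal_gas_sound_speed(T, m=m_H, gamma=5/3):
    """Non-relativistic ideal (monatomic hydrogen) gas sound speed."""
    return math.sqrt(gamma * k_B * T / m)

print(f"photon gas: {photon_gas_sound_speed() / c:.2f} c")               # ~0.58 c
print(f"CMB-cold gas (2.73 K): {ideal_gas_sound_speed(2.73):.0f} m/s")   # ~190 m/s
print(f"warm cloud (6000 K): {ideal_gas_sound_speed(6000) / 1e3:.1f} km/s")  # ~9 km/s
```

The small discrepancy with the 192 m/s quoted below comes only from rounding in the constants.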
– pela

Comments:
– Slightly off-topic, but I feel that I have to take a course in astroparticle physics :) – F. Ha
– Don't we all… :) Do you mean in order to understand the answer, or just in general? – pela

Answer:

From the ideal gas law, we know:
$$ v_\textrm{sound} = \sqrt{\frac{\gamma k_\textrm{B} T}{m}} $$
Assuming that interstellar space is heated uniformly by the CMB, it will have a temperature of $2.73\ \mathrm{K}$. We know that most of this medium comprises protons and neutral hydrogen atoms at a density of about 1 atom/cm$^{3}$. This means that $\gamma = 5/3$ and $m = 1.66\times 10^{-27}\ \mathrm{kg}$, giving a value for $v_\textrm{sound}$ of $192\ \mathrm{m\ s^{-1}}$.

However, sound is not propagated efficiently in a vacuum. In the extremely high vacuum of outer space, the mean free path is millions of kilometres, so any particle lucky enough* to be in contact with the sound-producing object would have to travel light-seconds before being able to impart that information in a secondary collision.

*Which, for the density given, would only be about 50 hydrogen atoms if you clapped your hands – very low sound power!

Edit: As has quite rightly been pointed out in the comments, the interstellar medium is not that cold. At the moment, our solar system is moving through a cloud of gas at approximately 6000 K. At this temperature, the speed of sound would be approximately $9000\ \mathrm{m\ s^{-1}}$. See Kyle's answer for a table of values for $v_\textrm{sound}$ in different environments in space, or pela's for information on how early-Universe sound waves became responsible for modern-day large-scale structure.

– Kieran Hunt

Comments:
– Argh, you beat me to it by seconds. Well, let me just add that sound in space plays a very significant role in cosmology: just before recombination, 380,000 years after the Big Bang, the speed of sound was approximately half the speed of light.
When light and matter decoupled, the sound waves remained "frozen" in space, meaning that galaxies tend to form in clumps that are separated by this wavelength. The distance between these clumps expands with the general expansion of the Universe (and is now ~465 million lightyears), and provides a standard measure of length. – pela
– -1. This is not a good answer. Nothing in space is that cold. The interplanetary medium is in the tens of thousands of kelvins. The interstellar medium varies from tens of kelvins in molecular clouds to tens of millions of kelvins. The intergalactic medium is extremely hot, again in the tens of millions of kelvins. The widely varying temperature and makeup (molecular hydrogen vs. ionized plasma) means the speed of sound in space varies considerably.
– I suppose 6000 K is the AVERAGE temperature, otherwise we would be boiling… – algiogia
– @yo' He is right. You can see it very simply in this way: what happens if you drop a blazing hot metal ball in the sea? The sea does not boil. To go back to reality, the ball is space: very hot, but with very low mass (very few atoms around). The Earth then is the sea: low temperature, but huge. Thus Earth is not boiling. – Svalorzen
– @yo' - The "but" is very simple. The medium may well be very hot, BUT because it's so very, very tenuous, there is hardly any heat transfer from it to a macroscopic object. For a macroscopic object in space, radiative heat transfer (heat from the sun, cooling toward empty space) completely dominates over heat transfer from the hot but almost non-existent medium.

Answer:

Just want to bring up that most answers seem to be taking "space" to be a nice uniform medium. However, even within our own galaxy, conditions vary wildly.
Here are the most common environments in the Milky Way:

- Molecular clouds: $\rho\sim 10^4\,{\rm atom}/{\rm cm}^3$, $T\sim 10\,{\rm K}$
- Cold neutral medium: $\rho\sim 20\,{\rm atom}/{\rm cm}^3$, $T\sim 100\,{\rm K}$
- Warm neutral medium: $\rho\sim 0.5\,{\rm atom}/{\rm cm}^3$, $T\sim 10^4\,{\rm K}$
- Warm ionized medium: $\rho\sim 0.5\,{\rm atom}/{\rm cm}^3$, $T\sim 8000\,{\rm K}$
- HII region: $\rho\sim 1000\,{\rm atom}/{\rm cm}^3$, $T\sim 8000\,{\rm K}$
- Hot ionized medium: $\rho\sim 10^{-3}\,{\rm atom}/{\rm cm}^3$, $T\sim \;{>}10^6\,{\rm K}$

The sound speed is proportional to $\sqrt{T}$. Given that the temperature varies over about 7 orders of magnitude (maximum at about $10^7\,{\rm K}$, minimum at about $3\,{\rm K}$), the sound speed varies by at least a factor of $1000$. The sound speed in a warm region is on the order of $10\,{\rm km}/{\rm s}$.

Trivia: the sound speed plays a crucial role in many astrophysical processes. This speed defines the time it takes for a pressure wave to propagate a given distance. One place this is a key time scale is in gravitational collapse. If the sound crossing time for a gas cloud exceeds the gravitational free-fall time (the time for a gravity-driven disturbance to propagate), pressure is unable to resist gravitational collapse and the cloud is headed toward the creation of a more compact object (a denser cloud, or if conditions are right, a star).

More trivia: space is a very poor carrier (non-carrier) of high-frequency sounds, because the highest-frequency pressure wave that can be transmitted has a wavelength of about the mean free path (MFP) of gas particles. The MFP in space is large, so the frequency limit is low.

– Kyle Oman

Comments:
– +1. This is the answer to this question. The hot intracluster medium can be even hotter than the items on your list, up to $10^8$ kelvin. A high-metallicity molecular cloud is not ionized and can contain some fairly massive compounds. You can easily add another order of magnitude to that factor of 1000.
– Even though sound travels faster in space than in a terrestrial atmosphere, the vacuum of space isn't generally regarded as carrying sound well. Is that because pressure waves in space will be primarily reflected when they hit solid objects, or because they'll be converted to heat when they hit solid objects, or because they get converted to heat in transit? – supercat
– @supercat What solid objects? Space is very empty, on average! Space is a very poor carrier (non-carrier) of high-frequency sounds because the highest-frequency pressure wave that can be transmitted has a wavelength of ~the mean free path of gas particles. The MFP in space is large, so the frequency limit is LOW. – Kyle Oman
– @supercat you're confusing two things here. The speed of sound is one thing. The frequencies that can be carried by a fluid are another. The frequencies that can be carried by the ISM are much lower than the lower limit of human hearing. That doesn't mean the sounds aren't meaningful, or that they don't exist. They just have low frequencies.
– @hobbs: As Kyle reminded me, the concept of "impedance" in a sound-transmission medium is only meaningful at frequencies which are low relative to the frequency of particle interactions. For a tuning fork vibrating at 440 Hz to transmit any meaningful information about its frequency, it must be hit by a lot more than 440 particles per second [regular sampling at 880 would suffice; I'm not sure how to describe the information conveyed by random samples]. – supercat

Answer:

I know this question is technically already answered, but there were several things missing from the answers that I thought should be mentioned (I am writing a review paper comparing different regions of space, so I had these numbers at hand already as well).
The speed of sound in space has multiple meanings, because space is not a vacuum (though the number density of Earth's magnetosphere can be ~6-12 orders of magnitude more tenuous than the best vacuums produced in labs); it is full of ionized particles and neutral and charged dust. In the interplanetary medium, or IPM, there are five relevant speeds that can all be considered a type of sound in a way, because each is related to the speed of information transfer in the medium.

Classical idea of sound speed

When one discusses the speed of sound, one is generally referring to the common form of $C_{s}^{2} = \partial P/\partial \rho$, where $P$ is the thermal pressure and $\rho$ is the mass density. In a plasma, this takes the slightly altered form of:
$$ C_{s}^{2} = \frac{ k_{B} \left( Z_{i} \ \gamma_{e} \ T_{e} + \gamma_{i} \ T_{i} \right) }{ m_{i} + m_{e} } $$
where $k_{B}$ is Boltzmann's constant, $Z_{s}$ is the charge state of species $s$, $\gamma_{s}$ is the adiabatic or polytrope index of species $s$, $m_{s}$ is the mass of species $s$, and $T_{s}$ is the average temperature of species $s$. In a tenuous plasma, like that found in the IPM, it is often assumed that $\gamma_{e} = 1$ (i.e., isothermal) and $\gamma_{i} = 2$ or $3$, or that $\gamma_{e} = 1$ and $T_{e} \gg T_{i}$. The above form of the sound speed is known as the ion-acoustic sound speed, because it is the phase speed at which linear ion-acoustic waves propagate. Thus, $C_{s}$ is a legitimate type of sound speed in space. In the IPM, $C_{s}$ ~ 13-240 km/s [e.g., Refs. 12; 33; 34].

Speed of magnetic fields

The cryptic title alludes to what is known as the Alfvén speed, defined as:
$$ V_{A} = \frac{ B_{o} }{ \sqrt{ \mu_{o} \ \rho } } $$
where $B_{o}$ is the magnitude of the quasi-static ambient magnetic field, $\mu_{o}$ is the permeability of free space, and $\rho$ is the plasma mass density (which is roughly equivalent to the ion mass density, unless it's a pair plasma).
This speed is typically associated with transverse Alfvén waves, but it is relevant to information transfer in plasmas, which is why I included it here. In the IPM, $V_{A}$ ~ 4-220 km/s [e.g., Refs. 10; 12; 33; 34].

Speed of magnetized sound waves

In a magnetized fluid like a plasma, there are fluctuations that are compressive, whereby they compress the magnetic field in phase with the density. These are known as magnetosonic or fast-mode waves. The full MHD definition of the phase speed for a fast-mode wave is given by:
$$ 2 \ V_{f}^{2} = \left( C_{s}^{2} + V_{A}^{2} \right) + \sqrt{ \left( C_{s}^{2} + V_{A}^{2} \right)^{2} - 4 \ C_{s}^{2} \ V_{A}^{2} \ \cos^{2}{\theta} } $$
where $\theta$ is the angle of propagation with respect to $\mathbf{B}_{o}$. (Note the minus sign: for propagation along the field, $\theta = 0$, this reduces to the larger of $C_{s}$ and $V_{A}$.) $V_{f}$ is the relevant speed for shock waves in weakly collisional and collisionless plasmas. It is also a type of sound speed, hence the name magnetosonic. In the IPM, $V_{f}$ ~ 17-300 km/s [e.g., Refs. 10; 12; 33; 34].

Side note: there is also a slow-mode wave, which differs in polarization and in the relative phase between the magnetic and density fluctuations. It is called slow because it has a smaller phase speed than the fast mode in the same medium.

Thermal speeds

The last two relevant speeds are the thermal speeds of the electrons and ions. The one-dimensional rms speed is given by:
$$ V_{Ts}^{rms} = \sqrt{\frac{ k_{B} \ T_{s} }{ m_{s} }} $$
where the definitions are the same as in the previous sections and $s$ can be $e$ (electrons) or $i$ (ions). Generally we use the three-dimensional most probable speed, which is given by:
$$ V_{Ts}^{mps} = \sqrt{\frac{ 2 \ k_{B} \ T_{s} }{ m_{s} }} $$
In the IPM, the electron [e.g., Refs. 2; 3; 5; 7; 8; 14; 17-22; 24; 25; 27; 29-34] and ion [e.g., Refs. 1-6; 8-11; 13; 15-17; 19; 20; 23; 26-32] thermal speeds are $V_{Te}^{mps}$ ~ 1020-5170 km/s and $V_{Ti}^{mps}$ ~ 13-155 km/s, respectively.
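The families of speeds above can be evaluated for one set of near-Earth solar-wind parameters. The numbers used here (n ≈ 5 cm⁻³, B ≈ 5 nT, T_e ≈ 10⁵ K, T_i ≈ 5×10⁴ K, with γ_e = 1, γ_i = 3, Z = 1) are illustrative assumptions, not measured values from the references:

```python
# Sketch: characteristic plasma speeds from the formulas above, for assumed
# (illustrative) near-Earth solar-wind conditions.
import math

k_B = 1.381e-23          # Boltzmann constant, J/K
mu_0 = 4e-7 * math.pi    # permeability of free space
m_p = 1.673e-27          # proton mass, kg
m_e = 9.109e-31          # electron mass, kg

n = 5e6      # number density, m^-3 (5 cm^-3, assumed)
B = 5e-9     # magnetic field, T (5 nT, assumed)
T_e = 1e5    # electron temperature, K (assumed)
T_i = 5e4    # ion temperature, K (assumed)

# Ion-acoustic sound speed with Z = 1, gamma_e = 1, gamma_i = 3
C_s = math.sqrt(k_B * (1 * 1 * T_e + 3 * T_i) / (m_p + m_e))

# Alfven speed (proton-electron plasma, so rho ~ n * m_p)
V_A = B / math.sqrt(mu_0 * n * m_p)

# Fast magnetosonic speed for perpendicular propagation (theta = 90 deg)
V_f = math.sqrt(C_s**2 + V_A**2)

# Three-dimensional most probable thermal speeds
V_Te = math.sqrt(2 * k_B * T_e / m_e)
V_Ti = math.sqrt(2 * k_B * T_i / m_p)

for name, v in [("C_s", C_s), ("V_A", V_A), ("V_f", V_f),
                ("V_Te", V_Te), ("V_Ti", V_Ti)]:
    print(f"{name:5s} = {v / 1e3:7.1f} km/s")
```

Each result lands inside the IPM ranges quoted above (C_s ~ 13-240 km/s, V_A ~ 4-220 km/s, V_f ~ 17-300 km/s, V_Te ~ 1020-5170 km/s, V_Ti ~ 13-155 km/s).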
There are several different types of sound-like speeds in space, and each of them can produce similarly related phenomena. For instance, we often refer to Mach numbers associated with $C_{s}$, $V_{A}$, and $V_{f}$. In addition, there are several plasma instabilities that result from an effect similar to Cherenkov radiation, whereby a beam of particles exceeds, for instance, the electron thermal speed. In summary, in the regions outside of local magnetospheres but within the realm of our Sun's influence, there is a wide range of sound speeds.

A paper on the statistics of temperature-dependent parameters near Earth in the solar wind was recently published in Astrophys. J. Suppl. by Wilson et al. [2018] (it's open access, so no paywall). The work provides new measurements but also a detailed literature review/reference list of past work.

J.E. Borovsky et al., J. Plasma Phys. 57, pp. 1, 1997.
J.E. Borovsky and S.P. Gary, Phys. Plasmas 16, pp. 082307, 2009.
C.H.K. Chen et al., Geophys. Res. Lett. 41, pp. 8081, 2014.
W.C. Feldman et al., J. Geophys. Res. 79, pp. 2319, 1974.
N. Gopalswamy, Space Sci. Rev. 124, pp. 145, 2006.
L.K. Jian et al., Solar Phys. 274, pp. 321, 2011.
L.K. Jian et al., Astrophys. J. 786, pp. 123, 2014.
J.C. Kasper, Interplanetary Shock Database, Harvard-Smithsonian Center for Astrophysics, Online: http://www.cfa.harvard.edu/shocks/, 2007.
J.G. Luhmann et al., J. Geophys. Res. 98, pp. 5559, 1993.
M. Maksimovic et al., J. Geophys. Res. 110, pp. A09104, 2005.
D.J. McComas et al., J. Geophys. Res. 105, pp. 10419, 2000.
D.J. McComas et al., Astrophys. J. 779, pp. 2, 2013.
J.A. Newbury et al., J. Geophys. Res. 103, pp. 9553, 1998.
J.L. Phillips et al., J. Geophys. Res. 94, pp. 6563, 1989.
W.G. Pilipp et al., J. Geophys. Res. 92, pp. 1093, 1987.
M.P. Pulupa et al., J. Geophys. Res. 119, pp. 647, 2014.
J.D. Richardson et al., Geophys. Res. Lett. 22, pp. 325, 1995.
C. Salem et al., J. Geophys. Res. 106, pp. 21701, 2001.
C. Salem et al., Astrophys. J. 585, pp. 1147, 2003.
R. Schwenn, Fifth International Solar Wind Conference 228, pp. 489, 1983.
R. Schwenn, Large-Scale Structure of the Interplanetary Medium, pp. 99, 1990.
J.A. Slavin and R.E. Holzer, J. Geophys. Res. 86, pp. 11401, 1981.
L.B. Wilson III et al., J. Geophys. Res. 114, pp. A10106, 2009.
L.B. Wilson III et al., J. Geophys. Res. 118, pp. 5, 2013.
L.B. Wilson III et al., J. Geophys. Res. 118, pp. 957, 2013.
L.B. Wilson III et al., J. Geophys. Res. 119, pp. 6455-6474, 2014.
L.B. Wilson III et al., Astrophys. J. Suppl. 236, pp. 15, 2018.

– honeste_vivere

Comments:
– Please update with a highlighted cite to the review paper. Thanks! – CoolHandLouis
– @CoolHandLouis - Unfortunately, I am still waiting on several of my co-authors to contribute their chapters to the review, and they are being slow about it (some were teaching and others were moving from one university to another, which added delays). – honeste_vivere

Answer:

You need to consider that space is filled with a tenuous plasma, which behaves slightly differently from an ideal gas. First, the electrons will carry sound at a different rate than the heavier protons, but also, the electrons and protons are coupled via the electric field. See: Speed (of sound) in plasma. The speed of sound in the solar wind is estimated at around 58 km/s, based on the equation in the answer given by Kieran Hunt. However, the temperature of the solar wind is more like $T = 1.2 \times 10^5$ K (ref). – iantresman

Answer:

Given the low density of gas, the speed of sound would be a direct function of the temperature of the gas, i.e. the speed of the molecules/atoms. Since this varies from about 2.7 K to millions of degrees near some stars, the speed of sound can change quite a bit. Direct measurement shows the speed is 1100 m/s.
ESA's dart-like Gravity field and steady-state Ocean Circulation Explorer (GOCE) Earth Explorer used to orbit as close to Earth as possible - just 260 km up - to maximize its sensitivity to variations in Earth's gravity field. At that altitude there is enough atmosphere to exert a small drag, so the satellite had an aerodynamic shape and a small engine to keep it in orbit. The mission ended when the engine ran out of fuel.

In 2011, the huge magnitude-9.1 Japanese Tohoku earthquake generated atmospheric disturbances. These deflected the satellite, and density variations were also measured. Article and video here.

– mmesser314

Comments:
– This is very interesting and I would like to learn more, but I do not think this addresses the OP. – honeste_vivere
– @honeste_vivere - I guess it depends on which region of space interests him. If space starts at an arbitrary altitude of 100 km, then this counts. But the density certainly is higher here than most places. Your answer is better. – mmesser314
– I was more referring to the fact that a distortion in the atmosphere is not a "speed of sound." The speed at which the distortion propagates is the speed of sound, but that would change with altitude. – honeste_vivere
– @honeste_vivere - I do not understand the distinction you are making. It seems to me that the distortion propagates at the speed of sound, and the speed is inferred from the time it takes to get from the ground to the satellite. Perhaps they modeled the speed as a function of altitude and scaled the expected speeds to fit the elapsed time. Am I missing something? – mmesser314
– It's that the tsunami physically displaced a large amount of water, which then displaced air, much like wind. Wind is not a sound wave. The displacement propagated near the speed of sound most likely because the initial displacement occurred so quickly (kind of like a short-duration impact). From your figure, it does appear that they accounted for the variation in sound speed with altitude, but a bulging atmosphere due to displacement is a bulk flow of a fluid, not a longitudinal oscillation that propagates. Does that make more sense? – honeste_vivere
How to calculate dipole moment

The electric dipole moment of a system of point charges is the vector sum
$$\vec{p} = \sum_{i=1}^{n} q_i \vec{r}_i,$$
where $\vec{p}$ is the electric dipole moment of the whole system, $\vec{r}_i$ is a vector pointing to the $i$-th electric charge, $q_i$ is the value of the $i$-th charge, and $n$ is the number of charges in the system. The dipole moment is a vector quantity: it has both magnitude and direction, and in diagrams the length of the arrow is drawn directly proportional to the magnitude of $\mu$. If $l$ is the charge separation, usually taken as the bond length, then $\mu = q \times l$.

For example, treating KBr as a fully ionic pair of unit charges separated by 2.82 Å gives a calculated ionic dipole moment of
$$\mu_{KBr} = (1)(1.602 \times 10^{-19})(2.82 \times 10^{-10}) = 4.518 \times 10^{-29}\ \mathrm{C\,m} = 13.54\ \mathrm{D}.$$
The observed dipole moment is smaller than this, because the bond is not fully ionic. Dipole moments can also be measured spectroscopically: one donor-acceptor complex, for instance, has a dipole moment of 7.1110(69) D determined from Stark-effect measurements, an enhancement of 6.5 D over the sum of the dipole moments of the free monomers; analysis of the $^{14}$N nuclear hyperfine structure indicates that about 0.6 e is transferred from the nitrogen to the SO$_3$ upon formation of the complex.

A dipole moment can also be zero: in a symmetric molecule, two opposite bond dipoles cancel each other. (A related practical question that comes up: which keyword to use to obtain the transition dipole moment $\mu_{tr}$ for an optimized structure in a DFT code such as Gaussian 09.)
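The point-charge sum above can be sketched in a few lines. Two checks are used: a ±e pair separated by 100 pm (the standard 4.80 D reference pair) and the fully ionic KBr estimate (13.54 D) from these notes.

```python
# Sketch: dipole moment of a set of point charges, p = sum(q_i * r_i),
# reported in debye. The two examples reproduce values from these notes.
import math

E = 1.602e-19        # elementary charge, C
DEBYE = 3.336e-30    # 1 D in C*m

def dipole_moment(charges):
    """charges: list of (q, (x, y, z)) with q in C and r in m.
    Returns |p| in C*m; origin-independent only if the total charge is zero."""
    p = [sum(q * r[k] for q, r in charges) for k in range(3)]
    return math.sqrt(sum(c * c for c in p))

# +e and -e separated by 100 pm:
pair = [(+E, (0.0, 0.0, 0.0)), (-E, (1.0e-10, 0.0, 0.0))]
print(dipole_moment(pair) / DEBYE)   # ~4.80 D

# K+ and Br- treated as point ions 2.82 Angstrom apart:
kbr = [(+E, (0.0, 0.0, 0.0)), (-E, (2.82e-10, 0.0, 0.0))]
print(dipole_moment(kbr) / DEBYE)    # ~13.54 D
```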
Worked geometry example (water): the H-O-H angle is 104.5°, so we divide 104.5 by 2 to get 52.25° for each side and imagine two right triangles; the two O-H bond dipoles add along the bisector, so the net dipole moment must point toward the oxygen. In more complex molecules with polar covalent bonds, the three-dimensional geometry and the compound's symmetry determine whether there is a net dipole moment. Dipole moment values can be obtained experimentally by measuring the dielectric constant. Both NH$_3$ and NF$_3$ molecules have a pyramidal shape with one lone pair of electrons on the N atom.

Conventions: covalent and ionic bonds are both types of bonds that create dipole moments. In the arrow notation, the arrowhead denotes the negative end, while the crossed (+) tail denotes the positive end. In molecular visualizations the dipole vector is often drawn centered at the center of mass, with its length scaled so that 1 debye corresponds to 1 ångström.

Worked numeric example: for a unit positive and a unit negative charge separated by 1.00 Å, the dipole moment is
$$\mu = Qr = (1.60 \times 10^{-19}\ \mathrm{C})(1.00 \times 10^{-10}\ \mathrm{m}) = 1.60 \times 10^{-29}\ \mathrm{C\,m}.$$
The debye characterizes the size of a dipole moment: when a proton and an electron are 100 pm apart,
$$\mu = (1.60 \times 10^{-29}\ \mathrm{C\,m})\left(\frac{1\ \mathrm{D}}{3.336 \times 10^{-30}\ \mathrm{C\,m}}\right) = 4.80\ \mathrm{D}.$$
If $l$ is the distance between the two charge centers of a polar molecule, then $\mu = q \times l$; for nonpolar molecules $l = 0$ and hence $\mu = 0$. HCl is a standard example of a polar molecule to work through.

Exercise: given the dipole moment of a molecule AB and an internuclear distance of $2.0 \times 10^{-10}$ m, the percentage ionic character of AB follows as the ratio of the observed moment to $e \times d$. (For reference, the dipole moment of HBr is $2.6 \times 10^{-30}$ C·m.)

The polar character of a molecule is quantified by a term called the dipole moment ($\mu$), denoted by the Greek letter mu and defined as the product of the magnitude of the charge and the distance between the centers of positive and negative charge. Sample solution: for $q = 2$ C and $d = 0.02$ m, $p = q \times d = 2 \times 0.02 = 0.04$ C·m. For a whole molecule, one can look up the bond dipole moments of the common bonds and add them as vectors to get a total dipole moment, though a tabulated bond moment describes an isolated bond, so the result is approximate. From the concentration dependence of the dielectric constant, the dipole moment of NtBuH$_2$Pc has been calculated to be 3.8 D, according to the Guggenheim-Palit equation. The bond dipole moment uses the idea of the electric dipole moment to measure the polarity of a chemical bond within a molecule; it occurs whenever there is a separation of positive and negative charge. Evaluation and interpretation of the dipole moments of covalent molecules provide an important tool in the study of molecular structure. (Questions that come up in practice: how to calculate the change in dipole moment from the ground state to the first excited state, e.g. with ORCA, and how a viewer such as Avogadro computes a dipole moment after a force-field optimization in which no electron distribution is computed.)
Is HF more ionic or less ionic than HCl (Worked Example 10.1)? The dipole moment of HF is $\mu = 1.83$ D and the bond length is 92 pm; the percent ionic character of the H-F bond is the ratio of this observed moment to the moment of a full $\pm e$ pair at the bond length.

Point particles with electric charge are referred to as point charges. Two point charges, one with charge $+q$ and the other with charge $-q$, separated by a distance $d$, constitute an electric dipole (a simple case of an electric multipole). A molecule is neutral overall, so if an amount of charge $+q$ separates at the positive charge center, $-q$ accumulates at the negative charge center. The electric dipole moment for such a pair is defined as the magnitude of the charge times the distance between them, with the direction (in the physics convention) toward the positive charge. To compute a bond dipole, measure the distance between the bonded particles; the bond dipole is then $\mu = \delta\, d$, where $\delta$ is the partial charge. For example, the bond moment of the O-H bond is about 1.5 D, pointing from H toward O. To calculate the dipole for an entire molecule, add all the individual bond dipoles as vectors; a classic exercise is to calculate the dipole moment of a water molecule this way. The higher the value of $\mu$, the higher the electric polarization of the molecule. A comparison worth working through is the dipole moment of NH$_3$ versus NF$_3$: both are pyramidal, but the lone-pair dipole on nitrogen reinforces the N-H bond dipoles in NH$_3$ and opposes the N-F bond dipoles in NF$_3$, so NH$_3$ has the larger moment. In quantum-chemistry packages, the dipole moment of an optimized structure is typically listed with the calculated quantities (ground-state values from DFT, excited-state values from TDDFT). Moreover, the size of the dipole moment is related to many other properties, for instance light emission in polar liquids and the packing of molecules in the solid phase. Molecules are composed of positively charged nuclei and negatively charged electrons distributed in space; when a molecule consists of more than two atoms, more than one bond holds the molecule together, and the overall symmetry decides whether the bond dipoles cancel. A further exercise: determine the correct order of increasing ionic character for a given series of bonds.

Exercise: calculate by vector addition the magnitudes and directions of the overall dipole moments for the isomers of dichlorobenzene: ortho-dichlorobenzene (1,2-dichlorobenzene), meta-dichlorobenzene (1,3-dichlorobenzene), and para-dichlorobenzene (1,4-dichlorobenzene), assuming that the benzene ring is a perfect hexagon and that the bond dipole $\mu_{CCl}$ remains constant at 1.5 D. (Originally published at https://www.priyamstudycentre.com.) The dipole moment of molecules is an interesting physical quantity in that it can be directly measured in experiments and also calculated by means of density functional theory.
If m is the power of any magnetic pole then the magnets magnetic dipole moment is provided by the vector M and it is articulated asWhere, 1. m = Strength of any magnetic dipole 2. ιι = Magnet length Can someone explain what a dipole moment is, and what it depends on? If l is the distance of the charge separation usually taken in bond length, then, µ = q × l. The dipole moment is a vector quantity and it has both, magnitude and direction. It is denoted by the Greek letter 'µ'.Mathematically,Dipole Moment (µ) = Charge (Q) * distance of separation (r)It is measured in Debye units denoted by 'D'. Hydrogen, carbon dioxide, methane, boron trichloride, carbon tetrachloride, benzene, etc are examples of non-polar molecules. Comparison between the gas-phase structure and that … But when the center of gravity of the positive charge does not coincide with the center of gravity of the negative charge, polarity arises in the molecules and the molecules are called polar like hydrogen chloride, water, ammonia, benzyl chloride, etc. Displaying Dipole Moment. Measure the distance between the particles that are bonded. How would I get the dipole moment of the first excited state though? Electric potential due to a Dipole (V) Solution: Let us assume there are two charges, –q, fixed at point A, and +q fixed at point B. As fluorine is highly electronegative, it appears that N-F bond should be more polar and the net dipole moment of NF 3 should be much greater than that of NH 3. web browser that Due to the greater electronegativity of Cl-atom, the chemical bonding electron pair is shifted towards Cl-atom and it acquires small negative charge (- q) and hydrogen atom acquires small positive charge (+ q). Otherwise, one can easily calculate dipole moment by using the formula: Dipole moment = Magnitude of the charge x distance between the two charges. The structural arrangement of these particles is different in different molecules. 
The direction is represented by an arrow pointing towards the negative end. In chemistry, the representation of dipole moment is given little differently with arrow symbol. The dipole moment of NtBuH 2 Pc in benzene was determined by measuring the dielectric constant at various concentrations; the results are shown in Figure 12. Calculate or measure the charge between the particles. You previously learned how to calculate the dipole moments of simple diatomic molecules. Dipole moment is a vector. Then, the angle between the atoms is used to find the net dipole moment. Question: What is electric potential for a dipole? (If the value of charge isn't mentioned, it is taken as 4.8 x [10 to the power -10]. Top. supports HTML5 video, Calculator Academy© - All Rights Reserved 2020. The H−O−H bond angle of water is pretty much 104.5 degrees. To determine whether a molecule a B is 1 trichloride, carbon tetrachloride, benzene etc... Gas-Phase structure and that … how to calculate dipole moment friend corresponds to 1 Angstrom difference in their charge order. A B is 1 dipoles of the molecules are composed of partially charged nuclei and negatively charged particles! Referred to as point charges to the power -10 ] ionic character of the atom ( the electronegativity.. 10 to the magnitude of the O-H bond is -1.5 Debyes ionic character View! Μ of the atom ( the electronegativity ) done DFT to get dipole! Organic chemistry video tutorial provides a basic introduction into dipole moment is listed Under calculated Quantities in the 3-D.. Of water is pretty much 104.5 degrees covalent and ionic bonds are types bods..., etc are examples of non-polar molecules then, the angle between the atoms is to. It depends on 1, 2, 3, 5-tetrachlorobenzene: View Answer explain what a dipole moment ( )! Order of increasing ionic character of a molecule has an overall molecular dipole moment the angle between the and! From the ORCA input library example: bond dipole μ is given by: = of. 
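The vector-addition rule described above can be sketched numerically. This is an illustrative sketch only: the planar geometry, the ~1.5 D O–H bond-moment value, and the 104.5° angle are the figures quoted in the text, and `net_dipole` is my own helper name.

```python
import math

def net_dipole(bond_moments):
    """Vector sum of in-plane bond dipoles given as (magnitude in debye,
    direction in degrees); returns the magnitude of the molecular dipole."""
    dx = sum(m * math.cos(math.radians(a)) for m, a in bond_moments)
    dy = sum(m * math.sin(math.radians(a)) for m, a in bond_moments)
    return math.hypot(dx, dy)

# The worked example from the text: p = q x d = 2 C x 0.02 m.
p = 2 * 0.02
print(p)  # 0.04 (C·m)

# Two equal and opposite bond dipoles cancel, as in CO2.
print(net_dipole([(1.0, 0.0), (1.0, 180.0)]))  # ~0

# Water: two O-H bond moments of ~1.5 D separated by the 104.5 degree
# H-O-H angle (illustrative planar treatment of the two bond vectors).
half = 104.5 / 2
mu_water = net_dipole([(1.5, half), (1.5, -half)])
print(round(mu_water, 2))  # ~1.84 D, pointing toward the oxygen atom
```

The same routine reproduces the dichlorobenzene exercise: placing two C–Cl bond dipoles of 1.5 D at 60°, 120°, or 180° to each other gives the ortho, meta, and para cases.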
Maximum Likelihood with Exponential Graphical Models

Graphical models provide a powerful framework to model interactions between large families of random variables. Following pioneering work by J. Besag (JRSS-B 36, 1974), they were introduced to the image processing community through the famous paper by S. and D. Geman (T-PAMI 6, 1984). Parameter estimation for this kind of model is quite difficult, however, and the algorithm described below, which belongs to the family of stochastic gradient methods, addresses this issue in an iterative way. The resulting approach is more flexible than iterative scaling (Darroch and Ratcliff, Annals of Math. Statistics, 1972), and more efficient (in the statistical sense) than the pseudo-likelihood estimator introduced by Besag. More details can be found in the following papers.

Estimation and annealing for Gibbsian fields, L. Younes, Annales de l'IHP Probabilités et statistiques 24 269--294 (1988)
Parametric inference for imperfectly observed Gibbsian fields, L. Younes, Probability Theory and Related Fields 82 625--645 (1989)
Maximum likelihood estimation for Gibbsian fields, L. Younes, Lecture Notes-Monograph Series 403--426 (1991)
Stochastic gradient estimation strategies for Markov random fields, L. Younes, SPIE's International Symposium on Optical Science, Engineering, and Instrumentation 315--325 (1998)
Parameter estimation for imperfectly observed Gibbs fields and some comments on Chalmond's EM Gibbsian algorithm, L. Younes, Stochastic Models, Statistical Methods, and Algorithms in Image Analysis, Lecture Notes in Statistics Volume 74, 1992, pp 240-258
On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates, L. Younes, Stochastics: An International Journal of Probability and Stochastic Processes 65 177--228 (1999)

The following papers also use stochastic gradient methods in different contexts related to image processing.
A stochastic algorithm for probabilistic independent component analysis, S. Allassonniere and L. Younes, The Annals of Applied Statistics 6 125--160 (2012)
A stochastic algorithm for feature selection in pattern recognition, S. Gadat and L. Younes, The Journal of Machine Learning Research 8 509--547 (2007)

This paper explores an alternative method for the calibration of Markov random field parameters, based on a paradigm in which parameters are fixed in order to constrain the local minimizers of a cost or energy function, instead of using a standard statistical approach. A similar paradigm, called energy-based learning, has subsequently been introduced by Y. Le Cun.

Calibrating parameters of cost functionals, L. Younes, Computer Vision—ECCV 2000 212--223 (2000)

The following discussion is taken from lecture notes written for my class Graphical Models at JHU. Consider a parametrized model for a Gibbs distribution \[ \pi_\theta(x) = \frac{1}{Z_\theta} \exp(-\theta^TU(x)) \] where \(\theta\) is a $d$-dimensional parameter and $U$ is a function from $F_V$ to $\mathbb R^d$. For example, if $\pi$ is an Ising model with \[ \pi(x) = \frac{1}{Z} \exp\big( \alpha \sum_{s\in V} x_s + \beta \sum_{s\sim t} x_s x_t\big), \] we would take $\theta = (\alpha, \beta)$ and $U(x) = - (\sum_s x_s, \sum_{s\sim t} x_s x_t)$. Most of the Markov random field models that are used in practice can be put in this form. The constant $Z_\theta$ is \[ Z_\theta = \sum_{x\in F_V} \exp(-\theta^TU(x)) \] and is usually not computable. Now, assume that an $N$-sample, $x^{(1)}, \ldots, x^{(N)}$, is observed for this distribution. The maximum likelihood estimator maximizes \[ \ell(\theta) = \frac{1}{N} \sum_{k=1}^N \ln \pi_\theta(x^{(k)}) = -\theta^T \bar U_N - \ln Z_\theta \] with $\bar U_N = (U(x^{(1)})+\cdots+U(x^{(N)}))/N$. We have the following proposition, which is a well-known property of exponential families of probabilities.
The log-likelihood, $\ell$, is a concave function of $\theta$, with \[ \nabla \ell(\theta) = E_\theta(U) - \bar U_N, \qquad D^2\ell(\theta) = - \text{Var}_\theta(U), \] where $E_\theta$ denotes the expectation with respect to $\pi_\theta$ and $\text{Var}_\theta$ the covariance matrix under the same distribution. This proposition implies that a local maximum of $\theta \mapsto \ell(\theta)$ must also be global. Any such maximum must be a solution of \[ E_\theta(U) = \bar U_N, \] and conversely. The function \(\ell\) has a finite maximum if and only if there is no direction $u\in \mathbb R^d$ such that $u^T(U(x) - \bar U_N) \leq 0$ for all $x\in F_V$. Equivalently, $\bar U_N$ must belong to the interior of the convex hull of the finite set \[ \{U(x), x\in F_V\}\subset \mathbb R^d. \] In such a case, which we hereafter assume, computing the maximum likelihood estimator boils down to solving the equation $E_\theta (U) = \bar U_N$. Because the maximization problem is concave, we know that numerical algorithms like gradient ascent, \[ \theta(t+1) = \theta(t) + \epsilon (E_{\theta(t)} (U) - \bar U_N), \] or Newton-Raphson, \[ \theta(t+1) = \theta(t) + \epsilon \, \text{Var}_{\theta(t)}(U)^{-1} (E_{\theta(t)} (U) - \bar U_N), \] which is more efficient, converge to the optimal parameter. Unfortunately, the computation of these expectations and covariance matrices can only be made explicit for a very specific class of models. For general loopy graphical models, the expectation can be estimated iteratively using Monte-Carlo methods. It turns out that this estimation can be synchronized with gradient ascent to obtain a consistent algorithm. As remarked above, for fixed $\theta$, there exist Markov-chain Monte Carlo algorithms that asymptotically sample from $\pi_\theta$. Select one of these algorithms, and let $P_\theta$ be the corresponding stochastic matrix that provides the associated transition probabilities for a given $\theta$.
Then, define the iterative algorithm, initialized with arbitrary $\theta(0)$ and $x(0) \in F_V$, that loops over the following two steps.

(SG1) Sample from the distribution $P_{\theta(t)}(x(t),\cdot)$ to obtain a new configuration $x(t+1)$.

(SG2) Update the parameter using $\theta(t+1) = \theta(t) + \gamma(t+1) (U(x(t+1)) - \bar U_N)$.

Then, the following holds. If $P_\theta$ corresponds to the Gibbs sampler or Metropolis algorithm, and $\gamma(t+1) = \epsilon/(t+1)$ for small enough $\epsilon$, the algorithm that iterates (SG1) and (SG2) converges to the maximum likelihood estimator. This is a particular instance of a stochastic gradient algorithm, in which the expectation in the update step is crudely estimated at each step by the evaluation of $U$ on the random sample $x(t+1)$. One of the reasons for the convergence (and almost a necessary condition for it) is that the average of the increment term in (SG2) over $x(t+1)$ under the invariant distribution of $P_{\theta(t)}$ is precisely the gradient. This averaging effect then takes place over time to provide the correct limit.
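The (SG1)/(SG2) loop can be sketched for a small Ising model on a grid. This is an illustrative sketch, not code from the papers: the grid size, step size $\epsilon$, number of sweeps, and the use of full Gibbs sweeps as the transition kernel are all my own choices, and here $\bar U_N$ is computed from synthetic data generated at a known parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def U(x):
    """Sufficient statistics U(x) = -(sum_s x_s, sum_{s~t} x_s x_t),
    so that pi_theta(x) is proportional to exp(-theta^T U(x))."""
    singles = x.sum()
    pairs = (x[:-1, :] * x[1:, :]).sum() + (x[:, :-1] * x[:, 1:]).sum()
    return -np.array([singles, pairs], dtype=float)

def gibbs_sweep(x, theta):
    """One full sweep of the single-site Gibbs sampler for pi_theta."""
    alpha, beta = theta
    n = x.shape[0]
    for i in range(n):
        for j in range(n):
            nb = sum(x[a, b]
                     for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= a < n and 0 <= b < n)
            h = 2 * (alpha + beta * nb)   # conditional log-odds of x_ij = +1
            x[i, j] = 1 if rng.random() < 1 / (1 + np.exp(-h)) else -1
    return x

# Synthetic "observed" statistics bar U_N from a known parameter.
n, theta_true = 8, np.array([0.1, -0.2])
x = rng.choice([-1, 1], size=(n, n))
stats = []
for t in range(300):
    x = gibbs_sweep(x, theta_true)
    if t >= 100:                          # discard burn-in sweeps
        stats.append(U(x))
bar_U = np.mean(stats, axis=0)

# (SG1)/(SG2): one MCMC transition per update, gamma(t+1) = eps/(t+1).
theta = np.zeros(2)
x = rng.choice([-1, 1], size=(n, n))
eps = 0.05
for t in range(2000):
    x = gibbs_sweep(x, theta)                    # (SG1)
    theta += eps / (t + 1) * (U(x) - bar_U)      # (SG2)
print(theta)   # drifts toward theta_true as t grows
```

Note how the chain is not restarted between updates: the configuration $x(t)$ carries over, and the averaging over time replaces the intractable expectation $E_{\theta}(U)$.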
Meta-analysis with zero-event studies: a comparative study with application to COVID-19 data

Jia-Jin Wei1, En-Xuan Lin2, Jian-Dong Shi1, Ke Yang1, Zong-Liang Hu3, Xian-Tao Zeng4 & Tie-Jun Tong1 (ORCID: orcid.org/0000-0003-0947-3990)

Meta-analysis is a statistical method to synthesize evidence from a number of independent studies, including those from clinical studies with binary outcomes. In practice, when there are zero events in one or both groups, they may cause statistical problems in the subsequent analysis. In this paper, by considering the relative risk as the effect size, we conduct a comparative study that consists of four continuity correction methods and another state-of-the-art method without the continuity correction, namely the generalized linear mixed models (GLMMs). To further advance the literature, we also introduce a new method of the continuity correction for estimating the relative risk. From the simulation studies, the new method performs well in terms of mean squared error when there are few studies. In contrast, the generalized linear mixed model performs best when the number of studies is large. In addition, by reanalyzing recent coronavirus disease 2019 (COVID-19) data, it is evident that the double-zero-event studies impact the estimate of the mean effect size. We recommend the new method to handle the zero-event studies when there are few studies in a meta-analysis, or the GLMM when the number of studies is large. The double-zero-event studies may be informative, and so we suggest not excluding them.

Meta-analysis is a statistical method to synthesize evidence from a number of independent studies that address the same scientific questions [1, 2]. In clinical studies, experimental data are commonly composed of binary outcomes, and consequently, meta-analyses of binary data have attracted increasing attention in evidence-based medicine [3, 4].
For each study, an effect size is reported to quantify the treatment effect by comparing the event probabilities between the treatment group and the control group, including the odds ratio (OR), the relative risk (RR), and the risk difference (RD). In meta-analysis, when the study-specific effect size is estimated based on a two-by-two contingency table, the zero-event problem in one or both groups frequently occurs, which may cause unexpected computational complications in the statistical inference of the effect size. If the study involves a zero event in one group, we refer to it as a single-zero-event study; and if the study involves zero events in both groups, we refer to it as a double-zero-event study [5]. Vandermeer et al. [6] and Kuss [7] applied random sampling techniques and found that 30% of meta-analyses from the 500 sampled Cochrane reviews included one or more single-zero-event studies, while 34% of the reviews involved at least one meta-analysis with a double-zero-event study. As a recent example, Chu et al. [8] conducted several meta-analyses to evaluate the effectiveness of physical distancing, face masks, and eye protection on the spread of three coronaviruses, which caused severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS), or coronavirus disease 2019, also known as COVID-19 [9, 10]. Specifically, they considered RR as the effect size and applied the random-effects model to pool the observed effect sizes with an inverse-variance weight assigned to each study [11, 12]. As a result, for their meta-analysis on physical distancing, they concluded that the risk of infection decreases significantly with greater physical distance. We note, however, that there are 8 single-zero-event studies and 7 double-zero-event studies among a total of 32 studies. In particular, for the 7 studies on COVID-19 data, 4 of them are single-zero-event studies and 2 of them are double-zero-event studies.
To escape the zero-event problem, Chu et al. [8] excluded the double-zero-event studies from their meta-analyses, which, however, may introduce an estimation bias to the overall effect size [7]. More recently, Xu et al. [13] revisited 442 meta-analyses with or without the double-zero-event studies, and then by a comparative study, they concluded that the double-zero-event studies do contain valuable information and should not be excluded from the meta-analysis. Inspired by the aforementioned examples, we provide a selective review on the existing methods for meta-analysis that can handle the zero-event studies. For ease of presentation, we will mainly focus on the random-effects model with RR as the effect size, whereas the same comparison also applies to OR and RD. For more details on meta-analysis of OR and RD with the zero-event studies, one may refer to [7] and the references therein, in which the author discussed the methods applicable to all the three effect sizes as well as some methods only applicable to one of them. For a given study, we let n1 be the number of samples in the treatment group with X1 being the number of events, and n2 be the number of samples in the control group with X2 being the number of events. Let also X1 follow a binomial distribution with parameters n1 and p1>0, and X2 follow a binomial distribution with parameters n2 and p2>0. We further assume that X1 and X2 are independent of each other. Then to estimate RR=p1/p2, the maximum likelihood estimator is known as $$\begin{array}{@{}rcl@{}} \widehat {\text{RR}} = {X_{1}/n_{1} \over X_{2}/n_{2}} = \frac{X_{1}n_{2}}{X_{2}n_{1}} \end{array} $$ Note that \(\widehat {\text {RR}}\) is often right-skewed. To derive the statistical inference on RR, researchers frequently apply the log scale so that the resulting estimator can be more normally distributed. 
Specifically, following Agresti [14], the approximate variance of \(\text {ln}\left (\widehat {\text {RR}}\right)\) is $$\begin{array}{@{}rcl@{}} \text{var}\left[\text{ln}\left(\widehat{\text{RR}}\right)\right]\approx \frac{1}{X_{1}} - \frac{1}{n_{1}} + \frac{1}{X_{2}} - \frac{1}{n_{2}} \end{array} $$ By (1) and (2), when there are zero events in one or both groups, the classic method for estimating RR suffers from the zero-event problem and is no longer applicable. To obtain a valid estimate of RR, following Haldane [15], one often recommends adding 0.5 to the counts of events and non-events if some count is zero [16, 17]. This method is referred to as a continuity correction method and has been extensively used in meta-analysis to deal with the zero-event studies. For further developments on the continuity correction, one may refer to Sweeting et al. [18], Carter et al. [19], and the references therein. On the other hand, there are also statistical models without the continuity correction that handle meta-analysis with the zero-event studies, such as the generalized linear mixed models [4, 20, 21]. The remainder of this paper is organized as follows. In "Methods with the continuity correction" section, we first review the random-effects model and the existing methods with the continuity correction, and then propose a new method of the continuity correction for estimating RR. In "The generalized linear mixed models" section, we review the generalized linear mixed models for meta-analysis. In "Simulation studies" section, we conduct simulation studies to evaluate the performance of the reviewed methods and our new method. In "Application to COVID-19 data" section, we apply all the well-performing methods to a recent meta-analysis on COVID-19 data for further evaluation of their performance. We then conclude the paper in "Discussion" and "Conclusions" sections with some interesting findings, and provide the supplementary materials in the Appendix.
Methods with the continuity correction Suppose that there are k studies in the meta-analysis, and yi for \(i=1, \dots,k\) are the observed effect sizes for each study. By DerSimonian and Laird [22], the random-effects model can be expressed as $$\begin{array}{@{}rcl@{}} y_{i} = \theta +\zeta_{i} + \epsilon_{i} \end{array} $$ where θ is the mean effect size, ζi are the deviations of each study from θ, and εi are the sampling errors. We further assume that ζi are independent and identically distributed random variables from N(0,τ2),εi are independent random errors from \(N(0,\sigma _{i}^{2})\), and that they are independent of each other. In addition, τ2 is referred to as the between-study variance, and \(\sigma _{i}^{2}\) are referred to as the within-study variances. For the random-effects model in (3), by the inverse-variance method the mean effect size θ can be estimated by $$\begin{array}{@{}rcl@{}} \hat{\theta} = \frac{\sum_{i} w^{*}_{i} y_{i}}{\sum_{i} w^{*}_{i}} \end{array} $$ where \(w^{*}_{i} = 1/\left (\sigma _{i}^{2}+ \tau ^{2}\right)\) are the weights assigned to each individual study [23]. In meta-analysis, the within-study variances \(\sigma _{i}^{2}\) are routinely estimated by the variances of the observed effect sizes, denoted by var(yi). While for the between-study variance, DerSimonian and Laird [22] proposed the method of moments estimator as $$\begin{array}{@{}rcl@{}} T^{2} = \frac{Q-k+1}{C} \end{array} $$ where \(Q = \sum _{i} w_{i} \left (y_{i} - \sum _{i} w_{i} y_{i} / \sum _{i} w_{i}\right)^{2}\) is known as the Q statistic, and \(C = \sum _{i} w_{i} - \sum _{i} w_{i}^{2} / \sum _{i} w_{i}\) with \(w_{i} = 1/\sigma _{i}^{2}\) for \(i=1,\dots,k\). We note, however, that the random-effects model may suffer from the zero-event problem. Taking RR as an example, if we apply the random-effects model for meta-analysis, then the effect sizes yi will be the observed ln(RR) values. 
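The inverse-variance pooling in (4) together with the DerSimonian–Laird between-study variance (5) can be sketched as follows. The function name and the example effect sizes are illustrative, and the sketch truncates $T^2$ at zero, as is common in software, even though formula (5) itself does not.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of observed effect sizes y (e.g. ln RR values)
    with within-study variances v, following formulas (4)-(5) of the text."""
    y, v = np.asarray(y, dtype=float), np.asarray(v, dtype=float)
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)                  # fixed-effect estimate
    Q = np.sum(w * (y - y_fe) ** 2)                   # Q statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / C)                # formula (5), truncated
    w_star = 1.0 / (v + tau2)                         # random-effects weights
    theta_hat = np.sum(w_star * y) / np.sum(w_star)   # formula (4)
    se = np.sqrt(1.0 / np.sum(w_star))
    return theta_hat, tau2, se

# Made-up ln(RR) values and within-study variances for five studies.
y = [-0.69, -0.22, -1.10, -0.36, 0.10]
v = [0.25, 0.18, 0.40, 0.30, 0.22]
theta_hat, tau2, se = dersimonian_laird(y, v)
print(theta_hat, tau2, se)
```

Exponentiating `theta_hat` and the endpoints `theta_hat ± 1.96*se` then gives the pooled RR and its 95% confidence interval.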
Now for estimating ln(RR), if we plug in \(\widehat {\text {RR}}\) from formula (1) directly, then ln\(\left (\widehat {\text {RR}}\right)\) will not be well defined when a study involves zero events, and neither will the variance estimate of \(\text {ln}\left (\widehat {\text {RR}}\right)\) in formula (2). Consequently, without a valid estimate of the effect size and of its within-study variance, the random-effects model cannot be applied to estimate the mean effect size by the inverse-variance method. This shows that a correction on \(\widehat {\text {RR}}\) is often desired in meta-analysis when some studies involve zero events. Existing methods with the continuity correction Let c1>0 and c2>0 be two values for the continuity correction. To overcome the zero-event problem, one common approach is to estimate p1 by (X1+c1)/(n1+2c1) and estimate p2 by (X2+c2)/(n2+2c2). Plugging them into (1) and (2), we have $$\begin{array}{@{}rcl@{}} \widetilde{\text{RR}}\left(c_{1},c_{2}\right) = {X_{1}+c_{1} \over n_{1}+2c_{1}}\cdot {n_{2}+2c_{2} \over X_{2}+c_{2}} \end{array} $$ Accordingly, the 95% confidence interval (CI) of RR is $$ \begin{aligned} \text{exp} \left\{{\text{ln}\left(\widetilde{\text{RR}}\left(c_{1},c_{2}\right)\right)} \!\pm\! 1.96\sqrt{ \frac{1}{X_{1}+c_{1}} - \frac{1}{n_{1}+2c_{1}} + \frac{1}{X_{2}+c_{2}} - \frac{1}{n_{2}+2c_{2}}} \right\} \end{aligned} $$ For the values of c1 and c2 in (6), there are mainly three suggestions in the literature that are widely used for the random-effects meta-analysis. When c1=c2=0.5, it yields the Haldane estimator [15] as $$ \begin{aligned} \widetilde{\text{RR}}_{\text{Haldane}} = \left\{ \begin{array}{ll} \frac{X_{1}+0.5}{n_{1}+1}\cdot \frac{n_{2}+1}{X_{2}+0.5} & ~~~~~~~~ X_{1}= 0~\text{or}~n_{1}, X_{2}=0~\text{or}~n_{2}, \\ \frac{X_{1}n_{2}}{n_{1}X_{2}} & ~~~~~~~~ \text{otherwise} \end{array} \right.
\end{aligned} $$ When c1=n1/(n1+n2) and c2=n2/(n1+n2), it yields the TACC estimator [18] as $$ \begin{aligned} \widetilde{\text{RR}}_{\text{TACC}} = \left\{ \begin{array}{ll} \frac{X_{1}+c_{1}}{n_{1}+2c_{1}}\cdot \frac{n_{2}+2c_{2}}{X_{2}+c_{2}} & ~~~~~~~~ X_{1}= 0~\text{or}~n_{1}, X_{2}=0~\text{or}~n_{2}, \\ \frac{X_{1}n_{2}}{X_{2}n_{1}} & ~~~~~~~~ \text{otherwise} \end{array} \right. \end{aligned} $$ For the balanced case when n1=n2, the TACC estimator is equivalent to the Haldane estimator. Also, to implement this estimator, one may apply metabin in the R package "meta" with the setting incr="TACC" [24]. When c1=c2=1, it yields the Carter estimator [19] as $$\begin{array}{@{}rcl@{}} \widetilde{\text{RR}}_{\text{Carter}} = \frac{X_{1}+1}{n_{1}+2}\cdot \frac{n_{2}+2}{X_{2}+1} \end{array} $$ Besides the continuity correction methods in family (6), another alternative is to estimate p1 by (X1+c1)/(n1+c1) and estimate p2 by (X2+c2)/(n2+c2). Then with c1=c2=0.5, it yields the Pettigrew estimator [25] as $$\begin{array}{@{}rcl@{}} \widetilde{\text{RR}}_{\text{Pettigrew}} = {X_{1}+0.5 \over n_{1}+0.5}\cdot {n_{2}+0.5 \over X_{2}+0.5} \end{array} $$ and the 95% CI of RR as $$ \begin{aligned} \text{exp} \left\{\text{ln}\left(\widetilde{\text{RR}}_{\text{Pettigrew}}\right) \pm 1.96\sqrt{ \frac{1}{X_{1}+0.5} - \frac{1}{n_{1}+0.5} + \frac{1}{X_{2}+0.5} - \frac{1}{n_{2}+0.5}} \right\} \end{aligned} $$ Moreover, to avoid a zero standard error, Hartung and Knapp [26] suggested not to correct X1 and X2 when X1=n1 and X2=n2. A hybrid method with the continuity correction Note that the existing methods are all constructed to first estimate p1 and p2, and then take their ratio as an estimate of RR=p1/p2. Nevertheless, noting that p2 is in the denominator rather than in the numerator, inverting an optimal estimate for p2 may not necessarily yield an optimal estimate for 1/p2. In this section, we propose a hybrid method that estimates p1 and 1/p2 directly, and then takes their product to estimate RR. For the estimation of p1, we show in Appendix 1 that the mean squared error (MSE) of (X1+c1)/(n1+2c1) is smaller than the MSE of (X1+c1)/(n1+c1) in most settings. We thus consider applying (X1+c1)/(n1+2c1) to estimate p1 in RR. To estimate the reciprocal of p2, one may consider (n2+2c2)/(X2+c2) as in (6).
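The continuity-correction family (6) and its 95% CI (7) can be sketched as follows, with the Haldane, TACC, and Carter estimators recovered as special choices of c1 and c2 (an illustrative sketch with our own function names; the published estimators apply the correction only when a zero cell occurs, which we omit here for brevity):

```python
import math

def rr_corrected(x1, n1, x2, n2, c1, c2):
    """Family (6): estimate p1 by (X1+c1)/(n1+2c1) and p2 by (X2+c2)/(n2+2c2)."""
    return ((x1 + c1) / (n1 + 2 * c1)) * ((n2 + 2 * c2) / (x2 + c2))

def rr_ci(x1, n1, x2, n2, c1, c2, z=1.96):
    """95% CI (7) on the RR scale, via the log transform."""
    log_rr = math.log(rr_corrected(x1, n1, x2, n2, c1, c2))
    se = math.sqrt(1 / (x1 + c1) - 1 / (n1 + 2 * c1)
                   + 1 / (x2 + c2) - 1 / (n2 + 2 * c2))
    return math.exp(log_rr - z * se), math.exp(log_rr + z * se)

# Special cases for a zero-event study, e.g. X1=0, n1=10, X2=3, n2=12:
haldane = rr_corrected(0, 10, 3, 12, 0.5, 0.5)        # c1 = c2 = 0.5
c1_t, c2_t = 10 / (10 + 12), 12 / (10 + 12)           # TACC: ci = ni/(n1+n2)
tacc = rr_corrected(0, 10, 3, 12, c1_t, c2_t)
carter = rr_corrected(0, 10, 3, 12, 1.0, 1.0)         # c1 = c2 = 1
```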
Alternatively, one may consider (n2+c2)/(X2+c2), as in the Pettigrew-type correction above; see also [27] and [28] for more discussion. If we take the latter option, a hybrid estimator of RR can be constructed as $$\begin{array}{@{}rcl@{}} \widehat{\text{RR}}\left(c_{1},c_{2}\right) = {X_{1}+c_{1} \over n_{1}+2c_{1}}\cdot {n_{2}+c_{2} \over X_{2}+c_{2}} \end{array} $$ For the optimal values of c1 and c2 in (11), our simulation studies in Appendices 2 and 3 show that c1=0.5 and c2=0.5 are among the best options. In view of this, our new hybrid estimator is taken as follows: $$\begin{array}{@{}rcl@{}} \widehat{\text{RR}}(0.5,0.5) = {X_{1}+0.5 \over n_{1}+1}\cdot {n_{2}+0.5 \over X_{2}+0.5} \end{array} $$ and the 95% CI of RR is given as $$ \begin{aligned} \text{exp} \left\{\text{ln}\left(\widehat{\text{RR}}(0.5,0.5)\right) \!\pm\! 1.96\sqrt{ \frac{1}{X_{1}+0.5} - \frac{1}{n_{1}+1} + \frac{1}{X_{2}+0.5} - \frac{1}{n_{2}+0.5}} \right\} \end{aligned} $$ Comparison of the continuity correction methods In this section, we conduct a numerical study to compare the finite sample performance of the existing and new methods. For ease of presentation, we refer to the confidence intervals associated with (8), (9), (10), the Pettigrew correction, and (12) as the Haldane interval, the TACC interval, the Carter interval, the Pettigrew interval, and the hybrid interval, respectively. To generate the data, we let p2=0.05, 0.15, 0.85 or 0.95, and p1=p2×RR with RR ranging from 0.2 to min{5,1/p2}. We also consider different combinations of the sample sizes. For the sake of brevity, only the results for balanced samples with n1=n2=10 or 50 are presented, whereas the results for the unbalanced samples are postponed to Appendix 4. Recall that the Haldane and TACC intervals are the same when n1=n2, and we thus present the results for the Haldane interval only.
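The hybrid estimator (12) and its 95% CI (13) can be written out directly (again an illustrative sketch with our own names, not the authors' code):

```python
import math

def rr_hybrid(x1, n1, x2, n2, c1=0.5, c2=0.5):
    """Hybrid estimator (11)/(12): (X1+c1)/(n1+2c1) * (n2+c2)/(X2+c2)."""
    return ((x1 + c1) / (n1 + 2 * c1)) * ((n2 + c2) / (x2 + c2))

def rr_hybrid_ci(x1, n1, x2, n2, z=1.96):
    """95% CI (13) for the hybrid estimator with c1 = c2 = 0.5."""
    log_rr = math.log(rr_hybrid(x1, n1, x2, n2))
    se = math.sqrt(1 / (x1 + 0.5) - 1 / (n1 + 1)
                   + 1 / (x2 + 0.5) - 1 / (n2 + 0.5))
    return math.exp(log_rr - z * se), math.exp(log_rr + z * se)
```

The only difference from the Haldane estimator (8) is the denominator correction: n2+0.5 rather than n2+1, so the hybrid point estimate is slightly smaller whenever X2 and n2 are fixed.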
With N=100,000 repetitions for each setting, we generate random numbers from the binomial distributions with parameters (p1,n1) and (p2,n2) to yield the estimates of RR and their CIs. We then compute the frequencies of the true RR falling in the CIs as the coverage probability estimates. Moreover, the expected lengths of the CIs on the log scale are computed by \(N^{-1}\sum _{s=1}^{N}\left (\text {ln(UL}_{\text {s}}) - \text {ln(LL}_{\text {s}})\right)\), where ULs and LLs are the upper and lower limits of the sth CI. For p2=0.05 or 0.15, the top four panels of Figs. 1 and 2 show that the Haldane interval is the most conservative interval in most settings, and it provides the longest expected lengths compared to the other three intervals. The Carter interval may have downward spikes in the left or right tail, although it leads to the shortest expected lengths. We also note that the simulation results of the Pettigrew interval and the hybrid interval are nearly the same. Their coverage probabilities and expected lengths are intermediate between those of the other two intervals in most settings. Comparison of the four CIs of RR with p2=0.05, 0.15, 0.85 or 0.95, and n1=n2=10. The dot-dashed lines represent the simulation results of the Haldane interval, the dashed lines represent the simulation results of the Carter interval, the dotted lines represent the simulation results of the Pettigrew interval, and the solid lines represent the simulation results of the hybrid interval. CI: Confidence interval, RR: Relative risk From the bottom four panels of Figs. 1 and 2 with p2=0.85 or 0.95, it is evident that the Haldane interval has a satisfactory performance in most settings with the coverage probabilities around the nominal level. In contrast, the Carter interval fails to provide sufficiently large coverage probabilities in most settings, as does the Pettigrew interval when n1 and n2 are small.
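The coverage and expected-length computation just described can be sketched as follows for the hybrid interval (a self-contained toy version under our own naming; any of the four interval methods could be substituted, and N is reduced here for illustration):

```python
import math
import random

def hybrid_ci(x1, n1, x2, n2, z=1.96):
    # Hybrid 95% CI as in (12)-(13); the +0.5 shift keeps all terms positive.
    est = ((x1 + 0.5) / (n1 + 1)) * ((n2 + 0.5) / (x2 + 0.5))
    se = math.sqrt(1 / (x1 + 0.5) - 1 / (n1 + 1)
                   + 1 / (x2 + 0.5) - 1 / (n2 + 0.5))
    return math.exp(math.log(est) - z * se), math.exp(math.log(est) + z * se)

def coverage_and_length(p1, p2, n1, n2, n_rep=2000, seed=1):
    """Estimate the coverage probability of the true RR and the mean CI
    length on the log scale, following the scheme described in the text."""
    random.seed(seed)
    rr_true = p1 / p2
    hits, total_len = 0, 0.0
    for _ in range(n_rep):
        x1 = sum(random.random() < p1 for _ in range(n1))  # Binomial(n1, p1)
        x2 = sum(random.random() < p2 for _ in range(n2))  # Binomial(n2, p2)
        lo, hi = hybrid_ci(x1, n1, x2, n2)
        hits += (lo <= rr_true <= hi)
        total_len += math.log(hi) - math.log(lo)
    return hits / n_rep, total_len / n_rep
```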
Note also that the coverage probabilities of the hybrid interval are comparable to those of the Haldane interval as long as p2 is not extremely large. Moreover, the hybrid interval yields shorter expected lengths than the Haldane interval. To sum up, when p2 is small, the Pettigrew interval and the hybrid interval are less conservative than the Haldane interval in most settings. For large p2, the Haldane interval and the hybrid interval perform better than the Pettigrew interval in terms of coverage probability. In addition, the expected lengths of the hybrid interval are always shorter than those of the Haldane interval. This shows that the hybrid interval can serve as a good alternative for the interval estimation of RR. The generalized linear mixed models The generalized linear mixed models (GLMMs) are extensions of the generalized linear model, which include both the fixed and random effects as linear predictors [14]. Different types of the GLMMs have been proposed in the literature, including a few reviews and comparison studies [4, 29]. Among the existing models, the bivariate GLMM has been well recognized and is recommended for estimating RR in meta-analysis [20]. Let pi1 and pi2 be the event probabilities in the treatment and control groups of the ith study, respectively.
The bivariate GLMM is represented as $$\begin{array}{@{}rcl@{}} &&g(p_{i1}) = \Omega_{1} + \zeta_{i1} \\ &&g(p_{i2}) =\Omega_{2} + \zeta_{i2} \end{array} $$ where g(·) is the link function, Ω1 and Ω2 are the fixed effects, and the random effects are given by $$\begin{array}{@{}rcl@{}} { \left(\begin{array}{c} \zeta_{i1} \\ \zeta_{i2} \end{array} \right)} \overset{\text{ind}}{\sim} { N\left[ \left(\begin{array}{c} 0 \\ 0 \end{array} \right), \left(\begin{array}{cc} \tau_{1}^{2} & \rho \tau_{1} \tau_{2} \\ \rho \tau_{1} \tau_{2} & \tau_{2}^{2} \end{array} \right) \right ]} \end{array} $$ The mean effect size based on model (14) is defined as $$\begin{array}{@{}rcl@{}} {}{\text{RR}}_{\text{GLMM}} =\frac{E\left(p_{1}\right)}{E\left(p_{2}\right)} = \frac{\int_{-\infty}^{\infty}g^{-1}\left(\Omega_{1}+t\right)\tau_{1}^{-1} \phi\left(t/\tau_{1}\right) \mathrm{d}t}{\int_{-\infty}^{\infty}g^{-1}\left(\Omega_{2}+t\right)\tau_{2}^{-1} \phi\left(t/\tau_{2}\right) \mathrm{d}t} \end{array} $$ where E(p1) and E(p2) are the mean event probabilities in the treatment and control groups, g−1(·) is the inverse function of the link, and ϕ(·) is the probability density function of the standard normal distribution [30]. For the logit link, Zeger et al. [31] proposed an approximate formula \(E\left (p_{j}\right)\approx \text {expit}\left (\Omega _{j} /\sqrt {1+C^{2}\tau _{j}^{2}}\right)\) with \(C = 16\sqrt {3}/(15\pi)\). For the probit link, \(E\left (p_{j}\right)=\Phi \left (\Omega _{j} /\sqrt {1+\tau _{j}^{2}}\right)\), where j=1 or 2, and Φ(·) is the cumulative distribution function of the standard normal distribution. For the other links, there does not exist a closed form of formula (15), and so a numerical approximation is often needed [32]. For the parameter estimation in model (14), Jackson et al. [4] provided a detailed introduction for the implementation based on the R package "lme4" in their model 6.
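The two closed-form marginal-probability formulas just quoted, Zeger et al.'s logit approximation and the exact probit expression, can be sketched as follows (illustrative only; the function names are ours, and fitting Ω and τ themselves would require a GLMM package such as "lme4"):

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean_p_logit(omega, tau):
    """Zeger et al.'s approximation: E(p) ~ expit(Omega / sqrt(1 + C^2 tau^2))."""
    C = 16 * math.sqrt(3) / (15 * math.pi)
    return expit(omega / math.sqrt(1 + C ** 2 * tau ** 2))

def std_normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_p_probit(omega, tau):
    """Exact form under the probit link: E(p) = Phi(Omega / sqrt(1 + tau^2))."""
    return std_normal_cdf(omega / math.sqrt(1 + tau ** 2))

def rr_glmm(omega1, tau1, omega2, tau2, link="logit"):
    """Marginal RR_GLMM = E(p1) / E(p2) for the logit or probit link."""
    f = mean_p_logit if link == "logit" else mean_p_probit
    return f(omega1, tau1) / f(omega2, tau2)
```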
Alternatively, one may also apply the function meta.biv in the R package "altmeta" maintained by Lin and Chu [33], in which the 95% CI of RR can be derived by the bootstrap resampling method. Simulation studies In this section, we compare the performance of the reviewed methods on handling meta-analysis with the zero-event studies, including the continuity correction methods and the generalized linear mixed models. Among the existing continuity correction methods, we note that the Haldane and TACC estimators are comparable and among the best when estimating the mean effect size, in contrast to the other two methods, the Carter and Pettigrew estimators. Hence, for the sake of brevity, we only present the results of the Haldane and TACC estimators in the main text but provide the simulation results for all four methods in Appendix 5. Besides the Haldane and TACC estimators, we also consider the newly introduced hybrid estimator and the GLMM with the logit link for further comparison. To conduct the meta-analysis, we consider k=3, 6 and 12 as three different numbers of studies. Also by (3), we let θ=ln(RR) be the mean effect size that ranges from ln(0.2) to ln(5), and then generate the random effects ζi from N(0,τ2) with τ2= 0.25 or 1. Next, we randomly generate ni2 from the log-normal distribution based on the assumption that \({\ln }(n_{i2}) \overset {\text {ind}}{\sim } N(3.35, 1.00)\) [34]. It is also assumed by [34] that the ratios between ni1 and ni2 follow the uniform distribution with values from 0.84 to 2.04. In addition, we generate the event probabilities of the control group pi2 from the uniform distribution with values from 0.01 to min{0.99,1/exp(θ)}. Then accordingly, the event probabilities of the treatment group are given by pi1=exp(θ+ζi)pi2, and any realization with exp(θ+ζi)pi2≥1 is discarded. Finally, we generate Xi1 and Xi2 from the binomial distributions with parameters (ni1,pi1) and (ni2,pi2), respectively.
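The data-generation scheme described above can be sketched as follows (an illustrative toy version under our own naming; the re-generation check for all-zero data mentioned in the text is omitted for brevity):

```python
import math
import random

def simulate_meta(k, theta, tau2, seed=0):
    """Generate one simulated meta-analysis of k studies following the
    scheme in the text: log-normal sample sizes, uniform sample-size
    ratios, uniform control-group risks, and binomial event counts."""
    rng = random.Random(seed)
    studies = []
    while len(studies) < k:
        n2 = max(2, round(math.exp(rng.gauss(3.35, 1.00))))   # ln(n2) ~ N(3.35, 1)
        n1 = max(2, round(n2 * rng.uniform(0.84, 2.04)))      # ratio ~ U(0.84, 2.04)
        p2 = rng.uniform(0.01, min(0.99, 1.0 / math.exp(theta)))
        zeta = rng.gauss(0.0, math.sqrt(tau2))                # study-level random effect
        p1 = math.exp(theta + zeta) * p2
        if p1 >= 1:
            continue                                          # discard, as in the text
        x1 = sum(rng.random() < p1 for _ in range(n1))        # Binomial(n1, p1)
        x2 = sum(rng.random() < p2 for _ in range(n2))        # Binomial(n2, p2)
        studies.append((x1, n1, x2, n2))
    return studies
```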
Note that the data are re-generated if the numbers of events (or of non-events) in both groups are zero. Finally, with N=10,000 repetitions for each setting, we compute the mean squared errors (MSEs) between the estimated RR and the true RR to evaluate the accuracy of the methods. From the top two panels of Fig. 3, it is evident that the three continuity correction methods perform much better than the GLMM in nearly all settings when k is small. Moreover, the hybrid estimator is consistently better than the Haldane and TACC estimators. The middle two panels show that, when k is moderate, the three continuity correction methods still perform better than the GLMM in most settings. Finally, the bottom two panels indicate that the GLMM performs the best in most settings when k is large. To conclude, the accuracy of the different methods depends on the number of studies. In particular, for meta-analysis with few studies, the random-effects model with the hybrid estimator is more reliable for handling the zero-event studies than the other methods; and for meta-analysis with a large number of studies, we recommend the GLMM to handle the random-effects meta-analysis. Comparison of the four methods with k=3, 6 or 12, τ2=0.25 or 1. "1" represents the results of the random-effects model with the Haldane estimator, "2" represents the results of the random-effects model with the TACC estimator, "3" represents the results of the random-effects model with the hybrid estimator, and "4" represents the results of the GLMM. TACC: Treatment arm continuity correction, GLMM: Generalized linear mixed model, MSE: Mean squared error Application to COVID-19 data As mentioned earlier, Chu et al. [8] conducted a systematic review that revealed the connections of physical distancing, face masks, and eye protection with the transmission of SARS, MERS, and COVID-19. It is noteworthy that their analytical results have attracted increasing attention.
As evidence, their paper has received a total of 1236 citations in Google Scholar as of 16 March 2021. In this section, we reanalyze the COVID-19 data and compare the performance of the different methods with or without the double-zero-event studies, including the Haldane estimator, the TACC estimator, the hybrid estimator, and the GLMMs. Note that the treatment group represents a further physical distance and the control group represents a shorter physical distance. As shown in the top panel of Fig. 4, Chu et al. [8] applied the random-effects model with the Haldane estimator and removed the double-zero-event studies from their meta-analysis. The overall effect size of 0.15 with the 95% CI being [0.03,0.73] indicates that the infection risk will be significantly reduced with a further physical distance. The middle panel of Fig. 4 reports that the random-effects model with the TACC estimator yields the overall effect size of 0.12 with the 95% CI being [0.03,0.50]. Moreover, the bottom panel of Fig. 4 shows that the random-effects model with the hybrid estimator yields the overall effect size of 0.13 with the 95% CI being [0.03,0.72]. Note also that the study-specific CIs here are always narrower than the CIs in the top panel, which coincides with the simulation results that the expected lengths of the CI associated with the hybrid estimator are shorter than those of the Haldane estimator. In addition, the GLMM in (14) does not provide the estimates of the study-specific effect sizes, so the results are listed as follows. By the bootstrap resampling with 1000 replicates, the GLMM with the logit link yields the overall effect size of 0.20 with the 95% bootstrap CI being [0.05,0.55]. Also, the GLMM with the probit link yields the overall effect size of 0.18 with the 95% CI being [0.04,0.55]. Meta-analyses of COVID-19 data without the double-zero-event studies by applying the Haldane estimator (top), the TACC estimator (middle), and the hybrid estimator (bottom).
COVID-19: Coronavirus disease 2019, TACC: Treatment arm continuity correction, RR: Relative risk, CI: Confidence interval To reanalyze the COVID-19 data, we now include the double-zero-event studies. The top panel of Fig. 5 shows that the random-effects model with the Haldane estimator yields the overall effect size of 0.22 with 95% CI being [0.06,0.82]. The middle panel of Fig. 5 presents that the random-effects model with the TACC estimator provides the overall effect size of 0.18 with the 95% CI being [0.06,0.57]. For the hybrid estimator, the bottom panel shows that the overall effect size is 0.21 with 95% CI being [0.05, 0.81]. Finally, the GLMM with the logit link provides the overall effect size of 0.29 with the 95% CI being [0.10,0.64], and the GLMM with the probit link provides the overall effect size of 0.28 with the 95% CI being [0.10,0.56]. Meta-analyses of COVID-19 data with the double-zero-event studies by applying the Haldane estimator (top), the TACC estimator (middle), and the hybrid estimator (bottom). COVID-19: Coronavirus disease 2019, TACC: Treatment arm continuity correction, RR: Relative risk, CI: Confidence interval Discussion To handle the zero-event studies in meta-analysis of binary data, researchers often apply the random-effects model with the continuity correction, or instead, the GLMMs. From the simulation results, we note that the performance of the different methods depends on the number of studies. For meta-analysis with few studies, the random-effects model with the continuity correction can perform better than the GLMM, especially with the hybrid continuity correction. We also note that the hybrid continuity correction can yield a reliable confidence interval for a single RR.
Although the continuity correction does show some advantages, it should be used with caution, since an arbitrary correction may lead to a bias or even reverse the result of a meta-analysis, especially when the sample sizes in the two groups are fairly unbalanced [7, 13]. When the number of studies is large, the GLMM is preferable to the random-effects model with the continuity correction. In other words, the performance of the GLMM relies on a sufficient number of studies [35]. As shown in Ju et al. [34], the GLMM also requires enough total events in the two groups, e.g., larger than 10. Besides the models we have compared, it is noteworthy that there are also other models for meta-analysis that can handle the zero-event studies including, for example, the beta-binomial model [36–38]. Most meta-analyses with rare events have a small degree of heterogeneity, and so the common-effect model may be more suitable than the random-effects model [39]. In addition, Li and Rice [40] showed that the fixed-effects model can also provide an accurate CI for meta-analysis of OR with the zero-event studies. It is also noteworthy that the fixed-effects model can serve as a convincing model for meta-analysis with few studies [12, 41–43]. As future work, it can be interesting to investigate the best model for meta-analysis with few studies which include the zero-event studies as well. For the double-zero-event studies in meta-analysis, we have shown by reanalyzing the COVID-19 data that they do impact the estimate of the mean effect size, and so they may not be uninformative. As noted by Friedrich et al. [44], including the double-zero-event studies moves the mean effect size estimate toward the direction of the null hypothesis. If one arbitrarily excluded the informative double-zero-event studies, there would be a risk of overstating the treatment effect such that the conclusion would be less reliable.
As recommended by the literature [7, 13] and the references therein, we suggest including the double-zero-event studies in meta-analysis. Apart from model comparison, the selection of effect sizes has attracted increasing attention in the literature. In particular, there is a recent debate on the choice of RR or OR in clinical epidemiology, in which a number of important properties of RR and OR together with their pros and cons were discussed including, for example, portability and collapsibility [45–47]. In view of this, we have also analyzed the COVID-19 data with OR as the effect size and present the results in Appendix 6, with R code in Appendix 7. To handle the zero-event studies, we apply four methods that have been reviewed in this paper, including Haldane's continuity correction, TACC, the GLMM, and the empirical continuity correction proposed by Sweeting et al. [18]. For more techniques on meta-analysis of OR with the zero-event studies, one may refer to [4, 7, 18, 29, 34] and the references therein. Conclusions In this paper, we revisited the existing methods that are widely used to handle the zero-event problem in meta-analysis of binary data, in particular with RR, also known as the risk ratio, as the effect size. For the methods with the continuity correction, we reviewed four existing estimators of RR and also introduced a new hybrid estimator, with their applications to the random-effects model. The GLMM, a state-of-the-art method without the continuity correction, was also included. Through a comparative simulation study and a real data analysis of COVID-19 data, we found that the random-effects model with the hybrid estimator can serve as a more reliable method for handling the zero-event studies when there are few studies in a meta-analysis, and recommend using the GLMM when the number of studies is large. This paper also provides a useful addition to Chu et al.
[8], and meanwhile, it also calls for further observational studies in this field. Abbreviations OR: Odds ratio; RR: Relative risk; RD: Risk difference; SARS: Severe acute respiratory syndrome; MERS: Middle East respiratory syndrome; CI: Confidence interval; MSE: Mean squared error; GLMMs: Generalized linear mixed models; TACC: Treatment arm continuity correction Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-Analysis. Chichester, UK: John Wiley & Son; 2011. Ma LL, Wang YY, Yang ZH, Huang D, Weng H, Zeng XT. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better? Mil Med Res. 2020; 7:7. Davey J, Turner RM, Clarke MJ, Higgins JPT. Characteristics of meta-analyses and their component studies in the cochrane database of systematic reviews: a cross-sectional, descriptive analysis. BMC Med Res Methodol. 2011; 11:160. Jackson D, Law M, Stijnen T, Viechtbauer W, White IR. A comparison of seven random–effects models for meta-analyses that estimate the summary odds ratio. Stat Med. 2018; 37(7):1059–85. Ren Y, Lin L, Lian Q, Zou H, Chu H. Real-world performance of meta-analysis methods for double-zero-event studies with dichotomous outcomes using the cochrane database of systematic reviews. J Gen Intern Med. 2019; 34(6):960–8. Vandermeer B, Bialy L, Hooton N, Hartling L, Klassen TP, Johnston BC, Wiebe N. Meta-analyses of safety data: a comparison of exact versus asymptotic methods. Stat Methods Med Res. 2009; 18(4):421–32. Kuss O. Statistical methods for meta-analyses including information from studies without any events–add nothing to nothing and succeed nevertheless. Stat Med. 2015; 34(7):1097–116. Chu DK, Akl EA, Duda S, Solo K, Yaacoub S, Schünemann HJ, study authors C-SURGES. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. Lancet. 2020; 395(10242):1973–87.
Jin YH, Cai L, Cheng ZS, Cheng H, Deng T, Fan YP, et al. A rapid advice guideline for the diagnosis and treatment of 2019 novel coronavirus (2019-ncov) infected pneumonia (standard version). Mil Med Res. 2020; 7:4. Jin YH, Zhan QY, Peng ZY, Ren XQ, Yin XT, Cai L, et al. Chemoprophylaxis, diagnosis, treatments, and discharge management of COVID-19: An evidence-based clinical practice guideline (updated version). Mil Med Res. 2020; 7:41. Borenstein M, Hedges LV, Higgins JP, Rothstein HR. A basic introduction to fixed-effect and random-effects models for meta–analysis. Res Synth Methods. 2010; 1(2):97–111. Lin E, Tong T, Chen Y, Wang Y. Fixed-effects model: the most convincing model for meta-analysis with few studies. Preprint at https://arxiv.org/abs/2002.04211. 2020. Xu C, Li L, Lin L, Chu H, Thabane L, Zou K, Sun X. Exclusion of studies with no events in both arms in meta-analysis impacted the conclusions. J Clin Epidemiol. 2020; 123:91–9. Agresti A. Categorical Data Analysis, 2nd Edition. Hoboken: John Wiley & Son; 2003. Haldane JB. The estimation and significance of the logarithm of a ratio of frequencies. Ann Hum Genet. 1956; 20(4):309–11. Schwarzer G. meta: An R package for meta-analysis. R News. 2007; 7:40–5. Weber F, Knapp G, Ickstadt K, Kundt G, Glass Ä. Zero–cell corrections in random–effects meta–analyses. Res Synth Methods. 2020; 11(6):913–9. Sweeting MJ, Sutton AJ, Lambert PC. What to add to nothing? use and avoidance of continuity corrections in meta–analysis of sparse data. Stat Med. 2004; 23(9):1351–75. Carter RE, Lin Y, Lipsitz SR, Newcombe RG, Hermayer KL. Relative risk estimated from the ratio of two median unbiased estimates. J Royal Stat Soc: Ser C Appl Stat. 2010; 59(4):657–71. Chu H, Nie L, Chen Y, Huang Y, Sun W.
Bivariate random effects models for meta-analysis of comparative studies with binary outcomes: methods for the absolute risk difference and relative risk. Stat Methods Med Res. 2012; 21(6):621–33. Chen Y, Hong C, Ning Y, Su X. Meta–analysis of studies with bivariate binary outcomes: a marginal beta–binomial model approach. Stat Med. 2016; 35(1):21–40. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986; 7(3):177–88. Laird NM, Mosteller F. Some statistical methods for combining experimental results. Int J Technol Assess Heal Care. 1990; 6(1):5–30. Balduzzi S, Rücker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019; 22(4):153–60. Pettigrew HM, Gart JJ, Thomas DG. The bias and higher cumulants of the logarithm of a binomial variate. Biometrika. 1986; 73(2):425–35. Hartung J, Knapp G. A refined method for the meta–analysis of controlled clinical trials with binary outcome. Stat Med. 2001; 20(24):3875–89. Fattorini L. Applying the Horvitz-Thompson criterion in complex designs: a computer-intensive perspective for estimating inclusion probabilities. Biometrika. 2006; 93(2):269–78. Seber GAF. Statistical Models for Proportions and Probabilities. Heidelberg: Springer; 2013. Bakbergenuly I, Kulinskaya E. Meta-analysis of binary outcomes via generalized linear mixed models: a simulation study. BMC Med Res Methodol. 2018; 18(1):70. McCullagh P. Sampling bias and logistic models. J R Stat Soc Ser B Stat Methodol. 2008; 70(4):643–77. Zeger SL, Liang KY, Albert PS. Models for longitudinal data: a generalized estimating equation approach. Biometrics. 1988; 44(4):1049–60. Lin L, Chu H. Meta-analysis of proportions using generalized linear mixed models. Epidemiology. 2020; 31(5):713–7. Lin L, Chu H. altmeta: Alternative Meta-Analysis Methods. 2020. https://CRAN.R-project.org/package=altmeta. Ju J, Lin L, Chu H, Cheng LL, Xu C. 
Laplace approximation, penalized quasi-likelihood, and adaptive gauss-hermite quadrature for generalized linear mixed models: towards meta-analysis of binary outcome with sparse data. BMC Med Res Methodol. 2020; 20(1):152. Gronsbell J, Hong C, Nie L, Lu Y, Tian L. Exact inference for the random–effect model for meta–analyses with rare events. Stat Med. 2020; 39(3):252–64. Sarmanov O. Generalized normal correlation and two-dimensional fréchet classes. Sov Math Dokl. 1966; 7:596–9. Chen Y, Luo S, Chu H, Su X, Nie L. An empirical Bayes method for multivariate meta-analysis with an application in clinical trials. Commun Stat Theory Methods. 2014; 43(16):3536–51. Luo S, Chen Y, Su X, Chu H. mmeta: an R package for multivariate meta-analysis. J Stat Softw. 2014; 56(11):11. Jia P, Lin L, Kwong JSW, Xu C. Many meta-analyses of rare events in the cochrane database of systematic reviews were underpowered. J Clin Epidemiol. 2021; 131:113–22. Li QK, Rice K. Improved inference for fixed–effects meta–analysis of 2×2 tables. Res Synth Methods. 2020; 11(3):387–96. Bender R, Friede T, Koch A, Kuss O, Schlattmann P, Schwarzer G, Skipka G. Methods for evidence synthesis in the case of very few studies. Res Synth Methods. 2018; 9(3):382–92. Rice K, Higgins JP, Lumley T. A re–evaluation of fixed effect(s) meta–analysis. J R Stat Soc Ser A Stat Methodol. 2018; 181(1):205–27. Yang K, Kwan HY, Yu Z, Tong T. Model selection between the fixed-effects model and the random-effects model in meta-analysis. Stat Interface. 2020; 13(4):501–10. Friedrich JO, Adhikari NK, Beyene J. Inclusion of zero total event trials in meta-analyses maintains analytic consistency and incorporates all available data. BMC Med Res Methodol. 2007; 7:5. Doi SA, Furuya-Kanamori L, Xu C, Lin L, Chivese T, Thalib L. Questionable utility of the relative risk in clinical research: a call for change to practice. J Clin Epidemiol. 2020. https://doi.org/10.1016/j.jclinepi.2020.08.019. 
Xiao M, Chen Y, Cole SR, MacLehose R, Richardson D, Chu H. Is OR "portable" in meta-analysis? Time to consider bivariate generalized linear mixed model. 2020. Preprint at https://www.medrxiv.org/content/10.1101/2020.11.05.20226811v1. Doi SA, Furuya-Kanamori L, Xu C, Chivese T, Lin L, Musa OA, Hindy G, Thalib L, Harrell Jr FE. The OR is "portable" but not the RR: time to do away with the log link in binomial regression. J Clin Epidemiol. 2021. https://doi.org/10.13140/RG.2.2.31631.10407. The authors sincerely thank the Editor, Associate Editor, and two anonymous reviewers for their insightful comments and suggestions. This study was supported by grants awarded to Tie-Jun Tong from the General Research Fund (HKBU12303918), the National Natural Science Foundation of China (1207010822), and the Initiation Grants for Faculty Niche Research Areas (RC-IG-FNRA/17-18/13, RC-FNRA-IG/20-21/SCI/03) of Hong Kong Baptist University. Department of Mathematics, Hong Kong Baptist University, Hong Kong, China Jia-Jin Wei, Jian-Dong Shi, Ke Yang & Tie-Jun Tong Shenzhen Research Institute of Big Data, Shenzhen, China En-Xuan Lin College of Mathematics and Statistics, Shenzhen University, Shenzhen, China Zong-Liang Hu Center for Evidence-Based and Translational Medicine, Zhongnan Hospital of Wuhan University, Wuhan, China Xian-Tao Zeng Jia-Jin Wei Jian-Dong Shi Ke Yang Tie-Jun Tong TJT, JJW, EXL, and XTZ reviewed the literature and designed the methods. TJT, JJW, and JDS conducted the simulation studies. TJT, JJW, KY, and ZLH conducted the experiments and analyzed the real data. All authors contributed to the manuscript preparation. All authors read and approved the final manuscript. Correspondence to Tie-Jun Tong. 
Supplementary information: Appendix Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Wei, JJ., Lin, EX., Shi, JD. et al. Meta-analysis with zero-event studies: a comparative study with application to COVID-19 data. Military Med Res 8, 41 (2021). https://doi.org/10.1186/s40779-021-00331-6 Keywords: Continuity correction; Coronavirus disease 2019 data; Zero-event studies
Generalized Hamming weights of toric codes over hypersimplices and squarefree affine evaluation codes
Nupur Patanker and Sanjay Kumar Singh
Indian Institute of Science Education and Research, Bhopal, India
* Corresponding author: Nupur Patanker
Received December 2020 Revised March 2021 Early access May 2021
Let $ \mathbb{F}_{q} $ be a finite field with $ q $ elements, where $ q $ is a power of a prime $ p $. A polynomial over $ \mathbb{F}_{q} $ is monomially squarefree if all its monomials are squarefree. In this paper, we determine an upper bound on the number of common zeroes of any set of $ r $ linearly independent monomially squarefree polynomials of $ \mathbb{F}_{q}[t_{1}, t_{2}, \dots, t_{s}] $ in the affine torus $ T = (\mathbb{F}_{q}^{*})^{s} $ under certain conditions on $ r $, $ s $ and the degree of these polynomials. Applying the results, we obtain the generalized Hamming weights of toric codes over hypersimplices and squarefree affine evaluation codes, as defined in [14].
Keywords: Affine torus, projective torus, generalized Hamming weights, affine Hilbert function.
Mathematics Subject Classification: Primary: 13P25; Secondary: 14G50, 94B27, 11T71, 06A07.
Citation: Nupur Patanker, Sanjay Kumar Singh. Generalized Hamming weights of toric codes over hypersimplices and squarefree affine evaluation codes. Advances in Mathematics of Communications, doi: 10.3934/amc.2021013
P. Beelen and M. Datta, Generalized Hamming weights of affine Cartesian codes, Finite Fields Appl., 51 (2018), 130-145. doi: 10.1016/j.ffa.2018.01.006.
M. Bras-Amorós and M. E. O'Sullivan, Duality for some families of correction capability optimized evaluation codes, Adv. Math. Commun., 2 (2008), 15-33. doi: 10.3934/amc.2008.2.15.
D. Cox, J. Little and D. O'Shea, Ideals, Varieties, and Algorithms.
An Introduction to Computational Algebraic Geometry and Commutative Algebra, 3$^{rd}$ edition, Undergraduate Texts in Mathematics, Springer, New York, 2007. doi: 10.1007/978-0-387-35651-8.
C. Galindo, O. Geil, F. Hernando and D. Ruano, On the distance of stabilizer quantum codes from J-affine variety codes, Quantum Inf. Process., 16 (2017), Paper No. 111, 32 pp. doi: 10.1007/s11128-017-1559-1.
C. Galindo and F. Hernando, Quantum codes from affine variety codes and their subfield-subcodes, Des. Codes Cryptogr., 76 (2015), 89-100. doi: 10.1007/s10623-014-0016-8.
C. Galindo, F. Hernando and D. Ruano, Stabilizer quantum codes from J-affine variety codes and a new Steane-like enlargement, Quantum Inf. Process., 14 (2015), 3211-3231. doi: 10.1007/s11128-015-1057-2.
O. Geil, Evaluation codes from an affine variety code perspective, in Advances in Algebraic Geometry Codes, Ser. Coding Theory Cryptol., 5, World Sci. Publ., Hackensack, NJ, 2008, 153–180. doi: 10.1142/9789812794017_0004.
M. González-Sarabia, E. Camps, E. Sarmiento and R. H. Villarreal, The second generalized Hamming weight of some evaluation codes arising from a projective torus, Finite Fields Appl., 52 (2018), 370-394. doi: 10.1016/j.ffa.2018.05.002.
M. González-Sarabia, J. Martínez-Bernal, R. H. Villarreal and C. E. Vivares, Generalized minimum distance functions, J. Algebraic Combin., 50 (2019), 317-346. doi: 10.1007/s10801-018-0855-x.
J. P. Hansen, Toric surfaces and error-correcting codes, in Coding Theory, Cryptography and Related Areas (Guanajuato, 1998), Springer, Berlin, 2000, 132–142.
J. P. Hansen, Toric varieties Hirzebruch surfaces and error-correcting codes, Appl. Algebra Engrg. Comm. Comput., 13 (2002), 289-300. doi: 10.1007/s00200-002-0106-0.
P. Heijnen and R. Pellikaan, Generalized Hamming weights of $q$-ary Reed-Muller codes, IEEE Trans. Inform. Theory, 44 (1998), 181-196. doi: 10.1109/18.651015.
T. Helleseth, T. Kløve and J. Mykkeltveit, The weight distribution of irreducible cyclic codes with block length $n_{1}((q^l-1)/N)$, Discrete Math., 18 (1977), 179-211. doi: 10.1016/0012-365X(77)90078-4.
D. Jaramillo, M. V. Pinto and R. H. Villarreal, Evaluation codes and their basic parameters, Des. Codes Cryptogr., 89 (2021), 269-300. doi: 10.1007/s10623-020-00818-8.
D. Joyner, Toric codes over finite fields, Appl. Algebra Engrg. Comm. Comput., 15 (2004), 63-79. doi: 10.1007/s00200-004-0152-x.
T. Kløve, The weight distribution of linear codes over $GF(q^l)$ having generator matrix over $GF(q)$, Discrete Math., 23 (1978), 159-168. doi: 10.1016/0012-365X(78)90114-0.
J. Little and H. Schenck, Toric surface codes and Minkowski sums, SIAM J. Discrete Math., 20 (2006), 999-1014. doi: 10.1137/050637054.
J. Little and R. Schwarz, On toric codes and multivariate Vandermonde matrices, Appl. Algebra Engrg. Comm. Comput., 18 (2007), 349-367. doi: 10.1007/s00200-007-0041-1.
Z. Nie and A. Y. Wang, Hilbert functions and the finite degree Zariski closure in finite field combinatorial geometry, J. Combin. Theory Ser. A, 134 (2015), 196-220. doi: 10.1016/j.jcta.2015.03.011.
C. Rentería-Márquez, A. Simis and R. H. Villarreal, Algebraic methods for parameterized codes and invariants of vanishing ideals over finite fields, Finite Fields Appl., 17 (2011), 81-104. doi: 10.1016/j.ffa.2010.09.007.
D. Ruano, On the parameters of r-dimensional toric codes, Finite Fields Appl., 13 (2007), 962-976. doi: 10.1016/j.ffa.2007.02.002.
D. Ruano, On the structure of generalized toric codes, J. Symbolic Comput., 44 (2009), 499-506. doi: 10.1016/j.jsc.2007.07.018.
E. Sarmiento, M. V. Pinto and R. H. Villarreal, The minimum distance of parameterized codes on projective tori, Appl. Algebra Engrg. Comm. Comput., 22 (2011), 249-264. doi: 10.1007/s00200-011-0148-2.
I. Soprunov and J. Soprunova, Toric surface codes and Minkowski length of polygons, SIAM J. Discrete Math., 23 (2008/09), 384-400. doi: 10.1137/080716554.
I. Soprunov and J. Soprunova, Bringing toric codes to the next dimension, SIAM J. Discrete Math., 24 (2010), 655-665. doi: 10.1137/090762592.
V. G. Umaña and M. Velasco, Dual toric codes and polytopes of degree one, SIAM J. Discrete Math., 29 (2015), 683-692. doi: 10.1137/140966228.
V. K. Wei, Generalized Hamming weights for linear codes, IEEE Trans. Inform. Theory, 37 (1991), 1412-1418. doi: 10.1109/18.133259.
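As a toy illustration of the counting problem in the abstract above, the common zeroes of monomially squarefree polynomials on the affine torus $ T = (\mathbb{F}_{q}^{*})^{s} $ can be enumerated by brute force for tiny parameters. The polynomials and the choice $ q=5 $, $ s=2 $ below are made up for illustration and are not taken from the paper:

```python
from itertools import product

# Brute-force count of common zeros of monomially squarefree polynomials
# on the affine torus T = (F_q^*)^s.  Toy parameters: q = 5, s = 2.
q, s = 5, 2

def evaluate(poly, point):
    """Evaluate a polynomial, given as {exponent tuple: coefficient}, mod q."""
    total = 0
    for exps, coeff in poly.items():
        term = coeff
        for x, e in zip(point, exps):
            term *= pow(x, e, q)
        total += term
    return total % q

# Two monomially squarefree polynomials in F_5[t1, t2] (all exponents are 0/1):
#   f1 = t1*t2 + t1 = t1*(t2 + 1),   f2 = t1 + t2
f1 = {(1, 1): 1, (1, 0): 1}
f2 = {(1, 0): 1, (0, 1): 1}

torus = list(product(range(1, q), repeat=s))   # the 16 points of (F_5^*)^2
common_zeros = [p for p in torus
                if evaluate(f1, p) == 0 and evaluate(f2, p) == 0]
# On the torus t1 != 0, so f1 = 0 forces t2 = 4, and then f2 = 0 forces t1 = 1.
print(len(torus), len(common_zeros))
```

The paper's bounds concern exactly this count, but for general $ r $, $ s $ and degrees, where enumeration is infeasible.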
December 2013, Volume 56, Issue 3, pp 507–530
An efficient augmented Lagrangian method with applications to total variation minimization
Chengbo Li Wotao Yin Hong Jiang Yin Zhang
First Online: 03 July 2013
Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each iteration. We establish convergence for this algorithm, and apply it to solving problems in image reconstruction with total variation regularization. We present numerical results showing that the resulting solver, called TVAL3, is competitive with, and often outperforms, other state-of-the-art solvers in the field.
Compressive sensing Non-smooth optimization Augmented Lagrangian method Nonmonotone line search Barzilai-Borwein method Single-pixel camera
The work of the first author was supported in part by NSF Grant DMS-0811188. The work of the second author was supported in part by NSF grants DMS-07-48839 and ECCS-1028790, as well as ONR Grant N00014-08-1-1101. The work of the fourth author was supported in part by NSF Grant DMS-0811188, ONR Grant N00014-08-1-1101, and NSF Grant DMS-1115950. The first and the fourth authors also appreciate a gift fund from Bell Labs, Alcatel-Lucent to Rice University that partially supported their travels to international conferences. Last but not least, we thank the two anonymous referees for their constructive criticism and their helpful suggestions.
Appendix: Proof of Theorem 1
For notational simplicity, let us define $$ \phi_k(\cdot) = \phi(\cdot, y_k) \quad \mbox{and}\quad\nabla\phi_k(\cdot) = \partial_1 \phi( \cdot, y_k). $$ The proof of the theorem relies on two lemmas.
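Before the lemmas, the nonmonotone Armijo rule analyzed in this appendix (accept a step when \(\phi_k(x_k+\alpha_k d_k)\le C_k+\delta\alpha_k\nabla\phi_k(x_k)^T d_k\), with the reference value \(C_k\) and weight \(Q_k\) updated as in Algorithm-NADA) can be sketched in a few lines. This is a simplified illustration on a toy smooth objective, not the TVAL3 implementation; the parameter values are assumptions:

```python
import numpy as np

# Sketch of nonmonotone Armijo backtracking with the weighted reference
# value C_k (condition (11)) and the Q_k update of Algorithm-NADA.
# Toy smooth objective; delta, rho, eta are made-up values.
def phi(x):
    return 0.5 * x.dot(x)

def grad(x):
    return x

delta, rho, eta = 1e-4, 0.5, 0.85
x = np.array([2.0, -1.0])
C, Q = phi(x), 1.0                       # C_0 = phi(x_0), Q_0 = 1
for _ in range(20):
    g = grad(x)
    d = -g                               # descent direction: g.dot(d) <= 0
    alpha = 1.0
    while phi(x + alpha * d) > C + delta * alpha * g.dot(d):
        alpha *= rho                     # backtrack until (11) holds
    x = x + alpha * d
    Q_next = eta * Q + 1.0               # Q_{k+1} = eta_k Q_k + 1
    C = (eta * Q * C + phi(x)) / Q_next  # C_{k+1}: weighted average of values
    Q = Q_next
print(phi(x))
```

On this strongly convex toy problem the full step is always accepted; the point of the sketch is the \(C_k\), \(Q_k\) bookkeeping that the lemmas below analyze.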
The two lemmas are modifications of their counterparts in [40]. Since our objective may contain a non-differentiable part, the key modification is to connect this non-differentiable part to the differentiable part by means of alternating minimization. Otherwise, the line of proofs follows closely that given in [40]. The first lemma presents some basic properties and establishes that the algorithm is well-defined.

Lemma 1 If \(\nabla\phi_k(x_k)^T d_k\le 0\) holds for each \(k\), then for the sequences generated by Algorithm-NADA, we have \(\phi_k(x_k)\le\phi_{k-1}(x_k)\le C_k\) for each \(k\) and \(\{C_k\}\) is monotonically non-increasing. Moreover, if \(\nabla\phi_k(x_k)^T d_k<0\), a step length \(\alpha_k>0\) always exists so that the nonmonotone Armijo condition (11) holds.

Proof Define the real-valued function $$D_k(t) = \frac{tC_{k-1} + \phi_{k-1}(x_k)}{t+1} \quad\mbox{for}\ t \ge0, $$ so that $$D_k'(t) = \frac{C_{k-1} - \phi_{k-1}(x_k)}{(t+1)^2} \quad\mbox{for}\ t \ge 0. $$ Due to the nonmonotone Armijo condition (11) and \(\nabla\phi_{k-1}(x_{k-1})^T d_{k-1}\le 0\), we have $$C_{k-1} - \phi_{k-1}(x_k)\geq-\delta \alpha_{k-1} \nabla\phi_{k-1}(x_{k-1})^T d_{k-1} \geq0. $$ Therefore, \(D_{k}'(t) \ge0\) holds for any \(t\ge 0\), and so \(D_k\) is non-decreasing. Since $$D_k(0) = \phi_{k-1}(x_k) \quad\mbox{and}\quad D_k(\eta_{k-1} Q_{k-1}) = C_k, $$ it follows that $$\phi_{k-1}(x_k) \le C_k, \quad\forall k. $$ As defined in Algorithm-NADA, $$y_k = \operatorname {argmin}_y \phi(x_k,y), $$ so that $$\phi(x_k, y_k) \leq\phi(x_{k}, y_{k-1}). $$ Hence, \(\phi_k(x_k)\le\phi_{k-1}(x_k)\le C_k\) holds for any \(k\). Furthermore, $$C_{k+1} = \frac{\eta_k Q_k C_k + \phi_k(x_{k+1})}{Q_{k+1}} \le\frac {\eta_k Q_k C_k + C_{k+1}}{Q_{k+1}} , $$ so that $$(\eta_k Q_k +1)C_{k+1} \le\eta_k Q_k C_k + C_{k+1}, $$ that is, $$C_{k+1}\le C_k. $$ Thus, \(\{C_k\}\) is monotonically non-increasing. If \(C_k\) is replaced by \(\phi_k(x_k)\) in (11), the nonmonotone Armijo condition becomes the standard Armijo condition. It is well known that \(\alpha_k>0\) exists for the standard Armijo condition whenever \(\nabla\phi_k(x_k)^T d_k<0\) and \(\phi\) is bounded below. Since \(\phi_k(x_k)\le C_k\), it follows that \(\alpha_k>0\) exists as well for the nonmonotone Armijo condition: $$\begin{aligned} \phi_k(x_k+\alpha_k d_k) \le C_k + \delta\alpha_k \nabla \phi_k(x_k)^T d_k. \end{aligned}$$ Now we define the quantity \(A_k\) by $$\begin{aligned} A_k = \frac{1}{k+1} \sum _{i=0}^k \phi_i(x_i). \end{aligned}$$ By induction, it is easy to show that \(C_k\) is bounded above by \(A_k\). Together with the facts that \(C_k\) is also bounded below by \(\phi_k(x_k)\) and that \(\alpha_k>0\) always exists, it is clear that Algorithm-NADA is well-defined. □

In the next lemma, a lower bound for the step length generated by Algorithm-NADA is given.

Lemma 2 Assume that \(\nabla\phi_k(x_k)^T d_k\le 0\) for all \(k\) and that the Lipschitz condition (19) holds with constant \(L\). Then $$\begin{aligned} \alpha_k \geq\min\biggl\{ {\alpha_{\max} \over\rho}, {2(1-\delta)\over L\rho} {|\nabla\phi_k(x_k)^T d_k|\over\|d_k\| ^2} \biggr\}. \end{aligned}$$ The proof is omitted here since the proof of Lemma 2.1 in [40] is directly applicable.

With the aid of the lower bound (24), we are now ready to prove Theorem 1. We need to establish the two relationships given in (20). First, by the definition in Algorithm-NADA, \(y_k\) minimizes \(\phi(x_k,\cdot)\). Hence, it always holds true that $$0 \in\partial_2 \phi(x_k,y_k). $$ Now it suffices to show that the limit holds true in (20). Consider the nonmonotone Armijo condition (11). If \(\rho\alpha_k <\alpha_{\max}\), then in view of the lower bound (24) on \(\alpha_k\) in Lemma 2 and the direction assumption (18), $$\begin{aligned} \phi_k(x_k+\alpha_k d_k) \le& C_k - \delta\frac{2(1-\delta)}{ L\rho} \frac{|\nabla\phi_k(x_k)^T d_k|^2}{\| d_k \| ^2} \\ \le& C_k - \frac{2 \delta(1-\delta)}{L\rho} \frac{c_1^2 \|\nabla\phi _k(x_k)\|^4}{c_2^2 \| \nabla\phi_k(x_k) \| ^2} \\ =& C_k - \biggl[ \frac{2\delta(1-\delta)c_1^2}{L\rho c_2^2} \biggr] \big\| \nabla \phi_k(x_k) \big\| ^2. \end{aligned}$$ On the other hand, if \(\rho\alpha_k\ge\alpha_{\max}\), the lower bound (24), together with the direction assumption (18), gives $$\begin{aligned} \phi_k(x_k+\alpha_k d_k) \le& C_k + \delta\alpha_k \nabla\phi_k(x_k)^T d_k \\ \le& C_k - \delta\alpha_k c_1 \big\| \nabla \phi_k(x_k) \big\| ^2 \\ \le& C_k - \frac{\delta\alpha_{\max} c_1}{\rho} \big\| \nabla\phi_k(x_k) \big\| ^2. \end{aligned}$$ Introducing the constant $$\tilde{\tau} = \min\biggl\{ \frac{2\delta(1-\delta)c_1^2}{L \rho c_2^2}, \frac{\delta\alpha_{\max} c_1}{\rho} \biggr\}, $$ we can combine the above inequalities into $$\begin{aligned} \phi_k(x_k+ \alpha_k d_k) \le C_k - \tilde{\tau} \big\| \nabla \phi_k(x_k) \big\| ^2. \end{aligned}$$ Next we show by induction that for all \(k\) $$ \frac{1}{Q_k} \ge1-\eta_{\max}, $$ which obviously holds for \(k=0\) given that \(Q_0=1\). Assume that (27) holds for \(k=j\). Then $$\begin{aligned} Q_{j+1} = \eta_j Q_j +1 \le \frac{\eta_j}{1-\eta_{\max}} +1 \le\frac{\eta_{\max}}{1-\eta_{\max}} +1 = \frac{1}{1-\eta_{\max}}, \end{aligned}$$ implying that (27) also holds for \(k=j+1\). Hence, (27) holds for all \(k\). It follows from (26) and (27) that $$\begin{aligned} C_k - C_{k+1} =& C_k - {\eta_k Q_k C_k + \phi_k(x_{k+1})\over Q_{k+1}} \\ =& {C_k(\eta_k Q_k + 1) - (\eta_k Q_k C_k + \phi_k(x_{k+1})) \over Q_{k+1} } \\ =& {C_k - \phi_k(x_{k+1}) \over Q_{k+1} } \\ \ge& \frac{\tilde{\tau} \|\nabla\phi_k(x_k)\|^2}{Q_{k+1}} \\ \ge& \tilde{\tau} (1-\eta_{\max}) \big\|\nabla\phi_k(x_k) \big\| ^2. \end{aligned}$$ Since \(\phi\) is bounded below by assumption, \(\{C_k\}\) is also bounded below. In addition, by Lemma 1, \(\{C_k\}\) is monotonically non-increasing, hence convergent. Therefore, the left-hand side of (28) tends to zero, and so does the right-hand side; i.e., \(\|\nabla\phi_k(x_k)\|\to 0\). Finally, by definition (22), $$\lim_{k\rightarrow\infty} \partial_1 \phi(x_k, y_k) = 0, $$ which completes the proof. □

Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal.
8, 141–148 (1988)
Becker, S., Bobin, J., Candès, E.: NESTA: a fast and accurate first-order method for sparse recovery. SIAM J. Imaging Sci. 4, 1–39 (2011)
Bioucas-Dias, J., Figueiredo, M.: A new TwIST: two-step iterative thresholding algorithm for image restoration. IEEE Trans. Image Process. 16(12), 2992–3004 (2007)
Bioucas-Dias, J., Figueiredo, M.: Two-step algorithms for linear inverse problems with non-quadratic regularization. In: IEEE International Conference on Image Processing—ICIP 2007, San Antonio, TX, USA, September 2007
Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001)
Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
Candès, E., Tao, T.: Near optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006)
Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004)
Chan, T., Wong, C.K.: Total variation blind deconvolution. IEEE Trans. Image Process. 7(3), 370–375 (1998)
Chang, T., He, L., Fang, T.: MR image reconstruction from sparse radial samples using Bregman iteration. In: ISMRM (2006)
Donoho, D.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
Donoho, D.: Neighborly polytopes and sparse solution of underdetermined linear equations. IEEE Trans. Inf. Theory (2006)
Duarte, M.F., Sarvotham, S., Baron, D., Wakin, M.B., Baraniuk, R.G.: Distributed compressed sensing of jointly sparse signals. In: 39th Asilomar Conference on Signals, Systems and Computers, pp. 1537–1541 (2005)
Fortin, M., Glowinski, R.: Méthodes de Lagrangien Augmenté. Application à la Résolution Numérique de Problèmes aux Limites. Dunod-Bordas, Paris (1982) (in French)
Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Appl. Math. 2, 17–40 (1976)
Glowinski, R.: Numerical Methods for Nonlinear Variational Problems. Springer, Berlin (1984)
Glowinski, R., Marrocco, A.: Sur l'approximation par éléments finis d'ordre un et la résolution par pénalisation-dualité d'une classe de problèmes de Dirichlet nonlinéaires. C. R. Math. Acad. Sci. Paris 278A, 1649–1652 (1974) (in French)
Goldstein, T., Osher, S.: The split Bregman method for L1 regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM J. Numer. Anal. 23, 707–716 (1986)
Hager, W.W., Phan, D.T., Zhang, H.: Gradient-based methods for sparse recovery. SIAM J. Imaging Sci. 4, 146–165 (2011)
Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)
Jiang, H., Li, C., Haimi-Cohen, R., Wilford, P., Zhang, Y.: Scalable video coding using compressive sensing. Bell Labs Tech. J. 16, 149–169 (2012)
Laska, J., Kirolos, S., Duarte, M., Ragheb, T., Baraniuk, R., Massoud, Y.: Theory and implementation of an analog-to-information converter using random demodulation. In: Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), New Orleans, Louisiana (2007)
Li, C., Jiang, H., Wilford, P., Zhang, Y.: Video coding using compressive sensing for wireless communications. In: IEEE Wireless Communications and Networking Conference (WCNC), pp. 2077–2082 (2011). doi: 10.1109/WCNC.2011.5779474
Li, C., Sun, T., Kelly, K., Zhang, Y.: A compressive sensing and unmixing scheme for hyperspectral data processing. IEEE Trans. Image Process. 21, 1200–1210 (2012)
Li, C., Zhang, Y., Yin, W.: http://www.caam.rice.edu/~optimization/L1/TVAL3/
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 227–234 (1995)
Nesterov, Yu.: Smooth minimization of non-smooth functions. Math. Program., Ser. A 103, 127–152 (2005)
Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W.: An iterated regularization method for total variation based image restoration. Multiscale Model. Simul. 4, 460–489 (2005)
Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, London (1969)
Rockafellar, R.T.: The multiplier method of Hestenes and Powell applied to convex programming. J. Optim. Theory Appl. 12(6), 555–562 (1973)
Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 259–268 (1992)
Sun, T., Woods, G.L., Duarte, M.F., Kelly, K.F., Li, C., Zhang, Y.: OBIC measurements without lasers or raster-scanning based on compressive sensing. In: Proceedings of the 35th International Symposium for Testing and Failure Analysis (2009)
Takhar, D., Laska, J.N., Wakin, M.B., Duarte, M.F., Baron, D., Sarvotham, S., Kelly, K.F., Baraniuk, R.G.: A new compressive imaging camera architecture using optical-domain compression. In: Computational Imaging IV, vol. 6065, pp. 43–52 (2006)
Wang, Y., Yang, J., Yin, W., Zhang, Y.: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 1(4), 248–272 (2008)
Yang, J., Yin, W., Zhang, Y.: A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. Technical Report, TR08-27, CAAM, Rice University (2008)
Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1, 143–168 (2008)
Yin, W., Morgan, S., Yang, J., Zhang, Y.: Practical compressive sensing with Toeplitz and circulant matrices. In: Proceedings of Visual Communications and Image Processing (VCIP) (2010)
Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14, 1043–1056 (2004)
Xiao, Y., Song, H.: An inexact alternating directions algorithm for constrained total variation regularized compressive sensing problems. J. Math. Imaging Vis. 44, 114–127 (2012)
© Springer Science+Business Media New York 2013
1. Department of Computational and Applied Mathematics, Rice University, Houston, USA
2. Bell Laboratories, Alcatel-Lucent, Murray Hill, USA
Li, C., Yin, W., Jiang, H. et al. Comput Optim Appl (2013) 56: 507. https://doi.org/10.1007/s10589-013-9576-1 Received 19 August 2012 First Online 03 July 2013
Stochastic subgradient method converges at the rate \(O(k^{-1/4})\) on weakly convex functions
Damek Davis and Dmitriy Drusvyatskiy. Apr 2, 2018.
In this blog, we discuss our recent paper, (Davis and Drusvyatskiy, 2018). This work proves that the proximal stochastic subgradient method converges at a rate \(O(k^{-1/4})\) on weakly convex problems. In particular, it resolves the long-standing open question on the rate of convergence of the proximal stochastic gradient method (without batching) for minimizing a sum of a smooth function and a proximable convex function. Stochastic optimization is a fundamental task in the statistical sciences, underlying all aspects of learning from data. The goal of stochastic optimization in data science is to learn a decision rule from a limited data sample, which generalizes well to the entire population. Learning such a decision rule amounts to minimizing the population risk: \[\begin{align}\label{eqn:SO} \min_{x \in \mathbb{R}^d}~ \mathbb{E}_{\xi\sim P}[f(x,\xi)].\tag{$\mathcal{SO}$} \end{align}\] Here, \(\xi\) encodes the population data, which is assumed to follow some fixed but unknown probability distribution \(P\), and the function \(f(x,\xi)\) evaluates the loss of the decision rule parametrized by \(x\) on a data point \(\xi\). Robbins-Monro's pioneering 1951 work gave the first procedure for solving (\ref{eqn:SO}) when \(f(\cdot, \xi)\) are smooth and strongly convex, inspiring decades of further research. Among such algorithms, the stochastic (sub)gradient method is the most successful and widely used in practice.
This method constructs a sequence of approximations \(x_t\) of the minimizer of (\ref{eqn:SO}) by traveling in the direction negative to a sample gradient: \[\begin{equation*}\label{eqn:SG} \textrm{Sample } \xi_t \sim P \\ \textrm{Set } x_{t+1}= x_t - \alpha_t \nabla_x f(x_t, \xi_t),\tag{$\mathcal{SG}$} \end{equation*}\] where \(\alpha_t>0\) is an appropriately chosen control sequence. Nonsmooth convex problems may be similarly optimized by replacing sample gradients by sample subgradients \(v_t\in \partial_x f(x_t,\xi_t)\), where \(\partial_x f(x_t, \xi_t)\) is the subdifferential in the sense of convex analysis; see for example, Part V in (Rockafellar, 1970). Performance of stochastic optimization methods is best judged by their sample complexity – the number of i.i.d. realizations \(\xi_1, \ldots, \xi_N \sim P\) needed to reach a desired accuracy of the decision rule. Classical results, such as those of (Nemirovsky and Yudin, 1983), stipulate that for convex problems, it suffices to generate \(O(\varepsilon^{-2})\) samples to reach functional accuracy \(\varepsilon\) in expectation, and this complexity is unimprovable without making stronger assumptions. While the sample complexity of the stochastic subgradient method is well-understood for convex problems, much less is known for nonconvex problems. In particular, the sample complexity of the method is not yet known for any reasonably wide class of problems beyond those that are smooth or convex. This is somewhat concerning, as the stochastic subgradient method is the simplest and most widely used stochastic optimization algorithm for large-scale problems arising in machine learning and is the core optimization subroutine in industry-backed software libraries, such as Google's TensorFlow.
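The update (\(\mathcal{SG}\)) is only a few lines of code. The sketch below runs it on a synthetic least-squares population risk; the data model and stepsize are assumptions made for illustration, not taken from the paper:

```python
import numpy as np

# Minimal run of the stochastic gradient update x_{t+1} = x_t - a_t * grad on
# a synthetic least-squares risk E[(xi^T x - y)^2 / 2]; the data model,
# stepsize, and iteration count are made-up illustration choices.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])      # ground-truth decision rule

x = np.zeros(3)
alpha = 0.05                             # constant stepsize (an assumption)
for t in range(3000):
    xi = rng.normal(size=3)              # sample a data point xi ~ P
    y = xi.dot(x_true)                   # noiseless label for the toy model
    g = (xi.dot(x) - y) * xi             # sample gradient of f(x, xi)
    x -= alpha * g
print(np.linalg.norm(x - x_true))        # the error norm shrinks toward 0
```

With noiseless labels the iterates contract geometrically; the blog's subject is the far harder regime where \(f(\cdot,\xi)\) is nonsmooth and nonconvex.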
In the recent paper (Davis and Drusvyatskiy, 2018), we aim to close the gap between theory and practice and provide the first sample complexity bounds for the stochastic subgradient method applied to a large class of nonsmooth and nonconvex optimization problems. The problem class we consider captures a variety of important computational tasks in data science, as described below. Our guarantees apply to an even broader setting than population risk minimization (\ref{eqn:SO}). Indeed, numerous tasks in machine learning and high dimensional statistics yield problems of the form \[\begin{equation}\label{eqn:gen_err} \min_{x\in\mathbb{R}^d}~ \varphi(x)=g(x)+r(x),\tag{$\mathcal{P}$} \end{equation}\] where the functional components \(g\) and \(r\) play qualitatively different roles. The function \(g\colon\mathbb{R}^d\to\mathbb{R}\) plays a similar role to the population risk in (\ref{eqn:SO}). We will assume that the only access to \(g\) is through stochastic estimates of its (generalized) gradients. That is, given a point \(x\), one can generate a random vector \(v\in\mathbb{R}^d\) satisfying \(\mathbb{E}[v]\in \partial g(x)\). A formal definition of the nonconvex subdifferential \(\partial g(x)\) is standard in the optimization literature; see Definition 8.3 in (Rockafellar and Wets, 1998). The exact details will not be important for the blog. We note however that when \(g\) is differentiable at \(x\), the subdifferential \(\partial g(x)\) consists only of the gradient \(\nabla g(x)\), while for convex functions, it reduces to the subdifferential in the sense of convex analysis. In contrast, we assume the function \(r\colon\mathbb{R}^d\to\mathbb{R}\cup\{+\infty\}\) to be explicitly known and simple. It is often used to model constraints on the parameters \(x\) or to encourage \(x\) to have some low dimensional structure, such as sparsity or low rank. Within a Bayesian framework, the regularizer \(r\) can model prior distributional information on \(x\). 
One common assumption, which we also make here, is that \(r\) is closed and convex and admits a computable proximal map \[\begin{equation*} {\rm prox}_{\alpha r}(x):= \underset{y}{\operatorname{argmin}}\, \left\{r(y)+\tfrac{1}{2\alpha}\|y-x\|^2\right\}. \end{equation*}\] In particular, when \(r\) is an indicator function of a closed convex set – meaning it equals zero on it and is \(+\infty\) off it – the proximal map \({\rm prox}_{\alpha r}(\cdot)\) reduces to the nearest-point projection. The most widely used algorithm for (\ref{eqn:gen_err}) is a direct generalization of (\ref{eqn:SG}), called the proximal stochastic subgradient method. Given a current iterate \(x_t\), the method performs the update \[\begin{equation*}\left\{ \begin{aligned} &\textrm{Generate a stochastic subgradient } v_t\in\mathbb{R}^d \textrm{ of } g \textrm{ at } x_t\\ & \textrm{Set } x_{t+1}={\rm prox}_{\alpha_t r}\left(x_{t} - \alpha_t v_t\right) \end{aligned}\right\}, \end{equation*}\] where \(\alpha_t>0\) is an appropriately chosen control sequence. The search for stationary points Convex optimization algorithms are judged by the rate at which they decrease the function value along the iterate sequence. Analysis of smooth optimization algorithms focuses instead on the magnitude of the gradients along the iterates. The situation becomes quite different for problems that are neither smooth nor convex. The primary goal, akin to smooth minimization, is the search for stationary points. A point \(x\in\mathbb{R}^d\) is called stationary for the problem (\ref{eqn:gen_err}) if the inclusion \(0\in \partial \varphi(x)\) holds. In "primal terms", these are precisely the points where the directional derivative of \(\varphi\) is nonnegative in every direction. Indeed, under mild conditions on \(\varphi\), equality holds; see Proposition 8.32 in (Rockafellar and Wets, 1998): \[\begin{equation*} %\label{eqn:subdif_direc_der} {\rm dist}(0;\partial \varphi(x))=-\inf_{v:\, \|v\|\leq 1} \varphi'(x;v). 
\end{equation*}\] Thus a point \(x\) satisfying \({\rm dist}(0;\partial \varphi(x))\leq \varepsilon\) approximately satisfies first-order necessary conditions for optimality. An immediate difficulty in analyzing stochastic subgradient methods for nonsmooth and nonconvex problems is that it is not a priori clear how to measure the progress of the algorithm. Neither the functional suboptimality gap, \(\varphi(x_t)-\min \varphi\), nor the stationarity measure, \({\rm dist}(0;\partial \varphi(x_t))\), necessarily tends to zero along the iterate sequence. Indeed, what is missing is a continuous measure of stationarity to monitor, instead of the highly discontinuous function \(x\mapsto{\rm dist}(0;\partial \varphi(x))\).

Weak convexity and the Moreau envelope

In the work (Davis and Drusvyatskiy, 2018), we focus on a class of problems that naturally admit a continuous measure of stationarity. We assume that \(g\colon\mathbb{R}^d\to\mathbb{R}\) is a \(\rho\)-weakly convex function, meaning that the assignment \(x\mapsto g(x)+\frac{\rho}{2}\|x\|^2\) is convex. The class of weakly convex functions is broad. It includes all convex functions and smooth functions with Lipschitz continuous gradient. More generally, any function of the form \(g = h\circ c,\) with \(h\) convex and \(L\)-Lipschitz and \(c\) a smooth map with \(\beta\)-Lipschitz Jacobian, is weakly convex with constant \(\rho\leq L\beta\); see Lemma 4.2 in (Drusvyatskiy and Paquette, 2016). Notice that such composite functions need not be smooth or convex. Classical literature highlights the importance of weak convexity in optimization (Rockafellar, 1982; Poliquin and Rockafellar, 1992; Poliquin and Rockafellar, 1996), while recent advances in statistical learning and signal processing have further reinvigorated the problem class.
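Returning to the proximal stochastic subgradient update stated earlier, a minimal Python sketch is below. It instantiates \(r=\mu\|\cdot\|_1\), whose proximal map is componentwise soft-thresholding; the constant step size and all names are assumptions made for this sketch, not prescriptions from the paper:

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t * ||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_stoch_subgrad(x0, oracle, mu, T, rng):
    """Proximal stochastic subgradient method for min g(x) + mu*||x||_1,
    where oracle(x, rng) returns a stochastic subgradient of g at x."""
    x = np.asarray(x0, dtype=float).copy()
    alpha = 1.0 / np.sqrt(T)          # constant step size for a fixed iteration budget T
    for _ in range(T):
        v = oracle(x, rng)            # stochastic subgradient v_t of g at x_t
        x = prox_l1(x - alpha * v, alpha * mu)
    return x
```

Setting \(\mu=0\) recovers the plain stochastic subgradient method, since the prox of the zero function is the identity.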
Nonlinear least squares, phase retrieval (Eldar and Mendelson, 2014; Duchi and Ruan, 2017; Davis, Drusvyatskiy, and Paquette, 2017), graph synchronization (Bandeira, Boumal, and Voroninski, 2016; Singer, 2011; Abbe, Bandeira, Bracher, and Singer, 2014), and robust principal component analysis (Candès, Li, Ma, and Wright, 2011; Chandrasekaran, Sanghavi, Parrilo, and Willsky, 2011) naturally lead to weakly convex formulations. For a recent discussion on the role of weak convexity in large-scale optimization, see, e.g., (Drusvyatskiy, 2018) or the previous blog post. It has been known since Nurminskii's work (Nurminskii, 1974; Nurminskii, 1973) that when \(g\) is \(\rho\)-weakly convex and \(r=0\), the stochastic subgradient method generates an iterate sequence that subsequentially converges to a stationary point of the problem, almost surely. Nonetheless, the sample complexity of the basic method and of its proximal extension has remained elusive. Our approach to resolving this open problem relies on an elementary observation: weakly convex problems naturally admit a continuous measure of stationarity through implicit smoothing. The key construction we use is the Moreau envelope: \[\varphi_{\lambda}(x):=\min_{y}~ \left\{\varphi(y)+\tfrac{1}{2\lambda}\|y-x\|^2\right\},\] where \(\lambda > 0\). Standard results such as Theorem 31.5 in (Rockafellar, 1970) show that as long as \(\lambda<\rho^{-1}\), the envelope \(\varphi_{\lambda}\) is \(C^1\)-smooth with the gradient given by \[\begin{equation}\label{eqn:grad_form} \nabla \varphi_{\lambda}(x)=\lambda^{-1}(x-{\rm prox}_{\lambda \varphi}(x)). \end{equation}\] When \(r=0\) and \(g\) is smooth, the norm \(\|\nabla \varphi_{\lambda}(x)\|\) is proportional to the magnitude of the true gradient \(\|\nabla g(x)\|\). In the broader nonsmooth setting, the norm of the gradient \(\|\nabla \varphi_{\lambda}(x)\|\) has an intuitive interpretation in terms of near-stationarity for the target problem (\ref{eqn:gen_err}).
Namely, the definition of the Moreau envelope directly implies that for any point \(x\in\mathbb{R}^d\), the proximal point \(\hat x:={\rm prox}_{\lambda \varphi}(x)\) satisfies \[\begin{equation*} \left\{\begin{array}{cl} \|\hat{x}-x\|&= \lambda\|\nabla \varphi_{\lambda}(x)\|,\\ \varphi(\hat x) &\leq \varphi(x),\\ {\rm dist}(0;\partial \varphi(\hat{x}))&\leq \|\nabla \varphi_{\lambda}(x)\|. \end{array}\right. \end{equation*}\] Thus a small gradient \(\|\nabla \varphi_{\lambda}(x)\|\) implies that \(x\) is near some point \(\hat x\) that is nearly stationary for (\ref{eqn:gen_err}). For a longer discussion of the near-stationarity concept, see (Drusvyatskiy, 2018; Section 4.1 in Drusvyatskiy and Paquette, 2016), or the previous blog post. In the paper (Davis and Drusvyatskiy, 2018), we show that under an appropriate choice of the sequence \(\alpha_t\), the proximal stochastic subgradient method will generate a point \(x\) satisfying \(\mathbb{E}\|\nabla \varphi_{1/(2\rho)}(x)\|\leq \varepsilon\) after at most \(O(\varepsilon^{-4})\) iterations.¹ This is perhaps surprising, since neither the Moreau envelope \(\varphi_{\lambda}(\cdot)\) nor the proximal map \({\rm prox}_{\lambda \varphi}(\cdot)\) explicitly appears in the definition of the stochastic subgradient method. Our work appears to be the first to recognize the Moreau envelope as a useful potential function for analyzing subgradient methods. The convergence guarantees we develop are new even in simplified cases. Two such settings are (a) when \(g\) is smooth and \(r\) is the indicator function of a closed convex set, and (b) when \(g\) is nonsmooth, \(r = 0\), and we have explicit access to the exact subgradients of \(g\).
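The three relations above are easy to verify numerically in one dimension. The sketch below takes \(\varphi(x)=|x|\) (so that \({\rm prox}_{\lambda\varphi}\) is soft-thresholding and \(\varphi_{\lambda}\) is the Huber function), purely as an illustrative assumption:

```python
def moreau_envelope_grad(x, lam):
    """For phi = |.|: grad of the Moreau envelope phi_lam at x, and the
    proximal point xhat = prox_{lam*phi}(x) (soft-thresholding)."""
    sign = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
    xhat = sign * max(abs(x) - lam, 0.0)   # prox of lam * |.|
    grad = (x - xhat) / lam                # = lam^{-1} * (x - prox_{lam*phi}(x))
    return grad, xhat

# Check the three relations at x = 0.3 with lam = 0.5 (here xhat = 0):
g, xhat = moreau_envelope_grad(0.3, 0.5)
# |xhat - x| = 0.3 = lam * |g| = 0.5 * 0.6     (first relation)
# phi(xhat) = 0.0 <= phi(x) = 0.3              (second relation)
# 0 lies in d|.|(0), so dist(0; d phi(xhat)) = 0 <= |g| = 0.6
```

For \(|x|\geq\lambda\) the same routine returns \(\nabla\varphi_{\lambda}(x)={\rm sign}(x)\), matching the gradient of the Huber function.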
Related Literature and Context

Analogous convergence guarantees when \(r\) is an indicator function of a closed convex set were recently established for a different algorithm, the proximally guided projected subgradient method (Davis and Grimmer, 2017). This scheme proceeds by directly applying the gradient descent method to the Moreau envelope \(\varphi_{\lambda}\), with each proximal point \({\rm prox}_{\lambda \varphi}(x)\) approximately evaluated by a convex subgradient method. In contrast, we showed that the basic stochastic subgradient method in the fully proximal setting, and without any modification or parameter tuning, already satisfies the desired convergence guarantees. Our work also improves in two fundamental ways on the results in the seminal papers on the stochastic proximal gradient method for smooth functions (Ghadimi and Lan, 2013; Ghadimi, Lan, and Zhang, 2016; Xu and Yin, 2015): first, we allow \(g\) to be nonsmooth, and second, even when \(g\) is smooth, we do not require the variance of our stochastic estimator for \(\nabla g(x_t)\) to decrease as a function of \(t\). The second contribution removes the well-known "mini-batching" requirements common to (Ghadimi, Lan, and Zhang, 2016; Xu and Yin, 2015), while the first significantly expands the class of functions for which the rate of convergence of the stochastic proximal subgradient method is known. The results in this paper are orthogonal to the recent line of work on accelerated rates of convergence for smooth nonconvex finite-sum minimization problems, e.g., (Lei, Ju, Chen, and Jordan, 2017; Allen-Zhu, 2017 (Katyusha); Reddi, Sra, Poczos, and Smola, 2016; Allen-Zhu, 2017 (Natasha 2)). These works crucially exploit the finite-sum structure and/or (higher order) smoothness of the objective functions to push beyond the \(O(\varepsilon^{-4})\) complexity. We leave it as an intriguing open question whether such an improvement is possible for the nonsmooth weakly convex setting we consider here.
E. Abbe, A.S. Bandeira, A. Bracher, and A. Singer. Decoding binary node labels from censored edge measurements: phase transition and efficient recovery. IEEE Trans. Network Sci. Eng., 1(1):10-22, 2014. Z. Allen-Zhu. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. In STOC, 2017. Z. Allen-Zhu. Natasha 2: Faster non-convex optimization than SGD. arXiv preprint arXiv:1708.08694, 2017. Z. Allen-Zhu. How to make gradients small stochastically. Preprint arXiv:1801.02982 (version 1), 2018. A.S. Bandeira, N. Boumal, and V. Voroninski. On the low-rank approach for semidefinite programs arising in synchronization and community detection. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, June 23-26, 2016, pages 361-382, 2016. E.J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(3):Art. 11, 37, 2011. V. Chandrasekaran, S. Sanghavi, P.A. Parrilo, and A.S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim., 21(2):572-596, 2011. D. Davis and D. Drusvyatskiy. Complexity of finding near-stationary points of convex functions stochastically. arXiv:1802.08556, 2018. D. Davis and D. Drusvyatskiy. Stochastic subgradient method converges at the rate $O(k^{-1/4})$ on weakly convex functions. arXiv:1802.02988, 2018. D. Davis, D. Drusvyatskiy, and C. Paquette. The nonsmooth landscape of phase retrieval. Preprint arXiv:1711.03247, 2017. D. Davis and B. Grimmer. Proximally guided stochastic method for nonsmooth, non-convex problems. Preprint arXiv:1707.03505, 2017. D. Drusvyatskiy. The proximal point method revisited. To appear in the SIAG/OPT Views and News, arXiv:1712.06038, 2018. D. Drusvyatskiy and C. Paquette. Efficiency of minimizing compositions of convex functions and smooth maps. Preprint arXiv:1605.00125, 2016. J.C. Duchi and F. Ruan. Solving (most) of a set of quadratic equalities: Composite optimization for robust phase retrieval. Preprint arXiv:1705.02356, 2017.
Y.C. Eldar and S. Mendelson. Phase retrieval: stability and recovery guarantees. Appl. Comput. Harmon. Anal., 36(3):473-494, 2014. S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. Optim., 23(4):2341-2368, 2013. S. Ghadimi, G. Lan, and H. Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Math. Program., 155(1):267-305, 2016. L. Lei, C. Ju, J. Chen, and M.I. Jordan. Non-convex finite-sum optimization via SCSG methods. In Advances in Neural Information Processing Systems, pages 2345-2355, 2017. A.S. Nemirovsky and D.B. Yudin. Problem complexity and method efficiency in optimization. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1983. E.A. Nurminskii. The quasigradient method for the solving of the nonlinear programming problems. Cybernetics, 9(1):145-150, Jan 1973. E.A. Nurminskii. Minimization of nondifferentiable functions in the presence of noise. Cybernetics, 10(4):619-621, Jul 1974. R.A. Poliquin and R.T. Rockafellar. Amenable functions in optimization. In Nonsmooth optimization: methods and applications (Erice, 1991), pages 338-353. Gordon and Breach, Montreux, 1992. R.A. Poliquin and R.T. Rockafellar. Prox-regular functions in variational analysis. Trans. Amer. Math. Soc., 348:1805-1838, 1996. S.J. Reddi, S. Sra, B. Poczos, and A.J. Smola. Proximal stochastic methods for nonsmooth nonconvex finite-sum optimization. In Advances in Neural Information Processing Systems, pages 1145-1153, 2016. R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970. R.T. Rockafellar. Favorable classes of Lipschitz-continuous functions in subgradient optimization. In Progress in nondifferentiable optimization, volume 8 of IIASA Collaborative Proc. Ser. CP-82, pages 125-143. Int. Inst. Appl. Sys. Anal., Laxenburg, 1982. R.T. Rockafellar and R.J-B. Wets. Variational Analysis.
Grundlehren der mathematischen Wissenschaften, Vol. 317, Springer, Berlin, 1998. A. Singer. Angular synchronization by eigenvectors and semidefinite programming. Appl. Comput. Harmon. Anal., 30(1):20-36, 2011. Y. Xu and W. Yin. Block stochastic gradient iteration for convex and nonconvex optimization. SIAM J. Optim., 25(3):1686-1716, 2015. ¹ In the supplementary text (Davis and Drusvyatskiy, 2018), we also showed that when \(g\) happens to be convex, this complexity can be improved to \(\widetilde{O}(\varepsilon^{-2})\) by adapting a gradual regularization technique of (Allen-Zhu, 2018).
Estimation of Genetic Parameters of Some Productive and Reproductive Traits in Italian Buffalo. Genetic Evaluation with BLUP-Animal Model Catillo, G.;Moioli, B.;Napolitano, F. 747 In this study, the Italian milk-recorded buffalo population from 1974 to 1996 was analysed with the purpose of estimating genetic and environmental variability and providing genetic parameters for the most important economic traits. High variability between herds was evident due to the poor knowledge of feeding requirements and husbandry technology in this species compared to cattle. Age at first calving was reduced by 57 days during the considered years, following efforts made in better feeding and management from 1990; on the contrary, the calving interval increased by 17 days as a consequence of forcing buffaloes to calve in spring, in order to have the peak milk yield when milk fetches a better price. Average milk yield increased by 1853 kg during these years, while lactation duration was reduced by 30 days. Season of calving had no effect on any trait. Calving order had a positive effect on milk yield, especially because older cows produce more milk in shorter lactations. Heritability for age at first calving and calving interval was 0.26 and 0.05 respectively. Heritability of the productive traits, milk yield and lactation duration, was 0.19 and 0.13 respectively, with repeatabilities of 0.40 and 0.26. The genetic trend for milk yield was 2.1 kg milk/year for the bulls and 1 kg for the whole population. The high genetic variability of milk production, as well as of lactation duration, indicates that there are good opportunities for genetic improvement when including these traits in a selection scheme. The low genetic trend registered over 15 years of recording activity can be explained by the fact that neither progeny testing was performed nor selection schemes were implemented, due to the difficulty of using artificial insemination in buffalo.
Participation of Protein Synthesis in in vitro Oocyte Maturation and Fertilization in Cattle Nakaya, Y.;Hattori, M.;Fujihara, N. 754 Bovine oocytes with compact and complete cumulus cells were cultured for up to 24 h in TCM199 buffered with 25 mmol/l HEPES and supplemented with 10% FBS (fetal bovine serum), 1 mg/ml $17{\beta}$-estradiol and 20 IU/ml hCG (human chorionic gonadotropin). All of the oocytes were divided into 6 groups depending upon incubation times (control, 0 hour, 6 hours, 12 hours, 16 hours, 18 hours). To all experimental media, $200{\mu}g/ml$ puromycin was added at the incubation times mentioned above. Following these culture times, in vitro insemination was conducted with frozen-thawed bovine spermatozoa in medium BO (Brackett and Oliphant medium for in vitro insemination) with $10{\mu}g/ml$ BSA (bovine serum albumin) and 10 mg/ml heparin added. After 22 h culture, the oocytes were fixed with acetic alcohol solution and stained with orcein acetic solution to evaluate sperm nuclear progression. Addition of puromycin after 0, 6 and 12 h of culture resulted in arrest of oocyte maturation at the M1 stage. Contrariwise, puromycin addition after 12 h of culture led to restoration of nuclear progression to the M2 stage. Puromycin affected the synthesis of cyclin B protein, which may be involved in oocyte maturation and sperm capacitation for in vitro fertilization. The present study suggests the participation of protein synthesis (cyclin B) in oocyte development from the M1 to the M2 stage in vitro. Effect of Cell Cycle Stage on the Development of Embryos Produced by Cumulus Cell Nuclear Transfer in Hanwoo (Korean Cattle) Im, G.S.;Yang, B.S.;Yang, B.C.;Chang, W.K.;Yi, Y.J.;Park, C.S. 759 This study was carried out to investigate the effect of activation timing, cell cycle and passage on the development of embryos produced by cumulus cell nuclear transfer in Hanwoo (Korean cattle).
Nuclear donor cumulus cells were cultured in Dulbecco's modified Eagle medium supplemented with 10% fetal bovine serum at $38.5^{\circ}C$ in a humidified atmosphere of 5% $CO_2$ in air. The 1~6 passages of serum-deprived or actively dividing cumulus cells were isolated and used as donor cells. The in vitro matured oocytes were enucleated and then the isolated donor cells were introduced. One pulse of 180 volts for $15{\mu}s$ was applied to induce the fusion between karyoplast and cytoplast. The activation was done before or after the fusion. To activate, oocytes were treated with $10{\mu}M$ calcium ionophore for 5 min immediately followed by 2 mM 6-dimethylaminopurine for 3 h. The nuclear transfer embryos were cultured in $500{\mu}l$ of modified CR1aa supplemented with 3 mg/ml BSA in a four-well dish covered with mineral oil. After 3 days of culture, the culture medium was changed to modified CR1aa medium containing 1.5 mg/ml BSA and 5% FBS for 4 days. The incubation environment was 5% $CO_2$, 5% $O_2$, 90% $N_2$ at $38.5^{\circ}C$. There was no blastocyst formation when the nuclear transfer embryos were activated before the fusion, whereas 29.9% blastocyst formation was observed when the nuclear transfer embryos were activated after the fusion. When serum-deprived and actively dividing cumulus cells were used as nuclear donor cells, the developmental rates to blastocyst were 38.5% and 40.6%, respectively. There was no significant difference between serum-deprived and actively dividing cells in the developmental rates. The developmental rates to blastocyst according to passages 1~6 were 37.5~44.4%. However, there were no significant differences among passages. These results indicate that cumulus cells from passages 1~6, irrespective of cell cycle stage, could support development of nuclear transfer embryos activated after the fusion.
An Evaluation of Suckling and Post Weaning Practices in Relation to the Stimulation and Ease of Detection of Oestrus in Nepalese Pakhribas Pigs Shrestha, N.P.;Edwards, S.A.;English, P.R.;Robertson, J.F. 765 Thirty second-parity sows of the synthetic Nepalese Pakhribas genotype were used to investigate factors which might improve the occurrence and expression of estrus. The experiment had two sequential elements. In part 1, a change in suckling pattern was applied during lactation, and in part 2, different estrus detection methods were evaluated after weaning. All sows received the same pattern of weaning, which imitated the progressive weaning system used in Nepalese villages. Piglets from each litter were weaned at three ages (6, 7 and 8 weeks of age) in the proportion of 0.5 at 6 weeks followed by 0.25 at each of the subsequent weanings. In the first lactation treatment, the suckling pattern was left undisturbed, similar to the practice used in the villages in which the remaining piglets after first weaning are allowed continuous suckling. In the other treatment, the remaining piglets after first weaning were allowed to suckle their sows only during the night, whilst in the day time (09:00-16:00) they were excluded from the sow but left free to roam around. After weaning, estrus detection procedures were carried out in the absence or presence of two different boar stimuli: a synthetic boar pheromone spray or fresh boar urine. These were applied sequentially in a sequence of testing that alternated for each sow on a daily basis. The weaning to re-mating interval was significantly longer for the unrestricted suckling treatment. All sows were re-mated within 30 days after first weaning in the restricted suckling treatment groups, whereas only 71% of sows were re-mated within 30 days after weaning in the unrestricted suckling treatment groups ($\chi^2=3.877$, 1 df, p<0.05).
Both boar pheromone spray and boar urine increased the estrus detection probability, with no significant differences between the two stimulus treatments. Cloning and Expression of Bovine Polyadenylate Binding Protein 1 cDNA in Mammary Tissues Kim, J.H.;Jeon, D.H.;Choi, Y.J.;Baik, M.G. 771 A pregnancy-induced clone was identified by differential screening from a cDNA library of bovine mammary gland. The clone was identified as a cDNA encoding a polyadenylate binding protein 1 (PABP). The cDNA clone had a total length of 1,911 nucleotides coding for 636 amino acids. The nucleotide sequence of the bovine PABP was 95% and 94% identical to those of the human and mouse species, respectively. Comparison of the deduced amino acid sequences of bovine PABP with those of the human species showed 100% identity. Induction of the PABP mRNA was observed in bovine mammary tissues at 7 and 8 months of pregnancy compared to the virgin, lactating and involuted states. Expression of the PABP gene was examined in mammary epithelial HC11 cells under proliferating, differentiated and apoptotic conditions. The mRNA levels of the PABP gene were similar between proliferating and differentiated cells, but expression levels were very low in apoptotic cells compared to the other conditions. The results demonstrate that the PABP gene is induced during pregnancy, at which stage mammary epithelial cells are actively proliferating. Meiotic Competence of Caprine Oocytes During IVM on Granulosa Cell Monolayers Developed from Small and Large Follicles in Comparison to the Granulosa Cell Coculture Sharma, G. Taru;Teotia, Alok;Majumdar, A.C. 777 In this study, granulosa cell (GC) monolayers developed from small (<5 mm) and large (>5 mm) follicles were evaluated for their effect on the meiotic competence of caprine oocytes during in vitro maturation, in comparison to GC coculture. Ovaries were collected from the local abattoir and follicular contents were aspirated for the monolayer culture.
For IVM the oocytes were collected by puncturing the nonatretic follicles (>4 mm). Results revealed that at the same seeding rate, the small-follicle granulosa cell monolayer achieved confluence 24-48 h earlier than the large-follicle granulosa cell monolayer. GC monolayers significantly (p<0.05) improved the rate of meiotic resumption and nuclear maturation (84.76% vs 74.74%) after 27 h of culture in comparison to GC coculture. Statistically there was no significant difference in the maturation rate between the caprine oocytes matured over small or large follicular GC monolayers. It is concluded from the present study that GC monolayers support better nuclear and cytoplasmic maturation of growing caprine oocytes, as evidenced by the better maturation rate over GC monolayers compared to the oocytes matured with GC coculture. Granulosa cells from small and large follicles can be used for IVM with more or less the same efficiency after conditioning them with maturation media for 18-24 h before the onset of culture. Iodine Supplementation of Leucaena leucocephala Diet for Goats. I. Effects on Nutrient Utilization Rajendran, D.;Pattanaik, A.K.;Khan, S.A.;Bedi, S.P.S. 785 Twelve indigenous male goats, comprising six intact and six castrated (2.5-3 years; $24.4{\pm}0.62kg$), were assigned evenly into two dietary treatments, viz. $I_0$ and $I_{100}$, and were used to study the effect of supplementation of iodine on nutrient utilization when their diet contained Leucaena leaf meal. They were offered a conventional concentrate mixture along with Leucaena leucocephala leaf meal, the latter to meet 50% of their crude protein (CP) requirements, and supplemented with either no iodine ($I_0$) or 0.1 mg of iodine ($I_{100}$)/day/animal as potassium iodide for a period of 105 days. Wheat straw given ad libitum was the sole source of roughage. A metabolism trial of 8 days' duration was conducted after 90 days of experimental feeding.
It was observed that the overall dry matter (DM) intake during the experimental period was higher (p<0.05) in the $I_{100}$ group as compared to the $I_0$ group (508.6 vs. $443.7g\;d^{-1}$). The intake of CP, digestible crude protein (DCP) and metabolisable energy (ME), although non-significant, tended to be higher in the iodine supplemented group, $I_{100}$. Digestibility of dry matter, organic matter (OM), CP, ether extract and crude fiber (CF) did not differ (p>0.05) between the treatments. However, nitrogen retention was higher (p<0.01) in $I_{100}$ than $I_0$, with the values being 2.63 and $1.70g\;d^{-1}$, respectively. No difference (p>0.05) was evident in the retention of calcium and phosphorus between the two groups. The castrated animals exhibited lower DM intake concurrent with higher digestibility of DM and crude fibre (p<0.05), and of organic matter and total carbohydrates (p<0.01), when compared to intact ones. It was concluded that supplementation of iodine to a leucaena-based ration may help in improving the DM intake and nitrogen utilization by goats. Iodine Supplementation of Leucaena leucocephala Diet for Goats. II. Effects on Blood Metabolites and Thyroid Hormones Twelve adult male goats, comprising six castrated and six intact males (2.5-3 years; $24.4{\pm}0.62kg$), were randomly but evenly divided into two groups ($I_0$ and $I_{100}$) and fed conventional concentrate mixture along with Leucaena leucocephala leaf meal (100 g/head approx.), the latter to supply 50 per cent of the crude protein (CP) requirements. The $I_{100}$ group was provided with supplemental iodine as potassium iodide solution at 0.1 mg/day/animal. Wheat straw was provided ad libitum as the sole source of roughage during the experimental period of 105 d. Blood samples were collected at the beginning (0 d) and thereafter at 30, 60 and 90 d of experimental feeding. The study revealed that the serum glucose level was significantly higher (p<0.01) in the $I_{100}$ group as compared to $I_0$.
Haemoglobin, packed cell volume and serum concentrations of total protein, albumin, globulin, calcium, inorganic phosphorus and alkaline phosphatase did not show significant differences as a result of iodine supplementation. Though the serum levels of triiodothyronine ($T_3$) were comparable between the two groups, that of thyroxine ($T_4$) increased significantly (p<0.001) in the $I_{100}$ group. The $T_3:T_4$ ratio was also similar between the two groups. The study indicated that the adverse effect of Leucaena feeding on the thyroid gland could possibly be alleviated by provision of extra iodine. However, this needs further confirmation using long-duration studies. Degradation of Rice Straw by Rumen Fungi and Cellulolytic Bacteria through Mono-, Co- or Sequential- Cultures Ha, J.K.;Lee, S.S.;Kim, S.W.;Han, In K.;Ushida, K.;Cheng, K.J. 797 Two strains of rumen fungi (Piromyces rhizinflata B157, Orpinomyces joyonii SG4) and three strains of rumen cellulolytic bacteria (Ruminococcus albus B199, Ruminococcus flavefaciens FD1 and Fibrobacter succinogenes S85) were used as mono-cultures, or combined in co- and sequential-cultures, to assess the relative contributions and interactions between rumen fungi and cellulolytic bacteria in rice straw degradation. The rates of dry matter degradation of co-cultures were similar to those of the corresponding bacterial mono-cultures. Compared to the corresponding sequential-cultures, the degradation of rice straw was reduced in all co-cultures (P<0.01). Regardless of the microbial species, the cellulolytic bacteria seemed to inhibit the degradation of rice straw by rumen fungi. The high efficiency of fungal cellulolysis seems to affect bacterial degradation rates. Relationships among Behavior, Physiological States and Body Weight Gain in Grazing Holstein Heifers Hasegawa, N.;Hidari, H. 803 This study examined the behavior of dairy heifers and the factors affecting their performance on pasture.
Behavior of 10 Holstein heifers in a herd of 25 animals that rotationally grazed five 8-ha pastures was observed and recorded every 5 minutes during 24 hours; body weights were measured once a month from June to October. Blood and rumen fluid samples were collected from 5 of them bimonthly. Chemical composition was analyzed for the forage samples collected each month. CP content (DM basis) of herbage ranged from 12.2 (June) to 17.2% (October) and ADF from 31.1 (October) to 39.1% (July). Standing (posture) time differed significantly among months (p<0.001), ranging from 48.3 to 61.3% of 24 hours, and was longer in July and August (61.3% and 58.3%, respectively) when the ADF content of herbage was higher than in the other months. Grazing time, which differed significantly among months (p<0.001), ranged from 29.1 to 41.6% of 24 hours and was shorter in June and September (29.1% and 33.0%, respectively) when the ADF content was lower than in the other months. Average DG through the experimental period was 0.74 kg/day. August was the lowest in DG (0.41 kg/day) and the longest in rumination time and standing-rumination time among months. Animals of higher DG had a shorter standing time (r=-0.36, p<0.01) and a longer lying-rumination time (r=0.55, p<0.001) throughout the experiment. Total protein concentration in blood ranged from 9.04 to 9.64 g/dl and was negatively correlated with DG (r=-0.65, p<0.05). Phospholipid concentration of blood ranged from 119.66 to 156.40 mg/dl and was negatively correlated with DG (r=-0.57, p<0.05). Among rumen fluid VFA, the acetic acid proportion (ranging from 69.35 to 74.76%) and the butyric acid proportion (ranging from 7.18 to 12.05%) showed significant differences among months (p<0.05, p<0.001, respectively). The butyric acid proportion was significantly correlated with DG (r=0.60, p<0.05). Enumeration and Recovery of Bacterial Isolates from Ruminants Fed with Different Dietary Regimes and Their Antibacterial Activity Pattnaik, P.;Grover, Sunita;Batish, V.K.
811 The study evaluated different synthetic and semisynthetic media for maximal recovery of rumen bacteria and expression of their antibacterial activity. Rumen Glucose Cellobiose Agar (RGCA) medium was found to be the best for recovery of rumen bacteria. However, L-10 medium was the best for expression of antibacterial activity of ruminal isolates, followed by Easy, M-10, RGCA and M-98-5 medium. The present study recommends the use of L-10 medium as the medium of choice for screening of antibacterial activity of ruminal isolates. Comparative evaluation of bacterial counts on different dietary regimes indicated significant differences between different growth media on a specific diet and between diets on specific growth media within a species. However, there was no overall significant difference between total bacterial counts obtained from rumen liquor of cattle and buffalo with respect to either the feeding regime or growth media. Feeding a straw-based diet was the best for high recovery of rumen bacteria. Relationship between Body Condition Score and Ultrasonographic Measurement of Subcutaneous Fat in Dairy Cows Zulu, Victor Chisha;Nakao, Toshihiko;Moriyoshi, Masaharu;Nakada, Ken;Sawamukai, Yutaka;Tanaka, Yoshinobu;Zhang, Wen-Chang 816 This study aimed at relating body condition score (BCS) to ultrasound measurements of subcutaneous fat over the areas most commonly used for body condition scoring of Holstein-Friesian cows, and at determining the practicality of ultrasound measurement of subcutaneous fat for assessment of the energy status of the cow. Twenty-eight cows were scored to the nearest quarter point on a scale of 1-5 (1=thin and 5=fat) using both visual and tactile techniques. On the same day, ultrasound measurements of subcutaneous fat were obtained at the lumbar transverse process, thurl and near-tailhead areas on both sides of the cow, giving six locations.
Spearman's rank correlation coefficients between the six ultrasound locations ranged from 0.72-0.93 and were all significantly different from zero (p<0.01). Correlation coefficients between BCS and the mean lumbar, thurl and tailhead ultrasound measurements ranged between 0.67-0.72 and were also significantly different from zero (p<0.01). BCS was highly and significantly correlated to ultrasound measurements of subcutaneous fat. Ultrasound can be used independently or in conjunction with BCS to estimate the nutrition and energy status of cows. Effect of Partial Replacement of Soybean Meal with Palm Kernel Meal and Copra Meal on Growth Performance, Nutrient Digestibility and Carcass Characteristics of Finishing Pigs Kim, B.G.;Lee, J.H.;Jung, H.J.;Han, Y.K.;Park, K.M.;Han, In K. 821 To study the effects of partial replacement of soybean meal (SBM) with palm kernel meal (PKM) and copra meal (CM) on growth performance, nutrient digestibility and carcass characteristics in finishing pigs, a total of 150 crossbred pigs (Landrace$\times$Duroc$\times$Yorkshire; average $52.11{\pm}1.08kg$ body weight) were allotted to five treatments in a randomized block design. The treatments included 1) Control: without PKM or CM, 2) PKM2: 2% of palm kernel meal, 3) PKM4: 4% palm kernel meal, 4) CM2: 2% of copra meal, 5) CM4: 4% of copra meal. During the early finishing period (52~74 kg), growth performance was better in the CM diets than in the PKM diets or control diet, and in the late finishing period (74~100 kg) it was lower (p<0.05) in the PKM4 diet than in the other diets. Nutrient digestibilities of the PKM- or CM-substituted diets tended to be lower than those of the control diet. In the early finishing period, total amino acid digestibilities of the PKM and CM diets tended to be lower than that of the control diet, and in the late finishing period, they were lower (p<0.05) than that of the control diet.
Carcass length was longer (p<0.05) in the pigs fed 2% CM than in the pigs fed the 4% PKM diet, but other carcass characteristics were not different among treatments. Although the dietary C14:0 content affected (p<0.05) the C14:0 content in the carcass, the inclusion of PKM or CM in the diet did not affect the total saturated and unsaturated fatty acids in the backfat of finishing pigs. Although not significant, supplementation of CM at 2% and 4% of the control group tended to decrease feed cost per kg weight gain by 2.89 and 1.42%, respectively. In conclusion, copra meal can be a valuable source of protein in the diet for finishing pigs and may replace other protein sources in pig diets to a considerable extent. Cholesterol Contents and Fatty Acid Composition of Chukar, Pheasant, Guinea Fowl and Quail Egg Yolk Choi, S.H.;Song, K.T.;Oh, H.R. 831 Little information on the cholesterol content and the fatty acid composition of avian species other than chicken is available. This study was conducted to compare the yolk cholesterol content and the fatty acid profiles of some wild birds maintained in captivity on commercial grain-based chicken diets. The concentration of cholesterol per g of yolk as well as the total yolk cholesterol per egg varied among species. Yolk cholesterol concentration, expressed as mg/g of yolk, was highest in chukar, followed by pheasant, guinea fowl and quail, while total yolk cholesterol in an egg was highest in guinea fowl, followed by pheasant, chukar and quail. An inverse relationship between yolk cholesterol concentration and egg weight was observed among species, with the exception of quail. Although the major fatty acids of egg yolk were oleic acid, palmitic acid, linoleic acid and stearic acid in all birds, the composition varied among species. Chukar and quail showed higher oleic acid content than pheasant and guinea fowl, while showing lower linoleic acid content.
Fatty acids of chukar and guinea fowl eggs were more saturated than those of pheasant and quail. Chukar and especially quail had higher levels of monounsaturated fatty acids (MUFA) than pheasant and guinea fowl; in quail egg 51.6% of total fatty acids were MUFA. Polyunsaturated fatty acids (PUFA), essential fatty acids (EFA) and the ratio of PUFA to saturated fatty acids (P/S ratio) were higher in pheasant and guinea fowl than in chukar and quail. Differences in the fatty acid profile of triglyceride (TG) among birds were largely similar to those of total lipid. In comparison to TG, phosphatidyl choline (PC) was low in MUFA while high in saturated fatty acids (SFA), PUFA, P/S ratio and EFA. PC was most saturated in guinea fowl egg yolk, followed by chukar, quail and pheasant. PUFA, P/S ratio and EFA in PC were highest in pheasant, followed by chukar, guinea fowl and quail. Phosphatidyl ethanolamine (PE) was distinguished from PC by its high contents of stearic acid, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), while being low in palmitic, oleic and linoleic acids. In the egg yolk of all birds MUFA was significantly lower in PE than in PC, except in quail. Compared to other species, quail had a considerably higher content of MUFA in PE at the expense of SFA and PUFA. Feed Intake Patterns and Growth Performance of Purebred and Crossbred Meishan and Yorkshire Pigs Hyun, Y.;Wolter, B.F.;Ellis, M. 837 Two experiments were conducted to compare the feed intake patterns and growth performance of Meishan and Yorkshire growing pigs. Experiment 1 was carried out over a 6-wk period and used 48 barrows with equal numbers of purebred Meishan (M) and Yorkshire (Y). Pigs were allocated to four groups of 12 pigs consisting of equal numbers of M and Y. Initial BW were $36.4{\pm}0.32kg$ and $42.1{\pm}1.41kg$ for M and Y, respectively.
Experiment 2 was carried out over a 5-week period and used 48 pigs consisting of equal numbers of both barrows and gilts and of crossbred Meishan$\times$Yorkshire (MY) and purebred Yorkshire (Y) animals. Pigs were allotted to 6 pens of 8 pigs, with 4 single- and 2 mixed-genotype groups (initial $BW=28.5{\pm}0.99kg$). In both experiments, pigs were given ad libitum access to a grower diet (17% crude protein, 0.9% lysine, 3365 kcal/kg ME) via feed intake recording equipment (F.I.R.E.). Pigs carried an ear-tag transponder with a unique identification which allowed the time, duration, and size of individual meals to be recorded. In Exp. 1, Y had higher ADG (721 vs 353 g, p<0.01) and daily feed intake (DFI; 2.338 vs 1.363 kg, p<0.01), made more frequent visits to the feeder per day (NFV; 18.5 vs 7.7, p<0.01), had a shorter feeder occupation time per visit (FOV; 7.4 vs 12.9 min, p<0.01), and ate less feed per visit (FIV; 130 vs 177 g, p<0.01) than M pigs. Feed consumption rates (CR) were greater for Y compared to M (19.3 vs 14.8 g/min, p<0.01). Feeder occupation time per day (FOD) was longer for Y than M (114.3 vs 82.8 min/pig, p<0.01). Yorkshire pigs visited the feeder more frequently between 0800 and 1100 h. Meishan pigs showed more frequent feeder visits between 0600 and 0800 h, and between 1600 and 2100 h when feeding competition with Y was reduced. In Exp. 2, there was no effect of genotype or group composition on DFI, ADG or gain:feed ratio. Crossbred pigs (MY) made fewer feeder visits (12.6 vs 17.7, p<0.01), and had greater FIV (124 vs 98 g/visit, p<0.01), and longer FOV (8.11 vs 7.24 min/visit, p<0.01) and FOD (112 vs 100 min, p<0.05) than Y pigs. Results of this study suggest substantial genetic variation in feeding patterns as well as in growth performance. Optimization of Cholesterol Removal Conditions from Homogenized Milk by Treatment with Saponin Chang, E.J.;Oh, H.I.;Kwak, H.S.
844 This study was carried out to determine the optimum conditions for cholesterol removal from homogenized milk by treatment with saponin using a response surface methodology (RSM). The effects of temperature, reaction time, and amounts of celite or saponin added on cholesterol removal from milk were investigated. The level of cholesterol removal from milk increased with saponin concentration and varied from 57.4 to 73.3%. The optimum reaction time and amounts of celite and saponin addition, determined by partial differentiation of the model equation, were 30 min, 0.95% and 1.5%, respectively. Under these conditions, the predicted cholesterol removal by RSM was estimated to be 73.4%. The experimental removal value was 73.7%. Thus, there was no appreciable difference between the experimental value and the predicted value based on RSM. Effect of Cool Drinking Water on Production and Shell Quality of Laying Hens in Summer Glatz, P.C. 850 Feed intake, egg weight, rate of lay and shell quality characteristics were measured in an Australian tinted egg laying strain from 31-42 weeks of age, housed at $30^{\circ}C$ and provided drinking water at 5, 10, 17 and $30^{\circ}C$. In a second experiment a European brown egg laying strain (59-66 weeks of age) housed at $30^{\circ}C$ was provided drinking water at 5, 10, 15 and $30^{\circ}C$. Brown egg layers given cool drinking water (5, 10 and $15^{\circ}C$) consumed more (p<0.05) feed and produced significantly (p<0.05) thicker and heavier shells than hens given drinking water at ambient temperature ($30^{\circ}C$). However, the tinted egg layers given chilled drinking water only consumed more (p<0.05) feed and produced thicker (p<0.05) and heavier (p<0.05) shells when consuming drinking water at $5^{\circ}C$. As the tinted egg layers acclimatised to the environmental temperature there was a decline in the influence of cool drinking water on feed intake and shell quality.
For brown egg layers, however, cool drinking water resulted in an improvement (p<0.05) in feed intake and shell quality over the entire period birds were provided cool water. These studies suggest that there is potential for using cool drinking water to improve feed intake and shell quality of hens housed under hot conditions. The combination of high ambient temperature and high drinking water temperature, a common occurrence in Australian layer sheds, should be avoided. A Note on Risk Factors for Calf Mortality in Large-Scale Dairy Farms in the Tropics : A Case Study on Rift Valley Area of Kenya Bebe, B.O.;Abdulrazak, S.A.;Ogore, P.O.;Ondiek, J.O.;Fujihara, T. 855 The aim of this study was to assess the associations between some potential risk factors and the occurrence of calf mortality in large-scale dairy farms. The Njoro area of the Rift Valley, Kenya was selected for its concentration of large-scale dairy farms, dating from the time of the European settlers. The study was retrospective and focused on the calves dying from January 1996 through October 1998. The study sample consisted of 105 calves extracted from the farm records. Data were collected using a questionnaire and were grouped into farm-level and animal-level factors. Calf mortality was 15.6%, and important risk factors for calf mortality were sex of calf, season of birth, pneumonia, age of dam when the calf was born, and house type for calves. Female calves born during colder wet seasons and those born to dams of 2-4.5 years of age were at equally high risk. Calves raised in movable pens were at higher risk of mortality from pneumonia than those raised in permanent pens. Animal-level factors were major causes of calf mortality in the commercial farms used in this study, and detailed study of these factors is therefore needed to control calf mortality rates.
Effect of Grinding on Color and Chemical Composition of Pork Sausages by Near Infrared Spectrophotometric Analyses Kang, J.O.;Park, J.Y.;Choy, Y.H. 858 Near infrared spectroscopy was applied to samples of processed pork to see the effect of grinding on chemical component analyses. Data from conventional chemical analyses of moisture, fat, protein and NaCl were put into a calibration model by NIR in reflectance mode. The other properties observed were pH and color parameters ($L^*,\;a^*,\;b^*$). Spectral ranges of 400~2500 nm and 400~1100 nm were compared for color parameters. Spectral ranges of 400~2500 nm and 1100~2500 nm were compared for chemical components and pH. Different spectral ranges caused little change in the coefficients of determination or standard errors. $R^2$ values of calibration models for color parameters were in the range of 0.97 to 1.00. $R^2$ values of calibration models of intact sausages for moisture, protein, fat, NaCl and pH were 0.98, 0.89, 0.95, 0.73 and 0.77, respectively, using spectra at 1100~2500 nm. $R^2$ values of calibration models of ground sausages for moisture, protein, fat, NaCl and pH were 0.97, 0.91, 0.97, 0.42 and 0.56, respectively, using spectra at 1100~2500 nm. Structural Characteristics of Cell Walls of Forage Grasses - Their Nutritional Evaluation for Ruminants - - Review - Iiyama, Kenji;Tuyet Lam, Thi Bach 862 The walls of all higher plants are organized as a cellulosic, fibrillar phase embedded in a matrix phase composed of non-cellulosic polysaccharides, some proteins and, in most secondary walls, lignin. For the effective utilization of plant biomass, qualitative and quantitative analyses of plant cell walls are essential. Structural features of individual components are being clarified using newly developed equipment and techniques. However, "empirical" procedures to elucidate plant cell walls, which are not based on scientific definitions of the components, are still applied in some fields.
These procedures may lead to misunderstandings in the effective utilization of plant biomass. In addition, interest in the investigation of wall organization is moving not only towards qualitative characterisation but also towards quantitation of the associations between wall components. These involve polysaccharide-polysaccharide and polysaccharide-lignin cross-links. Investigation of the associations is being done in order to understand the chemical structure, organization and biosynthesis of the cell wall and the physiology of the plants. Procedures for qualitative and quantitative analyses based on the definition of cell wall components are reviewed, focusing on the nutritional evaluation of forage grasses by ruminal microorganisms. Genomic and Proteomic Analysis of Microbial Function in the Gastrointestinal Tract of Ruminants - Review - White, Bryan A.;Morrison, Mark 880 Rumen microbiology research has undergone several evolutionary steps: the isolation and nutritional characterization of readily cultivated microbes; followed by the cloning and sequence analysis of individual genes relevant to key digestive processes; through to the use of small subunit ribosomal RNA (SSU rRNA) sequences for a cultivation-independent examination of microbial diversity. Our knowledge of rumen microbiology has expanded as a result, but the translation of this information into productive alterations of ruminal function has been rather limited. For instance, the cloning and characterization of cellulase genes in Escherichia coli has yielded some valuable information about this complex enzyme system in ruminal bacteria. SSU rRNA analyses have also confirmed that a considerable amount of the microbial diversity in the rumen is not represented in existing culture collections. However, we still have little idea of whether the key, and potentially rate-limiting, gene products and (or) microbial interactions have been identified.
Technologies allowing high throughput nucleotide and protein sequence analysis have led to the emergence of two new fields of investigation, genomics and proteomics. Both disciplines can be further subdivided into functional and comparative lines of investigation. The massive accumulation of microbial DNA and protein sequence data, including complete genome sequences, is revolutionizing the way we examine microbial physiology and diversity. We describe here some examples of our use of genomics- and proteomics-based methods, to analyze the cellulase system of Ruminococcus flavefaciens FD-1 and explore the genome of Ruminococcus albus 8. At Illinois, we are using bacterial artificial chromosome (BAC) vectors to create libraries containing large (>75 kbases), contiguous segments of DNA from R. flavefaciens FD-1. Considering that not every bacterium is a candidate for whole genome sequencing, BAC libraries offer an attractive, alternative method to perform physical and functional analyses of a bacterium's genome. Our first plan is to use these BAC clones to determine whether or not cellulases and accessory genes in R. flavefaciens exist in clusters of orthologous genes (COGs). Proteomics is also being used to complement the BAC library/DNA sequencing approach. Proteins differentially expressed in response to carbon source are being identified by 2-D SDS-PAGE, followed by in-gel digests and peptide mass mapping by MALDI-TOF Mass Spectrometry, as well as peptide sequencing by Edman degradation. At Ohio State, we have used a combination of functional proteomics, mutational analysis and differential display RT-PCR to obtain evidence suggesting that in addition to a cellulosome-like mechanism, R. albus 8 possesses other mechanisms for adhesion to plant surfaces.
Genome walking on either side of these differentially expressed transcripts has also resulted in two interesting observations: i) a relatively large number of genes with no matches in the current databases; and ii) the identification of genes with a high level of sequence identity to those identified, until now, in the archaebacteria. Genomics and proteomics will also accelerate our understanding of microbial interactions, and allow a greater degree of in situ analyses in the future. The challenge is to utilize genomics and proteomics to improve our fundamental understanding of microbial physiology, diversity and ecology, and overcome constraints to ruminal function. Increasing the Flow of Protein from Ruminal Fermentation - Review - Wallace, R.J.;Newbold, C.J.;Bequette, B.J.;MacRae, J.C.;Lobley, G.E. 885 This review summarizes some recent research into ways of improving the productivity of ruminal fermentation by increasing protein flow from the rumen and decreasing the breakdown of protein that results from the action of ruminal microorganisms. Proteinases derived from the plant seem to be of importance to the overall process of proteolysis in grazing animals. Thus, altering the expression of proteinases in grasses may be a way of improving their nutritive value for ruminants. Inhibiting rumen microbial activity in ammonia formation remains an important objective: new ways of inhibiting peptide and amino acid breakdown are described. Rumen protozoa cause much of the bacterial protein turnover which occurs in the rumen. The major impact of defaunation on N recycling in the sheep rumen is described. Alternatively, if the efficiency of microbial protein synthesis can be increased by judicious addition of certain individual amino acids, protein flow from ruminal fermentation may be increased. Proline may be a key amino acid for non-cellulolytic bacteria, while phenylalanine is important for cellulolytic species.
Inhibiting rumen wall tissue breakdown appears to be an important mechanism by which the antibiotic, flavomycin, improves N retention in ruminants. A role for Fusobacterium necrophorum seems likely, and alternative methods for its regulation are required, since growth-promoting antibiotics will soon be banned in many countries. Genetics and Molecular Biology in Aquaculture - Review - Lakra, W.S. 894 Genetics has played a pivotal role in increasing world food production through revolutions in the plant and animal sciences. Though attention to fisheries has been inadequate, the growing importance of modern genetic manipulation and biotechnological innovation to aquaculture is now being realized. Recent advances in fish genetics and molecular biology have provided a suite of useful techniques, which have several applications in aquaculture. This paper reviews advances in the application of selection, hybridization, chromosome engineering, sex control, gene transfer and molecular technologies for enhanced aquaculture productivity.
String (computer science)

This article is about the data type. For other uses, see String (disambiguation).

Strings are applied e.g. in bioinformatics to describe DNA strands composed of nitrogenous bases.

In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is generally considered as a data type and is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence (or list) data types and structures. Depending on the programming language and precise data type used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employ dynamic allocation to allow it to hold a variable number of elements. When a string appears literally in source code, it is known as a string literal or an anonymous string.[1] In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols that are chosen from a set called an alphabet.
1 String datatypes
  1.1 String length
  1.2 Character encoding
  1.3 Implementations
  1.4 Representations
    1.4.1 Null-terminated
    1.4.2 Byte- and bit-terminated
    1.4.3 Length-prefixed
    1.4.4 Strings as records
    1.4.5 Other representations
  1.5 Security concerns
2 Literal strings
3 Non-text strings
4 String processing algorithms
5 Character string-oriented languages and utilities
6 Character string functions
7 String buffers
  7.1 In Java
    7.1.1 Theory
    7.1.2 Implications
  7.2 In .NET
  7.3 In other languages
  7.4 String instructions
8 Formal theory
  8.1 Concatenation and substrings
  8.2 Prefixes and suffixes
  8.3 Reversal
  8.4 Rotations
  8.5 Lexicographical ordering
  8.6 String operations
  8.7 Topology

String datatypes

See also: Comparison of programming languages (string functions)

A string datatype is a datatype modeled on the idea of a formal string. Strings are such an important and useful datatype that they are implemented in nearly every programming language. In some languages they are available as primitive types and in others as composite types. The syntax of most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called a literal or string literal.

String length

Although formal strings can have an arbitrary finite length, the length of strings in real languages is often constrained to an artificial maximum. In general, there are two types of string datatypes: fixed-length strings, which have a fixed maximum length to be determined at compile time and which use the same amount of memory whether this maximum is needed or not, and variable-length strings, whose length is not arbitrarily fixed and which can use varying amounts of memory depending on the actual requirements at run time (see Memory management). Most strings in modern programming languages are variable-length strings.
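The fixed-length versus variable-length distinction can be sketched in C. This is an illustrative sketch only: the names fixed_string, var_string and var_string_set are invented for this example and do not correspond to any particular library's API.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define FIXED_MAX 16            /* maximum chosen at compile time */

/* A fixed-length string always occupies FIXED_MAX bytes,
   whether or not the full capacity is used. */
struct fixed_string {
    char data[FIXED_MAX];
};

/* A variable-length string stores its length separately and
   resizes its buffer at run time. */
struct var_string {
    char  *data;                /* dynamically allocated, NUL-terminated */
    size_t length;              /* number of characters, excluding NUL */
};

/* Replace the contents of s with a copy of text, growing the buffer
   as needed.  Returns 0 on success, -1 if allocation failed. */
static int var_string_set(struct var_string *s, const char *text)
{
    size_t n = strlen(text);
    char *p = realloc(s->data, n + 1);
    if (p == NULL)
        return -1;              /* original contents left intact */
    memcpy(p, text, n + 1);     /* copy including the terminator */
    s->data = p;
    s->length = n;
    return 0;
}
```

A var_string initialized to all zeros starts empty (realloc of a null pointer behaves like malloc), so the same call both creates and grows the string.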
Of course, even variable-length strings are limited in length – by the size of available computer memory. The string length can be stored as a separate integer (which may put another artificial limit on the length) or implicitly through a termination character, usually a character value with all bits zero, as in the C programming language. See also "Null-terminated" below.

Character encoding

String datatypes have historically allocated one byte per character, and, although the exact character set varied by region, character encodings were similar enough that programmers could often get away with ignoring this, since characters a program treated specially (such as period and space and comma) were in the same place in all the encodings a program would encounter. These character sets were typically based on ASCII or EBCDIC. If text in one encoding was displayed on a system using a different encoding, text was often mangled, though often somewhat readable, and some computer users learned to read the mangled text. Logographic languages such as Chinese, Japanese, and Korean (known collectively as CJK) need far more than 256 characters (the limit of an encoding using one 8-bit byte per character) for reasonable representation. The normal solutions involved keeping single-byte representations for ASCII and using two-byte representations for CJK ideographs. Use of these with existing code led to problems with matching and cutting of strings, the severity of which depended on how the character encoding was designed. Some encodings such as the EUC family guarantee that a byte value in the ASCII range will represent only that ASCII character, making the encoding safe for systems that use those characters as field separators. Other encodings such as ISO-2022 and Shift-JIS do not make such guarantees, making matching on byte codes unsafe.
These encodings also were not "self-synchronizing", so that locating character boundaries required backing up to the start of a string, and pasting two strings together could result in corruption of the second string. Unicode has simplified the picture somewhat. Most programming languages now have a datatype for Unicode strings. Unicode's preferred byte stream format UTF-8 is designed not to have the problems described above for older multibyte encodings. UTF-8, UTF-16 and UTF-32 require the programmer to know that the fixed-size code units are different from the "characters"; the main difficulty currently is incorrectly designed APIs that attempt to hide this difference (UTF-32 does make code points fixed-sized, but these are not "characters" due to composing codes).

Implementations

Some languages like C++ implement strings as templates that can be used with any datatype, but this is the exception, not the rule. Some languages, such as C++ and Ruby, normally allow the contents of a string to be changed after it has been created; these are termed mutable strings. In other languages, such as Java and Python, the value is fixed and a new string must be created if any alteration is to be made; these are termed immutable strings. Strings are typically implemented as arrays of bytes, characters, or code units, in order to allow fast access to individual units or substrings—including characters when they have a fixed length. A few languages such as Haskell implement them as linked lists instead. Some languages, such as Prolog and Erlang, avoid implementing a dedicated string datatype at all, instead adopting the convention of representing strings as lists of character codes.

Representations

Representations of strings depend heavily on the choice of character repertoire and the method of character encoding. Older string implementations were designed to work with the repertoire and encoding defined by ASCII, or more recent extensions like the ISO 8859 series.
Modern implementations often use the extensive repertoire defined by Unicode along with a variety of complex encodings such as UTF-8 and UTF-16. The term byte string usually indicates a general-purpose string of bytes, rather than strings of only (readable) characters, strings of bits, or such. Byte strings often imply that bytes can take any value and any data can be stored as-is, meaning that there should be no value interpreted as a termination value. Most string implementations are very similar to variable-length arrays with the entries storing the character codes of corresponding characters. The principal difference is that, with certain encodings, a single logical character may take up more than one entry in the array. This happens for example with UTF-8, where single codes (UCS code points) can take anywhere from one to four bytes, and single characters can take an arbitrary number of codes. In these cases, the logical length of the string (number of characters) differs from the physical length of the array (number of bytes in use). UTF-32 avoids the first part of the problem.

Null-terminated

Main article: Null-terminated string

The length of a string can be stored implicitly by using a special terminating character; often this is the null character (NUL), which has all bits zero, a convention used and perpetuated by the popular C programming language.[2] Hence, this representation is commonly referred to as a C string. This representation of an n-character string takes n + 1 space (1 for the terminator), and is thus an implicit data structure. In terminated strings, the terminating code is not an allowable character in any string. Strings with a length field do not have this limitation and can also store arbitrary binary data.
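The null-terminated representation can be made concrete with a short C sketch. The name nul_length is invented for this illustration; the function simply mirrors what the standard strlen does, scanning until the terminator.

```c
#include <assert.h>

/* Minimal sketch: recover the length of a null-terminated string by
   scanning for the NUL terminator, as the standard strlen() does.
   (nul_length is an invented name, for illustration only.) */
static unsigned long nul_length(const char *s)
{
    unsigned long n = 0;
    while (s[n] != '\0')        /* the terminator itself is not counted */
        n++;
    return n;                   /* the string occupies n + 1 bytes */
}
```

Note that the scan is linear in the string length, which is why length-prefixed representations can answer "how long?" more cheaply.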
An example of a null-terminated string stored in a 10-byte buffer, along with its ASCII (or more modern UTF-8) representation as 8-bit hexadecimal numbers, is:

F    R    A    N    K    NUL  k    e    f    w
46₁₆ 52₁₆ 41₁₆ 4E₁₆ 4B₁₆ 00₁₆ 6B₁₆ 65₁₆ 66₁₆ 77₁₆

The length of the string in the above example, "FRANK", is 5 characters, but it occupies 6 bytes. Characters after the terminator do not form part of the representation; they may be either part of other data or just garbage. (Strings of this form are sometimes called ASCIZ strings, after the original assembly language directive used to declare them.)

Byte- and bit-terminated

Using a special byte other than null for terminating strings has historically appeared in both hardware and software, though sometimes with a value that was also a printing character. $ was used by many assembler systems, : used by CDC systems (this character had a value of zero), and the ZX80 used "[3] since this was the string delimiter in its BASIC language. Somewhat similar, "data processing" machines like the IBM 1401 used a special word mark bit to delimit strings at the left, where the operation would start at the right. This bit had to be clear in all other parts of the string. This meant that, while the IBM 1401 had a seven-bit word, almost no-one ever thought to use this as a feature, and override the assignment of the seventh bit to (for example) handle ASCII codes. Early microcomputer software relied upon the fact that ASCII codes do not use the high-order bit, and set it to indicate the end of a string. It must be reset to 0 prior to output.[4]

Length-prefixed

The length of a string can also be stored explicitly, for example by prefixing the string with the length as a byte value. This convention is used in many Pascal dialects; as a consequence, some people call such a string a Pascal string or P-string. Storing the string length as a byte limits the maximum string length to 255.
To avoid such limitations, improved implementations of P-strings use 16-, 32-, or 64-bit words to store the string length. When the length field covers the address space, strings are limited only by the available memory. If the length is bounded, then it can be encoded in constant space, typically a machine word, thus leading to an implicit data structure, taking n + k space, where k is the number of characters in a word (8 for 8-bit ASCII on a 64-bit machine, 1 for 32-bit UTF-32/UCS-4 on a 32-bit machine, etc.). If the length is not bounded, encoding a length n takes log(n) space (see fixed-length code), so length-prefixed strings are a succinct data structure, encoding a string of length n in log(n) + n space. In the latter case, the length-prefix field itself doesn't have a fixed length, therefore the actual string data needs to be moved when the string grows such that the length field needs to be increased. Here is a Pascal string stored in a 10-byte buffer, along with its ASCII / UTF-8 representation:

length F    R    A    N    K    k    e    f    w
05₁₆   46₁₆ 52₁₆ 41₁₆ 4E₁₆ 4B₁₆ 6B₁₆ 65₁₆ 66₁₆ 77₁₆

Strings as records

Many languages, including object-oriented ones, implement strings as records with an internal structure like:

class string {
    char *text;
};

However, since the implementation is usually hidden, the string must be accessed and modified through member functions. text is a pointer to a dynamically allocated memory area, which might be expanded as needed. See also string (C++).

Other representations

Both character termination and length codes limit strings:
- For example, C character arrays that contain null (NUL) characters cannot be handled directly by C string library functions.
- Strings using a length code are limited to the maximum value of the length code.

Both of these limitations can be overcome by clever programming.
It is possible to create data structures, and functions that manipulate them, that do not have the problems associated with character termination and can in principle overcome length code bounds. It is also possible to optimize the string represented using techniques from run length encoding (replacing repeated characters by the character value and a length) and Hamming encoding[clarification needed]. While these representations are common, others are possible. Using ropes makes certain string operations, such as insertions, deletions, and concatenations more efficient. The core data structure in a text editor is the one that manages the string (sequence of characters) that represents the current state of the file being edited. While that state could be stored in a single long consecutive array of characters, a typical text editor instead uses an alternative representation as its sequence data structure—a gap buffer, a linked list of lines, a piece table, or a rope—which makes certain string operations, such as insertions, deletions, and undoing previous edits, more efficient.[5]

Security concerns

The differing memory layout and storage requirements of strings can affect the security of the program accessing the string data. String representations requiring a terminating character are commonly susceptible to buffer overflow problems if the terminating character is not present, caused by a coding error or an attacker deliberately altering the data. String representations adopting a separate length field are also susceptible if the length can be manipulated. In such cases, program code accessing the string data requires bounds checking to ensure that it does not inadvertently access or change data outside of the string memory limits. String data is frequently obtained from user input to a program. As such, it is the responsibility of the program to validate the string to ensure that it represents the expected format.
Performing limited or no validation of user input can cause a program to be vulnerable to code injection attacks.

Literal strings

See also: String literal

Sometimes, strings need to be embedded inside a text file that is both human-readable and intended for consumption by a machine. This is needed in, for example, source code of programming languages, or in configuration files. In this case, the NUL character doesn't work well as a terminator since it is normally invisible (non-printable) and is difficult to input via a keyboard. Storing the string length would also be inconvenient as manual computation and tracking of the length is tedious and error-prone. Two common representations are:

Surrounded by quotation marks (ASCII 0x22 double quote or ASCII 0x27 single quote), used by most programming languages. To be able to include special characters such as the quotation mark itself, newline characters, or non-printable characters, escape sequences are often available, usually prefixed with the backslash character (ASCII 0x5C).

Terminated by a newline sequence, for example in Windows INI files.

Non-text strings

While character strings are very common uses of strings, a string in computer science may refer generically to any sequence of homogeneously typed data. A bit string or byte string, for example, may be used to represent non-textual binary data retrieved from a communications medium. This data may or may not be represented by a string-specific datatype, depending on the needs of the application, the desire of the programmer, and the capabilities of the programming language being used. If the programming language's string implementation is not 8-bit clean, data corruption may ensue.

C programmers draw a sharp distinction between a "string", also called a "string of characters", which by definition is always null-terminated, and a "byte string" or "pseudo string", which may be stored in the same kind of array but is often not null-terminated.
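A small Python illustration of the distinction (the c_strlen helper is our stand-in for NUL-terminated handling):

```python
def c_strlen(data):
    """Length as a NUL-terminated C string function would report it:
    count bytes until the first 0 byte."""
    n = 0
    while n < len(data) and data[n] != 0:
        n += 1
    return n

payload = b"ab\x00cd"     # five bytes of binary data with an embedded NUL
print(len(payload))       # 5 -- the true length of the byte string
print(c_strlen(payload))  # 2 -- what NUL-terminated handling would see
```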
Using C string handling functions on such a "byte string" often seems to work, but later leads to security problems.[6][7][8]

String processing algorithms

There are many algorithms for processing strings, each with various trade-offs. Competing algorithms can be analyzed with respect to run time, storage requirements, and so forth. Some categories of algorithms include:

String searching algorithms for finding a given substring or pattern
String manipulation algorithms
Regular expression algorithms
Parsing a string
Sequence mining

Advanced string algorithms often employ complex mechanisms and data structures, among them suffix trees and finite-state machines. The name stringology was coined in 1984 by computer scientist Zvi Galil for the study of algorithms and data structures used for string processing.[9]

Character string-oriented languages and utilities

Character strings are such a useful datatype that several languages have been designed in order to make string processing applications easy to write. Examples include SNOBOL. Many Unix utilities perform simple string manipulations and can be used to easily program some powerful string processing algorithms. Files and finite streams may be viewed as strings. Some APIs like Multimedia Control Interface, embedded SQL or printf use strings to hold commands that will be interpreted. Recent scripting programming languages, including Perl, Python, Ruby, and Tcl, employ regular expressions to facilitate text operations. Perl is particularly noted for its regular expression use,[10] and many other languages and applications implement Perl-compatible regular expressions. Some languages such as Perl and Ruby support string interpolation, which permits arbitrary expressions to be evaluated and included in string literals.
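A brief Python sketch of the two features just mentioned, a regular-expression text operation and string interpolation (the example strings are ours):

```python
import re

# A regular-expression text operation, as in the scripting languages above
text = "strings, ropes and suffix trees"
words = re.findall(r"[a-z]+", text)   # ['strings', 'ropes', 'and', 'suffix', 'trees']

# String interpolation: arbitrary expressions evaluated inside a literal
summary = f"{len(words)} words, longest: {max(words, key=len)}"
print(summary)  # 5 words, longest: strings
```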
Character string functions

String functions are used to manipulate a string or change or edit the contents of a string. They are also used to query information about a string. They are usually used within the context of a computer programming language.

The most basic example of a string function is the string length function – the function that returns the length of a string (not counting any terminator characters or any of the string's internal structural information) and does not modify the string. This function is often named length or len. For example, length("hello world") would return 11.

String buffers

In some programming languages, a string buffer is an alternative to a string. It has the ability to be altered through adding or appending, whereas a string is normally fixed or immutable.

In Java

Theory

Java's standard way to handle text is to use its String class. Any given String in Java is an immutable object, which means its state cannot be changed. A String has an array of characters. Whenever a String must be manipulated, any changes require the creation of a new String (which, in turn, involves the creation of a new array of characters and copying of the original array). This happens even if the original String's value or intermediate Strings used for the manipulation are not kept.

Java provides two alternate classes for string manipulation, called StringBuffer and StringBuilder. Both of these, like String, hold an array of characters. They, however, are mutable (their state can be altered). Their array of characters is not necessarily completely filled (as opposed to a String, whose array is always the exact required length for its contents). Thus, a StringBuffer or StringBuilder can add, remove, or change its state without creating a new object (and without the creation of a new array and array copying).
The exception to this is when its array is no longer of suitable length to hold its content (a case which rarely happens because of the default dynamic memory allocation provided by the JVM). In this case, it is required to create a new array and copy the contents.

For these reasons, Java would handle an expression like

String newString = aString + anInt + aChar + aDouble;

by compiling it to

String newString = (new StringBuilder(aString)).append(anInt).append(aChar).append(aDouble).toString();

Implications

Generally, a StringBuffer is more efficient than a String in string handling. However, this is not necessarily the case, since a StringBuffer will be required to recreate its character array when it runs out of space. In theory, this can happen the same number of times as a new String would be required, although it is unlikely (and the programmer can provide length hints to prevent this). Either way, the effect is not noticeable on modern desktop computers.

As well, the shortcomings of arrays are inherent in a StringBuffer. In order to insert or remove characters at arbitrary positions, whole sections of arrays must be moved. What makes a StringBuffer attractive in an environment with low processing power comes at the price of extra memory use, which is likely also at a premium in such an environment. This point, however, is trivial considering the space required for creating many instances of Strings in order to process them. As well, a StringBuffer can be optimized to "waste" as little memory as possible.

The StringBuilder class, introduced in J2SE 5.0, differs from StringBuffer in that it is unsynchronized. When only a single thread at a time will access the object, using a StringBuilder processes more efficiently than using a StringBuffer. StringBuffer and StringBuilder are included in the java.lang package.

In .NET

Microsoft's .NET Framework has a StringBuilder class in its Base Class Library.
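A toy Python sketch of the idea behind such buffer classes, not Java's actual implementation: appends mutate a backing array instead of building a new immutable string each time (class and method names are ours):

```python
class StringBuilderSketch:
    """Toy mutable string buffer in the spirit of Java's StringBuilder:
    appends extend an internal character array; the full string is only
    materialized once, at the end."""

    def __init__(self, initial=""):
        self.chars = list(initial)   # backing array; a real StringBuilder
                                     # grows its char[] when it fills up

    def append(self, value):
        self.chars.extend(str(value))
        return self                  # returning self allows chained appends

    def __str__(self):
        return "".join(self.chars)

s = StringBuilderSketch("a=").append(1).append(",").append(2.5)
print(str(s))  # a=1,2.5
```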
In other languages

In C++ and Ruby, the standard string class is already mutable, with the ability to change the contents and append strings, etc., so a separate mutable string class is unnecessary.

In Objective-C (Cocoa/OpenStep frameworks), the NSMutableString class is the mutable version of the NSString class.

String instructions

Some microprocessors' instruction set architectures contain direct support for string operations, such as block copy (e.g., REPNZ MOVSB in Intel x86).[11]

Formal theory

See also: Tuple

Let Σ be a non-empty finite set of symbols (alternatively called characters), called the alphabet. No assumption is made about the nature of the symbols. A string (or word) over Σ is any finite sequence of symbols from Σ.[12] For example, if Σ = {0, 1}, then 01011 is a string over Σ.

The length of a string s is the number of symbols in s (the length of the sequence) and can be any non-negative integer; it is often denoted as |s|. The empty string is the unique string over Σ of length 0, and is denoted ε or λ.[12][13]

The set of all strings over Σ of length n is denoted Σ^n. For example, if Σ = {0, 1}, then Σ^2 = {00, 01, 10, 11}. Note that Σ^0 = {ε} for any alphabet Σ.

The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σ^n,

Σ* = ⋃_{n ∈ ℕ ∪ {0}} Σ^n

For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, ...}. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length.

A set of strings over Σ (i.e. any subset of Σ*) is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros, {ε, 1, 00, 11, 001, 010, 100, 111, 0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111, ...}, is a formal language over Σ.

Concatenation and substrings

Concatenation is an important binary operation on Σ*.
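The sets Σ^n and Σ* can be enumerated directly; a short Python sketch (the alphabet and helper name are ours):

```python
from itertools import product

sigma = ("0", "1")                  # the alphabet, k = 2 symbols

def strings_of_length(n):
    """Sigma^n: all strings over the alphabet of length exactly n."""
    return {"".join(p) for p in product(sigma, repeat=n)}

print(sorted(strings_of_length(2)))   # ['00', '01', '10', '11']
print(strings_of_length(0))           # {''} -- only the empty string
# |Sigma^n| = k^n, so Sigma* is a countable union of finite sets
assert all(len(strings_of_length(n)) == len(sigma) ** n for n in range(6))
```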
For any two strings s and t in Σ*, their concatenation is defined as the sequence of symbols in s followed by the sequence of symbols in t, and is denoted st. For example, if Σ = {a, b, ..., z}, s = bear, and t = hug, then st = bearhug and ts = hugbear.

String concatenation is an associative, but non-commutative operation. The empty string ε serves as the identity element; for any string s, εs = sε = s. Therefore, the set Σ* together with the concatenation operation forms a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers, that is, a function L: Σ* → ℕ ∪ {0} such that L(st) = L(s) + L(t) for all s, t ∈ Σ*.

A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv. The relation "is a substring of" defines a partial order on Σ*, the least element of which is the empty string.

Prefixes and suffixes

A string s is said to be a prefix of t if there exists a string u such that t = su. If u is nonempty, s is said to be a proper prefix of t. Symmetrically, a string s is said to be a suffix of t if there exists a string u such that t = us. If u is nonempty, s is said to be a proper suffix of t. Suffixes and prefixes are substrings of t. Both the relations "is a prefix of" and "is a suffix of" are prefix orders.

Reversal

The reverse of a string is a string with the same symbols but in reverse order. For example, if s = abc (where a, b, and c are symbols of the alphabet), then the reverse of s is cba. A string that is the reverse of itself (e.g., s = madam) is called a palindrome; palindromes include the empty string and all strings of length 1.

Rotations

A string s = uv is said to be a rotation of t if t = vu.
For example, if Σ = {0, 1}, the string 0011001 is a rotation of 0100110, where u = 00110 and v = 01. As another example, the string abc has three different rotations, viz. abc itself (with u = abc, v = ε), bca (with u = bc, v = a), and cab (with u = c, v = ab).

Lexicographical ordering

It is often useful to define an ordering on a set of strings. If the alphabet Σ has a total order (cf. alphabetical order), one can define a total order on Σ* called lexicographical order. For example, if Σ = {0, 1} and 0 < 1, then the lexicographical order on Σ* includes the relationships ε < 0 < 00 < 000 < ... < 0001 < 001 < 01 < 010 < 011 < 0110 < 01111 < 1 < 10 < 100 < 101 < 111 < 1111 < 11111 < ... The lexicographical order is total if the alphabetical order is, but it isn't well-founded for any nontrivial alphabet, even if the alphabetical order is. See shortlex for an alternative string ordering that preserves well-foundedness.

String operations

A number of additional operations on strings commonly occur in the formal theory. These are given in the article on string operations.

Topology

[Figure: (hyper)cube of binary strings of length 3]

Strings admit the following interpretation as nodes on a graph, where k is the number of symbols in Σ:

Fixed-length strings of length n can be viewed as the integer locations in an n-dimensional hypercube with sides of length k−1.
Variable-length strings (of finite length) can be viewed as nodes on a perfect k-ary tree.
Infinite strings (otherwise not considered here) can be viewed as infinite paths on a k-node complete graph.

The natural topology on the set of fixed-length strings or variable-length strings is the discrete topology, but the natural topology on the set of infinite strings is the limit topology, viewing the set of infinite strings as the inverse limit of the sets of finite strings. This is the construction used for the p-adic numbers and some constructions of the Cantor set, and it yields the same topology.
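A short Python sketch of the rotations and orderings discussed above (helper names and sample words are ours); the doubling trick works because s = uv occurs inside (vu)(vu):

```python
def is_rotation(s, t):
    """s = uv is a rotation of t = vu exactly when s occurs in t doubled."""
    return len(s) == len(t) and s in t + t

assert is_rotation("0011001", "0100110")          # the example above
rotations = {"abc"[i:] + "abc"[:i] for i in range(3)}
print(sorted(rotations))                          # ['abc', 'bca', 'cab']

# Lexicographic order vs shortlex over {0, 1} with 0 < 1: shortlex
# compares lengths first, which makes the ordering well-founded.
words = ["1", "00", "0", "010", ""]
print(sorted(words))                              # ['', '0', '00', '010', '1']
print(sorted(words, key=lambda w: (len(w), w)))   # ['', '0', '1', '00', '010']
```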
Isomorphisms between string representations of topologies can be found by normalizing according to the lexicographically minimal string rotation.

See also

Binary-safe – a property of string-manipulating functions treating their input as raw data stream
Bit array – a string of binary digits
C string handling – overview of C string handling
C++ string handling – overview of C++ string handling
Comparison of programming languages (string functions)
Connection string – passed to a driver to initiate a connection (e.g., to a database)
Empty string – its properties and representation in programming languages
Incompressible string – a string that cannot be compressed by any algorithm
Rope (data structure) – a data structure for efficiently manipulating long strings
String metric – notions of similarity between strings

References

^ "Introduction To Java - MFC 158 G". Archived from the original on 2016-03-03. "String literals (or constants) are called 'anonymous strings'."
^ Bryant, Randal E.; O'Hallaron, David (2003). Computer Systems: A Programmer's Perspective. Upper Saddle River, NJ: Pearson Education, p. 40. ISBN 0-13-034074-X. Archived from the original on 2007-08-06.
^ Wearmouth, Geoff. "An Assembly Listing of the ROM of the Sinclair ZX80". Archived from the original on August 15, 2015.
^ Allison, Dennis. "Design Notes for Tiny BASIC". Archived from the original on 2017-04-10.
^ Charles Crowley. "Data Structures for Text Sequences", section "Introduction". Archived 2016-03-04 at the Wayback Machine.
^ "strlcpy and strlcat – consistent, safe, string copy and concatenation." Archived 2016-03-13 at the Wayback Machine.
^ "A rant about strcpy, strncpy and strlcpy." Archived 2016-02-29 at the Wayback Machine.
^ Keith Thompson. "No, strncpy() is not a 'safer' strcpy()". 2012.
^ "The Prague Stringology Club". stringology.org. Archived from the original on 1 June 2015.
Retrieved 23 May 2015.
^ "Essential Perl". Archived from the original on 2012-04-21. "Perl's most famous strength is in string manipulation with regular expressions."
^ "x86 string instructions". Archived from the original on 2015-03-27.
^ a b Partee, Barbara H.; ter Meulen, Alice; Wall, Robert E. (1990). Mathematical Methods in Linguistics. Kluwer.
^ Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. ISBN 0-201-02988-X. Here: sect. 1.1, p. 1.

Retrieved from "https://en.wikipedia.org/w/index.php?title=String_(computer_science)&oldid=936111002"
Compact Lie Groups and Their Representations
D. P. Želobenko
Translations of Mathematical Monographs, Volume 40; 1973; 448 pp
MSC: Primary 22; Secondary 17
Softcover ISBN: 978-0-8218-1590-8 (Product Code: MMONO/40)
Electronic ISBN: 978-1-4704-4455-6 (Product Code: MMONO/40.E)

The contents of this volume are somewhat different from the traditional connotations of the title. First, the author, bearing in mind the needs of the physicist, has tried to make the exposition as elementary as possible. The need for an elementary exposition has influenced the distribution of the material; the book is divided into three largely independent parts, arranged in order of increasing difficulty. Besides compact Lie groups, groups with other topological structure ("similar" to compact groups in some sense) are considered. Prominent among these are reductive complex Lie groups (including semisimple groups), obtained from compact Lie groups by analytic continuation, and also their real forms (reductive real Lie groups). The theory of finite-dimensional representation for these classes of groups is developed, striving whenever possible to emphasize the "compact origin" of these representations, i.e. their analytic relationship to representations of compact Lie groups. Also studied are infinite-dimensional representations of semisimple complex Lie algebras. Some aspects of the theory of infinite-dimensional representations of Lie groups are presented in a brief survey.

Table of contents:
Topological groups. Lie groups
Linear groups
Fundamental problems of representation theory
Compact Lie groups. Global theorem
The infinitesimal method in representation theory
Analytic continuation
Irreducible representations of the group $\mathrm {U}(n)$
Tensors and Young diagrams
Casimir operators
Indicator systems and the Gel′fand-Cetlin basis
Tensor product of two irreducible representations of $\mathrm {U}(n)$
Basic types of Lie algebras and Lie groups
Classification of compact and reductive Lie algebras
Compact Lie groups in the large
Description of irreducible finite-dimensional representations
Infinitesimal theory (characters, weights, Casimir operators)
Some problems of spectral analysis for finite-dimensional representations
July 2018, 78:580

Noncommutative 3-colour scalar quantum field theory model in 2D

Alexander Hock, Raimar Wulkenhaar

We introduce the 3-colour noncommutative quantum field theory model in two dimensions. For this model we prove a generalised Ward–Takahashi identity, which is special to coloured noncommutative QFT models. It reduces to the usual Ward–Takahashi identity in a particular case. The Ward–Takahashi identity is used to simplify the Schwinger–Dyson equations for the 2-point function and the N-point function. The absence of any renormalisation conditions in the large \((\mathcal {N},V)\)-limit in 2D leads to a recursive integral equation for the 2-point function, which we solve perturbatively to sixth order in the coupling constant.

Consider a hexagonal lattice with three different coloured links, where at each vertex all three links carry different colours. The mathematical problem of counting the number of colourings of a lattice with N vertices was solved by Baxter [1]. A generalisation of this so-called 3-colour model as a Hermitian matrix model problem was introduced by Eynard and Kristjansen [2] and solved by Kostov [3]. Eynard and Kristjansen reduced the partition function (without external fields) to an integral over eigenvalues, which could be solved by saddle-point techniques.

Graphs in \(\mathcal {N}{\times }\mathcal {N}\)-matrix models are ribbon graphs on a Riemann surface. These ribbon graphs are dual to the triangulations of the corresponding surface. The large-\(\mathcal {N}\) limit is dominated by planar graphs, corresponding to triangulations of the sphere. Two-dimensional quantum gravity can be formulated as a counting problem for triangulations of random surfaces, which leads to the connection between 2D quantum gravity and random matrices [4, 5].
Moreover, it was proved by Kontsevich [6] that the solution of an action of the form \(\mathrm {tr}(E\cdot M^2+\frac{\mathrm {i}}{6}M^3)\) with Hermitian matrices M and external matrix E can be mapped to a Hermitian matrix model with arbitrary potential. On the other hand the Kontsevich model proves Witten's conjecture about intersection numbers of stable cohomology classes on the moduli space of curves [7]. One particularly elegant solution technique reduced the partition function to an integral over the eigenvalues \(x_i\) and observed that these integrals are unchanged under diffeomorphisms of \(x_i\) generated by \(x_i^{n+1}\frac{d}{dx_i}\). The corresponding Virasoro constraints all descend from a master constraint which was solved by Makeenko and Semenoff [8].

Matrix models gained renewed interest in a non-perturbative approach to quantum field theories on Moyal-Weyl deformed noncommutative space [9, 10]. These approaches use the matrix basis of the Moyal space and add a harmonic oscillator potential to the Laplacian [11]. The most established noncommutative quantum field theory is the \(\Phi ^4\)-model [12], which is a candidate for an exactly solvable quantum field theory in 4D due to its vanishing \(\beta \)-function at all orders [13]. Recently all boundary sectors of the noncommutative \(\Phi ^3\)-model in \(\{2,4,6\}\) dimensions were solved exactly in the large \((\mathcal {N},V)\)-limit [14, 15].

In this paper we will study the noncommutative 3-colour model as a quantum field theoretical model. Roughly speaking, it is the model solved by Kostov with an additional external dynamical field E of linearly-spaced eigenvalues. Although it shares some graph topologies with the noncommutative \(\Phi ^3\)-model [14], it has more similarities to the \(\Phi ^4\)-model [12] due to the absent 1-point function. For the large \((\mathcal {N},V)\)-limit a closed integral equation for the 2-point function will be derived.
In the two-dimensional case the first-order loop correction has no UV divergence so that this 2D noncommutative 3-colour model needs no renormalisation in this limit. The closed integral equation defines a recursion formula for its perturbative expansion. Absence of any renormalisation makes this recursion easy. We are able to determine perturbatively the 2-point function up to the sixth order in the coupling constant. The action of the noncommutative 3-colour model for real scalar fields \(\phi ^a\) with colour \(a\in \{1,2,3\}\) is given by $$\begin{aligned} S[\phi ]&:=\frac{1}{8 \pi }\int _{\mathbb {R}^2} dx \left( \frac{1}{2}\sum _{a=1}^3\phi ^a\left( -\Delta +\Vert 2\Theta ^{-1}x\Vert ^2+\mu ^2\right) \phi ^a\right. \nonumber \\&\quad \left. + \frac{\lambda '}{3} \sum _{a,b,c=1}^3 \sigma _{abc} \phi ^a\star \phi ^b\star \phi ^c\right) (x), \end{aligned}$$ where \(\sigma _{abc}=1\) for \(a\ne b\ne c\ne a\) and \(\sigma _{abc}=0\) else. Here \(\lambda '\in \mathbb {R}\) is the coupling constant, \(\mu ^2\) the mass squared and \(\Delta \) the Laplacian, where independence of any colour is assumed. The Laplacian has non-compact resolvent, therefore the harmonic oscillator potential is added to achieve compactness. The Moyal \(\star \)-product is defined by $$\begin{aligned} (f\star g)(x)&=\int _{\mathbb {R}^2\times \mathbb {R}^2} \frac{dy \,dk}{(2\pi )^2}f\left( x+\tfrac{1}{2}\Theta k\right) \\&\quad \times g\big (x+y\big ) e^{\mathrm {i}\langle k,y\rangle },\quad f,g\in \mathcal {S}(\mathbb {R}^2), \end{aligned}$$ where \(\Theta \) is a \(2\times 2\) skew-symmetric matrix with \(\Theta _{12}=-\Theta _{21}=:4 V>0\). The vector space \(\mathcal {S}(\mathbb {R}^2)\) equipped with the Moyal \(\star \)-product has a matrix basis \(f_{nm}(x)\), which is the 2-dimensional harmonic oscillator basis (independent of the harmonic oscillator term in (1.1), although equality of both frequencies reduces a tensor product to a matrix product). 
The formulation of the action in the matrix basis is obtained from the expansion $$\begin{aligned} \phi ^a(x)=\sum _{n,m=0}^\infty \phi ^a_{nm}f_{nm}(x),\quad \text {with}\quad x\in \mathbb {R}^2. \end{aligned}$$ The matrix basis satisfies [16] $$\begin{aligned} (f_{nm}\star f_{kl})(x)=\delta _{mk}f_{nl}(x),\quad \int _{\mathbb {R}^2}dx f_{nm}(x)=8\pi V\delta _{nm}. \end{aligned}$$ Accordingly, the action in the matrix basis with a UV cut-off \(\mathcal {N}\) is given by $$\begin{aligned} S[\phi ]&=V\left( \sum _{a=1}^3\sum _{n,m=0}^\mathcal {N}\frac{H_{nm}}{2}\phi ^a_{nm}\phi ^a_{mn}\right. \nonumber \\&\quad \left. +\frac{\lambda '}{3} \sum _{a,b,c=1}^3\sum _{n,m,l=0}^\mathcal {N}\sigma _{abc}\phi ^a_{nm} \phi ^b_{ml}\phi ^c_{ln}\right) \nonumber \\ H_{nm}&:=E_n+E_m,\quad E_m=\frac{m}{V}+\frac{\mu ^2}{2}, \end{aligned}$$ where \((\phi ^a_{nm})\) are Hermitian matrices. The linear and discrete dependence of \(E_m\) reflects the eigenvalue spectrum of the quantum-mechanical harmonic oscillator.

2 Graph computation

As a perturbative theory the noncommutative 3-colour model can be expanded by graphs. In the large \((\mathcal {N},V)\)-limit taken later, all functions of genus \(>0\) are suppressed, which is a well-known behaviour for matrix models. Therefore we will consider only planar ribbon graphs, but we admit a non-trivial boundary structure for which we next give an alternative description. Let \(\Gamma \) be a planar ribbon graph on \(S^2\) consisting of vertices, edges and faces subject to the following conditions:

It has two different kinds of vertices: black (internal) and white (external) vertices. The external vertex is also called boundary component. The number of white vertices is \(B\ge 1\), and each white vertex has the valence \(N_\beta \) for \(\beta \in \{1,\ldots ,B\}\).
The edges have one of three different colours; they separate two faces.
The black vertices are of valence three. At a black vertex all three colours must occur once.
We require that every face has at most one external vertex, in which case it is called external. If the \(\beta ^{\mathrm {th}}\) external vertex has valence \(N_\beta \) it is a corner of \(N_\beta \) external faces which are labelled by positive numbers \(p_1^\beta ,\ldots ,p_{N_\beta }^\beta \). Let \(a_i^\beta \) be the colour of the edge which ends at the \(\beta ^{\mathrm {th}}\) external vertex and separates the faces labelled by \(p_i^\beta \) and \(p_{i+1}^\beta \), where \(i\in \{1,\ldots ,N_\beta \}\) and \(N_\beta +1\equiv 1\). Faces without an external vertex are called internal and are labelled by \(q_1,\ldots ,q_L\).
To every white vertex a weight 1 is associated and to every black vertex a weight \(\lambda \). An edge is weighted by \(\frac{1}{1+z_1+z_2}\) if \(z_1\) and \(z_2\) are the labels of the two faces separated by the edge.

To a graph \(\Gamma \) we associate the function \(\tilde{G}^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}}(\Gamma )\) given by multiplication of all weights of vertices and edges of \(\Gamma \), with integration over the labels \(q_1, \ldots ,q_L\) of all internal faces from 0 to \(\infty \).

We consider two graphs \(\Gamma ,\Gamma '\) as equivalent, \(\Gamma \sim \Gamma '\), if they are topologically the same and have the same labels \( p_1^1, \ldots ,p^1_{N_1}, \ldots ,p_1^B, \ldots ,p^B_{N_B}\) and \(a_1^1, \ldots ,a_{N_1}^1, \ldots ,a_1^B, \ldots ,a^B_{N_B} \), but different assignment of colours at internal edges. Such graphs \(\Gamma \sim \Gamma '\) have the same amplitude \(\tilde{G}^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}} (\Gamma )=\tilde{G}^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}}(\Gamma ')\). We denote by \(s(\Gamma ):=|[\Gamma ] |\) the number of graphs equivalent to \(\Gamma \).
For a fixed number B of external vertices of valences \(N_1, \ldots ,N_B\), but arbitrary number of internal vertices, one can ask whether the sum over all possible planar graphs converges for sufficiently small \(|\lambda |\). This sum can formally be defined over all equivalence classes by $$\begin{aligned} G^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \cdots a^B_{N_B}}_{p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}}: =&\sum _{[\Gamma ]\in \mathcal {G}^{\mathbf {a}}_{\mathbf {p}}}s(\Gamma )\tilde{G}^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}}(\Gamma ), \end{aligned}$$ where \(\mathcal {G}^{\mathbf {a}}_\mathbf {p}\) is the set of equivalence classes of all planar graphs with B external vertices, external edges of colour \(a_1^\beta , \ldots ,a_{N_\beta }^\beta \) and external faces labelled by \(p_1^\beta , \ldots ,p^\beta _{N_\beta }\) for all \( \beta \in \{1, \ldots ,B\}\).

This sum can clearly be rearranged as a series in \(\lambda \) $$\begin{aligned} G^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}}=: \sum _{n=0}^\infty \lambda ^n G^{\quad a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{n,\,p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}}. \end{aligned}$$

The equivalence class corresponding to the second example contains two elements, so we obtain for \(a=a^1_1=a^1_2\) $$\begin{aligned} G^{\,\,\,\,\,\,\,aa}_{2,\, p_1p_2}=\frac{2\log \left( \frac{1+p_1}{1+p_2}\right) }{(1+p_1+p_2)^2(p_1-p_2)}. \end{aligned}$$

3 Partition function and correlation function

In the following we demonstrate the techniques to determine correlation functions from the partition function.
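The amplitude (2.3) can be cross-checked numerically: the graph has two edges separating the external faces \(p_1\), \(p_2\) (each weighted \(1/(1+p_1+p_2)\)) and two edges bordering the internal face q, which is integrated from 0 to \(\infty \). A pure-Python sketch using Simpson quadrature under the substitution \(q=u/(1-u)\) (function names and setting \(\lambda =1\) are our choices):

```python
from math import log

def amplitude_numeric(p1, p2, n=2000):
    """Order-lambda^2 planar 2-point amplitude: integrate the internal face
    label q over [0, infinity).  After q = u/(1-u), the integrand becomes a
    smooth rational function on [0, 1], handled by composite Simpson."""
    c = 1.0 / (1.0 + p1 + p2) ** 2        # the two edges separating p1 and p2

    def g(u):   # transformed integrand: (1 + p_i + q)(1 - u) = (1 + p_i)(1 - u) + u
        return c / (((1 + p1) * (1 - u) + u) * ((1 + p2) * (1 - u) + u))

    h = 1.0 / n                           # n must be even for Simpson's rule
    total = g(0.0) + g(1.0) + sum((4 if i % 2 else 2) * g(i * h)
                                  for i in range(1, n))
    return 2 * h * total / 3              # factor 2 = size s(Gamma) of the class

def amplitude_closed(p1, p2):
    return 2 * log((1 + p1) / (1 + p2)) / ((1 + p1 + p2) ** 2 * (p1 - p2))

print(abs(amplitude_numeric(1.0, 3.0) - amplitude_closed(1.0, 3.0)) < 1e-8)  # True
```

With \(\lambda \) restored, the two black vertices contribute \(\lambda ^2\), matching the order-2 coefficient in the series above.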
The partition function \(\mathcal {Z}[J]\) of the noncommutative 3-colour model with external Hermitian matrices \(\left( J^a_{nm}\right) \) and \(a\in \{1,2,3\}\) is formally defined by $$\begin{aligned} \mathcal {Z}[J]:=&\int \left( \prod _{a=1}^{3}\mathcal {D}\phi ^a\right) \nonumber \\&\times \exp \left( -S[\phi ]+V\sum _{a=1}^{3}\sum _{n,m=0}^\mathcal {N}J^a_{nm}\phi ^a_{mn}\right) \nonumber \\ =&K\,\exp \left( -\frac{\lambda '}{3V^2}\sum _{a,b,c=1}^3\right. \nonumber \\&\left. \sum _{n,m,l=0}^\mathcal {N}\sigma _{abc}\frac{\partial ^3}{\partial J^a_{nm}\partial J^b_{ml}\partial J^c_{ln}} \right) \mathcal {Z}_{free}[J], \end{aligned}$$ $$\begin{aligned} \mathcal {Z}_{free}[J]:=&\exp \left( \sum _{a=1}^3\sum _{n,m=0}^\mathcal {N}\frac{V}{2H_{nm}}J^a_{nm}J^a_{mn}\right) ,\nonumber \\ K:=&\int \left( \prod _{a=1}^{3} \mathcal {D}\phi ^a\right) \nonumber \\&\times \exp \left( -\sum _{a=1}^3\sum _{n,m=0}^\mathcal {N}\frac{VH_{nm}}{2}\phi ^a_{nm}\phi ^a_{mn}\right) . \end{aligned}$$ The logarithm of \(\mathcal {Z}[J]\) will be expanded into a series of moments with different number B of boundary components. These moments are called correlation functions, which not necessarily correspond to planar graphs. The sources are cyclic within every boundary \(\beta \in \{1,\ldots ,B\}\). For simplification we use the notation \(\mathbb {J}_{p^\beta _1 \ldots p^\beta _{N_\beta }}^{a^\beta _1 \ldots a^\beta _{N_\beta }}:=\prod _{i=1}^{N_\beta } J_{p_i^\beta p_{i+1}^\beta }^{a_i^\beta }\) with \(N_\beta +1\equiv 1\). 
The correlation functions are then defined by $$\begin{aligned} \log \frac{\mathcal {Z}[J]}{\mathcal {Z}[0]}&=:\sum _{B=1}^\infty \sum _{1\le N_1\le \cdots \le N_B}^\infty \sum _{p_1^1, \ldots ,p_{N_B}^B=0}^\mathcal {N}\nonumber \\&\times \sum _{a_1^1, \ldots ,a_{N_B}^B=1}^{3}V^{2-B}\frac{G^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{|p_1^1 \ldots p^1_{N_1}| \cdots |p_1^B \ldots p^B_{N_B}|}}{S_{(N_1, \ldots ,N_B)}} \prod _{\beta =1}^B\frac{\mathbb {J}_{p^\beta _1 \ldots p^\beta _{N_\beta }}^{a^\beta _1 \ldots a^\beta _{N_\beta }}}{N_\beta }. \end{aligned}$$ If we regroup identical numbers \(N_\beta \) by \((N_1, \ldots ,N_B)=(\underbrace{N_1', \ldots ,N_1'}_{\nu _1}, \ldots ,\underbrace{N_s', \ldots ,N_s'}_{\nu _s})\), the symmetry factor is then defined by \(S_{(N_1, \ldots ,N_B)}:=\prod \nolimits _{i=1}^{s}\nu _i!\), due to the symmetry of boundaries with the same valence. The expansion coefficients \(G^{a_1^1 \ldots a^1_{N_1}| \cdots |a_1^B \ldots a^B_{N_B}}_{|p_1^1 \ldots p^1_{N_1}| \ldots |p_1^B \ldots p^B_{N_B}|}\) are called \((N_1+ \cdots +N_B)\)-point functions. For the time being, the factor \(V^{2-B}\) in (3.3) is a matter of convention; it could be absorbed in G. However, as known from [14] precisely this convention leads to equations for the \((N_1+ \cdots +N_B)\)-point functions which all have a well-defined large \((\mathcal {N},V)\)-limit. 
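How moments are extracted from \(\log \mathcal {Z}[J]\) can be illustrated in a one-mode scalar analogue, where V and H below are symbolic placeholders rather than the matrix data of the model: the second source derivative of \(\log \mathcal {Z}_{free}\) at \(J=0\) reproduces the free propagator \(1/H\), matching the leading term of the 2-point function.

```python
import sympy as sp

V, H, J = sp.symbols('V H J', positive=True)

# One-mode analogue of Z_free[J] = exp(V/(2H) J^2)
Z_free = sp.exp(V/(2*H) * J**2)

# (1/V) d^2/dJ^2 log Z_free at J = 0  ->  free 2-point function 1/H
G_free = (sp.diff(sp.log(Z_free), J, 2) / V).subs(J, 0)
assert sp.simplify(G_free - 1/H) == 0
```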
Due to the vanishing 1-point function for the 3-colour model, the partition function can be expanded with (3.3) to $$\begin{aligned} \frac{\mathcal {Z}[J]}{\mathcal {Z}[0]}&=1+ \sum _{a,b=1}^3\sum _{n,m=0}^\mathcal {N}\left( \frac{V}{2}G^{ab}_{|nm|}\mathbb {J}^{ab}_{nm}+ \frac{1}{2}G^{a|b}_{|n|m|}\mathbb {J}^a_{n}\mathbb {J}^b_{m}\right) \nonumber \\&\quad +\sum _{a,b,c=1}^3\sum _{n,m,l=0}^\mathcal {N}\Bigg (\frac{V}{3}G^{abc}_{|nml|}\mathbb {J}^{abc}_{nml} +\frac{1}{2}G^{a|bc}_{|n|ml|}\mathbb {J}^a_{n}\mathbb {J}^{bc}_{ml}\nonumber \\&\quad +\frac{1}{6V}G^{a|b|c}_{|n|m|l|}\mathbb {J}^a_{n} \mathbb {J}^b_{m}\mathbb {J}^c_{l}\Bigg )\nonumber \\&\quad +\sum _{a,b,c,d=1}^3\sum _{n,m,l,p=0}^\mathcal {N}\Bigg (\frac{V}{4}G^{abcd}_{|nmlp|}\mathbb {J}^{abcd}_{nmlp} +\frac{1}{3}G^{a|bcd}_{|n|mlp|}\mathbb {J}^a_{n}\mathbb {J}^{bcd}_{mlp}\nonumber \\&\quad +\left( \frac{1}{8}G^{ab|cd}_{|nm|lp|}+\frac{V^2}{8}G^{ab}_{|nm|}G^{cd}_{|lp|}\right) \mathbb {J}^{ab}_{nm}\mathbb {J}^{cd}_{lp}\nonumber \\&\quad +\left( \frac{1}{4V}G^{a|b|cd}_{|n|m|lp|}+\frac{V}{4}G^{a|b}_{|n|m|}G^{cd}_{|lp|}\right) \mathbb {J}^a_{n} \mathbb {J}^b_{m}\mathbb {J}^{cd}_{lp}\nonumber \\&\quad +\left( \frac{1}{24V^2}G^{a|b|c|d}_{|n|m|l|p|}+\frac{1}{8}G^{a|b}_{|n|m|}G^{c|d}_{|l|p|}\right) \mathbb {J}^a_{n} \mathbb {J}^b_{m}\mathbb {J}^c_{l}\mathbb {J}^d_{p}\Bigg )+\cdots \quad . \end{aligned}$$ The calculation rule for later purpose is $$\begin{aligned} \frac{\partial }{\partial J^a_{p_1p_2}}J^b_{p_3p_4}=\delta _{ab}\delta _{p_1p_3}\delta _{p_2p_4}+J^b_{p_3p_4}\frac{\partial }{\partial J^a_{p_1p_2}}. \end{aligned}$$ 4 Ward–Takahashi identity The Ward–Takahashi identity is obtained by the requirement of invariance of \(\mathcal {Z}[J]\) under inner automorphisms. For a colour model we choose a transformation as follows: \(\phi ^a \mapsto (\phi ^a)'=U^\dagger \phi ^a U\) for \(U\in \mathrm {U}(\mathcal {N})\) for one colour \(a\in \{1,2,3\}\). 
The Ward–Takahashi identity following from this transformation [12, 13] for \(p_1\ne p_2\) is given by $$\begin{aligned}&\sum _{m=0}^\mathcal {N}\frac{\partial ^2}{\partial J^a_{p_1m}\partial J^a_{mp_2}}\mathcal {Z}[J]+\frac{V}{E_{p_1}-E_{p_2}}\nonumber \\&\qquad \times \sum _{m=0}^\mathcal {N}\left( J^a_{p_2m}\frac{\partial }{\partial J^a_{p_1m}}-J^a_{mp_1}\frac{\partial }{\partial J^a_{mp_2}}\right) \mathcal {Z}[J]\nonumber \\&\quad =\frac{\lambda '}{V(E_{p_1}-E_{p_2})}\sum _{m,n=0}^\mathcal {N}\nonumber \\&\qquad \times \! \sum _{b,c=1}^3\sigma _{abc}\left( \frac{\partial ^3}{\partial J^a_{p_1m}\partial J^b_{mn}\partial J^c_{np_2}} \!-\!\frac{\partial ^3}{\partial J^b_{p_1m}\partial J^c_{mn}\partial J^a_{np_2}} \right) \!\mathcal {Z}[J]. \end{aligned}$$ The interaction terms are certainly not invariant under the transformation of only one colour. However, the sum over all colours in (4.1) gives $$\begin{aligned}&\sum _{a=1}^3\sum _{m=0}^\mathcal {N}\frac{\partial ^2}{\partial J^a_{p_1m}\partial J^a_{mp_2}}\mathcal {Z}[J]= \frac{V}{(E_{p_1}-E_{p_2})}\sum _{a=1}^3\nonumber \\&\quad \times \sum _{m=0}^\mathcal {N}\left( J^a_{mp_1}\frac{\partial }{\partial J^a_{mp_2}}-J^a_{p_2m}\frac{\partial }{\partial J^a_{p_1m}}\right) \mathcal {Z}[J], \end{aligned}$$ which has the usual form of a Ward–Takahashi identity. Equation (4.2) shows that the interaction term is invariant under the simultaneous transformation of all three colours. A crucial rôle is played by a more general identity. Let \(p_1\ne p_2\).
The generalised Ward–Takahashi identity for the 3-colour matrix model with an external field E is $$\begin{aligned}&\sum _{m=0}^\mathcal {N}\frac{\partial ^2}{\partial J^a_{p_1m}\partial J^b_{mp_2}}\mathcal {Z}[J]+\frac{V}{E_{p_1}-E_{p_2}}\\&\qquad \times \sum _{m=0}^\mathcal {N}\left( J^b_{p_2m}\frac{\partial }{\partial J^a_{p_1m}}-J^a_{mp_1}\frac{\partial }{\partial J^b_{mp_2}}\right) \mathcal {Z}[J]\\&\quad =\frac{\lambda '}{V(E_{p_1}-E_{p_2})}\sum _{m,n=0}^\mathcal {N}\sum _{c,d=1}^3\left( \sigma _{bcd}\frac{\partial ^3}{\partial J^a_{p_1m}\partial J^c_{mn}\partial J^d_{np_2}}\right. \\&\qquad \left. -\sigma _{acd}\frac{\partial ^3}{\partial J^c_{p_1m}\partial J^d_{mn}\partial J^b_{np_2}} \right) \mathcal {Z}[J]. \end{aligned}$$ Let \(S_{int}[\phi ]= V\frac{\lambda '}{3} \sum \nolimits _{a,b,c=1}^3\sum \nolimits _{n,m,l=0}^\mathcal {N}\sigma _{abc}\phi ^a_{nm} \phi ^b_{ml}\phi ^c_{ln}\) be the interaction term of the action. Direct computation gives then $$\begin{aligned}&\frac{E_{p_1}-E_{p_2}}{V}\sum _{m=0}^\mathcal {N}\frac{\partial ^2}{\partial J^a_{p_1m}\partial J^b_{mp_2}}\mathcal {Z}[J]\\&\quad =\frac{1}{V}\sum _{m=0}^\mathcal {N}\frac{\partial ^2}{\partial J^a_{p_1m}\partial J^b_{mp_2}}\left( (E_{p_1}\!+\!E_m)-(E_m\!+\!E_{p_2})\right) \mathcal {Z}[J]\\&\quad =K\sum _{m=0}^\mathcal {N}\Bigg \{\frac{\partial }{\partial J^b_{mp_2}}\exp \left( -S_{int}\left[ \frac{1}{V}\frac{\partial }{\partial J}\right] \right) J^a_{mp_1}\\&\qquad - \frac{\partial }{\partial J^a_{p_1m}}\exp \left( -S_{int}\left[ \frac{1}{V}\frac{\partial }{\partial J}\right] \right) J^b_{p_2m}\Bigg \}\mathcal {Z}_{free}[J]\\&\quad =\sum _{m=0}^\mathcal {N}\left( J^a_{mp_1}\frac{\partial }{\partial J^b_{mp_2}}-J^b_{p_2m}\frac{\partial }{\partial J^a_{p_1m}}\right) \mathcal {Z}[J]\\&\qquad -\frac{\lambda '}{V^2}\sum _{m,n=0}^\mathcal {N}\sum _{c,d=1}^3\left( \sigma _{acd}\frac{\partial ^3}{\partial J^c_{p_1n}\partial J^d_{nm}\partial J^b_{mp_2}}\right. \\&\qquad \left. 
- \sigma _{bcd}\frac{\partial ^3}{\partial J^a_{p_1m}\partial J^c_{mn}\partial J^d_{np_2}}\right) \mathcal {Z}[J]. \end{aligned}$$ We have used the second form of \(\mathcal {Z}[J]\) in (3.1) and the Leibniz rule in the last step. Technically one expands the exponential function and resums after using the Leibniz rule. Since \(E_{p_1}\ne E_{p_2}\), the proof is finished. \(\square \) Equation (4.1) is a special case of Proposition 1, obtained by setting \(b=a\). The derivations of the two identities are, however, completely different. Proposition 1 cannot be obtained by a symmetry transformation of only one colour, due to the discrete mixing of the colours if \(a\ne b\). Applying the procedure of the proof of Proposition 1, it is also possible to derive the usual Ward–Takahashi identity in other models. For later use we combine the two identities into a more useful expression: Lemma 1 Let a be fixed and \(p_1\ne p_2\); then it follows $$\begin{aligned}&\sum _{b,c=1}^{3}\sum _{m=0}^\mathcal {N}\sigma _{abc}\frac{\partial ^2}{\partial J^b_{p_1m}\partial J^c_{mp_2}}\mathcal {Z}[J] \\&\quad =\frac{V}{E_{p_1}-E_{p_2}}\left[ \sum _{b,c=1}^{3}\sigma _{abc} \sum _{m=0}^\mathcal {N}\left( J^b_{mp_1}\frac{\partial }{\partial J^c_{mp_2}}-J^c_{p_2m}\frac{\partial }{\partial J^b_{p_1m}}\right) \right. \\&\qquad +\frac{\lambda '}{V^2}\sum _{b=1}^3\left\{ \sum _{m=0}^\mathcal {N}\left( \frac{\partial ^3}{\partial J^b_{p_1m}\partial J^b_{mp_1}\partial J^a_{p_1p_2}}-\frac{\partial ^3}{\partial J^a_{p_1p_2}\partial J^b_{p_2m}\partial J^b_{mp_2}}\right) \right. \\&\qquad +\sum _{\begin{array}{c} m,n=0 \\ n\ne p_1 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_1}-E_n}\frac{\partial }{\partial J^a_{np_2}}\left( J^b_{mp_1}\frac{\partial }{\partial J^b_{mn}}-J^b_{nm}\frac{\partial }{\partial J^b_{p_1m}}\right) \\&\left. \left.
\qquad -\sum _{\begin{array}{c} m,n=0 \\ n\ne p_2 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_2}-E_{n}}\frac{\partial }{\partial J^a_{p_1n}}\left( J^b_{p_2m}\frac{\partial }{\partial J^b_{nm}} -J^b_{mn}\frac{\partial }{\partial J^b_{mp_2}}\right) \right\} \right] \mathcal {Z}[J]. \end{aligned}$$ Inserting Proposition 1 for the LHS yields $$\begin{aligned}&\sum _{b,c=1}^{3}\sum _{m=0}^\mathcal {N}\sigma _{abc}\frac{\partial ^2}{\partial J^b_{p_1m}\partial J^c_{mp_2}}\mathcal {Z}[J]\nonumber \\&\quad =\frac{V}{E_{p_1}-E_{p_2}}\sum _{b,c=1}^{3}\sigma _{abc} \sum _{m=0}^\mathcal {N}\left( J^b_{mp_1}\frac{\partial }{\partial J^c_{mp_2}}-J^c_{p_2m}\frac{\partial }{\partial J^b_{p_1m}}\right) \mathcal {Z}[J]\nonumber \\&\qquad +\!\frac{\lambda '}{V(E_{p_1}\!\!-\!E_{p_2})}\sum _{m,n=0}^\mathcal {N}\sum _{b,c,d,e=1}^3\sigma _{abc} \left( \sigma _{cde}\frac{\partial ^3}{\partial J^b_{p_1m}\partial J^d_{mn}\partial J^e_{np_2}}\right. \nonumber \\&\qquad \left. -\sigma _{bde}\frac{\partial ^3}{\partial J^d_{p_1m}\partial J^e_{mn}\partial J^c_{np_2}} \right) \mathcal {Z}[J]. \end{aligned}$$ Under the sums over the colours b, c, d, e, the product of two \(\sigma \)'s with one common index reduces to $$\begin{aligned} \sigma _{abc}\sigma _{cde}=\,\,&\sigma _{abc}(\delta _{ad}\delta _{be}+\delta _{ae}\delta _{bd})\\ \sigma _{abc}\sigma _{bde}=\,\,&\sigma _{abc}(\delta _{ad}\delta _{ce}+\delta _{ae}\delta _{cd}). \end{aligned}$$ Therefore, the last line in (4.3) gives $$\begin{aligned}&\frac{\lambda '}{V(E_{p_1}-E_{p_2})}\sum _{m,n=0}^\mathcal {N}\sum _{b,c=1}^3\nonumber \\&\qquad \times \sigma _{abc}\left( \frac{\partial ^3}{\partial J^b_{p_1m}\partial J^a_{mn}\partial J^b_{np_2}}+\frac{\partial ^3}{\partial J^b_{p_1m}\partial J^b_{mn}\partial J^a_{np_2}}\right. \nonumber \\&\left. \qquad -\frac{\partial ^3}{\partial J^a_{p_1m}\partial J^c_{mn}\partial J^c_{np_2}}-\frac{\partial ^3}{\partial J^c_{p_1m}\partial J^a_{mn}\partial J^c_{np_2}}\right) \mathcal {Z}[J].
\end{aligned}$$ The first and the last term in parentheses vanish because of the total symmetry of \(\sigma _{abc}\). Adding \(0=\) \(\left( \frac{\partial ^3}{\partial J^a_{p_1m}\partial J^a_{mn}\partial J^a_{np_2}}-\frac{\partial ^3}{\partial J^a_{p_1m}\partial J^a_{mn}\partial J^a_{np_2}}\right) \mathcal {Z}[J]\) and renaming the indices, (4.4) can be rewritten as $$\begin{aligned} \frac{\lambda '}{V(E_{p_1}-E_{p_2})}\sum _{m,n=0}^\mathcal {N}\sum _{b=1}^3\left( \frac{\partial ^3}{\partial J^b_{p_1m}\partial J^b_{mn}\partial J^a_{np_2}}-\frac{\partial ^3}{\partial J^a_{p_1m}\partial J^b_{mn}\partial J^b_{np_2}}\right) \mathcal {Z}[J]. \end{aligned}$$ Inserting (4.2) for \(n\ne p_1\) in the first term and for \(m\ne p_2\) in the second term finally gives, after renaming the indices, $$\begin{aligned}&\frac{\lambda '}{V(E_{p_1}-E_{p_2})}\sum _{b=1}^3 \left\{ \sum _{m=0}^\mathcal {N}\left( \frac{\partial ^3}{\partial J^b_{p_1m}\partial J^b_{mp_1}\partial J^a_{p_1p_2}}-\frac{\partial ^3}{\partial J^a_{p_1p_2}\partial J^b_{p_2m}\partial J^b_{mp_2}}\right) \right. \nonumber \\&\quad +\sum _{\begin{array}{c} m,n=0 \\ n\ne p_1 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_1}-E_n}\frac{\partial }{\partial J^a_{np_2}}\left( J^b_{mp_1}\frac{\partial }{\partial J^b_{mn}}-J^b_{nm}\frac{\partial }{\partial J^b_{p_1m}}\right) \nonumber \\&\left. \quad -\sum _{\begin{array}{c} m,n=0 \\ n\ne p_2 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_2}-E_{n}}\frac{\partial }{\partial J^a_{p_1n}}\left( J^b_{p_2m}\frac{\partial }{\partial J^b_{nm}}-J^b_{mn}\frac{\partial }{\partial J^b_{mp_2}}\right) \right\} \mathcal {Z}[J]. \end{aligned}$$ The identity follows by combining (4.3) and (4.5). \(\square \) 5 Schwinger–Dyson equations for \(B=1\) 5.1 For matrix basis In this section we derive the Schwinger–Dyson equations with the help of the Ward–Takahashi identity.
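Before proceeding, the \(\sigma \)-contraction rules used in the proof of Lemma 1 can be verified by brute force. The sketch below assumes the realisation \(\sigma _{abc}=|\epsilon _{abc}|\), i.e. totally symmetric and equal to 1 exactly when all three colours differ; this is an assumption on our part, chosen because it is consistent with the total symmetry and the contraction identities in the text:

```python
from itertools import product

# Assumed realisation: sigma_abc = |epsilon_abc|, i.e. 1 iff {a,b,c} = {1,2,3}
def sigma(a, b, c):
    return 1 if {a, b, c} == {1, 2, 3} else 0

def delta(i, j):
    return 1 if i == j else 0

# sigma_abc sigma_cde = sigma_abc (delta_ad delta_be + delta_ae delta_bd)
ok1 = all(
    sigma(a, b, c)*sigma(c, d, e)
    == sigma(a, b, c)*(delta(a, d)*delta(b, e) + delta(a, e)*delta(b, d))
    for a, b, c, d, e in product(range(1, 4), repeat=5))

# sigma_abc sigma_bde = sigma_abc (delta_ad delta_ce + delta_ae delta_cd)
ok2 = all(
    sigma(a, b, c)*sigma(b, d, e)
    == sigma(a, b, c)*(delta(a, d)*delta(c, e) + delta(a, e)*delta(c, d))
    for a, b, c, d, e in product(range(1, 4), repeat=5))

assert ok1 and ok2
```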
The Schwinger–Dyson equation for the 2-point function in the 3-colour matrix model with an external field E is for \(p_1\ne p_2\) given by $$\begin{aligned} G^{aa}_{|p_1p_2|}&=\frac{1}{H_{p_1p_2}}+\frac{\lambda '^2}{(E^2_{p_1}-E^2_{p_2})V}\\&\qquad \times \Bigg [\sum _{m=0}^\mathcal {N}\sum _{b=1}^3\left( G^{aa}_{|p_1p_2|}\left( G^{bb}_{|p_2m|} -G^{bb}_{|p_1m|}\right) \right. \\&\qquad \left. +\frac{1}{V}\left( G^{aabb}_{|p_2p_1p_2m|}-G^{aabb}_{|p_1p_2p_1m|}\right) \right) \\&\qquad +\sum _{b=1}^3\frac{1}{V^2}\bigg (\sum _{m=0}^\mathcal {N}\left( G^{aa|bb}_{|p_2p_1|p_2m|}-G^{aa|bb}_{|p_1p_2|p_1m|}\right) \\&\qquad +\left( G^{b|baa}_{|p_2|p_2p_2p_1|}-G^{b|baa}_{|p_1|p_1p_1p_2|}\right) \bigg )\\&\qquad +\sum _{b=1}^3\left( \frac{1}{V^3}\left( G^{b|b|aa}_{|p_2|p_2|p_2p_1|}-G^{b|b|aa}_{|p_1|p_1|p_1p_2|}\right) \right. \\&\qquad \left. +\frac{1}{V}G^{aa}_{|p_1p_2|}\left( G^{b|b}_{|p_2|p_2|}-G^{b|b}_{|p_1|p_1|}\right) \right) \\&\qquad +\!\!\sum _{\begin{array}{c} m=0\\ m\ne p_2 \end{array}}^\mathcal {N}\frac{G^{aa}_{|p_1m|}\!-\!G^{aa}_{|p_1p_2|}}{E_{p_2}-E_m}-\!\! \sum _{\begin{array}{c} m=0\\ m\ne p_1 \end{array}}^\mathcal {N} \frac{G^{aa}_{|p_1p_2|}\!-\!G^{aa}_{|p_2m|}}{E_{m}-E_{p_1}}\\&\qquad +\frac{1}{V}\frac{G^{a|a}_{|p_1|p_1|}-2G^{a|a}_{|p_1|p_2|}+G^{a|a}_{|p_2|p_2|}}{E_{p_2}-E_{p_1}}\Bigg ]. \end{aligned}$$ Assuming \(p_1\ne p_2\) the 2-point function is given via definition (3.3) and expansion (3.4). 
Using (3.1) leads to $$\begin{aligned} G^{aa}_{|p_1p_2|}&=\frac{1}{V}\frac{\partial ^2}{\partial J^a_{p_1p_2}\partial J^a_{p_2p_1}}\mathrm {log}\mathcal {Z}[J]\Big \vert _{J=0}\\&=\frac{1}{V\mathcal {Z}[0]}\frac{\partial ^2}{\partial J^{a}_{p_1p_2}\partial J^{a}_{p_2p_1}}\mathcal {Z}[J]\Big |_{J=0}\\&=\frac{K}{H_{p_1p_2}\mathcal {Z}[0]}\frac{\partial }{\partial J^{a}_{p_2p_1}}\\&\quad \times \exp \left( -S_{int}\left[ \frac{1}{V}\frac{\partial }{\partial J}\right] \right) J^{a}_{p_2p_1}\mathcal {Z}_{free}[J]\Big |_{J=0}\\&=\frac{1}{H_{p_1p_2}}-\frac{\lambda ' }{H_{p_1p_2}\mathcal {Z}[0]V^2}\frac{\partial }{\partial J^{a}_{p_2p_1}} \sum _{b,c=1}^3\\&\quad \times \sum _{m=0}^\mathcal {N}\sigma _{abc}\frac{\partial ^2}{\partial J^b_{p_1m}\partial J^c_{mp_2}}\mathcal {Z}[J]\Big |_{J=0}. \end{aligned}$$ Inserting the expansion of (3.4) would give the Schwinger–Dyson equation between the 2-point and 3-point functions. At first sight the application of Lemma 1 seems to make the equation more complicated. However, it yields a better behaviour in the large (\(\mathcal {N},V\))-limit. The first term on the RHS of the equation of Lemma 1 vanishes after setting J to zero. We therefore obtain an expression in which \(H_{p_1p_2}(E_{p_1}-E_{p_2})=(E^2_{p_1}-E^2_{p_2})\) has been used, together with the fact that in the last two lines only colour a survives. Taking \(p_1\ne p_2\) into account and setting \(J=0\), the Leibniz rule gives $$\begin{aligned} =&\frac{1}{H_{p_1p_2}}-\frac{\lambda '^2 }{(E^2_{p_1}-E^2_{p_2})\mathcal {Z}[0]V^3}\\&\quad \times \left\{ \sum _{b=1}^3\sum _{m=0}^\mathcal {N}\left( \frac{\partial ^4}{\partial J^b_{p_1m}\partial J^b_{mp_1}\partial J^a_{p_1p_2}\partial J^a_{p_2p_1}}\right. \right. \\&\qquad \left.
- \frac{\partial ^4}{\partial J^a_{p_2p_1}\partial J^a_{p_1p_2}\partial J^b_{p_2m}\partial J^b_{mp_2}}\right) \mathcal {Z}[J]\Big |_{J=0}\\&\qquad +\sum _{\begin{array}{c} m=0 \\ m\ne p_1 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_1}-E_m} \left( \frac{\partial ^2}{\partial J^a_{mp_2}\partial J^a_{p_2m}}\right. \\&\qquad \left. - \frac{\partial ^2}{\partial J^a_{p_1p_2}\partial J^a_{p_2p_1}}\right) \mathcal {Z}[J]\Big |_{J=0}\\&\qquad -\sum _{\begin{array}{c} m=0 \\ m\ne p_2 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_2}-E_{m}} \left( \frac{\partial ^2}{\partial J^a_{mp_1}\partial J^a_{p_1m}}\right. \\&\qquad \left. - \frac{\partial ^2}{\partial J^a_{p_1p_2}\partial J^a_{p_2p_1}}\right) \mathcal {Z}[J]\Big |_{J=0}\\&\qquad + \frac{V}{E_{p_2}-E_{p_1}}\left( \frac{\partial ^2}{\partial J^a_{p_2p_2}\partial J^a_{p_1p_1}}\right. \\&\qquad \left. \left. - \frac{\partial ^2}{\partial J^a_{p_2p_2}\partial J^a_{p_1p_1}}\right) \mathcal {Z}[J]\Big |_{J=0}\right\} . \end{aligned}$$ The first line generates for \(m\ne p_1\) and \(m\ne p_2\) either a 4-point function with one boundary or two 2-point functions with one boundary, respectively. Functions with higher boundaries \(B\ge 2\) appear in the case \(m=p_1\) or \(m=p_2\). All terms are found by comparing with the expansion (3.4). \(\square \) We emphasise that correlation functions of genus \(g\ge 1\) are also included in Proposition 2. To see this one has to expand the correlation functions in a genus expansion. More information can be found in [12]. The Schwinger–Dyson equation of the 2-point function depends on \(\lambda '^2\), since graphs exist only with an even number of vertices. Let \(N\ge 3\).
The Schwinger–Dyson equation for the N-point function in the 3-colour matrix model with an external field E is for pairwise different \(p_i, p_j\) given by $$\begin{aligned} G^{a_1 \ldots a_N}_{|p_1 \ldots p_N|}&=-\frac{\lambda ' }{(E^2_{p_1}-E^2_{p_2})}\sum _{b=1}^3\left( \sigma _{a_1a_Nb}G^{a_2 \ldots a_{N-1}b}_{|p_2 \ldots p_{N-1}p_N|} -\sigma _{a_1a_2b}G^{ba_3a_4 \ldots a_{N}}_{|p_1p_3p_4 \ldots p_N|}\right) \\&\qquad -\frac{\lambda '^2 }{V^2(E^2_{p_1}-E^2_{p_2})}\\&\quad \times \Bigg \{V\Bigg (\sum _{\begin{array}{c} m=0\\ m\ne p_1 \end{array}}^\mathcal {N}\frac{G^{a_1a_2 \ldots a_N}_{|mp_2 \ldots p_N|}-G^{a_1a_2 \ldots a_N}_{|p_1p_2 \ldots p_N|}}{E_{p_1}-E_m}\\&\qquad -\sum _{\begin{array}{c} m=0\\ m\ne p_2 \end{array}}^\mathcal {N}\frac{G^{a_1a_2a_3 \ldots a_N}_{|p_1mp_3 \ldots p_N|}-G^{a_1a_2 \ldots a_N}_{|p_1p_2 \ldots p_N|}}{E_{p_2}-E_m}\Bigg )\\&\qquad +\sum _{k=2}^{N}\Bigg (\frac{G^{a_1a_2 \ldots a_{k-1}|a_ka_{k+1} \ldots a_N}_{|p_kp_2 \ldots p_{k-1}|p_kp_{k+1} \ldots p_N|}-G^{a_1a_2 \ldots a_{k-1}|a_ka_{k+1} \ldots a_N}_{|p_kp_2 \ldots p_{k-1}|p_1p_{k+1} \ldots p_N|}}{E_{p_1}-E_{p_k}}\\&\qquad -\frac{G^{a_2a_3 \ldots a_{k}|a_1a_{k+1} \ldots a_N}_{|p_{k+1}p_3 \ldots p_{k}|p_1p_{k+1} \ldots p_N|}- G^{a_2 \ldots a_k|a_1a_{k+1} \ldots a_N}_{|p_2 \ldots p_k|p_1p_{k+1} \ldots p_N|}}{E_{p_2}-E_{p_{k+1}}}\Bigg )\\&\qquad +\sum _{k=3}^{N-1}V^2\Bigg (G^{a_1a_2 \ldots a_{k-1}}_{|p_kp_2 \ldots p_{k-1}|}\frac{G^{a_k \ldots a_N}_{|p_k \ldots p_N|} -G^{a_ka_{k+1} \ldots a_N}_{|p_1p_{k+1} \ldots p_N|}}{E_{p_1}-E_{p_k}}\\&\qquad -G^{a_1a_{k+1} \ldots a_N}_{|p_1p_{k+1} \ldots p_N|}\frac{G^{a_2a_3 \ldots a_{k}}_{|p_{k+1}p_3 \ldots p_{k}|} -G^{a_2 \ldots a_k}_{|p_2 \ldots p_k|}}{E_{p_2}-E_{p_{k+1}}}\Bigg )\\&\qquad +\sum _{b=1}^3\sum _{m=0}^\mathcal {N}\bigg (G^{bba_1 \ldots a_N}_{|p_1mp_1 \ldots p_N|}-G^{a_1bba_2 \ldots a_N}_{|p_1p_2mp_2 \ldots p_N|}\\&\qquad +\frac{1}{V} \left( G^{bb|a_1 \ldots a_N}_{|p_1m|p_1 \ldots p_N|}-G^{bb|a_1 \ldots a_N}_{|p_2m|p_1 \ldots 
p_N|}\right) \\&\qquad +VG^{a_1 \ldots a_N}_{|p_1 \ldots p_N|}\left( G^{bb}_{|p_1m|}-G^{bb}_{|p_2m|}\right) \bigg )\\&\qquad +\sum _{b=1}^3\sum _{k=2}^N\bigg (\frac{1}{V}\left( G^{ba_1 \ldots a_{k-1}|ba_k \ldots a_N}_{|p_kp_1 \ldots p_{k-1}|p_1p_k \ldots p_N|}\right. \\&\qquad \left. -G^{ba_2 \ldots a_{k}|ba_{k+1} \ldots a_Na_1}_{|p_{k+1}p_2 \ldots p_{k}|p_2p_{k+1} \ldots p_Np_1|}\right) \\&\qquad +V\left( G^{ba_1 \ldots a_{k-1}}_{|p_kp_1 \ldots p_{k-1}|}G^{ba_k \ldots a_N}_{|p_1p_k \ldots p_N|}\right. \\&\qquad \left. -G^{ba_2 \ldots a_{k}}_{|p_{k+1}p_2 \ldots p_{k}|}G^{ba_{k+1} \ldots a_Na_1}_{|p_2p_{k+1} \ldots p_Np_1|}\right) \bigg )\\&\qquad +\sum _{b=1}^3\bigg (\frac{1}{V^2}\left( G^{b|b|a_1 \ldots a_N}_{|p_1|p_1|p_1 \ldots p_N|}-G^{b|b|a_1 \ldots a_N}_{|p_2|p_2|p_1 \ldots p_N|}\right) \\&\qquad + \frac{1}{V}\left( G^{b|ba_1 \ldots a_N}_{|p_1|p_1p_1 \ldots p_N}-G^{b|ba_2 \ldots a_Na_1}_{|p_2|p_2p_2 \ldots p_Np_1}\right) \\&\qquad +G^{a_1 \ldots a_N}_{|p_1 \ldots p_N|}\left( G^{b|b}_{|p_1|p_1|}-G^{b|b}_{|p_2|p_2|}\right) \bigg ) \Bigg \}, \end{aligned}$$ where \(p_{N+1}\equiv p_1\). We use the definition of the N-point function for pairwise different \(p_i,p_j\). With the expression of the partition function (3.1), we obtain $$\begin{aligned} G^{a_1 \ldots a_N}_{|p_1 \ldots p_N|}&=\frac{1}{V}\frac{\partial ^N}{\partial J^{a_1}_{p_1p_2} \ldots J^{a_N}_{p_Np_1}}\frac{\mathcal {Z}[J]}{\mathcal {Z}[0]}\bigg |_{J=0}\\&=-\frac{\lambda ' }{H_{p_1p_2}V^2\mathcal {Z}[0]}\frac{\partial ^{N-1}}{\partial J^{a_2}_{p_2p_3} \ldots J^{a_N}_{p_Np_1}} \sum _{b,c=1}^3\sigma _{a_1bc}\\&\quad \times \sum _{n=0}^\mathcal {N}\frac{\partial ^{2}}{\partial J^{b}_{p_1n}J^{c}_{np_2}}\mathcal {Z}[J]\bigg |_{J=0}. 
\end{aligned}$$ Here the first derivative \(\frac{\partial }{\partial J^{a_1}_{p_1p_2}}\) applied to \(\mathcal {Z}_{free}[J]\) yields \(\frac{V}{H_{p_1p_2}} J^{a_1}_{p_2p_1}\), which can only be differentiated by the interaction in \(\mathcal {Z}[J]\) because the momenta \(p_i\) are pairwise different. Applying Lemma 1 yields $$\begin{aligned}&=-\frac{\lambda ' }{(E^2_{p_1}-E^2_{p_2})V\mathcal {Z}[0]}\frac{\partial ^{N-1}}{\partial J^{a_2}_{p_2p_3} \ldots J^{a_N}_{p_Np_1}}\nonumber \\&\quad \times \Bigg [\sum _{b,c=1}^{3}\sigma _{a_1bc} \sum _{m=0}^\mathcal {N}\left( J^b_{mp_1}\frac{\partial }{\partial J^c_{mp_2}}-J^c_{p_2m}\frac{\partial }{\partial J^b_{p_1m}}\right) \end{aligned}$$ (5.1a) $$\begin{aligned}&\qquad +\frac{\lambda '}{V^2}\sum _{b=1}^3\Bigg \{\sum _{m=0}^\mathcal {N}\left( \frac{\partial ^3}{\partial J^b_{p_1m}\partial J^b_{mp_1}\partial J^{a_1}_{p_1p_2}}\right. \nonumber \\&\qquad \left. -\frac{\partial ^3}{\partial J^{a_1}_{p_1p_2}\partial J^b_{p_2m}\partial J^b_{mp_2}}\right) \end{aligned}$$ (5.1b) $$\begin{aligned}&\qquad +\sum _{\begin{array}{c} m,n=0 \\ n\ne p_1 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_1}-E_n}\frac{\partial }{\partial J^{a_1}_{np_2}}\left( J^b_{mp_1}\frac{\partial }{\partial J^b_{mn}}-J^b_{nm}\frac{\partial }{\partial J^b_{p_1m}}\right) \end{aligned}$$ (5.1c) $$\begin{aligned}&\qquad -\sum _{\begin{array}{c} m,n=0 \\ n\ne p_2 \end{array}}^{\mathcal {N}}\frac{V}{E_{p_2}-E_{n}}\frac{\partial }{\partial J^{a_1}_{p_1n}} \left( J^b_{p_2m}\frac{\partial }{\partial J^b_{nm}}\right. \nonumber \\&\quad \left. -J^b_{mn}\frac{\partial }{\partial J^b_{mp_2}}\right) \Bigg \}\Bigg ]\mathcal {Z}[J]\bigg |_{J=0}. \end{aligned}$$ (5.1d) The first term of (5.1a) contributes only for \(b=a_N\) and \(m=p_N\), and the second term only for \(c=a_2\) and \(m=p_3\). This generates the term proportional to \(\lambda '\).
Line (5.1b) produces, for arbitrary m, three different types of terms: \((2+N)\)-point functions with \(B=1\), products of 2-point with N-point functions, and \((2+N)\)-point functions with \(B=2\). If in (5.1b) \(m=p_k\) for the first term with \(2\le k\le N\) (for the second term with \(3\le k\le N\) or \(k=1\)), additionally \((k+(N+2-k))\)-point functions with \(B=2\) and products of k-point with \((N+2-k)\)-point functions with \(B=1\) are generated. In the case \(m=p_1\) for the left term (\(m=p_2\) for the right term), (5.1b) produces either \((1+1+N)\)-point functions with \(B=3\), \((1+(1+N))\)-point functions with \(B=2\), or products of \((1+1)\)-point with N-point functions. Finally, we look at (5.1c) and (5.1d) together. The first terms again contribute only for \(b=a_N\) and \(m=p_N\) in (5.1c) or for \(b=a_2\) and \(m=p_3\) in (5.1d). Since the sum over n survives, N-point functions arise. If \(n=p_k\) for \(k\ne 1\) in (5.1c) and for \(k\ne 2\) in (5.1d), one obtains either \((k+(N-k))\)-point functions or products of k-point functions with \((N-k)\)-point functions with \(B=1\). For the second term in (5.1c) and (5.1d), each derivative has to be taken into account. If the derivative in front of the brackets in (5.1c) and (5.1d) acts on \(J^b_{nm}\) or \(J^b_{mn}\), the sum over n survives again and has a prefactor depending on \(E_n\), but no n appears in the N-point function. If any other derivative \(\frac{\partial }{\partial J^{a_{k+1}}_{p_{k+1} p_{k+2}}}\), for some \(k\ge 1\), acts on the second term, then n, m, b are fixed, and it produces N-point functions, \((k+(N-k))\)-point functions with \(B=2\), and products of k-point with \((N-k)\)-point functions. Collecting everything and making use of (3.3) to get the correct prefactor in V, one finds all the terms appearing in Proposition 3.
\(\square \) The first term shows that an \((N-1)\)-point function only contributes for different adjacent colours, because of \(\sigma _{a_1a_Nb}\) and \(\sigma _{a_1a_2b}\). This fits perfectly with the loop expansion. Furthermore, the 2-point function plays a special rôle, since the sum over m only appears for the N-point and 2-point functions, even in the large \((\mathcal {N},V)\)-limit. It should be emphasised that not all combinations of the colours for the correlation functions are possible. The 2-point function is of the form \(G^{aa}_{|p_1p_2|}\) and the 3-point function \(\sigma _{abc}G^{abc}_{|p_1p_2p_3|}\). There exists no 4-point function equipped with all three colours simultaneously, and so on. These properties, first recognised in the loop expansion, are intrinsically present in the Schwinger–Dyson equations. 5.2 Large (\(\mathcal {N},V\))-limit Sending \(\mathcal {N},V \rightarrow \infty \) with constant ratio \(\frac{\mathcal {N}}{V}=\mu ^2\Lambda ^2\), the sum is turned into an integral by the transformation of the discrete elements to continuous variables \(m \rightarrow V\mu ^2 q\) $$\begin{aligned} \lim _{\begin{array}{c} \mathcal {N},V\rightarrow \infty \\ \frac{\mathcal {N}}{V}=\mu ^2\Lambda ^2 \end{array}}\frac{1}{V}\sum _{m=0}^\mathcal {N}f\left( \frac{m}{V}\right) = \mu ^2\int _0^{\Lambda ^2}dq\, f(\mu ^2q). \end{aligned}$$ The eigenvalues of the external field are in the linear case given by \(E_m=\mu ^2(q+\frac{1}{2})\). To get rid of the mass squared \(\mu ^2\), we redefine in the following way $$\begin{aligned} G^{aa}_{p_1p_2}:=\lim _{\mathcal {N},V\rightarrow \infty }\mu ^2 G^{aa}_{|p_1p_2|},\quad \lambda :=\frac{\lambda '}{\mu ^2}. \end{aligned}$$ An important fact is that in this limit only functions with genus \(g=0\) survive [12]. Furthermore, we assume that all these functions of genus zero are \(\mathcal {O}(V^0)\).
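The sum-to-integral rule above can be illustrated numerically; here \(f(x)=1/(1+x)\) is an arbitrary test function, and \(\mu =\Lambda =1\) are illustrative choices, not values from the text:

```python
import math

# (1/V) sum_{m=0}^{N} f(m/V) with N/V = mu^2 Lambda^2 held fixed (mu = Lambda = 1),
# which should converge to int_0^1 dq f(q) as V -> infinity
def riemann_sum(V):
    N = V  # N = V * mu^2 * Lambda^2 with mu = Lambda = 1
    return sum(1.0/(1.0 + m/V) for m in range(N + 1)) / V

exact = math.log(2.0)  # int_0^1 dq / (1 + q) = log 2
for V in (10, 100, 10000):
    print(V, riemann_sum(V))

assert abs(riemann_sum(10000) - exact) < 1e-3
```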
Perturbation theory shows that, in combination with the definition (3.3), this is the right assumption (see also [14, 15]). The equation of Proposition 2 then reduces to a closed equation, since the 4-point functions and the functions with \(B\ge 2\) vanish. The limit \(\lim _{p_2\rightarrow p_1} \frac{G^{\,\,\,\,aa}_{n,p_1q}-G^{\,\,\,\,aa}_{n,p_2q}}{p_1-p_2}\) is well-defined in perturbation theory; therefore this limit should also exist in the non-perturbative case. Sending \(\Lambda ^2\rightarrow \infty \), the closed integral equation for the 3-colour model is obtained $$\begin{aligned} G^{aa}_{p_1p_2}&= \frac{1}{1+p_1+p_2}+ \frac{\lambda ^2}{\left( 1+p_1+p_2\right) (p_1-p_2)}\nonumber \\&\qquad \times \bigg (3G^{aa}_{p_1p_2}\int _0^{\infty }dq \,\left( G^{aa}_{qp_2}-G^{aa}_{p_1q}\right) \nonumber \\&\quad -\!\int _0^{\infty }dq\,\frac{G^{aa}_{p_1q} \!-\!G^{aa}_{p_1p_2}}{q-p_2}\!+\!\int _0^{\infty }dq\, \frac{G^{aa}_{p_2q}\!-\!G^{aa}_{p_1p_2}}{q-p_1}\bigg ). \end{aligned}$$ We have assumed that \(G^{bb}_{p_1p_2}\) does not depend directly on the colour b, so that \(\sum _{b=1}^3G^{bb}_{p_1p_2}=3G^{aa}_{p_1p_2}\). 5.3 Perturbative solution Using the expansion (2.1), the closed integral equation (5.2) provides a recursive equation for \(n\ge 1\) of the form $$\begin{aligned} G^{\quad \,aa}_{2n,\,p_1p_2}&= \frac{1}{\left( 1+p_1+p_2\right) (p_1-p_2)}\nonumber \\&\qquad \times \bigg (3\sum _{i=0}^{n-1}G^{\qquad \,\,\,\quad aa}_{2(n-1-i),\,p_1p_2}\int _0^{\infty }dq \,\left( G^{\quad aa}_{2i,\,qp_2}- G^{\quad aa}_{2i,\,p_1q}\right) \nonumber \\&\quad -\int _0^{\infty }dq\,\frac{G^{\qquad aa}_{2n-2,\,p_1q} -G^{\qquad aa}_{2n-2,\,p_1p_2}}{q-p_2}\nonumber \\&\quad +\int _0^{\infty }dq\, \frac{G^{\qquad aa}_{2n-2,\,p_2q}-G^{\qquad aa}_{2n-2,\,p_1p_2}}{q-p_1}\bigg ) \end{aligned}$$ and \(G^{\quad aa}_{0,\,p_1p_2}=\frac{1}{1+p_1+p_2}\). Equation (5.3) is linear, which makes this model considerably easier to study than other noncommutative quantum field theory models.
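As a cross-check of the recursion, its \(n=1\) step can be evaluated by direct numerical integration and compared with the closed form for \(G^{\quad aa}_{2,\,p_1p_2}\). This is a sketch (SciPy assumed available); the three q-integrals are combined into a single integrand so that the individual logarithmically divergent pieces cancel under the integral:

```python
import math
from scipy.integrate import quad

def G0(p1, p2):
    # order-lambda^0 2-point function 1/(1 + p1 + p2)
    return 1.0 / (1.0 + p1 + p2)

def G2_recursion(p1, p2):
    # n = 1 step of the recursion, with all three q-integrals combined
    # into one integrand (each piece alone diverges logarithmically)
    def integrand(q):
        return (3.0 * G0(p1, p2) * (G0(q, p2) - G0(p1, q))
                - (G0(p1, q) - G0(p1, p2)) / (q - p2)
                + (G0(p2, q) - G0(p1, p2)) / (q - p1))
    val, _ = quad(integrand, 0.0, math.inf)
    return val / ((1.0 + p1 + p2) * (p1 - p2))

def G2_closed(p1, p2):
    # closed form from the text
    return (2.0 * math.log((1.0 + p1)/(1.0 + p2))
            / ((1.0 + p1 + p2)**2 * (p1 - p2)))

p1, p2 = 1.3, 0.4  # arbitrary sample momenta
assert abs(G2_recursion(p1, p2) - G2_closed(p1, p2)) < 1e-6
```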
Convergence of the loop expansion is hopeless, since the number of graphs of order \(\lambda ^{2n}\) grows at least like \(\mathcal {O}(n!)\); the recursive equation, however, directly gives the sum over all graphs at a given order \(\lambda ^{2n}\). Order \(n=1\) Using \(\frac{\frac{1}{1+x+y}-\frac{1}{1+x+z}}{y-z}=-\frac{1}{(1+x+y)(1+x+z)}\), it is easy to verify the result of (2.2) by $$\begin{aligned} G^{\quad aa}_{2,\,p_1p_2}&=\frac{1}{\left( 1+p_1+p_2\right) (p_1-p_2)}\\&\quad \times \Bigg (\frac{3}{1+p_1+p_2}\int _0^\infty dq \left( \frac{1}{1+p_2+q}-\frac{1}{1+p_1+q}\right) \\&\quad -\frac{1}{1+p_1+p_2}\int _0^\infty dq \left( \frac{1}{1+p_2+q}-\frac{1}{1+p_1+q}\right) \Bigg )\\&=\frac{2\log (\frac{1+p_1}{1+p_2})}{(1+p_1+p_2)^2(p_1-p_2)}. \end{aligned}$$ Inserting \(G^{\quad aa}_{0,\,p_1p_2}\) and \(G^{\quad aa}_{2,\,p_1p_2}\) into (5.3), we obtain $$\begin{aligned} G^{\quad aa}_{4,\,p_1p_2}&= \frac{6\log \left( \frac{1+p_1}{1+p_2}\right) }{(1+p_1+p_2)^2(p_1-p_2)^2}\nonumber \\&\quad \times \int _0^\infty dq\,\frac{p_1-p_2}{(1+p_1+q)(1+p_2+q)} \end{aligned}$$ $$\begin{aligned}&\quad + \frac{6}{(1+p_1+p_2)^2(p_1-p_2)}\nonumber \\&\quad \times \int _0^\infty dq\left( \frac{\log \left( \frac{1+p_2}{1+q}\right) }{(1+q+p_2)^2(p_2-q)}\right. \nonumber \\&\quad \left. - \frac{\log \left( \frac{1+p_1}{1+q}\right) }{(1+q+p_1)^2(p_1-q)}\right) \end{aligned}$$ $$\begin{aligned}&\quad -\frac{2}{(1+p_1+p_2)(p_1-p_2)}\nonumber \\&\quad \times \int _0^\infty dq\frac{\frac{\log \left( \frac{1+p_1}{1+q}\right) }{(1+q+p_1)^2(p_1-q)}-\frac{\log \left( \frac{1+p_1}{1+p_2}\right) }{(1+p_1+p_2)^2(p_1-p_2)}}{q-p_2} \end{aligned}$$ $$\begin{aligned}&\quad +\frac{2}{(1+p_1+p_2)(p_1-p_2)}\nonumber \\&\quad \times \int _0^\infty dq\frac{\frac{\log \left( \frac{1+q}{1+p_2}\right) }{(1+q+p_2)^2(q-p_2)}-\frac{\log \left( \frac{1+p_1}{1+p_2}\right) }{(1+p_1+p_2)^2(p_1-p_2)}}{q-p_1}.
\end{aligned}$$ (5.4a) is given by $$\begin{aligned} \frac{6\log \left( \frac{1+p_1}{1+p_2}\right) ^2}{(1+p_1+p_2)^3(p_1-p_2)^2}. \end{aligned}$$ Using the definition of the dilogarithm $$\begin{aligned} \mathrm {Li}_2(-x)=-\int _{0}^{x}du\frac{\log (1+u)}{u}, \end{aligned}$$ (5.4b) is then determined by $$\begin{aligned}&\frac{6}{(1+p_1+p_2)^2(p_1-p_2)}\Bigg (\frac{\log (1+p_1)}{p_1(1+p_1)(1+2p_1)}\\&\quad +\frac{\log (1+p_1)^2+2\mathrm {Li}_2\left( -p_1\right) -\frac{\pi ^2 }{6}}{(1+2p_1)^2}\\&\quad -\frac{\log (1+p_2)}{p_2(1+p_2)(1+2p_2)}\\&\quad -\frac{\log (1+p_2)^2+2\mathrm {Li}_2\left( -p_2\right) -\frac{\pi ^2 }{6}}{(1+2p_2)^2}\Bigg ). \end{aligned}$$ Lines (5.4c \(+\) 5.4d) are computed directly to avoid a singularity at \(p_1=p_2\), and are given by $$\begin{aligned}&\frac{2}{(1+p_1+p_2)^2(p_1-p_2)}\Bigg (\frac{\log (1+p_2)}{p_2(1+2p_2)(1+p_2)}\\&\quad -\frac{\log (1+p_1)}{p_1(1+2p_1)(1+p_1)}\\&\quad +\frac{(2+3p_1+p_2)\left( \frac{\pi ^2}{6}-\log (1+p_1)^2-2\mathrm {Li}_2\left( -p_1\right) \right) }{(1+2p_1)^2(1+p_1+p_2)}\\&\quad -\frac{(2+3p_2+p_1)\left( \frac{\pi ^2}{6}-\log (1+p_2)^2-2\mathrm {Li}_2\left( -p_2\right) \right) }{(1+2p_2)^2(1+p_1+p_2)}\Bigg ). \end{aligned}$$ Adding all terms and using the well-known identity $$\begin{aligned} \mathrm {Li}_2\left( -x\right) +\frac{1}{2}\log (1+x)^2=-\mathrm {Li}_2\left( \frac{x}{1+x}\right) , \end{aligned}$$ the result is then given by $$\begin{aligned} G^{\quad aa}_{4,\,p_1p_2}&=\frac{2}{(1+p_1+p_2)^2(p_1-p_2)}\nonumber \\&\quad \times \Bigg (\frac{3\log \left( \frac{1+p_1}{1+p_2}\right) ^2}{(1+p_1+p_2)(p_1-p_2)}\nonumber \\&\quad +\frac{2\log (1+p_1)}{p_1(1+2p_1)(1+p_1)}-\frac{2\log (1+p_2)}{p_2(1+2p_2)(1+p_2)}\nonumber \\&\quad -\frac{(1+2p_2)\left( \frac{\pi ^2}{6}+2\mathrm {Li}_2\left( \frac{p_1}{1+p_1}\right) \right) }{(1+2p_1)^2(1+p_1+p_2)}\nonumber \\&\quad +\frac{(1+2p_1)\left( \frac{\pi ^2}{6}+2\mathrm {Li}_2\left( \frac{p_2}{1+p_2}\right) \right) }{(1+2p_2)^2(1+p_1+p_2)}\Bigg ).
\end{aligned}$$ Equation (5.6) is confirmed by the loop expansion in Appendix A. The 2-point functions \(G^{\quad aa}_{0,\,p_1p_2},\,G^{\quad aa}_{2,\,p_1p_2}\) and \(G^{\quad aa}_{4,\,p_1p_2}\) are inserted into (5.3). We split the integrals into individual parts, each of which converges. The identity (5.5) is used to obtain terms of the form \(\mathrm {Li}_2\left( -x\right) \), which are easier to integrate. With the definition of the trilogarithm $$\begin{aligned} \mathrm {Li}_3(-x)=\int _{0}^{x}du\frac{\mathrm {Li}_2(-u)}{u}, \end{aligned}$$ and the identities $$\begin{aligned} \mathrm {Li}_3(-x)= & {} \mathrm {Li}_3\left( -\,\frac{1}{x}\right) -\frac{1}{6}\log (x)^3-\frac{\pi ^2}{6}\log (x)\\ \mathrm {Li}_3(-x)= & {} -\mathrm {Li}_3\left( \frac{x}{1+x}\right) -\mathrm {Li}_3\left( \frac{1}{1+x}\right) +\frac{\log (1\!+\!x)^3}{3}-\!\frac{\log (x)\log (1\!+\!x)^2}{2}\\&-\frac{\pi ^2\log (1+x)}{6}+\zeta (3), \end{aligned}$$ where \(\zeta (x)\) is the Riemann \(\zeta \) function, we finally find the correlation function of order \(\lambda ^6\) $$\begin{aligned} G^{\quad aa}_{6,\,p_1p_2}&=\Big \{\log (1+p_1)f_1(p_1,p_2)\nonumber \\&\quad +\pi ^2\log (1+p_1)f_2(p_1,p_2)\nonumber \\&\quad +\log (1+p_1)^2f_3(p_1,p_2)\nonumber \\&\quad +\left( \mathrm {Li}_2\left( \tfrac{p_1}{1+p_1}\right) +\tfrac{\pi ^2}{6}\right) f_4(p_1,p_2)\nonumber \\&\quad +\text {Li}_2\left( \tfrac{p_1}{1+p_1}\right) \log \left( \tfrac{1+p_1}{1+p_2}\right) f_5(p_1,p_2)\nonumber \\&\quad + \left( \text {Li}_3(-p_1)+\text {Li}_3\left( \tfrac{p_1}{1+p_1}\right) \right. \nonumber \\&\quad \left. +\text {Li}_2\left( \tfrac{p_1}{1+p_1}\right) \log (1+p_1)\right. \nonumber \\&\quad \left.
+\tfrac{\log (1+p_1)^3}{6} -\tfrac{\pi ^2 \log (1+p_1)}{6}\right) f_6(p_1,p_2)\nonumber \\&\quad +\left( \text {Li}_3\left( \tfrac{p_1}{1+p_1}\right) +\tfrac{\pi ^2 \log (1+p_1)}{3} \right) f_7(p_1,p_2)\Big \}\nonumber \\&\quad +\{p_1 \leftrightarrow p_2\}\nonumber \\&\quad +\log \left( \tfrac{1+p_1}{1+p_2}\right) ^3f_8(p_1,p_2)\nonumber \\&\quad +\log (1+p_1)\log (1+p_2)f_9(p_1,p_2)\nonumber \\&\quad +\pi ^2f_{10}(p_1,p_2)\nonumber \\&\quad +\pi ^2\log (2)f_{11}(p_1,p_2) +\zeta (3)f_{12}(p_1,p_2) \end{aligned}$$ $$\begin{aligned} f_1(p_1,p_2)&= -\frac{8}{p_1(1+p_1)^2(1+2p_1)^2(p_1-p_2)(1+p_1+p_2)^2}\\ f_2(p_1,p_2)&=\! \frac{4\left\{ (p_1\!-\!p_2)^3\!+\!(p_1\!+\!p_2\!+\!1) \left( 7 (p_1\!+\!p_2\!+\!1)^2-\!3 (2 p_2\!+\!1) (p_1\!-\!p_2)\right) \right\} }{(1+2p_1)^3(p_1-p_2)(1+p_1+p_2)^4(1+2p_2)^2}\\ f_3(p_1,p_2)&= \frac{6}{p_1^2(1+p_1)^2(1+2p_1)^3(p_1-p_2)^2(1+p_1+p_2)^3}\\&\quad \times \big \{ (1 + p_1 + p_2) (2 (p_1 - p_2) (1 + 10 p_1 (1 + p_1)) \\&\qquad + 3 (1 + p_1) (1 + 2 p_1))\\&\qquad +2 p_1 (1 + p_1) (1 + 2 p_1)^2 \big \}\\ f_4(p_1,p_2)&=-\frac{4}{p_1(1+p_1)^2(1+2p_1)^3(p_1-p_2)^2(1+p_1+p_2)^3}\\&\quad \times \big \{ (1 + p_1 + p_2) (2 (1 + p_2) + p_1 (11 + 43 p_1 \\&\qquad + 38 p_1^2 - 6 (3 + 4 p_1) p_2))\\&\qquad -2 (1 + p_1) (1 + 2 p_1)^3 \big \}\\ f_5(p_1,p_2)&=\frac{12\left\{ 2 (p_1-p_2)^2+(2 p_2+1) (p_1+p_2+1)\right\} }{(1+2p_1)^2(p_1-p_2)^3(1+p_1+p_2)^4}\\ f_6(p_1,p_2)&=-\frac{24\left\{ (1 \!+\! p_1 + p_2) (10 (p_1 \!-\! 
p_2)^2 + (1 + 3 p_1 - p_2)^2) - (p_1 - p_2)^3\right\} }{(1+2p_1)^4(p_1-p_2)^3(1+p_1+p_2)^3}\\ f_7(p_1,p_2)&=-\frac{12\left\{ 5+6p_1+4p_2\right\} }{(1+2p_1)^4(p_1-p_2)(1+p_1+p_2)^3}\\ f_8(p_1,p_2)&=\frac{20}{(1+p_1+p_2)^4(p_1-p_2)^3}\\ f_9(p_1,p_2)&=-\frac{24\left\{ 2 p_1^2-2 p_1 p_2+p_1+2 p_2^2+p_2\right\} }{p_1(1+p_1)(1+2p_1)(p_1-p_2)^2p_2(1\!+\!p_2)(1\!+2p_2)(1\!+\!p_1+p_2)^2}\\ f_{10}(p_1,p_2)&=\frac{4}{3p_1(1+p_1)(1+2p_1)^3p_2(1+p_2)(1+2p_2)^3(1+p_1+p_2)^3}\\&\quad \times \big [ p_1 p_2 \big \{(p_1+p_2+1) \big (48 p_1^3\\&\quad +(-48 p_1^2-24 p_1+72) p_2^2+(-40 p_1^2-12 p_1+56) p_2\\&\quad + 88 p_1^2+56 p_1+32 p_2^3+24\big )\\&\quad -(2 p_1+1)^2 (4 p_1 (p_1+1)-1)\big \}\\&\quad +2 (2 p_1+1)^2 (2 p_2+1)^2 (p_1+p_2+1)^3\\&\quad +p_1 (p_1+1) (2 p_1+1)^2+p_2 (p_2+1) (2 p_2+1)^2\big ]\\ f_{11}(p_1,p_2)&=-\frac{32\left\{ 9 (p_1-p_2)^2+7 (p_1+p_2+1)^2\right\} }{(1+2p_1)^4(1+2p_2)^4(1+p_1+p_2)}\\ f_{12}(p_1,p_2)&=\frac{24\left\{ (p_1-p_2)^2+5 (p_1+p_2+1)^2\right\} }{(1+2p_1)^3(1+2p_2)^3(1+p_1+p_2)^3}, \end{aligned}$$ where \(\{p_1\leftrightarrow p_2\}\) denotes the first seven terms with \(p_1\) and \(p_2\) interchanged. To obtain these results we computed a primitive of the integrals using a computer algebra system and took the limits \(q\rightarrow 0\) and \(q \rightarrow \infty \) by hand. More than 20 different types of loops contribute at sixth order in \(\lambda \), and (5.7) is the sum of all of them. Most of the terms in (5.7) are individually divergent in the limit \(p_2\rightarrow p_1\) or \(p_{1/2}\rightarrow 0\); in both limits, however, \(G^{\quad aa}_{6,\,p_1p_2}\) remains finite.
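The building-block integral behind (2.2) and the polylogarithm identities used above are easy to sanity-check numerically. A quick sketch, assuming Python with the `mpmath` library (not part of the paper's toolchain; any computer algebra system would do equally well):

```python
from mpmath import mp, mpf, quad, polylog, log, pi, zeta, inf

mp.dps = 30  # work with 30 significant digits

p1, p2 = mpf("0.7"), mpf("0.2")

# Building-block integral:
#   int_0^oo ( 1/(1+p2+q) - 1/(1+p1+q) ) dq = log((1+p1)/(1+p2))
lhs = quad(lambda q: 1 / (1 + p2 + q) - 1 / (1 + p1 + q), [0, inf])
rhs = log((1 + p1) / (1 + p2))
assert abs(lhs - rhs) < mpf("1e-15")

x = mpf("1.3")

# Identity (5.5): Li2(-x) + log(1+x)^2/2 = -Li2(x/(1+x))
assert abs(polylog(2, -x) + log(1 + x)**2 / 2 + polylog(2, x / (1 + x))) < mpf("1e-20")

# Trilogarithm inversion: Li3(-x) = Li3(-1/x) - log(x)^3/6 - pi^2*log(x)/6, x > 0
assert abs(polylog(3, -x)
           - (polylog(3, -1 / x) - log(x)**3 / 6 - pi**2 * log(x) / 6)) < mpf("1e-20")

# Landen-type relation (note the Li3(1/(1+x)) term, needed for the
# identity to hold for all x > 0):
lhs3 = polylog(3, -x)
rhs3 = (-polylog(3, x / (1 + x)) - polylog(3, 1 / (1 + x)) + log(1 + x)**3 / 3
        - log(x) * log(1 + x)**2 / 2 - pi**2 * log(1 + x) / 6 + zeta(3))
assert abs(lhs3 - rhs3) < mpf("1e-20")
print("all checks passed")
```

Checks of this kind are cheap insurance when transcribing long dilogarithm and trilogarithm expressions by hand.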
For \(p=p_1=p_2\) we find $$\begin{aligned} G^{\quad aa}_{6,\,pp}&=\frac{1776}{(1+2p)^7}\Bigg \{\text {Li}_3(-p)+\frac{93}{74} \text {Li}_3\left( \tfrac{p}{p+1}\right) \nonumber \\&\quad + \text {Li}_2\left( \tfrac{p}{p+1}\right) \log (p+1)\nonumber \\&\quad +\frac{1}{6} \log ^3(p+1)-\frac{14}{111} \pi ^2 \log (p+1)\nonumber \\&\quad -\frac{14}{111} \pi ^2 \log (2)+\frac{5}{74} \zeta (3)\Bigg \}\nonumber \\&\quad + \frac{2\pi ^2(10 p (p (4 p+39)+60)+257)}{3(1+p)^3(1+2p)^6}\nonumber \\&\quad +\frac{2(9+10p)}{p(1+2p)^7}+\frac{4\log (1+p)(5+7p)}{p^2(1+p)^2(1+2p)^5}\nonumber \\&\quad -\frac{2\log (1+p)^2(p (p+1) (546 p (p+1)+125)+11)}{p^3(1+p)^3(1+2p)^6}\nonumber \\&\quad +\frac{4\mathrm {Li}_2(\tfrac{p}{1+p})(7+(1+p)(176 p^3+75 p^2-44 p-11))}{p^2(1+p)^3(1+2p)^6}. \end{aligned}$$ It is nice to see how in (5.8) the linear divergence for \(p\rightarrow 0\) in the last four terms cancels perfectly, since \(\frac{\mathrm {Li}_2(\tfrac{p}{1+p})}{p^2}=\frac{1}{p}+\mathcal {O}(1)\) for \(p\rightarrow 0\). It is also remarkable that all functions appearing for the first time at order \(\lambda ^6\) [first two lines of (5.8)] have the same dependence on \(p\) in the denominator. Sending \(p_2\rightarrow p_1\) in (5.2) yields an integral equation containing the derivative \(\frac{\partial G^{aa}_{p_1q}}{\partial p_1}\). Making use of the results (2.2), (5.6) and (5.7), the numerical solution for the 2-point function with zero momenta can be given up to eighth order: $$\begin{aligned} G^{aa}_{00}&= 1+2\lambda ^2+2(\pi ^2-6)\lambda ^4\\&\quad +\left\{ \pi ^2\left( \tfrac{514}{3}-224 \log (2)\right) +120\,\zeta (3)-266\right\} \lambda ^6\\&\quad +194.612 \,\lambda ^8+\mathcal {O}(\lambda ^{10}). \end{aligned}$$

6 Conclusion and outlook

We have introduced the noncommutative 3-colour model as a quantum field theoretical model in two dimensions. We derived the Schwinger–Dyson equations of the 2-point function and the N-point functions for a single boundary component.
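For orientation, the closed-form coefficients in the series for \(G^{aa}_{00}\) evaluate to definite numbers. A small sketch in Python (the \(\lambda^8\) coefficient 194.612 is quoted from the text above and not recomputed here):

```python
import math

zeta3 = 1.2020569031595942854  # Apery's constant, zeta(3)

c2 = 2.0                                  # coefficient of lambda^2
c4 = 2 * (math.pi**2 - 6)                 # coefficient of lambda^4
c6 = (math.pi**2 * (514 / 3 - 224 * math.log(2))
      + 120 * zeta3 - 266)                # coefficient of lambda^6

print(c2)            # 2.0
print(round(c4, 4))  # 7.7392
print(round(c6, 4))  # 36.8352
```

All three coefficients are positive, so for small coupling the corrections consistently push \(G^{aa}_{00}\) above 1.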
This required a generalisation of the Ward–Takahashi identity to coloured models. This new identity seems to be related to a mixing symmetry between two colours. In the large \(\mathcal {N},V\) limit a closed integral equation (5.2) occurs, which is a non-perturbative result. This equation was used to find perturbative solutions up to the sixth order in the coupling constant. The main aim for the future is to find an exact solution of (5.2) or to prove existence, if possible also uniqueness, of a solution. Furthermore, we want to extend this work to determine Schwinger–Dyson equations for \(B\ge 2\), where problems already arise for the \((1+1)\)-point function. Finally, we would like to treat the renormalisation problems in dimension 4 and 6.

This work was supported by the Deutsche Forschungsgemeinschaft (SFB 878). A. H. wants to thank Jins de Jong for helpful discussions.

A. \(G^{\quad aa}_{4,\,p_1p_2}\) computation by graphs

Representatives of all graphs with one boundary component and two external edges at the fourth order in \(\lambda \) are the graphs \(\Gamma _1,\ldots ,\Gamma _4\) (figures not reproduced here). Let \(a= a_1^1=a_2^1\). By straightforward computation using the rules introduced in Sect.
2 one finds $$\begin{aligned} \tilde{G}^{aa}_{p_1p_2}(\Gamma _1)&=\frac{\lambda ^4}{(1+p_1+p_2)^2}\\&\quad \times \int _{0}^{\infty } \frac{\frac{dq_1dq_2}{1+q_1+q_2}}{(1+p_1+q_1)(1+p_1+q_2)(1+p_2+q_1)(1+p_2+q_2)}\\&=\frac{\lambda ^4}{(1+p_1+p_2)^2}\Bigg (-\frac{\log (1+p_1)^2}{(p_1-p_2)^2(1+2p_1)}\\&\quad -\frac{\log (1+p_2)^2}{(p_1-p_2)^2(1+2p_2)}\\&\quad -\frac{\pi ^2/6-2\mathrm {Li}_2\left( -p_1\right) }{(1+2p_1)(p_1-p_2)(1+p_1+p_2)}\\&\quad +\frac{\pi ^2/6-2\mathrm {Li}_2\left( -p_2\right) }{(1+2p_2)(p_1-p_2)(1+p_1+p_2)}\\&\quad +\frac{2\log (1+p_1)\log (1+p_2)}{(p_1-p_2)^2(1+p_1+p_2)}\Bigg )\\ \tilde{G}^{aa}_{p_1p_2}(\Gamma _2)&=\frac{\lambda ^4}{(1+p_1+p_2)^3}\\&\quad \times \int _{0}^{\infty } \frac{dq_1dq_2}{(1+p_1+q_1)(1+p_2+q_1)(1+p_1+q_2)(1+p_2+q_2)}\\&=\frac{\lambda ^4}{(1+p_1+p_2)^3}\frac{\log (\frac{1+p_1}{1+p_2})^2}{(p_1-p_2)^2}\\ \tilde{G}^{aa}_{p_1p_2}(\Gamma _3)&=\frac{\lambda ^4}{(1+p_1+p_2)^2}\\&\quad \times \int _{0}^{\infty }\frac{\frac{dq_1dq_2}{1+q_1+q_2}}{(1+p_1+q_1)^2(1+p_1+q_2)(1+p_2+q_1)}\\&=\frac{\lambda ^4}{(1+p_1+p_2)^2}\Bigg (-\frac{ \mathrm {Li}_2\left( -p_1\right) }{(1+2p_1)^2(1+p_1+p_2)}\\&\quad -\frac{\frac{\pi ^2}{6}-\log (1+p_1)^2- \mathrm {Li}_2\left( -p_1\right) }{(1+2p_1)(p_1-p_2)^2}\\&\quad -\frac{\frac{\pi ^2}{6}-\log (1+p_1)^2- \mathrm {Li}_2\left( -p_1\right) }{(1+2p_1)^2(p_1-p_2)}\\&\quad +\frac{\frac{\pi ^2}{6}-\log (1+p_2)\log (1+p_1)- \mathrm {Li}_2\left( -p_2\right) }{(1+p_1+p_2)(p_1-p_2)^2}\\&\quad +\frac{\log (1+p_1)}{p_1(1+p_1)(1+2p_1)(p_1-p_2)}\Bigg )\\ \tilde{G}^{aa}_{p_1p_2}(\Gamma _4)&=\frac{\lambda ^4}{(1+p_1+p_2)^2}\\&\quad \times \int _{0}^{\infty }\frac{\frac{dq_1dq_2}{1+q_1+q_2}}{(1+p_1+q_1)(1+p_2+q_1)^2(1+p_2+q_2)}\\&=\frac{\lambda ^4}{(1+p_1+p_2)^2}\Bigg (-\frac{ \mathrm {Li}_2\left( -p_2\right) }{(1+2p_2)^2(1+p_1+p_2)}\\&\quad -\frac{\frac{\pi ^2}{6}-\log (1+p_2)^2- \mathrm {Li}_2\left( -p_2\right) }{(1+2p_2)(p_1-p_2)^2}\\&\quad +\frac{\frac{\pi ^2}{6}-\log (1+p_2)^2- \mathrm {Li}_2\left( -p_2\right)
}{(1+2p_2)^2(p_1-p_2)}\\&\quad +\frac{\frac{\pi ^2}{6}-\log (1+p_2)\log (1+p_1)- \mathrm {Li}_2\left( -p_1\right) }{(1+p_1+p_2)(p_1-p_2)^2}\\&\quad -\frac{\log (1+p_2)}{p_2(1+p_2)(1+2p_2)(p_1-p_2)}\Bigg ). \end{aligned}$$ We verify easily that \(s(\Gamma _1)=2,\,s(\Gamma _2)=4,\, s(\Gamma _3)=4\) and \( s(\Gamma _4)=4\). The correlation function of order \(\lambda ^4\) is finally given with identity (5.5) by $$\begin{aligned} G^{\quad aa}_{4,\,p_1p_2}&=\sum _{i=1}^4 s(\Gamma _i)\tilde{G}^{aa}_{p_1p_2}(\Gamma _i)\\&=\frac{2}{(1+p_1+p_2)^2(p_1-p_2)}\\&\quad \times \Bigg (\frac{3\log \left( \frac{1+p_1}{1+p_2}\right) ^2}{(1+p_1+p_2)(p_1-p_2)}\\&\quad +\frac{2\log (1+p_1)}{p_1(1+2p_1)(1+p_1)}-\frac{2\log (1+p_2)}{p_2(1+2p_2)(1+p_2)}\\&\quad -\frac{(1+2p_2)\left( \frac{\pi ^2}{6}+2\mathrm {Li}_2\left( \frac{p_1}{1+p_1}\right) \right) }{(1+2p_1)^2(1+p_1+p_2)}\\&\quad +\frac{(1+2p_1)\left( \frac{\pi ^2}{6}+2\mathrm {Li}_2\left( \frac{p_2}{1+p_2}\right) \right) }{(1+2p_2)^2(1+p_1+p_2)}\Bigg ). \end{aligned}$$

Funded by SCOAP3. Mathematisches Institut der Westfälischen Wilhelms-Universität, Münster, Germany. Hock, A. & Wulkenhaar, R., Eur. Phys. J. C (2018) 78: 580. https://doi.org/10.1140/epjc/s10052-018-6042-3. Received 30 April 2018.
typos; converted images to equations; better typesetting — edit approved Feb 13 '15 at 22:19 by p.s.w.g

Figure. A schematic diagram showing the effect of the temperature on the stability of an enzyme catalysed reaction. The curves show the percentage activity remaining as the incubation period increases. From the top they represent equal increases in the incubation temperature (50 °C, 55 °C, 60 °C, 65 °C and 70 °C).

The $Q_{10}$ is a unitless number that summarizes the effect of raising the temperature by 10 °C on the rate of a chemical reaction. A $Q_{10}$ of 2.0 suggests that raising the temperature of a system by 10 °C will effectively double the rate of the reaction. This value would be expected for most chemical reactions occurring within normal physiological temperatures. Mathematically, $Q_{10}$ can be represented by the following expression: $$Q_{10}=\left(\frac{k_2}{k_1}\right)^{\frac{10}{t_2-t_1}}$$ where

$t_2$ = higher temperature
$k_2$ = rate at $t_2$
$t_1$ = lower temperature
$k_1$ = rate at $t_1$

Usually the temperature difference is about 10 °C; then you can simplify the equation: $$Q_{10}=\left(\frac{k_2}{k_1}\right)^{\frac{10}{10}}=\frac{k_2}{k_1}$$

Edit: You can easily calculate $k$ from the Arrhenius equation $$k=Ae^{\frac{-\Delta G^*}{RT}}$$ where $k$ is the kinetic rate constant for the reaction, $A$ is the Arrhenius constant, also known as the frequency factor, $\Delta G^*$ is the standard free energy of activation ($\mathrm{kJ\,mol^{-1}}$), which depends on entropic and enthalpic factors, $R$ is the gas law constant and $T$ is the absolute temperature.

friveroll created Jun 12 '12 at 17:07
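The two formulas above are easy to put into code. A minimal sketch in Python (the rates below are made-up illustrative numbers, not data from the figure):

```python
import math

def q10(k1, k2, t1, t2):
    """Temperature coefficient: Q10 = (k2/k1) ** (10 / (t2 - t1))."""
    return (k2 / k1) ** (10.0 / (t2 - t1))

# A reaction rate that doubles between 25 C and 35 C has Q10 exactly 2:
print(q10(1.0, 2.0, 25.0, 35.0))             # 2.0

# A 5-degree step with a factor sqrt(2) gives the same Q10,
# because the exponent 10/(t2 - t1) rescales the ratio:
print(q10(1.0, math.sqrt(2.0), 30.0, 35.0))  # ~2.0
```

The second call shows why the general exponent matters: only when the measured interval is exactly 10 °C does $Q_{10}$ reduce to the plain ratio $k_2/k_1$.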
What is exactly the difference between a definition and an axiom?

I am wondering what the difference is between a definition and an axiom. Isn't an axiom something we define to be true? For example, one of the axioms of Peano Arithmetic states that $\forall n:0\neq S(n)$, or in English, that zero isn't the successor of any natural number. Why can't we define 0 to be the natural number such that it isn't the successor of any natural number? In general, why is an axiom not a definition? Are there cases where we can formulate definitions as axioms or vice versa?

terminology · definition · axioms · proof-theory — asked by wythagoras

related: math.stackexchange.com/questions/7717/… – imranfat Sep 9 '15 at 15:49
@imranfat Interesting although its not the same because it doesn't include definition. – wythagoras Sep 9 '15 at 15:53
The first example that comes to my mind is that vector spaces are defined by a set of axioms. Which leads me to think that the difference is simply when we define a thing as having a list of properties, we call those properties axioms. – user137731 Sep 9 '15 at 15:55
Basically, from a math log point of view a definition is an axiom; if we are working in a theory $T$ and we can prove : $T \vdash \exists ! y P(y)$, then we can add to the language of the theory the new symbol $o$ and the "defining axiom" : $y=o \leftrightarrow P(y)$. – Mauro ALLEGRANZA Sep 9 '15 at 15:56
In the example of first-order Peano Arithmetic we cannot prove that there exists $0$ "from scratch". But what we can do is to prove $PA \vdash \exists ! y (y=S(0))$; this licenses us to add the new symbol $1$ with the "defining axiom" : $y=1 \leftrightarrow y=S(0)$. – Mauro ALLEGRANZA Sep 9 '15 at 15:59

Axioms are not "defined to be true"; I'm not even sure what that would mean. What they are is evaluated as true.
Practically speaking all this means is that in the mathematical context at hand, you're allowed to jot them down at any time as the next line in your proof. Definitions have virtually nothing to do with truth, but are instead shorthand for formulae or terms of the language. Using the language of set theory as my example, "$x\subset y$" is going to be an abbreviation for "$\forall z(z\in x\to z\in y)$". If you were to put these two expressions on either side of a biconditional symbol, it would of course be true, but not because we have assumed it to be true, but rather because when you have unpacked everything into the actual formal language of set theory (of which $\subset$ is not a part) you have simply put exactly the same formula on both sides; it is a logical truth of the form $\phi\iff\phi$. Update: I realized this answer would be more complete if I addressed the example you show above with $0$, and addressed comments made below by @MauroALLEGRANZA. Let's say I want to define $0$ as a shorthand for the unique $x$ such that $\forall n(x\neq S(n))$. What this is saying is that we can state a uniqueness condition, namely "$\forall y(y=x\Leftrightarrow \forall n(y\neq S(n)))$," and, moreover, $\forall y(y=0\Leftrightarrow \forall n(y\neq S(n)))$. This latter, however, is a substantial statement entailing the existence of a certain kind of object, and if we don't have $\forall n(0\neq S(n))$ as an axiom, how will we derive it? It should be obvious that merely having a way to say "predecessor-less object" does nothing to guarantee the existence of a predecessor-less object; at best, you've shifted the burden of the axiom $\forall n(0\neq S(n))$ onto another axiom that circumlocutes the constant symbol $0$. Having two ways to say "predecessor-less object", one in the original language and one in a metalanguage, doesn't do any more work than only having one way to say it. Sig. 
Allegranza brought up a variant where the defined symbol becomes a genuine formal symbol of an expanded formal language, and we only look at extensions of the theory axiomatizing the equivalence of the new predicate with a formula of the old language. In this case the axiom stating the equivalence is not even utterable in our old language, much less will it have any consequences for the models of said language. With our above example, we might have the new one-place predicate $Z(v)$ added to the language of $\mathsf{PA}$, and take as our new axiom $Z(v)\Leftrightarrow \forall n(v\neq S(n))$. That is, we have a predicate, now part of the formal language, that is equivalent to the statement that $v$ is predecessor-less. But now that there's a formal axiom about $Z$, just try to derive $\exists x\forall n(x\neq S(n))$, much less $\forall n(0\neq S(n))$, from just this axiom alone. It should be easy to see that you're not going to be able to derive any non-trivial sentences in the language of $\mathsf{PA}$. In either case, we see that definitions simply don't do the work of axioms.

Malice Vidrine

... $\land{} x \neq y$ (which is itself shorthand for $\land \lnot \forall z(z\in x \leftrightarrow z \in y)$) – immibis Sep 9 '15 at 22:23
@immibis: Only if you define $\subset$ as proper subset. – chirlu Sep 10 '15 at 0:43
And only if equality is not part of the underlying logic. – Malice Vidrine Sep 10 '15 at 2:07
I would like to add the nitpicking comment : from a math log point of view, adding to the first-order set theory (with equality) the new defined symbol $\subseteq$ means to "expand" the base language with the new symbols (a binary predicate symbol) and adding to the axioms of the theory the "defining axiom" : $\forall x \forall y[(x \subseteq y) \leftrightarrow \forall z(z \in x \to z \in y)]$.
– Mauro ALLEGRANZA Sep 10 '15 at 8:23
@EricTowers - adding a new symbol, we have "expanded" the original language : thus, there are new formulae (and new theorems). But the definition is "conservative" in the sense that no new theorems can be proved that do not include the new symbol. – Mauro ALLEGRANZA Sep 10 '15 at 19:05

A definition is a conservative extension of the language by a new symbol and some axioms involving this symbol. The key word here is conservative; in general axioms strengthen the system in question, while definitions are not allowed to do so.

Mario Carneiro

In this instance, I'm taking Peano arithmetic to be defined in the first-order theory over functions $0, s, +, \times$ of arity 0, 1, 2, 2 respectively. The symbol $0$ is just that - a symbol. It needs no definition in this language. It already exists. We need the axioms to tell us what we're allowed to do with these symbols. You want to "define" how $0$ behaves, and you do this by setting up some axioms. Completely alternative point of view: the reason we haven't defined $0$ to be the thing such that no successor is $0$, is because such a thing doesn't a priori exist. There's nothing in the other axioms to tell us that $0$ behaves in this special way: we can't prove it exists, but nor can we prove that it doesn't. For instance, I could define an eggly function $\mathbb{C} \to \mathbb{C}$ to be entire and bounded but non-constant. It takes a bit of work to show that no eggly functions exist, but it is a fact. Therefore, this is very much a definition and not an axiom: we have shown that no eggly functions can exist. On the other hand, the Peano axiom that "induction holds" is something we simply know to be true, but it's so basic that it's just not possible to prove it or its converse. Therefore, we just say "the induction axiom is true", and call it an axiom. Axioms are true by fiat; definitions must be proved to be valid.
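The points made in the answers above (definitions are conservative; definitions must be proved valid, while axioms are assumed) can be made concrete in a proof assistant. A sketch in Lean 4 syntax, where the names `Predecessorless` and `nat_bound` are invented here for illustration:

```lean
-- A definition is mere shorthand: it names a formula and asserts nothing new.
def Predecessorless (n : Nat) : Prop := ∀ m : Nat, n ≠ Nat.succ m

-- That 0 actually has this property is a *theorem*, proved from the
-- existing rules (the constructors of Nat), not assumed:
theorem zero_predecessorless : Predecessorless 0 :=
  fun m h => Nat.succ_ne_zero m h.symm

-- An axiom is accepted without proof. Lean will not stop you from
-- postulating something false; this one is refutable, so adding it
-- would make the whole system inconsistent:
axiom nat_bound : ∃ N : Nat, ∀ n : Nat, n ≤ N
```

This mirrors the thread exactly: the definition alone proves nothing about $0$ until a theorem is supplied, whereas the axiom changes what the system can derive, for better or worse.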
Additionally, consider the following. There's an axiom of the set theory ZF that an empty set exists. However, this can be derived from the axiom of infinity (which asserts that a set exists) and the axiom of comprehension (which lets us select a subset for which "false" holds). Therefore, you could quite reasonably remove the empty-set axiom and instead supply a definition of "empty set". In this instance, we supply the axiom because it really seems like overkill not to - it's a matter of aesthetics. We like to have an axiom which says "there is an empty set", even if it's really a theorem that can be derived from the other axioms, because it's just a bit neater. To summarise, the line is not always very clear-cut, and it's not always clear whether something should be an axiom or a definition. Both can be appropriate, and it may be down to what is most aesthetically pleasing.

Patrick Stevens

From a proof theory perspective, there is no difference. They both effectively announce the truth of something without providing a proof thereof. The difference arises when one applies model theory, which is required to apply the mathematical results (whether applied explicitly or implicitly). A definition is wholly contained within the mathematical system. One cannot disagree with it because it is simply an artifact of the way the system is written. One can also sometimes rewrite the system to exclude a definition which is "offensive." An axiom, on the other hand, reaches outside towards the system that is being modeled. These axioms define the range of problems for which the mathematical systems are applicable. If one disagrees with an axiom, it simply states that the mathematical system is not applicable to a particular class of problems because you are not willing to accept the axioms. From a practical perspective, there is some difference between writing a definition and writing an axiom.
You have a little more freedom when naming and defining definitions, because you wholly control their meaning. When it comes to axioms, you tend to have to interact with what others define things to mean. As an example, within a mathematical system, I may elect to redefine "+" to have a meaning not usually associated with addition. This may be effective for visually depicting a concept and making sure the reader remembers it (so long as it is close enough to addition to not give the cognitive dissonance). However, if I provide an axiom which requires something be "continuous," and my use of "continuous" is actually not the same as the more agreed upon definition, now I can cause great confusion. The axioms are something which are typically addressed up front, before your own style has leaked into the notation and verbiage. If one uses a standard terminology in the axioms, it is more likely to confuse someone who is scanning across a bunch of papers looking for a solution to their problem. A great example of an axiom shows up in physics: "a closed system." A closed system is one where no energy crosses the border of the system (derivative of energy flux is zero). This could be a definition in some abstract scenarios, but in almost all cases it is an axiom. Not all systems satisfy the "closed system" axiom (in fact, technically speaking, no system 100% satisfies it except perhaps the universe as a whole). The applicability of any mathematical modeling under the axiomatic assumption of a closed system is limited by how well "closed system" describes the system someone is exploring. On the other hand, there could be cases where one would elect to use it in the sense of a definition. For example, if you were working with an abstract mathematical construct and you found a subset of this construct which has behaviors similar to a closed system in thermodynamics, you may elect to define a closed system to match that subset of your construct. 
One might be exploring a class of ring generators, and notice that some of them demonstrate a behavior like entropic decay. One may choose to identify these behaviors with thermodynamics terms like "closed system" because it does a good job of capturing the relationships you are focused on. However, since it is purely encapsulated within your mathematics, it's okay if it's not "the official definition." That definition does not have to interact with the thousands of papers on thermodynamically closed systems quite as much as you would if your construct was only applicable to closed systems. In that case, you would want to treat it as an axiom. In all, it's effective to think of a "definition" as something internal to your work, while an "axiom" tends to connect to the greater body of work, defining which classes of problems allow the application of your work. Consider a similar example from set theory : Kenneth Kunen, The Foundations of Mathematics (2009), page 10 : Axiom 0. Set Existence : $\exists x(x = x)$. Axiom 1. Extensionality : $\forall z(z \in x \leftrightarrow z \in y) \to x=y$. Axiom 3. Comprehension Scheme : For each formula, $\varphi$, without $y$ free, $\exists y\forall x(x \in y \leftrightarrow x \in z \land \varphi(x))$. Page 17 : first theorem. There is at most one empty set: Definition I.6.1 : $\text {Emp}(x)$ iff $\forall z(z \notin x)$. Then the Axiom of Extensionality implies: Theorem I.6.2 : $\text {Emp}(x) \land \text {Emp}(y) \to x=y$. Now, to prove that there is an empty set, you can't do it just by Extensionality, since Extensionality is consistent with $\lnot [\exists x \ \text {Emp}(x)]$. To prove that $\exists y \ [\text {Emp}(y)]$, start with any set $z$ (there is one by Axiom 0) and apply Comprehension with $\varphi(x)$ a statement which is always false (for example, $x \ne x$) to get a $y$ such that $\forall x(x \in y \leftrightarrow FALSE)$, i.e. $\forall x(x \notin y)$. 
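Kunen's Theorem I.6.2 (uniqueness of the empty set from Extensionality) is short enough to formalize. A sketch in Lean 4 syntax, modelling "sets" as predicates, which is an analogy rather than ZF proper:

```lean
-- "Sets" as predicates on a type; extensionality is funext plus propext.
def Emp {α : Type} (x : α → Prop) : Prop := ∀ z, ¬ x z

-- Theorem I.6.2: any two empty sets are equal.
theorem emp_unique {α : Type} (x y : α → Prop)
    (hx : Emp x) (hy : Emp y) : x = y :=
  funext fun z => propext ⟨fun h => absurd h (hx z), fun h => absurd h (hy z)⟩
```

Note the order of business is the same as in the text: uniqueness is a theorem from extensionality, and only after existence is also established may one introduce the name $\emptyset$.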
By Theorem I.6.2, there is at most one empty set, so $\exists !y \ \text {Emp}(y)$, so we can name this unique object $\emptyset$. Definition I.6.8 : $\emptyset$ denotes the (unique) $y$ such that $\text {Emp}(y)$ (i.e. $\forall x[x \notin y]$). As usual in mathematics, before giving a name to an object satisfying some property (e.g., $\sqrt 2$ is the unique $y > 0$ such that $y^2 = 2$), we must prove that that property really is held by a unique object. For the mathematical treatment of definitions, see : II.15 Extensions by Definitions, page 148-on. The case of Peano's Axioms is similar; see : Edmund Landau, Foundations of analysis : The arithmetic of whole, rational, irrational and complex numbers (1930), page 1 (and compare with : The original axioms in Giuseppe Peano's Arithmetices Principia Novo Methodo Exposita (1889) ) : We assume the following to be given: A set (i.e. totality) of objects called natural numbers, possessing the properties — called axioms — to be listed below. Page 2 : Axiom 1 : $1$ [Landau uses $1$ instead of $0$] is a natural number. That is, our set is not empty; it contains an object called $1$ (read "one"). Axiom 3 : We always have $S(x) \ne 1$. That is, there exists no number whose successor is $1$. Theorem 3 : If $x \ne 1$, then there exists one (hence, by Axiom 4, exactly one) $u$ such that $x=S(u)$. The proof is by induction [Axiom 5]; it is worth noticing the contrapositive of Th.3 : if for all $u, \ x \ne S(u)$, then $x=1$. Thus, $1$ is the unique object in the set of natural numbers which is not a successor. In conclusion, we can "merge" axioms 1 and 3 into a single one stating : "$0$ is a natural number and it isn't the successor of any natural number", but we cannot simply say : "let $0$ denote the natural number that isn't the successor of any natural number" if we have no axiom asserting the existence of such a number.
Mauro ALLEGRANZA $\begingroup$ Please refer to some such source that points to the definition aspect of $\gcd$. $\endgroup$ – jiten Mar 30 '18 at 5:12 $\begingroup$ @jiten - what's the link to the question ? $\endgroup$ – Mauro ALLEGRANZA Apr 2 '18 at 9:00 $\begingroup$ @jiten - see Greatest common divisor. "the greatest common divisor ($\text {gcd}$) of two or more integers, which are not all zero, is [defined as] the largest positive integer that divides each of the integers." $\endgroup$ – Mauro ALLEGRANZA Apr 2 '18 at 9:01 I found exactly the same question on Quora and I am just copying the answer given by David Joyce, Professor of Mathematics at Clark University. Here's the link to the original answer. Axioms come mainly in two different kinds—existential and universal. They often go along with definitions. For instance, an existential axiom says that something exists. In Euclid's Elements there's an axiom Euclid's Elements, Book I, Postulate 3 that says given two points, C and D, there exists a circle whose center is at the first point C and whose circumference passes through the second D. It's preceded by Euclid's Elements, Book I, Definitions 15-18 which define circles, centers, diameters, and circumferences. Other axioms are universal. Another example from Euclid: Euclid's Elements, Book I, Postulate 4: all right angles are equal. That's preceded by the definition Euclid's Elements, Book I, Definition 10 of right angles. Definitions aren't used to say things exist or something is true about things. They're used to make it easier to talk about things. Euclid didn't have a word for radius, but it would have made things easier. He called it a line from the center of the circle to the circumference. schzan A definition is a choice to call something by a specific name/reference/identifier/pointer. What is a name? A cognitive synonym for what people understand to be "equivalent". That is a long philosophical discussion.
In any sense and reference, Sinn und Bedeutung for Frege, what part of a name attaches to the thing being named? None. A name is an ablative concept. Another way to describe a definition is it's a convention. Let's call Pluto a planet. Now let's call it a dwarf planet. Define what a dwarf planet is. Does that definition "equate" to what Pluto means? Now you have a generally accepted convention. You can point to a planet, you can point to Pluto, all by the convention commonly understood. How you come to acquire that knowledge is another long thesis with many theories. The truth of a definition is beside the point. One can make a Russellian existence statement about a definition. But that sidesteps the issue of common understanding / cognitive synonym. You can't state anything about gravity until we all understand what is and isn't gravity. Axioms are facts / assumptions taken as true. One can assume: "Pluto is a planet" and then investigate to prove that Pluto does not meet the criteria to be a planet. Then one can assume: "Pluto is a dwarf planet" and lo, it meets the defined criteria to be a dwarf planet. daemondave Definitions aren't used to say things exist or something is true about things. They're used to make it easier to talk about things, whereas an axiom is a statement or proposition which is regarded as being established, accepted, or self-evidently true. Rahul Singal Here is my simple explanation of what I think the difference is between a definition and an axiom: An axiom is a rule or a "law of the land" that we decide we will follow/enforce. For example, the axiom of choice in set theory says if you have any arbitrary collection of non-empty sets, you can always form a new set by picking one element from each of the existing sets. This is a rule we are saying will hold. It is a law we are choosing to enforce in mathematics (unless, of course, you decide to reject this axiom).
Now, once you have a set of axioms or laws, we start to get consequences of these. For example, with all of the axioms of set theory, we start to realize that when we work with sets, we see some instances where a collection of sets (let's index them by $i \in \Lambda$, so the collection is { $A_{i}$ }) satisfies $A_{i} \cap A_{j} = \emptyset$ if $i \neq j$ (i.e., every set in the collection does not share any elements with any other set). Then, we want to give this "occurrence in the wild" a name (I call it an occurrence in the wild because, as a result of the axioms, this happens, and I am imagining axioms are laws in some foreign math land). So, we "define" that a collection of sets {$A_{i}$} is pairwise disjoint if $A_{i} \cap A_{j} = \emptyset$ when $i \neq j$. All we are doing when we define something is really giving a name to something that already occurs as a result of the axioms. layman $\begingroup$ Note that normally ZFC includes the axiom of extensionality, which says precisely what you claimed to have defined. Also, if you do not have that axiom, then your definition isn't even enough to capture equality, in the sense that there are models that satisfy ZFC (with all equality replaced by your definition) minus extensionality plus its negation. So you should change your example. $\endgroup$ – user21820 Sep 15 '15 at 2:34 $\begingroup$ @user21820 Done. $\endgroup$ – layman Sep 15 '15 at 17:43 A definition is just a name you attach to something. For example you can attach the name "zero" to something. But for the definition to make sense that "something" must exist and that is usually guaranteed by some axiom or theorem. Aretino In a sense one can say that they are just different forms of postulation, one defines definitions and one defines axioms (see for example equivalence between natural deduction systems and formal axiomatic systems). In a further investigation, there can be (more or less) differences between the two.
1. Definitions define new concepts (based on previously defined concepts).
2. Axioms (usually) describe the behavior of (inter-related) concepts.
3. Definitions cannot be circular, while axioms in some cases can be.
4. Axioms can be in the form of templates or axiom-schemas (e.g. ZF), while definitions are not.
5. Definitions are finitistic, while axioms are not necessarily so.
6. According to the Aristotelian system, definitions should be based either on direct primitives or on previous definitions, while (other) axioms are not required to do that (see point 2 above).
7. Definitions do not use negative statements (i.e. "this is not so"), only positive forms of statements (e.g. "this is so and so"), while (other) axioms are not required to do so.
8. One can interchange (in a certain theory) (some) theorems with axioms (see also reverse mathematics), but theorems and definitions cannot be interchanged (see point 7 above).
(note: the above points are refinements in mathematical practice of what a definition achieves unlike other axioms, i.e. a definition does not define the unknown with the unknown; formally one can take the stance that they are equivalent, then there is a slight shift of meaning, but tenable nonetheless) Nikos M. $\begingroup$ Definitions cannot be circular, while axioms in some cases can be. How do you mean exactly? Can you give an axiom that is circular? I'm a bit confused by the sentence. $\endgroup$ – wythagoras Sep 9 '15 at 17:14 $\begingroup$ Also, what is meant by 'Definitions are finitistic, while axioms are not necessarily so.' (point 5) $\endgroup$ – wythagoras Sep 9 '15 at 17:15 $\begingroup$ @wythagoras, 1. definitions cannot define something with that something (cannot be circular in this sense), while (other) axioms describing behaviors can be circular in this sense (or recursive); similar reasoning applies for the finitistic part, a definition is finite in terms $\endgroup$ – Nikos M.
Sep 9 '15 at 17:17 $\begingroup$ @wythagoras, also note that in the start, is stated that these are just refinements of the concepts (as used in practice), one can take the stance that there is not actual difference (and one can do that) $\endgroup$ – Nikos M. Sep 9 '15 at 17:19
Is this the right formula for the total permutations of a set of dice? [closed] I know the total number of permutations (binomial) on tossing \$n\$ fair coins is given by: $$\mathrm{Total} = 2^n$$ What about dice? If I grab some dice and roll \$4\mathrm{d}2 + 4\mathrm{d}4 + 6\mathrm{d}3\$, is the total number of permutations given by this formula? $$\mathrm{Total} = 4^2 \times 4^4 \times 6^3$$ dice statistics okeefe \$\begingroup\$ This might be better as a Math.SE question. \$\endgroup\$ – Ian Miller May 6 '17 at 2:05 \$\begingroup\$ did you mean 4 to the power of 2, and 6 to the power of 3? would that not be rolling 2d4 and 3d6? \$\endgroup\$ – Tritium21 May 6 '17 at 8:07 \$\begingroup\$ Did you mean 4d2 + 4d4 + 6d3, or did you actually mean to ask about 2d4 + 4d4 + 3d6? (Few people I know could "grab some dice" and end up with any d2s or d3s, let alone many of them.) \$\endgroup\$ – SevenSidedDie May 8 '17 at 15:28 \$\begingroup\$ I'm voting to close this question as off-topic because it's a statistics and dice question with no direct relation to RPGs or RPG design. \$\endgroup\$ – user5834 Jul 14 '17 at 19:06 \$\begingroup\$ @WrongOnTheInternet I think it's on-topic because understanding dice probabilities and how to calculate them is understanding how to use dice, which is understanding how to use an RPG tool, which is on-topic. There's an ongoing meta on this, though, so I'll hold off on reopen votes. \$\endgroup\$ – Please stop being evil Jul 15 '17 at 5:40 Dice permutations are calculated as "Permutations with repetition/replacement," which means, intuitively speaking, that if you roll 2d6, getting a 6 on the first die does not prevent you from getting a 6 on the second die. (As opposed to "Permutations without repetition/replacement."
An example of that is putting six numbered slips of paper into an urn and drawing out two of them, literally without replacing the first slip before you draw out the second-- now, getting a 6 on the first slip does prevent you from getting a 6 on the second slip.) Calculating permutations with repetition is very easy. I'll build up the intuition in a few short steps: Consider rolling 1d6. The number of permutations is, trivially, 6: \begin{array}{r|llllll} \text{Die One} & \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} \\ \hline \end{array} I'll note in passing that \$ 6^1 \$ happens to be \$ 6 \$ and that we could use this for any single roll of some die with Y faces: $$ \text{1dY} \rightarrow Y^1 = Y $$ Consider rolling 2d6: \begin{array}{r|llllll} \text{Die Two\Die One} & \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} \\ \hline 1 & \text{1,1} & \text{1,2} & \text{1,3} & \text{1,4} & \text{1,5} & \text{1,6}\\ 2 & \text{2,1} & \text{2,2} & \text{2,3} & \text{2,4} & \text{2,5} & \text{2,6}\\ 3 & \text{3,1} & \text{3,2} & \text{3,3} & \text{3,4} & \text{3,5} & \text{3,6}\\ 4 & \text{4,1} & \text{4,2} & \text{4,3} & \text{4,4} & \text{4,5} & \text{4,6}\\ 5 & \text{5,1} & \text{5,2} & \text{5,3} & \text{5,4} & \text{5,5} & \text{5,6}\\ 6 & \text{6,1} & \text{6,2} & \text{6,3} & \text{6,4} & \text{6,5} & \text{6,6}\\ \end{array} You can count the entries in the table and come up with 36. But you can also think of this as extending the table for a 1d6 roll: Each of the six entries in the first table is turned into its own new, separate list with six unique entries. This is because no matter what we roll on the first die, we can get any of the six values on the second die. So we can simply calculate 6 times 6 entries = 36. If we generalize that to two dice, each with Y sides, that leaves us with: $$ \text{2dY} \rightarrow Y^2 $$ Consider rolling 3d6.
I'm not going to draw the big table, but extending what we did above, we turn each of the 36 table entries into, again, its own unique list of six more entries. (Because again, no matter what we roll on the first two dice, we can roll any of the six values for the third one.) For the specific case of 3d6 we would have 36 times 6 = 216 entries. This basic thought process holds for any die with Y sides that we toss X times, i.e., $$ \text{XdY} \rightarrow Y^X $$ Note that the X and the Y swap places on either side of the expression! This is not a mistake. But what about something weird, like 1d6 + 1d4? The same basic process: \begin{array}{r|llllll} \text{Die Two\Die One} & \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} \\ \hline 1 & \text{1,1} & \text{1,2} & \text{1,3} & \text{1,4} & \text{1,5} & \text{1,6}\\ 2 & \text{2,1} & \text{2,2} & \text{2,3} & \text{2,4} & \text{2,5} & \text{2,6}\\ 3 & \text{3,1} & \text{3,2} & \text{3,3} & \text{3,4} & \text{3,5} & \text{3,6}\\ 4 & \text{4,1} & \text{4,2} & \text{4,3} & \text{4,4} & \text{4,5} & \text{4,6}\\ \end{array} Just note here that each of the six entries from the first die is only matched with FOUR entries from the second die. In the expression below, I'm using Y values with subscripts $$ \text{1dY}_1 + \text{1dY}_2 \rightarrow Y_1^1 \times Y_2^1 $$ It doesn't matter how weird we get from there, as long as we're talking about simple dice, we just keep multiplying by the number of faces on the new die we've just added. The full formula in all its generality becomes: $$ \text{X}_1\text{dY}_1 + \text{X}_2\text{dY}_2 + \dots + \text{X}_N\text{dY}_N \rightarrow Y_1^{X_1} \times Y_2^{X_2} \dots Y_N^{X_N}$$ As a final note, the addition of a constant (e.g., 3d6 + 6) changes nothing, and the constant can be ignored for finding the number of permutations. Technically, one might consider it "a single sided die" and multiply by one, but that's a bit precious. 
For your specific example, there are 2,985,984 permutations: \begin{array}{r|llll} \ N & X_N & Y_N & {Y_N ^ {X_N}} \\ \hline 1 & \text{4} & \text{2} & \text{16} \\ 2 & \text{4} & \text{4} & \text{256} \\ 3 & \text{6} & \text{3} & \text{729}\\ \hline \text{product} & \text{ } & \text{ } & \text{2985984}\\ \end{array} Novak \$\begingroup\$ Addition of MathJax to site: +1, Novak approves. \$\endgroup\$ – Novak May 6 '17 at 21:00 Each die is an independent event. The correct number of permutations is \$ 2^4 4^4 3^6 \$. okeefe
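The closed-form product above is also easy to check by brute force. The short Python sketch below is my own (not from the original answers); each Y-sided die is modelled as `range(1, Y+1)`:

```python
from itertools import product
from math import prod  # Python 3.8+

def num_outcomes(pools):
    """pools = [(X, Y), ...] meaning 'roll X dice with Y faces each'.
    Closed form from the answer above: the product of Y**X."""
    return prod(y ** x for x, y in pools)

# Brute-force check on a small pool, 2d2 + 1d4 + 2d3:
dice = [2, 2, 4, 3, 3]
brute = sum(1 for _ in product(*(range(1, y + 1) for y in dice)))
assert brute == num_outcomes([(2, 2), (1, 4), (2, 3)])  # 4 * 4 * 9 = 144

# The pool from the question, 4d2 + 4d4 + 6d3:
print(num_outcomes([(4, 2), (4, 4), (6, 3)]))  # 2985984
```

The brute-force count grows as the product itself, so it is only feasible for small pools; the closed form costs nothing.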
3.2.2 Synthesising percussion sounds The sound examples in this chapter were all computed by the same approach. The system is characterised by the natural frequencies $\omega_n$ and Q-factors $Q_n$ of the set of modes, up to some chosen cutoff frequency so that we only have a finite number $N$ of modes to deal with. If mode $n$ has amplitude $a_n$, the waveform we compute and save as a sound file is simply $$f(t)=\sum_{n=1}^N~a_n \cos(\omega_n t) e^{-\omega_n t /(2 Q_n)} \tag{1}$$ where $Q_n = 1/(2\zeta_n)$ in terms of the damping ratio $\zeta_n$ defined in section 2.2.7. The amplitudes are chosen by a very simple strategy. First, the "instrument" is assumed to be set into motion using a hammer that behaves like the simple model presented in section 2.2.6. That model has a variable that governs the "hardness" of the hammer: the duration of the impact, given by a half-period of the vibration frequency $\Omega$ defined in 2.2.6. To represent this effect, $a_n$ includes a factor $\dfrac{\cos[\pi \omega_n/(2 \Omega)]}{(\Omega^2-\omega_n^2)}$. The second, rather ad hoc, factor included in the amplitudes is associated with the fact that we want to make sounds that are reasonably familiar. But the radiation of sound by vibrating structures is complicated (we will study it in Chapter 4). For the present purpose, none of this complication need be considered. Instead, the amplitude is simply scaled by a power of frequency, with a power $\alpha$ chosen to give an acceptable sound. The chosen value is $\alpha = -0.6$. So in total, $$a_n=\dfrac{\cos[\pi \omega_n/(2 \Omega)]}{(\Omega^2-\omega_n^2)}~\omega_n^\alpha.\tag{2}$$ There is a final factor that could have been included, but for the moment has been ignored. If we wanted to investigate the effect of hitting an instrument at different locations, we would use the result of eq. (11) from section 2.2.5 and include a factor $u_n(x)$, where $u_n$ is the $n$th mode shape and $x$ represents the chosen hammer position.
For this purpose the mode shapes need to be normalised according to eq. (10) of section 2.2.5. All modes are assigned the same Q-factor $Q_n$. For cases with a "hard" hammer, the duration of the impact $d=\pi / \Omega$ has the value 0.1 ms, while for cases with a "soft" hammer, $d=1$ ms. The value of $N$ is chosen to ensure that enough modes are included to cover the frequency range over which amplitudes are significant.
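Equations (1) and (2) translate almost line by line into code. The sketch below is my own (the sample rate, test duration, and peak normalisation are choices not specified in the text) and synthesises a single strike:

```python
import numpy as np

def percussion_sound(freqs, Qs, d=1e-4, alpha=-0.6, fs=44100, duration=2.0):
    """Sum of decaying modes, eq. (1), with amplitudes from eq. (2).

    freqs : natural frequencies omega_n in rad/s (assumed not equal to Omega)
    Qs    : Q-factors Q_n of the modes
    d     : hammer contact duration in seconds (0.1 ms "hard", 1 ms "soft")
    alpha : ad hoc radiation exponent (-0.6 in the text)
    """
    Omega = np.pi / d                     # contact time is a half-period of Omega
    t = np.arange(int(fs * duration)) / fs
    f = np.zeros_like(t)
    for wn, Qn in zip(freqs, Qs):
        # hammer-pulse factor times the power-of-frequency scaling, eq. (2)
        a = np.cos(np.pi * wn / (2 * Omega)) / (Omega**2 - wn**2) * wn**alpha
        # one decaying cosine per mode, eq. (1)
        f += a * np.cos(wn * t) * np.exp(-wn * t / (2 * Qn))
    return f / np.max(np.abs(f))          # normalise to unit peak for saving
```

The result can then be scaled to 16-bit integers and written out with a wav writer such as `scipy.io.wavfile.write`.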
MathOverflow is a question and answer site for professional mathematicians. It only takes a minute to sign up. Truel extended to n persons n players numbered 1~n play a shooting game. Their accuracy rates p1~pn are strictly between 0 and 1, and strictly increase from p1 to pn. This is common knowledge. Before the game starts, the referee arranges the n players in some order. When the game starts, players take turns to fire at one another according to that order. (For example, if n=4, and the referee arranges the players in order (3,4,1,2), when the game starts, 3 fires first, then 4 fires, then 1, then 2, then 3 again, so on and so forth, as long as they are alive) The last person left is the winner of the game. To be more specific, define $S_{i}$={1,2,3,...,i-1,i+1,i+2,...,n}. Let $S_{i}^{k}$ be any subset of $S_{i}$ that contains k elements (n>k>0). Then the strategy of player i is a function that maps $S_{i}^{k}$ to one of its elements, for any $S_{i}^{k}$. What this definition means is that, given any k players (excluding i himself) left, player i's strategy tells him whom to shoot first. Notice that we rule out the possibility that any player can hold fire in his turn: he must choose someone to shoot. After the referee arranges the firing order, all players must announce their strategies simultaneously. A player's payoff is his winning probability. Question 1: Is there always a Nash equilibrium in this game, for any firing order? Question 2: Suppose the firing order is (1,2,3,...,n). Which players have fixed optimal strategies with respect to changes in (p1,p2,...,pn)? (When n=3, all have fixed optimal strategies; when n=4, players 1, 2, and 4 have fixed optimal strategies) Furthermore, these fixed optimal strategies are intuitive and simple in the sense that they always instruct the player to fire at the most accurate person alive. I guess there could be regularities as n gets larger? At least can we say for player 1 this strategy is always optimal? EDIT: "dominant strategy" in Question 2 is changed to "fixed optimal strategy with respect to changes in probabilities", which is more appropriate. game-theory recreational-mathematics Eric
EDIT: "dominant strategy" in Question 2 is changed to "fixed optimal strategy with respect to changes in probabilities", which is more appropriate. game-theory recreational-mathematics EricEric This is a non-cooperative game of perfect information. In the absence of degeneracy, there is always a unique optimal pure strategy for each player. Note that your probability of hitting your target doesn't depend on who the target is, and the situation if you miss also doesn't depend on who the target is. Thus your only concern is what happens if you hit your target. You shoot at the target whose death would give you the highest probability of being the survivor. These probabilities for all participants can be computed by "dynamic programming", starting with the case of only one surviving participant and working backwards. The only complication is how ties might be broken. Robert IsraelRobert Israel $\begingroup$ @Robert: Thanks! I agree there's always a unique pure strategy for a player. And indeed with probabilities given, we can recursively determine everyone's optimal strategy. But this doesn't answer Quesiton 2 (which i've modified): Will the naive strategy always remain most players' optimal strategy as n gets larger? Or can we predict which players will always be able to adopt the naive strategy as optimal, as n gets larger? $\endgroup$ – Eric $\begingroup$ @Robert: Can you tell us how you programmed it, more specificly? Whose death maximizes your probability of survive is so entangled with what all others do at each level that there're to much probabilities to specify or compute. $\endgroup$ $\begingroup$ You have to consider the survival probabilities (under optimal play) of each player given the set of initial players and whose turn it is. So if there are initially $n$ players, there are $\sum_{j=1}^n j {n \choose j} = n 2^{n-1}$ scenarios (set of players + whose turn). Not bad at all if $n = 10$, but $n=30$ would be challenging. 
$\endgroup$ – Robert Israel $\begingroup$ Nice insight! I've explained the dynamic programming algorithm in more detail at cs.stackexchange.com/a/71133/755. (Cc: @Eric) $\endgroup$ – D.W. Consider a scenario with firing order $(1,2,\ldots,n)$ where players 1 to $n-3$ have hit probabilities so close to 0 that their only real purpose is to allow the top 3 players to waste shots, while players n-2 to n have hit probabilities very close to 1. Clearly player $n$ will be the target whenever player $n-2$ or $n-1$ makes a non-wasted shot, so his best chance at survival will always be to shoot player $n-1$ and hope that player $n-2$ misses. Therefore player $n-1$ will always shoot player $n$ and hope that player $n-2$ misses. On the other hand, whenever players $n-1$ and $n$ and some of $1$ to $n-3$ are still alive, player $n-2$ would be foolish to shoot player $n$ or $n-1$ (becoming the target for the next shot): instead he shoots a low-ranking player (presumably number $n-3$ if available), and next will get to shoot at the survivor of players $n-1$ and $n$. answered Aug 4, 2011 at 16:19 $\begingroup$ I tried the case $n=10$ with hitting probabilities $1/20, 3/20, \ldots, 19/20$. If my programming is correct, there are very many cases where the optimal strategy does not say to shoot at the most accurate person alive. For example, when all are still alive players 1 to 10 should target players 8, 7, 5, 2, 9, 8, 3, 5, 5, 9 respectively. $\endgroup$ $\begingroup$ @Robert: If I computed it correctly, with the same probabilities, if players are allowed to hold fire in his turn, then 1 to 10 should target: 4,5,5,2,3,H,1,1,7,8, where H means hold fire. Interestingly, anyone could be a first target, even if he's most inaccurate. $\endgroup$ – Eric
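The dynamic-programming scheme described in the comments can be written out in a few lines. This is my own illustrative Python (not Robert Israel's code): survivor sets are processed in increasing size, and within a fixed set the "everyone misses" loop is resolved by a fixed number of value-iteration sweeps, with ties broken by the first best target found.

```python
import itertools

def truel_survival(p):
    """Survival probabilities with firing order (0, 1, ..., n-1), everyone
    targeting whoever maximises his own survival probability.
    Assumes every p[i] is strictly positive so the value iteration converges."""
    n = len(p)
    values = {}   # (survivors_tuple, shooter) -> tuple of survival probabilities

    def next_alive(aset, i):
        j = (i + 1) % n
        while j not in aset:
            j = (j + 1) % n
        return j

    for size in range(1, n + 1):
        for alive in itertools.combinations(range(n), size):
            aset = set(alive)
            if size == 1:
                v = [0.0] * n
                v[alive[0]] = 1.0
                values[(alive, alive[0])] = tuple(v)
                continue
            V = {i: [0.0] * n for i in alive}
            for _ in range(500):          # error shrinks geometrically in rounds
                newV = {}
                for i in alive:
                    best = None
                    for tgt in aset - {i}:
                        rest = aset - {tgt}
                        hit = values[(tuple(sorted(rest)), next_alive(rest, i))]
                        miss = V[next_alive(aset, i)]
                        cand = [p[i] * h + (1 - p[i]) * m
                                for h, m in zip(hit, miss)]
                        if best is None or cand[i] > best[i]:
                            best = cand
                    newV[i] = best
                V = newV
            for i in alive:
                values[(alive, i)] = tuple(V[i])

    return values[(tuple(range(n)), 0)]   # all alive, player 0 shoots first
```

For two players with hit probabilities $p_0, p_1$ the exact answer is $v_0 = p_0/(1-(1-p_0)(1-p_1))$, which the iteration reproduces; the number of states is $n\,2^{n-1}$ as noted in the comment above.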
Warming Up to Enriched Category Theory, Part 1 It's no secret that I like category theory. It's a common theme on this blog, and it provides a nice lens through which to view old ideas in new ways — and to view new ideas in new ways! Speaking of new ideas, my collaborators and I are planning to upload a new paper on the arXiv soon. I've really enjoyed the work and can't wait to share it with you. But first, you'll have to know a little something about enriched category theory. (And before that, you'll have to know something about ordinary category theory... here's an intro!) So that's what I'd like to introduce today. It's a warm up, if you will. What is enriched category theory? As the name suggests, it's like a "richer" version of category theory, and it all starts with a simple observation. (Get your category theory hats on, people. We're jumping right in!) In a category, you have some objects and some arrows between them, thought of as relationships between those objects. Now in the formal definition of a category, we usually ask for a set's worth of morphisms between any two objects, say $X$ and $Y$. You'll typically hear something like, "The hom set $\text{hom}(X,Y)$ bla bla...." Now here's the thing. Quite often in mathematics, the set $\text{hom}(X,Y)$ may not just be a set. It could, for instance, be a set equipped with extra structure. You already know lots of examples. Let's think about linear algebra for a moment. If we have a pair of real vector spaces, say $V$ and $W$, then the set of linear transformations $V\to W$ has lots of structure: if $f,g\colon V\to W$ are linear transformations, then their sum $f+g$ is also a linear transformation from $V$ to $W$, and so is the scalar multiple $kf$ for any real number $k$. In fact, the point here is that the set $\hom(V,W)$ of linear transformations is itself a real vector space.
So the hom sets evidently have "richer" structure. They are enriched! Now, linear algebra is just one example. You can think of others! The set of continuous functions between a pair of topological spaces can itself be given a topology; the set of homomorphisms between a pair of abelian groups is itself an abelian group; and so on. And that's the main idea behind enriched category theory. (Well, that's the gist. The theory gets deep quickly.) You have a category $\mathsf{C}$, and the hom sets between the objects in $\mathsf{C}$ are themselves objects in some other category, which is often called the base category or the category over which $\mathsf{C}$ is enriched. In the examples above, the categories were each enriched over themselves, but it's totally fine to have a category $\mathsf{C}$ enriched over a different category. Admittedly, the graphic above isn't the whole story. For the math to really work out, we have to be a little careful with the axioms for composition of morphisms and identity morphisms, and so there's a little more to say. Indeed, there is a very formal definition of an enriched category, but I'm not sharing it just yet. We're still warming up! In fact, I want to dial things back even further and consider the following super simple scenario. Let's take it down a notch. Suppose we have a pair of objects $X$ and $Y$, and let's further suppose there is at most one morphism from $X$ to $Y.$ This is much simpler than our examples above from linear algebra, group theory, and topology, where in principle there could've been loads of morphisms. But we want to keep things simple for now. So, suppose we're in a situation where either there's an arrow $X\to Y$ or there's not. Now, let's also consider the possibility that that arrow can be "decorated" with — or simply replaced by — some number. Perhaps that number indicates the degree to which that arrow is there. Or perhaps it represents the amount of effort it takes to "get" from $X$ to $Y$. 
Or maybe it represents the probability (or some fuzziness*) of going from $X$ to $Y$. Or the distance it takes to travel from $X$ to $Y$. Or maybe it just represents the Boolean truth-value of whether or not that arrow is even there. Use your imagination! Imagination is good, but so is thinking systematically. So let's rope things in a bit. I really like those last three suggestions — truth values, distances, and probabilities — and I'd like us to be a little more formal about it. To that end, let's say the arrow $X\to Y$ can either be decorated with a number from the two-element set $\{0,1\}$ (thought of as truth values), or perhaps the unit interval $[0,1]$ (thought of as probabilities or some fuzziness), or perhaps the set of nonnegative extended reals $[0,\infty]$ (thought of as distances). See the analogy, here? Earlier, the collection of arrows from $X$ to $Y$ was an object in the category $\mathsf{Vect}$ of vector spaces, or the category $\mathsf{Top}$ of topological spaces, or the category $\mathsf{AbGrp}$ of abelian groups, or.... But now the "collection" consisting of the single arrow $X\to Y$ is basically just an element in the set $\{0,1\}$, or the set $[0,1]$, or the set $[0,\infty]$, or... See where we're going with this? I'll summarize it as a question and answer: Question: Is there a way to view the sets $\{0,1\}$ and $[0,1]$ and $[0,\infty]$ as categories so that the number "$\text{hom}(X,Y)$" is actually an object in that category? And is there a more formal way to talk about categories "enriched" over these categories? Answer: YES and YES! And rather than belaboring this point any longer, let's cut to the chase. Ramping back up. Each of the three sets above is an example of a preordered set, a very easy-to-understand kind of category. A preordered set, or simply a preorder, is a set equipped with a reflexive and transitive relation typically denoted by $\leq$.
If the relation is also antisymmetric, then it's actually a partially-ordered set, i.e. a poset. But it turns out that just having a preorder means you automatically have yourself a category. The objects are elements in the set, and morphisms are provided by $\leq$. Identity morphisms are provided by reflexivity, and composition is provided by transitivity. So, every preorder is a category. In particular, there is a category $\{0\leq 1\}$ of truth values, whose only two objects are the numbers $0$ and $1$ and where the only non-identity morphism is $0\leq 1$. The unit interval $[0,1]$ is also a category, since it's a preorder with the usual ordering $\leq$. You can think of having a morphism $0.3\to 0.75$ since $0.3\leq 0.75$. And the nonnegative extended reals $[0,\infty]$ are also a preorder, but for historical reasons (i.e. Lawvere — we'll get to him later!), we'll view it as a category where there's an arrow between real numbers $a\to b$ whenever $a\geq b$, which is the opposite of the usual ordering. Now a word of caution: don't let the simplicity of these preorders fool you. Categories enriched over them pave the way for tons of nice examples, and lovely theory, and current threads of research. But wait — what exactly IS a category enriched over a preorder? I'll tell you next time. There's so much more to say, but this post is already quite long. *If you're already an expert on enriched category theory, then you'll know that when folks enrich over $[0,1]$, they're usually thinking about fuzzy logic. And you'll also know that fuzzy logicians are pretty adamant that you not think of elements in $[0,1]$ as probabilities. But let's allow ourselves to be flexible about this.
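As a quick sanity check (my own aside, not part of the original post), the claim that a preorder is a category boils down to reflexivity giving identity arrows and transitivity giving composition, which is easy to spot-check numerically on $([0,1],\leq)$:

```python
import random

# In the preorder ([0,1], <=) there is at most one arrow x -> y,
# and it is present exactly when x <= y.
leq = lambda x, y: x <= y

pts = [random.random() for _ in range(40)] + [0.0, 1.0]

for x in pts:
    assert leq(x, x)                     # identity arrow at x: reflexivity
for x in pts:
    for y in pts:
        for z in pts:
            if leq(x, y) and leq(y, z):  # a composable pair of arrows
                assert leq(x, z)         # their composite exists: transitivity

# Lawvere's [0, infinity] works the same way with the order reversed:
geq = lambda a, b: a >= b                # arrow a -> b exactly when a >= b
assert geq(5.0, 2.0) and not geq(2.0, 5.0)
```

There is nothing to compute beyond the relation itself, which is exactly the point: a preorder is about as small as a category can get.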
Based on virtual beamforming cooperative jamming with Stackelberg game for physical layer security in the heterogeneous wireless network Shuanglin Huang ORCID: orcid.org/0000-0002-6860-68481, Li Zhu1 & Sanjun Liu1 Physical layer security is a technical approach developed in recent years to solve the problem of secure information transmission in wireless communication networks. As one of the physical layer security techniques, cooperative jamming often requires collaborating nodes to actively cooperate with the nodes that have secure communication requirements. In a heterogeneous wireless network environment, the wireless nodes are relatively independent, their relationships are both cooperative and competitive, and the nodes are selfish. In this paper, we study information transmission between a source node and a destination node, in which the jamming nodes cooperate to form a virtual beam directed at the malicious eavesdropping node, so as to achieve physical layer secure communication. First, the benefit-sharing relationship between the source node and the cooperative jamming nodes is modeled as a Stackelberg game: the source node pays for the power consumed by the cooperative jamming nodes, which motivates them to participate actively. Then, the competition among the cooperative nodes is modeled as a non-cooperative game, which drives each node to price its consumed power reasonably. When the secrecy rate between the source node and the destination node is fixed, the power allocation of the source and cooperative nodes and the equilibrium power prices exist and are unique. Through the joint optimization of the two games, the power pricing and power allocation can be dynamically optimized as the network environment changes.
The simulation results show that the dynamic power allocation and dynamic power pricing converge well and offer the source node guidance on the selection of cooperative nodes and their number. With the rapid development of wireless communication technology, secure information transmission in wireless networks is becoming more and more important. In recent years, physical layer security technology has become a research hotspot in the field of information security, because it does not rely on data encryption and encapsulation yet provides information-theoretic security for transmission [1, 2]. For an additive-noise degraded wiretap channel, the secrecy capacity CS is CS = CM − CE, where CM and CE are the capacities of the main channel and the wiretap channel, respectively. Wyner [3] did early research work in this area. His research shows that the source node and destination node can exchange secret information at a non-zero rate while the eavesdropper learns nothing. However, when the channel between the source node and its destination node is worse than that between the source node and the eavesdropper, the secrecy capacity between the source node and its destination node can be zero. Early wireless communication was mainly point-to-point, and the wireless nodes were basically single-antenna devices with a single function; under these conditions the achievable secrecy capacity is often zero. Physical layer security technology exploits the randomness, time variability, and reciprocity of wireless channels, and it can ensure that the information exchanged by the two legitimate parties is not leaked to an eavesdropper even when one is present [3]. The rapid progress of physical layer techniques in wireless communication has since promoted the emergence of new forms of the wiretap channel.
For example, the antenna-array wiretap channel [4, 5], the orthogonal frequency division multiplexing (OFDM) wiretap channel [6,7,8], and the relay cooperative wiretap channel [9,10,11] can all achieve an effective secrecy capacity. According to reference [4], when the degrees of freedom of the artificial noise exceed those of the eavesdropper's received signal, the eavesdropper cannot separate the secret information from the artificial noise in the received signal, and the artificial noise method can achieve a certain secrecy rate. Reference [5] studied how to introduce the idea of a frequency diverse array into an OFDM transmitter and form an effective physical layer secure communication capacity in free space. Reference [6] studied the maximum achievable secrecy rate of an OFDM system through appropriate power allocation. Reference [7] took the max-min fairness of the secrecy rate as the optimization objective and studied how to allocate channels and power among multiple users in the downlink of an OFDM-based cellular network in the presence of an eavesdropper node. Reference [8] studied the power allocation problem for wireless users in the OFDM downlink that jointly considers energy harvesting and secret information decoding. In reference [9], the authors studied the physical layer security of a relay network with multiple relay nodes; with the goal of maximizing the secrecy rate, several different cooperative mechanisms were proposed. Reference [10] studied how to use cooperative nodes to send blocking signals to suppress information leakage to an eavesdropper. For the scenario of multiple relay nodes and multiple eavesdropping nodes, in which the relays adopt decode-and-forward, reference [11] used a finite-rate feedback scheme to study the resource allocation of the wireless source node.
When the channel quality between the legitimate users is inferior to that of the eavesdropping channel, an effective secure transmission rate cannot be obtained. The cooperative jamming mechanism was therefore proposed: it degrades the quality of the eavesdropping channel by means of artificial interference and impairs the listening ability of the eavesdropping node. In reference [9], cooperative jamming schemes that improve the physical layer secrecy rate of wireless communication through cooperative nodes are studied. Reference [12] studied how to use cooperative relay nodes to improve the physical layer secrecy rate by combining decode-and-forward with cooperative jamming. Reference [13] discussed the main techniques and difficulties in the physical layer security of OFDM communication systems. In that paper, OFDM beamforming was briefly introduced, and its robustness in the presence of noise and multipath fading was reviewed; the robustness of OFDM beamforming under various noise jamming attacks was then discussed; finally, the latest jamming attack techniques were explored, and some potential anti-jamming measures to improve robustness and reliability were pointed out. Building on reference [14], reference [15] studied time-domain artificial noise generation for physical layer security in multiple-input multiple-output (MIMO) OFDM systems; it removes the limitation that the number of transmitter antennas must be less than the number of legitimate receiver antennas. For an OFDM access network in which there exist a source node, multiple untrusted nodes, and a friendly jamming node, reference [16] studied how friendly jamming can improve the sum secrecy rate or the fairness of the whole system.
However, the wireless collaboration nodes in a heterogeneous wireless network are selfish, and an incentive mechanism is needed to ensure their participation. Game-based cooperative schemes can motivate selfish relay nodes to participate in cooperation [17,18,19,20]. Han and Zhang [17, 18] respectively analyzed the game performance of a system with two cooperative jamming nodes and of a system in which multiple users share a single cooperative jamming node. Given the high communication-quality requirements of legitimate users and the limited energy of cooperative nodes, multiple collaboration nodes are required to provide the jamming service. In order to allocate the compensation and improve the energy efficiency of the cooperative nodes, a Stackelberg game-based power allocation scheme with optimal energy efficiency was proposed; the scheme used a two-tier game strategy in which the first-layer game determines the optimal payment compensation and the second-layer game handles compensation allocation and power adjustment among the cooperative nodes. Reference [19] proposed a scheme for optimal energy-efficiency compensation and power allocation based on Stackelberg game theory; the two-level game model proved that a unique globally optimal energy efficiency exists and gave a closed-form solution of the optimal power allocation. Reference [20] studied subcarrier allocation and cooperative partner selection based on a Nash bargaining game for physical layer security in OFDM wireless networks. This paper studies the cooperation of multiple cooperative jamming nodes in a heterogeneous wireless network environment: they form a virtual beam that helps secure information transmission between the source node and the destination node and prevents an eavesdropper node from intercepting useful information.
For example, in a heterogeneous network environment where a wireless sensor network and a wireless fidelity (WiFi) network coexist, the wireless sensor network often needs to communicate important information through the WiFi network. The joint points connecting the wireless sensor network and the WiFi network are crucial; there are many trusted wireless sensor nodes around them, which can be used to jam cooperatively during important information exchanges and prevent malicious nodes from eavesdropping. Similar situations also occur in other wireless communication networks such as cellular networks. In addition, in order to encourage potential trusted wireless nodes to participate in collaboration and to optimize the overall energy consumption, on the one hand, the source node needs to pay for the power consumed by the cooperative jamming nodes; on the other hand, the jamming nodes participating in cooperation can price their power cost reasonably and adjust the price dynamically according to the importance of their own energy, which depends on the number of participating nodes, the node's own channel state information, and its position relative to the source node, the destination node, and the eavesdropping node. Therefore, the cooperation between the source node and the cooperative nodes is modeled as a Stackelberg game, and the competition and cooperation among the jamming nodes is modeled as a non-cooperative game. The rest of this paper is organized as follows: the second part presents the system model, the third part the game modeling, the fourth part the power allocation strategy and the dynamic power pricing scheme, the fifth part the simulations, and the sixth part the conclusions. In Fig.
1, there is a source-destination node pair whose communication is assisted by N jamming nodes (J i , i∈N, N = {1,2,…,N}) in the presence of an eavesdropper E. The eavesdropping node constantly eavesdrops on the information sent by the source node. When the quality of the eavesdropping channel between the source node and the eavesdropping node is weaker than that of the main channel, the two legitimate parties can realize physical layer secure communication. When the main channel quality is weaker than that of the eavesdropping channel, some cooperative jamming nodes must be asked to send artificial interference signals that degrade the eavesdropping channel, so as to create a secure communication environment. Here, the N cooperative jamming nodes J1,…,J N are each equipped with an omnidirectional single antenna to transmit and receive data, and they jointly implement beamforming to interfere with the eavesdropping node. The system model for cooperative transmission with terminal S transmitting information to destination D The whole communication process consists of two parts in the cooperative jamming scenario. First, the source node sends its signal to the destination node with power Ps; meanwhile, the information is also overheard by the eavesdropping node. The channel gains of the links S → D and S → E are |h0|2 and |g0|2, respectively. Second, all the jamming nodes jointly send an artificial interference signal with power PJ. The weight vector with which the cooperative nodes transmit the interference signal is wJ (N × 1); h (N × 1) denotes the channel vector between the N jamming nodes and the destination node, and g (N × 1) denotes the channel vector between the N jamming nodes and the eavesdropping node, defining R h = hh† and R g = gg†.
In addition, all communication channels are assumed to be ergodic, flat-fading, and quasi-static. It is assumed that the source node can obtain the instantaneous channel information of each communication channel, and that the noise power at both the eavesdropping node and the destination node is σ2. In this paper, variables are written as follows: boldface capital letters denote matrices, and boldface lowercase letters denote column vectors. The conjugate, transpose, and conjugate transpose of a matrix are denoted by (·)∗, (·)T, and (·)†, respectively. In the proposed scheme, the N trusted jamming nodes transmit artificial interference signals completely independently of the source node, with the purpose of confusing the eavesdropping node; this helps the secure communication between the source node and its destination node. The cooperative jamming nodes participating in the cooperative transmission weight and transmit the artificial interference signal, denoted by the vector z. In this way, the signal received at the destination can be expressed as follows $$ {y}_d=\sqrt{P_s}{h}_0x+{\mathbf{h}}^{\dagger }{\mathbf{w}}_Jz+{n}_d $$ And the signal received at the eavesdropping node is $$ {y}_e=\sqrt{P_s}{g}_0x+{\mathbf{g}}^{\dagger }{\mathbf{w}}_Jz+{n}_e $$ where n d and n e represent the noise received at the destination node and the eavesdropping node, respectively.
Furthermore, the information rates obtainable at the destination node and the eavesdropping node, denoted R d and R e respectively, are expressed as follows $$ {R}_d=\frac{1}{2}\log \left(1+\frac{P_s{\left|{h}_0\right|}^2}{\sigma^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_h{\mathbf{w}}_J}\right) $$ $$ {R}_e=\frac{1}{2}\log \left(1+\frac{P_s{\left|{g}_0\right|}^2}{\sigma^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_g{\mathbf{w}}_J}\right) $$ As a result, in the presence of an eavesdropping node, the secrecy rate obtainable at the destination node is $$ {R}_s=\max \left\{0,{R}_d-{R}_e\right\} $$ In this paper, only the case R d > R e is considered, so the above formula can be further expressed as $$ {R}_s=\frac{1}{2}\log \left(\frac{\sigma^2+{P}_s{\left|{h}_0\right|}^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_h{\mathbf{w}}_J}{\sigma^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_h{\mathbf{w}}_J}\cdot \frac{\sigma^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_g{\mathbf{w}}_J}{\sigma^2+{P}_s{\left|{g}_0\right|}^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_g{\mathbf{w}}_J}\right) $$ Problem description and game modeling In practical applications, the secrecy rate demand of the user link must be guaranteed. Assuming that the user's secrecy rate requirement is \( {R}_s^0 \), the demand is satisfied when \( {R}_s\ge {R}_s^0 \) holds. As a result, the key problem is to minimize the payment of the source node under the constraint \( {R}_s\ge {R}_s^0 \). The nodes in a wireless collaboration network obviously belong to different individuals and are selfish. As a result, the source node needs to take measures to encourage potential collaboration nodes to participate in collaboration and jam the eavesdropping node. At the same time, the source node needs to select the collaboration nodes most beneficial to itself.
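The secrecy-rate expressions above can be illustrated numerically. This is a minimal sketch in which all channel values, powers, and the noise level are made-up assumptions, not the paper's simulation settings:

```python
import numpy as np

# Destination rate R_d, eavesdropper rate R_e, and secrecy rate
# R_s = max(0, R_d - R_e) for a given source power and jamming weights.
# Note w_J^dag R_h w_J = |h^dag w_J|^2 since R_h = h h^dag (same for R_g).

def secrecy_rate(Ps, h0, g0, h, g, wJ, sigma2):
    jam_d = abs(np.vdot(wJ, h)) ** 2      # jamming power seen at destination
    jam_e = abs(np.vdot(wJ, g)) ** 2      # jamming power seen at eavesdropper
    Rd = 0.5 * np.log2(1 + Ps * abs(h0) ** 2 / (sigma2 + jam_d))
    Re = 0.5 * np.log2(1 + Ps * abs(g0) ** 2 / (sigma2 + jam_e))
    return max(0.0, Rd - Re)

# Toy example: jamming aimed only at the eavesdropper (w_J orthogonal to h),
# which is exactly the null-steering constraint imposed later in the paper.
h = np.array([1.0, 1.0])
g = np.array([1.0, -1.0])
wJ = np.array([0.5, -0.5])                # w_J^dag h = 0, w_J^dag g = 1
rs = secrecy_rate(Ps=1.0, h0=1.0, g0=1.0, h=h, g=g, wJ=wJ, sigma2=0.1)
```

With equally strong main and wiretap channels, the jamming directed at the eavesdropper alone is what makes the secrecy rate positive.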
According to the behavior characteristics of the source node and the cooperative nodes, a distributed resource allocation scheme based on game theory is used for the analysis. The source node can be regarded as a buyer whose purpose is to pay as little as possible while meeting the link secrecy rate requirement. Let Us denote the payment of the source node, defined as a linear function of the transmission powers: $$ {U}_s={v}_s{P}_s+\sum \limits_{m=1}^N{v}_{J_m}{P}_{J_m} $$ where vs and \( {v}_{J_m} \) respectively denote the power price of the source node S and of the cooperative jamming node J m , and \( {P}_{J_m} \) denotes the power purchased by the source node from the cooperative jamming node J m to interfere with the eavesdropping node. By jointly choosing the transmission power of the source node and of the cooperative jamming nodes, the source node always minimizes its own payment. As a result, the optimization problem for the source node can be expressed as the following formula: $$ \underset{R_s\ge {R}_s^0}{\min }{U}_s={v}_s{P}_s+\sum \limits_{m=1}^N{v}_{J_m}{P}_{J_m} $$ where \( \mathbf{P}=\left\{{P}_s,{P}_{J_1},{P}_{J_2},\dots \dots {P}_{J_N}\right\} \) is the power vector, and $$ {R}_s\left(\mathbf{P}\right)=\frac{1}{2}\log \left(\frac{\sigma^2+{P}_s{\left|{h}_0\right|}^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_h{\mathbf{w}}_J}{\sigma^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_h{\mathbf{w}}_J}\cdot \frac{\sigma^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_g{\mathbf{w}}_J}{\sigma^2+{P}_s{\left|{g}_0\right|}^2+{\mathbf{w}}_J^{\dagger }{\mathbf{R}}_g{\mathbf{w}}_J}\right) $$ The cooperative jamming nodes can be considered sellers. Their goal is not only to recover, through the source node's payment, the cost of participating in collaboration, but also to gain as much extra benefit as possible by competing with each other.
Then, the utility function of the cooperative jamming node J m can be defined as $$ {U}_{J_m}=\left({v}_{J_m}-{c}_{J_m}\right){P}_{J_m} $$ where \( {c}_{J_m} \) is the power cost of the cooperative jamming node J m . As a result, the revenue optimization problem of the cooperative jamming nodes can be expressed as $$ \underset{0<{P}_{J_m}\le {P}_{\mathrm{max}}}{\max }{U}_{J_m},m=1,2,\dots, N $$ In this network model, in order to maximize its profit, each cooperative jamming node must compete not only with the other jamming nodes but also with the source node. The source node optimizes the power allocation between itself and the jamming nodes based on the power prices provided by the cooperative jamming nodes. Each jamming node must provide the optimal power price to maximize its utility, and the jamming nodes compete with one another by continually adjusting their power prices. As a result, the source node can be regarded as the leader of the game and the jamming nodes as followers. There is thus a Stackelberg game between the source node and the jamming nodes, while the jamming nodes play a non-cooperative game among themselves [21]. Lemma 1: Let \( {\mathbf{w}}_J^{\dagger}\mathbf{g}=\mu \) and \( {\mathbf{w}}_J^{\dagger}\mathbf{h}=0 \). The solution of the following problem [9] $$ \min {\mathbf{w}}_J^{\dagger }{\mathbf{w}}_J $$ can be expressed as $$ {\mathbf{w}}_J=\mu \left[\mathbf{g}\kern0.5em \mathbf{h}\right]{\left[\begin{array}{cc}{\mathbf{g}}^{\dagger}\mathbf{g}& {\mathbf{g}}^{\dagger}\mathbf{h}\\ {}{\mathbf{h}}^{\dagger}\mathbf{g}& {\mathbf{h}}^{\dagger}\mathbf{h}\end{array}\right]}^{-1}\left[\begin{array}{c}1\\ {}0\end{array}\right] $$ The power and price selection method Power allocation method We first fix P s and find the weights that minimize the payment of the source node to the jamming nodes. Then, we find the value of P s that minimizes the overall payment.
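Lemma 1 can be checked numerically. This is a sketch with randomly generated channel vectors (all values are illustrative assumptions): the Gram matrix of $[\mathbf{g}\ \mathbf{h}]$ is inverted to build the minimum-norm weight vector satisfying both constraints.

```python
import numpy as np

# Minimum-norm jamming weights with w_J^dag g = mu and w_J^dag h = 0,
# following the closed form of Lemma 1:
#   w_J = mu * [g h] * Gram^{-1} * [1, 0]^T,
#   Gram = [[g^dag g, g^dag h], [h^dag g, h^dag h]].

def min_norm_weights(g, h, mu):
    B = np.column_stack([g, h])           # N x 2 matrix [g h]
    gram = B.conj().T @ B                 # 2 x 2 Hermitian Gram matrix
    return mu * B @ np.linalg.solve(gram, np.array([1.0, 0.0]))

rng = np.random.default_rng(0)
N, mu = 5, 2.0
g = rng.normal(size=N) + 1j * rng.normal(size=N)
h = rng.normal(size=N) + 1j * rng.normal(size=N)
wJ = min_norm_weights(g, h, mu)
# wJ now satisfies w_J^dag g = mu (jamming gain toward the eavesdropper)
# and w_J^dag h = 0 (no interference at the destination).
```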
In practical applications, it is difficult to obtain the global information of malicious eavesdropping nodes. In addition, note that problem (8) involves the product of two coupled generalized eigenvector problems and is generally quite difficult to solve. In order to simplify the analysis, we add a constraint that completely eliminates the interference signal at the destination, i.e., $$ {\mathbf{w}}_J^{\dagger}\mathbf{h}=0 $$ Thus, the optimization problem of Eq. (8) can be expressed as $$ {\displaystyle \begin{array}{l}\underset{R_s\ge {R}_s^0}{\min }{U}_s={v}_s{P}_s+\sum \limits_{m=1}^N{v}_{J_m}{P}_{J_m}\\ {} st\Big\{\begin{array}{l}{\mathbf{w}}_J^{\dagger}\mathbf{h}=0\\ {}{\mathbf{w}}_J^{\dagger}\mathbf{g}=\mu \end{array}\operatorname{}\end{array}} $$ where \( \mu =\sqrt{\frac{P_s{\left|{g}_0\right|}^2}{4^{-{R}_s^0}\left(1+{P}_s{\left|{h}_0\right|}^2/{\sigma}^2\right)-1}-{\sigma}^2}. \) Under the assumption that P s is a constant value, the optimal power allocation of the jamming nodes participating in the cooperation is obtained. In order to solve the optimization problem of Eq. (15), let \( \tilde{\mathbf{h}}=\left\{\frac{h_{J_1}}{\sqrt{v_{J_1}}},\frac{h_{J_2}}{\sqrt{v_{J_2}}},\dots \dots, \frac{h_{J_N}}{\sqrt{v_{J_N}}}\right\} \), \( \tilde{\mathbf{g}}=\left\{\frac{g_{J_1}}{\sqrt{v_{J_1}}},\frac{g_{J_2}}{\sqrt{v_{J_2}}},\dots \dots, \frac{g_{J_N}}{\sqrt{v_{J_N}}}\right\} \) and \( {\tilde{\mathbf{w}}}_J=\left(\sqrt{v_{J_1}}{w}_{J_1},\sqrt{v_{J_2}}{w}_{J_2},\dots \dots, \sqrt{v_{J_N}}{w}_{J_N}\right) \). Then, the optimization problem of Eq.
(15) can be further transformed into $$ {\displaystyle \begin{array}{l}\underset{R_s\ge {R}_s^0}{\min }{U}_s={v}_s{P}_s+{\overset{\sim }{\mathbf{w}}}_J^{\dagger }{\overset{\sim }{\mathbf{w}}}_J\\ {} st\Big\{\begin{array}{l}{\overset{\sim }{\mathbf{w}}}_J^{\dagger}\overset{\sim }{\mathbf{h}}=0\\ {}{\overset{\sim }{\mathbf{w}}}_J^{\dagger}\overset{\sim }{\mathbf{g}}=\mu \end{array}\operatorname{}\end{array}} $$ From the above formula, it can be seen that \( {\tilde{\mathbf{w}}}_J^{\dagger}\tilde{\mathbf{g}} \) is a positive real number. According to Lemma 1, \( {\tilde{\mathbf{w}}}_J \) can first be expressed as a function of μ: $$ {\tilde{\mathbf{w}}}_J=\frac{\mu \left({\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{h}}\tilde{\mathbf{g}}-{\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{g}}\tilde{\mathbf{h}}\right)}{{\tilde{\mathbf{g}}}^{\dagger}\tilde{\mathbf{g}}\left({\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{h}}\right)-{\tilde{\mathbf{g}}}^{\dagger}\tilde{\mathbf{h}}\left({\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{g}}\right)} $$ Therefore, it can be further obtained that $$ {\left\Vert {\tilde{\mathbf{w}}}_J\right\Vert}^2={k}_0{\mu}^2 $$ $$ {P}_{J_m}={\left\Vert {w}_{J_m}\right\Vert}^2=\frac{\mu^2{k}_{m1}}{{\left\Vert {k}_{m2}{v}_{J_m}+{k}_{m3}\right\Vert}^2} $$ where the expressions for k0, km1, km2, and km3 are as follows $$ {k}_0={\left\Vert \frac{\left({\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{h}}\tilde{\mathbf{g}}-{\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{g}}\tilde{\mathbf{h}}\right)}{{\tilde{\mathbf{g}}}^{\dagger}\tilde{\mathbf{g}}\left({\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{h}}\right)-{\tilde{\mathbf{g}}}^{\dagger}\tilde{\mathbf{h}}\left({\tilde{\mathbf{h}}}^{\dagger}\tilde{\mathbf{g}}\right)}\right\Vert}^2 $$ $$ \left\{\begin{array}{l}{k}_{m1}={\left\Vert {g}_{J_m}\sum \limits_{i=1,i\ne m}^N\frac{h_{J_i}^{\dagger }{h}_{J_i}}{v_{J_i}}-{h}_{J_m}\sum \limits_{i=1,i\ne m}^N\frac{h_{J_i}^{\dagger }{g}_{J_i}}{v_{J_i}}\right\Vert}^2\\ {}{k}_{m2}=\sum \limits_{i=1,i\ne m}^N\frac{g_{J_i}^{\dagger }{g}_{J_i}}{v_{J_i}}\sum \limits_{i=1,i\ne m}^N\frac{h_{J_i}^{\dagger }{h}_{J_i}}{v_{J_i}}-{\left(\sum \limits_{i=1,i\ne m}^N\frac{h_{J_i}^{\dagger }{g}_{J_i}}{v_{J_i}}\right)}^2\\ {}{k}_{m3}=\sum \limits_{i=1,i\ne m}^N\left(\frac{h_{J_m}^{\dagger }{h}_{J_m}{g}_{J_i}^{\dagger }{g}_{J_i}}{v_{J_i}}+\frac{g_{J_m}^{\dagger }{g}_{J_m}{h}_{J_i}^{\dagger }{h}_{J_i}}{v_{J_i}}-\frac{2{h}_{J_m}^{\dagger }{g}_{J_m}{h}_{J_i}^{\dagger }{g}_{J_i}}{v_{J_i}}\right)\end{array}\right. $$ Therefore, Eq. (16) is further expressed in the following form with P s as the variable $$ \underset{R_s\ge {R}_s^0}{\min }{U}_s={v}_s{P}_s+\frac{k_0{P}_s{\left|{g}_0\right|}^2}{4^{-{R}_s^0}\left(1+{P}_s{\left|{h}_0\right|}^2/{\sigma}^2\right)-1}-{k}_0{\sigma}^2 $$ Equation (22) is a convex function of P s and has a unique optimal solution. Taking its first derivative with respect to P s and setting it to zero, the optimal solution of P s is obtained $$ {P}_s^{\ast }=\frac{\sqrt{\left(1-{4}^{-{R}_s^0}\right){k}_0{\left|{g}_0\right|}^2}}{4^{-{R}_s^0}{\left|{h}_0\right|}^2/{\sigma}^2}\frac{1}{\sqrt{v_s}}+\frac{1-{4}^{-{R}_s^0}}{4^{-{R}_s^0}{\left|{h}_0\right|}^2/{\sigma}^2} $$ It can be seen that the power of the source node decreases as its power cost v s increases. However, the source node power P s will not fall below the second term \( \frac{1-{4}^{-{R}_s^0}}{4^{-{R}_s^0}{\left|{h}_0\right|}^2/{\sigma}^2} \) on the right-hand side, which equals the minimum power the source node would consume to achieve the secrecy rate \( {R}_s^0 \) in the absence of the eavesdropping node. Power price method for jamming nodes In this section, we discuss the power price strategy of the jamming nodes. Substituting \( {P}_{J_m} \) from Eq. (19), it can be obtained $$ \underset{0<{P}_m\le {P}_{\mathrm{max}}}{\max }{U}_{J_m}=\left({v}_{J_m}-{c}_{J_m}\right){P}_{J_m}^{\ast },m=1,2,\dots, N $$ It is noted that Eq.
(24) is a non-cooperative game between the cooperative jamming nodes, and there is a tradeoff between the utility \( {U}_{J_m} \) and the energy price \( {v}_{J_m} \) of a jamming node. If the jamming node J m has good channel conditions and its energy price is relatively low, the source node will ask for more cooperative power from the jamming node J m , so that \( {U}_{J_m} \) increases as \( {v}_{J_m} \) grows. When \( {v}_{J_m} \) grows beyond a certain value, it is no longer profitable for the source node to select J m , even if the channel of J m is dominant; J m then has to reduce \( {v}_{J_m} \), and \( {U}_{J_m} \) decreases as well. Therefore, every jamming node J m must dynamically set the optimal power price as the channel conditions change. Because the source node will only choose the most favorable jamming nodes, the optimal price is also influenced by the other jamming nodes. In addition, when the power cost of a cooperative jamming node increases (for example, when the node's own energy decreases, cooperation requests increase, or the maximum power limit is reached), the node's starting point for cooperation and its power price will rise. Property 1: When the power prices of the source node and of the other cooperative jamming nodes are fixed, the equilibrium point of the utility function \( {U}_{J_m} \) of every cooperative jamming node exists and is unique. Proof: Eq. (19) above shows that $$ {P}_{J_m}=\frac{\mu^2{k}_{m1}}{{\left({k}_{m2}{v}_{J_m}+{k}_{m3}\right)}^2} $$ Then, substituting this into the utility function of the jamming node yields $$ \underset{0<{P}_m\le {P}_{\mathrm{max}}}{\max }{U}_{J_m}=\frac{\left({v}_{J_m}-{c}_{J_m}\right){k}_{m1}{\mu}^2}{{\left({k}_{m2}{v}_{J_m}+{k}_{m3}\right)}^2},m=1,2,\dots, N $$ Taking the first-order derivative of \( {U}_{J_m} \) with respect to \( {v}_{J_m} \), it can be obtained.
$$ \frac{\partial {U}_{J_m}}{\partial {v}_{J_m}}=\frac{\mu^2{k}_{m1}\left({k}_{m3}+2{k}_{m2}{c}_{J_m}-{k}_{m2}{v}_{J_m}\right)}{{\left({k}_{m2}{v}_{J_m}+{k}_{m3}\right)}^3} $$ Then, taking the second-order derivative of the objective function \( {U}_{J_m} \) with respect to \( {v}_{J_m} \), it can be further obtained. $$ \frac{\partial^2{U}_{J_m}}{\partial {v}_{J_m}^2}=\frac{2{k}_{m2}{k}_{m1}{\mu}^2\left({k}_{m2}{v}_{J_m}-2{k}_{m3}-3{k}_{m2}{c}_{J_m}\right)}{{\left({k}_{m2}{v}_{J_m}+{k}_{m3}\right)}^4} $$ Using the first derivative \( \partial {U}_{J_m}/\partial {v}_{J_m} \) and the second derivative \( {\partial}^2{U}_{J_m}/\partial {v}_{J_m}^2 \) above, we can analyze the utility piecewise. (1) When \( 0<{v}_{J_m}<3{c}_{J_m}+2{k}_{m3}/{k}_{m2} \), \( {\partial}^2{U}_{J_m}/\partial {v}_{J_m}^2 \) is always less than zero. This shows that \( {U}_{J_m}\left(0<{v}_{J_m}<3{c}_{J_m}+2{k}_{m3}/{k}_{m2}\right) \) is a concave function, and there is a unique maximum value. (2) When \( {v}_{J_m}\ge 3{c}_{J_m}+2{k}_{m3}/{k}_{m2} \), \( \partial {U}_{J_m}/\partial {v}_{J_m} \) is always less than zero. This shows that \( {U}_{J_m}\left({v}_{J_m}\ge 3{c}_{J_m}+\frac{2{k}_{m3}}{k_{m2}}\right) \) decreases as \( {v}_{J_m} \) increases. Therefore, the maximum value of \( {U}_{J_m}\left({v}_{J_m}>{c}_{J_m}\right) \) exists and is unique, and Property 1 is proved. According to the above analysis, we take the derivative of \( {U}_{J_m} \) with respect to \( {v}_{J_m} \) and set it equal to zero, obtaining.
$$ \frac{\partial {U}_{J_m}}{\partial {v}_{J_m}}={P}_{J_m}^{\ast }+\left({v}_{J_m}-{c}_{J_m}\right)\frac{\partial {P}_{J_m}^{\ast }}{\partial {v}_{J_m}}=0 $$ After solving these equations for \( {v}_{J_m} \), the optimal prices of all the jamming nodes can in theory be obtained, expressed as $$ {v}_{J_m}^{\ast }={v}_{J_m}^{\ast}\left({\sigma}^2,{G}_{sd},{G}_{sJ_m},{G}_{J_md},\left\{{G}_{sJ_n}\right\},\left\{{G}_{J_nd}\right\},\left\{{v}_{J_n}\right\}\right),n\ne m $$ Solving Eq. (29) yields $$ {v}_{J_m}=2{c}_{J_m}+\frac{k_{m3}}{k_{m2}} $$ It is important to note that this value of \( {v}_{J_m} \) is obtained for a given source power and given power prices of the other relay nodes, so the result computed above is not yet globally optimal. A value of \( {v}_{J_m} \) meeting a given precision requirement can be obtained recursively by the gradient method. The steps are as follows: (1) compute the initial price \( {v}_{J_m}(0)=2{c}_{J_m}+\frac{k_{m3}}{k_{m2}} \) from (31); (2) use Eq. (24) to calculate \( {U}_{J_m}\left({v}_{J_m}(n)\right) \) and \( {U}_{J_m}\left({v}_{J_m}(n)+\Delta \right) \) (when the cost is \( {c}_{J_m}=1 \), the step size Δ is typically 0.01); (3) update the price by \( {v}_{J_m}\left(n+1\right)={v}_{J_m}(n)+\lambda \left[{U}_{J_m}\left({v}_{J_m}(n)+\Delta \right)-{U}_{J_m}\left({v}_{J_m}(n)\right)\right] \); (4) repeat (2) and (3) until \( \left|{v}_{J_m}\left(n+1\right)-{v}_{J_m}(n)\right| \) is less than the stopping threshold. In heterogeneous wireless networks, each node can often obtain only local channel state information. Therefore, it is difficult to provide the optimal value directly, whether for the source node's power allocation or for the jamming nodes' power pricing.
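The recursion in steps (1)-(4) above can be sketched as follows for a single jamming node with the other nodes' prices held fixed. The constants k1, k2, k3 (standing in for the node's k_m1, k_m2, k_m3) and the cost c are illustrative assumptions:

```python
# Finite-difference price update for one jamming node, other prices fixed.

def utility(v, c, k1, k2, k3, mu2=1.0):
    PJ = mu2 * k1 / (k2 * v + k3) ** 2         # power the source buys at price v
    return (v - c) * PJ                        # node revenue (v - c) * P_J

def update_price(c, k1, k2, k3, lam=5.0, delta=0.01, tol=1e-8, max_iter=10000):
    v = 2 * c + k3 / k2                        # step (1): initial price
    for _ in range(max_iter):
        # step (2): evaluate the utility at v and at v + delta
        grad = utility(v + delta, c, k1, k2, k3) - utility(v, c, k1, k2, k3)
        v_next = v + lam * grad                # step (3): price update
        if abs(v_next - v) < tol:              # step (4): stopping rule
            return v_next
        v = v_next
    return v

c, k1, k2, k3 = 1.0, 4.0, 1.0, 1.0
v_star = update_price(c, k1, k2, k3)
# The recursion settles near the stationary point v = 2c + k3/k2 = 3.
```

Because the utility is concave below 3c + 2k3/k2 and decreasing above it, the finite-difference update converges to the unique maximizer (up to an offset of order Δ).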
In this case, the source node must iterate with all the cooperative jamming nodes through "power pricing by each jamming node → power allocation → power pricing by each jamming node → power allocation by the source node." After several rounds, this process converges to the optimal value within the error requirement. Simulation and result In this part, the dynamic power allocation, dynamic pricing, cost-price changes, and convergence are simulated. The same system setting as in reference [9] is used, where the source node, the destination node, and the eavesdropping node are placed on a straight line. In order to illustrate the effect of distance (which is used to represent changes in the wireless channel environment), the channel between any two nodes is modeled as a line-of-sight channel. The path gain is expressed as d−c/2eiθ, where d is the distance between the two nodes (unit: meter), c = 3.5 is the path-loss exponent, and θ is a random phase uniformly distributed on [0, 2π]. In the following simulations, it is assumed that the distances between the cooperative jamming nodes are negligible relative to their distances from the source, destination, and eavesdropping nodes, so the distances from the cooperative jamming nodes to the source node, the destination node, and the eavesdropping node can be regarded as approximately the same. The source node and the destination node are fixed in the two-dimensional coordinate system at point S (0, 0) and point D (100, 0), respectively (unit: meter). The channel noise is additive white Gaussian noise with power 10−9 W. Each of the following simulations was carried out over 1000 independent Monte Carlo runs, and the results were averaged.
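The path-gain model d^(−c/2)·e^(iθ) described above can be sketched as follows. The node coordinates S(0, 0) and D(100, 0), the path-loss exponent, and the noise power follow the stated setup; the eavesdropper position and the RNG seed are illustrative choices for this example:

```python
import numpy as np

# Line-of-sight path gain d^{-c/2} * e^{j*theta}, with c = 3.5 and
# theta uniform on [0, 2*pi), as in the simulation setup above.

rng = np.random.default_rng(1)
c = 3.5                                        # path-loss exponent
S = np.array([0.0, 0.0])                       # source (meters)
D = np.array([100.0, 0.0])                     # destination
E = np.array([50.0, 0.0])                      # eavesdropper (example position)
sigma2 = 1e-9                                  # noise power, watts

def path_gain(a, b):
    d = np.linalg.norm(a - b)                  # node distance in meters
    theta = rng.uniform(0.0, 2.0 * np.pi)      # random phase
    return d ** (-c / 2) * np.exp(1j * theta)

h0 = path_gain(S, D)                           # S -> D channel
g0 = path_gain(S, E)                           # S -> E channel
# With E halfway between S and D, |g0| > |h0|: the eavesdropper's channel
# is stronger than the main channel, which is exactly the regime in which
# cooperative jamming is needed.
```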
Figure 2 shows the benefit \( {U}_{J_1} \) of cooperative jamming node J1 as a function of its power price \( {v}_{J_1} \). It can be seen from the diagram that the revenue \( {U}_{J_1} \) has a unique maximum. The revenue \( {U}_{J_1} \) of J1 varies with its power price \( {v}_{J_1} \) (there are 5 cooperative jamming nodes) Figures 3 and 4 show the power prices and utility functions of the cooperative jamming nodes versus the number of recursions, and their convergence (the cooperative jamming node is at coordinate (30, 5), and the eavesdropping node is at coordinate (50, 0)). As can be seen from the charts, after five rounds of dynamic adjustment of power pricing and power allocation, the power price and the utility function of each node quickly converge. The power price of the cooperative jamming nodes vs the recursion times The utility function vs the recursion times Figure 5 shows how the power price \( {v}_{J_1} \) of cooperative jamming node J1 changes dynamically with its position, and Fig. 6 shows the corresponding dynamic change in its revenue \( {U}_{J_1} \) (the cooperative jamming node moves along the straight line from the coordinate point (10, 5) to the coordinate point (90, 5), and the eavesdropping node is fixed at the coordinate point (50, 0)). When the jamming nodes are in different positions and their channel conditions differ, the optimal power price is adjusted dynamically. As can be seen from these two figures, the highest power price does not yield the highest income. The income of a jamming node is determined jointly by its power price and the power it consumes; only when a jamming node's power price is appropriate is the source node willing to assign it more power to participate in the collaboration.
A jamming node thus gains the most benefit by choosing its price reasonably. The power price \( {v}_{J_1} \) of cooperative jamming node vs its position change The revenue \( {U}_{J_1} \) of cooperative jamming node vs its position change Figure 7 shows the total payment US of the source node as the location of the cooperative jamming node changes (the cooperative jamming node moves along the straight line from the coordinate point (10, 5) to the coordinate point (90, 5), and the eavesdropping node is fixed at the coordinate point (50, 0)). As can be seen from Fig. 7, the total payment of the source node is lowest when the cooperative jamming node is nearest to the eavesdropping node. This is because the cooperative jamming node interferes with the eavesdropping node most effectively, and consumes the least power, when it is closest to the eavesdropping node. The source node total payment US vs the location change of cooperative jamming nodes Figure 8 shows the total payment US of the source node as the location of the eavesdropping node changes (the eavesdropping node moves along the straight line from the coordinate point (20, 0) to the coordinate point (90, 0)). The five simulation curves correspond to the cooperative jamming nodes being located at the coordinate points (30, 5), (40, 5), (50, 5), (60, 5), and (70, 5), respectively. It can be seen from the figure that the total payment of the source node is lowest when the eavesdropping node is nearest to the cooperative jamming node. Moreover, the closer the cooperative jamming node is to the source node, the lower the maximum value of each curve (the source node total payment US).
The source node total payment US vs the location change of the eavesdropping node Combining this with the previous analysis: when the channel state information of the eavesdropping node is known, it is most favorable to select the cooperative jamming node closest to the eavesdropping node; when the channel state information of the eavesdropping node is unknown, the worst case must be considered, and the best choice is then the cooperative jamming node closest to the source node. Figure 9 shows that the power price \( {v}_{J_1} \) of cooperative jamming node J1 increases as its cost price \( {c}_{J_1} \) increases. This is because a higher cost price forces the jamming node to raise its power price in order to obtain the same income. From the simulation results, the curve is approximately linear. The curve of the change in the power price with the cost price Figure 10 shows how the total payment US of the source node varies with the number of cooperative jamming nodes. The eavesdropping node is fixed at the coordinate point (50, 0), and the three simulation curves correspond to the jamming nodes being located at the coordinate points (30, 5), (50, 5), and (70, 5), respectively. It can be seen that the total payment US of the source node decreases as the number of cooperative jamming nodes increases. This is because a larger number of cooperating nodes leads to more intense competition among them, resulting in lower power prices for the cooperative jamming nodes, which inevitably reduces the cost the source node pays for collaboration. Therefore, the source node always wants more cooperative jamming nodes to participate in cooperative jamming in order to reduce its total payment. However, from Fig.
10, it can be seen that when the number of jamming nodes involved in cooperative jamming transmission reaches five, the total payment US of the source node decreases only slowly with further increases in the number of participating nodes. In practice, the participation of more nodes brings more communication overhead for exchanging channel state information. Therefore, it is not necessary for the source node to seek more than six jamming nodes to participate in the cooperative jamming transmission. The total payment US of the source node vs the number of cooperative jamming nodes In a word, the following conclusions can be drawn from the above simulations. The cooperative jamming nodes are both competitive and cooperative: each dynamically and independently optimizes its own power price according to changes in the network environment (including channel characteristics, competition intensity, and energy status). Correspondingly, the source node can optimize the power allocation according to the power pricing of the cooperative nodes, the channel characteristics, and its own energy status, so as to improve the dynamic adaptability of the physical-layer security rate. In a heterogeneous wireless network environment, this paper studied how, in the presence of an eavesdropping node, the source node and destination node cooperate with trusted jamming nodes to interfere with the eavesdropper, so as to achieve physical-layer secure communication. In order to encourage potential nodes to participate in cooperation and interfere with the eavesdropping of malicious nodes, the relationship between the source node and the cooperative jamming nodes was modeled as a Stackelberg game. The jamming power consumed by the cooperative jamming nodes is paid for at the market price, and the power allocation solution under the market price was given.
At the same time, the competitive relationship among the jamming nodes involved in cooperative jamming was modeled as a non-cooperative game. Each jamming node dynamically and independently adjusts its power cost price and market price based on its own channel characteristics, surplus energy, and consumed power. Through the joint optimization of these two games, the power pricing and power allocation can be dynamically optimized according to changes in channel characteristics and competition intensity. In this paper, the following cases were simulated. First, the simulation of the dynamic power allocation and the dynamic power pricing of each cooperative jamming node shows that the power allocation and the market price reach the optimal values after about five rounds of dynamic adjustment, with good convergence. Secondly, the dynamic changes of the locations of the cooperative jamming node and the eavesdropping node were simulated, and the results illustrate how cooperative nodes should be selected under different circumstances. Finally, it could be seen that the total payment US of the source node decreases with the increase in the number of participating cooperative jamming nodes. However, when the number of nodes involved in cooperative jamming transmission reaches five, the total payment US of the source node decreases only slowly as the number of cooperative jamming nodes increases further, which is of guiding significance for selecting the number of cooperative jamming nodes. M Bloch, J Barros, MRD Rodrigues, SW McLaughlin, Wireless information-theoretic security. IEEE Trans. Inf. Theory 54(6), 2515–2534 (2008). https://doi.org/10.1109/TIT.2008.921908 L Lai, H El Gamal, The relay-eavesdropper channel: cooperation for secrecy. IEEE Trans. Inf. Theory 54(9), 4005–4019 (2008). https://doi.org/10.1109/TIT.2008.928272 AD Wyner, The wire-tap channel. Bell Syst. Tech. J.
54(8), 1355–1387 (1975) XY Zhou, MR McKay, Secure transmission with artificial noise over fading channels: achievable rate and optimal power allocation. IEEE Trans. Veh. Technol. 59, 3831–3842 (2010). https://doi.org/10.1109/TVT.2010.2059057 Y Ding, J Zhang, V Fusco, Frequency diverse array OFDM transmitter for secure wireless communication. Electron. Lett. 51(17), 1374–1376 (2015). https://doi.org/10.1049/el.2015.1491 X Lin, X Sun, X Wang, et al., TSVC: timed efficient and secure vehicular communications with privacy preserving. IEEE Trans. Wirel. Commun. 7(12), 4987–4998 (2008). https://doi.org/10.1109/T-WC.2008.070773 S Karachontzitis, S Timotheou, Security-aware max–min resource allocation in multiuser OFDMA downlink. IEEE Trans. Inf. Forensics Secur. 10(3), 529–542 (2015). https://doi.org/10.1109/TIFS.2014.2384392 M Zhang, Y Liu, Energy harvesting for physical-layer security in OFDMA networks. IEEE Trans. Inf. Forensics Secur. 11(1), 154–162 (2016). https://doi.org/10.1109/TIFS.2015.2481797 L Dong, Z Han, AP Petropulu, et al., Improving wireless physical layer security via cooperating relays. IEEE Trans. Signal Process. 58(3), 1875–1888 (2010). https://doi.org/10.1109/TSP.2009.2038412 J Huang, AL Swindlehurst, Cooperative jamming for secure communications in MIMO relay networks. IEEE Trans. Signal Process. 59, 4871–4884 (2011). https://doi.org/10.1109/TSP.2011.2161295 MR Abedi, N Mokari, MR Javan, H Yanikomeroglu, Limited rate feedback scheme for resource allocation in secure relay-assisted OFDMA networks. IEEE Trans. Wirel. Commun. 15(4), 2604–2618 (2016). https://doi.org/10.1109/TWC.2015.2505728 S Huang, J Wei, C Yang, C Liu, Joint decode-and-forward and cooperative jamming for secure wireless communications. Int. Conf. Wirel. Commun. Netw. Mob. Comput., IEEE, Wuhan, 1–4 (2011).
https://doi.org/10.1109/wicom.2011.6040145 C Shahriar, M La Pan, M Lichtman, TC Clancy, R McGwier, R Tandon, S Sodagari, JH Reed, PHY-layer resiliency in OFDM communications: a tutorial. IEEE Commun. Surv. Tutorials 17(1), 292–314 (2015). https://doi.org/10.1109/COMST.2014.2349883 H Qin, Y Sun, TH Chang, X Chen, CY Chi, M Zhao, J Wang, Power allocation and time-domain artificial noise design for wiretap OFDM with discrete inputs. IEEE Trans. Wirel. Commun. 12(6), 2717–2729 (2013). https://doi.org/10.1109/TCOMM.2013.050713.120730 T Akitaya, S Asano, T Saba, Time-domain artificial noise generation technique using time-domain and frequency-domain processing for physical layer security in MIMO-OFDM systems. IEEE ICC'14 Workshop on Wireless Physical Layer Security, 807–812 (2014). https://doi.org/10.1109/ICCW.2014.6881299 R Saini, A Jindal, S De, Jammer-assisted resource allocation in secure OFDMA with untrusted users. IEEE Trans. Inf. Forensics Secur. 11(5), 1055–1070 (2016). https://doi.org/10.1109/TIFS.2016.2516912 Z Han, N Marina, M Debbah, et al., Physical layer security game: how to date a girl with her boyfriend on the same table. Int. Conf. on Game Theory for Networks, Istanbul, 287–294 (2009). https://doi.org/10.1109/GAMENETS.2009.5137412 R Zhang, L Song, Z Han, Improve physical layer security in cooperative wireless network using distributed auction game. 2011 IEEE Conf. on Computer Communications Workshops (INFOCOM WKSHPS), Shanghai, 18–23 (2011). https://doi.org/10.1109/INFCOMW.2011.5928805 D Chen-hui, S Mei, W Li, M Yue, Improved physical layer security with cooperative jamming based on Stackelberg game. J. Beijing Univ. Posts Telecommun. 37(5), 11–15 (2014).
https://doi.org/10.13190/j.jbupt.2014.05.003 S Huang, A Jing, J Tan, X Jian, Subcarrier allocation and cooperative partner selection based on Nash bargaining game for physical layer security in OFDM wireless networks. Concurr. Comput. Pract. Exp. 29(3), 1–15 (2017). https://doi.org/10.1002/cpe.3790 G Owen, Game Theory, 3rd edn. (Academic, New York, 2001), pp. 120–200 This work was supported by the National Natural Science Foundation of China under grant no. 61461018, the Outstanding Youth Science and Technology Innovation Team Plan of colleges and universities in Hubei Province (no. T201512), and the Science and Technology Program Project of Enshi Autonomous Prefecture in 2016 (XYJ2016000155). School of Information Engineering, Hubei University for Nationalities, Enshi, 445000, China Shuanglin Huang, Li Zhu & Sanjun Liu SLH contributed to the conception and algorithm design of the study. SJL and LZ contributed to the acquisition of the simulations. SLH, SJL, and LZ contributed to the analysis of the simulation data and approved the final manuscript. Correspondence to Shuanglin Huang. Shuanglin Huang received M.S. and Ph.D. degrees from the Taiyuan University of Technology and the Huazhong University of Science and Technology, in 2008 and 2012, respectively. He is an assistant professor in the School of Information Engineering, Hubei University for Nationalities, Enshi, Hubei, China. His research interests lie in wireless sensor networks, wireless communications and networks, game theory, parallel computing, and the Internet of things. (E-mail: [email protected]) Li Zhu received an M.S. degree from the China University of Geosciences in 2009. She is a lecturer in the School of Information Engineering, Hubei University for Nationalities, Enshi, Hubei, China. Her research interests lie in wireless sensor networks, parallel computing, and the Internet of things. (E-mail: [email protected]) Sanjun Liu received M.S. and Ph.D.
degrees from the Chinese Academy of Sciences and Peking University, Beijing, China, in 2007 and 2017, respectively. He is a lecturer in the School of Information Engineering, Hubei University for Nationalities, Enshi, Hubei, China. His research interests lie in wireless communication, embedded systems, co-frequency co-time full-duplex and information theory, wireless sensor networks, parallel computing, and the Internet of things. (E-mail: [email protected]) Huang, S., Zhu, L. & Liu, S. Based on virtual beamforming cooperative jamming with Stackelberg game for physical layer security in the heterogeneous wireless network. J Wireless Com Network 2018, 69 (2018). https://doi.org/10.1186/s13638-018-1081-x Keywords: the heterogeneous wireless network; physical layer security; cooperative jamming; Stackelberg game; power dynamic pricing
Mean of the derivative of a periodic function Thread starter Robin04 I'm wondering whether, given that the mean of a periodic function is zero, the mean of all of its derivatives is zero too. We have a periodic function ##f: \mathbb{R} \rightarrow \mathbb{R}## with period ##T##, ##f(x+T)=f(x)##. The statement is the following: $$\frac{1}{T}\int_0^T f(x)dx =0 \implies \frac{1}{T}\int_0^T\frac{d}{dx} f(x)dx =0$$ Can you give me a hint on how to prove/disprove it? The examples I tried all confirmed this. fresh_42: Let ##F(x)## be the antiderivative of ##\left( \dfrac{d}{dx}\,f(x) \right)##. Then the right hand side is? Robin04 (replying to fresh_42): So then ##F(x)=f(x)+c##, where ##c## is the integration constant. $$\frac{1}{T}\int_0^T \frac{d}{dx}f(x) dx = \frac{1}{T}[F(x)]_0^T=\frac{1}{T}[f(x)+c]_0^T=\frac{1}{T}(f(T)+c-f(0)-c)=0$$ pasmith: [itex]\frac{1}{T}\int_0^T\frac{d}{dx} f(x)dx = 0[/itex] is a direct result of the fundamental theorem of calculus and the fact that [itex]f(0) = f(T)[/itex]. It holds irrespective of the value of [itex]\frac{1}{T}\int_0^T f(x)dx[/itex]. jbriggs444: Would you count ##\frac{\sin x\ |\cos x|}{\cos x}## as a periodic function with mean zero for purposes of this question? Robin04 (replying to pasmith): Oh, you're right. Interesting, I hadn't thought about that. Robin04 (replying to jbriggs444): Well, that's interesting. I cannot plot its derivative for some reason, but I suppose continuity is the issue here. jbriggs444: It is continuously differentiable over its domain. But its domain misses the odd multiples of ##\frac{\pi}{2}##. That means that the anti-derivative of its derivative over almost any interval of length ##\pi## has two disjoint segments. Two c's, instead of just one. *WHAM* There goes that cancellation of the c's you did in post #3.
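Both halves of the discussion can be checked numerically: the fundamental-theorem argument for a smooth periodic function, and jbriggs444's counterexample ##\sin x\,|\cos x|/\cos x = \sin x \cdot \operatorname{sgn}(\cos x)##, whose derivative (away from odd multiples of ##\pi/2##) is ##|\cos x|## and has nonzero mean ##2/\pi##. The function choices below are illustrative:

```python
import math

def riemann_mean(g, a, b, n=100000):
    """Mean of g over [a, b] via a left Riemann sum."""
    h = (b - a) / n
    return sum(g(a + i * h) for i in range(n)) * h / (b - a)

T = 2 * math.pi

# A smooth periodic function with mean zero, and its derivative.
f = lambda x: math.sin(x) + math.sin(2 * x)
df = lambda x: math.cos(x) + 2 * math.cos(2 * x)
mean_df = riemann_mean(df, 0.0, T)  # ~0, as the FTC argument predicts

# jbriggs444's counterexample equals sin(x)*sign(cos(x)); away from the
# odd multiples of pi/2 its derivative is |cos(x)|.
dg = lambda x: abs(math.cos(x))
mean_dg = riemann_mean(dg, 0.0, T)  # ~2/pi, not zero
```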
Why are only continuous maps morphisms in the category of topological spaces? My question is quite simple: I would like to know why arbitrary maps (not necessarily continuous) can't be the morphisms in the category of topological spaces, since they satisfy the properties required of morphisms (compositions are well defined, associativity, and identity). Note that maps are the morphisms in the category of sets, so they should also be morphisms in the category of topological spaces, since topological spaces are in particular sets. category-theory Because that's not how we define the category of topological spaces. – user98602 Apr 4 '14 at 5:04 You could do that, but it's not interesting to do so, because then it's essentially the same as the category of sets. The objects of a category are in a certain sense incidental; it's the morphisms that are important. If you take the objects as sets with a certain structure (say, a topology) and then let the morphisms be functions that completely ignore the structure, what do you need the structure for? The theorems about the resulting category won't tell you anything about topological spaces. – MJD Apr 4 '14 at 5:07 Maybe you should look up the words "full functor" and "faithful functor." Not all functors are required to be full or faithful. This is particularly true of the so-called "forgetful functors." Under your proposed definition, the category of sets and the category of spaces would become equivalent categories if you think it through. – user36931 Apr 4 '14 at 5:08 It's true, thanks. – user42912 Apr 4 '14 at 5:08 You might consider this simpler example of the same type: let the objects of $\mathbf{Set}^\ast$ be pointed sets, which are sets from which a single element has been somehow distinguished.
Usually we take the morphisms of $\mathbf{Set}^\ast$ to be functions that map distinguished points to distinguished points, but we can follow your idea and take them to be ordinary functions instead. Now consider in what way this category is different from the usual $\mathbf{Set}$. – MJD Apr 4 '14 at 5:15 In general, you want maps which preserve the relevant structure of the objects in your category. So, in the category of groups, you want your morphisms to preserve the group structure, i.e. group homomorphisms. In the category of vector spaces over a given field, you want linear transformations. So, in the category of topological spaces, you want continuous maps. Note that in each of these examples, the morphisms are still morphisms if we forget to the underlying category of sets. – wckronholm
Published by editor on December 4, 2021 Biology and medicine in the landscape of quantum advantages. (arXiv:2112.00760v1 [quant-ph]) 9:53 AM | Benjamin A. Cordier, Nicolas P. D. Sawaya, Gian G. Guerreschi, Shannon K. McWeeney | quant-ph updates on arXiv.org Quantum computing holds significant potential for applications in biology and medicine, spanning from the simulation of biomolecules to machine learning approaches for subtyping cancers on the basis of clinical features. This potential is encapsulated by the concept of a quantum advantage, which is typically contingent on a reduction in the consumption of a computational resource, such as time, space, or data. Here, we distill the concept of a quantum advantage into a simple framework that we hope will aid researchers in biology and medicine pursuing the development of quantum applications. We then apply this framework to a wide variety of computational problems relevant to these domains in an effort to i) assess the potential of quantum advantages in specific application areas and ii) identify gaps that may be addressed with novel quantum approaches. Bearing in mind the rapid pace of change in the fields of quantum computing and classical algorithms, we aim to provide an extensive survey of applications in biology and medicine that may lead to practical quantum advantages. The Berry phase from the entanglement of future and past light cones: detecting the timelike Unruh effect. (arXiv:2112.00898v1 [gr-qc]) 9:53 AM | James Q. Quach, Timothy C. Ralph, William J. Munro | quant-ph updates on arXiv.org The Unruh effect can not only arise out of the entanglement between modes of left and right Rindler wedges, but also between modes of future and past light cones. We explore the geometric phase resulting from this timelike entanglement between the future and past, showing that it can be captured in a simple $\Lambda$-system. This provides an alternative paradigm to the Unruh-deWitt detector.
The Unruh effect has not been experimentally verified because the accelerations needed to excite a response from Unruh-deWitt detectors are prohibitively large. We demonstrate that a stationary but time-dependent $\Lambda$-system detects the timelike Unruh effect with current technology. A Quantum Informational Approach to the Problem of Time. (arXiv:2112.00918v1 [gr-qc]) 9:53 AM | Salman Sajad Wani, James Q. Quach, Mir Faizal, Sebastian Bahamonde, Behnam Pourhassan | quant-ph updates on arXiv.org Several novel approaches have been proposed to resolve the problem of time by relating it to change. We argue using quantum information theory that the Hamiltonian constraint in quantum gravity cannot probe change, so it cannot be used to obtain a meaningful notion of time. This is due to the absence of quantum Fisher information with respect to the quantum Hamiltonian of a time-reparametrization invariant system. We also observe that the inability of this Hamiltonian to probe change can be related to its inability to discriminate between states of such a system. However, if the time-reparametrization symmetry is spontaneously broken due to the formation of quantum cosmological time crystals, these problems can be resolved, and it is possible for time to emerge in quantum gravity. How to engineer a quantum wavefunction. (arXiv:2112.01105v1 [quant-ph]) 9:53 AM | Peter W. Evans, Dominik Hangleiter and Karim P. Y. Thébault | quant-ph updates on arXiv.org In a conventional experiment, inductive inferences between source and target systems are typically justified with reference to a uniformity principle between systems of the same material type. In an analogue quantum simulation, by contrast, scientists aim to learn about target quantum systems of one material type via an experiment on a source quantum system of a different material type. In this paper, we argue that such an inference can be justified by reference to the two quantum systems being of the same empirical type.
We illustrate this novel experimental practice of wavefunction engineering with reference to the example of Bose-Hubbard systems. Optimal unambiguous discrimination and quantum nonlocality without entanglement: locking and unlocking by post-measurement information. (arXiv:2112.01139v1 [quant-ph]) 9:53 AM | Donghoon Ha, Jeong San Kim | quant-ph updates on arXiv.org The phenomenon of nonlocality without entanglement (NLWE) arises in discriminating multi-party quantum separable states. Recently, it has been found that the post-measurement information about the prepared subensemble can lock or unlock NLWE in minimum-error discrimination of non-orthogonal separable states. Thus it is natural to ask whether the availability of the post-measurement information can influence the occurrence of NLWE in other state-discrimination strategies as well. Here, we show that the post-measurement information can be used to lock as well as unlock the occurrence of NLWE in terms of optimal unambiguous discrimination. Our results can provide a useful application for hiding or sharing information based on non-orthogonal separable states. The second law of thermodynamics as a deterministic theorem for quantum spin systems. (arXiv:2112.01175v1 [math-ph]) 9:53 AM | Walter F. Wreszinski | quant-ph updates on arXiv.org The second law of thermodynamics, viewed as a theorem asserting the growth of the mean (Gibbs-von Neumann) entropy of a class of quantum spin systems undergoing automorphic (unitary) adiabatic transformations, is proved. Non-automorphic interactions with the environment, although known to produce on the average a strict reduction of the entropy of systems with finite number of degrees of freedom, are proved to conserve the mean entropy on the average, for some models of quantum spin systems. Some related results on the approach (or return) to equilibrium are also reviewed.
The results depend crucially on two properties of the mean entropy, proved by Robinson and Ruelle for classical systems, and Lanford and Robinson for quantum lattice systems: upper semicontinuity and affinity. A Quantum Annealing Approach to Reduce Covid-19 Spread on College Campuses. (arXiv:2112.01220v1 [cs.CY]) 9:53 AM | James Sud, Victor Li | quant-ph updates on arXiv.org Disruptions of university campuses caused by COVID-19 have motivated strategies to prevent the spread of infectious diseases while maintaining some level of in-person learning. In response, the proposed approach recursively applied a quantum annealing algorithm for Max-Cut optimization on D-Wave Systems, which grouped students into cohorts such that the number of possible infection events via shared classrooms was minimized. To test this approach, available coursework data was used to generate highly clustered course enrollment networks representing students and the classes they share. The algorithm was then recursively called on these networks to group students, and a disease model was applied to forecast disease spread. Simulation results showed that under some assumptions on disease statistics and methods of spread, the quantum grouping method reduced both the total and peak percentage of infected students when compared against random groupings of students. Scaling to larger networks, it is possible that this quantum annealer-assisted grouping approach may provide practical advantage over classical approaches. This paper, however, is strictly a proof-of-concept demonstration of the approach and is not intended to argue for a quantum speedup. Are Brain-Computer Interfaces Feasible with Integrated Photonic Chips?.
(arXiv:2112.01249v1 [q-bio.NC]) 9:53 AM | Vahid Salari, Serafim Rodrigues, Erhan Saglamyurek, Christoph Simon, Daniel Oblak | quant-ph updates on arXiv.org The present paper examines the viability of a radically novel idea for brain-computer interface (BCI), which could lead to novel technological, experimental and clinical applications. BCIs are computer-based systems that enable either one-way or two-way communication between a living brain and an external machine. BCIs read out brain signals and transduce them into task commands, which are performed by a machine. In closed loop, the machine can stimulate the brain with appropriate signals. In recent years, it has been shown that there is some ultraweak light emission from neurons within or close to the visible and near-infrared parts of the optical spectrum. Such ultraweak photon emission (UPE) reflects the cellular (and body) oxidative status, and compelling pieces of evidence are beginning to emerge that UPE may well play an informational role in neuronal functions. In fact, several experiments point to a direct correlation between UPE intensity and neural activity, oxidative reactions, EEG activity, cerebral blood flow, cerebral energy metabolism, and release of glutamate. Here, we propose a novel skull-implant BCI that uses UPE. We suggest that a photonic integrated chip installed on the interior surface of the skull may enable a new form of extraction of the relevant features from the UPE signals. In the current technology landscape, photonic technologies are advancing rapidly and are poised to overtake many electrical technologies, due to their unique advantages, such as miniaturization, high speed, low thermal effects, and large integration capacity that allow for high yield, volume manufacturing, and lower cost.
For our proposed BCI, we make some major conjectures, which need to be experimentally verified; hence, we discuss the controversial parts, the feasibility and limitations of the technology, and the potential impact of this envisaged technology if successfully implemented in the future. Emergent universe revisited through the CSL theory. (arXiv:2108.01472v2 [gr-qc] UPDATED) 9:53 AM | Gabriel R. Bengochea, María Pía Piccirilli, Gabriel León | quant-ph updates on arXiv.org In this work we analyze how the spectrum of primordial scalar perturbations is modified, within the emergent universe scenario, when a particular version of the Continuous Spontaneous Localization (CSL) model is incorporated as the generating mechanism of initial perturbations, providing also an explanation of the quantum-to-classical transition of such perturbations. On the other hand, a phase of super-inflation, prior to slow-roll inflation, is a characteristic feature of the emergent universe hypothesis. In recent works, it was shown that the super-inflation phase could generically induce a suppression of the temperature anisotropies of the CMB at large angular scales. We study here under what conditions the CSL maintains or modifies these characteristics of the emergent universe and their compatibility with the CMB observations. Detectable Gravitational Wave Signals from Inflationary Preheating. (arXiv:2112.00762v1 [hep-ph]) Authors: Yanou Cui, Evangelos I. Sfakianakis We consider gravitational wave (GW) production during preheating in hybrid inflation models where an axion-like waterfall field couples to Abelian gauge fields. Based on a linear analysis, we find that the GW signal from such models can be within the reach of a variety of foreseeable GW experiments such as LISA, AEDGE, ET and CE, and is close to that of LIGO A+, both in terms of frequency range and signal strength.
Furthermore, the resultant GW signal is helically polarized and thus may distinguish itself from other sources of stochastic GW background. Finally, such models can produce primordial black holes that can compose dark matter and lead to merger events detectable by GW detectors.

Generalised proofs of the first law of entanglement entropy. (arXiv:2112.00972v1 [hep-th])
Authors: Marika Taylor, Linus Too

In this paper we develop generalised proofs of the holographic first law of entanglement entropy using holographic renormalisation. These proofs establish the holographic first law for non-normalizable variations of the bulk metric, hence relaxing the boundary conditions imposed on variations in earlier works. Boundary and counterterm contributions to conserved charges computed via covariant phase space analysis have been explored previously. Here we discuss in detail how counterterm contributions are treated in the covariant phase space approach to proving the first law. Our methodology would be applicable to generalizing other holographic information analyses to wider classes of gravitational backgrounds.

Stochastic Quantization of General Relativity à la Ricci-Flow. (arXiv:2112.01490v1 [gr-qc])
Authors: Matteo Lulli, Antonino Marciano, Xiaowen Shan

We follow a new pathway to the definition of the Stochastic Quantization (SQ), first proposed by Parisi and Wu, of the action functional yielding the Einstein equations. Hinging on the functional similarities between the Ricci-Flow equation and the SQ Langevin equations, we push forward a novel approach in which the stochastic time converges to the proper time of a space-like foliation in the equilibrium limit. This procedure in turn requires adding to the usual symmetric connection a projective Weyl term that does not modify the classical equations of motion. Furthermore, we express the starting system of equations using the Arnowitt-Deser-Misner (ADM) variables and their conjugated Hamiltonian momenta.
Such a choice is instrumental for understanding the newly derived equations in terms of the breaking of the diffeomorphism invariance of the classical theory, which will hold on average at the steady state. We comment on the physical interpretation of the Ricci flow equations, and argue how they can naturally provide, in a geometrical way, the renormalization group equation for gravity theories. In the general setting, the equation associated to the shift vector yields the Navier-Stokes equation with a stochastic source. Moreover, we show that the fluctuations of the metric tensor components around the equilibrium configurations, far away from the horizon of a Schwarzschild black hole, are forced by the Ricci flow to follow the Kardar-Parisi-Zhang equation, whose probabilistic distribution can yield an intermittent statistics. We finally comment on the possible applications of this novel scenario to the cosmological constant, arguing that the Ricci flow may provide a solution to the Hubble tension, as a macroscopic effect of the quantum fluctuation of the metric tensor.

The Chances of Propensities
Thursday, December 2, 2021, 6:31 PM | Philsci-Archive: No conditions. Results ordered -Date Deposited.
Suárez, Mauricio (2016) The Chances of Propensities. [Preprint]

What is So Special about Analogue Simulations?
Nappo, Francesco (2021) What is So Special about Analogue Simulations? [Preprint]

Quantum and Classical Temporal Correlations in $(1+1)\mathrm{D}$ Quantum Cellular Automata
Wednesday, December 1, 2021, 6:00 PM | Edward Gillman, Federico Carollo, and Igor Lesanovsky | PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc.
Author(s): Edward Gillman, Federico Carollo, and Igor Lesanovsky

We employ (1+1)-dimensional quantum cellular automata to study the evolution of entanglement and coherence near criticality in quantum systems that display nonequilibrium steady-state phase transitions.
This construction permits direct access to the entire space-time structure of the underlying none…

Against the disappearance of spacetime in quantum gravity
Wednesday, December 1, 2021, 8:00 AM | Latest Results for Synthese

This paper argues against the proposal to draw from current research into a physical theory of quantum gravity the ontological conclusion that spacetime or spatiotemporal relations are not fundamental. As things stand, the status of this proposal is like that of all the other claims about radical changes in ontology that were made during the development of quantum mechanics and quantum field theory. However, none of these claims held up to scrutiny as a consequence of the physics once the theory was established and a serious discussion about its ontology had begun. Furthermore, the paper argues that if spacetime is to be recovered through a functionalist procedure in a theory that admits no fundamental spacetime, standard functionalism cannot serve as a model: all the known functional definitions are definitions in terms of a causal role for the motion of physical objects and hence presuppose spatiotemporal relations.

Spatial experience, spatial reality, and two paths to primitivism

I explore two views about the relationship between spatial experience and spatial reality: spatial functionalism and spatial presentationalism. Roughly, spatial functionalism claims that the instantiated spatial properties are those playing a certain causal role in producing spatial experience, while spatial presentationalism claims that the instantiated spatial properties include those presented in spatial experience. I argue that each view, in its own way, leads to an ontologically inflationary form of primitivism: whereas spatial functionalism leads to primitivism about phenomenal representation, spatial presentationalism leads to primitivism about spatial properties. I conclude by discussing how to adjudicate between spatial functionalism and spatial presentationalism.
Degeneration and Entropy
Wednesday, December 1, 2021, 7:21 AM | Philsci-Archive: No conditions. Results ordered -Date Deposited.
Chua, Eugene Y. S. (2021) Degeneration and Entropy. [Preprint]

Double charge wave
Tuesday, November 30, 2021, 8:00 AM | Young-Woo Son | Nature Physics – Issue – nature.com science feeds
Nature Physics, Published online: 30 November 2021; doi:10.1038/s41567-021-01457-z

Charge density waves are the periodic spatial modulation of electrons in a solid. A new experiment reveals that they can originate from two different electronic bands in a prototypical transition metal dichalcogenide, NbSe2.

Is the photon really a particle?
Monday, November 29, 2021, 8:31 AM | Philsci-Archive: No conditions. Results ordered -Date Deposited.
Klevgard, Paul A. (2021) Is the photon really a particle? Optik International Journal for Light and Electron Optics, 237 (166679). ISSN 0030-4026

Forced Changes Only: A New Take on the Law of Inertia
Hoek, Daniel (2021) Forced Changes Only: A New Take on the Law of Inertia. [Preprint]

Duerr, Patrick and Ehmann, Alexander (2021) The physics and metaphysics of Tychistic Bohmian Mechanics. Studies in History and Philosophy of Science Part A, 90. pp. 168-183. ISSN 00393681

Could Charge and Mass be Universals?
Friday, November 26, 2021, 10:52 AM | Philsci-Archive: No conditions. Results ordered -Date Deposited.
Gilton, Marian (2020) Could Charge and Mass be Universals? [Preprint]

Setting the demons loose: computational irreducibility does not guarantee unpredictability or emergence
Tabatabaei Ghomi, Hamed (2021) Setting the demons loose: computational irreducibility does not guarantee unpredictability or emergence. [Preprint]
The origins of breast cancer associated with mammographic density: a testable biological hypothesis
Norman Boyd1, Hal Berman1, Jie Zhu1, Lisa J. Martin1, Martin J. Yaffe4, Sofia Chavez4, Greg Stanisz4, Greg Hislop6, Anna M. Chiarelli5, Salomon Minkin1 & Andrew D. Paterson2,3
Breast Cancer Research volume 20, Article number: 17 (2018)

Our purpose is to develop a testable biological hypothesis to explain the known increased risk of breast cancer associated with extensive percent mammographic density (PMD), and to reconcile the apparent paradox that although PMD decreases with increasing age, breast cancer incidence increases.

We used the Moolgavkar model of carcinogenesis as a framework to examine the known biological properties of the breast tissue components associated with PMD, which include epithelium and stroma, in relation to the development of breast cancer. In this model, normal epithelial cells undergo a mutation to become intermediate cells, which, after further mutation, become malignant cells. A clone of such cells grows to become a tumor. The model also incorporates changes with age in the number of susceptible epithelial cells associated with menarche, parity, and menopause. We used measurements of the radiological properties of breast tissue in 4454 healthy subjects aged from 15 to 80+ years to estimate cumulative exposure to PMD (CBD) in the population, and we examined the association of CBD with the age-incidence curve of breast cancer in the population.

Extensive PMD is associated with a greater number of breast epithelial cells, lobules, and fibroblasts, and greater amounts of collagen and extracellular matrix. The known biological properties of these tissue components may, singly or in combination, promote the acquisition of mutations by breast epithelial cells specified by the Moolgavkar model, and the subsequent growth of a clone of malignant cells to form a tumor.
We also show that estimated CBD in the population from ages 15 to 80+ years is closely associated with the age-incidence curve of breast cancer in the population. These findings are consistent with the hypothesis that the biological properties of the breast tissue components associated with PMD increase the probability of the transition of normal epithelium to malignant cells, and that the accumulation of mutations with CBD may influence the age-incidence curve of breast cancer. This hypothesis gives rise to several testable predictions.

Percent mammographic density (PMD) is one of the strongest known risk factors for breast cancer [1]. Fibroglandular tissue attenuates X-rays more than does fat [2], and it appears white (dense) in mammograms, whereas adipose tissue appears dark. PMD, illustrated in Additional file 1: Figure S1a, refers to the area of white tissue divided by the total area of the breast in the image. The dense area and PMD are both associated positively with risk of breast cancer, and PMD is the stronger risk factor [3]. The nondense area is associated inversely with risk of breast cancer [3, 4]. The increased risk of breast cancer associated with PMD persists for at least 8–10 years after the date of the mammogram used to assess PMD [5, 6], and it cannot be explained by the "masking" of cancers by dense breast tissue [6, 7]. In addition to an increased risk of breast cancer, PMD is also associated with an increased risk of lesions that are thought to be nonobligate precursors of breast cancer [8].

Average PMD in the population decreases with increasing age [5]. A cross-sectional study of 11,000 women in 22 countries showed that average PMD declined with increasing age. Decline was present before and after menopause and was most pronounced over the menopausal transition [9]. Longitudinal data within individuals have shown average reductions in PMD of 5% over 10 years [10] and 8% over 5 years [11].
Similar variations in breast tissue composition can be seen using measures of fat and water obtained by magnetic resonance (MR). Radiologically dense breast tissue and breast water both reflect fibroglandular breast tissue (Additional file 1: Figure S1b).

Antoni et al. showed in a meta-analysis of 19 studies with a total of > 24,000 breast cancer cases [12] that, relative to women in the lowest density category, women in the highest density category had 3.1-fold (95% CI 2.2–4.2) and 3.2-fold (1.7–5.9) increased risk of estrogen receptor-positive (ER+) and ER− breast cancer, respectively. In case-only analyses, relative risks of breast tumors for ER+ versus ER− were 1.13 (95% CI 0.89–1.42) for medium versus minimal mammographic density (MD). MD remained associated with screen-detected ER+ tumors. In eight contributing studies, the association of MD did not differ by HER2 status. Variations in the distribution by age of ER+ and ER− breast cancer are likely to be influenced by factors other than MD.

Breast cancer risk increases with increasing extent of PMD, and estimates of attributable risk (which assume causality) suggest that 30–50% of breast cancer may be attributed to the most extensive categories of PMD [5, 6]. Although MD is associated with relative and attributable risks that are large compared with other risk factors for the disease, the accuracy of risk prediction in individuals is modest [13]. The mechanisms that underlie the association of PMD with risk of breast cancer are not well defined [14], and the apparent paradox that with increasing age average PMD decreases while breast cancer incidence increases remains unexplained. We have previously proposed [15] that the radiological features of the breast reflected in PMD provide an index of cumulative exposure to events that influence the incidence of breast cancer, similar to the concept of "breast tissue aging" proposed by Pike et al. [16].
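The 30–50% attributable-risk figures quoted above follow from the standard population attributable fraction, PAF = p(RR − 1) / (1 + p(RR − 1)), where p is the prevalence of the exposed category and RR its relative risk. A minimal sketch with hypothetical prevalence and relative-risk values (not the estimates from the cited studies):

```python
# Population attributable fraction (PAF) for a single exposed category.
# The numeric inputs below are hypothetical, chosen only to show how
# prevalences and relative risks of the order reported for extensive PMD
# can yield attributable fractions in the 30-50% range.

def attributable_fraction(prevalence, relative_risk):
    """PAF = p * (RR - 1) / (1 + p * (RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. ~25% of women in the most extensive PMD categories with RR ~ 3
# (hypothetical values) gives a PAF of about one third:
paf = attributable_fraction(0.25, 3.0)
assert 0.30 < paf < 0.50
```

Note that the formula assumes causality, as the text points out: it converts a relative risk into the fraction of cases that would not occur if the exposed category had baseline risk.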
However, to date, there is only one published study to support this suggestion [10]. In this paper, we develop a testable biological hypothesis to explain the origins of breast cancer associated with mammographic density. We summarize evidence that PMD reflects the relative quantities of epithelium, stroma, and fat in the breast, and we use a two-stage model of carcinogenesis as a framework to examine how the known biological properties of these tissues may influence the transition of normal breast epithelial cells to malignant cells [17]. We expect cumulative exposure to these biological factors to contribute to the age-specific incidence of breast cancer, and we examine the relationship between estimated cumulative exposure to PMD (CBD) in the population and the age-specific incidence of breast cancer.

Two-stage model of carcinogenesis in the breast

Figure 1 shows the two-stage model described by Moolgavkar and colleagues applied to breast cancer [17]. Normal stem cells, with a birth rate (α1) at which they divide into two daughter cells and a rate of death or differentiation (β1), can be transformed into cells of an intermediate form at a stochastic event rate (μ1) (the first mutation rate). These intermediate cells can divide into two further intermediate cells at a stochastic rate (α2) or die or differentiate at rate β2. In addition, intermediate cells can divide into one intermediate and one transformed (malignant) cell at a second stochastic event rate (μ2).

Two-stage model of carcinogenesis of Moolgavkar et al. [17]. In this model, normal stem cells, with a birth rate (α1) and rate of death (β1), can be transformed into cells of an intermediate form at a stochastic event rate (μ1) (the first mutation rate). These intermediate cells can divide into two further intermediate cells at a stochastic rate (α2), or die or differentiate at rate β2.
In addition, intermediate cells can divide into one intermediate and one transformed (malignant) cell with a second stochastic event rate (μ2). The malignant cells are assumed to develop into a tumor after a deterministic lag time.

We recognize that molecular studies implicate several genetic changes in progression for breast cancer [18], and at the time of diagnosis, breast cancer cells contain multiple somatic mutations [19, 20]. In light of this, the two stages of the model may be viewed as rate-limiting steps in the process of breast carcinogenesis, with additional mutations occurring in intermediate cells that confer sustained proliferative and survival advantages to an expanding clone of cells that ultimately undergoes malignant transformation [19, 21]. We use the two-stage model solely as a framework for examining how the biological properties of the components of breast tissue associated with PMD might influence carcinogenesis in the breast.

The Moolgavkar model applied to female breast cancer is as follows:
$$ I(t) \approx \mu_1 \mu_2 F(t) $$
where $I(t)$ is breast cancer incidence at age $t$; $\mu_1$ and $\mu_2$ are the respective first and second mutation rates; and $F(t)$ is the susceptible cell population, modified by menarche, parity, and menopause, to age $t$. After assigning numerical values to $\mu_1$, $\mu_2$, and $F$, the model accurately predicts the age-specific incidence of breast cancer [17].

Breast tissue associated with PMD

The histologic features of the breast associated with PMD have been examined using several approaches that include randomly selected breast tissue at forensic autopsy [22], as well as a comparison of radiologically dense and nondense regions in mastectomy specimens [23] and in surgical biopsies [24,25,26]. These approaches have shown similar results. Selected results derived from randomly selected sections of breast tissue collected at forensic autopsy by Bartow et al. [27] are shown in Fig. 2.
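As a purely numerical illustration of the Moolgavkar approximation I(t) ≈ μ1·μ2·F(t), the sketch below accumulates a toy susceptible-cell population F(t) over age and multiplies it by the two mutation rates. Every parameter value here is a hypothetical placeholder (not a fitted value from Moolgavkar et al.); the point is only that incidence keeps rising with age because F(t) accumulates, even when the yearly increment shrinks after menopause.

```python
# Toy sketch of the two-stage approximation I(t) ~ mu1 * mu2 * F(t).
# All rates and ages are hypothetical placeholders chosen to reproduce the
# qualitative shape only: F(t) grows from menarche, is damped after
# menopause, and is reduced by parity.

def susceptible_cell_years(age, menarche=13, menopause=50, parous=True):
    """Toy cumulative susceptible-cell population F(t) up to `age`."""
    f = 0.0
    for a in range(menarche, age):
        rate = 1.0 if a < menopause else 0.3   # fewer dividing cells after menopause
        if parous and a > 25:
            rate *= 0.8                        # parity lowers the susceptible pool
        f += rate
    return f

def incidence(age, mu1=1e-3, mu2=1e-3, **kw):
    """Approximate incidence I(t) = mu1 * mu2 * F(t)."""
    return mu1 * mu2 * susceptible_cell_years(age, **kw)

ages = [30, 40, 50, 60, 70]
curve = [incidence(a) for a in ages]
# F(t) only accumulates, so the modeled incidence is monotonically increasing:
assert all(later > earlier for earlier, later in zip(curve, curve[1:]))
```

The same structure accepts any other choice of F(t); substituting the published parameterization of menarche, parity, and menopause would reproduce the fitted age-incidence curve described in [17].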
PMD was assessed in the BioVision (Faxitron Bioptics, Tucson, AZ, USA) image of the enucleated breast from which the section had been taken (Fig. 2a) [22]. We used quantitative microscopy in randomly selected areas of the tissue section (Fig. 2b) to measure the total, epithelial, and nonepithelial nuclear areas, which we used as an index of the number of cells (Fig. 2d), the area of Masson's trichrome-stained collagen (Fig. 2g), and the glandular area. PMD was associated inversely with age, and after adjustment for age, positively with the nuclear area (a measure of the number of cells) (Fig. 2e) of epithelial and nonepithelial cells, glandular area, and the area of collagen (Fig. 2h). As shown in Additional file 1: Table S1, age, parity, and menopause were each associated inversely with one or more of these tissue components [22].

Breast tissue components associated with percent mammographic density (PMD). PMD was assessed in the BioVision (Faxitron Bioptics) image of the enucleated breast from which the section had been taken (a) [Li T, et al. Cancer Epidemiol Biomarkers Prev. 2005;14(2):343–9]. We used quantitative microscopy in randomly selected areas of the tissue section (b) to measure the total, epithelial, and nonepithelial nuclear areas (H&E stain in c) as an index of the number of cells (outlined in green in d), the area of collagen (H&E stain in f and Masson's trichrome in g), and the glandular area. PMD was associated inversely with age, and, after age adjustment, positively with the nuclear area (e) of epithelial and nonepithelial cells, glandular area, and the area of collagen (h). Box plots in e and h show the associations of total nuclear area (e) and collagen (h) with PMD. The median values are shown as horizontal lines, and the boxes show the 25th and 75th percentiles of the distributions. Age, parity, and menopausal status were also associated with variations in one or more of these tissue components.
Similar associations of PMD with these breast tissue components have been found in prophylactic mastectomies [13]. Original magnification ×10 (c, d, f, and g).

The stronger risk prediction seen with PMD compared with dense area suggests that the nondense area of the mammogram, which reflects fat, may provide some protection. A reduced risk of breast cancer associated with the nondense area of the mammogram has been shown in a meta-analysis by Pettersson et al. [3]. The mechanism underlying this protection is currently uncertain, but as shown in Fig. 2, breasts with low PMD are associated with fewer cells (as shown by nuclear area) and less extensive collagen, tissue components whose biological properties, as we show below, are associated with radiologically dense breast tissue and may contribute to carcinogenesis in the breast. Further, aromatase activity in the breast is predominantly in stromal preadipocytes [28] and is reduced when preadipocytes differentiate to mature adipocytes. The loss of this source of local estrogen production may contribute to the reduced risk of breast cancer associated with breast fat. As Fig. 2 shows, there can be wide variation in the number of cells (as shown by nuclear area) and the area of collagen between individuals, and it is not currently possible to assess these variations separately, or to link them to risk, using only currently available methods of imaging.

Biological properties of breast tissue associated with PMD

Breast epithelium

Breast cancer is thought to originate in the epithelial cells of the terminal ductal lobular unit [29, 30] and to be the result of the accumulation of genetic mutations [21]. The greater number of epithelial cells and greater glandular area associated with PMD may be the result of either increased cell proliferation or a reduction in the rate of cell death. Both processes increase the size of the population of susceptible epithelial cells and increase the probability of a mutation.
Some breast mitogens have been associated with risk of breast cancer [31,32,33], and proliferative activity in epithelial cells, as shown by the Ki-67 index, predicts risk of breast cancer in premenopausal women [34]. Although in adult life epithelial cells associated with PMD do not have an increased Ki-67 proliferative index [35], the greater number of epithelial cells associated with PMD in adult life may be the result of greater proliferation of progenitor cells during breast development, when susceptibility to carcinogens is also greatest [36]. The chemokine CCL2 has been detected in human mammary epithelium, and when overexpressed in mouse mammary epithelium, it induces a state of low-level inflammation that increases stromal density and risk of mammary cancer [37].

Stroma

Collagen, fibroblasts, other mesenchymal cells, and extracellular matrix (ECM) are stromal components that contribute to PMD. Selected biological properties of these components of stroma are discussed briefly in the following sections and summarized in Table 1. Some of the components of stroma considered here have multiple biological functions, and we have selected those functions that appear most likely to be relevant to the processes outlined above in the two-stage model of carcinogenesis.

Table 1 Selected biological properties of components of breast stroma

Provenzano et al. [38] showed, in a bitransgenic mouse tumor model with both increased density of stromal collagen (Col1a1tmJae) and the polyoma middle T transgene under the control of the mammary-specific mouse mammary tumor virus promoter, that both epithelial cell proliferation and tumor formation were increased. Tumor formation increased approximately threefold, and tumors had a more invasive phenotype and a greater frequency of metastasis [38]. Preliminary human data suggest that periductal aligned collagen fibrils, rather than amorphous collagen, are associated with PMD [39].
Aligned collagen matrices also enhance the migration of cancer stem cells [40]. Stromal fibroblasts are the principal source of collagen and can regulate the morphogenesis of breast epithelial cells. Kuperwasser et al. showed that human stromal fibroblasts from reduction mammoplasty, immortalized with human telomerase and implanted together with normal human mammary epithelial cells (MECs) into the cleared mammary fat pad of severe combined immunodeficiency mice, resulted in the outgrowth of benign and malignant epithelial lesions [41]. Stromal fibroblasts regulate the growth of epithelial cells in part through the secretion of growth factors and chemokines [42] that include hepatocyte growth factor (HGF), insulin-like growth factor 1 (IGF-1), and transforming growth factor-β (TGF-β). HGF and IGF-1 both promote epithelial cell proliferation and tumor growth. Stromal fibroblast-derived TGF-β [43] inhibits MEC proliferation in vivo but can promote malignant behavior through diverse mechanisms that include stimulation of epithelial-mesenchymal transition [44]. TGF-β also has several effects on the microenvironment, including increasing ECM and inducing endothelial cell recruitment and proliferation, that promote tumor progression (reviewed in [45]). Fibroblasts also deposit ECM and produce collagen types I, III, and V and fibronectin (reviewed in [43]). Fibroblasts derived from disease-free breasts with extensive PMD promote adipocyte differentiation in culture and show decreased expression of CD36 [46] (see below). The stroma associated with breast cancer contains fibroblasts (cancer-associated fibroblasts [CAFs]) that produce chemokines, growth factors, and ECM proteins, which are thought to contribute to the dissemination of malignant tumors [43, 47], and foci of fibrous tissue within invasive breast cancer are associated with an increased risk of disease recurrence [48, 49].
Other cells

Aromatase activity in the breast is a source of estrogen that may stimulate proliferation of epithelial cells and promote the growth of malignant clones [50,51,52,53]. Aromatase activity in adipose tissue is expressed primarily in stromal mesenchymal preadipocytes rather than in lipid-laden adipocytes and is greatest in breast tissue where the ratio of fibroblasts to adipocytes is greatest [54], and most aromatase activity in the breast is in radiologically dense regions [50, 51, 53, 55]. The role of immune cells in PMD has received little attention to date, but Huo et al. showed in prophylactic mastectomy samples that radiologically dense areas of the breast contained fewer CD26 activated macrophages and more vimentin+/CD45 immune cells than nondense regions in the same individuals [23, 56].

The ECM comprises collagens, fibronectin, laminins, polysaccharides, and proteoglycans, and it influences the changes that occur in the breast during pregnancy, lactation, involution, and tumorigenesis (see [57, 58] for reviews). Expression of the proteoglycans lumican and decorin, assessed by semiquantitative scoring of immunohistochemistry, is increased in stromal tissue associated with breast cancer and, in the absence of invasive breast cancer, in women with extensive PMD. PMD is associated with lumican and decorin scores and with duct fibrosis and collagen, but not with the tissue or ductal lobular density [59]. Proteoglycans bind growth factors, contribute to the mechanical integrity of tissues, and influence the stiffness of breast tissue that promotes tumorigenesis, tumor growth, and the invasion of malignant tissue [57, 58]. Radiologically dense breast tissue also has greater amounts of the stromal matrix regulatory protein tissue inhibitor of metalloproteinase 3 [60], which regulates the stromal matrix and the activation of growth factors and influences susceptibility to breast cancer [61].
In addition, the transmembrane receptor CD36 controls adipogenesis and deposition of the ECM. CD36-knockout mice show increased collagen and decreased fat in the mammary gland, and reduced expression of CD36 has been found to be associated with greater PMD and tumor stroma in human breast tissue [46]. Radiologically dense human breast tissue obtained from mastectomy specimens has been shown to promote the growth and progression of human carcinoma in situ xenografts in immunodeficient mice [62]. The biological properties shown in Table 1 have all, with the exception of collagen density, been observed in human cells or tissues, but only three (proteoglycan expression, matrix metalloproteinase 3 [MMP-3], and CD36 expression) have been examined to date in relation to PMD.

Genetic variants associated with histologic features

Twin and sister studies have shown that more than 60% of the variation in PMD in the population can be explained by additive genetic effects [63, 64]. Genetic variants associated with PMD, dense, or nondense areas are likely to be associated, directly or indirectly, with one or more of the tissue components that are reflected by these mammographic features. Genome-wide association studies (GWASs) have identified some of the genetic variants associated with PMD. Here we limit our attention to the nine regions, comprised of eight genes and one locus on chromosome 8, shown using a two-stage design to be reproducibly associated with PMD adjusted for age and body mass index. Eight of the nine loci are also associated with the risk of breast cancer [65, 66]. Single-nucleotide polymorphisms (SNPs) near or in PRDM6 and TMEM184B and the locus on chromosome 8 have been associated with PMD, and SNPs near AREG, ESR1, ZNF365, LSP1, IGF1, and SGSM3/MKL1 have all been associated with the area of dense tissue in the mammogram. The locus on chromosome 8 has also been associated with nondense area.
Although it is recognized that proximity of SNPs to genes may not identify causal genes [67, 68], functions for genes near eight of these regions were found by searching under the gene names in PubMed, the National Center for Biotechnology Information database of Genotypes and Phenotypes (dbGaP) of genotype-phenotype associations, and the GWAS Catalog (http://www.ebi.ac.uk/gwas/) [69]. The known functions of these eight genes of potential relevance to the components of breast tissue that are associated with PMD include the following:

AREG encodes amphiregulin, which binds epidermal growth factor receptor, stimulates cell growth and survival, and plays a role in the development of the mammary gland [70]. Amphiregulin also promotes the growth of fibroblasts and the expression of collagen and other genes associated with the ECM, and interacts with TGF-β to stimulate fibroblast proliferation [71].

ESR1 encodes ER-α, which mediates the physiological effects of estrogen [72]. Estrogen influences epithelial cell proliferation, and the secretion of the pituitary hormones growth hormone and prolactin that are breast mitogens [32, 73,74,75].

IGF1 encodes IGF-1 [76], which has mitogenic and antiapoptotic effects on breast epithelial cells [77]. Serum levels of IGF-1 have been associated with breast cancer risk in meta-analysis [31] and with PMD in some but not all studies (reviewed in [1]). Greater adult height is associated with risk of breast cancer [78] and has been positively associated with percent breast water (which, like PMD, reflects fibroglandular tissue) in young women [75] and with PMD [79] in adult women. Variants near MKL1, SGSM3, and IGF1 (in Japanese subjects) are associated with height [80, 81].

MKL1 is the human homologue of a murine gene (Bsac) that, when overexpressed in mice that are double-knockout for tumor necrosis factor (TNF)-associated factor, protects murine embryonic fibroblasts against cell death induced by TNF [82].
LSP1 [83] does not currently have any described function that is specific to breast tissue, apart from the observed associations with PMD and breast cancer.

Application of two-stage model to association of PMD with breast cancer

Figure 3 summarizes how the biological properties of the tissue components associated with PMD that are summarized in Table 1 might influence the first and second stages and transitions of the two-stage model. The third column represents the growth of a clone of malignant cells to become a detectable tumor.

Proposed application of the two-stage model of carcinogenesis to the association of percent mammographic density with risk of breast cancer. (The third column represents effects on the growth of a clone of malignant cells.) CAF Cancer-associated fibroblast, ECM Extracellular matrix, HGF Hepatocyte growth factor, IGF-1 Insulin-like growth factor 1, MMP-3 Matrix metalloproteinase 3, TGF-β Transforming growth factor-β

The probability that the first event converts a normal epithelial stem cell to an intermediate cell is expected to be proportional to the number of cells at risk, their survival time, the number of cell divisions, and the dose and duration of exposure to mutagens [84]. As shown above, the extent of PMD is associated positively with the number of epithelial cells. Further, as shown in Table 1, experimental data show that epithelial cell proliferation and survival are increased by greater density of collagen, by the growth factors associated with fibroblasts and proteoglycans, by greater stiffness of the ECM, and by the local production of estrogen by aromatase, as well as by the influence of systemic hormones and growth factors.
These factors may operate singly or, more likely, in combination to promote the expansion of the pool of normal and intermediate cells, the acquisition of additional mutations that confer proliferative and survival advantages, the transition of intermediate cells to malignancy, and the subsequent growth of a clone of malignant cells to become a detectable tumor.

Cumulative exposure to PMD and age-specific incidence of breast cancer

The postulated expansion with increasing age of the number of intermediate cells with mutations, together with continuing exposure to several components of the breast stroma that promote carcinogenesis, suggests that CBD may be related to the age-specific incidence of breast cancer [85, 86]. CBD may account for the observation that with increasing age, PMD and the total number of epithelial cells and lobular units decrease, whereas breast cancer incidence increases.

Estimated cumulative breast density in the population

We estimated CBD in the population using cross-sectional data from 4454 healthy females, predominantly Caucasian and aged 15–81 years, who had participated in previous studies in which PMD was measured. Additional file 1: Table S2 shows selected characteristics of these subjects. We measured PMD by mammography [87] (shown in Additional file 1: Figure S1a) in women over the age of 35 and by percent breast water by MR (shown in Additional file 1: Figure S1b) in those under 35 [75]. Both measures reflect fibroglandular breast tissue [88] and are strongly correlated with each other within the same individuals (rs = 0.85) [75]. We used percent breast water by MR and PMD obtained in 100 adult women to calibrate MR measures in young women to the equivalent mammographic measure (see Table 2 footnote).

Table 2 Calculation of cumulative percent mammographic density

As shown in Table 2, we divided subjects into the same 5-year age categories in which breast cancer incidence in the population is reported.
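The Table 2 calculation of cumulative percent mammographic density can be sketched in a few lines of Python: each age category's median PMD is multiplied by the 5-year category width and the products are summed. The median values below are hypothetical placeholders for illustration only, not the medians reported in the paper.

```python
# Sketch of the cumulative breast density (CBD) calculation from Table 2.
# Median PMD per 5-year age category; all values here are hypothetical
# placeholders, not the study's data.
AGE_BIN_WIDTH = 5  # years per age category

median_pmd_by_age_bin = {
    15: 51.7,  # youngest group (calibrated percent water, per the text)
    20: 48.0,  # remaining medians are illustrative only
    25: 44.0,
    30: 40.0,
}

def cumulative_breast_density(medians, width=AGE_BIN_WIDTH):
    """Sum of 'breast density years': median PMD x years in each category."""
    return sum(pmd * width for pmd in medians.values())

print(cumulative_breast_density(median_pmd_by_age_bin))  # ~918.5 for these placeholders
```

Extending the dictionary through the 80+ category, with the study's actual medians, would reproduce the CBD estimate described in the text.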
Median PMD decreased with increasing age and was 51.7% after calibration in the youngest group and 15.2% in the oldest. We multiplied the median PMD in each age category by the 5 years in the category to generate a variable we call "breast density years," and we summed the product for each age group to give an estimate of CBD from ages 15 to 80+ years in the population.

Association of cumulative breast density with age-specific incidence of breast cancer

We examined in Fig. 4 the association of CBD with the age-specific incidence of invasive breast cancer in Canada. We compared the log age-specific breast cancer incidence in the Canadian population predicted for each 5-year age group using regression models, one based on log age alone, one based on log CBD alone, and one based on log age + log CBD. We used R2 (the proportion of the total variance explained) and a comparison of the observed and predicted age-incidence curve of breast cancer to assess the fit of each model. The models, the associated coefficients, and the results are shown in Additional file 1: Table S3.

Fig. 4 Cumulative breast density: observed and predicted breast cancer incidence. Left: Breast density according to age. Values derived from mammogram in open circles; values from calibrated measures derived from magnetic resonance in closed circles. Right: Log breast cancer incidence in closed circles, log cumulative breast density in open circles. Incidence data for age-specific incidence of invasive breast cancer for Canada were obtained from Curado MP et al. Cancer incidence in five continents. Vol. IX. IARC Scientific Publication no. 160. Lyon, France: IARC Press; 2007

As shown in Fig.
4 and Additional file 1: Table S3, we found a strong association (R2 = 0.99) between log CBD and log breast cancer incidence in the population using the following model: $$ \log I(t) \approx \log {\left(\mathrm{CBD}(t)\right)}^{k} $$ where log I(t) is log breast cancer incidence at age t, CBD(t) is the sum of median PMD in each age group (each multiplied by 5, the age interval) from age 15 to age t, and the exponent k has the estimated value of 3.5. The model based on log age alone was less strongly associated with breast cancer incidence (Additional file 1: Table S3), and the addition of log age to log CBD did not change the association with breast cancer incidence. This model based on log CBD and the two-stage model of Moolgavkar et al. described above both accurately predict the age-specific incidence of breast cancer in the populations considered. The function F(t) in the Moolgavkar model and CBD(t) in the present model both reflect variations in the number of susceptible cells in the breast modified by menarche, pregnancy, and menopause [89]. The strong correlation observed between log CBD and log breast cancer incidence cannot be explained by their shared association with age. Log CBD alone was a better predictor of age-specific log breast cancer incidence than was log age, and there was no change in prediction by a model containing both CBD and age. The close relationship observed between log CBD and log breast cancer incidence is consistent with the hypothesis of carcinogenesis in which the accumulation of mutations or other molecular changes increases with increasing duration of exposure to PMD rather than with age. Limitations of our data include the cross-sectional observations and the ecological comparison with breast cancer incidence, as well as the small numbers at ages 30–34 and 80+ years.
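The exponent k in the model above can be estimated by ordinary least squares on the log-transformed values. The sketch below is illustrative only: it generates synthetic incidence data from an exact power law with k = 3.5 and recovers that slope; the density and incidence numbers are assumptions, not the study's data.

```python
import math

# Fit log I(t) ~ k * log CBD(t) by ordinary least squares on log-log data.
# Synthetic example: incidence generated from an exact power law with
# k = 3.5 (hypothetical values, not the study's figures).
k_true = 3.5
cbd = [250.0, 700.0, 1400.0, 2300.0, 3100.0]      # hypothetical CBD(t)
incidence = [1e-9 * c ** k_true for c in cbd]      # I(t) proportional to CBD(t)^k

x = [math.log(c) for c in cbd]
y = [math.log(i) for i in incidence]

# Closed-form least-squares slope for y = k*x + b
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
k_hat = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
print(round(k_hat, 3))  # recovers 3.5, since the synthetic data follow an exact power law
```

With the observed age-specific incidence and CBD values in place of the synthetic data, the same regression yields the reported estimate of k ≈ 3.5.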
We estimated CBD from cross-sectional rather than longitudinal observations, using film rather than digital mammograms. However, longitudinal assessments of breast density in women aged 40 or older have shown a decline in average PMD with increasing age and menopause that is very similar to the differences seen here [10]. Further, Maskarinec et al. showed a strong association between cumulative density and age-specific breast cancer incidence in serial mammograms from 607 patients with breast cancer and 667 control subjects in the Hawaii component of the multiethnic cohort, in which the average age at first mammogram was 57 years [10]. However, the associations of cumulative density and breast cancer incidence with age were not examined [17]. CBD may also explain many of the known epidemiological associations with breast cancer risk. As shown above, the estimated size of the susceptible cell population of epithelial cells and epithelial cell proliferation are greatest at early ages and decline with increasing age. The greater amount of fibroglandular tissue, as shown by percent water, present at ages 15–18 may be related to the greater susceptibility of the breast at early ages to the effects of known exposures on risk of breast cancer, including radiation, alcohol, and smoking [36]. Early menarche is associated with an increased risk of breast cancer in later life [90] and advances the age at which fibroglandular breast tissue develops. This addition to the time of exposure will influence all estimates of PMD at later ages and will increase CBD. An early pregnancy and early menopause both reduce later risk of breast cancer and PMD [90]. The reductions in PMD associated with these events will influence all measures of PMD at later ages and reduce average CBD in parous and postmenopausal women, respectively. At least some of the effect of pregnancy in reducing risk of breast cancer has been shown to be mediated by the reduction in PMD associated with pregnancy [91]. 
Tamoxifen reduces PMD and risk of breast cancer, and reduction in PMD appears to predict response to adjuvant therapy with tamoxifen [92]. Progesterone as a postmenopausal replacement therapy has been shown to increase both PMD and breast cancer incidence, and the effect of progesterone on breast cancer incidence has been shown to be mediated through the effect on PMD [93]. The proliferation of mammary epithelium in response to progesterone is mediated by receptor activator of nuclear factor-κB ligand (RANKL), and increased expression of RANKL has been found to be associated with more extensive PMD in premenopausal women [94]. The biological hypothesis that we propose from the foregoing considerations is that the transition of breast epithelial cells from normal to malignant cells is completed more frequently in dense breast tissue than in nondense tissue. We propose that this transition is associated with the acquisition of mutations or other molecular changes in breast epithelial cells that increase in frequency with increasing exposure to both the amount and duration of PMD. We propose that the probability of acquiring mutations is influenced by the greater number of epithelial cells and by the several known biological properties of the stromal tissues that are associated with PMD, described in Table 1, by the amounts of such tissues, and by the duration of exposure to these influences. Proteoglycans and MMP-3 in the ECM of radiologically dense breast tissue have already been shown, in the absence of breast cancer, to be similar to those expressed in breast tissue associated with breast cancer. Additional influences may include the greater number of stromal fibroblasts and associated chemokines associated with PMD that may, in the absence of breast cancer, resemble CAFs. CAFs can be distinguished from normal fibroblasts by markers and functional assays. 
Among these properties is the production of TGF-β1, which promotes epithelial-mesenchymal transition and has effects on the microenvironment that promote tumorigenesis and tumor invasion (reviewed in [45]). Epithelial-mesenchymal transition and other changes in the microenvironment may, in the absence of breast cancer, be more extensive in radiologically dense breast tissue than in nondense tissue [45]. PMD has reproducibly been shown to be a strong risk factor for breast cancer that may account for a substantial fraction of the disease. The biological basis for this association is currently unknown, however. We have examined potential biological mechanisms for the risk of breast cancer associated with PMD using a two-stage model of carcinogenesis as a framework. It is understood that it is the biological properties of the breast tissues associated with PMD, not the radiological properties, that are responsible for the association of PMD with risk of breast cancer. PMD is known to be associated with a greater number of epithelial cells, greater glandular area, a greater area of collagen, and a greater number of nonepithelial cells. The known biological properties of these breast tissue components increase the probability of mutation and of transition to malignant cells. The finding that CBD in the population, estimated from cross-sectional observations in healthy women, was strongly associated with the age-specific incidence of breast cancer in Canada is consistent with the accumulation of mutations with increasing time of exposure to CBD. This biological model gives rise to a number of testable predictions concerning the properties of breast tissue associated with PMD and suggests that the radiological features of the breast may be useful in the design, sampling, analysis, and interpretation of research on the biology of breast tissues in relation to breast cancer.
CAF: Cancer-associated fibroblast
CBD: Cumulative exposure to breast density
ECM: Extracellular matrix
GWAS: Genome-wide association study
HGF: Hepatocyte growth factor
IGF-1: Insulin-like growth factor 1
MD: Mammographic density
MEC: Mammary epithelial cell
MMP-3: Matrix metalloproteinase 3
MMTV: Mouse mammary tumor virus
MR: Magnetic resonance
PMD: Percent mammographic density
PyVT: Polyomavirus middle T antigen
RANKL: Receptor activator of nuclear factor-κB ligand
SNP: Single-nucleotide polymorphism
TGF-β: Transforming growth factor-β
TNF: Tumor necrosis factor
Boyd NF, Martin LJ, Yaffe MJ, Minkin S. Mammographic density and breast cancer risk: current understanding and future prospects. Breast Cancer Res. 2011;13(6):223. Johns PC, Yaffe MJ. X-ray characterisation of normal and neoplastic breast tissues. Phys Med Biol. 1987;32(6):675–95. Pettersson A, Graff RE, Ursin G, Santos Silva ID, McCormack V, Baglietto L, Vachon C, Bakker MF, Giles GG, Chia KS, et al. Mammographic density phenotypes and risk of breast cancer: a meta-analysis. J Natl Cancer Inst. 2014;106(5). Bertrand KA, Scott CG, Tamimi RM, Jensen MR, Pankratz VS, Norman AD, Visscher DW, Couch FJ, Shepherd J, Chen YY, et al. Dense and nondense mammographic area and risk of breast cancer by age and tumor characteristics. Cancer Epidemiol Biomarkers Prev. 2015;24(5):798–809. Byrne C, Schairer C, Wolfe J, Parekh N, Salane M, Brinton LA, Hoover R, Haile R. Mammographic features and breast cancer risk: effects with time, age, and menopause status. J Natl Cancer Inst. 1995;87(21):1622–9. Boyd NF, Guo H, Martin LJ, Sun L, Stone J, Fishell E, Jong RA, Hislop G, Chiarelli A, Minkin S, et al. Mammographic density and the risk and detection of breast cancer. N Engl J Med. 2007;356(3):227–36. McCormack VA, dos Santos SI. Breast density and parenchymal patterns as markers of breast cancer risk: a meta-analysis. Cancer Epidemiol Biomarkers Prev. 2006;15(6):1159–69. Boyd NF, Jensen HM, Cooke G, Han HL.
Relationship between mammographic and histological risk factors for breast cancer. J Natl Cancer Inst. 1992;84(15):1170–9. Burton A, Maskarinec G, Perez-Gomez B, Vachon C, Miao H, Lajous M, Lopez-Ridaura R, Rice M, Pereira A, Garmendia ML, et al. Mammographic density and ageing: A collaborative pooled analysis of cross-sectional data from 22 countries worldwide. PLoS Med. 2017;14(6):e1002335. Maskarinec G, Pagano I, Lurie G, Kolonel LN. A longitudinal investigation of mammographic density: the multiethnic cohort. Cancer Epidemiol Biomarkers Prev. 2006;15(4):732–9. Boyd N, Martin L, Stone J, Little L, Minkin S, Yaffe M. A longitudinal study of the effects of menopause on mammographic features. Cancer Epidemiol Biomarkers Prev. 2002;11(10 Pt 1):1048–53. Antoni S, Sasco AJ, dos Santos SI, McCormack V. Is mammographic density differentially associated with breast cancer according to receptor status? A meta-analysis. Breast Cancer Res Treat. 2013;137(2):337–47. Chen J, Pee D, Ayyagari R, Graubard B, Schairer C, Byrne C, Benichou J, Gail MH. Projecting absolute invasive breast cancer risk in white women with a model that includes mammographic density. J Natl Cancer Inst. 2006;98(17):1215–26. Sherratt MJ, McConnell JC, Streuli CH. Raised mammographic density: causative mechanisms and biological consequences. Breast Cancer Res. 2016;18(1):45. Boyd NF, Lockwood GA, Martin LJ, Byng JW, Yaffe MJ, Tritchler DL. Mammographic density as a marker of susceptibility to breast cancer: a hypothesis. IARC Sci Publ. 2001;154:163–9. Pike MC, Krailo MD, Henderson BE, Casagrande JT, Hoel DG. 'Hormonal' risk factors, 'breast tissue age' and the age-incidence of breast cancer. Nature. 1983;303(5920):767–70. Moolgavkar SH, Day NE, Stevens RG. Two-stage model for carcinogenesis: Epidemiology of breast cancer in females. J Natl Cancer Inst. 1980;65(3):559–69. Newburger DE, Kashef-Haghighi D, Weng Z, Salari R, Sweeney RT, Brunner AL, Zhu SX, Guo X, Varma S, Troxell ML, et al. 
Genome evolution during progression to breast cancer. Genome Res. 2013;23(7):1097–108. Nik-Zainal S, Van Loo P, Wedge DC, Alexandrov LB, Greenman CD, Lau KW, Raine K, Jones D, Marshall J, Ramakrishna M, et al. The life history of 21 breast cancers. Cell. 2012;149(5):994–1007. Martincorena I, Raine KM, Gerstung M, Dawson KJ, Haase K, Van Loo P, Davies H, Stratton MR, Campbell PJ. Universal Patterns of Selection in Cancer and Somatic Tissues. Cell. 2017;171(5):1029–41. e1021 Li T, Sun L, Miller N, Nicklee T, Woo J, Hulse-Smith L, Tsao MS, Khokha R, Martin L, Boyd N. The association of measured breast tissue characteristics with mammographic density and other risk factors for breast cancer. Cancer Epidemiol Biomarkers Prev. 2005;14(2):343–9. Huo CW, Chew G, Hill P, Huang D, Ingman W, Hodson L, Brown KA, Magenau A, Allam AH, McGhee E, et al. High mammographic density is associated with an increase in stromal collagen and immune cells within the mammary epithelium. Breast Cancer Res. 2015;17:79. Bland KI, Kuhns JG, Buchanan JB, Dwyer PA, Heuser LF, O'Connor CA, Gray LA Sr, Polk HC Jr. A clinicopathologic correlation of mammographic parenchymal patterns and associated risk factors for human mammary carcinoma. Ann Surg. 1982;195(5):582–94. Ghosh K, Brandt KR, Reynolds C, Scott CG, Pankratz VS, Riehle DL, Lingle WL, Odogwu T, Radisky DC, Visscher DW, et al. Tissue composition of mammographically dense and non-dense breast tissue. Breast Cancer Res Treat. 2012;131(1):267–75. Urbanski S, Jensen HM, Cooke G, McFarlane D, Shannon P, Kruikov V, Boyd NF. The association of histological and radiological indicators of breast cancer risk. Br J Cancer. 1988;58(4):474–9. Bartow SA, Mettler FA Jr, Black Iii WC. Correlations between radiographic patterns and morphology of the female breast. Rad Patterns Morph. 1997;13:263–75. Simpson ER, Clyne C, Rubin G, Boon WC, Robertson K, Britt K, Speed C, Jones M. Aromatase--a brief overview. Annu Rev Physiol. 2002;64:93–127. 
Wellings SR, Jensen HM. On the origin and progression of ductal carcinoma in the human breast. J Natl Cancer Inst. 1973;50(5):1111–8. Wellings SR, Jensen HM, Marcum RG. An atlas of subgross pathology of the human breast with special reference to possible precancerous lesions. J Natl Cancer Inst. 1975;55(2):231–73. Renehan AG, Harvie M, Howell A. Insulin-like growth factor (IGF)-I, IGF binding protein-3, and breast cancer risk: eight years on. EndocrRelat Cancer. 2006;13(2):273–8. Tworoger SS, Eliassen AH, Sluss P, Hankinson SE. A prospective study of plasma prolactin concentrations and risk of premenopausal and postmenopausal breast cancer. J Clin Oncol. 2007;25(12):1482–8. Horne HN, Sherman ME, Pfeiffer RM, Figueroa JD, Khodr ZG, Falk RT, Pollak M, Patel DA, Palakal MM, Linville L, et al. Circulating insulin-like growth factor-I, insulin-like growth factor binding protein-3 and terminal duct lobular unit involution of the breast: a cross-sectional study of women with benign breast disease. Breast Cancer Res. 2016;18(1):24. Huh SJ, Oh H, Peterson MA, Almendro V, Hu R, Bowden M, Lis RL, Cotter MB, Loda M, Barry WT, et al. The Proliferative Activity of Mammary Epithelial Cells in Normal Tissue Predicts Breast Cancer Risk in Premenopausal Women. Cancer Res. 2016;76(7):1926–34. Hawes D, Downey S, Pearce CL, Bartow S, Wan P, Pike MC, Wu AH. Dense breast stromal tissue shows greatly increased concentration of breast epithelium but no increase in its proliferative activity. Breast Cancer Res. 2006;8(2):R24. Colditz GA, Frazier LA. Models of breast cancer show that risk is set by events of early life: prevention efforts much shift focus (review). Cancer Epidemiol Biomarkers Prevent. 1995;4(5):567–71. Sun X, Glynn DJ, Hodson LJ, Huo C, Britt K, Thompson EW, Woolford L, Evdokiou A, Pollard JW, Robertson SA, et al. CCL2-driven inflammation increases mammary gland stromal density and cancer susceptibility in a transgenic mouse model. Breast Cancer Res. 2017;19(1):4. 
Provenzano PP, Inman DR, Eliceiri KW, Knittel JG, Yan L, Rueden CT, White JG, Keely PJ. Collagen density promotes mammary tumor initiation and progression. BMC Med. 2008;6(0):11. McConnell JC, O'Connell OV, Brennan K, Weiping L, Howe M, Joseph L, Knight D, O'Cualain R, Lim Y, Leek A, et al. Increased peri-ductal collagen micro-organization may contribute to raised mammographic density. Breast Cancer Res. 2016;18(1):5. Ray A, Slama ZM, Morford RK, Madden SA, Provenzano PP. Enhanced Directional Migration of Cancer Stem Cells in 3D Aligned Collagen Matrices. Biophys J. 2017;112(5):1023–36. Kuperwasser C, Chavarria T, Wu M, Magrane G, Gray JW, Carey L, Richardson A, Weinberg RA. Reconstruction of functionally normal and malignant human breast tissues in mice. Proc Natl Acad Sc U S A. 2004;101(14):4966–71. Medina D. Stromal fibroblasts influence human mammary epithelial cell morphogenesis. Proc Natl Acad Sci U S A. 2004;101(14):4723–4. http://www.pnas.org/ Kalluri R. The biology and function of fibroblasts in cancer. Nat Rev Cancer. 2016;16(9):582–98. O'Connor JW, Gomez EW. Biomechanics of TGFbeta-induced epithelial-mesenchymal transition: implications for fibrosis and cancer. Clin Transl Med. 2014;3:23. Pickup M, Novitskiy S, Moses HL. The roles of TGFbeta in the tumour microenvironment. Nat Rev Cancer. 2013;13(11):788–99. DeFilippis RA, Chang H, Dumont N, Rabban JT, Chen YY, Fontenay GV, Berman HK, Gauthier ML, Zhao J, Hu D, et al. CD36 repression activates a multicellular stromal program shared by high mammographic density and tumor tissues. Cancer Discov. 2012;2(9):826–39. Folgueira MA, Maistro S, Katayama ML, Roela RA, Mundim FG, Nanogaki S, de Bock GH, Brentani MM. Markers of breast cancer stromal fibroblasts in the primary tumour site associated with lymph node metastasis: a systematic review including our case series. Biosci Rep. 2013;33(6) Hasebe T, Sasaki S, Imoto S, Mukai K, Yokose T, Ochiai A. 
Prognostic significance of fibrotic focus in invasive ductal carcinoma of the breast: a prospective observational study. Mod Pathol. 2002;15(5):502–16. Mujtaba SS, Ni YB, Tsang JY, Chan SK, Yamaguchi R, Tanaka M, Tan PH, Tse GM. Fibrotic focus in breast carcinomas: relationship with prognostic parameters and biomarkers. Ann Surg Oncol. 2013;20(9):2842–9. Vachon CM, Sasano H, Ghosh K, Brandt KR, Watson DA, Reynolds C, Lingle WL, Goss PE, Li R, Aiyar SE, et al. Aromatase immunoreactivity is increased in mammographically dense regions of the breast. Breast Cancer Res Treat. 2011;125(1):243–52. Simpson ER, Clyne CD, Rubin G, Boon WC, Robertson K, Britt K, Speed C, Jones M. Aromatase--a brief overview. Annu Rev Physiol. 2002;64(0):93–127. Simpson ER, McInnes KJ, Brown KA, Knower KC, Chand AL, Clyne CD, Simpson ER. Characterisation of aromatase expression in the human adipocyte cell line SGBS. Breast Cancer Res Treat. 2008;112(3):429–35. Bulun SE, Mahendroo MS, Simpson ER. Aromatase gene expression in adipose tissue: relationship to breast cancer. J Steriod Biochem Molec Biol. 1994;49(4-6):319–26. Bulun SE, Sharda G, Rink J, Sharma S, Simpson ER. Distribution of aromatase P450 transcripts and adipose fibroblasts in the human breast. J Clin Endocrinol Metabol. 1996;81(3):1273–7. Simpson ER. Biology of aromatase in the mammary gland. J Mammary Gland Biol Neoplasia. 2000;5(3):251–8. Lisanti MP, Tsirigos A, Pavlides S, Reeves KJ, Peiris-Pages M, Chadwick AL, Sanchez-Alvarez R, Lamb R, Howell A, Martinez-Outschoorn UE, et al. JNK1 stress signaling is hyper-activated in high breast density and the tumor stroma: connecting fibrosis, inflammation, and stemness for cancer prevention. Cell Cycle. 2014;13(4):580–99. Butcher DT, Alliston T, Weaver VM. A tense situation: forcing tumour progression. Nat Rev Cancer. 2009;9(2):108–22. Paszek MJ, Weaver VM. The tension mounts: mechanics meets morphogenesis and malignancy. J Mammary Gland Biol Neoplasia. 2004;9(4):325–42. 
Alowami S, Troup S, Al-Haddad S, Kirkpatrick I, Watson PH. Mammographic density is related to stroma and stromal proteoglycan expression. Breast Cancer Res. 2003;5(5):R129–35. Guo YP, Martin LJ, Hanna W, Banerjee D, Miller N, Fishell E, Khokha R, Boyd NF. Growth factors and stromal matrix proteins associated with mammographic densities. Cancer Epidemiol Biomarkers Prev. 2001;10(3):243–8. Hojilla CV, Mohammed FF, Khokha R. Matrix metalloproteinases and their tissue inhibitors direct cell fate during cancer development. Br J Cancer. 2003;89(10):1817–21. Huo CW, Waltham M, Khoo C, Fox SB, Hill P, Chen S, Chew GL, Price JT, Nguyen CH, Williams ED, et al. Mammographically dense human breast tissue stimulates MCF10DCIS.com progression to invasive lesions and metastasis. Breast Cancer Res. 2016;18(1):106. Boyd NF, Dite GS, Stone J, Gunasekara A, English DR, McCredie MRE, Giles GG, Tritchler D, Chiarelli A, Yaffe MJ, et al. Heritability of mammographic density, a risk factor for breast cancer. N Engl J Med. 2002;347(12):886–94. Varghese JS, Thompson DJ, Michailidou K, Lindstrom S, Turnbull C, Brown J, Leyland J, Warren RM, Luben RN, Loos RJ, et al. Mammographic breast density and breast cancer: evidence of a shared genetic basis. Cancer Res. 2012;72(6):1478–84. Lindstrom S, Thompson DJ, Paterson AD, Li J, Gierach GL, Scott C, Stone J, Douglas JA, dos-Santos-Silva I, Fernandez-Navarro P, et al. Genome-wide association study identifies multiple loci associated with both mammographic density and breast cancer risk. Nat Commun. 2014;5:5303. Lindstrom S, Vachon CM, Li J, Varghese J, Thompson D, Warren R, Brown J, Leyland J, Audley T, Wareham NJ, et al. Common variants in ZNF365 are associated with both mammographic density and breast cancer risk. Nat Genet. 2011;43(3):185–7. Musunuru K, Strong A, Frank-Kamenetsky M, Lee NE, Ahfeldt T, Sachs KV, Li X, Li H, Kuperwasser N, Ruda VM, et al. From noncoding variant to phenotype via SORT1 at the 1p13 cholesterol locus. Nature. 
2010;466(7307):714–9. Smemo S, Tena JJ, Kim KH, Gamazon ER, Sakabe NJ, Gomez-Marin C, Aneas I, Credidio FL, Sobreira DR, Wasserman NF, et al. Obesity-associated variants within FTO form long-range functional connections with IRX3. Nature. 2014;507(7492):371–5. Welter D, MacArthur J, Morales J, Burdett T, Hall P, Junkins H, Klemm A, Flicek P, Manolio T, Hindorff L, et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Res. 2014;42(Database issue):D1001–6. Berasain C, Avila MA. Amphiregulin. Semin Cell Dev Biol. 2014;28:31–41. Zhou Y, Lee JY, Lee CM, Cho WK, Kang MJ, Koff JL, Yoon PO, Chae J, Park HO, Elias JA, et al. Amphiregulin, an epidermal growth factor receptor ligand, plays an essential role in the pathogenesis of transforming growth factor-beta-induced pulmonary fibrosis. J Biol Chem. 2012;287(50):41991–2000. Herrington DM, Howard TD, Hawkins GA, Reboussin DM, Xu J, Zheng SL, Brosnihan KB, Meyers DA, Bleecker ER. Estrogen-receptor polymorphisms and effects of estrogen replacement on high-density lipoprotein cholesterol in women with coronary disease. N Engl J Med. 2002;346(13):967–74. Wiedemann E, Schwartz E, Frantz AG. Acute and chronic estrogen effects upon serum somatomedin activity, growth hormone, and prolactin in man. J Clin Endocrinol Metab. 1976;42(5):942–52. Hankinson SE, Willett WC, Michaud DS, Manson JE, Colditz GA, Longcope C, Rosner B, Speizer FE. Plasma prolactin levels and subsequent risk of breast cancer in postmenopausal women. J Natl Cancer Inst. 1999;91(7):629–34. Boyd N, Martin L, Chavez S, Gunasekara A, Salleh A, Melnichouk O, Yaffe M, Friedenreich C, Minkin S, Bronskill M. Breast-tissue composition and other risk factors for breast cancer in young women: a cross-sectional study. Lancet Oncol. 2009;10(6):569–80. Franco L, Williams FM, Trofimov S, Malkin I, Surdulescu G, Spector T, Livshits G. Assessment of age-related changes in heritability and IGF-1 gene effect on circulating IGF-1 levels. 
Age (Dordr). 2014;36(3):9622. Pollak M. The insulin and insulin-like growth factor receptor family in neoplasia: an update. Nat Rev Cancer. 2012;12(3):159–69. Zhang B, Shu XO, Delahanty RJ, Zeng C, Michailidou K, Bolla MK, Wang Q, Dennis J, Wen W, Long J, et al. Height and Breast Cancer Risk: Evidence From Prospective Studies and Mendelian Randomization. J Natl Cancer Inst. 2015;107(11) Boyd NF, Lockwood GA, Byng JW, Little LE, Yaffe MJ, Tritchler DL. The relationship of anthropometric measures to radiological features of the breast in premenopausal women. Br J Cancer. 1998;78(9):1233–8. Johansson A, Marroni F, Hayward C, Franklin CS, Kirichenko AV, Jonasson I, Hicks AA, Vitart V, Isaacs A, Axenovich T, et al. Linkage and genome-wide association analysis of obesity-related phenotypes: association of weight with the MGAT1 gene. Obesity. 2010;18(4):803–8. Okada Y, Kamatani Y, Takahashi A, Matsuda K, Hosono N, Ohmiya H, Daigo Y, Yamamoto K, Kubo M, Nakamura Y, et al. A genome-wide association study in 19 633 Japanese subjects identified LHX3-QSOX2 and IGF1 as adult height loci. Hum mol Genet. 2010;19(11):2303–12. Sasazuki T, Sawada T, Sakon S, Kitamura T, Kishi T, Okazaki T, Katano M, Tanaka M, Watanabe M, Yagita H, et al. Identification of a novel transcriptional activator, BSAC, by a functional cloning to inhibit tumor necrosis factor-induced cell death. J Biol Chem. 2002;277(32):28853–60. Hossain M, Qadri SM, Su Y, Liu L. ICAM-1-mediated leukocyte adhesion is critical for the activation of endothelial LSP1. Am J Physiol Cell Physiol. 2013;304(9):C895–904. Vogelstein B, Kinzler KW. The multistep nature of cancer. Trends Genet. 1993;9(4):138–41. Boyd NF, Martin LJ, Bronskill M, Yaffe MJ, Duric N, Minkin S. Breast tissue composition and susceptibility to breast cancer. J Natl Cancer Inst. 2010;102(16):1224–37. Byng JW, Boyd NF, Fishell E, Jong RA, Yaffe MJ. The quantitative analysis of mammographic densities. Phys Med Biol. 1994;39(10):1629–38. 
Graham SJ, Ness S, Hamilton BS, Bronskill MJ. Magnetic resonance properties of ex vivo breast tissue at 1.5 T. Magn Reson Med. 1997;38(4):669–77. Boyd NF, Lockwood GA, Byng JW, Tritchler DL, Yaffe MJ. Mammographic densities and breast cancer risk. Cancer Epidemiol Biomarkers Prev. 1998;7(12):1133–44. Bernstein L. Epidemiology of endocrine-related risk factors for breast cancer. J Mammary Gland Biol Neoplasia. 2002;7(1):3–15. Rice MS, Bertrand KA, VanderWeele TJ, Rosner BA, Liao X, Adami HO, Tamimi RM. Mammographic density and breast cancer risk: a mediation analysis. Breast Cancer Res. 2016;18(1):94. Cuzick J. Breast density predicts endocrine treatment outcome in the adjuvant setting. Breast Cancer Res. 2012;14(4):109. Byrne C, Ursin G, Martin CF, Peck JD, Cole EB, Zeng D, Kim E, Yaffe MD, Boyd NF, Heiss G, et al. Mammographic Density Change With Estrogen and Progestin Therapy and Breast Cancer Risk. J Natl Cancer Inst. 2017;109(9) Toriola AT, Dang HX, Hagemann IS, Appleton CM, Colditz GA, Luo J, Maher CA. Increased breast tissue receptor activator of nuclear factor-kappaB ligand (RANKL) gene expression is associated with higher mammographic density in premenopausal women. Oncotarget. 2017;8(43):73787–92. We acknowledge the contribution of Dr. Alice S. Whittemore, Stanford University, who directed our attention to the Moolgavkar model. The studies that provided data for the present paper were supported by grants from the Canadian Breast Cancer Research Alliance, the National Cancer Institute of Canada, and the National Institutes of Health (grant R01 CA082826-01). The Ontario Ministry of Health and Long-Term Care also supported this work. The datasets used during the present study are available from the corresponding author on reasonable request. Princess Margaret Cancer Centre, 610 University Avenue, Room 9-502, Toronto, ON, M5G 2M9, Canada Norman Boyd, Hal Berman, Jie Zhu, Lisa J. 
Martin & Salomon Minkin Genetics and Genome Biology, Hospital for Sick Children Research Institute, Toronto, ON, Canada Andrew D. Paterson Divisions of Epidemiology and Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada Imaging Research, Sunnybrook Health Sciences Centre, Toronto, ON, Canada Martin J. Yaffe, Sofia Chavez & Greg Stanisz Cancer Care Ontario, Toronto, ON, Canada Anna M. Chiarelli BC Cancer Agency, Vancouver, BC, Canada Greg Hislop NB, HB, ADP, and SM conceived of the study. NB, HB, ADP, SM, GH, and AMC collected and interpreted data. SC, GS, MJY, and NB measured breast images. JZ, SM, LJM, HB, AP, and NB analyzed and interpreted data. NB, HB, ADP, SM, JZ, and LJM prepared manuscript drafts. All authors commented on the manuscript drafts, and all authors read and approved the final manuscript. Correspondence to Norman Boyd. The data in the section of the manuscript concerned with CBD and the age-specific incidence of breast cancer were obtained in a number of separate studies for which ethics approval was obtained from the University Health Network, Sunnybrook Hospital, and Women's College Hospital (all in Toronto), and from the Toronto District School Board, the Toronto Catholic District School Board, the York Region District School Board, the York Catholic District School Board, and the mammography screening programs of Ontario and British Columbia. All subjects provided signed informed consent. Figure S1. a Examples of percent mammographic density (and grey scale). A = 0%; B = < 10%; C = 10 < 25%; D = 25% < 50%; E = 50% < 75%; F = > 75%. b Examples of percent breast water determined by magnetic resonance (and grayscale). Shows 0% (top left), 20% (top right), 60% (bottom left) and 90% (bottom right). Table S1. Associations of age, age at menarche, parity and menopausal status with breast tissue components.
Values shown are regression coefficients, adjusted for age, and p values. Table S2. Selected characteristics of subjects according to study. Table S3. Comparison of observed and predicted age-specific breast cancer incidence using three predictive models (including young woman with calibrated percent water). (DOC 4356 kb) Boyd, N., Berman, H., Zhu, J. et al. The origins of breast cancer associated with mammographic density: a testable biological hypothesis. Breast Cancer Res 20, 17 (2018). https://doi.org/10.1186/s13058-018-0941-y Two-stage model
CommonCrawl
Correlate not optional: PP sprouting and parallelism in "much less" ellipsis

Jesse A. Harris, University of California, Los Angeles, Los Angeles, CA, US
Katy Carlson, Morehead State University, Morehead, KY, US

Clauses that are parallel in form and meaning show processing advantages in ellipsis and coordination structures (Frazier et al. 1984; Kehler 2000; Carlson 2002). However, the constructions that have been used to show a parallelism advantage do not always require a strong semantic relationship between clauses. We present two eye tracking while reading studies on focus-sensitive coordination structures, an understudied form of ellipsis which requires the generation of a contextually salient semantic relation or scale between conjuncts. Yet when the remnant of ellipsis lacks an overt correlate in the matrix clause and must be "sprouted" in the ellipsis site, the relation between clauses is simplified to entailment. Instead of facilitation for sentences with an entailment relation between clauses, our online processing results suggest that violating parallelism is costly, even when doing so could ease the semantic relations required for interpretation.

Keywords: ellipsis, parallelism, focus-sensitive coordination, sentence processing, scalar meaning

How to Cite: Harris, J. A., & Carlson, K. (2019). Correlate not optional: PP sprouting and parallelism in "much less" ellipsis. Glossa: A Journal of General Linguistics, 4(1), 83. DOI: http://doi.org/10.5334/gjgl.707

Submitted on 31 May 2018; accepted on 18 Apr 2019.

The online interpretation of ellipsis structures has become a popular topic among psycholinguists because it highlights an intriguing mismatch between form and meaning, and consequently reveals a unique demand on the human sentence processing system.
In particular, that a meaningful interpretation is recovered from ellipsis shows that it is not enough for the processor to simply parse linguistic structure by passively interpreting the word forms presented to it; instead, the processor must actively go beyond the input and infer the correct form at the appropriate level of representation, e.g., syntactic, Logical Form, discourse, etc. We aim to expand the empirical and conceptual coverage of ellipsis processing by exploring an understudied ellipsis type known as focus-sensitive coordination, which requires a contextually salient scale between contrasting phrases. Here we use that scale to explore the role of parallelism in the processing of elided structures. Although there are many types of ellipsis structures, perhaps the best-known case is VP (verb phrase) ellipsis, which we use to illustrate the basic inference problem faced by the human sentence processor. For example, in (1a) the auxiliary did stands in for the verb phrase ate a cheeseburger, which is made explicit in (1b). (1) a. John ate a cheeseburger. Bill did, too. b. John ate a cheeseburger. Bill ate a cheeseburger, too. To interpret (1a) as (1b), the processor must therefore "fill in" the missing or elided material, presumably by consulting linguistic or discourse representations from the context. Current research on ellipsis suggests that the processor engages in some kind of cost-free mechanism, retrieving the missing form through either copying (Frazier & Clifton 2001, 2005; Frazier 2008) or a content-addressable pointer in memory (Martin & McElree 2008, 2009, 2011; Martin 2010). For concreteness, we assume that a syntactic or logical structure is present covertly at the ellipsis site (Shapiro & Hestvik 1995; Merchant 2001; Shapiro et al. 2003; Frazier 2008) at some level of representation (discourse, syntactic, Logical Form, etc.).
Although we believe that the assumption of covert syntax is supported by previous research, we acknowledge that other, non-syntactic accounts are currently live options in the literature which deserve serious attention (e.g., Sag 1976; Dalrymple et al. 1991; Hardt 1993; Ginzburg & Sag 2000; Culicover & Jackendoff 2005; Nykiel & Sag 2011). Our account in no way critically hinges upon or supports this assumption, which is made primarily for convenience, and our processing account could have been formalized in non-syntactic terms. In any event, we speak as though structure from the antecedent clause is interpreted at the ellipsis site (2), represented by < >, and is recovered within the ellipsis site (see Merchant 2016 for a review). (2) John ate a cheeseburger. Bill did <eat a cheeseburger>, too. If the problem of recovering the ellipsis weren't thorny enough, the processor may also need to infer that additional material or content is missing. In cases of clausal ellipsis like sluicing (Ross 1967, 1969), the remnant of ellipsis, e.g., the interrogative pronoun what in (3), is the overt material remaining from a clause that was elided. In the standard case of merger, the remnant is directly paired with a correlate (something) in the same syntactic position within the antecedent clause. However, Chung, Ladusaw & McCloskey (1995, 2011) also identify cases of sluicing in which a remnant has no overt correlate, cases they call sprouting (3b). (3) a. John ate something, but I don't know [CP what1 <John ate t1> ]. b. John ate, but I don't know [CP what1 <John ate t1> ]. While there are various syntactic accounts of sluicing (e.g., Chung et al. 1995; Merchant 2001; van Craenenbroeck 2010), many approaches converge on the idea that the elided clause (John ate t1) is syntactically or semantically identical to the antecedent clause at some level of representation.
An overt indefinite correlate (3a) would provide a variable (corresponding to the phonetically null trace t1) for the wh-element to bind within the elided phrase. When no correlate is present, as in (3b), a variable must be created within the ellipsis site – that is, a variable is "sprouted" at Logical Form (LF) that does not correspond to any overt material in the matrix clause. In other words, sprouting ensures that the form of the ellipsis clause of (3b) is equivalent to that of (3a), even though the actual antecedent clause in (3b) is not parallel with the ellipsis. Processing sprouting has been shown to elicit online processing costs for sluicing ellipsis (Frazier & Clifton 1998; Dickey & Bunger 2011), indicating that the processor seeks to identify a correlate for the remnant of ellipsis from the surface form of the antecedent clause. Frazier & Clifton (1998) compared sprouting of two kinds of elements: the argument to a verb (something; 4a) and an adjunct (somewhere; 4b), following previous research (Carlson & Tanenhaus 1988; Mauner, Tanenhaus & Carlson 1995) suggesting that implicit arguments, but not implicit adjuncts, are inferred at LF. Although cases of sprouting were costly (i.e., when something or somewhere were absent), no processing time differences between sprouted arguments and adjuncts were observed in their self-paced reading study. (4) a. Argument: The secretary typed (something), but I don't know what. b. Adjunct: The secretary typed (somewhere), but I don't know where. However, processing a conjunction structure is independently facilitated when the conjuncts are parallel in their syntactic, semantic, or prosodic structure (e.g., Frazier et al. 1984; Henstra 1996; Frazier, Munn & Clifton 2000; Knoeferle & Crocker 2009; Sturt, Keller & Dubey 2010; Poirier, Walenski & Shapiro 2012; Knoeferle 2014). 
Dickey & Bunger (2011) argued that the processing cost attributed to sprouting can be best understood in terms of a general preference for structurally parallel conjuncts rather than an operation specific to the construction of Logical Form or some other level of representation (cf. Chung et al. 1995). In a self-paced reading paradigm, they observed a processing cost for sprouting regardless of whether there was ellipsis in the second conjunct (5a) or not (5b), in addition to replicating the lack of differences between sprouted arguments and adjuncts (Frazier & Clifton 1998). (5) a. Elided: The secretary typed {something/quickly}, but I don't know what exactly. b. Non-elided: The secretary typed {something/quickly}, but I don't know what she typed. The existing evidence thus suggests that structural parallelism1 between remnants and correlates could be a driving factor in processing ellipsis structures (see Carlson 2002 for discussion), while more general parallelism between clauses or phrases is common in conjoined structures. However, most experimental studies have concentrated on better-known forms of ellipsis: VP ellipsis (e.g., Murphy 1985; Tanenhaus & Carlson 1990; Ward, Sproat & McKoon 1991; Shapiro & Hestvik 1995; Martin & McElree 2008; Shapiro et al. 2003, among others), gapping (Carlson 2001, 2002; Carlson, Dickey & Kennedy 2005), sluicing (e.g., Frazier & Clifton 1998, 2005; Carlson et al. 2009; Poirier et al. 2010; Martin & McElree 2011; Nykiel 2013; Harris 2015), and to a lesser extent stripping/replacive ellipsis (also known as bare argument ellipsis; see Paterson et al. 2007; Carlson 2013; Sauermann et al. 2013 for experimental findings).
While these studies reveal a relatively unified view of processing ellipsis, less canonical forms of ellipsis with specialized constraints on the relation between the antecedent clause and the ellipsis might challenge our current understanding of how elliptical material is recovered and integrated into a representation in real time. We turn to focus-sensitive coordination, an ellipsis construction which imposes an additional relationship between the correlate and remnant: in this case, a contextually salient scale. In particular, we use the properties of that scale to explore the role of parallelism in the retrieval of correlates for remnants in elided structures.

1.1 Focus-sensitive coordination

Focus-sensitive coordination structures are a class of constructions that contain a coordinator like let alone, much less, and to some extent never mind, in the scope of explicit or implicit negation,2 which impose a suite of syntactic, pragmatic, and prosodic constraints on their conjuncts. Following current literature (Hulsey 2008; Toosarvandani 2010; Harris 2016), we assume that the second conjunct is the remnant of ellipsis, rather than a simple constituent. For example, Harris (2016) proposes that the overt conjunct vodka in (6a) corresponds to the remnant of ellipsis, which moves into a focus position FocP above the elided clause (I drink t1) as in (6b). This derivation is on par with other move-and-delete accounts of clausal ellipsis, such as those for stripping/bare argument ellipsis (Frazier et al. 2012; Sailor & Thoms 2014), sluicing (Merchant 2001), or fragment answers (Merchant 2004; Weir 2014). (6) a. I can't drink beer, much less vodka. b. I can't drink beer, much less [FocP vodka]1 <I drink t1>.
Though not reviewed in detail here, the ellipsis account of focus-sensitive coordination structures is supported by many distributional tests that align it with so-called small conjunct ellipsis (see Hulsey 2008 for a gapping account, and Harris 2016 for a move-and-delete account similar to those proposed for stripping/bare argument ellipsis). Consequently, we will continue to use key terminology from the ellipsis literature to describe the components of the structure: a "remnant" (vodka) that represents the non-elided material in the clause containing the ellipsis, and a "correlate" (beer) in the antecedent clause that contrasts with the remnant. Focus-sensitive coordination is similar to stripping/bare argument ellipsis and sluicing ellipsis in that the correlate and the remnant bear pitch accent in the most felicitous pronunciation, typically with accents that mark contrastive focus (Harris & Carlson 2018). In the examples below, contrastive accent is indicated with CAPS. Note that the contrastive element in the remnant corresponds to a contrastive correlate in the antecedent, regardless of whether the constituents are nouns (7a), verbs (7b), verb phrases (7c), or modifiers (7d). (7) a. I can't drink BEER, much less VODKA. b. I can't DRINK beer, much less MAKE it. c. I can't DRINK BEER, much less MAKE WINE. d. I can't drink ONE beer, much less TWO (beers). However, research on a range of ellipsis structures has found that overt prosodic marking on a noun increases the likelihood of it being selected as a correlate, but does not entirely disambiguate the sentence (e.g., Carlson 2001; Carlson et al. 2009; Carlson, Frazier & Clifton 2009; Harris & Carlson 2018). Listeners still can and do choose unaccented nouns as correlates, potentially due to a preference for local correlates, or an expectation about the default position of focus.
For example, Rooth (1992) noted that while contrastive accent is necessary on a remnant, it is not actually required on the correlate, though it could be useful in locating the intended correlate. Finally, and perhaps most crucially for present purposes, the antecedent and the elided clause stand in a scalar relationship. In particular, the negative antecedent (I can't drink beer) seems to strongly imply or contextually entail the clause containing the ellipsis, also negated (I can't drink vodka for (7a)), intuitively evoking a scalar relationship between beer and vodka (Fillmore et al. 1988; Toosarvandani 2010). Two kinds of scales are often discussed in the literature. The first are conventionalized or lexicalized scales, in which closed-class elements are logically ordered via semantic entailment or informativity. For example, lexical elements like cardinal numbers <one, two, three, …>, modals <can, must> and quantifiers <none, some, all> form conventionalized scales via entailment (Horn 1972; Hirschberg 1985). Such scales are thought to be context-independent, as they generalize to many occasions of use (e.g., Stiller, Goodman & Frank 2011). To take (7d) as an example: if you drink two beers, then, in all situations, you have also drunk one. Therefore, the (negated) proposition expressed in the antecedent clause (I can't drink one beer) entails the (negated) ellipsis clause containing the elided content (I can't drink two beers). Scales of the second kind are sometimes known as ad hoc scales, which are highly dependent on context or the conversation at hand, and one must therefore know a great deal about the specific situation in addition to world knowledge and perhaps speaker intention to interpret them correctly (e.g., Hirschberg 1985). For example, (7a) can be understood as implicating that if I am unlikely or lack the capacity to drink beer, I therefore also am unlikely or lack the capacity to drink vodka, given its greater potency.
Similarly, (7c) somehow implies that drinking beer is an easier or more expected activity than making wine, and so if I cannot perform the former, I therefore cannot perform the latter, even though drinking beer and making wine are logically unrelated events (a teetotaling oenologist might strike us as unusual, but certainly not as contradictory). It is currently unclear whether one kind of scale might be more difficult to recover than the other. Conventionalized scales might be psychologically privileged, in that they could be accessible in more general contexts than ad hoc relations. Such a view would be compatible with Levinson's (2000) neo-Gricean account, in which generalized conventional implicatures constitute interpretive defaults, and are consequently more readily accessible than particularized conventional implicatures. In other words, the unconstrained nature of ad hoc scalar relations would make them less accessible to comprehenders constructing interpretations in real time. Recent experiments do not shed much light on the matter, though they have shown that children (Papafragou & Tantalou 2004) and adults (Katsos & Bishop 2011) may treat these two kinds of scales in very different ways (for more discussion, see Katsos & Cummins 2012). Example (8) illustrates that the (contextually) weaker element in focus-sensitive coordination must be located in the first conjunct; some may precede all (8a), but in many dialects the reverse is semantically incoherent, though seemingly grammatical (8b). Of course, this restriction does not carry over to other conjunctions, even those like but and although which make similar contributions to the discourse (8c). (8) a. John didn't steal SOME of the cookies, much less ALL of them. b. #John didn't steal ALL of the cookies, much less SOME of them. c. John didn't steal ALL of the cookies, but/although he did steal SOME of them. The basic properties we assume for focus-sensitive coordination are summarized in (9). 
(9) Properties of focus-sensitive coordination
1. The second conjunct consists of the remnant of ellipsis;
2. The correlate and remnant are usually marked with contrastive focus; and
3. Propositions formed from the clauses containing the correlate and remnant are placed on a contextually salient scale.

We believe that these properties place a unique set of demands on the sentence processing system. As we shall see, they also allow us to formulate two potentially opposing hypotheses regarding the processing routines recruited to interpret ellipsis structures in real time. First, however, we articulate our assumptions regarding what basic information the processor needs in order to process focus-sensitive coordination generally.

1.2 Processing ellipsis and focus-sensitive coordination

As the literature on processing ellipsis structures is rapidly growing (e.g., Phillips & Parker 2014, for review), we will introduce only the bare essentials here. We assume that the processor must address three basic requirements (10) in order to resolve any remnant-based ellipsis, as discussed in Carlson & Harris (2017) for focus-sensitive coordination and Harris (2019) for sluicing ellipsis. Again, although we assume for concreteness that the structure of the ellipsis site is recovered from the context or from the antecedent site, this assumption is not required for the essentials of our account.

(10) Basic tasks of the processor in ellipsis processing:
1. Parse the remnant of the ellipsis, i.e., construct the appropriate phrase structure for the remnant given the input.
2. Locate the correlate, if any, from the antecedent clause.
3. Construct or infer the elided phrase, e.g., by regenerating or copying a structure at Logical Form.

We use example (6) to illustrate the three tasks in more depth in (11). We assume that each process depends on the previous one.
For instance, the parser must have established the basic syntactic category of the remnant (step 1) before it can locate a correlate of the appropriate type in the antecedent clause (step 2). Similarly, generating a structure for the elided clause (step 3) depends on having selected an appropriate correlate from the antecedent clause (step 2).3 Note that the parsing mechanisms needed for step 1 are standard, and are not processes unique to ellipsis.

(11) I can't drink beer, much less vodka.
Step 1. Parse the remnant: Assign the appropriate phrase structure for vodka.
I can't drink beer, much less [DP=Remnant vodka].
Step 2. Locate the correlate: Retrieve an appropriate correlate that provides a suitable contrast to the remnant vodka, using various processing strategies.
I can't drink [DP=Correlate beer], much less [DP=Remnant vodka].
Step 3. Construct the elided phrase: Build the ellipsis structure after the remnant.
I can't drink [DP=Correlate beer], much less [DP=Remnant vodka]1 <I drink t1>.

Although focus-sensitive coordination has been studied far less than other kinds of ellipsis structures, there are a few recent results that bear on the processes outlined above. Regarding step 1, Harris (2016) found a small bias for VP remnants over DP/NP remnants in a variety of offline completion tasks when the fragment after let alone appeared without context (replicated in Harris & Carlson 2016, 2018). However, simply mentioning a salient DP object in preceding text weakened or overturned the bias. Further, VP and DP remnants did not show any categorical differences in online reading in eye tracking. He argued that these results were consistent with an ellipsis account over a simple coordination account, and that the processor constructs the clause containing the remnant upon encountering a focus-sensitive coordinator.
In a follow-up study, Harris & Carlson (2016) showed that the bias towards VP remnants could not be attributed to exposure in text alone, as English corpus searches showed a general mild preference for DP, not VP, remnants. Following Toosarvandani (2010), they proposed that the remnant-correlate pair is sensitive to an accessible Question Under Discussion, i.e., the question or topic that the immediate conversation is meant to address, explicitly or implicitly (Roberts 1996, 2012). With respect to step 2, Harris & Carlson (2016, 2018) found that the processing strategies used to pair the remnant, once parsed, with a correlate were similar to those used in other ellipsis structures. In particular, the processor appears to prefer the most local (or nearest) correlate of the appropriate syntactic type, a preference that prevails in sluicing and most likely reflects the tendency for English focus to appear late in a clause (for arguments from sluicing, see Carlson, Dickey et al. 2009). The preference for the most local correlate is exhibited not only across a variety of written and spoken corpora, but also very early in the online processing record. The general patterns are thus compatible with an ellipsis account in which the processor seeks to make the antecedent and the ellipsis clauses, especially the contrasted elements, parallel along syntactic, semantic, and prosodic dimensions (Carlson 2001; Dickey & Bunger 2011). What, then, is the processor to do when the clause containing the remnant and ellipsis is not parallel with the antecedent? Instances of sprouting discussed above represent an extreme case, in which the remnant lacks a constituent in the antecedent.
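To make the locality-driven search for a correlate (step 2) concrete, it can be rendered as a toy procedure. This sketch is purely illustrative and makes no claim about the actual parsing mechanism; the function name, category labels, and data structures are our own simplifications.

```python
# Toy sketch of a locality-driven correlate search (illustrative only).
# The antecedent clause is represented as an ordered list of
# (category, phrase) pairs; categories are coarse labels like "PP" or "VP".

def find_correlate(remnant_category, antecedent_constituents):
    """Return the most local antecedent constituent whose syntactic
    category matches that of the remnant, or None if there is none."""
    # Scan the antecedent from right to left, so the nearest (most
    # recently encountered) matching constituent is chosen first,
    # mirroring the locality preference described above.
    for category, phrase in reversed(antecedent_constituents):
        if category == remnant_category:
            return phrase
    return None  # no overt correlate found

# "John doesn't want to eat out on Saturday, much less on Tuesday"
antecedent = [("VP", "eat out"), ("PP", "on Saturday")]
print(find_correlate("PP", antecedent))  # -> on Saturday
print(find_correlate("DP", antecedent))  # -> None
```

On this sketch, sprouting corresponds to the search returning empty-handed: no constituent of the remnant's category exists in the antecedent, so a correlate would have to be created within the ellipsis site instead.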
As reviewed above, prior studies have found that sprouted correlates incur a processing cost during online comprehension, due either to an ellipsis-specific operation in which the ellipsis site is modified to include a variable corresponding to the correlate (Frazier & Clifton 1998), or else to a side effect of a bias against non-parallel antecedents (Dickey & Bunger 2011). Focus-sensitive coordination, however, raises an intriguing possibility regarding sprouting. In addition to finding a correlate for the remnant (step 2), the correlate and remnant must ultimately be put onto a contextually salient scale whose properties are inferred through context (step 3). Notably, in cases of focus-sensitive coordination, sprouted correlates could facilitate recovery of a contextually salient scale via an entailment relation between the antecedent and elided clause. In example (12a), the processor presumably must pair the remnant chemistry with the overt correlate carpentry, and accommodate an appropriate scale, e.g., one which deems carpentry a less difficult subject than chemistry, creating an inference from not being able to study carpentry to not being able to study chemistry. Without an overt correlate (12b), however, the proposition obtained from the antecedent (that Michael is not able to study) necessarily entails the proposition obtained from the clause containing the remnant and the ellipsis (that Michael is not able to study chemistry).4 (12) Focus-sensitive coordination a. Michael couldn't study carpentry, much less chemistry. b. Michael couldn't study, much less chemistry. In sluicing ellipsis, however, the relationship between clauses is much different. In the case with sprouting (13b), no such entailment relationship holds between the antecedent clause (Michael studied something) and the clause corresponding to the ellipsis (what1 he studied t1). 
The relation is instead one of identity, in which some aspect of the antecedent clause is unknown, unidentified, or unreported. (13) Sluicing ellipsis a. Michael studied something, but I don't know what. b. Michael studied, but I don't know what. In the remainder of this paper, we explore two possibilities. On the one hand, it is in principle conceivable that the cost for sprouting is limited to sluicing and other types of ellipsis besides focus-sensitive coordination, in that, without a scalar relation between clauses, there would be no interpretive advantage for sprouting in these structures. That is, if ad hoc scales are costly to compute, then sprouting in focus-sensitive coordination, by virtue of providing an entailment relation between the clauses, might actually simplify the task of sentence processing. In particular, a ready-made scale might outweigh the general preference for parallelism, provided that such a scale makes the inference task easier, as in (14). (14) Scalar Advantage Principle (SAP): Avoid positing an ad hoc scale if a conventionalized scale is readily accessible. This possibility rests on the premise that ad hoc scales are costly to compute, at least compared to entailment relations. Although we have no direct evidence for or against the Scalar Advantage Principle, we considered it a very likely conceptual possibility. The principle was motivated in part by our intuition that interpreting sentences with lexicalized scales as in (7d) requires no special knowledge about discourse or the intentions of the speaker. In contrast, interpreting sentences like (7a) requires establishing a contextually salient relationship, and such relationships are, in principle, unbounded. Although our primary motivation for SAP is conceptual, we do not think it implausible in principle. For example, a branch of research in sentence processing and general cognition has concluded that processing resources are generally quite limited.
As a consequence, some language processing tasks are shallow or incomplete (e.g., Ferreira et al. 2002; Sanford & Sturt 2002; Sanford & Graesser 2006, among many others). Assuming that conventionalized relations are more readily available than situation-specific or ad hoc relations, structures that do not require ad hoc scalar relationships might demand fewer processing resources. In other words, conventionalized relations might simply be less taxing to access, especially without sufficient discourse context. Of course, the validity of such a preference remains an empirical question. On the other hand, it is possible that the processor always prefers a correlate that is maximally parallel to the remnant as a matter of course, regardless of the advantage to interpretation. We define parallelism as the presence of any of a number of similarities (morphological, prosodic, syntactic, semantic, etc.) between contrasting or conjoined phrases (Carlson 2002). On this view, we expect that parallelism between the correlate and remnant would facilitate processing focus-sensitive coordination (15), as in other cases of ellipsis processing (e.g., Carlson 2001; Carlson et al. 2009). (15) Parallel Contrast Principle (PCP): Prefer correlate-remnant pairs that are as parallel as possible. As noted earlier, initial studies of parallelism mainly concentrated on unelided conjoined elements instead of ellipsis. Frazier et al. (1984) showed that conjoined sentences were read faster when the clauses were more similar to each other in a range of ways, from active/passive voice to the animacy of an object DP. Mauner, Tanenhaus & Carlson (1995) also found that matching active/passive voice eased the processing of VP ellipsis sentences. Frazier, Munn & Clifton (2000) showed that semantically similar but syntactically unlike conjoined phrases (prepositional phrases or PPs with Adverb Phrases, for example) were processed more slowly than syntactically parallel conjoined phrases.
Henstra (1996) found a bias for conjoined DPs to match in definiteness and presence of modifiers. In various studies of ambiguous ellipsis sentences, Carlson (2001, 2002; also Carlson et al. 2009) found that the interpretation of these sentences responded to DP similarity in many features, such as definiteness, DP form (names vs. pronouns vs. definite or indefinite phrases), and gender, with a processing preference for pairing more similar correlates and remnants. The effects held true across ellipsis sentences with the conjunction and, such as gapping or VP ellipsis, but also ones which lacked it, such as comparatives and bare remnant ellipsis. In all of these sentence types, remnant phrases (usually DPs) left behind by ellipsis contrasted with correlates within the unelided antecedent clause. Similarities between potential correlates and the remnant (and lack of similarity with other potential correlates) influenced the likelihood of an interpretation in which the similar phrases contrasted. In auditory experiments, contrastive (L+H*) accents on a potential correlate and remnant (and not on other possible correlates), creating what could be called prosodic parallelism, also increased matching interpretations, though they did not disambiguate the sentences by any means. There are some remaining issues regarding the role of contrast in processing ellipsis: for example, Rooth (1992) observes that accents or even focus on the first member of a contrastively focused pair of phrases is not necessary, though Carlson's work shows that accent placement can clearly aid processing (see Harris & Carlson 2018 for additional discussion). We take parallelism to be an extra-grammatical factor that affects processing of ellipsis structures and conjoined structures, rather than a grammatical constraint.
The most extreme nonparallel condition is one in which a remnant lacks a correlate entirely, as in sprouting examples of sluicing or focus-sensitive coordination structures, which are nonetheless entirely grammatical structures. We take these cases as a testing ground for comparing the predictions of the Scalar Advantage Principle and the Parallel Contrast Principle. When it comes to sprouting in focus-sensitive coordination, the two principles make the opposite predictions. The Scalar Advantage Principle predicts an advantage for sprouting, and the Parallel Contrast Principle predicts a penalty for sprouting. There is already some evidence for the Parallel Contrast Principle for focus-sensitive coordination. Carlson & Harris (2017) considered cases of "zero-adjective contrast" in which the remnant DP contained an adjective like red without a corresponding adjective in the correlate (16). In a search of the Corpus of Contemporary American English (COCA; Davies 2008), zero-adjective contrast was found to be extremely rare in text (less than 1% of 1644 cases of much less ellipsis). (16) a. I don't own [a hat], much less [a red one]. b. She will not argue with [a fool], much less [a money-hungry one]. (COCA) A series of auditory and written questionnaires confirmed that zero-adjective contrast was dispreferred compared to examples with parallel DPs in naturalness rating tasks, and was avoided in sentence completion. Finally, in a self-paced reading study with items like (17), they observed a penalty immediately on DP remnants (an easy one) when the correlate lacked a corresponding adjective contrast (complex), but a penalty on VP remnants (burn one) when there was an adjective in the correlate. 
The first result was interpreted as providing evidence against the Scalar Advantage Principle, as there was a clear cost for computing zero-adjective contrast, and the second as an indication that the processor anticipates upcoming contrast, in which case the salient contrast evoked by complex would have initially misled the processor.

(17) a. The chef didn't overcook a (complex) meal, much less [DP an easy one], since he was trained by the very best.
b. The chef didn't overcook a (complex) meal, much less [VP burn one], since he was trained by the very best.

Despite the existing evidence that zero-adjective contrasts are costly to compute, the following set of experiments follows up on a number of remaining questions and concerns. First, we are cautious about analogizing the addition of an adjective to the remnant too closely with the established case of sprouting in sluicing. Where the analogy breaks down is that the remnant DP does in fact have a correlate DP in the antecedent clause in such cases, but the correlate simply lacks the expected sub-contrast within it. Thus, the processor might not need to posit a variable at logical form so much as readjust the kinds of scales or comparisons it can accommodate. Second, Carlson & Harris' (2017) main focus was more on the distribution of remnants after the much less coordinator, and less on the online processing profile associated with computing zero-adjective contrast in focus-sensitive coordination. The present study utilizes eye tracking while reading in order to gain a more fine-grained picture of processing focus-sensitive coordination structures during natural reading, which allows us to identify possible tradeoffs when initially encountering the remnant and later measures associated with interpretation. Finally, the structure we are studying permits an additional test of correlate-remnant semantic mismatch, which forms a kind of intermediate case between parallel correlate-remnant pairs and sprouting.
It also affords us the opportunity to determine how much subjects attended to the semantic and pragmatic compatibility between the correlate and the remnant. We present three norming studies and two eye tracking studies below, each of which contain items based on the following pattern (18). The first three conditions have PP (prepositional phrase) remnants, whereas the last three conditions have VP remnants (18d–f), which both served as a statistical control and prohibited participants from anticipating a PP remnant. The first sentence (18a) illustrates PP sprouting, in which a PP remnant completely lacks a corresponding PP correlate (the No Matrix PP condition). The second sentence (18b) contains an overt PP correlate that matches the remnant along syntactic and pragmatic dimensions (both indicate time; the Compatible PP condition). The third (18c) provides a case of moderate correlate-remnant mismatch (the Incompatible PP condition). While the correlate is of the same syntactic type, a PP, it doesn't permit comparison along a comparable scale (the remnant is about time while the supposed correlate is about location). Bold formatting was added here and elsewhere for clarity of exposition, but did not appear in the experiment.

(18) PP remnant
a. John doesn't want to eat out, much less on Tuesday, so I guess we'll be staying home.
b. John doesn't want to eat out on Saturday, much less on Tuesday, so I guess we'll be staying home.
c. John doesn't want to eat out at a steakhouse, much less on Tuesday, so I guess we'll be staying home.
VP remnant
d. John doesn't want to eat out, much less go dancing, so I guess we'll be staying home.
e. John doesn't want to eat out on Saturday, much less go dancing, so I guess we'll be staying home.
f. John doesn't want to eat out at a steakhouse, much less go dancing, so I guess we'll be staying home.
The second eye tracking study explores whether the effects of PP sprouting are mitigated in supporting contexts, comparing the effect of sprouting cases like (18a) over controls (18b) with and without contexts introducing the PP remnant. In all, the results support the Parallel Contrast Principle, in that readers rely heavily on the form of the antecedent clause to identify suitable correlates, and that this process persists even in contexts supporting the PP remnant.

2 Experiment 1: PP sprouting

2.1 Experiment 1A: Completion study

2.1.1 Materials and method

Materials were created by truncating the sentences to be used in the first eye tracking study (22) after much less, as shown in (19), along with 6 additional items containing much less ellipsis. A full list of items is in Appendix A. Participants were instructed to complete sentence fragments with the first natural word or short phrase that came to mind.

(19) a. John doesn't want to eat out, much less _____________.
b. John doesn't want to eat out on Saturday, much less _____________.
c. John doesn't want to eat out at a steakhouse, much less _____________.

The 30 experimental items were interspersed with 48 fragments from unrelated experiments, 15 non-experimental filler items, and 5 highly constrained fill-in-the-blank fillers (e.g., Abe had finally had enough. But ______ was little else he could do.) for a total of 98 items per subject. Fragments were presented in counterbalanced order and were individually randomized for each participant.

2.1.2 Participants

Forty native speakers of English were recruited on Amazon's Mechanical Turk, and were compensated $2 for participation. Four participants were removed for not passing three difficult-to-parse questions that demanded a native or highly proficient understanding of English. All remaining participants provided appropriate responses to the highly constrained catch items.
Seven participants were removed for counterbalancing purposes, resulting in 27 total participants distributed equally across 3 counterbalancing lists. Data from six items that did not appear in the eye tracking study below were removed from the analysis. Results from the entire 30-item data set are virtually identical to the results presented here.

2.1.3 Results and discussion

Both authors annotated the responses according to which element in the matrix clause the completion contrasted with, regardless of the syntactic category of the completion. For example, both Sunday and on Sunday completions were treated as contrasting with the matrix PP on Saturday. PP contrast completions were syntactically realized as DPs 48% of the time. Completions coded as DP contrasts had to contrast with a different DP in the initial clause, such as the subject or object. A few cases of initial disagreement were resolved through discussion between the authors. Two effects are clearly observable from Table 1; see also the left panel of Figure 1. First, when there was a PP in the matrix clause, participants supplied vastly more PP completions (~80%). The strong preference for PP completions in such cases suggests that the PP constituents provided salient contrasts for the much less coordinator. Of course, in this study the compatible/incompatible distinction for PPs was meaningless because there was no remnant PP provided. Instead, the two conditions just illustrate that both PPs in the matrix clause were appropriate possible correlates when participants could write in their own remnant.

Completion contrast    PP     DP     VP
No Matrix PP           4%     40%    56%
Compatible PP          78%    8%     14%
Incompatible PP        81%    9%     10%

Table 1: Experiment 1A: Completion norming study. Percentage of completions supplied by subjects by grammatical category.

Figure 1: Experiments 1A–B. Norming studies. Left panel: Results from the sentence completion study. Right panel: Results from the naturalness rating study presented as centered z-scores.
Second, in cases where there was no PP in the matrix clause, participants completed sentence fragments largely with either VP (56%) or DP (40%) completions,5 replicating the general VP bias found for let alone coordinators in other completion experiments (Harris 2016; Harris & Carlson 2016, 2018). Crucially, when there was no PP in the matrix clause, i.e., the PP Sprouting condition in upcoming experiments, only 4% of the responses were PP completions. The observed bias against PP completions without a matrix PP to serve as a correlate is therefore most compatible with the Parallel Contrast Principle, in which comprehenders prefer correlate-remnant pairs that are parallel in form. To determine how similar completions to experimental items were to patterns observed in text and speech, we annotated 1670 relevant examples of the much less structure in COCA (Davies 2008). Of those, 234 examples (14% of the total) had PP remnants. Most of the PP remnants (146, or 62% of PP remnant examples) used the same preposition as a contrasting phrase in the antecedent clause, and so only the DP within the remnant was truly contrastive. An additional 83 examples (35% of PP remnant examples) had PPs with prepositions that did not match the preposition of a first-clause PP, as illustrated in (20). In 19 of those cases, the remnant PP was not on the same dimension as any PP in the first clause; in 8 of the 19, there was no PP at all in the first clause (21), and so true sprouting of a PP occurred in 3% (8 of 234) of the relevant cases, a rate that is on par with the 4% observed in the completion experiment.

(20) There are few national markets to simplify the distribution of water within nations, much less across national borders. (Not sprouting: PPs contrasting on location)

(21) There ought not to be violent crimes, much less in a church during the glorious fading days of August?
(PP sprouting: no PP in antecedent)

Overall, the results from the corpus and completion studies show that PP sprouting is avoided in production. We now examine whether sprouting a PP remnant is penalized when provided directly to a comprehender.

2.2 Experiment 1B: Naturalness ratings study

Thirty sextets of sentence items were created, crossing Matrix clause structure (No Matrix PP, Compatible PP, Incompatible PP) and Remnant type (PP Remnant, VP Remnant), as in (18) above. These items were presented along with 48 sentences from two unrelated experiments, 12 non-experimental fillers, and 4 blatantly ungrammatical catch sentences (e.g., Most book did Lance enjoy in). The materials were counterbalanced and individually randomized on a subject-by-subject basis. Participants rated the naturalness of each sentence they read using a 1 to 7 Likert scale, with 7 indicating "Completely natural" and 1 "Completely unnatural". Forty-nine participants were recruited on Amazon's Mechanical Turk, and paid $2 for participation. Participants were identified as unique from those in the previous norming experiment by two criteria: their Amazon Worker ID number, and an anonymous variant of their IP address. One individual self-identified as a non-native speaker of English and was removed from the dataset. Thirteen others were removed for rating any of the ungrammatical catch items as a 4 or above, and four more were removed for counterbalancing purposes. The final dataset consisted of data from 30 participants equally distributed across 6 lists. The data are presented in Table 2 in both raw and normalized form. The data were normalized by taking the centered z-score from only the ratings in the experiment to clearly illustrate the directions of the effects between conditions; see Figure 1. Subjects rated experimental items as very natural overall, at or above 5 on a 7-point Likert scale on average.
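The centered z-score transformation described above is standard; a minimal sketch in plain Python (an illustration of the transformation, not the authors' analysis code) makes the computation concrete:

```python
# Minimal sketch, not the authors' analysis code: centered z-scores
# computed from only the ratings in the experiment, as described above.

def center_z(ratings):
    """Return centered z-scores: (rating - mean) / sample standard deviation."""
    n = len(ratings)
    mean = sum(ratings) / n
    sd = (sum((r - mean) ** 2 for r in ratings) / (n - 1)) ** 0.5
    return [(r - mean) / sd for r in ratings]

# Example: Likert ratings on a 1-7 scale
print(center_z([5, 6, 7]))  # mean 6, sd 1 -> [-1.0, 0.0, 1.0]
```

Centering within the experiment's own ratings, rather than against some external norm, makes the sign of each condition mean directly interpretable as above or below the experiment-wide average.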
The raw scores were subjected to a linear mixed effects regression model with planned contrasts of Matrix and Remnant Type as fixed effects and by-subjects and by-items random intercepts. Fixed effects were given deviation coding, with reference levels of Compatible PP for the Matrix factor, and PP Remnant for the Remnant Type factor. Although we report data from only the 24 items that also appeared in the eye tracking experiment that follows, results with all 30 items were qualitatively identical. See Table 3 for the statistical analysis.

                  PP Remnant          VP Remnant          PP penalty
No Matrix PP      5.00/–0.42 (0.14)   5.63/0.03 (0.13)     0.63
Compatible PP     5.89/0.22 (0.11)    5.71/0.09 (0.12)    –0.18
Incompatible PP   5.46/–0.09 (0.13)   5.84/0.18 (0.11)     0.38

Table 2: Experiment 1B: Naturalness ratings. Uncorrected/z-score normalized means. Standard errors are in parentheses.

                               Estimate   Std. Error   t-value
(Intercept)                     5.589      0.156        35.86*
No Matrix PP                   –0.26       0.059        –4.41*
Incompatible PP                 0.044      0.059         0.74
PP Remnant                     –0.145      0.042        –3.47*
Incompatible PP × PP Remnant   –0.056      0.059        –0.95
No Matrix PP × PP Remnant      –0.157      0.059        –2.66*

Table 3: Experiment 1B: Naturalness ratings. Linear mixed effects regression model. Parameters with t-values above |2| were considered significant and are marked with an *.

Items from the No Matrix PP condition (M = 5.32, SE = 0.10) were, on the whole, rated less natural than those from the Compatible PP reference level (M = 5.80, SE = 0.08), t = –4.41, which, in turn, did not differ from the Incompatible PP items (M = 5.65, SE = 0.09). In addition, items with PP Remnants (M = 5.45, SE = 0.08) were rated as less natural than those with VP Remnants (M = 5.73, SE = 0.07), t = –3.62. More importantly, however, there was a differential penalty for PP Sprouting: items with PP remnants were rated lower than those with VP remnants when the matrix lacked a PP correlate (diff = 0.63) compared to the Compatible PP reference level (diff = –0.18), t = –2.66.
In contrast, there was no significant penalty for Incompatible PP remnants over VP remnant counterparts (diff = 0.38). In planned paired t-test comparisons with Bonferroni corrections, PP Remnants were rated lower than VP Remnants for only the No Matrix PP condition in both by subjects [t1(29) = –3.67, p < 0.001] and by items [t2(23) = –2.77, p < 0.05] comparisons. In Incompatible PP conditions, PP Remnants were rated as less natural than VP Remnants in by-subjects comparisons only [t1(29) = –2.87, p < 0.01]. According to the Scalar Advantage Principle, PP sprouting should be preferred as a way to avoid computing ad hoc scales, since it relies on an entailment between the antecedent clause and the clause containing the ellipsis. In contrast, the Parallel Contrast Principle predicts that PP sprouting should be difficult to compute, as it violates the expectation for parallelism between clauses. In all, the results of the naturalness ratings study support the predictions of the Parallel Contrast Principle, as there was a penalty, not an advantage, for sprouted PP Remnants. Interestingly, we also found a hint of a weak, though not fully significant, penalty for Incompatible PP contrasts. The completion and corpus studies also suggested that PP sprouting would be dispreferred, since PP remnants were rare unless a PP was also present in the matrix clause. An eye tracking study will now be presented, in order to determine whether PP Sprouting is costly to compute during online processing.

2.3 Experiment 1C: Eye tracking study

Materials consisted of 24 items (22) from the naturalness experiment, half of which were followed by comprehension questions; see Appendix A for a complete list. Items were interspersed with another 48 sentences from two unrelated experiments and 18 non-experimental filler items. Prior to analysis, sentences were partitioned into 6 regions.
As the No Matrix PP conditions (22a) did not contain a PP region or any other linguistic content in that portion of the matrix clause, the PP region was coded as empty (🚫) for analysis.

(22) a. /John doesn't want / to eat out/ 🚫 /, much less/ {on Tuesday / go dancing}, / so I guess / we'll be staying home.
b. /John doesn't want / to eat out on Saturday,/ much less / {on Tuesday / go dancing}, / so I guess / we'll be staying home.
c. /John doesn't want / to eat out at a steakhouse,/ much less / {on Tuesday / go dancing}, / so I guess / we'll be staying home.

The number of characters in PP remnants (M = 16.67, SE = 0.80) matched the VP remnants (M = 17.00, SE = 0.81) in a paired t-test, t(23) = –0.29, as did the distance in characters from the matrix PP in the Compatible PP (on Saturday; M = 15.83, SE = 1.09) and Incompatible PP (at a steakhouse; M = 14.92, SE = 0.75) conditions, t(23) = 0.85. Nevertheless, the statistical models of reading measures on the remnant that are reported below always included region length as a covariate. Participants were instructed to read silently and at their own pace, and were given a short practice session to illustrate the procedure. The reader's head was stabilized with a tower mount of an SR Research Eyelink 1000 eye tracker, which sampled eye movements from the right eye at 1000 Hz. Viewing was binocular. The display monitor was situated 55 cm away from the subject. All items were presented on a single line in 13-point fixed-width (non-proportional) Monaco font on a 21" LCD monitor using a Lenovo computer to display the sentences, so that three characters subtended approximately 1 degree of visual angle. Participants were calibrated before the experiment began with a three-point calibration system, and eye movement drift was corrected manually between each trial. Participants were encouraged to take breaks as often as they wished, and were calibrated if they moved away from the tower or if their fixations became unstable.
A game pad was used to record responses to comprehension questions like (23) appearing after approximately half of the sentences. Comprehension questions contained either yes-no responses or simple forced-choice options. Access to the Internet was turned off on all computers, as were all non-essential programs.

(23) Does John want to stay in instead of going out?
a. Yes
b. No

Sixty UCLA undergraduates were recruited through the Psychology Pool for course credit. If a participant blinked on the remnant region in first pass reading on more than 3 trials for any one condition, she was removed from the data, and another participant was run in her place under the same counterbalancing list. Linguistic history was recorded for all subjects, all of whom self-reported as native speakers of English. All participants had normal or corrected to normal vision.

2.3.3 Results

We report significant results for several standard eye tracking measures: first pass time, the sum of fixation durations from first entering a region until exiting to the left or right; go past time, the duration from first entering a region to first exiting to the right; second pass time, the time spent re-reading a region after having gone past the region previously; and total time, the sum of all fixations in a region at any point in reading. We also report the percentage of regressions out of a region. All statistical models were given the same deviation coding and random effects structures as the naturalness ratings experiment. Following standard practice, models of continuous measures used the Gaussian distribution, whereas models of binomial data were modeled as logistic linear regressions. Prior to analysis, outliers from first fixation and first pass distributions were censored using winsorization, so that scores below the 5th percentile and above the 95th percentile were replaced with the scores at the 5th and 95th percentiles, respectively (Dixon 1960; Tukey 1962).
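The winsorization step can be illustrated with a short sketch in plain Python. This uses a nearest-rank percentile and is offered as an assumption about the general procedure, not as the authors' actual analysis code:

```python
import math

# Minimal sketch, not the authors' code: 5th/95th percentile winsorization,
# replacing scores below/above those percentiles with the percentile values.

def nearest_rank(sorted_vals, p):
    """Nearest-rank percentile; real analyses may interpolate instead."""
    idx = max(0, math.ceil(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[idx]

def winsorize(times, lower=5, upper=95):
    """Clamp each value to the [lower, upper] percentile range."""
    s = sorted(times)
    lo, hi = nearest_rank(s, lower), nearest_rank(s, upper)
    return [min(max(t, lo), hi) for t in times]

# Example: 100 hypothetical reading times from 1 to 100 ms
clipped = winsorize(list(range(1, 101)))
print(min(clipped), max(clipped))  # 5 95
```

Unlike outright trimming, winsorizing keeps every trial in the data set while capping the influence of extreme fixation durations.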
Outliers in go past and total time measures were identified visually and removed, resulting in less than 1% data loss per measure. As mentioned, remnant length was included as a predictor in models of the remnant region. Several models were created for each measure and region, increasing the complexity of the fixed effects factors by adding in trial position (the sequence in the overall experiment) as an additive and then interactive predictor. In the case of go past times, trial position was approximated in terms of first and second halves of the experiment, which resulted in a better model fit. We report the best fitting model defined as the one with the significantly lowest AIC (Akaike 1974) or, in the case of equivalent models, the one with the fewest fixed effect parameters. Only the remnant and the region following are reported for first fixation, first pass, go past times, and regressions out; see Table 4. All regions are reported for regressions in, second pass re-reading, and total times; see Table 5. The linear mixed effects models are presented in Tables 6 and 7. Results are presented in terms of three theoretically significant effects: a PP advantage, a PP Sprouting penalty, and an interaction between remnant type and sprouting. All significant effects are reported.

                               First fixation durations    First pass times
                               Remnant    Spill over       Remnant    Spill over
No Matrix PP     PP Remnant    216 (4)    226 (4)          424 (12)   226 (4)
                 VP Remnant    216 (4)    229 (4)          476 (17)   229 (4)
Compatible PP    PP Remnant    222 (4)    228 (3)          426 (13)   228 (3)
Incompatible PP  PP Remnant    223 (4)    227 (4)          438 (13)   227 (4)

                               Go past times               Regressions out
                               Remnant    Spill over       Remnant    Spill over
No Matrix PP     PP Remnant    584 (23)   620 (26)         20% (3)    4% (1)
                 VP Remnant    592 (25)   572 (25)         14% (2)    4% (1)
Compatible PP    PP Remnant    488 (18)   558 (24)         9% (2)     1% (1)
Incompatible PP  PP Remnant    558 (24)   593 (25)         17% (3)    4% (1)

Table 4: Experiment 1C: Eye tracking. Means and standard errors for first fixation durations, first pass times, go past times, and regressions out on the Remnant and Spill over regions.
Regressions in
                               Subject    PP region   Much less   Remnant   Spill over   Final
No Matrix PP     PP Remnant    17% (3)    —           23% (3)     6% (2)    30% (3)      —
                 VP Remnant    17% (3)    —           15% (3)     4% (1)    31% (3)      —
Compatible PP    PP Remnant    26% (3)    8% (2)      11% (2)     4% (1)    29% (3)      —
                 VP Remnant    24% (3)    6% (2)      18% (3)     5% (2)    31% (3)      —
Incompatible PP  PP Remnant    23% (3)    8% (2)      18% (3)     8% (2)    32% (3)      —
                 VP Remnant    26% (3)    12% (2)     17% (3)     5% (2)    24% (3)      —

Second pass times
No Matrix PP     PP Remnant    113 (20)   —           88 (11)     60 (12)   133 (18)     —
                 VP Remnant    87 (14)    —           63 (10)     41 (9)    113 (15)     —
Compatible PP    PP Remnant    167 (28)   45 (9)      44 (9)      39 (9)    120 (16)     —
                 VP Remnant    150 (22)   46 (8)      55 (8)      45 (10)   113 (14)     —
Incompatible PP  PP Remnant    162 (27)   57 (11)     60 (9)      49 (10)   130 (16)     —
                 VP Remnant    133 (21)   67 (11)     55 (8)      58 (11)   102 (16)     —

Total times
No Matrix PP     PP Remnant    1305 (37)  —           402 (16)    568 (22)  710 (28)     609 (25)
                 VP Remnant    1243 (38)  —           381 (14)    586 (22)  649 (26)     649 (25)
Compatible PP    PP Remnant    1320 (38)  529 (21)    364 (13)    501 (21)  654 (26)     625 (25)
                 VP Remnant    1296 (40)  549 (23)    380 (13)    595 (25)  658 (26)     620 (24)
Incompatible PP  PP Remnant    1351 (41)  513 (22)    364 (12)    549 (20)  680 (26)     620 (25)

Table 5: Experiment 1C: Eye tracking. Means and standard errors for regressions in, second pass, and total times for all regions.

                               First fixation durations          First pass times
                               Estimate   Std. Error   t-value   Estimate   Std. Error   t-value
Remnant
(Intercept)                    248.64     11.15        22.3*     111.38     29.13        3.82*
PP Remnant                     –1.62      2.26         –0.72     –23.26     4.60         –5.06*
Incompatible PP                7.15       3.19         2.24*     12.96      6.47         2.00*
No Matrix PP                   –6.23      3.19         –1.95+    –6.60      6.48         –1.02
Length                         –1.25      0.60         –2.10*    20.44      1.51         13.54*
PP Remnant × Incompatible PP   –4.52      3.19         –1.42     –5.49      6.48         –0.85
PP Remnant × No Matrix PP      1.72       3.20         0.54      2.85       6.48         0.44
Spill over
(Intercept)                    231.96     5.09         45.57*    231.96     5.09         45.57*
PP Remnant                     –0.19      1.98         –0.09     –0.19      1.98         –0.09
Incompatible PP                0.20       2.80         0.07      0.20       2.80         0.07
No Matrix PP                   0.16       2.80         0.06      0.16       2.80         0.06

                               Go past times                     Regressions out
                               Estimate   Std. Error   t-value   Estimate   Std. Error   Wald Z
Remnant
(Intercept)                    178.52     48.52        3.68*     –1.02      0.43         –2.35*
PP Remnant                     –24.27     8.22         –2.95*    0.06       0.12         0.50
Incompatible PP                16.06      11.57        1.39      0.16       0.12         1.38
No Matrix PP                   12.17      11.59        1.05      –0.01      0.09         –0.07
Length                         23.44      2.58         9.09*     –0.06      0.02         –2.58*
PP Remnant × Incompatible PP   –0.88      11.58        –0.08     0.13       0.12         1.11
PP Remnant × No Matrix PP      27.65      11.60        2.38*     0.25       0.12         2.16*
Spill over
(Intercept)                    572.57     40.87        14.01*    –3.61      0.29         –12.33*
Incompatible PP                0.38       11.03        0.03      0.29       0.23         1.25
PP Remnant × No Matrix PP      9.69       11.05        0.88      0.01       0.23         0.03

Table 6: Experiment 1C: Eye tracking. Linear mixed effects regression models for the remnant and spill over regions for first fixation durations, first pass times, go past times, and percentage of regressions out.

                               Regressions in                    Total times
                               Estimate   Std. Error   Wald Z    Estimate   Std. Error   t-value
Subject
(Intercept)                    –1.54      0.18         –8.69*    1308.8     62.71        20.87*
PP Remnant                     –0.01      0.07         –0.14     17.44      11.89        1.47
Incompatible PP                0.15       0.10         1.49      26.15      16.82        1.55
No Matrix PP                   –0.34      0.11         –3.12*    –27.45     16.79        –1.64
PP Remnant × Incompatible PP   –0.09      0.10         –0.82     3.35       16.84        0.20
PP Remnant × No Matrix PP      0.01       0.11         0.11      3.19       16.80        0.19
PP Region
(Intercept)                    –2.48      0.62         –4.00*    98.32      42.97        2.29*
Length                         –0.02      0.04         –0.59     28.21      2.50         11.28*
Much less
(Intercept)                    –1.89      0.17         –11.4*    374.67     13.54        27.68*
Incompatible PP                0.14       0.11         1.21      –11.02     6.88         –1.60
No Matrix PP                   0.02       0.08         0.25      14.47      6.87         2.11*
PP Remnant × No Matrix PP      0.28       0.11         2.50*     9.95       6.88         1.45
Remnant
(Intercept)                    –3.09      0.21         –14.82*   116.00     46.98        2.47*
No Matrix PP                   –0.12      0.20         –0.60     6.43       10.43        0.62
Length                         —          —            —         26.98      2.42         11.14*
PP Remnant × No Matrix PP      0.15       0.19         0.79      23.38      10.44        2.24*
Spill over
(Intercept)                    –1.10      0.23         –4.85*    664.8      47.28        14.06*
PP Remnant                     0.06       0.07         0.79      13.37      7.91         1.69
Incompatible PP                –0.09      0.10         –0.88     –5.93      11.19        –0.53
No Matrix PP                   0.06       0.10         0.59      12.03      11.18        1.08
PP Remnant × Incompatible PP   0.19       0.10         1.92+     2.20       11.21        0.20
PP Remnant × No Matrix PP      –0.06      0.10         –0.62     12.33      11.19        1.10
Final
(Intercept)                    —          —            —         612.5      35.75        17.13*
PP Remnant                     —          —            —         0.63       7.47         0.08
Incompatible PP                —          —            —         –12.27     10.55        –1.16
No Matrix PP                   —          —            —         12.50      10.55        1.18
PP Remnant × Incompatible PP   —          —            —         12.54      10.57        1.19
PP Remnant × No Matrix PP      —          —            —         –17.11     10.56        –1.62

Table 7: Experiment 1C: Eye tracking. Linear mixed effects regression models for regressions in and total times.

2.3.3.1 PP advantage

Compatible with the results from the norming studies, an advantage for PP remnants over VP remnants appeared on the remnant region for first pass (a 57 ms advantage), go past (a 64 ms advantage), and total times (also a 64 ms advantage); see the top left panel of Figure 2. We propose that PP remnants were faster in these measures because the matrix PP presents a highly salient constituent to contrast with the remnant, as implied by the completion norming study. Given the early, and persistent, advantage for PP remnants, the processor might have initially anticipated a PP remnant upon encountering the much less coordinator. We take the PP advantage as the statistical reference level against which to compare the case of Sprouting and Incompatible PPs.

Figure 2: Experiment 1C. Top left panel: PP Remnant advantage on the remnant region for first pass, go past, and total time measures. Remaining panels: Interaction between Remnant type and Matrix clause structure showing that the advantage for PP remnants is reduced or overturned in No Matrix PP conditions.

2.3.3.2 PP Sprouting penalty

Crucially, the predicted exception to the advantage for PP remnants occurred when there was no PP in the matrix clause for the remnant to contrast with, i.e., just in the case of PP Sprouting, shown in Figure 2. The advantage for PP remnants was either eliminated (as in go past and total times) or else reversed (manifesting in increased regressions out of a region) when a PP remnant had to be paired with a correlate sprouted from the matrix clause.
Although there was a PP advantage in go past times on the remnant region (a 119 ms advantage for Compatible PP remnants and a 65 ms advantage for Incompatible PP remnants), the pattern reversed to an 8 ms PP penalty in the No Matrix PP conditions, t = 2.38. Similarly, in total times on the remnant region, a 94 ms advantage for Compatible PP remnants and an 80 ms advantage for Incompatible PP remnants reversed to an 18 ms penalty for PP remnants compared to VP remnants in the No Matrix PP condition, t = 2.24. In keeping with the predicted penalty for Sprouting, the percentage of regressions out of the remnant was modulated by the presence of a matrix PP: although regressions increased by 8% when the remnant was a VP in Compatible PP conditions, there was a 6% increase in regressions on PP remnants in the No Matrix PP condition, z = 2.16, p < 0.05.

2.3.3.3 Incompatible PP penalty

A model with the first and second halves of the experiment included as an interactive predictor provided a better fit of the go past data on the remnant than the other models computed. In this model, the interaction between Remnant type and the Incompatible PP condition was significantly reduced in the second half (a 13 ms penalty for Incompatible PP remnants over VP remnants) compared to the first half of the experiment (a 73 ms penalty for Incompatible conditions), t = –2.66, suggesting that participants might have either begun broadening the contextual dimensions along which the contrasts were to be compared, or else adopted a different reading strategy, allowing them to progress through the sentences more quickly. There were no other indications of processing costs associated with Incompatible PPs.

2.3.4 Discussion

The results indicate three effects of primary interest.
First, the processor appeared to encounter less processing difficulty on PP remnants than on VP remnants, suggesting an overall preference to form a contrast with the immediately preceding PP rather than the matrix VP that contained it. This interpretation is supported by the results of the completion norming study, in which PP contrasts were in general supplied at a much greater rate than any other contrast. Second, the PP advantage failed to hold in the case of PP Sprouting, where PP remnants elicited a processing cost in multiple measures (go past, total times, and regressions out) immediately on the remnant region itself. As predicted by the Parallel Contrast Principle, parallelism between the matrix and ellipsis clause appeared to facilitate processing, even though the conditions with PPs in the first clause required the processor to form an ad hoc scale between the remnant and its correlate. Third, PP remnants that formed a semantically incompatible relation with their PP correlate were penalized in go past times on the remnant, but only in the first half of the experiment. This pattern suggests that participants were initially sensitive to the meaning incongruence of PPs relating to different aspects of the situation. Later in the study, they appeared to adopt a reading strategy of ignoring this minor mismatch, most likely due to repeated exposure to examples as the experiment wore on. This is an important conceptual control, as it confirms that subjects were attuned to the implied relationships between correlate-remnant pairs, rather than simply finding a correlate for the remnant supplied without consideration of its meaning. If readers were taking the Incompatible PPs to be part of the antecedent clause that would need to be copied, but not contrasting directly with the remnant PPs, then this condition should have patterned with the No Matrix PP condition: both would involve no contrasting correlate at all for the remnant PP. 
The slight dispreference instead suggests that participants did consider the PPs in different clauses to be potential correlates, albeit not as parallel as they might have liked, and that they adjusted over the course of the experiment to the mismatch in semantic content. However, as all of the sentences were presented without context, it is possible that the penalty attributed to PP sprouting is due instead to the introduction of a new referential DP contained within the PP remnant. This possibility is addressed in the following eye tracking experiment by adding preceding contexts that either mentioned the PP, thus making the remnant more accessible in the discourse, or did not. If violating parallelism is the central reason behind the penalty for PP sprouting in the studies above, then sprouting should continue to be costly, regardless of context.

3 Experiment 2: PP sprouting in context

3.1 Experiment 2A: Completion study with context

This norming experiment had the same task as Experiment 1A, except that the target sentence fragments were preceded by contexts that either supported a PP remnant (24a) or were neutral (24b). As before, we manipulated whether there was a PP in the matrix clause (outside the state) that could serve as a possible correlate.

(24) a. Supporting context: It surprised his friends that Oliver was about to take a long vacation outside the country.
b. Neutral context: It surprised his friends that Oliver was showing a new interest in traveling.
Target sentence: He hadn't traveled (outside the state), much less _________.

Forty-two participants completed a completion norming study for course credit over the Internet. Five participants identified as non-native speakers of English and were removed from the analysis, as were six participants who answered highly predictable catch items incorrectly. Three more participants were removed for counterbalancing purposes, leaving 28 participants in the final dataset, who contributed 560 completions in total.
Twenty-two ambiguous, unclear, or nonsensical completions were removed. In the remaining completions, sprouting of any category appeared in 12% of cases, of which 60% were PP remnants. When the target contained a PP correlate in the matrix clause, there was only one instance of sprouting of any kind. However, when the target did not contain a PP correlate, there were more PP sprouting completions, regardless of context (0% vs. 15%). There was also an interaction: when there was no PP correlate, Supporting contexts elicited nearly twice as many PP sprouting completions as Neutral contexts (19% vs. 11%). The completion study indicates that contexts meant to induce or facilitate PP sprouting indeed resulted in more PP sprouting, even though sprouting was still avoided in general.

3.2 Experiment 2B: PP sprouting in context

3.2.1 Materials and methods

Twenty quartets like (25) were created in a design that crossed Context (Supporting, Neutral) and PP contrast (Matrix PP, No Matrix PP). In all items, the target sentences were derived from sentences in the first eye tracking experiment. A complete list of contexts and items appears in Appendix B. There were no incompatible PPs in this study, so all correlate PPs were on the same semantic scale as the remnants, and there were no VP remnants. The context conditions were minimally different from each other, varying mostly in whether the remnant in the target sentence was overtly mentioned: it appeared in the Supporting contexts but not the Neutral ones.6 Materials were presented on two lines, so that the target sentence always appeared on its own line. Regions used in analysis of the target sentence are demarcated with a slash (/) symbol.

(25) Target sentence: He hadn't traveled / (outside the state), / much less / outside the country, / until he met / his wife.

Items were interspersed with 66 items from unrelated experiments and 20 non-experimental fillers, for a total of 106 items per experimental session.
Comprehension questions like (26) appeared after approximately half of the materials.

(26) Who was surprised at Oliver's interest in travel?
a. His friends
b. His family

Fifty-six subjects participated in the experiment, using the same recruitment and exclusion criteria described in the previous eye tracking experiment (Experiment 1C). The data cleaning and analysis procedure from Experiment 1C was used in the present experiment, except that conditions were sum coded so that the Supporting context and Matrix PP conditions were treated as the statistical reference levels. Means and standard errors are reported in Tables 8 and 9. The results are presented in terms of three theoretically significant effects: Context, PP Sprouting, and their interaction. Reading behavior on the context sentence was not examined. As before, only the remnant and the immediately following spill over region are reported for first fixation durations, first pass times, go past times, and regressions out; see Table 8 for means and Table 10 for statistical models. All regions are reported for measures involving re-reading of a region, regressions in, second pass, and total times; see Table 9 for means and Tables 11–12 for statistical models. All significant effects are reported.

First fixation (remnant / spill over); First pass (remnant / spill over):
Supporting, Matrix PP: 216 (5) / 228 (5); 340 (14) / 465 (21)
Supporting, No Matrix PP: 217 (4) / 228 (5); 352 (14) / 433 (18)
Neutral, Matrix PP: 222 (5) / 229 (5); 346 (13) / 438 (17)

Go past (remnant / spill over); Regressions out (remnant / spill over):
Supporting, Matrix PP: 415 (21) / 531 (31); 12% (3) / 6% (2)
Supporting, No Matrix PP: 433 (22) / 480 (26); 16% (3) / 5% (2)
Neutral, Matrix PP: 433 (20) / 506 (31); 15% (3) / 3% (1)

Table 8: Experiment 2B: Eye tracking in context. Means and standard errors for first fixation durations, first pass times, go past times, regressions out.
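The sum coding scheme described above can be sketched as follows. This is a minimal illustration under an assumption the paper does not state, namely ±0.5 contrast codes with the reference levels (Supporting, Matrix PP) receiving the negative code; the paper reports only which levels served as the statistical reference levels.

```python
# Hypothetical sketch of sum coding for the 2x2 design
# (Context: Supporting/Neutral crossed with Matrix PP/No Matrix PP).
# The +/-0.5 codes are an assumption; the source states only that
# Supporting and Matrix PP were the statistical reference levels.

def sum_code(level, reference):
    """Return -0.5 for the reference level, +0.5 otherwise."""
    return -0.5 if level == reference else 0.5

conditions = [("Supporting", "Matrix PP"),
              ("Supporting", "No Matrix PP"),
              ("Neutral", "Matrix PP"),
              ("Neutral", "No Matrix PP")]

design = {}
for context, pp in conditions:
    c = sum_code(context, "Supporting")    # Context predictor
    p = sum_code(pp, "Matrix PP")          # PP Sprouting predictor
    design[(context, pp)] = (c, p, c * p)  # interaction = product of codes

for cond, codes in design.items():
    print(cond, codes)
```

With this coding, each predictor sums to zero across the four cells, so main effects are estimated at the grand mean rather than at a baseline cell.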
Regressions in, by region (Subject; Matrix PP; Much less; Remnant; Spill over; Final):
Supporting, Matrix PP: 13% (3); 11% (3); 15% (3); 1% (1); 34% (4); —
Supporting, No Matrix PP: 11% (3); —; 19% (4); 6% (2); 34% (4); —
Neutral, Matrix PP: 18% (3); 8% (2); 17% (3); 2% (1); 32% (4); —
Neutral, No Matrix PP: 13% (3); —; 20% (4); 5% (2); 29% (4); —

Second pass times, by region:
Supporting, Matrix PP: 110 (31); 50 (12); 41 (11); 17 (6); 124 (19); —
Supporting, No Matrix PP: 96 (27); —; 62 (15); 53 (14); 126 (17); —
Neutral, Matrix PP: 118 (22); 41 (10); 52 (10); 35 (10); 107 (17); —
Neutral, No Matrix PP: 123 (27); —; 77 (15); 49 (13); 110 (20); —

Total times, by region:
Supporting, Matrix PP: 1149 (47); 456 (21); 368 (17); 385 (19); 607 (32); 498 (26)
Supporting, No Matrix PP: 1095 (45); —; 351 (20); 439 (22); 567 (28); 484 (31)
Neutral, Matrix PP: 1102 (51); 459 (21); 353 (15); 431 (21); 544 (25); 488 (24)
Neutral, No Matrix PP: 1106 (48); —; 371 (22); 496 (31); 634 (36); 541 (34)

Table 9: Experiment 2B: Eye tracking in context. Means and standard errors for regressions in, second pass times, total times.

[Table 10: Experiment 2B: Eye tracking in context. Linear mixed effects models for first fixation durations, first pass times, go past, and regressions out.]
[Tables 11–12: Experiment 2B: Eye tracking in context. Linear mixed effects regression models for regressions in and total times, by region (Subject, Matrix PP, Much less, Remnant, Spill over, Final).]

By-subjects and by-items ANOVAs for second pass times (F1, p1; F2, p2):
Subject region — Context: 0.73, 0.40; 0.44, 0.52. Matrix PP: 0.22, 0.64; 0.00, 0.98. Context × Matrix PP: 0.00, 0.96; 0.02, 0.88.
Matrix PP region — Context: 0.17, 0.68; 0.18, 0.67.
Much less region — Context: 0.05, 0.83; 1.16, 0.30.
Remnant region — Context: 0.03, 0.86; 0.28, 0.61. Matrix PP: 4.49, <0.05; 4.35, <0.05.
Spill over region — Context: 1.16, 0.29; 0.37, 0.55.

Table: Experiment 2B: Eye tracking in context. By-subjects and by-items ANOVAs for second pass times.

3.2.3.1 Context effects

There was a marginal 18 ms penalty for Neutral contexts in first pass times on the remnant, t = 1.83, p = 0.06. First pass times are sometimes divided into trials in which the reader elects to regress back in the text and those in which she elects to move forward, as the two may represent distinct processing strategies for progressing through text (Altmann, Garnham & Dennis 1992; Rayner & Sereno 1994). Upon reaching a difficult portion of text, the reader may decide to return to previous regions, perhaps to resolve an ambiguity or to stall for more time (e.g., Mitchell et al. 2008).
An alternative strategy in such cases is to continue to move forward, perhaps in hopes of finding useful information later in the sentence. Once trials with first pass regressions out of the remnant region were eliminated (leaving 34% of the observations from the total first pass data), there was a 44 ms cost for Neutral contexts, t = 2.69, which was moderated by an interaction, described in 3.2.3.3 below. A 51 ms penalty for Neutral contexts in total times, t = 2.39, was also observed. The cost for Neutral contexts indicates that subjects were attuned to the preceding sentence at the point of the remnant and that the Supporting contexts did successfully support the target sentences; see the left panel of Figure 3.

Figure 3: Experiment 2B. Left panel: Elongated reading times for Neutral compared to Supporting contexts in first pass and total time measures in the remnant region. Right panel: Reading time penalty for PP Sprouting conditions in first pass, second pass, and total time measures on the remnant.

3.2.3.2 PP sprouting cost

As in the previous experiment, sprouting in the No Matrix PP conditions was found to be costly immediately on the remnant region, regardless of preceding context, as shown in the right panel of Figure 3. In first pass times, readers spent 24 ms longer in No Matrix PP conditions, t = 2.10, indicating an early cost for PP sprouting. However, the cost for sprouting manifested primarily in measures of re-reading. Readers spent nearly twice as long in second pass re-reading measures on the remnant in the No Matrix PP condition (M = 51, SE = 10) compared to the Matrix PP condition (M = 26, SE = 6) in by-subjects ANOVAs, but not in by-items analyses. Further, PP Sprouting conditions (M = 464, SE = 19) elicited longer total reading times on the remnant compared to Matrix PP conditions (M = 413, SE = 14), t = 2.66.
Finally, readers made more regressions into the remnant region in the No Matrix PP condition (M = 6%, SE = 1) compared to the Matrix PP condition (M = 2%, SE = 1), z = 2.73. The results indicate that readers encountered immediate and sustained difficulty when they were presented with a PP remnant but no corresponding correlate, forcing them to sprout a PP.

3.2.3.3 PP sprouting in context

While not central to the main hypotheses, we expected that Supporting contexts would facilitate recovery from any difficulty due to sprouting a PP argument. No interactions were observed in first fixation or first pass times. However, first pass times in which regressions out were eliminated indeed showed an interaction [β̂ = –37.26, SE = 9.41, t = –3.96, p < 0.001], in which Supporting contexts reduced reading times on the remnant in No Matrix PP conditions by 128 ms (p's < 0.05 in by-subjects and by-items t-tests), but had no effect on Matrix PP controls, as shown in the left panel of Figure 4. No interaction was observed for first pass times followed by regressions out of the remnant, where there was only a general facilitation for Supporting contexts [β̂ = –20.41, SE = 10.04, t = –2.03, p < 0.05]; see the right panel of Figure 4.
The differential facilitation for PP Sprouting in Supporting contexts was therefore observed only when the reader made a forward progression through the sentence, a pattern which is perhaps related to how eager readers were to integrate the preceding context into the sentence on a particular trial.

Figure 4: Experiment 2B. Regression contingent analyses of first pass times on the remnant, shown as centered z-scores. Left panel: First pass times in trials with regressions out of the remnant region showed a differential effect of context on No Matrix PP conditions, but no effect on Matrix PP conditions. Right panel: First pass times in trials without regressions out of the remnant region showed an advantage for Supporting contexts.

Other measures showed weak evidence in favor of a reduced PP sprouting penalty in Supporting contexts. Although not the best fitting model, an interaction consistent with a PP sprouting penalty was observed in go past times on the post remnant spill over region once trial order was included as an interactive predictor; there was a 57 ms cost for the No Matrix PP condition in Neutral contexts, but a 51 ms advantage for PP Sprouting in Supporting contexts [β̂ = 66.36, SE = 29.67, t = 2.24], along with a trend indicating that the interaction reduced over the course of the experiment, t = –1.69. Finally, there was a non-significant trend for an interaction between Sprouting and Context for total times in the spill over region: while there was a 90 ms cost for PP sprouting in Neutral contexts, there was a 40 ms benefit for PP sprouting in Supporting contexts, t = 1.79.
The results confirm the findings from the previous eye tracking experiment: there was a reading time penalty for PP remnants that lacked an overt correlate, as predicted by the Parallel Contrast Principle. Although the cost for PP sprouting appeared in a variety of measures regardless of the context, it was reduced in first pass times in trials without regressions out from the region, as well as marginally reduced in go past times on the remnant, and in total times on the spill over region. The overall pattern thus suggests that readers were sensitive to information from the context, but that context was not sufficient to completely override the cost of sprouting a correlate for the remnant. The results support the claim that the processor relies on parallelism to pair a remnant and a correlate, rather than abandoning the search for a correlate and attempting to compute an entailment relation between clauses whenever possible. The alternative hypothesis according to which the apparent sprouting penalty in Experiment 1C was solely driven by accommodating a new referent is not supported by these results.

Two norming studies presented initial evidence that PP sprouting is dispreferred in focus-sensitive coordination. In the no-context completion study, participants very rarely supplied a PP as a remnant after the much less coordinator, unless there was a PP present in the preceding clause. This completion study also indicated that non-sprouting PP contrast in focus-sensitive coordination is quite acceptable, as sentences with a PP in the initial clause were completed with PPs in the remnant 80% of the time. In the rating study, conditions with sprouted PPs were rated lower than those with paired contrasting PPs or those with VP remnants. In two eye tracking while reading studies, we found that PP sprouting also interferes with the on-line processing of focus-sensitive coordination sentences.
In the first eye tracking study, a processing advantage for PP remnants over VP remnants was reversed in the case of PP sprouting. As long as a PP was present in the initial clause, even if the PPs evoked a different scalar dimension, the PP remnant was comparatively easy to process. Sentences in which the PP correlate and the PP remnant were incompatible showed later delays in processing only during the first half of the experiment, after which participants apparently habituated to the mismatching scalar dimensions. In a second eye tracking study, where supportive or neutral contexts preceded target sentences with or without a correlate PP, sprouting conditions still elicited slower reading on the remnant and increased re-reading compared to conditions with an overt correlate in the matrix clause. Even though supporting contexts facilitated recovery from the PP sprouting penalty, they did not eliminate the independent penalty for violating parallelism in sprouting. These results all support the Parallel Contrast Principle over the Scalar Advantage Principle. Whether off-line or on-line, processing focus-sensitive coordination structures is facilitated when an overt PP correlate is present in the initial clause, even though the processor must also construct an ad hoc scale on which to compare the correlate and the remnant. This puts the processing of focus-sensitive coordination structures on a par with sluicing and other ellipsis structures in preferring parallelism between the remnant and correlate (Chung et al. 1995; Frazier & Clifton 1998; Carlson 2001, 2002). These results further dovetail with the research on zero-adjective contrasts in focus-sensitive coordination (Carlson & Harris 2018), in which DP remnants with adjectives elicited processing costs when the correlate DP did not contain a contrasting adjective (e.g., The chef didn't overcook a (complex) meal, let alone an easy one).
These results relate to an important debate about the status of parallelism in sentence processing, and whether it is specific to ellipsis or conjunction or both. Prior research has routinely found that parallelism of different types eases and speeds up the processing of different types of ellipsis, including sluicing (Frazier & Clifton 1998; Carlson et al. 2009), gapping (Carlson 2001, 2002), and stripping/bare argument ellipsis (Paterson et al. 2007; Stolterfoht et al. 2007; Carlson 2013). Additional research has observed parallelism effects in conjoined structures (Frazier et al. 1984; 2000; Henstra 1996; Sturt et al. 2010). But some studies have found that parallelism facilitates processing in unconjoined and unelided structures, as well (e.g., Sturt et al. 2010; Dickey & Bunger 2011). It seems telling to us that so many ellipsis structures, which demand reuse or copying of earlier material, should turn out to be sensitive to additional similarities in the syntactic, semantic, and prosodic features of the clauses. Focus-sensitive coordination structures, headed by conjunctions like much less and let alone, are an especially important addition to research on parallelism in sentence processing. They provide a case in which violating parallelism by sprouting could have conferred a processing advantage by removing the need to construct an ad hoc scale between the correlate and the remnant, a process that could arguably require additional resources to compute or be delayed in comprehension (Levinson 2000; Chierchia 2004; but see also Hirschberg 1985; Sperber & Wilson 1995; Katsos & Cummins 2012). 
Nevertheless, the studies above found that avoiding ad hoc scales does not ease processing, at least if it comes at the cost of violating parallelism between clauses.7 The construction of a scalar relation between clauses in focus-sensitive coordination is still important to the processing of these constructions, as indicated by the cost for Incompatible PP remnants observed in the first eye tracking experiment. However, we suggest that the computation of such scales is simply delayed until after a basic clausal meaning has been constructed. The importance of parallelism follows from the conceptual steps articulated in (10) above, in which the processor must locate an appropriate correlate for the remnant before an appropriate scale can be inferred. Indeed, a promising avenue of research in this vein would be to explore whether some scales are more readily accessible than others, and if so, whether they facilitate comprehension of coordination structures that require such scales during interpretation. For now, we believe that there is strong evidence that the processor prioritizes basic structure building processes when processing sentences with ellipsis, and that parallelism, which helps the processor build structure at the ellipsis site, is a particularly powerful component in recovering the intended meaning and structure, where there was once only silence.

Appendices A and B contain the full set of experimental items for Experiments 1–2. DOI: https://doi.org/10.5334/gjgl.707.s1

1 The type of parallelism studied in this project involves similarities between remnants and correlates, and is not the type of parallelism that relates to the syntactic and/or semantic identity condition allowing ellipsis (e.g., Merchant 2001, 2008; Takahashi & Fox 2005; Griffiths & Lipták 2014). We do not intend to enter into the debate about the presence and size of structure within an ellipsis site. See section 1.2 for more general discussion of parallelism.
2 While most varieties of English license focus-sensitive coordination in the presence of explicit or implicit negation, some dialects permit a positive variant, e.g., I can swim, let alone float. This variant tends to either reverse the scalar relationship (see Mark Liberman's commentary on Language Log, November 21, 2007, accessible at http://itre.cis.upenn.edu/~myl/languagelog/archives/005142.html, and comments in Toosarvandani 2010), or else abandon the scalar component altogether, similar to an afterthought along the lines of not to mention (Cappelle, Dugas & Tobin 2015). We concentrate exclusively on the majority dialect here, in which there is a strong scalar component to its interpretation.

3 Although it is conceptually possible that the processor forgoes retrieving the correlate in step 2, and simply posits a parallel structure at the ellipsis site, we think this is unlikely given evidence for similarity-based interference, characteristic of retrieval systems, from non-correlate distractor nouns in sluicing (Harris 2015, 2019).

4 A reviewer suggests re-describing the entailment relation in terms of a set inclusion between properties denoted by the verb phrases. We continue to follow previous research (Fillmore et al. 1988; Toosarvandani 2010) in describing the scale in terms of entailment between propositions for several reasons. First, propositions standardly denote sets of worlds, in which entailment between propositions p ⊨ q can be equivalently stated in terms of inclusion between the set of worlds that make p true and the set of worlds that make q true. Second, and more importantly, there are instances of focus-sensitive coordination in which subjects contrast (John didn't laugh, let alone Mary), which could not be captured by set inclusion of the verb phrases. Still, the use of entailment is intended to be descriptive, and other, perhaps more general, semantic characterizations could be explored in the future.
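The equivalence noted in footnote 4, between propositional entailment and inclusion among sets of worlds, can be sketched as follows; the world labels and proposition denotations here are hypothetical illustrations, not material from the paper.

```python
# Minimal sketch: propositions denote sets of worlds, and p entails q
# (p |= q) iff the set of worlds making p true is included in the set
# of worlds making q true.

def entails(p, q):
    """p |= q iff every world in which p is true also makes q true."""
    return p <= q  # set inclusion

# Hypothetical worlds: w1 = traveled abroad, w2 = traveled only within
# the country but outside the state, w3 = never traveled.
traveled_outside_country = {"w1"}
traveled_outside_state = {"w1", "w2"}

# The stronger proposition entails the weaker one, not vice versa,
# mirroring the scale in the experimental items.
print(entails(traveled_outside_country, traveled_outside_state))
print(entails(traveled_outside_state, traveled_outside_country))
```

This is why much less orders its clauses the way it does: the remnant clause (traveling outside the country) asymmetrically entails the correlate clause (traveling outside the state).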
5 Half of the items contained an additional DP in the matrix clause, e.g., books in Melinda doesn't read books (for pleasure/at work), which provided a suitable correlate for DP remnants. As shown in Table 1, participants sometimes provided DP completions, but these do not constitute DP sprouting when they contrasted with a DP correlate in the matrix clause.

6 Three of the contexts differed from the others. For items 1–3, the Supporting contexts included both the correlate and remnant PPs instead of only the remnant PP. For item 3, the Neutral context also included the remnant PP. The rest of the contexts matched the description above.

7 In determining whether SAP is independently motivated, a reviewer raised an interesting contrast between a structure with an ad hoc scale (i.a), in which studying chemistry is harder or less expected than studying carpentry, and a structure conveying entailment between clauses (i.b), in which not studying anything entails not studying chemistry. Parallelism, in our sense, is satisfied in both sentences. It is possible that with parallelism held constant, a preference for SAP would emerge. Our intuitions are that the ease of interpretation depends on whether the context licenses the antecedent clause in (i.b), which sounds, to our ears, odd without additional context, such as The students were exhausted during finals week.

(i) a. Michael couldn't study carpentry, much less chemistry.
b. Michael couldn't study anything, much less chemistry.

We found similar cases in COCA, with any+N as a correlate for more specific DP remnants, especially with adjectival contrasts (see discussion in Carlson & Harris 2018); the examples in (ii) illustrate.

(ii) a. "I couldn't picture my Grandma as someone responsible for the death of anything, much less her best friend at the age of 16."
b. "But the idea that Susan owed anything to anyone – much less her cousin's new husband – was intolerable."
At any rate, the motivation for SAP remains conceptual, and we believe the finding that a conceptually plausible benefit for computing ready-made relations does not outweigh the general preference for parallel structures reveals the strength of parallelism biases during sentence processing.

Abbreviations: AIC = Akaike information criterion, cm = centimeters, COCA = Corpus of Contemporary American English, CP = complementizer phrase, diff = difference, FocP = focus phrase, Hz = Hertz, LCD = liquid crystal display, LF = Logical Form, M = mean, ms = milliseconds, NP = noun phrase, PCP = Parallel Contrast Principle, PP = prepositional phrase, SAP = Scalar Advantage Principle, SE = standard error, UCLA = University of California, Los Angeles, VP = verb phrase

Acknowledgements: The authors would like to thank Jack Atherton, Jenny Chim, Aura Heredia Cruz, Reuben Garcia, Angela Howard, Samantha Jew, Lexi Loessberg-Zahl, Shayna Lurya, Caitlyn Wong Pickard, Ian Rigby, and Karina Ruiz for assistance running the eye tracking experiments. Portions of this research have been presented at a UC San Diego colloquium, a UMass Psycholinguistics Workshop, and the 29th CUNY Human Sentence Processing Conference; we thank the audiences for their comments and questions, especially Chuck Clifton for suggesting Experiment 2. The research reported in this publication was partially supported by the Eunice Kennedy Shriver National Institute of Child Health & Human Development of the National Institutes of Health under grant number R15HD072713 and an Institutional Development Award from the National Institute of General Medical Sciences of the National Institutes of Health under grant number 5P20GM103436-13. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

Akaike, Hirotugu. 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control 19. 716–723.
DOI: https://doi.org/10.1109/TAC.1974.1100705 Altmann, Gerry T. M., Alan Garnham & Yvette Dennis. 1992. Avoiding the garden path: Eye movements in context. Journal of Memory and Language 31. 685–712. DOI: https://doi.org/10.1016/0749-596X(92)90035-V Cappelle, Bert, Edwige Dugas & Vera Tobin. 2015. An afterthought on let alone. Journal of Pragmatics 80. 70–85. DOI: https://doi.org/10.1016/j.pragma.2015.02.005 Carlson, Greg N. & Michael K. Tanenhaus. 1988. Thematic roles and language comprehension. Syntax and Semantics 21. 263–288. Carlson, Katy. 2001. The effects of parallelism and prosody on the processing of gapping structures. Language and Speech 44. 1–26. DOI: https://doi.org/10.1177/00238309010440010101 Carlson, Katy. 2002. Parallelism and prosody in the processing of ellipsis sentences. New York: Routledge. Carlson, Katy. 2013. The role of only in contrasts in and out of context. Discourse Processes 50. 249–275. DOI: https://doi.org/10.1080/0163853X.2013.778167 Carlson, Katy & Jesse A. Harris. 2017. Zero-Adjective contrast in much-less ellipsis: The advantage for parallel syntax. Language, Cognition, and Neuroscience 3. 77–97. DOI: https://doi.org/10.1080/23273798.2017.1366530 Carlson, Katy, Lyn Frazier & Charles Clifton, Jr. 2009. How prosody constrains comprehension: A limited effect of prosodic packaging. Lingua 119. 1066–1082. DOI: https://doi.org/10.1016/j.lingua.2008.11.003 Carlson, Katy, Michael Walsh Dickey & Christopher Kennedy. 2005. Structural economy in the processing and representation of gapping sentences. Syntax 8. 208–228. DOI: https://doi.org/10.1111/j.1467-9612.2005.00079.x Carlson, Katy, Michael Walsh Dickey, Lyn Frazier & Charles Clifton, Jr. 2009. Information structure expectations in sentence comprehension. The Quarterly Journal of Experimental Psychology 62. 114–139. DOI: https://doi.org/10.1080/17470210701880171 Chierchia, Gennaro. 2004. Scalar implicatures, polarity phenomena, and the syntax/pragmatics interface. 
In Adriana Belleti (ed.), Structures and beyond, 39–103. Oxford: Oxford University Press. Chung, Sandra, William A. Ladusaw & James McCloskey. 1995. Sluicing and logical form. Natural Language Semantics 3. 239–282. DOI: https://doi.org/10.1007/BF01248819 Chung, Sandra, William A. Ladusaw & James McCloskey. 2011. Sluicing (:) between structure and inference. In Rodrigo Gutiérrez-Bravo, Line Mikkelsen & Eric Potsdam (eds.), Representing language: Essays in honor of Judith Aissen, 31–50. Santa Cruz, CA: UCSC Linguistics Research Center. Culicover, Peter W. & Ray S. Jackendoff. 2005. Simpler syntax. Oxford: Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199271092.001.0001 Dalrymple, Mary, Stuart M. Shieber & Fernando Pereira. 1991. Ellipsis and higher-order unification. Linguistics and Philosophy 14. 399–452. DOI: https://doi.org/10.1007/BF00630923 Davies, Mark. 2008. The corpus of contemporary American English: 520 million words, 1990-present. Retrieved from http://corpus.byu.edu/coca/. Dickey, Michael Walsh & Ann C. Bunger. 2011. Comprehension of elided structure: Evidence from sluicing. Language and Cognitive Processes 26. 63–78. DOI: https://doi.org/10.1080/01690961003691074 Dixon, Wilfrid J. 1960. Simplified estimation from censored normal samples. Annals of Mathematical Statistics 31. 385–391. DOI: https://doi.org/10.1214/aoms/1177705900 Ferreira, Fernanda, Karl G. D. Bailey & Vittoria Ferraro. 2002. Good-enough representations in language comprehension. Current Directions in Psychological Science 11. 11–15. DOI: https://doi.org/10.1111/1467-8721.00158 Fillmore, Charles J., Paul Kay & Mary Catherine O'Connor. 1988. Regularity and idiomaticity in grammatical constructions: The case of let alone. Language 64. 501–538. DOI: https://doi.org/10.2307/414531 Frazier, Lyn. 2008. Processing ellipsis: A processing solution to the undergeneration problem. In Charles B. Chang & Hannah J. 
What is the largest body in the solar system we could meaningfully and accurately adjust the orbit of?

There is a lot of science fiction and emerging science that moves comets and asteroids as part of the main plot. Pretty much everything in our solar system is in orbit around the Sun, or in orbit around an object orbiting the Sun. If you want something to be somewhere else, in essence you change its solar orbit to match (or collide) with your desired location. We have several man-made bodies that we have placed in lots of different orbits; we have even caused some to leave the solar system. Given our current (2015) tools and knowledge, what is the largest object we could meaningfully and accurately adjust the orbit of? Where "meaningfully and accurately" = bringing into a given orbit (specific and calculated) around any body that it is not currently orbiting, in a timely fashion (i.e. 5 years or less). Of course the biggest challenge is getting your tools and knowledge away from Earth and to the body you want to adjust the orbit of; that is a matter of economics. Assuming you have the budget to get what you want off of Earth, what is the biggest thing you could move accurately?

orbital-mechanics propulsion solar-system asteroid-redirect-mission Cornelisinspace

I'm wondering if this still might be too broad. It depends on whether it is possible to be clear about what a meaningful, accurate change is. Maybe there are some large things we could change the orbit of by a few meters, such that in a few centuries it would be likely to hit something specific – but most such opportunities are in places where their orbits are heavily influenced by a lot of other things, and accurate orbit prediction down the road is pretty hard. – kim holder wants Monica back May 8 '15 at 18:59

"2015 tools and knowledge" is very vague; by my interpretation of it, the answer is "a very small artificial satellite".
– Russell Borogove May 8 '15 at 19:06

A body so small that adjusting its orbit might be possible would be too small to be detected with telescopes from Earth or by telescopes in an Earth orbit. – Uwe Apr 3 '17 at 15:42

If "meaningful" means measurable, then seeing a half-percent change in the period of a tiny double asteroid at a few AU is pretty close to as big as possible. @PearsonArtPhotos' question Using DART to measure G came to mind when I came across this "classic" question, so let's connect the dots. From the Double Asteroid Redirection Test (DART) Mission page:

DART will be the first demonstration of the kinetic impact technique to change the motion of an asteroid in space. The DART mission is in Phase B, led by JHU/APL and managed by the Planetary Missions Program Office at Marshall Space Flight Center for NASA's Planetary Defense Coordination Office. DART is a planetary defense-driven test of one of the technologies for preventing the Earth impact of a hazardous asteroid: the kinetic impactor. DART's primary objective is to demonstrate a kinetic impact on a small asteroid. The binary near-Earth asteroid (65803) Didymos is the target for DART. While Didymos' primary body is approximately 800 meters across, its secondary body (or "moonlet") has a 150-meter size, which is more typical of the size of asteroids that could pose a more common hazard to Earth. The DART spacecraft will achieve the kinetic impact by deliberately crashing itself into the moonlet at a speed of approximately 6 km/s, with the aid of an onboard camera and sophisticated autonomous navigation software. The collision will change the speed of the moonlet in its orbit around the main body by a fraction of one percent, enough to be measured using telescopes on Earth.
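The "fraction of one percent" figure in the quoted mission description can be sanity-checked with a few lines of Python. The parameter values below (system mass 5.28E+11 kg, separation 1180 m, a ~500 kg impactor at 6 km/s, body diameters of 150 m and 800 m) are the ones cited elsewhere in this answer; the script just applies Kepler's third law and compares impactor and moonlet momenta.

```python
import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
M_system = 5.28e11          # Didymos system mass, kg
R = 1180.0                  # primary-moonlet separation, m
m_sc, v_sc = 500.0, 6000.0  # DART impactor mass (kg) and impact speed (m/s)

# Kepler's third law for the binary: T^2 = 4 pi^2 R^3 / (G (m1 + m2))
T = math.sqrt(4.0 * math.pi**2 * R**3 / (G * M_system))  # ~42,900 s

# Speed of the moonlet, assuming a circular orbit
v_orb = 2.0 * math.pi * R / T                            # ~0.173 m/s

# Moonlet mass from the cube of the diameter ratio (equal densities assumed)
m_moonlet = (150.0 / 800.0) ** 3 * M_system

# Fractional momentum change if all impactor momentum is absorbed
delta = (m_sc * v_sc) / (m_moonlet * v_orb)              # ~0.005, i.e. ~0.5%
```

The half-percent estimate is of course only an order-of-magnitude check; the actual momentum transfer depends on the impact geometry and on ejecta enhancement.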
Wikipedia's Double Asteroid Redirection Test says that the launch mass is 500 kg, and the Lunar and Planetary Science XLVIII (2017) paper The Double Asteroid Redirection Test (DART) Element of the Asteroid Impact and Deflection Assessment (AIDA) Mission gives an impact mass of ~490 kg. With a system mass \(M = m_1+m_2\) of 5.28E+11 kg, a separation \(R\) of 1180 meters, and the gravitational constant \(G\) of 6.674E-11 m^3/(kg s^2), the orbital period (from here): $$ T^2 = \frac{4 \pi ^2 R^3} {G(m_1+m_2)} $$ is about 42,900 seconds, and if the orbit were circular that corresponds to an orbital velocity of about 0.173 m/s. The momentum of the ~500 kg spacecraft at 6,000 m/s is 3E+06 kg m/s; that of the moonlet in the system's center of mass (assuming the moonlet has about 0.66% of the system mass, if you assume equal density) is about 6.01E+08 kg m/s, so the complete absorption of the impactor's momentum could change the momentum of the moonlet by roughly half of one percent.

Schematic of the DART mission shows the impact on the moonlet of asteroid (65803) Didymos. Post-impact observations from Earth-based optical telescopes and planetary radar would, in turn, measure the change in the moonlet's orbit about the parent body.

The near-Earth asteroid (185851) 2000 DP107 in many ways is an analog to Didymos. 2000 DP107 was the first binary asteroid ever imaged by radar. This animation is derived from planetary radar observations. In this example (2000 DP107), the primary and secondary are about 850 meters and 300 meters in diameter. They are separated by about 2.7 km. The primary rotates once every 2.77 hours while the tidally locked secondary orbits the primary about once every 42 hours.

If the momentum of the moonlet is changed by roughly half of one percent, how much does the orbit period of 11.92 hours change? How long is the necessary observation time to validate such a small change of the moonlet's orbital period?
– Uwe Feb 8 '19 at 15:19

@Uwe I thought one of the links in my post addressed that, but it doesn't. I guess a few months, but there's no rush as far as I know. – uhoh Feb 8 '19 at 15:26

A few months for validation would be OK, but it should be done during the closest approach to Earth around October 2022. It should be done before the Didymos system is too far away again for precise observation of its moonlet. – Uwe Feb 8 '19 at 15:33

@Uwe I think the pair are an eclipsing binary seen from Earth (diameters of 300 and 800 meters), so a light curve is sufficient; they don't need to be spatially resolved. As long as they can be detected optically, all that's necessary is a series of measurements of the eclipse and transit timings. I'm sure this has all been carefully thought out before planning the mission; let's see if we can find a reference to the observation plan. – uhoh Feb 8 '19 at 20:31

Just thinking: is it possible to impact an asteroid to change its course such that it hits the target asteroid which is on a collision course with Earth? Then I think we could change a trajectory a lot! – Prakhar Feb 9 '19 at 5:55

The phrase "2015 tools and knowledge" combines two very different things. If we are limited to today's tools, the best we can do is impact a body so the body absorbs the momentum. The easiest way to satisfy "meaningfully adjust the orbit of something" is to make it not hit (or hit) the Earth. Rosetta is 2900 kg and my WAG for a closing speed, assuming you want it high even though everything goes around the Sun, is \(10\%\) of Mars' orbital velocity, or 2.4 km/s. That gives you the delta-v you can impart (though you can't necessarily do it in any chosen direction) by dividing the momentum by the mass of the object. As far as I know, there are no known objects heading this way.
If we found a comet in an extremely eccentric orbit that would hit the Earth next pass (or the one after that), a small nudge would prevent the disaster. Then you are into 2015 knowledge: how well can we measure the orbit? Ross Millikan
Edge betweenness centrality as a failure predictor in network models of structurally disordered materials

Mahshid Pournajar, Michael Zaiser & Paolo Moretti. Scientific Reports volume 12, Article number: 11814 (2022)

Network theoretical measures such as geodesic edge betweenness centrality (GEBC) have been proposed as failure predictors in network models of load-driven materials failure. Edge betweenness centrality ranks which links are significant, based on the fraction of shortest paths that pass through the links between network nodes. We study GEBC as a failure predictor for two-dimensional fuse network models of load transmission in structurally disordered materials.
We analyze the evolution of edge betweenness centrality in the run-up to failure and the correlation between GEBC and failure propensity for both hierarchical and non-hierarchical networks exhibiting various degrees of disorder. We observe a non-trivial relationship between GEBC and failure propensity, which suggests that the idea of GEBC as a useful failure predictor needs to be strongly qualified. The identification of failure locations in materials is a generic problem in engineering mechanics. In a perfect material, failure is controlled by the largest stress concentration. Real materials are not perfect. They are disordered, and therefore in the fracture of a real material, fluctuations and statistical considerations play a major role. Thus, the question which we have to address in a disordered material is the following: In which sense are the locations of catastrophic failure initiation, or of local damage accumulation, different from the rest of the material? In this research work, we explore a method to predict failure locations in quasi-brittle materials by using topological measures to distinguish locations of enhanced local failure probability. In ideal elastic-brittle materials, failure occurs by nucleation and growth of a crack that separates the sample along a failure surface; thus, failure occurs strictly at the location of the largest stress concentration at the crack tip. In quasi-brittle materials, by contrast, damage accumulation is spread over a fracture process zone ahead of the crack tip. This behavior emerges as a consequence of disorder, which in this case refers to local fluctuations in failure thresholds. In materials with large disorder, this process zone may be extensive, and an increasing process zone size leads to a transition from localized to diffuse failure. Thus, statistical measures are required for failure prediction1,2,3.
Many quasi-brittle materials are also microstructurally disordered, i.e., characterized by heterogeneity in their structural arrangement. Recently, disordered mechanical meta-materials of this type have been developed, which exhibit remarkable properties such as high strength-to-weight ratio and auxetic behavior4. Mechanical properties and failure behavior of materials can, in fact, be controlled by tuning only topology and geometrical structure rather than material properties5. Hanifpour et al. showed that the mechanical properties and fracture mechanisms of disordered lattices are dependent on geometry6. They have shown that the topology of the lattice is crucial for the auxetic behavior of materials with negative Poisson's ratio, as even small changes in topology can significantly affect the Poisson's ratio. More generally speaking, controlling geometry and topology is at the core of designing metamaterials with tailored mechanical response7. In relation to tuneable failure properties, it has been shown that material strength and toughness can be improved by endowing materials with appropriately designed hierarchical (micro)structures8,9. Sen and Buehler argue that, through a hierarchical structure, the mechanical properties of brittle materials can be tuned to enhance fracture toughness10. Fuse and beam network models have been used to analyze the effect of hierarchical organization on damage accumulation and modes of failure11,12,13. Results indicate that hierarchical (micro)structure affects failure considerably, by suppressing crack propagation in favor of local damage nucleation and diffuse percolation. The effect of structural and geometrical properties on failure mechanisms can be investigated via network analysis approaches. Network analysis can be applied for analyzing various types of materials and structures that are representable as networks carrying loads14,15,16.
The applicability of this method is not merely limited to systems that are topologically structured as networks of edges but may also encompass analysis of bulk material properties, including porous materials17 and biological matter18. There are various studies which apply network analysis methods to study technological infrastructures such as the internet, electrical supply networks or transportation networks. In view of the stability of such networks, nodes with large betweenness centrality seem to be key features of the investigated systems19,20. Edge Betweenness Centrality (EBC) is a measure describing the frequency at which an edge lies on the shortest path between pairs of nodes in a network (for a mathematical definition, see our methods section). In the context of materials design, it has been proposed that potential failure locations can be identified by correlation to large values of edge betweenness centrality21. Recent studies seem to confirm that material failure occurs preferentially at locations which exhibit large Geodesic (i.e., strictly relying on the shortest path metric) Edge Betweenness Centrality (GEBC) values22,23. These results demonstrate that assessing failure locations of a system does not necessarily require the calculation of the local loadings, e.g. in terms of locally stored elastic energy. Analogously, the relevance of the shortest-path metric and of the GEBC has been pointed out in problems of force transmission24, heat conduction25, and transport phenomena26. Predicting failure locations, however, is inherently more complex than establishing that links that fail have high centrality, as mechanical failure is contingent on the interplay of local and global stress patterns and their evolution under load, and of local material properties and damage evolution27,28,29,30,31.
The goal of our study is establishing the usability of the GEBC metric as a structural predictor of future failure events, in models of quasi-brittle materials with varying degrees of local-strength disorder. In particular, we consider both hierarchical and non-hierarchical structures. What network metrics set apart future failure locations before damage takes place? Do predictions improve as damage is accumulated and failure approaches? To answer these questions, we simulate loading and failure in our model, using the Random Fuse Model (RFM)32,33,34. We calculate the GEBC of elements in the initial state of both hierarchical and random networks, and investigate how GEBC statistics change until global failure, and how these changes correlate with the propensity of single network links to fail at a certain stage of the damage accumulation process. In the following, we introduce the methods that we have adopted in our analysis. The table below contains the definitions of the main symbols and acronyms that we use in the rest of the paper, grouped thematically.
GEBC: Geodesic Edge Betweenness Centrality
RFM: Random Fuse Model
HFN: Hierarchical Fuse Network
SHFN: Shuffled Hierarchical Fuse Network
RFN: Random Fuse Network
\(C\): GEBC of a given edge
\(N\): Number of nodes in a network
\(E\): Number of edges in a network
\(L\): Linear size of a system
\(n\): Number of hierarchical levels
\((ij)\): Generic edge connecting nodes i and j
\(V_i\): Voltage (displacement) at node i
\(I_{ij}\): Current (force) at edge (ij)
\(t_{ij}\): Current threshold of edge (ij)
\(\eta \): Failing edge index
\(\beta \): Failing edge index, counting from failure
\(N_\mathrm {f}\): Final number of failed edges
\((ij)_\eta \): \(\eta \)-th failing edge
\(I_\eta \): Global current at failure stage \(\eta \)
\(V_\eta \): Global voltage at failure stage \(\eta \)
\(f_\eta \): Maximum current-strength ratio at failure stage \(\eta \)
\(\varepsilon \): Global strain
\(\sigma \): Global stress
\(\sigma _\mathrm {p}\): Peak stress
\(\varepsilon _\mathrm {f}\): Failure strain
\(\varepsilon _\mathrm {p}\): Strain at peak stress
\(\alpha \): Failure class
\(z\): Number of edges per sample in a failure class
\(\Sigma _C^\alpha \): GEBC mean deviation for class \(\alpha \)

Geodesic edge betweenness centrality

GEBC is a network theoretical measure to characterize the relevance of edges for the transport properties of a network, and also to identify community boundaries in network structures. We consider a network consisting of N nodes connected by E edges. Our network is undirected (each edge can be traversed in both directions) and unweighted (all edges count as a step of unit length along a path). Each edge connecting generic nodes i and j is identified by a generic index h, and its end nodes by the ordered pair (ij). Under these assumptions, we compute path length simply as the number of edges along the path.
The GEBC value C(h) of an edge h is then defined as $$\begin{aligned} C(h) = \frac{2}{N(N-1)}\sum _{a\ne b} \frac{\sigma _{ab}(h)}{\sigma _{ab}} \end{aligned}$$ (1) where \(\sigma _{ab}\) is the number of all shortest network paths connecting nodes a and b, and \(\sigma _{ab}(h)\) is the number of all of these paths that pass through edge h. In networks with a community structure, edges that connect different communities have high GEBC values since all of the shortest paths connecting nodes from the respective communities pass through those links. Therefore, by removing such edges, different communities of the network are separated from each other35. From a computational perspective, direct implementation of Eq. (1) is prohibitively expensive for large networks, hence we use the algorithm formulated by Brandes36 in actual computations.

Construction of hierarchical (HFN) and non-hierarchical (RFN) networks

Fuse network models provide a computationally efficient way of studying load-driven failure processes. Such models consider networks of nodes connected by load-carrying edges. Each node i is associated with a scalar displacement-like variable ("voltage", or "strain") \(V_i\) while the connecting edges, which are assumed of unit length, are associated with scalar load variables ("currents", or "stresses"): an edge (ij) connecting nodes i and j is envisaged as an ohmic resistor of unit conductance which, under a voltage difference between the two nodes, carries a current \(I_{ij} = V_i - V_j\). Coupled Kirchhoff equations for all nodes are solved to compute the global current/voltage pattern. Once the current through an edge reaches a certain threshold, \(|I_{ij}|\ge t_{ij}\), the conductance of the edge is set to zero – in the electrical analogue, the fuse burns. Such fuse network models are characterized by the topology of the network on the one hand, and by the physical properties of the edges (conductances and failure thresholds) on the other hand.
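The relaxation step just described can be sketched in a few dozen lines: intact edges are unit-conductance resistors, bus-bar nodes are held at fixed voltage, and interior node voltages follow from Kirchhoff's current law. The sketch below is a minimal illustration under these assumptions, not the production code used in the paper, and the helper names are ours. It returns the edge currents and the edge with the largest current-to-threshold ratio, i.e. the next fuse to burn.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def fuse_step(edges, thresholds, top, bottom, interior):
    """Solve Kirchhoff's equations with V=1 on the top bus and V=0 on the
    bottom bus; return edge currents and the most critical edge."""
    V = {v: 1.0 for v in top}
    V.update({v: 0.0 for v in bottom})
    idx = {v: i for i, v in enumerate(interior)}
    n = len(interior)
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for (u, v) in edges:             # each intact edge has unit conductance
        for a, c in ((u, v), (v, u)):
            if a in idx:
                A[idx[a]][idx[a]] += 1.0
                if c in idx:
                    A[idx[a]][idx[c]] -= 1.0
                else:                # neighbor sits on a bus bar: known voltage
                    b[idx[a]] += V[c]
    x = solve(A, b)
    V.update({v: x[idx[v]] for v in interior})
    currents = {e: V[e[0]] - V[e[1]] for e in edges}
    critical = max(edges, key=lambda e: abs(currents[e]) / thresholds[e])
    return currents, critical

# Two fuses in series between the bus bars; the weaker one is critical.
edges = [("T", "a"), ("a", "B")]
thresholds = {("T", "a"): 1.0, ("a", "B"): 0.5}
currents, critical = fuse_step(edges, thresholds, {"T"}, {"B"}, ["a"])
```

In an actual simulation this step is repeated: the critical edge is removed, the Kirchhoff system is re-solved, and the process continues until the network no longer carries current.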
In the present study, we shall assume that all edges have statistically equivalent properties, with unit length and unit conductance. Threshold currents are independent identically distributed random variables, which are assigned to the edges according to a Weibull distribution with unit mean37,38, i.e., the cumulative distribution function is $$\begin{aligned} P(t_{ij}) = 1- \exp \left[ -\left( \frac{t_{ij}}{t_0}\right) ^k\right] , \quad t_0=1/\Gamma (1+1/k). \end{aligned}$$ (2) The Weibull shape parameter k ("Weibull exponent") controls the statistical spread of the values of \(t_{ij}\), with larger values of k corresponding to a narrower distribution. The choice of the Weibull distribution is common in the materials science literature and is motivated by physical reasoning34,39. Each edge can be considered a one-dimensional assembly of load-carrying elements of heterogeneously distributed strengths. The global strength of the edge can thus be computed using arguments from the statistics of extremes, and under relatively general assumptions \(t_{ij}\) follows a Weibull distribution such as in Eq. (2). As is customary in statistical models of materials failure, the statistical spread of threshold values \(t_{ij}\) is a way to model local heterogeneity of a material, and thus "disorder". The use of Weibull distributions allows us to tune disorder by acting on k: systems with \(k\approx 1\) are highly disordered, whereas larger values of k correspond to a more homogeneous local strength distribution, and thus less disorder. We note that other definitions of disorder are possible. For instance, disorder may refer to topological heterogeneity (rather than response heterogeneity), as encountered in models for rod and nanowire networks40,41,42. In terms of geometry, we consider two-dimensional (2D) networks where we distinguish a loading direction (in the following, "vertical" direction) and a perpendicular direction (in the following, "horizontal" direction).
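Threshold assignment per Eq. (2) can be sketched by inverse-transform sampling; this is a minimal illustration (function name and seeding convention are our own), in which the scale \(t_0 = 1/\Gamma(1+1/k)\) guarantees unit mean for any shape parameter k:

```python
import math
import random

def sample_thresholds(n_edges, k, seed=0):
    """Draw i.i.d. threshold currents t_ij from the Weibull distribution of
    Eq. (2).  The scale t0 = 1/Gamma(1+1/k) makes the mean equal to 1 for
    every shape parameter k."""
    rng = random.Random(seed)
    t0 = 1.0 / math.gamma(1.0 + 1.0 / k)
    # P(t) = 1 - exp[-(t/t0)^k]  inverts to  t = t0 * (-ln(1-u))^(1/k)
    return [t0 * (-math.log(1.0 - rng.random())) ** (1.0 / k)
            for _ in range(n_edges)]
```

With k close to 1 the sampled thresholds scatter broadly (strong disorder), while e.g. k = 9 yields values tightly clustered around 1 (weak disorder).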
The models are based on a square lattice of nodes sandwiched between top and bottom bus bars which ensure a constant potential difference across the network in the loading direction. The bus bars are connected by \(L=2^n\) vertical columns consisting of \(L-1\) nodes connected by L edges. Vertical edges are in the following denoted as load-carrying edges, while the columns are denoted as load-carrying fibers. In the perpendicular direction, the load-carrying fibers are connected by horizontal edges between adjacent nodes (denoted as "cross links") to form a network structure. We consider two types of cross link patterns which lead to different global network topologies, henceforth denoted as hierarchical (HFN) and random (RFN) networks. We first deal with the HFN case.

Hierarchical network construction

A (deterministic) hierarchical fuse network (HFN) of size \(L=2^n\), where n is the number of hierarchical levels, can be constructed in an iterative "bottom-up" manner as shown in Fig. 1, top11. The resulting structure is sub-divided into a hierarchy of modules separated by load-parallel gaps (HFN panel in Fig. 1, green lines). Defining the length of a gap as the number of vertically adjacent missing cross links, we find that HFNs exhibit a power-law type gap length distribution. From the point of view of GEBC, it is evident that this feature leads to high GEBC values in the cross links between the longest gaps.

Figure 1: Network models. Top: Bottom-up construction of a HFN. A module of level zero is a load-carrying vertical edge. A module of level 1 (the generator) consists of 4 level-0 modules plus a load-perpendicular cross link which spans the module. Higher-level modules are constructed recursively by replacing, in a module of level n, each level-\(n-1\) sub-module by a level-n module. The resulting structure defines a module of level \(n+1\), as illustrated in the figure up to \(n=4\). Circles indicate network nodes.
Dark brown circles are boundary nodes, where boundary conditions are applied. Edges are represented as black segments connecting pairs of nodes. Boundary edges (triple segments) are not breakable and are excluded from the statistical GEBC study. Bottom: Examples of HFN, SHFN and RFN of \(L=32\). Green lines indicate load-parallel gaps11,13.

A non-deterministic version of a hierarchical network is obtained by starting from the HFN structure and then randomly reshuffling first the columns and then the rows of the adjacency matrix. This process, which preserves the power-law nature of the gap statistics and thus the hierarchical structure of the network, leads to a stochastic hierarchical fuse network, denoted as SHFN (Fig. 1, bottom center).

Random network construction

A RFN of size \(L = 2^n\) contains the same number of load-carrying edges and cross links as the corresponding HFN, but now the cross links are distributed randomly over the available pairs of horizontally adjacent nodes (Fig. 1, bottom). Alternatively, one may start from a lattice and randomly remove the same fraction of cross links that are missing in the corresponding HFN. This leads to a non-hierarchical, statistically homogeneous structure where load-parallel gaps have an exponential size distribution.

Simulation protocol

We simulate material loading and failure using the RFM. In this model the applied external potential plays the role of the mechanical displacement, while the local current provides a measure of stress. Failure of network links occurs once the corresponding current exceeds a threshold value32,33. Our motivation to use the RFM is that this model is not only endowed with an evident network structure but has also been extensively studied as a representation of basic features of failure processes in disordered materials34, including materials with hierarchical microstructures11,12. In the simulations, we follow the standard quasi-static deformation protocol34.
We use an index \(\eta \) to count the failing edges in the sequence of failure, starting from \(\eta = 1\). To construct the failure sequence, a unit voltage difference is applied between the top and bottom bus bars, and all nodal voltages and edge currents are evaluated. At step \(\eta \) one identifies the edge \((ij)_{\eta }\) with the highest load-strength ratio, \(f_{\eta } = \max _{ij}(I_{ij}/t_{ij})\), and sets the global voltage \(V_\eta \) to \(f_{\eta }\), the value at which this critical edge fails. The corresponding global load is evaluated as the total current through the upper or, equivalently, lower boundary, \(I_{\eta } = f_{\eta } \sum _{ij\in {{\mathscr {B}}}} I_{ij}\), where the sum runs over the set \({{\mathscr {B}}}\) of L edges that connect the network to the top or bottom bus bars. The values of \(f_{\eta }\), \(I_{\eta }\), and \((ij)_{\eta }\) are stored. After setting the conductance of the critical edge to zero and incrementing \(\eta \rightarrow \eta + 1\), the computation is repeated to identify the next critical edge, and so on. The process is terminated once the network is disconnected between top and bottom, as indicated by a zero global conductance. The final value \(\eta = N_{\mathrm{f}}\) gives the total number of failed edges in the sample. The set of \((V_\eta ,I_\eta )\) pairs constitutes the stress-strain curve of the system – or \(I-V\) characteristic – encoding its mechanical response to the applied load. A typical stress-strain curve is shown in Fig. 2 (thin blue line). This simulation protocol mimics an idealized deformation process, where each time the voltage is set to exactly the value that produces failure of the weakest link (i.e. the link of highest load-strength ratio \(f_\eta \)), which ensures that only one link breaks at a time. From the sequence of link failures and the corresponding values of global voltage and current, the behavior under different, more realistic loading scenarios can be derived.
For instance, one may assume that the system is loaded by monotonically increasing the voltage difference between the top and bottom bus bars – in what is known as displacement control. In this case, the \(I-V\) characteristic and the sequence of broken links can be obtained from the quasi-static data by taking its voltage envelope, a procedure which we exemplify graphically in Fig. 2 (thick orange line)34. The new sequence of \(V_{\eta _i}\) is the set of monotonically increasing values of V that we seek for the displacement-control protocol, while the set of edges that are broken before the next increase in \(V_{\eta _i}\) constitutes an avalanche. At every step, \(\varepsilon =V_{\eta _i}/L\) is the global strain, and \(\sigma =I_{\eta _i}/L\) the global stress. Similar considerations allow one to extract information for a scenario where the voltage difference is adjusted so as to impose an increasing global current through the network (load control).

Figure 2: Example of an \(I-V\) characteristic. Thin blue line: data from the quasi-static simulation protocol. Thick orange line: displacement-control envelope, representing the dependence of the global stress \(\sigma \) on the global strain \(\varepsilon \) in the case in which the voltage difference between the top and bottom bus bars is increased monotonically. The peak stress \(\sigma _\mathrm {p}\) denotes the peak load that the system can carry, while the failure strain \(\varepsilon _\mathrm {f}\) stands for the maximum strain that is encountered in displacement control, before the system breaks. The interval between peak load and failure identifies the post-peak regime.

In the present simulations, we consider the displacement-controlled scenario.
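The voltage-envelope construction described above can be sketched as a minimal helper (our own naming, not the paper's code): only the steps where the failure voltage sets a new record survive in the displacement-controlled curve, and all edges broken before the next record form one avalanche:

```python
def displacement_control(V, I):
    """Convert a quasi-static failure record of (V[eta], I[eta]) pairs into
    the displacement-controlled response: keep only the steps where the
    failure voltage sets a new record; all edges broken until the next record
    form one avalanche triggered at that voltage."""
    envelope = []          # (V, I) pairs on the monotone-V envelope
    avalanches = []        # number of edges broken per avalanche
    v_max = float("-inf")
    for v, i in zip(V, I):
        if v > v_max:      # new record: a fresh avalanche starts here
            v_max = v
            envelope.append((v, i))
            avalanches.append(1)
        else:              # edge breaks without any further voltage increase
            avalanches[-1] += 1
    return envelope, avalanches
```

For a record V = [1.0, 0.8, 1.2, 1.1, 1.3], the envelope keeps steps 1, 3 and 5, yielding avalanches of sizes 2, 2 and 1.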
Under this boundary condition, the global strain \(\varepsilon \) always increases, the peak stress of the failure sequence is \(\sigma _{\mathrm{p}} = \max _{\eta } I_{\eta }/L\), and the failure strain is \(\varepsilon _{\mathrm{f}} = \max _{\eta } V_{\eta }/L = \max _{\eta } f_{\eta }/L\) (see Figure 2). In a typical simulation, the system first reaches the peak stress \(\sigma _{\mathrm{p}}\) (and the corresponding strain value \(\varepsilon _{\mathrm{p}}\)) and subsequently fails (when the strain reaches \(\varepsilon _{\mathrm{f}}\)). The peak stress signals an important deformation stage notably in non-hierarchical RFN structures, where it separates a regime of stable, statistically homogeneous damage accumulation from a failure regime which is dominated by damage localization, leading to nucleation and growth of a critical crack34. In HFN structures, by contrast, a system that has reached and passed peak stress still exhibits damage accumulation and prevents the formation of critical cracks, a fact that may result in an extended post-peak regime11,12.

Evolution of edge betweenness centrality statistics

In Fig. 3, the edge betweenness centrality patterns of different initially intact networks are plotted together with the associated GEBC statistics. Fig. 3a–c show GEBC patterns where the edge midpoints are colored based on the edges' C values.

Figure 3: Distribution of geodesic edge betweenness centrality across the network. Network edges coloured based on GEBC values. (a) HFN model, (b) SHFN model and (c) RFN model, all of size \(L= 128\), (d) probability distributions p(C) of edge betweenness centrality for the different network models. SHFN and RFN data are averaged over 200 network realizations. HFN and SHFN exhibit the same tail behavior.

Visual inspection of the patterns reveals distinctive differences between hierarchical and non-hierarchical networks.
In the hierarchical structures, GEBC is concentrated in the horizontal rows of cross links which connect multiple modules in the load-perpendicular direction. In these cross links, GEBC can assume very high values. At the same time, it may be noted that these links are, in the initial undamaged state of the network, load-free, i.e., the network structure systematically ensures that the most central links are not strongly loaded. In the random RFN reference structures, on the other hand, the distribution of GEBC values is much more homogeneous and no distinctive GEBC patterns can be identified. The probability distribution p(C) of edge betweenness centrality is shown for each network type in Fig. 3d, where for the stochastic SHFN and RFN networks the statistics have been averaged over the initial conditions of 200 realizations. As expected from visual inspection of the GEBC patterns, the distribution of C values for RFN structures is much narrower than for their HFN and SHFN counterparts. Moreover, the distributions for HFN and SHFN exhibit a fat power-law tail towards high GEBC values, where \(p(C) \propto C^{-\delta }\) with \(\delta \approx 2.6\), a behavior which reflects the power-law statistics of gap lengths in the hierarchical-modular structure. Under increasing load, accumulation of damage is accompanied by changes in the statistical distribution of GEBC. On the one hand, the probability of edge failure may depend on the edge's C value, as suggested in the literature. On the other hand, the removal of an edge may change the GEBC values of other edges in the system. We have determined the evolution of the p(C) curve for different numbers of removed edges, as shown in Fig. 4 for SHFN and RFN structures with \(L=128\): the p(C) curves are evaluated after removal of 100, 500, 900, 1300 and 1700 edges.
We note that the last number is close to global failure, which for the RFN structures occurs at about 1800 edge failures and for the SHFN at about 2100 failures.

Figure 4: Evolution of GEBC statistics with accumulating damage. p(C) vs. C curves after 100, 500, 900, 1300 and 1700 failed edges, for hierarchical SHFN (a), and non-hierarchical RFN (b). In all simulations \(L= 128, k=3\).

For both SHFN and RFN structures, the evolution of GEBC statistics is characterized by a fattening of the distribution tails, at both low and high C values. The fattening of the high-C tail of the distribution is particularly evident in RFN where, near failure, the p(C) probability density functions develop outliers that extend the spectrum of C values to much higher levels than in the initial state. The reason for this – at first glance surprising – behavior is that RFN fail by nucleation-and-propagation of a critical crack which separates the system into two parts11. Edges located near the crack tip thus acquire GEBC values which are much higher than any C values in the undamaged initial state and which increase with increasing crack length. SHFN, on the other hand, fail by diffuse nucleation of damage without formation of a coherently propagating crack11. Here, the fattening of the high-C tail occurs in a gradual manner and without development of statistical outliers. Instead, we observe a slight decrease of the exponent \(\delta \) as damage accumulates.

Correlation between GEBC and failure propensity

Given that the GEBC statistics evolve in the run-up to global failure, when correlating GEBC and failure propensity we think it is mandatory to account for the stage of the damage process at which GEBC is determined, and the stage at which failure occurs.
We note in particular that failure in non-hierarchical systems (akin to our RFN model) is often interpreted as a critical phenomenon34, where scale-invariant behavior is encountered at peak load, e.g., in the form of avalanches that are power-law distributed in size. Hierarchical systems (and in particular our HFN and SHFN models), instead, exhibit generic critical-like behavior: while one can clearly identify a peak load at a given \(\varepsilon _\mathrm {p}\), scale-invariant avalanches are encountered in a broad range of \(\varepsilon <\varepsilon _\mathrm {p}\) for any system size11, a behavior which was also highlighted in problems of hierarchical percolation43,44 and spreading45. Our aim is thus to analyze how the GEBC evolves with damage in this complex scenario. For every simulation, we re-enumerate the failed edges based on their position \(\beta _s = N_{\mathrm{f},s}-\eta _s\) in the failure sequence of sample s, counting backwards from the point of failure. Based on their \(\beta _s\) values we divide the failed edges into classes \({{\mathscr {C}}}^\alpha = \cup _{s} \{z(\alpha -1) < \beta _s \le z\alpha \}\). Thus, each class consists of zM members, where M is the number of simulated samples for a given set of parameters L, k. In the following we fix \(z=25\), and in order to assess the role of system size we consider sizes \(L=64,\,128,\,256\) and the corresponding numbers of realizations \(M=400,\,200,\,50\). Thus class 1 (\(\alpha = 1\)) contains the last 25 edges to fail in all samples, class 2 the 25 edges in each sample to fail before class 1, etc. For each class \(\alpha \) we compute the average ratio between edge failure strain and sample failure strain, \(\varepsilon /\varepsilon _\mathrm {f}\), and the fraction of samples that have passed their peak stress stage, \(P(\varepsilon _\mathrm {p}<\varepsilon )\). Fig.
5 shows the dependence of \(P(\varepsilon _\mathrm {p}<\varepsilon )\) on the strain-to-failure \(1-\varepsilon /\varepsilon _\mathrm {f}\), where we recover the phase-transition-like scenario described above. Both RFN and SHFN display a transition behavior, where \(P(\varepsilon _\mathrm {p}<\varepsilon )\) acts as an order parameter. This representation allows us to monitor the statistics of GEBC as follows. For the network configurations at the beginning of each class, we determine the statistics of the GEBC values of all surviving edges in the simulated samples. The average of all the C values of sample s when the damage process is at the beginning of failure class \(\alpha \) is denoted as \({\bar{C}}^{\alpha }_s\), and the values of the individual edges pertaining to class \(\alpha \) as \(C^{\alpha }_{\beta _s}\). Using these notations, we define the class-specific GEBC mean deviation \(\Sigma _C^{\alpha }\) by $$\begin{aligned} \Sigma _C^{\alpha } = \frac{1}{z M}\sum _s \sum _{\beta _s \in {{\mathscr {C}}}^\alpha } \left[ \frac{C^{\alpha }_{\beta _s}}{{\bar{C}}^{\alpha }_s}-1\right] . \end{aligned}$$ (3) A zero value of \(\Sigma _C^{\alpha }\) indicates that the edges failing in that class have, on average, the same GEBC as all edges in the sample; in other words, there is no correlation between GEBC and failure propensity. Positive values indicate that the failing edges have above-average GEBC, and thus a positive correlation, whereas negative values indicate the opposite effect.

Figure 5: Correlation between GEBC and failure. Solid lines: Fraction of samples beyond the peak stress stage vs. reduced strain-to-failure, for networks of sizes \(L=64,\,128,\,256\); left: RFN, right: SHFN; thresholds are Weibull distributed with shape factors \(k=2\) (a), \(k=3\) (b) and \(k=9\) (c). Symbols: GEBC mean deviation vs. reduced strain-to-failure.
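The bookkeeping described above – backward re-enumeration of failed edges into classes, and the class-specific GEBC mean deviation \(\Sigma_C^{\alpha}\) – can be sketched as follows. This is an illustrative implementation with our own function names; the half-open binning is chosen so that class 1 holds exactly the last z failures of every sample, as in the text, and the mean-deviation helper assumes every sample contributes z edges per class:

```python
def failure_classes(failure_sequences, z=25):
    """Assign failed edges to classes: beta counts backwards from the end of
    each sample's failure sequence (beta = 0 for the last edge to fail), and
    class alpha = 1 collects the last z failures of every sample, class 2
    the z failures before those, and so on."""
    classes = {}
    for s, seq in enumerate(failure_sequences):
        n_f = len(seq)
        for eta, edge in enumerate(seq, start=1):
            beta = n_f - eta            # position counted back from failure
            alpha = beta // z + 1       # class index
            classes.setdefault(alpha, []).append((s, edge))
    return classes

def gebc_mean_deviation(class_C, sample_mean_C):
    """Class-specific GEBC mean deviation: the average of C / C_bar_s - 1
    over all edges of the class.  class_C maps sample index s to the GEBC
    values of that sample's edges in the class; sample_mean_C maps s to the
    mean GEBC of all surviving edges of sample s at the start of the class."""
    terms = [c / sample_mean_C[s] - 1.0
             for s, values in class_C.items() for c in values]
    return sum(terms) / len(terms)
```

A positive return value of `gebc_mean_deviation` indicates that the class's failing edges have above-average centrality, matching the interpretation given in the text.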
Figure 5 shows, for RFN and SHFN of different degrees of disorder, the evolution of the GEBC–failure correlations, by plotting for the different failure classes the \(\Sigma _C^{\alpha }\) values versus the respective mean strain-to-failure. A first comparison with the \(P(\varepsilon _\mathrm {p}<\varepsilon )\) curves suggests that in all cases \(\Sigma _C^{\alpha }\) increases rapidly as the system approaches the peak load stage (the increase in \(P(\varepsilon _\mathrm {p}<\varepsilon )\)): as damage progresses, GEBC exhibits a higher correlation with failure propensity. Interestingly, while in RFN \(\Sigma _C^{\alpha }\) clearly tends to an asymptotic behavior as sizes increase (especially in the lower-k cases), it is mostly size-independent in the case of SHFN. The biggest deviations are encountered in the low-disorder limit (\(k=9\)), where both RFN and SHFN reach failure after breaking very few edges, thus providing poorer statistics for this type of study. To elucidate the correlations between GEBC and failure propensity in more detail, we study how the edges failing in each failure class distribute over the GEBC probability distribution. To this end, we divide the cumulative distribution of C into ten-percentiles \(C_n\). Edges for which \(C_{n-1} < C \le C_n\) then fall into the nth ten-percentile of the GEBC probability distribution. Similarly, we divide the strain-to-failure of a sample into ten-percentiles. We record for each strain ten-percentile the number of failed edges in each ten-percentile of the GEBC probability distribution. This is done in two different manners: Fig. 6 considers percentiles of the initial GEBC probability distribution prior to loading; it thus reflects the predictive value of the initial GEBC. Figure 7, by contrast, considers percentiles of the GEBC probability distribution at the beginning of the current strain interval; it thus accounts for changes in GEBC due to damage accumulation.
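The ten-percentile binning of failed edges over the GEBC distribution can be sketched as follows; this is an illustrative helper with our own naming, using the empirical deciles of the C values as bin edges \(C_n\) and counting each failed edge with \(C_{n-1} < C \le C_n\) in bin n:

```python
def decile_counts(all_C, failed_C):
    """Count how many failed edges fall into each ten-percentile bin of the
    GEBC distribution: the bin edges C_n are the empirical deciles of all_C,
    and a failed edge with C_{n-1} < C <= C_n lands in bin n."""
    ranked = sorted(all_C)
    n = len(ranked)
    # decile boundaries C_1..C_10 (C_10 is the recorded maximum)
    bounds = [ranked[min(n - 1, (n * d) // 10 - 1)] for d in range(1, 11)]
    counts = [0] * 10
    for c in failed_C:
        for idx, upper in enumerate(bounds):
            if c <= upper:
                counts[idx] += 1
                break
        else:
            counts[-1] += 1   # above the recorded maximum: top bin
    return counts
```

Applied once with the initial C values (as in Fig. 6) and once with the C values at the current strain interval (as in Fig. 7), this yields the two counting schemes discussed in the text.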
Figure 6: Edge failure predictions for RFN and SHFN, based on initial values of GEBC. Number of broken edges as a function of the reduced strain-to-failure variable, for systems of size \(L=128\). Each curve represents the number of broken edges at every strain-to-failure stage, from the set of edges in the ith percentile of the distribution of initial GEBC, p(C). Thresholds are Weibull distributed with shape factors \(k=2\) (a), \(k=3\) (b) and \(k=9\) (c).

The curves in Fig. 6 show that the damage accumulation process is strongly influenced by disorder, as reflected by the shape factor of the threshold distribution. Samples with high disorder (low k) show a gradual accumulation of damage which accelerates towards failure. In samples with low disorder, on the other hand, loading is mainly elastic and damage accumulation is concentrated close to the failure strain. Moreover, the behavior is more brittle in the sense that the total amount of damage accumulation is smaller (i.e., damage is more concentrated in a critical flaw). These effects are more pronounced in RFN than in hierarchical structures, which generally accumulate more damage before failure. These observations agree with the general picture of disorder-dependent damage and failure processes in hierarchical and non-hierarchical structures as reported elsewhere13,46, where low disorder and absence of hierarchical structure promote a nucleation-and-growth scenario of brittle failure, whereas high disorder and hierarchical topology of the load-carrying network promote diffuse accumulation of damage, where failure occurs by damage percolation. Turning to GEBC effects, we observe a general tendency that more edges fail in the upper percentiles of the GEBC probability distribution. As an exception to this general tendency, however, the highest GEBC ten-percentile accounts for the smallest amount of early damage accumulation in RFN structures, and also falls behind lower percentiles in SHFN structures.
Only in the last stages of the failure process, typically beyond the peak stress stage, does the highest GEBC ten-percentile dominate damage accumulation. This observation is valid irrespective of whether one considers the initial GEBC (Fig. 6) or the current GEBC (Fig. 7).

Figure 7: Edge failure predictions for RFN and SHFN, based on current values of GEBC. Number of broken edges as a function of the reduced strain-to-failure variable, for systems of size \(L=128\). Each curve represents the number of broken edges at every strain-to-failure stage, from the set of edges in the ith percentile of the distribution of GEBC, p(C), recorded at the current strain-to-failure stage and reflecting the effects of accumulated damage. Thresholds are Weibull distributed with shape factors \(k=2\) (a) and \(k=3\) (b). Results for \(k=9\) are not shown, as they do not differ significantly from those in Fig. 6(c).

Discussion and conclusions

Our investigation confirms the finding of significant correlations between edge betweenness centrality in network structures and the propensity for edge failure under load. Such correlations can even be found in hierarchical structures that are architectured in such a manner that, in the absence of failed edges, the most central edges are load-free and thus protected against failure. At the same time, our investigation of the evolution of GEBC in the run-up to failure, and of the associated correlations with failure propensity, indicates that failure is very far from being controlled by network topology alone. There is a statistically significant global correlation between GEBC and failure propensity, in the sense that failing edges tend to have above-average GEBC, and this correlation actually increases in the run-up to global failure.
However, this general correlation does not necessarily mean that the edges with the highest GEBC are most likely to fail – as we have demonstrated, under certain conditions (non-hierarchical structure, low disorder), edges from the highest ten-percentile of the GEBC distribution are actually less likely to fail than edges from the lower percentiles. This scenario changes in the immediate vicinity of global failure: due to the reduction of stress redistribution pathways, more central edges carry higher loads and become more exposed to failure. Thus, changes in the local load pattern and the correlated evolution of the GEBC pattern lead to behavior that cannot be fully captured in terms of a single statistical signature. When considering claims that GEBC may serve as a tool for forecasting failure locations, another critical remark must be made. A generic problem in predicting materials failure resides in the fact that the actually damaged or failed volume usually amounts to a very tiny fraction of the system volume. In our simulations, the number of failed edges at the point of global failure ranges from less than \(2\%\) of all edges (non-hierarchical structures of low disorder) to approximately \(20\%\) of all edges (hierarchical structures of high disorder). This problem, which typically increases with sample size, implies that any test that is used as a forecasting tool must be very specific, otherwise the prediction will be swamped by false positives47. Taken as a single indicator, GEBC falls very short of this requirement. However, GEBC data might be one component of more complex prediction strategies based upon analysis of multidimensional data and their correlations using machine learning approaches31,47.

Data availability: The datasets generated and used during the current study are available from the corresponding author on reasonable request.

References

1. Driscoll, M. M. et al. The role of rigidity in controlling material failure. Proc. Nat. Acad. Sci. 113, 10813–10817 (2016).
2. Zhang, L., Rocklin, D. Z., Sander, L. M. & Mao, X. Fiber networks below the isostatic point: Fracture without stress concentration. Phys. Rev. Mater. 1, 052602 (2017).
3. Bonamy, D. & Bouchaud, E. Failure of heterogeneous materials: A dynamic phase transition? Phys. Rep. 498, 1–44 (2011).
4. Reid, D. R. et al. Auxetic metamaterials from disordered networks. Proc. Nat. Acad. Sci. 115, E1384–E1390 (2018).
5. Paulose, J., Meeussen, A. S. & Vitelli, V. Selective buckling via states of self-stress in topological metamaterials. Proc. Nat. Acad. Sci. 112, 7639–7644 (2015).
6. Hanifpour, M., Petersen, C. F., Alava, M. J. & Zapperi, S. Mechanics of disordered auxetic metamaterials. Eur. Phys. J. B 91, 1–8 (2018).
7. Bonfanti, S., Guerra, R., Font-Clos, F., Rayneau-Kirkhope, D. & Zapperi, S. Automatic design of mechanical metamaterial actuators. Nat. Commun. 11, 1–10 (2020).
8. Launey, M. E., Buehler, M. J. & Ritchie, R. O. On the mechanistic origins of toughness in bone. Ann. Rev. Mater. Res. 40, 25–53 (2010).
9. Bonfanti, S., Guerra, R., Zaiser, M. & Zapperi, S. Digital strategies for structured and architected materials design. APL Mater. 9, 020904 (2021).
10. Sen, D. & Buehler, M. J. Structural hierarchies define toughness and defect-tolerance despite simple and mechanically inferior brittle building blocks. Sci. Rep. 1, 1–9 (2011).
11. Moretti, P., Dietemann, B., Esfandiary, N. & Zaiser, M. Avalanche precursors of failure in hierarchical fuse networks. Sci. Rep. 8, 1–7 (2018).
12. Esfandiary, N., Zaiser, M. & Moretti, P. Statistical aspects of interface adhesion and detachment of hierarchically patterned structures. J. Stat. Mech.: Theory Exp. 2022, 023301 (2022).
13. Hosseini, S. A., Moretti, P., Konstantinidis, D. & Zaiser, M. Beam network model for fracture of materials with hierarchical microstructure. Int. J. Fract. 227, 243–257 (2021).
14. Moretti, P. & Zaiser, M.
Network analysis predicts failure of materials and structures. Proc. Nat. Acad. Sci. 116, 16666–16668 (2019).
15. Moretti, P., Renner, J., Safari, A. & Zaiser, M. Graph theoretical approaches for the characterization of damage in hierarchical materials. Eur. J. Phys. B 92, 97 (2019).
16. Böttcher, L. A random-line-graph approach to overlapping line segments. J. Complex Netw. 8, cnaa029 (2020).
17. Laubie, H., Radjai, F., Pellenq, R. & Ulm, F.-J. Stress transmission and failure in disordered porous media. Phys. Rev. Lett. 119, 075501 (2017).
18. Kasza, K. et al. Nonlinear elasticity of stiff biopolymers connected by flexible linkers. Phys. Rev. E 79, 041928 (2009).
19. Jin, S. et al. A novel application of parallel betweenness centrality to power grid contingency analysis. In 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS), 1–7 (IEEE, 2010).
20. Almeira, N., Perotti, J. I., Chacoma, A. & Billoni, O. V. Explosive dismantling of two-dimensional random lattices under betweenness centrality attacks. Chaos, Solitons & Fractals 153, 111529 (2021).
21. Barthélemy, M. Spatial networks. Phys. Rep. 499, 1–101 (2011).
22. Nguyen, C., Peetz, D., Elbanna, A. E. & Carlson, J. M. Characterization of fracture in topology-optimized bioinspired networks. Phys. Rev. E 100, 042402 (2019).
23. Berthier, E., Porter, M. A. & Daniels, K. E. Forecasting failure locations in 2-dimensional disordered lattices. Proc. Nat. Acad. Sci. 116, 16742–16749 (2019).
24. Tordesillas, A., Kahagalage, S., Ras, C., Nitka, M. & Tejchman, J. Coupled evolution of preferential paths for force and damage in the pre-failure regime in disordered and heterogeneous, quasi-brittle granular materials. Front. Mater. 7, 79 (2020).
25. Smart, A., Umbanhowar, P. & Ottino, J.
Effects of self-organization on transport in granular matter: A network-based approach. EPL (Europhys. Lett.) 79, 24002 (2007).
26. Papadopoulos, L., Porter, M. A., Daniels, K. E. & Bassett, D. S. Network analysis of particles and grains. J. Complex Netw. 6, 485–565 (2018).
27. Hu, Z. & Mahadevan, S. Uncertainty quantification and management in additive manufacturing: current status, needs, and opportunities. Int. J. Adv. Manuf. Technol. 93, 2855–2874 (2017).
28. Alava, M. J., Nukala, P. K. & Zapperi, S. Role of disorder in the size scaling of material strength. Phys. Rev. Lett. 100, 055502 (2008).
29. Lennartz-Sassinek, S., Zaiser, M., Main, I., Manzato, C. & Zapperi, S. Emergent patterns of localized damage as a precursor to catastrophic failure in a random fuse network. Phys. Rev. E 87, 042811 (2013).
30. Lennartz-Sassinek, S., Main, I., Zaiser, M. & Graham, C. Acceleration and localization of subcritical crack growth in a natural composite material. Phys. Rev. E 90, 052401 (2014).
31. Biswas, S., Fernandez Castellanos, D. & Zaiser, M. Prediction of creep failure time using machine learning. Sci. Rep. 10, 1–11 (2020).
32. Duxbury, P., Leath, P. & Beale, P. D. Breakdown properties of quenched random systems: the random-fuse network. Phys. Rev. B 36, 367 (1987).
33. Kahng, B., Batrouni, G., Redner, S., De Arcangelis, L. & Herrmann, H. Electrical breakdown in a fuse network with random, continuously distributed breaking strengths. Phys. Rev. B 37, 7625 (1988).
34. Alava, M. J., Nukala, P. K. & Zapperi, S. Statistical models of fracture. Adv. Phys. 55, 349–476 (2006).
35. Girvan, M. & Newman, M. E. Community structure in social and biological networks. Proc. Nat. Acad. Sci. 99, 7821–7826 (2002).
36. Brandes, U. A faster algorithm for betweenness centrality. J. Math. Sociol. 25, 163–177 (2001).
37. Weibull, W. A statistical distribution function of wide applicability. J. Appl. Mech. 18, 293–297 (1951).
38. Galar, D.
& Kumar, U. Chapter 6 - prognosis. In Galar, D. & Kumar, U. (eds.) eMaintenance, 311–370 (Academic Press, 2017). Alava, M. J., Nukala, P. K. & Zapperi, S. Size effects in statistical fracture. J. Phys. D: Appl. Phys. 42, 214012 (2009). Kim, D. & Nam, J. Analyzing conducting rod networks using centrality. Electrochimica Acta 370, 137725 (2021). Kumar, A. Electrical percolation in metal wire network-based strain sensors. IEEE Sens. J. 19, 10373–10378 (2019). Hwang, J., Sohn, H. & Lee, S. Computational characterization and control of electrical conductivity of nanowire composite network under mechanical deformation. Sci. Rep. 8, 16617 (2018). Article ADS PubMed PubMed Central CAS Google Scholar Boettcher, S., Cook, J. L. & Ziff, R. M. Patchy percolation on a hierarchical network with small-world bonds. Phys. Rev. E 80, 041115 (2009). Friedman, E. J. & Landsberg, A. S. Hierarchical networks, power laws, and neuronal avalanches. Chaos 23, 013135 (2013). Article ADS MathSciNet PubMed PubMed Central MATH Google Scholar Moretti, P. & Muñoz, M. A. Griffiths phases and the stretching of criticality in brain networks. Nat. Commun. 4, 2521 (2013). Shekhawat, A., Zapperi, S. & Sethna, J. P. From damage percolation to crack nucleation through finite size criticality. Phys. Rev. Lett. 110, 185505 (2013). Font-Clos, F. et al. Predicting the failure of two-dimensional silica glasses. Nat. Commun. 13, 2820 (2022). The authors acknowledge support by DFG under Grants DFG ZA 171/9-1 and ZA 171/9-3. This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 377472739/GRK 2423/1-2019. The authors are very grateful for this support. Open Access funding enabled and organized by Projekt DEAL. Department of Materials Science, WW8-Materials Simulation, Friedrich-Alexander Universität Erlangen-Nürnberg, Fürth, 90762, Germany Mahshid Pournajar, Michael Zaiser & Paolo Moretti Mahshid Pournajar Michael Zaiser Paolo Moretti M.P. carried out the simulations. 
M.P. analyzed the data, in collaboration with M.Z. and P.M. M.P. and M.Z. wrote the manuscript. P.M. edited the manuscript. All authors reviewed and approved the final version of the manuscript. Correspondence to Paolo Moretti. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Pournajar, M., Zaiser, M. & Moretti, P. Edge betweenness centrality as a failure predictor in network models of structurally disordered materials. Sci Rep 12, 11814 (2022). https://doi.org/10.1038/s41598-022-15842-y DOI: https://doi.org/10.1038/s41598-022-15842-y
S-duality (also: stationary duality, Spanier duality). A duality in homotopy theory which exists (in the absence of restrictions imposed on the dimensions of spaces) for the analogues of ordinary homotopy and cohomotopy groups in the suspension category — for the $ S $-homotopy and $ S $-cohomotopy groups, or stationary homotopy and cohomotopy groups, which form extraordinary (generalized) homology and cohomology theories. A suspension category, or $ S $-category, is a category whose objects are topological spaces $ X $, while its morphisms are classes $ \{ f \} $ of $ S $-homotopic mappings $ f $ from a $ p $-fold suspension $ S ^ { p } X _ { 1 } $ into $ S ^ { p } X _ { 2 } $; here $ f $ and $ g: S ^ { q } X _ { 1 } \rightarrow S ^ { q } X _ { 2 } $ are considered $ S $-homotopic if there exists an $ r \geq \max { ( p, q) } $ such that the suspensions $ S ^ { {r-p} } f $ and $ S ^ { {r-q} } g $ are homotopic in the ordinary sense. The set $ \{ X _ { 1 } , X _ { 2 } \} $ of such classes, which are known as $ S $-mappings, constitutes an Abelian group with respect to the so-called track addition [1], [2], [4], [5]. The group $ \{ X _ { 1 } , X _ { 2 } \} $ is the limit of the direct spectrum of the sets $ [ S ^ { k } X _ { 1 } , S ^ { k } X _ { 2 } ] $ of ordinary homotopy classes, with suspension mappings as projections; for sufficiently large $ k $ this is a spectrum of groups and homomorphisms. There exists an isomorphism $ S: \{ X _ { 1 } , X _ { 2 } \} \rightarrow \{ SX _ { 1 } , SX _ { 2 } \} $ in which the corresponding elements are represented by one and the same mapping $ S ^ { p } X _ { 1 } \rightarrow S ^ { p } X _ { 2 } $, $ p \geq 1 $. The $ n $-dual polyhedron of the polyhedron $ X $ in a sphere $ S ^ { n } $ is an arbitrary polyhedron $ D _ { n } X $ in $ S ^ { n } $ which is an $ S $-deformation retract of the complement $ S ^ { n } \setminus X $, i.e.
the morphism corresponding to the imbedding $ D _ { n } X \subset S ^ { n } \setminus X $ is an $ S $-equivalence. The polyhedron $ D _ { n } X $ exists for all $ X $, and $ X $ may be considered as $ D _ { n } ^ { 2 } X $. For any polyhedra $ X _ { 1 } , X _ { 2 } $ and any polyhedra $ D _ { n } X _ { 1 } $ and $ D _ { n } X _ { 2 } $ which are dual to them, there exists a unique mapping $$ D _ { n } : \{ X _ { 1 } , X _ { 2 } \} \rightarrow \ \{ D _ { n } X _ { 2 } , D _ { n } X _ { 1 } \} $$ satisfying the following conditions: a) It is an involutory contravariant functorial isomorphism, i.e. $ D _ { n } $ is a homomorphism such that if $$ i : X _ { 1 } \subset X _ { 2 } \ \textrm{ and } \ i ^ \prime : D _ { n } X _ { 2 } \subset D _ { n } X _ { 1 } , $$ then $$ D _ { n } \{ i \} = \{ i ^ \prime \} ; $$ if $$ \{ f _ { 1 } \} \in \{ X _ { 1 } , X _ { 2 } \} \ \textrm{ and } \ \ \{ f _ { 2 } \} \in \{ X _ { 2 } , X _ { 3 } \} , $$ then $$ D _ { n } { ( \{ f _ { 2 } \} \cdot \{ f _ { 1 } \} ) } = \ D _ { n } \{ f _ { 1 } \} \cdot D _ { n } \{ f _ { 2 } \} ; $$ and if $ \theta $ is an element of $ \{ X _ { 1 } , X _ { 2 } \} $ or of $ \{ D _ { n } X _ { 2 } , D _ { n } X _ { 1 } \} $, then $ D _ { n } D _ { n } \theta = \theta $. b) The following relations are valid: $$ SD _ { n } = D _ { {n+1} } \ \textrm{ and } \ D _ { {n+1} } S = D _ { n } , $$ where $ SD _ { n } X _ { i } $ and $ D _ { n } X _ { i } $ are considered as polyhedra $ { ( {n+1} ) } $-dual to the polyhedra $ X _ { i } $ and, correspondingly, $ SX _ { i } $, $ i = 1, 2 $; this means that $ D _ { n } $ essentially does not depend on $ n $ and is stationary with respect to suspension.
c) It satisfies the equation $$ D _ { a } ^ { n } \theta _ { * } = { ( D _ { n } \theta ) } ^ { * } D _ { a } ^ { n } , $$ where $$ \theta _ { * } : H _ { p } { ( X _ { 1 } ) } \rightarrow H _ { p } { ( X _ { 2 } ) } $$ and $$ { ( D _ { n } \theta ) } ^ { * } : H ^ { { {n-p} -1} } { ( D _ { n } X _ { 1 } ) } \rightarrow H ^ { { {n-p} -1} } { ( D _ { n } X _ { 2 } ) } $$ are homomorphisms of the above homology and cohomology groups, induced by the $ S $-mappings $ \theta \in \{ X _ { 1 } , X _ { 2 } \} $ and $ D _ { n } \theta $, and $$ D _ { a } ^ { n } : H _ { p } { ( X _ { i } ) } \rightarrow H ^ { { {n-p} -1} } { ( D _ { n } X _ { i } ) } ,\ {i=1} , 2 , $$ is the isomorphism obtained from the isomorphism of Alexander duality by replacing the set $ S ^ { n } \setminus X _ { i } $ by its $ S $-deformation retract $ D _ { n } X _ { i } $. The construction of $ D _ { n } $ is based on the representation of a given mapping as the composition of an imbedding and an $ S $-deformation retract. The $ S $-homotopy group $ \Sigma _ { p } { ( X) } $ of a space $ X $ is the group $ \{ S ^ { p } , X \} $, and the $ S $-cohomotopy group $ \Sigma ^ { p } { ( X) } $ of $ X $ is the group $ \{ X, S ^ { p } \} $. As in ordinary homotopy theory, one defines the homomorphisms $$ \phi _ { p } : \Sigma _ { p } { ( X) } \rightarrow H _ { p } { ( X) } , $$ $$ \phi ^ { p } : \Sigma ^ { p } { ( X) } \rightarrow H ^ { p } { ( X) } .
$$ Regarding the spheres $ S ^ { p } $ and $ S ^ { { {n-p} -1} } $ as $ n $-dual leads to the isomorphisms $$ D _ { n } : \Sigma _ { p } { ( X) } \rightarrow \Sigma ^ { { {n-p} -1} } { ( D _ { n } X) } $$ and to the commutative diagram $$ \begin{array}{ccc} {\Sigma _ { p } { ( X) } } & \stackrel{ \phi _ p }{\rightarrow} &{H _ { p } { ( X) } } \\ { { {D _ { n } } } \downarrow } &{} &{\downarrow { {D _ { a } ^ { n } } } } \\ {\Sigma ^ { { {n-p} -1} } { ( D _ { n } X ) } } & \stackrel{\phi ^{n-p-1}}{\rightarrow} &{H ^ { { {n-p} -1} } { ( D _ { n } X) } } \\ \end{array} $$ Thus, the isomorphism $ D _ { n } $ connects $ S $-homotopy and $ S $-cohomotopy groups, just as the isomorphism of Alexander duality $ D _ { a } ^ { n } $ connects the homology and cohomology groups. Any duality in the $ S $-category entails a duality of ordinary homotopy classes if the conditions imposed on the space ensure a one-to-one correspondence between the set of the above classes and the set of $ S $-homotopy classes. Examples of dual statements in this theory include Hurewicz's isomorphism theorem and Hopf's classification theorem. $ D _ { n } $ converts one of these theorems into the other: $ S $-homotopy groups are replaced by $ S $-cohomotopy groups, homology groups by cohomology groups, the mapping $ \phi _ { p } $ by the mapping $ \phi ^ { { {n-p} -1} } $, and the smallest dimension with a non-trivial homology group by the largest dimension with a non-trivial cohomology group, and vice versa. In ordinary homotopy theory the definition of an $ n $-cohomotopy group requires that the dimension of the space does not exceed $ {2n-2} $ (or, more generally, that the space be $ { { ( 2n-1) }} $-coconnected, $ n > 1 $), which impairs the perfectly general nature of the duality. There are several trends of generalization of the theory: e.g.
studies are made of spaces with the $ S $-homotopy type of polyhedra, the relative case, a theory with supports, etc. [3], [5], [6], [7]. The theory was one of the starting points in the development of stationary homotopy theory [8].

References:
[1] E.H. Spanier, "Duality and $S$-theory" Bull. Amer. Math. Soc., 62 (1956) pp. 194–203 MR0085506
[2] E.H. Spanier, J.H.C. Whitehead, "Duality in homotopy theory" Mathematika, 2 : 3 (1955) pp. 56–80 MR0074823 Zbl 0064.17202
[3] E.H. Spanier, J.H.C. Whitehead, "Duality in relative homotopy theory" Ann. of Math., 67 : 2 (1958) pp. 203–238 MR0105105 Zbl 0092.15701
[4] M.G. Barratt, "Track groups I; II" Proc. London Math. Soc., 5 (1955) pp. 71–106; 285–329
[5] E.H. Spanier, J.H.C. Whitehead, "The theory of carriers and $S$-theory", Algebraic Geometry and Topology (A Symp. in honor of S. Lefschetz), Princeton Univ. Press (1957) pp. 330–360 MR0084772
[6a] B. Eckmann, P.J. Hilton, "Groupes d'homotopie et dualité. Groupes absolus" C.R. Acad. Sci. Paris, 246 : 17 (1958) pp. 2444–2447 MR0100261 Zbl 0092.39901
[6b] B. Eckmann, P.J. Hilton, "Groupes d'homotopie et dualité. Suites exactes" C.R. Acad. Sci. Paris, 246 : 18 (1958) pp. 2555–2558 MR0100262 Zbl 0092.40001
[6c] B. Eckmann, P.J. Hilton, "Groupes d'homotopie et dualité. Coefficients" C.R. Acad. Sci. Paris, 246 : 21 (1958) pp. 2991–2993 MR0100263 Zbl 0092.40101
[6d] B. Eckmann, P.J. Hilton, "Transgression homotopique et cohomologique" C.R. Acad. Sci. Paris, 247 : 6 (1958) pp. 620–623 MR0100264 Zbl 0092.40102
[6e] B. Eckmann, P.J. Hilton, "Décomposition homologique d'un polyhèdre simplement connexe" C.R. Acad. Sci. Paris, 248 : 14 (1959) pp. 2054–2056
[7] E.H. Spanier, "Algebraic topology", McGraw-Hill (1966) MR0210112 MR1325242 Zbl 0145.43303
[8] G.W. Whitehead, "Recent advances in homotopy theory", Amer. Math. Soc. (1970) MR0309097 Zbl 0217.48601

S-duality. Encyclopedia of Mathematics.
URL: http://encyclopediaofmath.org/index.php?title=S-duality&oldid=49677
This article was adapted from an original article by G.S. Chogoshvili (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Tagged: MIT Find All the Eigenvalues and Eigenvectors of the 6 by 6 Matrix Find all the eigenvalues and eigenvectors of the matrix \[A=\begin{bmatrix} 10001 & 3 & 5 & 7 &9 & 11 \\ 1 & 10003 & 5 & 7 & 9 & 11 \\ 1 & 3 & 10005 & 7 & 9 & 11 \\ 1 & 3 & 5 & 10007 & 9 & 11 \\ 1 &3 & 5 & 7 & 10009 & 11 \\ 1 &3 & 5 & 7 & 9 & 10011 \end{bmatrix}.\] (MIT, Linear Algebra Homework Problem) Inverse Matrix of Positive-Definite Symmetric Matrix is Positive-Definite Suppose $A$ is a positive definite symmetric $n\times n$ matrix. (a) Prove that $A$ is invertible. (b) Prove that $A^{-1}$ is symmetric. (c) Prove that $A^{-1}$ is positive-definite. (MIT, Linear Algebra Exam Problem) The Subspace of Matrices that are Diagonalized by a Fixed Matrix Suppose that $S$ is a fixed invertible $3$ by $3$ matrix. This question is about all the matrices $A$ that are diagonalized by $S$, so that $S^{-1}AS$ is diagonal. Show that these matrices $A$ form a subspace of $3$ by $3$ matrix space. (MIT-Massachusetts Institute of Technology Exam) If a Symmetric Matrix is in Reduced Row Echelon Form, then Is it Diagonal? Using the Wronskian for Exponential Functions, Determine Whether the Set is Linearly Independent The Cyclotomic Field of 8-th Roots of Unity is $\Q(\zeta_8)=\Q(i, \sqrt{2})$ If a Subgroup Contains a Sylow Subgroup, then the Normalizer is the Subgroup itself Compute Determinant of a Matrix Using Linearly Independent Vectors Basis of Span in Vector Space of Polynomials of Degree 2 or Less
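The first problem above has a structure that makes a quick numerical sanity check possible: the matrix is $10000I$ plus a rank-one matrix $B$ whose rows all equal $(1, 3, 5, 7, 9, 11)$, so $B$ has eigenvalues $\mathrm{tr}(B) = 36$ and $0$ (with multiplicity five), and $A$ has eigenvalues $10036$ and $10000$ (with multiplicity five). A short NumPy sketch (my own check, not part of the original post) confirms this:

```python
import numpy as np

# A = 10000*I + B, where every row of B is (1, 3, 5, 7, 9, 11).
row = np.array([1, 3, 5, 7, 9, 11], dtype=float)
A = 10000 * np.eye(6) + np.tile(row, (6, 1))

# B is rank one, so its eigenvalues are trace(B) = 36 and 0 (five times);
# adding 10000*I shifts every eigenvalue by 10000.
eigvals = np.sort(np.linalg.eigvals(A).real)
print(np.round(eigvals, 6))  # [10000. 10000. 10000. 10000. 10000. 10036.]
```

The eigenvectors follow the same pattern: since every row of $B$ sums to $36$, the all-ones vector is an eigenvector for $10036$, and the eigenspace for $10000$ is the hyperplane of vectors $v$ with $v_1 + 3v_2 + 5v_3 + 7v_4 + 9v_5 + 11v_6 = 0$.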
Module 8: Perfect Competition

Calculating Profits and Losses

- Describe a firm's profit margin
- Use the average cost curve to calculate and analyze a firm's profits and losses
- Identify and explain the firm's break-even point

Profits and Losses with the Average Cost Curve

Does maximizing profit (producing where MR = MC) imply an actual economic profit? The answer depends on the firm's profit margin (or average profit), which is the relationship between price and average total cost. If the price that a firm charges is higher than its average cost of production for that quantity produced, then the firm's profit margin is positive and it is earning economic profits. Conversely, if the price that a firm charges is lower than its average cost of production, the firm's profit margin is negative and it is suffering an economic loss. You might think that, in this situation, the farmer may want to shut down immediately. Remember, however, that the firm has already paid for fixed costs, such as equipment, so it may make sense to continue to produce and incur a loss. Figure 1 illustrates three situations: (a) where at the profit maximizing quantity of output (where P = MC), price is greater than average cost, (b) where at the profit maximizing quantity of output (where P = MC), price equals average cost, and (c) where at the profit maximizing quantity of output (where P = MC), price is less than average cost.

Figure 1. Price and Average Cost at the Raspberry Farm. In (a), price intersects marginal cost above the average cost curve. Since price is greater than average cost, the firm is making a profit. In (b), price intersects marginal cost at the minimum point of the average cost curve. Since price is equal to average cost, the firm is breaking even. In (c), price intersects marginal cost below the average cost curve. Since price is less than average cost, the firm is making a loss.

First consider a situation where the price is equal to $5 for a pack of frozen raspberries.
The rule for a profit-maximizing perfectly competitive firm is to produce the level of output where Price = MR = MC, so the raspberry farmer will produce a quantity of approximately 85, which is labeled as E' in Figure 1(a). The firm's average cost of production is labeled C'. Thus, the firm's profit margin is the distance between E' and C', and it is positive. The firm is making money, but how much? Remember that the area of a rectangle is equal to its base multiplied by its height. Total revenues will be the quantity of 85 times the price of $5.00, which is shown by the rectangle from the origin over to a quantity of 85 packs (the base) up to point E' (the height), over to the price of $5, and back to the origin. The average cost of producing 85 packs is shown by point C', or about $3.50. Total costs will be the quantity of 85 times the average cost of $3.50, which is shown by the area of the rectangle from the origin to a quantity of 85, up to point C', over to the vertical axis and down to the origin. The difference between total revenues and total costs is profits. Thus, profits will be the blue shaded rectangle on top. We calculate this as: [latex]\begin{array}{lll}\text{profit}& =& \text{total revenue}-\text{total cost}\\& =& \left(85\right)\left(\$5.00\right)-\left(85\right)\left(\$3.50\right)\\& =& \$127.50\end{array}[/latex] Or, we can calculate it as: [latex]\begin{array}{lll}\text{profit}& =& \text{(price}-\text{average cost)}\times \text{quantity}\\ & =& \left(\$5.00-\$3.50\right) \times 85\\ & =& \$127.50\end{array}[/latex] Now consider Figure 1(b), where the price has fallen to $2.75 for a pack of frozen raspberries. Again, the perfectly competitive firm will choose the level of output where Price = MR = MC, but in this case, the quantity produced will be 75. At this price and output level, where the marginal cost curve is crossing the average cost curve, the price the firm receives is exactly equal to its average cost of production.
We call this the break-even point, since the profit margin is zero. The farm's total revenue at this price will be shown by the large shaded rectangle from the origin over to a quantity of 75 packs (the base) up to point E (the height), over to the price of $2.75, and back to the origin. The height of the average cost curve at Q = 75, i.e. point E, shows the average cost of producing this quantity. Total costs will be the quantity of 75 times the average cost of $2.75, which is shown by the area of the rectangle from the origin to a quantity of 75, up to point E, over to the vertical axis and down to the origin. It should be clear that the rectangles for total revenue and total cost are the same. Thus, the firm is making zero profit. The calculations are as follows: [latex]\begin{array}{lll}\text{profit}& =& \text{total revenue}-\text{total cost}\hfill \\ & =& \left(75\right)\left(\$2.75\right)-\left(75\right)\left(\$2.75\right)\hfill \\ & =& \$0\hfill \end{array}[/latex] [latex]\begin{array}{lll}\text{profit}& =& \text{(price}-\text{average cost)}\times \text{quantity}\hfill \\ & =& \left(\$2.75-\$2.75\right)\times 75\hfill \\ & =& \$0\hfill \end{array}[/latex] In Figure 1(c), the market price has fallen still further to $2.00 for a pack of frozen raspberries. At this price, marginal revenue intersects marginal cost at a quantity of 65. The farm's total revenue at this price will be shown by the large shaded rectangle from the origin over to a quantity of 65 packs (the base) up to point E" (the height), over to the price of $2, and back to the origin. The average cost of producing 65 packs is shown by point C", and is about $2.73. Since the price is less than average cost, the firm's profit margin is negative. Total costs will be the quantity of 65 times the average cost of $2.73, which is shown by the area of the rectangle from the origin to a quantity of 65, up to point C", over to the vertical axis and down to the origin.
It should be clear from examining the two rectangles that total revenue is less than total cost. Thus, the firm is losing money and the loss (or negative profit) will be the rose-shaded rectangle. The calculations are: [latex]\begin{array}{lll}\text{profit}& =& \text{total revenue}-\text{total cost}\hfill \\ & =& \left(65\right)\left(\$2.00\right)-\left(65\right)\left(\$2.73\right)\hfill \\ & =& -\$47.45\hfill \end{array}[/latex] [latex]\begin{array}{lll}\text{profit}& =&\text{(price}-\text{average cost)}\times \text{quantity}\hfill \\ & =& \left(\$2.00-\$2.73\right) \times 65\hfill \\ & =& -\$47.45\hfill \end{array}[/latex] If the market price that a perfectly competitive firm receives leads it to produce at a quantity where the price is greater than average cost, the firm will earn profits. If the price the firm receives causes it to produce at a quantity where price equals average cost, which occurs at the minimum point of the AC curve, then the firm earns zero profits. Finally, if the price the firm receives leads it to produce at a quantity where the price is less than average cost, the firm will earn losses. Table 1 summarizes this.

Table 1. Profit and Average Total Cost
Price > ATC: Firm earns an economic profit
Price = ATC: Firm earns zero economic profit
Price < ATC: Firm earns a loss

Which intersection should a firm choose? At a price of $2, MR intersects MC at two points: Q = 20 and Q = 65. It never makes sense for a firm to choose a level of output on the downward sloping part of the MC curve, because the profit is lower (the loss is bigger). Thus, the correct choice of output is Q = 65.
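All three scenarios come down to the same formula, profit = (price - average cost) x quantity. A short Python sketch (using the approximate prices, average costs, and quantities read off Figure 1, as in the text) reproduces the three calculations:

```python
def profit(price, avg_cost, quantity):
    """Economic profit (negative for a loss) at the chosen output level."""
    return (price - avg_cost) * quantity

# (a) price $5.00 > average cost $3.50 at Q = 85: economic profit
print(round(profit(5.00, 3.50, 85), 2))   # 127.5
# (b) price $2.75 = average cost at Q = 75: break-even, zero profit
print(round(profit(2.75, 2.75, 75), 2))   # 0.0
# (c) price $2.00 < average cost $2.73 at Q = 65: a loss
print(round(profit(2.00, 2.73, 65), 2))   # -47.45
```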
break-even point: the level of output where price just equals average total cost, so profit is zero
profit margin: at any given quantity of output, the difference between price and average total cost; also known as average profit

Maximizing Profit. Authored by: Clark Aldrich for Lumen Learning. License: CC BY: Attribution
How Perfectly Competitive Firms Make Output Decisions. Authored by: OpenStax College. Located at: https://cnx.org/contents/[email protected]:EkZLadKh@7/How-Perfectly-Competitive-Firm. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/[email protected]
All rights reserved content: Maximizing Profit Practice- Micro 3.9. Provided by: ACDC Leadership. Located at: https://www.youtube.com/watch?v=BQvtnjWZ0ig. License: Other. License Terms: Standard YouTube License
Effects of the homopolymer molecular weight on a diblock copolymer in a 3D spherical confinement

Dung Q. Ly & Charalampos Makatsoris (ORCID: orcid.org/0000-0003-0139-6791)

The morphologies of a diblock copolymer spherically confined within a homopolymer were investigated by using the static self-consistent field theory method. A homogeneous A-B diblock copolymer sphere was surrounded by a homopolymer C. Upon changing the diblock volume fraction, the homopolymer molecular weight and the interaction between the copolymer and its surrounding environment, different morphologies of the sphere were observed. Our calculations confirmed that when the homopolymer molecular weight was high, a complete macrophase separation between the copolymer and the homopolymer was obtained. However, when the homopolymer molecular weight was low, the homopolymer penetrated into the copolymer microdomains, diluting the diblock copolymer and reducing the interaction between the diblock copolymer segments, hence preventing them from segregating. The quest to create new functional materials at nanometer scales with targeted properties has attracted much attention from the scientific community in the last two decades [1, 2]. Block copolymers of soft materials have been identified as excellent candidates to fabricate advanced functional materials such as nanoparticles, nanocontainers/nanocapsules, nanowires and nanopores. Recently many efforts have been made to use block copolymers as nanocontainers, which can be used as drug delivery vehicles and nanoreactors [3, 4], or as scaffolds to position nanoparticles into arrays for applications such as photovoltaics, fuel cells and high-density magnetic storage media [5, 6]. One of the most interesting properties of block copolymers, which are composed of chemically different homopolymers covalently connected at one end, is their ability to self-assemble into ordered microdomain structures.
The self-assembly process is driven by an unfavourable mixing enthalpy coupled with a small mixing entropy, with the covalent bond connecting the blocks preventing macroscopic phase separation. Understanding the behaviour of morphologies of block copolymers under different conditions, such as polymeric chain architectures, composition, concentration of solvents, external fields, etc., has attracted considerable attention. In bulk, depending on the volume fraction of the individual blocks, f, and the combination \(\chi N\), where \(\chi\) is the Flory–Huggins segmental interaction parameter, which is inversely proportional to the temperature, and N is the degree of polymerization, different structures were observed, such as lamellar, cylinder, gyroid and sphere [7, 8]. In confinements, further morphologies that are not formed in the bulk have been obtained when block copolymers are confined between walls in 1D, grafted to surfaces, or placed in a cylindrical pore in 2D [9,10,11,12]. In our previous work, by changing the film thickness and surface fields, different phases from wetting layer to parallel cylinder, perpendicular cylinder, perforated lamellar, lamellar, coexisting sphere and lamellar, and coexisting cylinder and lamellar of a triblock copolymer confined between two hard walls were obtained [9]. The effect of a spherical surface on a diblock copolymer which has one end of the polymer chain fixed at the spherical surface was also investigated computationally by Vorselaars et al. [11]. By increasing the volume fraction between the two blocks they observed a sequence of morphologies from dots to stripes, then to a layer with holes and finally to a uniform shell [11]. In 2D confinements, morphologies of block copolymers confined within a cylindrical pore as a function of surface field, pore radius and its thickness were also studied [10].
Compared to 1D and 2D confinements, which have been well understood and documented, block copolymers in 3D confinements have recently attracted considerable attention both experimentally [12,13,14,15,16,17,18,19] and computationally [20,21,22,23,24,25]. It is well known that in a confinement, the morphology of a block copolymer is affected by three main factors: (1) interactions between blocks, (2) interactions between blocks and the surface boundary, and (3) the size of the confinement, i.e. the ratio between the size of the confinement space and the period of the domain in the bulk. Under these different conditions, block copolymers in 3D confinements produce a rich array of morphologies in the form of onion-like concentric lamellae, tennis balls, mushrooms, wheels, screw-like structures, stacked toroids and helices. Practically, those nanoparticles can be made in solutions or in spherical cavities. The formation of morphologies of block copolymers in solution depends not only on the polymer composition or the total degree of polymerization but also on the polymer concentration, the nature of the common solvent, the water content in solution, and the presence of additives such as homopolymers. For pure block copolymers in solution, spherical micelles are formed when the copolymer concentration is low; however, when the copolymer concentration increases, the micelles change progressively from spheres to long rods with uniform diameter, to interconnected rods, and then to vesicles [26, 27]. By adding a homopolymer to a diblock copolymer solution [28], a phase transition from vesicles to spheres was obtained. Details of the relationship between diblock copolymer morphologies and solution concentration were also presented by Higuchi et al. using the self-organised precipitation method [16, 17]. In those works, different morphologies such as lamellar, onion-like and hexagonally packed cylinders of a diblock copolymer particle as a function of solution concentration were obtained.
From the computational modelling perspective, a limited amount of work has been carried out for block copolymers in 3D spherical confinements [20,21,22,23,24,25]. Yang et al. investigated the effect of the addition of a homopolymer on a copolymer/homopolymer blend confined in a spherical cavity. Depending on the volume fraction of the homopolymer, the spherical pore surface properties and the composition of the diblock copolymer, many interesting morphologies in the form of stacked toroids, helices and Janus-like particles were observed [22]. Those morphologies, as a function of surface fields and confinement space diameter, for a diblock copolymer confined in a spherical pore were also obtained using different computational methods [20, 21, 23]. Using self-consistent field simulations, Fraaije et al. have shown different bicontinuous structures in dispersed droplets of polymer surfactant [25]. However, in that work, details of the effects of the presence of solvent in the droplet were not presented. In our present work, the focus is on the effect of the homopolymer molecular weight on the morphology of an A–B diblock copolymer spherically confined by the homopolymer C. This is in contrast to the earlier mentioned research on both hard and soft surface confinements, in which the diblock copolymer was completely confined within the boundary. In our system, the confinement depends on its environment, in that the homopolymer is allowed to penetrate into the regions occupied by the diblock copolymer, changing the nature of its behaviour. Different morphologies of the structure are obtained by changing the diblock copolymer composition, the total volume fraction of the diblock, the homopolymer molecular weight and the interaction between the copolymer and the homopolymer.
The effects of those parameters on the phase separation of a diblock copolymer/homopolymer system were previously studied both experimentally [29] and computationally [30,31,32], but starting from a homogeneous phase. In our calculation there are two main steps. The first step is to create a spherical domain that contains a disordered diblock copolymer spherically confined by a homopolymer. The second step focuses on the disorder-order phase transition under the effects of the homopolymer's presence in the sphere, which depend strongly on its molecular weight and on the interaction between the homopolymer and the diblock copolymer. In the self-consistent field theory (SCFT) [33,34,35,36,37], a polymer is composed of subchains and each subchain is modeled by a linear flexible string made up of a sequence of N segments. The segment is the primitive unit that constitutes the system. The spatial concentration distribution of the segment at position r is calculated through the statistical weight, which obeys the Edwards equation: $$\begin{aligned} \frac{\partial }{\partial s}Q_{i}(0,\mathbf{{r}}_{0};s, \mathbf{{r}}) = \left( \frac{b^2}{6}\nabla ^2 - \frac{1}{k_{B}T}V_{i}(\mathbf{{r}})\right) Q_{i}(0,\mathbf{{r}}_{0};s, \mathbf{{r}}). \end{aligned}$$ The path integral \(Q_{i}(0,\mathbf{{r}}_{0};s, \mathbf{{r}})\) in Eq. 1 is defined as the sum of the statistical weights for all the conformations of the subchain i which has both ends, at the 0th and Nth segments, fixed at positions \(\mathbf{{r_{0}}}\) and \(\mathbf{{r_{N}}}\). The segment density of the ith subchain at position \(\mathbf{{r}}\) is calculated as [35]: $$\begin{aligned} \phi _{i}(\mathbf{{r}}) = C \int q_{i}(s,\mathbf{{r}})\tilde{q}_{i}(N_{i} - s, \mathbf{{r}}) ds, \end{aligned}$$ with the integral taken over the whole subchain i. C is the normalization constant.
The forward and backward path integrals \(q_{i}\) and \(\tilde{q_{i}}\) are: $$\begin{aligned} q_{i}(s,\mathbf{{r}}) = \int Q_{i}(0,\mathbf{{r}}_{0}; s,\mathbf{{r}}) d\mathbf{{r}}_{0}, \end{aligned}$$ $$\begin{aligned} \tilde{q}_{i}(N_{i} - s,\mathbf{{r}}) = \int Q_{i}(N_{i},\mathbf{{r}}_{N_{i}}; s,\mathbf{{r}}) d\mathbf{{r}}_{N_{i}}. \end{aligned}$$ For simplicity, both ends of a subchain are free ends; therefore the initial statistical weights at these free ends are unity. In Eq. 1, b is the effective bond length (the length of the segment), \(k_{B}\) is the Boltzmann constant, T is the temperature and \(V_{i}(\mathbf{{r}})\) is the mean field, i.e. the external potential acting on the subchain i. The self-consistent potential \(V_{i}(\mathbf{{r}})\) is given by: $$\begin{aligned} V_{i}(\mathbf{{r}}) = \sum _{K}\epsilon _{KK'}\phi _{K}(\mathbf{{r}}) - \mu _{i}(\mathbf{{r}}), \end{aligned}$$ where the summation runs over all segment types K in the system. The segment–segment interaction parameter \(\epsilon _{KK'}\) is determined from the Flory–Huggins interaction parameter, \(\chi\), as: $$\begin{aligned} \chi = \frac{z}{k_{B}T}\left\{ \epsilon _{KK'} - \frac{1}{2}(\epsilon _{KK} + \epsilon _{K'K'})\right\} , \end{aligned}$$ where z is the number of nearest-neighbour sites. The chemical potential in Eq. 5, \(\mu _{i}(\mathbf{{r}})\), is calculated as the first derivative of the free energy with respect to \(\phi _{i}(\mathbf{{r}})\), where the free energy is given by [33, 35]: $$\begin{aligned} \begin{aligned} F[\{\phi _{K}\},\{V_{K}\}] =&-k_{B}T\sum _{p}M_{p}\ln Z_{p} + \frac{1}{2}\sum _{K}\sum _{K'}\int \epsilon _{KK'}\phi _{K}(\mathbf{{r}})\phi _{K'}(\mathbf{{r}}) d\mathbf{{r}} \\&- \sum _{K}\int V_{K}(\mathbf{{r}})\phi _{K}(\mathbf{{r}})d\mathbf{{r}}, \end{aligned} \end{aligned}$$ where \(M_{p}\) is the total number of chains of p-type polymer and \(Z_{p}\) is the partition function. 
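In practice the like-pair energies \(\epsilon _{KK}\) and \(\epsilon _{K'K'}\) are often fixed (frequently to zero) and Eq. 6 is inverted for the cross term. A minimal pair of helpers, with \(k_{B}T\) set to one as an illustrative convention:

```python
def eps_from_chi(chi, z, kT=1.0, eps_KK=0.0, eps_KpKp=0.0):
    """Invert Eq. 6 for the cross interaction:
    eps_KK' = chi * kT / z + (eps_KK + eps_K'K') / 2."""
    return chi * kT / z + 0.5 * (eps_KK + eps_KpKp)

def chi_from_eps(eps_cross, z, kT=1.0, eps_KK=0.0, eps_KpKp=0.0):
    """Eq. 6 as written:
    chi = (z / kT) * (eps_KK' - (eps_KK + eps_K'K') / 2)."""
    return (z / kT) * (eps_cross - 0.5 * (eps_KK + eps_KpKp))
```

The two functions are exact inverses of each other, so a round trip through them recovers the original \(\chi\).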
The statistical weights, segment densities, chemical potential (via the free energy) and mean-field potential described in Eqs. 1, 2, 5 and 7 are related to each other and need to be solved for iteratively [37]. The system we use throughout our calculation comprises a diblock copolymer AB mixed with a homopolymer C. Each diblock copolymer chain consists of \(N_{A}\) segments of A-type and \(N_{B}\) segments of B-type. Here we choose the diblock chain length \(N_{A} + N_{B} = 4 + 8 = 12\). This chain length is not so long as to cause unnecessary CPU time consumption and not so short as to prevent microphase segregation from taking place [31]. The homopolymer chain length, \(N_{C}\), is chosen as \(N_{C} = 1, 2\) and 5 (in segment units). A real-space simulation box of \(28\times 28\times 28\) (in units of the segment size, which is taken to be unity) is divided into a grid mesh of \(56\times 56\times 56\). To avoid size effects, we carried out calculations for different box sizes, and the results obtained for the free energy as a function of the box size show that box sizes of 28 \(\times\) 28 \(\times\) 28 and larger give stable and accurate results. The spatial mesh widths are chosen as \(\Delta x = \Delta y = \Delta z = 0.5\), and the mesh size along the chain is chosen as \(\Delta s = 0.2\) (for smaller values of \(\Delta s\) we obtain the same result; however, the total CPU time increases significantly as \(\Delta s\) is decreased). Periodic boundary conditions are applied. A canonical ensemble, which keeps the total volume fraction of each polymer type in the system constant, is used. The total volume fraction of the diblock copolymer AB in the system is chosen as 10\(\%\) or 20\(\%\). This means that the system contains 10\(\%\) (or 20\(\%\)) diblock copolymer and 90\(\%\) (or \(80\%\)) homopolymer. 
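The iterative closure of Eqs. 1, 2, 5 and 7 is commonly implemented as a Picard iteration with simple mixing of the fields. The generic loop below is a sketch under assumptions: the `update_density` callback (which would wrap the propagator and potential equations), the mixing parameter and the tolerance are illustrative choices, not the scheme used by SUSHI.

```python
import numpy as np

def scft_iterate(update_density, w_init, lam=0.1, tol=1e-6, max_iter=5000):
    """Picard iteration with simple mixing: w <- w + lam * (w_target - w).
    `update_density` maps the current fields w to (phi, w_target), i.e. it
    evaluates the densities from the propagators and the potentials that
    those densities imply. Convergence is declared when the largest field
    residual falls below `tol`."""
    w = w_init.copy()
    for it in range(max_iter):
        phi, w_target = update_density(w)
        dw = w_target - w
        if np.max(np.abs(dw)) < tol:
            return w, phi, it
        w = w + lam * dw
    raise RuntimeError("SCFT iteration did not converge")
```

With a contractive `update_density`, the mixing parameter trades stability for speed: small `lam` damps oscillations between the density and field updates at the cost of more iterations.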
For the purpose of this work we use the SUSHI code [37] to perform the SCFT simulations throughout the investigation. Creating a spherical domain We start the calculation from a homogeneous system that contains \(90\%\) homopolymer and \(10\%\) diblock copolymer. A confinement method is implemented to confine the diblock copolymer in the middle of the box: in the first iteration step of the SCFT calculation we restrict the diblock to the middle of the simulation box. This small domain of the diblock in the middle of the box acts as a "seed" for the growth of a spherical diblock copolymer domain [38]. For simplicity, throughout the calculation we chose the Flory–Huggins interaction parameters between the homopolymer and the diblock copolymer as \(\chi _{AC} = \chi _{BC} = \chi\). This means that the two blocks of the diblock copolymer A–B interact with the homopolymer C equally; thus, there is no selective wetting of the diblock copolymer at the copolymer/homopolymer interface. To obtain phase separation between the copolymer and the homopolymer, the interaction between them has to be strong enough; here we choose \(\chi _{AC} = \chi _{BC} = 2.0\). Initially, the interaction between the segments of the diblock copolymer is set at \(\chi _{AB} = 0\). The result after running a static SCFT calculation is shown in Fig. 1. In this figure we show the isosurface of the diblock copolymer (\(\phi _{A}\) + \(\phi _{B}\)), which has a spherical shape and is located at the center of the box. The rest of the box, i.e. the surrounding area, which is completely dominated by the homopolymer, is not shown. By varying the interaction between the copolymer and the homopolymer, \(\chi\), and the homopolymer chain length, \(N_{C}\), we find that the occupation of the diblock copolymer in the spherical domain increases with \(\chi\) and \(N_{C}\). 
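The "seed" trick of the first iteration can be pictured as a transient attractive bias for the copolymer at the box centre. The function below is a hypothetical illustration only (the radius and strength values are invented, and SUSHI's actual confinement mechanism may differ); it builds such a bias on the paper's \(56^3\) mesh.

```python
import numpy as np

def seed_field(shape=(56, 56, 56), radius=5.0, strength=5.0):
    """Hypothetical sketch of a first-iteration 'seed': an attractive
    external potential for the copolymer inside a small sphere at the
    centre of the periodic box, zero elsewhere."""
    # Cell-centre coordinates measured from the box centre.
    x, y, z = np.meshgrid(*(np.arange(n) - (n - 1) / 2.0 for n in shape),
                          indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    return np.where(r < radius, -strength, 0.0)
```

Applied only on the first iteration and then switched off, a bias of this kind nucleates a single copolymer-rich droplet at the centre, which subsequent unbiased iterations grow into the spherical domain of Fig. 1.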
With \(N_{C}\) = 1, the occupation of the copolymer in the spherical domain is 60.6\(\%\), 85.0\(\%\) and 92.3\(\%\) for \(\chi\) = 1.0, 1.5 and 2.0, respectively. Outside the sphere the occupation of the copolymer is up to \(3\%\) for \(\chi\) = 1.0, and when the interaction is strong enough, \(\chi \ge 1.4\), the copolymer almost completely disappears. This means that when the interaction between the copolymer and the homopolymer is strong enough, the outside of the sphere is totally occupied by the homopolymer and the inside of the sphere is dominated by the copolymer. However, with this short homopolymer chain length, there is always some homopolymer present inside the sphere. a A simulation box containing a diblock copolymer A–B and a homopolymer C. b A spherical domain comprising the diblock copolymer \(A_{4}B_{8}\) after a domain confinement calculation was performed. The homopolymer that occupies the rest of the simulation box is not shown. c A cut-through of the sphere Increasing the homopolymer chain length to \(N_{C} = 2\), the occupation of the copolymer in the sphere increases to 92\(\%\), \(98\%\) and \(99\%\) for \(\chi\) = 1.0, 1.5 and 2.0, respectively. Further increasing the homopolymer chain length to \(N_{C} = 5\), we obtain total macrophase separation between the copolymer and the homopolymer: the sphere is totally (\(100\%\)) occupied by the copolymer and the outside is totally occupied by the homopolymer. Effect of the homopolymer chain length In this section, starting from a structure as shown in Fig. 1b, we calculate the microphase separation of the diblock copolymer in the sphere for different homopolymer chain lengths. First, we use a short homopolymer chain length, \(N_{C} = 1\). 
With the interaction between the copolymer and the homopolymer chosen at \(\chi = 1.0\), we see that when the interaction between the two segment types A and B of the diblock, \(\chi _{AB}\), increases from its initial zero value, the occupation of the copolymer in the sphere is reduced. For example, the occupation of the copolymer is about \(56\%\) when \(\chi _{AB} = 0.2\), but when \(\chi _{AB}\) increases to 0.4 the occupation reduces to \(50\%\). This is due to the fact that, for a short homopolymer chain length and a weak interaction between the homopolymer and the copolymer, there is always some homopolymer inside the sphere, and when the repulsive interaction between monomers A and B increases, the homopolymer migrates into the interface between the A and B domains. Furthermore, in this weak interaction regime between the copolymer and the homopolymer, \(\chi = 1.0\), the homopolymer outside the sphere can easily move inside the sphere and dilute the diblock, hence reducing the interaction between segments and preventing the diblock from segregating. Increasing \(\chi _{AB}\) further to 0.8, we observe that the whole system becomes homogeneous. This result is in good agreement with the results obtained by Matsen [39] on the effect of the homopolymer molecular weight on the microphase transition in a weakly segregated diblock copolymer and homopolymer blend. He found that low-molecular-weight homopolymers tend to be miscible with the microstructure, causing the lattice spacing to diverge and the system to become homogeneous. This is because small homopolymers tend to distribute uniformly throughout the melt. Their nearly uniform distribution produces a field with little spatial variation and thus with little tendency to induce segregation. They instead dilute the copolymer concentration, effectively reducing the interaction between segments. The same conclusion was reached by Semenov [40] for diblock copolymer and homopolymer blends in the strong segregation regime. 
It is worth mentioning that when the homopolymer chain length is short the effect of thermal fluctuations would be quite significant; however, in the case of short chain lengths phase separation only happens in a very low temperature regime, hence the contribution from thermal fluctuations can be neglected [41]. In the weak interaction regime between the homopolymer and the copolymer, no microphase segregation of the diblock copolymer is observed for any value of \(\chi _{AB}\). By increasing the interaction between the homopolymer and the copolymer, \(\chi\), we see that microphase segregation starts to take place; for example, at \(\chi = 2.0\), Fig. 2 shows different stable morphologies of the A-type polymer inside the sphere at different values of \(\chi _{AB}\). At \(\chi _{AB} = 2.0\), when microphase segregation takes place, the homopolymer is present in the interface regions between the A-rich and B-rich layers, up to \(18\%\) (volume fraction). Also, in the A-rich and B-rich domains themselves the occupations of the homopolymer are \(5\%\) and \(4\%\), respectively. It is also important to notice that, by choosing the interaction between the homopolymer and the copolymer strong enough, the space outside the sphere is totally occupied by the homopolymer and the copolymer completely vanishes there. Morphology of the diblock copolymer as a function of the interaction between components A and B of the diblock, \(\chi _{AB}\). The chain length of the homopolymer is \(l_{h} = 1\), the total volume fraction of the copolymer is \(10\%\) and the interaction between the homopolymer and the copolymer is \(\chi = 2.0\). Throughout the calculation we chose equal interaction parameters between the diblock copolymer A–B and the homopolymer C, \(\chi _{AC} = \chi _{BC}\). Unless stated otherwise, hereafter the morphologies are shown for component A with the isosurface \(\phi _{A} = 0.5\) To make things clear, in Fig. 
3 we show the morphologies of all the components in the system: grey represents A-type, green B-type and red the homopolymer C. In Fig. 3a only the two components A and B are plotted, with isosurfaces \(\phi _{A} = \phi _{B} = 0.5\). From this picture we see that there is a gap at the interfaces between the A-rich and B-rich regions. This gap is largely filled with the homopolymer, up to 11\(\%\) for the case of \(\chi _{AB} = 1.2\). The plot of all three components is shown in Fig. 3b, with the isosurfaces \(\phi _{A} = \phi _{B} = 0.5\) and \(\phi _{C} = 0.1\). The structure for \(\chi _{AB} = 1.6\) is shown in Fig. 3c, with the occupation of the homopolymer at the two interfaces in the range of \(11\%\)–\(13\%\). To see how the polymers are distributed in the system, in Fig. 4 we show the volume fractions along a symmetry line of the simulation box for all three components, for the case shown in Fig. 3c at \(\chi _{AB} = 1.6\). From this figure we see that the homopolymer is present throughout the space and its concentration peaks at the interfaces between the diblocks. Isosurfaces of the three components in the system, \(\chi _{AB} = 1.2\) (a, b) and \(\chi _{AB} = 1.6\) (c). A-component (gray), B-component (green) and homopolymer C (red) Changes of the volume fraction of all three monomer components A, B and C along a symmetry line of the box, for the case of \(\chi _{AB} = 1.6\) in Fig. 3 The presence of the homopolymer in the sphere depends not only on its interaction with the diblock copolymer but also on the homopolymer chain length. To see in detail how the homopolymer chain length affects the diblock phase separation, we now increase the homopolymer chain length to \(N_{C} = 5.0\). 
We observe that in the initial structure of the sphere, where the interaction between monomers A and B (\(\chi _{AB}\)) was set at zero, the occupation of the copolymer in the sphere is almost \(100\%\) even though the interaction between the copolymer and the homopolymer is weak, at \(\chi = 1.0\). Some morphologies shown in Fig. 2 for \(N_{C} = 1.0\) are also obtained for \(N_{C} = 5.0\). However, in the latter case the same morphology is obtained at a weaker value of \(\chi _{AB}\) than in the former. Furthermore, unlike the previous case, where at \(\chi = 1.0\) the system becomes homogeneous when the interaction between A and B, \(\chi _{AB}\), is strong, in the latter case we observe phase segregation at high values of \(\chi _{AB}\). A bigger sphere It has been shown in other works that the morphology of a nanoparticle strongly depends on the size of the confinement [12, 20, 42]. To see how the size of the spherical domain affects its inner structures, we increase the diblock copolymer concentration in the simulation box from 10 to 20\(\%\). It is worth noting that by increasing the size of the diblock spherical domain, the size of the bulk homopolymer in a fixed simulation box decreases, and this could lead to a size effect of the "bulk" homopolymer that confines the diblock sphere. To make sure our chosen system size (\(28 \times 28 \times 28\)) is large enough to eliminate this size effect, we carried out calculations for different box sizes while keeping the volume of the diblock copolymer spherical domain constant. Our results show that for box sizes of \(26 \times 26 \times 26\) and larger all the results are identical. With a bigger spherical domain more morphologies are observed compared to the previous case of a smaller one. In Fig. 5 we show the morphologies as a function of the interaction parameter \(\chi _{AB}\): Fig. 5a shows component A, and Fig. 5b shows all three components A, B and C. 
As in the previous case, with \(N_{C} = 1.0\) and \(\chi = 2.0\), the microphase separation of the diblock takes place at \(\chi _{AB}\) as small as 1.2. At \(\chi _{AB} = 1.2\), a small spherical A-rich domain forms in the center of the sphere and 24 small islands form in the outer layer. Increasing \(\chi _{AB}\) to 1.3, these small islands grow, connect to each other and form a cage comprising 14 holes. From \(\chi _{AB} = 1.5\) the domains on the cage layer grow and fill all the holes to form a uniform shell. At \(\chi _{AB} = 1.8\), apart from the uniform shell, a new outer layer appears, comprising many small islands. Continuing to increase \(\chi _{AB}\), these islands connect and form stripes at \(\chi _{AB} = 2.3\). From \(\chi _{AB} = 2.5\) we have a coexistence of islands (6 islands) and discs (8 discs). The center of the sphere is now occupied by a B-rich domain, not an A-rich domain as in the cases where \(\chi _{AB}\) is smaller than 2.5. The coexistence of islands and discs remains until \(\chi _{AB} = 2.8\), where the structure becomes a new uniform shell. The morphologies in Fig. 5 were also obtained for a diblock copolymer confined in a spherical surface under different surface fields and degrees of confinement [15, 20, 23], and homopolymer volume fractions [22]. Morphology of the diblock copolymer as a function of \(\chi _{AB}\) when the concentration of the diblock is increased to \(20\%\). The homopolymer chain length is \(l_{h} = 1\), and the interaction between the copolymer and the homopolymer is \(\chi = 2.0\). a Plot of the A-component and b plot of all three components, A (gray), B (green) and C (red) As in the previous case of a smaller sphere, the presence of the homopolymer inside the sphere changes dramatically when the homopolymer chain length is increased. 
Indeed, by increasing the homopolymer chain length \(N_{C}\) to 5.0 we obtain complete macrophase separation: the copolymer totally fills the sphere and is surrounded by the homopolymer. In this case our system can be compared with systems of diblock copolymers confined by a hard [23] or soft [15, 22, 43] spherical surface. Different morphologies for homopolymer chain length \(N_{C} = 5\) and different values of \(\chi _{AB}\) are shown in Fig. 6. The same as Fig. 5 but for \(l_{h} = 5\) By using static self-consistent field theory we have studied the microphase separation of a diblock copolymer spherically surrounded by a homopolymer. When the homopolymer chain length is short, in order to have microphase separation in the diblock copolymer, the interaction between the diblock copolymer and the homopolymer needs to be strong. Because of its short chain length, the homopolymer can easily penetrate into the spaces dominated by the copolymer, reducing the interaction between segments of the diblock and preventing them from segregating. When the diblock copolymer undergoes phase segregation, a significant amount of homopolymer is observed to occupy not only the interface between the A-rich and B-rich regions but also the A-rich and B-rich regions themselves. The presence of the homopolymer in the sphere is significantly reduced when the homopolymer chain length is increased. When the homopolymer chain length is long enough, total macrophase separation between the homopolymer and the diblock copolymer is obtained. Inside the sphere, depending on the interactions between monomers A and B and between the homopolymer and the copolymer, different morphologies are obtained, such as islands, cages, stripes, uniform shells and branches. 
It is our aim that this work (the methodology and computational modelling) provide a new method of controlling the assembly of matter with sufficient certainty and precision to allow the preparation of materials and molecular assemblies with far more sophisticated and tuneable properties and functions than are accessible in materials synthesised using current traditional methods [2]. Understanding the mechanism of creating block copolymer-based nanoparticles and how to switch between different morphologies is a very important step prior to developing a new hybrid method accommodating both inorganic and organic nanoparticles. Furthermore, understanding the mechanism of phase transitions in soft matter systems could help to develop new effective techniques in food processing, which can create food with different sensory properties, product stability and visual impressions, as food can be regarded as a multi-component mixture whose components differ greatly in length scale and in physical and chemical properties [44]. By tailoring the length scales and the interactions between components, different phases such as liquids, crystals, foams and emulsions can be obtained [45, 46]. Looking to the future, in our ongoing work we are developing a method of encapsulating and releasing metallic nanoparticles using block copolymer-based nanocontainers, which can be applied as drug delivery vehicles, nanoreactors, fuel cells and photovoltaics. Hamley IW (2004) Developments in block copolymer science and technology. Wiley, Weinheim Rose J, Raithby P, Woods J, Makatsoris C, Price S, Wilson C, Jackson G, Ward M, Brammer L, Rosseinsky M, Yaliraki S, Champness N, Roberts K, Buurma N, Peacock A (2017) Directed Assembly Network—a Roadmap to Innovation Zhao B, Zhu L (2009) Mixed polymer brush-grafted particles: a new class of environmentally responsive nanostructured materials. 
Macromolecules 42(24):9369–9383 Such GK, Tjipto E, Postma A, Johnston APR, Caruso F (2007) Ultrathin, responsive polymer click capsules. Nano Lett 7(6):1706–1710 Balazs AC, Emrick T, Russell TP (2006) Nanoparticle polymer composites: where two small worlds meet. Science 314(5802):1107–1110 Warren SC, Messina LC, Slaughter LS, Kamperman M, Zhou Q, Gruner SM, DiSalvo FJ, Wiesner U (2008) Ordered mesoporous materials from metal nanoparticle block copolymer self-assembly. Science 320(5884):1748–1752 Matsen MW, Bates FS (1996) Unifying weak- and strong-segregation block copolymer theories. Macromolecules 29(4):1091–1098 Bates FS, Fredrickson GH (1999) Block copolymers: designer soft materials. Phys Today 52(2):32–38 Ly DQ, Honda T, Kawakatsu T, Zvelindovsky AV (2009) Electric field-induced transitions in perforated lamella of ABA triblock copolymer thin film. Soft Matter 5(23):4814–4822 Shi AC, Li B (2013) Self-assembly of diblock copolymers under confinement. Soft Matter 19(5):1398–1413 Vorselaars B, Kim JU, Chantawansri TL, Fredrickson GH, Matsen MW (2011) Self-consistent field theory for diblock copolymers grafted to a sphere. Soft Matter 7(11):5128–5137 Xu J, Wang K, Liang R, Yang Y, Zhou H, Xie X, Zhu J (2015) Structural transformation of diblock copolymer/homopolymer assemblies by tuning cylindrical confinement and interfacial interactions. Langmuir 31(45):12354–12361 Yabu H, Higuchi T, Jinnai H (2014) Frustrated phases: polymeric self-assemblies in a 3D confinement. Soft Matter 10(17):2919–2931 Li L, Matsunaga K, Zhu J, Higuchi T, Yabu H, Shimomura M, Jinnai H, Hayward RC, Russell TP (2010) Solvent-driven evolution of block copolymer morphology under 3D confinement. Macromolecules 43(18):7807–7812 Higuchi T, Pinna M, Zvelindovsky AV, Jinnai H, Yabu H (2016) Multipod structures of lamellae forming diblock copolymers in three-dimensional confinement spaces: experimental observation and computer simulation. 
J Polym Sci B 54:1702–1709 Higuchi T, Tajima A, Yabu H, Shimomura M (2008) Spontaneous formation of polymer nanoparticles with inner micro-phase separation structures. Soft Matter 4(6):1302–1305 Higuchi T, Tajima A, Motoyoshi K, Yabu H, Shimomura M (2009) Suprapolymer structures from nanostructured polymer particles. Angew Chem 48(28):5125–5133 Robb MJ, Connal LA, Lee BF, Lynd NA, Hawker CJ (2012) Functional block copolymer nanoparticles: toward the next generation of delivery vehicles. Polym Chem 3(6):1618–1628 Klinger D, Robb MJ, Spruell JM, Lynd NA, Hawker CJ, Connal LA (2013) Supramolecular guests in solvent driven block copolymer assembly: from internally structured nanoparticles to micelles. Polym Chem 4(19):5038–5042 Avalos E, Higuchi T, Teramoto T, Yabu H, Nishiura Y (2016) Frustrated phases under three-dimensional confinement simulated by a set of coupled Cahn–Hilliard equations. Soft Matter 12(27):5905–5914 Yan N, Zhu Y, Jiang W (2016) Self-assembly of ABC triblock copolymers under 3D soft confinement: a Monte Carlo study. Soft Matter 12(3):965–972 Yang R, Li B, Shi AC (2012) Phase behavior of binary blends of diblock copolymer/homopolymer confined in spherical nanopores. Langmuir 28(2):1569–1578 Chen P, Liang H, Shi AC (2008) Microstructures of a cylinder-forming diblock copolymer under spherical confinement. Macromolecules 41(22):8938–8943 Yu B, Li B, Jin Q, Ding D, Shi AC (2007) Self-assembly of symmetric diblock copolymers confined in spherical nanopores. Macromolecules 40(25):9133–9142 Fraaije JGEM, Sevink GJA (2003) Model for pattern formation in polymer surfactant nanodroplets. Macromolecules 36(21):7891–7893 Zhang L, Eisenberg A (1999) Crew-cut aggregates from self-assembly of blends of polystyrene-poly(acrylic acid) block copolymers and homopolystyrene in solution. J Polym Sci B 37(13):1469–1484 Uneyama T (2007) Density functional simulation of spontaneous formation of vesicles in block copolymer solutions. 
J Chem Phys 126:114902–114931 Zhang L, Eisenberg A (1996) Multiple morphologies and characteristics of crew-cut micelle-like aggregates of polystyrene-b-poly(acrylic acid) diblock copolymers in aqueous solutions. J Am Chem Soc 118(3):3168–3181 Dobrosielska K, Wakao S, Suzuki J, Noda K, Takano A, Matsushita Y (2009) Effect of homopolymer molecular weight on nanophase-separated structures of AB block copolymer/C homopolymer blends with hydrogen-bonding interactions. Macromolecules 42(18):7098–7102 Uneyama T, Doi M (2005) Calculation of the micellar structure of polymer surfactant on the basis of the density functional theory. Macromolecules 38(13):5817–5825 Uneyama T, Doi M (2005) Density functional theory for block copolymer melts and blends. Macromolecules 38(1):196–205 Wang R, Jiang Z, Xue G (2011) Excluded volume effect on the self-assembly of amphiphilic AB diblock copolymer in dilute solution. Polymer 52(10):2361–2365 Ly DQ, Honda T, Kawakatsu T, Zvelindovsky AV (2007) Kinetic pathway of gyroid-to-cylinder transition in diblock copolymer melt under an electric field. Macromolecules 40(8):2928–2935 Morita H, Kawakatsu T, Doi M, Yamaguchi D, Takenaka M, Hashimoto T (2004) Phase separated structures in a binary blend of diblock copolymers under an extensional force field: helical domain structure. J Phys Soc Japan 73:1371–1374 Kawakatsu T (2004) Statistical physics of polymers: an introduction. Springer, Berlin Matsen MW (2006) Self-consistent field theory and its applications. In: Gompper G, Schick M (eds) Soft matter. Wiley, Weinheim, p 1 Honda T, Kodama H, Roan JR, Morita H, Urashita S, Hasegawa R, Yokomizo K, Kawakatsu T, Doi M (2004) SUSHI users manual. OCTA, Nagoya, Japan (http://octa.jp) Ly DQ, Pinna M, Honda T, Kawakatsu T, Zvelindovsky AV (2013) Kinetic pathways of sphere-to-cylinder transition in diblock copolymer melt under electric field. J Chem Phys 138(7):074904 Matsen MW (1995) Phase behavior of block copolymer/homopolymer blends. 
Macromolecules 28(17):5765–5773 Semenov AN (1993) Phase equilibria in block copolymer-homopolymer mixtures. Macromolecules 26(9):2273–2281 Müller M, Schmid F (2005) Incorporating fluctuations and dynamics in self-consistent field theories for polymer blends. Adv Polym Sci 185:1 Higuchi T, Tajima A, Motoyoshi K, Yabu H, Shimomura M (2008) Frustrated phases of block copolymers in nanoparticles. Angew Chem 47(42):8044–8050 Sevink GJA, Zvelindovsky AV (2007) Mesoscopic dynamics of complex vesicle formation: kinetic versus thermodynamic factors. Mol Simul 33:405–415 Tanaka H (2012) Viscoelastic phase separation in soft matter and foods. Faraday Discuss 158:371–406 Ubbink J, Burbidge A, Mezzenga R (2008) Food structure and functionality: a soft matter perspective. Soft Matter 4(8):1569–1581 Mezzenga R (2007) Equilibrium and non-equilibrium structures in complex food systems. Food Hydrocolloids 21:674–682 DQL carried out the simulations. CM conceived the overarching research and together with DQL designed it. Both DQL and CM analysed the results and wrote the manuscript. Both authors read and approved the final manuscript. This work has been partially funded by UK Engineering and Physical Sciences Research Council (EPSRC) research Grant Number EP/K014234/2. The authors fully acknowledge and thank EPSRC for the support. DL thanks Toshihiro Kawakatsu and Takashi Honda for their valuable comments and suggestions. All the calculations were carried out using the high-performance computing facilities at Brunel University London. School of Physical Sciences and Computing, University of Central Lancashire, Preston, UK Dung Q. Ly School of Aerospace, Transportation and Manufacturing, Cranfield University, Cranfield, UK Charalampos Makatsoris Correspondence to Charalampos Makatsoris. Ly, D.Q., Makatsoris, C. Effects of the homopolymer molecular weight on a diblock copolymer in a 3D spherical confinement. BMC Chemistry 13, 24 (2019). 
https://doi.org/10.1186/s13065-019-0541-7
Printed from https://ideas.repec.org/p/hal/wpaper/halshs-00648884.html Entropy and the value of information for investors Antonio Cabrales () (Departamento de Economía - UC3M - Universidad Carlos III de Madrid [Madrid]) Olivier Gossner (PSE - Paris-Jourdan Sciences Economiques - CNRS - Centre National de la Recherche Scientifique - ENPC - École des Ponts ParisTech - EHESS - École des hautes études en sciences sociales - INRA - Institut National de la Recherche Agronomique - ENS Paris - École normale supérieure - Paris - PSL - Université Paris sciences et lettres, PSE - Paris School of Economics - ENPC - École des Ponts ParisTech - ENS Paris - École normale supérieure - Paris - PSL - Université Paris sciences et lettres - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique - EHESS - École des hautes études en sciences sociales - INRAE - Institut National de Recherche pour l'Agriculture, l'Alimentation et l'Environnement, LSE - London School of Economics and Political Science) (Department of Economics - Brown University, Institute IMDEA Software [Madrid]) Consider any investor who fears ruin when facing any set of investments that satisfy no-arbitrage. Before investing, he can purchase information about the state of nature in the form of an information structure. Given his prior, information structure $\alpha$ is more informative than information structure $\beta$ if, whenever he is willing to buy $\beta$ at some price, he is also willing to buy $\alpha$ at that price. We show that this informativeness ordering is complete and is represented by the decrease in entropy of his beliefs, regardless of his preferences, initial wealth, or investment problem. We also show that no prior-independent informativeness ordering based on similar premises exists. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2011. "Entropy and the value of information for investors," Working Papers halshs-00648884, HAL. 
Handle: RePEc:hal:wpaper:halshs-00648884 Note: View the original document on HAL open archive server: https://halshs.archives-ouvertes.fr/halshs-00648884 File URL: https://halshs.archives-ouvertes.fr/halshs-00648884/document Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2013. "Entropy and the Value of Information for Investors," American Economic Review, American Economic Association, vol. 103(1), pages 360-377, February. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2010. "Entropy and the value of information for investors," Working Papers 2010-23, Instituto Madrileño de Estudios Avanzados (IMDEA) Ciencias Sociales. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2010. "Entropy and the value of information for investors," Working Papers 2010-17, Brown University, Department of Economics. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2013. "Entropy and the Value of Information for Investors," Post-Print hal-00812682, HAL. Serrano, Roberto & Gossner, Olivier & Cabrales, Antonio, 2011. "Entropy and the value of information for investors," UC3M Working papers. Economics we1104, Universidad Carlos III de Madrid. Departamento de Economía. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2013. "Entropy and the Value of Information for Investors," PSE - Labex "OSE-Ouvrir la Science Economique" hal-00812682, HAL. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2013. "Entropy and the Value of Information for Investors," PSE-Ecole d'économie de Paris (Postprint) hal-00812682, HAL. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2011. "Entropy and the value of information for investors," PSE Working Papers halshs-00648884, HAL. Antonio Cabrales & Olivier Gossner & Roberto Serrano, 2010. "Entropy and the value of information for investors," Levine's Working Paper Archive 661465000000000355, David K. Levine. Robert J. Aumann & Roberto Serrano, 2008. 
Keywords: Informativeness; Information structures; Entropy; Decision under uncertainty; Investment; Blackwell ordering.
JEL codes: C00 (General); C43 (Index Numbers and Aggregation); D00 (Microeconomics: General); D80 (Information, Knowledge, and Uncertainty: General); G00 (Financial Economics: General); G11 (Portfolio Choice; Investment Decisions).
Spatially-implicit modelling of disease-behaviour interactions in the context of non-pharmaceutical interventions
Mathematical Biosciences & Engineering, April 2018, 15(2): 461-483. doi: 10.3934/mbe.2018021
Notice Ringa (Botswana International University of Science and Technology, Department of Mathematics and Statistical Sciences, Private Bag 16, Palapye, Botswana) and Chris T. Bauch (University of Waterloo, Department of Applied Mathematics, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada)
* Corresponding author: N. Ringa
Received October 20, 2016; accepted May 05, 2017; published June 2017.
Abstract: Pair approximation models have been used to study the spread of infectious diseases in spatially distributed host populations, and to explore disease control strategies such as vaccination and case isolation. Here we introduce a pair approximation model of individual uptake of non-pharmaceutical interventions (NPIs) for an acute self-limiting infection, where susceptible individuals can learn the NPIs either from other susceptible individuals who are already practicing NPIs ("social learning"), or their uptake of NPIs can be stimulated by being neighbours of an infectious person ("exposure learning"). NPIs include individual measures such as hand-washing and respiratory etiquette. Individuals can also drop the habit of using NPIs at a certain rate. We derive a spatially defined expression of the basic reproduction number $R_0$ and we also numerically simulate the model equations. We find that exposure learning is generally more efficient than social learning, since exposure learning generates NPI uptake in the individuals at immediate risk of infection. However, if social learning is pre-emptive, beginning a sufficient amount of time before the epidemic, then it can be more effective than exposure learning.
Interestingly, varying the initial number of individuals practicing NPIs does not significantly impact the epidemic final size. Also, if initial source infections are surrounded by protective individuals, there are parameter regimes where increasing the initial number of source infections actually decreases the infection peak (instead of increasing it) and makes it occur sooner. The peak prevalence increases with the rate at which individuals drop the habit of using NPIs, but the response of peak prevalence to changes in the forgetting rate is qualitatively different for the two forms of learning. The pair approximation methodology developed here illustrates how analytical approaches for studying interactions between social processes and disease dynamics in a spatially structured population should be further pursued.
Keywords: Pair approximation, network model, social distancing, non-pharmaceutical interventions, transmission model.
Mathematics Subject Classification: Mathematical biology (92Bxx).
Citation: Notice Ringa, Chris T. Bauch. Spatially-implicit modelling of disease-behaviour interactions in the context of non-pharmaceutical interventions. Mathematical Biosciences & Engineering, 2018, 15 (2): 461-483. doi: 10.3934/mbe.2018021
Figure 1.
Typical network distributions of susceptible contacts, $S$, neighbors who practice social distancing techniques, $S_p$ (as well as the respective calculations of the basic reproduction number) around the initial infection source, where all other members of the host population are fully susceptible (i.e. state $S$). The population size is $N=40000$, each individual has $n=4$ neighbors and model parameters are $\tau=0.75$ $day^{-1}$, $\tau_p=0.1$ $day^{-1}$, $\sigma=0.25$ $day^{-1}$, $\xi=\rho=0.5$ $day^{-1}$ and $\kappa=0.01$ $day^{-1}$ Figure 2. The basic reproduction number as a function of social learning from protective contacts at a rate $\xi$, and from infectious contacts at a rate $\rho$, where the transmission rate to protective individuals is $\tau_p=0.1$ $day^{-1}$ (a) and $\tau_p=0.5$ $day^{-1}$ (b). In all these plots $N = 40000,n=4$, $\tau=0.75$ $day^{-1}$, $\sigma=0.25$ $day^{-1}$, $C_{S_pS_p}=0,C_{S_pS}=3/4$, $\kappa= 0$ $day^{-1}$ and $s_p=1/N$ Figure 3. Infection peak versus initial distribution of single infected individuals with 4 state $S_p$ neighbors (a, d, g, j), time series for susceptible individuals who protect (b, e, h, k) and time series for infectious individuals (c, f, i, l), varying the number of 1 infected node plus 4 $S_p$ neighbors at the beginning of the outbreak (the rest of the population is fully susceptible). In (a to f) $\xi=0.25$ $day^{-1}$, $\rho=0$ $day^{-1}$; in (g to l) $\xi=0$ $day^{-1}$, $\rho=0.25$ $day^{-1}$; in (a, b, c and g, h, i) $\tau_p=0.6$ $day^{-1}$; in (d, e, f and j, k, l) $\tau_p=0.1$ $day^{-1}$. Model parameters common to all graphs are $\tau=0.8$ $day^{-1}$, $\sigma=0.25$ $day^{-1}$ and $\kappa=0$ $day^{-1}$ Figure 4. 
Infection peak versus rate of disease transmission to protective individuals, $\tau_p$, and the initial distribution of single infected individuals with 4 state $S_p$ neighbors (and all other members of the host population are fully susceptible, $S$), where $\xi=0.25$ $day^{-1}$, $\rho=0$ $day^{-1}$ (a) and $\xi=0$ $day^{-1}$, $\rho=0.25$ $day^{-1}$ (b). Other model parameters are $\tau=0.8$ $day^{-1}$, $\sigma=0.25$ $day^{-1}$ and $\kappa=0$ $day^{-1}$ Figure 5. Cumulative infections as a function of social learning from both infectious and state $S_p$ neighbors at rates $\rho$ and $\xi$, respectively, where the initial conditions are 1 infected node and 1 state $S_p$ neighbor while the rest of the population is fully susceptible (i.e. state $S$), and $\tau_p=0.1$ $day^{-1}$ (a), $\tau_p=0.2$ $day^{-1}$ (b), $\tau_p=0.3$ $day^{-1}$ (c). Other model parameters are $\tau=0.8$ $day^{-1}$, $\sigma=0.25$ $day^{-1}$ and $\kappa=0$ $day^{-1}$. Figure 6. Cumulative infections as a function of social learning from both infectious and state $S_p$ neighbors at rates $\rho$ and $\xi$, respectively, where the initial conditions are 1 infected node and 1 state $S_p$ neighbor while the rest of the population is fully susceptible (i.e. state $S$). Model parameters are $\tau=0.8$ $day^{-1}$, $\sigma=0.25$ $day^{-1}$ and $\kappa=0$ $day^{-1}$ Figure 7. Infection peak versus the rate at which protective susceptible individuals forget, $\kappa$, varying regimes for social contagion parameters $\xi$ and $\rho$. Initial conditions are 1 infected node and 2 state $S_p$ neighbors while the rest of the population is fully susceptible (i.e state $S$). Other model parameters are $\tau=0.8$ $day^{-1}$, $\tau_p=0.3$ $day^{-1}$ and $\sigma=0.25$ $day^{-1}$ Figure 8. 
Cumulative infections as a function of the initial number of state $S_p$ individuals and the time at which the infection is introduced, varying $\xi$ and $\rho$, for the scenario of exposure learning only (dark grey surface) and social learning only (light grey surface). Other model parameters are $\tau=0.8$ $day^{-1}$, $\tau_p=0.001$ $day^{-1}$, $\sigma=0.25$ $day^{-1}$ and $\kappa=0$ $day^{-1}$
Table 1. Summary of expressions of the basic reproduction number $R_0$ developed in this paper:
(a) General expression of $R_0$: Equation (10).
(b) Expression of $R_0$ used in the simulation results in this manuscript, obtained by assuming that initially the proportion of susceptible individuals who practice NPIs is very small, $s_p\approx O(1/N)$: Equation (17).
(c) Simplification of (b) under further assumptions: adoption of NPIs is through social learning only (i.e. $\xi>0$, $\rho=0$); once adopted, NPIs are practised consistently (i.e. $\kappa=0$); at the initial stage there is 1 state $I$ individual with 1 state $S_p$ contact who has 1 state $S_p$ contact, and the rest of the population is of state $S$: Equation (18).
(d) Simplification of (c) under the further assumption of high-efficacy NPIs (i.e. $\tau_p\approx 0$): Equation (19).
(e) Simplification of (d) obtained by cancelling out insignificant terms for the parameter regime $N=40000$; initially $s_p=2/N$; $n=4$; $\tau=1$; $\tau_p=0.0025$; $\sigma=0.25$; $\xi=0.25$: Equation (20).
(f) Simplification of (b) under further assumptions: adoption of NPIs is through exposure learning only (i.e. $\xi=0$, $\rho>0$); other conditions as in (c): Equation (21).
(g) Simplification of (f) under the further assumption of high-efficacy NPIs (i.e. $\tau_p\approx 0$) acquired through exposure learning only: Equation (22).
(h) Simplification of (g) obtained by cancelling out insignificant terms for the same parameter regime as in (e): Equation (23).
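The roles of the model's rates can be conveyed with a deliberately crude, non-spatial caricature: a mean-field ODE with compartments S (susceptible), S_p (susceptible and practicing NPIs), I and R, integrated by Euler steps. This is not the paper's pair approximation, since all pair correlations are ignored; the parameter names tau, tau_p, sigma, xi, rho and kappa are reused only to mirror the roles described in the abstract, and the numbers are illustrative.

```python
# Mean-field caricature of the S / S_p / I / R dynamics with NPI uptake.
# NOT the paper's pair approximation: pair correlations are ignored.

def simulate(tau=0.8, tau_p=0.1, sigma=0.25, xi=0.0, rho=0.0, kappa=0.0,
             i0=1e-4, sp0=1e-4, t_max=200.0, dt=0.01):
    """Euler-integrate the caricature model; return the final epidemic size."""
    s, sp, i, r = 1.0 - i0 - sp0, sp0, i0, 0.0
    for _ in range(int(t_max / dt)):
        uptake = xi * s * sp + rho * s * i          # social + exposure learning
        ds = -tau * s * i - uptake + kappa * sp     # infection, uptake, forgetting
        dsp = uptake - tau_p * sp * i - kappa * sp  # protected, still infectable
        di = tau * s * i + tau_p * sp * i - sigma * i
        dr = sigma * i
        s, sp, i, r = s + dt * ds, sp + dt * dsp, i + dt * di, r + dt * dr
    return r

no_uptake = simulate()          # nobody ever adopts NPIs
exposure = simulate(rho=0.25)   # exposure learning only, loosely as in Figure 3
print(exposure < no_uptake)     # True: uptake shrinks the final size
```

Even this caricature reproduces the qualitative point that exposure learning (rho > 0) lowers the final epidemic size relative to no NPI uptake, because protection accumulates exactly when infectious contacts are present.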
why is the boiling point of hydrogen sulfide low More information on the use of this device is given in the history portion of our gas chemistry web site. Because of this, comparatively weak intermolecular forces exist for H2S and the melting and boiling points are much lower than they are in water. Also, they are the lighest of all molecules, so an equivalent amount of energy would raise their temperature more than it would the molecules of a heavier gas such as CO2. Unlike ice, which contains an open lattice of water molecules hydrogen-bonded to each other, solid hydrogen sulphide contains H 2 S molecules in close packing. Hydrogen sulfide and water boil at -60.7 oC and +100.0 oC, respectively. It only contains three atoms, why is hydrogen sulphide a bent molecule? - 17059792 1. Sulfur is not nearly as electronegative as oxygen so that hydrogen sulfide is not nearly as polar as water. Hydrogen is a chemical element with atomic number 1 which means there are 1 protons and 1 electrons in the atomic structure.The chemical symbol for Hydrogen is H. With a standard atomic weight of circa 1.008, hydrogen is the lightest element on the periodic table. It contains only three atoms which means it is small, and very strong covalent bonds connect these. A greater dipole, I thought, would result in stronger intermolecular forces, and, thereby, a greater boiling point? 5 points Why is the boiling point of hydrogen sulphide low ? The boiling point of water (H2O) is higher than the boiling point of hydrogen sulfide (H2S)Is it because the water molecule is less polar, more covalent, ionic, or more polar? Hydrogen sulfide and water boil at -60.7 oC and +100.0 oC , respectively. Log in. A covalent bond is a shared pair of electrons. 
…, wahan hogi jo aa ske most welcome jo na aa ske wo bhaad mein jaaye ..xD​, calculate the no.pf atoms present in 40 g of sulphur....given atomic mass of S is 32, At equilibrium the amounts of N2O4 and N2 in a 3 liters flask are 7.64g and 1.56g 6 Answers. Liquid hydrogen (LH2 or LH 2) is the liquid state of the element hydrogen.Hydrogen is found naturally in the molecular H 2 form.. To exist as a liquid, H 2 must be cooled below its critical point of 33 K. However, for it to be in a fully liquid state at atmospheric pressure, H 2 needs to be cooled to 20.28 K (−252.87 °C; −423.17 °F). Its monatomic form (H) is the most abundant chemical substance in the Universe, constituting roughly 75% of all … Sulfur dioxide is a simple molecule. Sulfur is not nearly as electronegative as oxygen so that hydrogen sulfide is not nearly as polar as water. S + 1.5 O. This doesn't make sense to me because hydrogen chloride has a greater dipole than hydrogen sulfide doesn't it? Explain why sulfur dioxide has a low boiling point. Why is the boiling point of hydrogen sulphide low ? the interactions between particles, between molecules. Moreover, to form a hydrogen bond, hydrogen must bond with Oxygen , Fluorine, or Nitrogen. This process accounts for much of the native sulfur found in nature. (Chemical Discovery and Invention in the Twentieth Century, Sir William Tildon, 1917). Covalent bonding forms molecules. This is 18% greater than that of air. Industrial sodium sulfide due to impurities its color is pink, brown red, khaki. The combustion of Hydrogen sulfide [2] follow similar principles . E. Industrial Production Commercially hydrogen sulfide is obtained from "sour gas" natural gas wells. 7 degrees Celsius is the boiling point of the hydrogen sulphide, And the melting point of the hydrogen sulphide is -85. Secondary School. Substances with small molecules have low melting and boiling points, and do not conduct electricity. 
To put this into perspective, 1 mL of the gas distributed evenly in a 100-seat lecture hall is about 20 ppb.

Boiling point definition: in a liquid, the molecules are packed closely together, with many random movements possible as molecules slip past each other. Boiling and melting points reflect intermolecular properties, that is, the interactions between molecules.

Why does water boil so much higher than H2S? Because it is considerably more polar, and because water has strong hydrogen bonds between its molecules: the oppositely charged areas among the molecules attract each other, hence it takes more energy to pull them apart, giving a higher boiling point. Remember that water is the standard example of hydrogen bonding: hydrogen bonds link hydrogen with three of the most electronegative elements (fluorine, oxygen, and nitrogen), and their presence means additional intermolecular interactions and thus higher boiling and melting temperatures. Notably, H2S has a greater molecular mass than H2O yet boils at about -60.3 °C rather than 100 °C, which underscores that it is hydrogen bonding rather than molecular mass that controls the comparison. Hydrogen sulfide does show a slight hydrogen-bond-like interaction: in the H2S dimer, the angle between the S-H bond and the acceptor S is about 175 degrees, and the hydrogen-bond distance is found to be about 2.778 angstroms, less than the sum of the van der Waals radii of the H and S atoms. But sulfur is much less electronegative than oxygen, so this interaction is far weaker than in water.

You should look up the boiling points of H2O, H2S, H2Se, and H2Te. The hydride of tellurium, H2Te (hydrogen telluride), has a boiling point of -4 °C; the boiling points then fall as the hydrides get lighter, down to H2S, before jumping to +100 °C for H2O. Can you figure out why this is so? (Water is the anomaly, because of its hydrogen bonding.)

Why is the boiling point of hydrogen (H2) itself low? Hydrogen molecules are non-polar, so they do not attract one another strongly: there is no hydrogen bonding (for that, the hydrogen atoms would need to be bonded to N, O, or F) and no dipole-dipole forces, since the H-H bond is totally non-polar. A related comparison: chlorine has a boiling point of 238 K, while hydrogen chloride has a boiling point of 188 K. (For contrast, the C-O bonds in dimethyl ether are polar, with the oxygen partially negative and the adjacent C atoms partially positive.)

The figures below show the boiling and melting points for organic sulfur compounds (sulfides, disulfides, thiols or mercaptans, and thiophenes), together with the molecular structures of the different compounds. The accompanying table shows the same numbers together with the molecular weight and density, as well as the numbers of carbon, hydrogen, and sulfur atoms in the molecules.

C. History: Hydrogen sulfide has been known since early times, and its chemistry has been studied since the 1600s.

D. Natural Abundance: Natural gas contains up to several percent H2S(g); such wells are called "sour gas" wells because of their offensive stench.

G. Gas Density of H2S: The density of hydrogen sulfide is 1.393 g/L at 25 °C and 1 atm. (For industrial sodium sulfide, by contrast, the specific gravity, melting point, and boiling point all depend on the influence of impurities.)
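The quoted density can be sanity-checked against the ideal gas law. This is a quick sketch, assuming ideal behavior and the standard molar masses (34.08 g/mol for H2S, 28.97 g/mol for dry air, values not taken from the text); it reproduces both the 1.393 g/L figure and the roughly 18% excess over air mentioned earlier:

```python
# Ideal-gas density check for H2S (sketch; assumes ideal behavior).
R = 8.314          # J/(mol*K), gas constant
T = 298.15         # K (25 °C)
P = 101325.0       # Pa (1 atm)
M_H2S = 34.08e-3   # kg/mol, molar mass of H2S
M_AIR = 28.97e-3   # kg/mol, mean molar mass of dry air

def gas_density(molar_mass_kg):
    """Density in g/L (= kg/m^3) from the ideal gas law: rho = P*M/(R*T)."""
    return P * molar_mass_kg / (R * T)

rho_h2s = gas_density(M_H2S)
rho_air = gas_density(M_AIR)
print(round(rho_h2s, 3))                     # 1.393 g/L
print(round(100 * (rho_h2s / rho_air - 1)))  # ~18 % denser than air
```

The air comparison reduces to the ratio of molar masses, since P and T cancel.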
A. Appearance: Hydrogen sulfide is a colorless gas with an offensive stench, said to smell like rotten eggs. It is poisonous, corrosive, and flammable, and it is considered a broad-spectrum poison, meaning that it can poison several different systems in the body, although the nervous system is most affected. The gas can be detected at a level of 2 parts per billion.

B. Physical Properties of H2S: Hydrogen sulfide has a structure similar to that of water. In water, however, the hydrogens are bound to oxygen, a strongly electronegative element; the oxygen atom polarizes electron density away from the bond toward itself. The ability to form large numbers and networks of hydrogen bonds is responsible for many of the unique properties of water, including its relatively high melting point, boiling point, heat capacity, and viscosity, and its low vapor pressure. As a liquid is heated, its temperature increases, but it takes more kinetic energy, i.e., a higher temperature, to break the hydrogen bonding between water molecules and allow them to escape as steam. Moving up the group of hydrides, H2S boils at about -62 °C; the next hydride up is H2O, water, at +100 °C.

When rationalising boiling-point differences, the first consideration is always the strength of the intermolecular forces between the molecules in the liquid. A fair question, then: why does hydrogen sulfide have a higher boiling point than hydrogen chloride? Hydrogen chloride has dipole-dipole forces and the greater dipole, so one would expect it to have greater intermolecular forces and thus the higher boiling point; yet HCl boils lower than H2S. Is this an anomaly? (A common resolution, stated here for completeness: sulfur's larger, more polarizable electron cloud gives H2S stronger dispersion forces, which outweigh HCl's larger dipole.)

The combustion of hydrogen sulfide follows principles similar to those of methane: a chain-reaction mechanism, with thermal decomposition of the fuel as the radical-initiation reaction. The overall reaction is H2S + 1.5 O2 -> SO2 + H2O (with Sx and SO also appearing, depending on stoichiometry).

F. Industrial Uses: Hydrogen sulfide has few important commercial uses. However, it is used to produce sulfur, which is one of the most commercially important elements.

H. Gas Solubility of H2S: Hydrogen sulfide dissolves in water to make a solution that is weakly acidic. At 0 °C, 437 mL of H2S(g) will dissolve in 100 mL of H2O, producing a solution that is about 0.2 M; the solution process, however, is fairly slow. (Sodium sulphide behaves similarly in air: it hydrolyzes, undergoing carbonation and metamorphism, constantly releasing hydrogen sulfide gas.)

The device shown at right was one of the earliest gas generators; it would not be familiar to chemists who remember using the Kipp generator in the chemistry lab. In one early installation the gas was stored in a 500-gallon tank! More information on the use of this device is given in the history portion of our gas chemistry web site.
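The 0.2 M solubility figure can be checked with the molar volume of an ideal gas at 0 °C and 1 atm (22.414 L/mol); a minimal sketch, assuming ideal behavior:

```python
# Sanity check: molarity of a saturated H2S solution at 0 °C (sketch).
V_GAS = 0.437          # L of H2S(g) dissolved (437 mL at 0 °C, 1 atm)
V_SOLN = 0.100         # L of water
MOLAR_VOLUME = 22.414  # L/mol for an ideal gas at 0 °C and 1 atm

moles = V_GAS / MOLAR_VOLUME
molarity = moles / V_SOLN
print(round(molarity, 2))   # 0.19, i.e. about 0.2 M as quoted
```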
Explaining the boiling points: hydrogen bonds require a lot of energy before they will break. The small, electronegative oxygen atom in a water molecule is strongly attracted to the hydrogen atoms in neighboring water molecules. In contrast to water, HF and NH3 can form, on average, only two hydrogen bonds per molecule.

Hydrogen sulfide is the chemical compound with the formula H2S, a colorless chalcogen-hydride gas with the characteristic foul odor of rotten eggs. It is often produced from the microbial breakdown of organic matter in the absence of oxygen gas, such as in swamps and sewers, and volcanoes also discharge hydrogen sulfide. (In the hydride series, H2Se, hydrogen selenide, has a boiling point of -42 °C.)

Hydrogen sulfide has been used for well over a century as a method of qualitative analysis of metal ions; in fact, the Chemistry Building at the University of Illinois in 1915 had a built-in supply of hydrogen sulfide to the various labs, i.e., H2S "on tap". In the 19th century, Petrus Johannes Kipp, a Dutch pharmacist, invented a convenient device for the generation of a variety of gases in which a liquid and a solid were the reagents; the Kipp generator was especially useful for the generation of hydrogen sulfide and hydrogen.

About 25% of all sulfur is obtained from natural gas and crude oil by conversion of one third of the H2S to SO2, followed by the reaction between H2S and SO2:

2 H2S(g) + 3 O2(g) -> 2 SO2(g) + 2 H2O(g)
16 H2S(g) + 8 SO2(g) -> 3 S8(g) + 16 H2O(g)
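As a quick consistency check on the two sulfur-recovery equations above, one can verify that each side carries the same atom counts; a sketch, with the formulas encoded by hand:

```python
# Atom-balance check for the two sulfur-recovery reactions (sketch).
from collections import Counter

def count_atoms(side):
    """side: list of (coefficient, {element: count}) pairs; returns totals."""
    total = Counter()
    for coeff, formula in side:
        for elem, n in formula.items():
            total[elem] += coeff * n
    return total

H2S = {"H": 2, "S": 1}
O2  = {"O": 2}
SO2 = {"S": 1, "O": 2}
H2O = {"H": 2, "O": 1}
S8  = {"S": 8}

# 2 H2S + 3 O2 -> 2 SO2 + 2 H2O
assert count_atoms([(2, H2S), (3, O2)]) == count_atoms([(2, SO2), (2, H2O)])
# 16 H2S + 8 SO2 -> 3 S8 + 16 H2O
assert count_atoms([(16, H2S), (8, SO2)]) == count_atoms([(3, S8), (16, H2O)])
print("both reactions balance")
```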
What is the difference between an index and a discrete logarithm?

1. What is the difference between an index and a discrete logarithm?
2. What are the principal elements of a public-key cryptosystem?
3. What are the roles of the public and private key?
4. What are three broad categories of applications of public-key cryptosystems?
5. What requirements must a public-key cryptosystem fulfill to be a secure algorithm?

The discrete logarithm problem forms the basis of numerous cryptographic systems; the most effective attack on the discrete logarithm problem is via index calculus.

First let's consider the logarithm in $\mathbb{R}$. You know that if we have $e^x = y$ then $x = \ln y$; the Napierian logarithm returns values in $\mathbb{R}$. You can have the same thing with another base: for example, $2^3 = 8$ and $\log_2 8 = 3$. The simplified idea of the discrete logarithm is to return only integers (elements of $\mathbb{Z}$).

In mathematics, for given real numbers $a$ and $b$, the logarithm $\log_b a$ is a number $x$ such that $b^x = a$. Analogously, in any group $G$, powers $b^k$ can be defined for all integers $k$, and the discrete logarithm $\log_b a$ is an integer $k$ such that $b^k = a$. In number theory, the more commonly used term is index: we can write $x = \operatorname{ind}_r a \pmod{m}$ for $r^x \equiv a \pmod{m}$ if $r$ is a primitive root of $m$ and $\gcd(a, m) = 1$; the number $x$ is then called the discrete logarithm of $a$ with respect to the base $r$ modulo $m$. Discrete logarithms are quickly computable in a few special cases, but no efficient method is known for computing them in general.
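In a finite group, the definition can be made concrete by exhaustive search. A minimal sketch (the modulus 19 and base 2 are chosen here purely for illustration; 2 is a primitive root mod 19, so every unit has an index base 2):

```python
# Brute-force discrete logarithm (index) modulo a small prime (sketch).
def discrete_log(base, target, modulus):
    """Smallest x >= 0 with base**x ≡ target (mod modulus), or None."""
    value = 1
    for x in range(modulus - 1):        # element orders divide modulus-1
        if value == target % modulus:
            return x
        value = (value * base) % modulus
    return None

x = discrete_log(2, 5, 19)
print(x)                  # 16, since 2**16 % 19 == 5
assert pow(2, x, 19) == 5
```

For cryptographic group sizes this linear scan is hopeless, which is exactly why the problem is usable as a hardness assumption.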
The term "discrete logarithm" is most commonly used in cryptography, although the term "generalized multiplicative order" is sometimes used as well (Schneier 1996, p. 501). In number theory, the term "index" is generally used instead (Gauss 1801).

In this paper we propose a new methodology for the pre-computation step of the Index Calculus Method (ICM) to solve the Discrete Logarithm Problem (DLP). Let a group $(G, *)$ consist of a set $G$ and a binary operation $*$. The order of an element $a$ of a finite group $G$ is defined as the smallest value $t$ such that $a^t = a * a * \cdots * a$ ($t$ factors) $= 1$.

Two unrelated senses of "index" are worth separating out. In finance, logarithmic price scales are better than linear price scales at showing less severe price increases or decreases; they can help you visualize how far the price must move to reach a buy or sell target. In databases, on most if not all platforms the primary key will have an index created on it, but an index on its own does not define uniqueness: an index is used to find rows in a table more quickly based on the values that are part of the index, and when you create an index you are creating a physical object which is saved to disk (consider a table which holds employees as an example). There is also a clear difference between a unique constraint and a unique index: a unique constraint defines what combination of columns has to be unique, while a unique index is just a way of making sure that condition always holds; it is even possible to have a non-unique index supporting a deferrable unique constraint, which only has to be valid at commit time.

E. Describe in general terms an efficient procedure (step by step) for picking a prime number.

DISCRETE/CATEGORIZED VARIABLE: a discrete variable can take only a specific value from the set of all possible values; in other words, it is a counted value. Example: the number of students in a university. Say a university has 75,123 students enrolled; that variable is discrete, not continuous. Discrete data is graphically displayed by a bar graph, and discrete data may also be ordinal or nominal: when the values fit into one of many categories and there is an order or rank to the values, we have ordinal discrete data, for example the first, second, and third person in a competition.

As a noun more generally, an index is an alphabetical listing of items and their location: the index of a book lists words or expressions and the pages of the book upon which they are to be found. It can also mean the index finger (forefinger) or, in printing, a symbol resembling a pointing hand, used to direct particular attention to a note or paragraph.

In this note we give a detailed analysis of the index calculus for elliptic-curve discrete logarithms, amplifying and extending Miller's remarks. Our conclusions fully support his contention that the natural generalization of the index calculus to the elliptic curve discrete logarithm problem yields an algorithm which is less efficient than a brute-force search algorithm. From "Elliptic Curve Discrete Logarithms and the Index Calculus": (2) given an elliptic curve $E/\mathbb{Q}$, a large prime $p$, and a point $S \in E(\mathbb{F}_p)$ in the image of the reduction map $E(\mathbb{Q}) \to E(\mathbb{F}_p)$, it is difficult to lift $S$ to a point of $E(\mathbb{Q})$. Miller [23] devotes three paragraphs to giving some rough heuristic reasons to justify these assertions. This lack of an index calculus for the ECDL problem is often cited as a reason to prefer elliptic-curve groups in cryptography. Discrete attributes, by contrast with all of the above, come from a finite or countably infinite set (i.e., the integers).
Another way of looking at it is that continuous attributes can have infinitesimally small differences between one value and the next, while discrete attributes always have some limit on the difference between one value and the next.

First difference of LOG = percentage change: when used in conjunction with differencing, logging converts absolute differences into relative (i.e., percentage) differences. Thus the series DIFF(LOG(Y)) represents the percentage change in Y from period to period. Strictly speaking, the percentage change in Y at period t is defined as (Y(t)-Y(t-1))/Y(t-1), which is only approximately equal to LOG(Y(t))-LOG(Y(t-1)), though the approximation is very good for small changes.

Discrete compounding, analogously, explicitly defines the number of and the distance between compounding periods; for example, an interest that compounds on the first day of every month is discrete.

What is the difference between discrete and continuous data?
• Discrete data can take at most a countable number of values, whereas continuous data can take any number of values.
• Discrete data usually occurs when data is collected by counting, but continuous data usually occurs when data is collected by taking measurements.
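The DIFF(LOG(Y)) approximation is easy to see numerically. A small sketch with a made-up series (the values are illustrative only, not from the text):

```python
# Log-differences approximate percentage changes (sketch).
import math

y = [100.0, 102.0, 101.0, 105.0]   # arbitrary series for illustration

pct  = [(y[t] - y[t-1]) / y[t-1] for t in range(1, len(y))]
ldif = [math.log(y[t]) - math.log(y[t-1]) for t in range(1, len(y))]

for p, d in zip(pct, ldif):
    print(f"pct={p:+.4f}  diff(log)={d:+.4f}")
# The two columns agree closely because every period-to-period change is small.
```

The discrepancy grows like the square of the change, so for moves of a few percent the two series are practically interchangeable.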
The materials used to create these products are the same from the first job to the next, and the finished product can be disassembled into the original raw materials. Automotive. Electronics. Discrete time views values of variables as occurring at distinct, separate points in time, or equivalently as being unchanged throughout each non-zero region of time (time period)—that is, time is viewed as a discrete variable. Thus a non-time variable jumps from one value to another as time moves from one time period to the next. This view of time corresponds to a digital clock that gives a fixed reading of 10:37 for a while, and then jumps to a new fixed reading of 10:38. In order to describe the different distribution categories and to understand the differences among the categories, it is helpful to work through a simple example. In this example, a distribution is created for the values in a data set. A data set is a finite collection of related values. The values making up the set are individually distinct or discrete values. Figure 1 presents a set of 30. A discrete Gompertz model and model selection between the Gompertz and logistic models are proposed. The proposed method utilizes the difference between the regression equations for the proposed and the discrete logistic models. The difference is whether the log of both sides is taken or not. The proposed discrete model has higher goodness-of. there is no difference between the DFT and the Discrete Fourier Series. none whatsoever. $\endgroup$ - robert bristow-johnson Nov 1 '14 at 15:36 $\begingroup$ To help me understand what's being said here, I have a question regarding the output of the operation you call a Discrete Fourier series Logarithms can be used to solve equations such as 2 x = 3, for x. In senior mathematics, competency in manipulating indices is essential, since they are used extensively in both differential and integral calculus. 
Thus, to differentiate or integrate a function such as , it is first necessary to convert it to index form Difference between an index and a discrete logarithm Assignment Help Data Structure & Algorithms . Reference no: EM13858429 . A. Find the prime factorization of 7007. Also describe and show how you find it in step by step. B. Please give the definition of Euler's Totient function correctly and clearly as well as concisely. C. Determine the value ø (41) and ø (231). (Note: ø (n) is Euler's. Discrete data arise from observations that can only take certain numerical values, usually counts such as number of children or number of patients attending a clinic in a year. Ordered categorical data are sometimes treated as discrete data, this is wrong. For example, using the Registrar General's classification of social class, it would be wrong to say that class I is five times the socio. Indexes and scales are important and useful tools in social science research. They have both similarities and differences among them. An index is a way of compiling one score from a variety of questions or statements that represents a belief, feeling, or attitude Thus the difference between a person of 35 and a person 38 is the same as the difference between people who are 12 and 15. A person can also have an age of zero. Ratio data can be multiplied and divided because not only is the difference between 1 and 2 the same as between 3 and 4, but also that 4 is twice as much as 2. Interval and ratio data measure quantities and hence are quantitative. Discrete logarithm - Wikipedi Introduction to logarithms: Logarithms are one of the most important mathematical tools in the toolkit of statistical modeling, so you need to be very familiar with their properties and uses. 
A logarithm function is defined with respect to a base, which is a positive number: if b denotes the base number, then the base-b logarithm of X is, by definition, the number Y such that b Y = X Withdrew £2800, but only £2000 shows as withdrawn on online banking; what are my obligations? Storing hydrofluoric acid before the inventi.. Two types of potentiometers with different tracks are available. These are Linear (Lin) or Logarithmic (Log) tracks. With linear potentiometers, the resistance between one end of the track and the wiper varies at a constant rate as the slider is moved along the track. In logarithmic types, the change in resistance is much less at one end of the. As nouns the difference between index and ratio is that index is an alphabetical listing of items and their location while ratio is a number representing a comparison between two things. As a verb index is to arrange an index for something, especially a long text. Other Comparisons: What's the difference? Aberrational vs Indexphp. Abjuration vs Indexphp. Acceleration vs Indexphp. Exoneration. So the best known algorithm for discrete logarithms in F q ∗ when q is a random-looking prime is the general number field sieve (GNFS), which has a heuristic complexity of L q [ 1 / 3, 64 / 9 3], where L q [ α, c] is a short-hand notation for exp. ⁡. ( ( c + o ( 1)) ( log. ⁡. q) α ( log e the difference between them. ITtoolbox has a good article entitled Difference Between Discrete and Flow Manufacturing which is wort Intro to Discrete-Time Survival Analysis in R Qixiang Fang and Rens van de Schoot Last modified: date: 14 October 2019 This tutorial provides the reader with a hands-on introduction to discrete-time survival analysis in R. Specifically, the tutorial first introduces the basic idea underlying discrete-time.. 
One of this site's users has a good blog post explaining the basic mathematical differences between linear and log returns, so that might be a place to start if that's what you're interested in. - John Bensin Sep 26 '13 at 16:39. The question doesn't tell nothing about comparison between return and log-return. I am interested to know what the actual value really says, and if/how it's. How to pick between two ETFs that track the same index - and why cheaper doesn't always mean better. By Myron Jobson For Thisismoney.co.uk. Published: 03:26 EDT, 1 February 2018 | Updated: 03:26. Discrete-Time Integrator. Perform discrete-time integration of a signal. Library. Discrete. Description . The Discrete-Time Integrator block can be used in place of the Integrator block to create a purely discrete system. The Discrete-Time Integrator block allows you to. Define initial conditions on the block dialog box or as input to the block. Output the block state. Define upper and lower. $\begingroup$ Discrete logarithm (as well as integer factorization) have polynomial-time algorithms for quantum computers (of course we don't yet have quantum computers that can run these algorithms). We do not have polynomial-time algorithms for quantum computers to solve problems that are known to be NP-complete. This suggests strongly that discrete logarithm and integer factorization are. For example, the difference between the two income levels less than 50K and 50K-100K does not have the same meaning as the difference between the two income levels 50K-100K and over 100K. Make more informed and accurate analysis choices with Prism. Start your free Prism trial . Interval. An interval scale is one where there is order and the difference between two values. A primary difference between discrete-time and continuous-time models is that the latter take into account the exact time interval between measurements while the former do not—discrete-time. 
Discrete Logarithm -- from Wolfram MathWorl Difference Equation. The difference equation is a formula for computing an output sample at time based on past and present input samples and past output samples in the time domain. 6.1 We may write the general, causal, LTI difference equation as follows: specifies a digital filtering operation, and the coefficient sets and fully characterize. In this article, we'll go through: 1. What a cumulative return is and how to calculate it. 2. What the annualized return is, why it comes in handy, and how to calculate it Values that are assigned to the cells of a surface can be represented as either discrete or continuous data. A discrete variable is a number that can be counted. If a variable can assume all values in the interval between two given values, then the variable is continuous. Learn more about how features and surfaces can be represented as either discrete or continuous in ArcGIS . Discrete vs Continuous Data . Data is the most salient entity in statistics as it is necessarily the study of the collection, organization, analysis, and interpretation of data. The numerical data used in statistics fall in to two main categories. They are discrete data and continuous data. What is. From discrete dynamical systems to continuous dynamical systems. More information about video. One basic type of dynamical system is a discrete dynamical system, where the state variables evolve in discrete time steps. We used discrete dynamical systems to model population growth, from simple exponential growth of bacteria to more complicated models, such as logistic growth and harvesting. Discrete logarithm problem using index calculus method One of the many ways that variables in statistics can be classified is to consider the differences between explanatory and response variables. Although these variables are related, there are important distinctions between them. 
After defining these types of variables, we will see that the correct identification of these variables has a direct influence on other aspects of statistics, such as. What's the Difference Between VHDL, Verilog, and SystemVerilog? Sep 17th, 2014 Designing a complex SoC would be impossible without these three specialized hardware description languages What's the difference between logit and logistic regression? The logit is a transformation. Logistic regression is a regression model. The logit transformation transforms a line to a logistic curve. Logistic regression fits a logistic curve to set.. An experimental study is presented in which we compare the bulk phase behavior of discrete and (partially) disperse diblock co-oligomers (BCOs) with high χ-low N. To this end, oligomers of dimethylsiloxane (oDMS) and lactic acid (oLA) were synthesized, each having either a discrete number of repeat units or a variable block length. Ligation of the blocks resulted in oDMS-oLA BCOs with. Analyzing a Discrete Heart Rate Signal Using Python. I usually log either 'world time' or 'time since start of recording' with every data line for the purpose of calculating and verifying sample rate afterwards. With this it is straightforward to calculate the sampling rate: #Simple way to get sample rate sampletimer = [x for x in dataset.timer] #dataset.timer is a ms counter with. Logarithm (log, lg, ln) If b = ac <=> c = logab. a, b, c are real numbers and b > 0, a > 0, a ≠ 1. a is called base of the logarithm. Example: 2 3 = 8 => log 2 8 = 3. the base is 2. Animated explanation of logarithms. There are standard notation of logarithms if the base is 10 or e . 
log 10 b is denoted by lg b What's the Difference Between a Logarithmic Price Scale vs Clustered indexes can be created on table variables and temporary tables; Both are logged in the transaction log; Just as with temp and regular tables, users can perform all Data Modification Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE. Usage Temp Table vs Table Variabl One big difference, though, is the logit link function. The Logit Link Function. A link function is simply a function of the mean of the response variable Y that we use as the response instead of Y itself. All that means is when Y is categorical, we use the logit of Y as the response in our regression equation instead of just Y: The logit function is the natural log of the odds that Y equals. Continuous Variable Example. Continuous variables would take forever to count. In fact, we would get to forever and never finish counting them. For example, take an age. We can't count age. Because it would literally take forever. For example, it could be 37 years, 9 months, 6 days, 5 hours, 4 seconds, 5 milliseconds, 6 nanoseconds, 77. Discrete and continuous variables Daniel's text distinguishes between discrete and continuous variables. These are technical distinctions that will not be all that important to us in this class. According to the text, discrete variables are variables in which there are no intermediate values possible. For instance, the number of phone calls you receive per day. You cannot receive 6.3 phone. Difference between an Index and a Primary Key - Denny OBJECTIVE To investigate the relation between measures of pain threshold and symptoms of distress to determine if fibromyalgia is a discrete construct/disorder in the clinic. METHODS 627 patients seen at an outpatient rheumatology centre from 1993 to 1996 underwent tender point and dolorimetry examinations. All completed the assessment scales for fatigue, sleep disturbance, anxiety, depression. 
The key difference between coronavirus and cold symptoms is that coronavirus symptoms especially COVID 19 symptoms are fever, dry cough and shortness of breath while common cold symptoms start with fatigue, a feeling of being chilled, sneezing, and a headache, followed in a couple of days by a runny nose and cough.. Coronavirus is a type of virus that causes the common cold Solution-Difference between an index and a discret So, discrete changes can be modeled by some equivalent, smooth curve. What does it look like? The natural log finds the continuous rate behind a result. In our case, we grew from 1 to 2, which means our continuous growth rate was ln(2/1) = .693 = 69.3%. The natural log works on the ratio between the new and old value: $\frac{\text{new}}{\text. Different types of data may suit either a bar chart or a column chart better. Bar Chart Bar charts use horizontal bar s to display data and are used to compare values across categories what is the difference between setting continuous or discrete in Power gui block and setting the solver as continuos or discrete?explain in detai g from your colleague who is testing his new cellular phone at the next lab bench, can wipe out the bottom 20-dB of your dynamic range. A. Looking for online definition of discrete or what discrete stands for? discrete is listed in the World's largest and most authoritative dictionary database of abbreviations and acronyms The Free Dictionar What is the difference between Unique Key and Index with Marginal effects are computed differently for discrete (i.e. categorical) and continuous variables. This handout will explain the difference between the two. I personally find marginal effects for continuous variables much less useful and harder to interpret than marginal effects for discrete variables but others may feel differently. With binary independent variables, marginal effects measure. Difference Between Active and Passive Learning. Last updated on February 16, 2021 by Surbhi S. 
Active Learning is one in which interactive methods are used which improves learning by allowing the learners to participate in the process. On the contrary, passive learning is one in which the students are held accountable for grasping all that is. How Are Nautical Miles Calculated?What Is Chip Log ?How long is a nautical mile?How did sailors measure knots Continuous vs Discrete Variables in the context of Machine SQL Server DBA Interview Question What is the difference between log shipping and replicationComplete list of SQL Server DBA Interview Questions by Tech Br.. What is the difference between on the job training and off the job training? Tamil Nadu Board of Secondary Education HSC Commerce Class 12th. Textbook Solutions 1000. Question Bank Solutions 1015. Concept Notes 242. Syllabus. Advertisement Remove all ads. The logarithm transformation . Stationarity and differencing. Statistical stationarity First difference (period-to-period change) Statistical stationarity: A stationary time series is one whose statistical properties such as mean, variance, autocorrelation, etc. are all constant over time. Most statistical forecasting methods are based on the assumption that the time series can be rendered. In discrete series, Arithmetic Mode can be determined by inspection and finding the variable which has the highest frequency associated with it. However, when there is very less difference between the maximum frequency and the frequency preceding it or succeeding it, then grouping table method is used Logarithms or logs are a part of mathematics. They are related to exponential functions. A logarithm tells what exponent (or power) is needed to make a certain number, so logarithms are the inverse (opposite) of exponentiation. Historically, they were useful in multiplying or dividing large numbers. An example of a logarithm is ⁡ = . In this logarithm, the base is 2, the argument is 8 and. A discrete math riddle. 
Here's a riddle that I've been struggling with for a while: Let A be a list of n integers between 1 and k. Let B be a list of k integers between 1 and n. Prove that there's a non-empty subset of A and a (non-empty) subset of B having the same sum. Example: Say n = 3, k = 5, and A = { 3, 4, 5 }, B = { 1, 1, 2, 3, 3 } Relation between Discrete Fourier Transform and Fourier Series. I was given a task to obtain a Fourier Series approximation from the DFT (Discrete Fourier Transorm), more exactly fft function from MATLAB. I know that a Fourier Series has the form a 0 2 + ∑ k = 1 k = ∞ ( a k cos. ( π k x L)) if the period is 2 L Gini Coefficient would would work best if data about each and every citizen in the country is available. This can ensure most accurate Lorenz curve and thus most accurate computation of the coefficient. However given the size of population of Sove.. Know the difference between logarithmic and exponential equations. This is a very simple first step. If it contains a logarithm (for example: log a x = y) it is logarithmic problem. A logarithm is denoted by the letters log. If the equation contains an exponent (that is, a variable raised to a power) it is an exponential equation. An exponent. 3) the rejection method (accept-reject) can be done with discrete distributions; if you have a discrete majorizing function (envelope) which is a scaled-up discrete pmf that you can already generate from in a fast way, it adapts directly, and in some cases can be very fast. More generally you can take advantage of being able to generate from continuous distributions (for example by. Logarithms put numbers on a human-friendly scale. Large numbers break our brains. Millions and trillions are really big even though a million seconds is 12 days and a trillion seconds is 30,000 years. It's the difference between an American vacation year and the entirety of human civilization The difference between processes, procedures and tasks. 
A procedure is a synonym for a task, but a process is an upper level series of major steps, with those major steps being procedures or tasks. Tasks are made up of actions or steps. A process is an upper level description of a series of major steps required to accomplish an objective An index reorganize does not see a total view of the index and so cannot update statistics, meaning that manual index statistics maintenance is required. Summary. As you can see, there are quite a few major differences between rebuilding and reorganizing, but there's no right answer as to which one you should use - that's your choice Index vs Ratio - What's the difference? WikiDif First, I want to make sure you understand the difference between rebuilding and reorganizing an index. This tip is going to focus on rebuilds only. If you would like to learn more about the differences between the two operations you can read more here. Second, we should note that in SQL Server 2005 the online option for index rebuilds is only available in Enterprise edition while in SQL Server. Oxford Discrete Mathematics and Probability Seminar. Starting from March 31, 2020, we will run a weekly online Oxford discrete maths and probability seminar. The talks will take place on Zoom at Tuesdays at 2pm and/or 330pm UK time. The meetings are organized by Christina Goldschmidt and Alex Scott. There is a seminar mailing list, which you can join by sending an empty email to sympa@maillist. product. The logarithm of 10 cm is therefore log (10) + log cm = 1 + log cm. You can't give a nu-merical result for log cm, which may seem rather disturbing. Fortunately, this all works out fine anyway, because you typically find the logarithm of a quantity with units only when you in the process of finding the difference between two logarithms 8.3 Interactions Between Independent Variables. 
There are research questions where it is interesting to learn how the effect on \(Y\) of a change in an independent variable depends on the value of another independent variable. For example, we may ask if districts with many English learners benefit differentially from a decrease in class sizes to those with few English learning students Discrete vs Continuous Data: Definition, Examples and Different model specifications were compared using the log likelihood ratio index (LLRI) and the preferred MXL (with highest LLRI) was employed to inform the WTP estimates (see Additional file 1); its attached WTP estimates did not change the direction of the CBA findings based on the CL model (see Additional file 2) It is often desirable to quantify the difference between probability distributions for a given random variable. This occurs frequently in machine learning, when we may be interested in calculating the difference between an actual and observed probability distribution. This can be achieved using techniques from information theory, such as the Kullback-Leibler Divergence (KL divergence), or. The difference between the tests is how they go about answering that question. As you have seen, in order to perform a likelihood ratio test, one must estimate both of the models one wishes to compare. The advantage of the Wald and Lagrange multiplier (or score) tests is that they approximate the LR test, but require that only one model be estimated. Both the Wald and the Lagrange multiplier. Snack eating occasions contribute approximately a third of children's energy intake, with approximately half of all unhealthy foods consumed during snack times. Therefore, it is critical to understand the drivers of primary food providers' snack provision. The study aims were to determine the relative importance of physical resources and social supports when primary food providers are. 
Elliptic Curve Discrete Logarithms and the Index Calculus Discrete Mathematics pdf notes - DM notes pdf file. Note :- These notes are according to the R09 Syllabus book of JNTU.In R13 and R15,8-units of R09 syllabus are combined into 5-units in R13 and R15 syllabus. If you have any doubts please refer to the JNTU Syllabus Book. Logic and proof, propositions on statement, connectives, basic. H ( y, y ^) = ∑ i y i log. ⁡. 1 y ^ i = − ∑ i y i log. ⁡. y ^ i. Cross entropy is always larger than entropy; encoding symbols according to the wrong distribution y ^ will always make us use more bits. The only exception is the trivial case where y and y ^ are equal, and in this case entropy and cross entropy are equal The multinomial probit model is a discrete choice model that is based on the assumption that the unobserved components in \epsilon_ {ij} come from a normal distribution. Different probit models arise from different specifications of V_ {ij} and different assumptions about \epsilon_ {ij}. For example, with a basic multinomial probit model, as is. database - Continuous Vs Continuous Data. Continuous Data can take any value (within a range) Examples: A person's height: could be any value (within the range of human heights), not just certain fixed heights, Time in a race: you could even measure it to fractions of a second, A dog's weight, The length of a leaf, Lots more! Data Data Index What kind of average returns can investors expect if they invest in the FTSE 100? First, it is important to distinguish between price returns and total returns. The price return for the FTSE 100 is linked to the price of the index that you might see quoted by financial news stations. For example, the price of the FTSE 100 ended 2019 at 7542.44. 
Logarithm transformation - Duke Universit A log chain always starts with a full database backup and continues until for reason it breaks the chain (for example, changing the recovery model of database to simple, or taking an extra full backup), thus by preventing log backups from being taken on the database until another full (or differential) backup is initiated for that database Solution for what is the difference between discrete adn continuous random variables. what is probability sidtribution? what are the characteristics of the Examples of discrete structures built with the help of sets: Set difference Definition: Let A and B be sets. The difference of A and B, denoted by A - B, is the set containing those elements that are in A but not in B. The difference of A and B is also called the complement of B with respect to A. • Alternate: A - B = { x | x A x B }. Example: A= {1,2,3,5,7} B = {1,5,6,8} • A - B ={2,3. g Microsoft's Clouds: Azure and Office 365. The Difference Between SQL Server and SQL Azure. Unlike SQL Server where your Databases are the only ones on your Database server, SQL Azure may use a single physical server to host Databases from many different customers Continuous Compounding vs Watch Jesse Roe and Sal talk about the difference between equations and functions. Equations and functions are not the same thing, but they can be related in several ways. Watch Jesse Roe and Sal talk about the difference between equations and functions. If you're seeing this message, it means we're having trouble loading external resources on our website. If you're behind a web filter, please. g community. Differences between the SQL Server DELETE and TRUNCATE Commands. Truncate reseeds identity values, whereas delete doesn't. Truncate removes all records and doesn't fire triggers. Truncate is faster compared to delete as it makes less use of the transaction log. Truncate is not possible when a table is referenced by a Foreign Key or tables are. 
Introduction: The therapeutic rationale varies among tinnitus therapies. A recent study identified which outcome measures should be used for different types of interventions. What patients consider the most important outcome measure in tinnitus therapy is unclear.Objectives: To study the preference of the tinnitus patient for different outcome measures in tinnitus therapy.Methods: A discrete. Findings: Differences on the FDI, PCS subscales, and self-reported pain level were evident at admission and discharge between conditions (p <.05). All conditions showed significant improvements in FDI, PCS, CESD, and MASC-2 (p <.01) except for PCS Magnification in GP and PCS Helplessness in AP (p > .05). Discussion: Psychological adjustment, functional disability and response to an. Interpreter Log In: Log In. Home. How It Works. FAQ. Courses. Partners. News. Forms. Help Videos. Interpreter Verification. Dementia Interpreters. We're here to help: 01376 573 999 [email protected]. 01376 573 999. Home. How It Works. FAQ. Courses. Live Webinar Course. Face-to-Face Course. eLearning Course. Train the Trainer. Partners. News. Forms. Pre-Course Form. Evaluation Form. The difference between the two tax forms is that a W-4 is an input document and a W-2 is an output document. An employee uses a W-4 to inform the company's payroll department how much tax to withhold from their earned income. Then, at year-end, a W-2 reports year-end earnings and deductions The most evident difference between the Cool Mode and Dry Mode is that in the latter mode, your air conditioner wouldn't be releasing cool air and is technically not actively cooling the room. After all, dry air in excess levels is just about as uncomfortable as an extremely humid room Network acceleration: What is the difference between BBrplus and Sharp speed? Is it possible to get vmware free for personal use? 
No 3D support from the host; Install Windows Vista Home Premium SP2 (32-bit) on VMWare Workstation 9 (Translated Spanish) UNQ GAMER Tho Hyper King Telugu Gamer live stream #hyperkingteugugamer @unqgamer #67 Företagspresentation mall. Hunter Brothers wohnort. Cosmos price prediction. Ubuntu 19.10 Systemanforderungen. Royal Canadian Mint bullion. GGPoker SharkScope. Kleeblatt Outlook. Sparkasse Goldpreis. S&P 500 Short ETF 3x. GPU Mining Rigs for sale. Fidelity Growth Fund. Deckhengste Thüringen. Metamask account not showing. Minimale inleg DEGIRO. Comdirect Finanzmanager. Bitcoin árfolyam. Mäklare Uppsala län. Puma instagram. Opec treffen heute. The Walking Dead strategy game. Kryptozeitung. Silk Road Australia release. Aluprofil 40x40 6m. Shisha Tabak Proben. Aktier pressmeddelanden. BetOnline rollover Reddit. Kurs aktier distans. Studentrabatt TV. Börse Kanada Heute geöffnet. Walachei. Auction Coin cryptocurrency. Amazon Visa SMART PIN vergessen. PoS coins 2020. Implenia News 2021. Gambling Anime Kaiji. Auto Trader UK. Png Emoji Discord. Mond neujahr ratte. Stake crypto. POSB DBS Credit Card. Investitionen Beispiele.
CommonCrawl
The Universe of Discourse
Mark Dominus (陶敏修)
[email protected]

I'm so old I can remember when forms were introduced to the web; as you can imagine it was a big advance. The initial spec included the usual text boxes, radio buttons, and so forth, two types of "submit" buttons, and a "reset" button. Clicking "reset" would reset the form contents to a defined initial state, normally empty. So you'd have a bunch of form widgets, and then, at the bottom, a Submit button, and next to it, a Reset button. Even as an innocent youth, I realized this was a bad design. It is just setting people up for failure. They might get the form all filled out, be about to submit it, but click a few pixels off, hit the Reset button by mistake, and have to start all over again.
Obviously, the Submit button should be over on the left, just under the main form, where the user will visit it in due course after dealing with the other widgets, and the Reset button should be way over on the right, where it is less likely to be hit by accident. (Or, more likely, it shouldn't be anywhere; in most cases it is nothing but an attractive nuisance. How often does someone need to reset the form anyway? How badly would they have to screw it up to decide that it would be quicker to start over than to simply correct their errors?) Does my "obviously" come across as superior and condescending? Honestly, it comes from a place of humility. My thinking is like this:

  The field of user interface design is skilled work
  I have no training or experience in this field
  Also, I have no talent in design generally (Just look at this page!)
  Experience has proved that I am very stupid about this whole area
  But this particular problem is apparent even to a blockhead like me
  So it must be extremely obvious

But maybe I'm not giving myself enough credit. I said "obviously" but it sure wasn't obvious to many people at the time. I remember 90% of the forms I encountered having that Reset button at the bottom, at least into the late 1990s. And it's on my mind because my co-workers had a discussion about it at work last week: don't put the Cancel button right next to the Submit button. If this was obvious to dumbass me in 1994, why isn't it common knowledge by now? Don't put the Yes button right next to the No button. That encourages mistakes. Obviously. Don't put the commonly-used "close this window" keyboard shortcut right next to the infrequently-used and irreversible "quit this application" shortcut. In particular, don't put "close this window" on control-W and "quit this application" on control-Q. I'm looking at you, Firefox. And that brings me to my real point. Can we talk about Google Meet? These three buttons are at the bottom of the Google Meet videoconferencing app.
The left one temporarily mutes and unmutes the microphone. The right one controls the camera similarly. And if you click the button in between, you immediately leave the meeting and quit the app. Now, as I said I'm pretty damn stupid when it comes to design, but geez, louise. Couldn't Google find someone less stupid than me? [ Addendum 20210228: Google fucks up again. ] [Other articles in category /tech] permanent link Katara is toiling through A.P. Chemistry this year. I never took A.P. Chemistry but I did take regular high school chemistry and two semesters of university chemistry so it falls to me to help her out when things get too confusing. Lately she has been studying gas equilibria and thermodynamics, in which the so-called ideal gas law plays a central role: $$ PV=nRT$$ This is when you have a gas confined in a container of volume !!V!!. !!P!! is the pressure exerted by the gas on the walls of the container, the !!n!! is the number of gas particles, and the !!T!! is the absolute temperature. !!R!! is a constant, called the "ideal gas constant". Most real gases do obey this law pretty closely, at least at reasonably low pressures. The law implies all sorts of interesting things. For example, if you have gas in a container and heat it up so as to double the (absolute) temperature, the gas would like to expand into twice the original volume. If the container is rigid the pressure will double, but if the gas is in a balloon, the balloon will double in size instead. Then if you take the balloon up in an airplane so that the ambient pressure is half as much, the balloon will double in size again. I had seen this many times and while it all seems reasonable and makes sense, I had never really thought about what it means. Sometimes stuff in physics doesn't mean anything, but sometimes you can relate it to a more fundamental law.
For example, in The Character of Physical Law, Feynman points out that the Archimedean lever law is just an expression of the law of conservation of energy, as applied to the potential energy of the weights on the arms of the lever. Thinking about the ideal gas law carefully, for the first time in my life, I realized that it is also a special case of the law of conservation of energy! The gas molecules are zipping around with various energies, and this kinetic energy manifests on the macro scale as pressure (when they bump into the walls of the container) and as volume (when they bump into other molecules, forcing the other particles away.) The pressure is measured in units of dimension !!\frac{\rm force}{\rm area}!!, say newtons per square meter. The product !!PV!! of pressure and volume is $$ \frac{\rm force}{\rm area}\cdot{\rm volume} = \frac{\rm force}{{\rm distance}^2}\cdot{\rm distance}^3 = {\rm force}\cdot{\rm distance} = {\rm energy}. $$ So the equation is equating two ways to measure the same total energy of the gas. Over on the right-hand side, we also have energy. The absolute temperature !!T!! is the average energy per molecule and the !!n!! counts the number of molecules; multiply them and you get the total energy in a different way. The !!R!! is nothing mysterious; it's just a proportionality constant required to get the units to match up when we measure temperature in kelvins and count molecules in moles. It's analogous to the mysterious Cookie Constant that relates energy you have to expend on the treadmill with energy you gain from eating cookies. The Cookie Constant is !!1043 \frac{\rm sec}{\rm cookie}!!. !!R!! happens to be around 8.3 joules per mole per kelvin. (Actually I think there might be a bit more to !!R!! than I said, something about the Boltzmann distribution in there.) Somehow this got me and Katara thinking about what a mole of chocolate chips would look like. "Better use those mini chips," said Katara.
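The bookkeeping above is easy to sanity-check numerically. Here is a small sketch of my own (not from the original post), using the standard textbook value !!R \approx 8.314!! joules per mole per kelvin; the figure of one mole in 22.4 liters at 0 °C is likewise a textbook example, not something the post states.

```python
# Sanity-check PV = nRT in SI units.
# R and the example figures are standard textbook values, an assumption
# layered on top of the post, which only says R is "around 8.3".
R = 8.314  # ideal gas constant, J/(mol*K)

def pressure(n, T, V):
    """Pressure (Pa) of n moles of ideal gas at temperature T (K) in volume V (m^3)."""
    return n * R * T / V

# One mole at 0 degrees C in 22.4 liters comes out near one atmosphere.
p1 = pressure(n=1.0, T=273.15, V=0.0224)
print(p1)  # roughly 101 kPa, i.e. about 1 atm

# Doubling the absolute temperature in a rigid container doubles the pressure.
p2 = pressure(n=1.0, T=2 * 273.15, V=0.0224)
print(p2 / p1)  # 2.0
```

The second check is the "heat it up so as to double the temperature" claim from the balloon discussion, in rigid-container form.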
[Other articles in category /physics] permanent link Today someone tweeted about an earlier blog article of mine, saying 10° bir kvadratda ən böyük şəhərləri görə biləcəyiniz bir sayt. I looked at that and frowned, and said "What language is that? … is it Azerbaijani?" And it is Azerbaijani! Last time I encountered Azerbaijani I did not recognize it. So I not only learned something last April, I remembered it the following February when it came up again. Yay me! [Other articles in category /lang] permanent link Yesterday I wondered who Robert Altman had cast as Bluto in his 1980 live-action film Popeye. The answer turned out to be Paul L. Smith, who seemingly was born to play the part: I have thought for years about how Shelley Duvall was seemingly born to play the part of Olive Oyl. (I remember the Mad magazine parody making this observation at the time, and it wasn't funny because it was so obvious.) I have sometimes wondered if Altman got the idea to make a Popeye movie specifically so that he could cast Duvall as Olive Oyl. Anyway, Paul L. Smith, who already looked like Bluto. He was in a fair number of TV productions in the 70s and 80s, and I think it's possible that I saw him in one or another one. But the only other role of his that I remember clearly is from David Lynch's 1984 Dune. He plays Glossu "the Beast" Rabban. Who in many ways is not that different from Bluto: Large, violent, dangerous for his brutality but not his cunning. Obviously the Baron wanted to cast Feyd-Rautha as Popeye, but events got away from him and Paul became Popeye instead. In a Dune-Popeye crossover I can see Alia as Swee'Pea. That means that Chani has to be Olive, which I can live with. The correspondence isn't perfect, of course. There is nobody in Popeye like Lady Jessica or Stilgar. (Leto is obviously Poopdeck Pappy.) On the other side, where is J. Wellington Wimpy? It's been a while since I read the book, but I don't remember him appearing.
Reverend Mother Gaius Helen Mohiam is clearly the Sea Hag. Me spinach musk flow! If ya controlsk the spinach, ya controlsk the uni-voice! Ag-ag-ag-ag-ag! [Other articles in category /misc] permanent link I was reading The Life and Prankes of Long Meg of Westminster (1655), which opens with the story of how Long Meg first came to London with a posse of three or four girlfriends. After long travel they came within sight of London, "which joyed their hearts greatly." But as they got closer, Meg's friends became less cheerful, and she said to them: What Lasses in a dumpe, and we so nigh London? If someone had asked me to guess when "in a dump" or "in the dumps" had been coined, I think I would have guessed sometime in the early 20th century. Nope! The Big Dictionary has cites back to 1535, which is when Long Meg takes place. It also cites a 1785 dictionary for "down in the dumps" specifically. The phrase is not connected with the dump where you dump a load of trash, which is of much later coinage. It transpires that the lasses are in a dumpe because they realize that time has come to pay the carrier who has helped transport them to London, and believe he is likely to try to cheat them and take everything they have. Meg says she will reason sweetly with the carrier, and if that doesn't work, she will beat the crap out of him. The carrier does try to take everything they have, but becomes much more helpful after Meg has beaten him with a cudgel. Here it is if you would like to read it yourself. As you know, I've recently been looking into the original version of Snow White from 1812. ([1] [2]) I knew that the 1812 version of the Grimm stories was a lot rougher and more gruesome than the later editions, but I missed many of the details. For example, in the later versions, the evil queen orders her hunter to bring back Snow White's liver and lungs as proof that he has murdered her. In the first edition, she wants the liver and lungs so that she can eat them. 
After Snow White is poisoned with the apple, the dwarfs put her in a glass coffin. A prince happens by and begs them to give it to him, which they do. In the later versions, the servants carrying away the coffin stumble, the apple is dislodged from Snow White's throat, and she returns to life. In the original version, they get the coffin back to the prince's palace without mishap. There the prince has the servants carry it from room to room so that he can gaze at it always. (Ugh.) Finally, the servants are so fed up with this that one of them takes Snow White out of the coffin, stands her up, and, saying We are plagued the whole day long, just because of such a dead girl he clouts her in the back from pure spite. The apple is dislodged, and Snow White marries the prince. [Other articles in category /book] permanent link Git comes with a very complicated shell function, called __git_ps1, for interpolating Git information into your shell prompt. A typical use would be: PS1='>>> $(__git_ps1) :) ' PS1 is the variable that contains the shell's main prompt. Before printing the prompt, the shell does variable and command interpolation on this string. This means that if PS1 contains something like $(command args...), the shell replaces that string with the output from running command args…. Here, it runs __git_ps1 and inserts the output into the prompt. In the simplest case, __git_ps1 emits the name of the currently-checked-out branch, so that the shell will actually print this prompt: >>> the-branch :) But __git_ps1 has many other features besides. If you are in the middle of a rebase or cherry-pick operation, it will emit something like the-branch|REBASE-i 1/5 or the-branch|CHERRY-PICKING instead. If HEAD is detached, it can still display the head location in several formats. There are options to have the emitted string indicate when the working tree is dirty and other things.
My own PS1 looks like this: PS1='[$(_path) $(__git_ps1 "(%s)" )]> ' The _path command is something I wrote to emit the path of the current working directory, abbreviated in a contextually dependent way. It makes my prompt look like this: [lib/app (the-branch)]> Here lib/app is the path relative to the root of the repository. The %s thing is an additional formatting instruction to __git_ps1. After it computes the description string, __git_ps1 inserts it into "(%s)" in place of the %s, and emits the result of that replacement. If you don't give __git_ps1 an argument, it uses "(%s) " as a default, which has an extra space compared with what I have. Lately I have been experimenting with appending .mjd.yyyymmdd to my public branch names, to help me remember to delete my old dead branches from the shared repository. This makes the branch names annoyingly long: gh1067-sort-dates-chronologically.mjd.20210103 gh1067-sort-dates-no-test.mjd.20210112 gh1088-cache-analysis-list.mjd.20210105 and these annoyingly long names appear in the output of __git_ps1 that is inserted into my shell prompts. One way to deal with this is to have the local branch names be abbreviated and configure their upstream names to the long versions. And that does work: I now have a little program called new-branch that creates a new branch with the local short name, pushes it to the long remote name, and sets the upstream. But I also wanted a generic mechanism for abbreviating or transforming the branch name in the prompt. The supplied __git_ps1 function didn't seem to have an option for that, or a callback for modifying the branch name before inserting it into the prompt. I could have copied the function, modified the parts I wanted, and used the modified version in place of the supplied version, but it is 243 lines long, so I preferred not to do that. But __git_ps1 does have one hook. Under the right circumstances, it will attempt to colorize the prompt by inserting terminal escape codes. 
To do this it invokes __git_ps1_colorize_gitstring to insert the escape codes into the various prompt components before it assembles them. I can work with that! The goal is now:

1. Figure out how to tell __git_ps1 to call __git_ps1_colorize_gitstring
2. Figure out how __git_ps1 and __git_ps1_colorize_gitstring communicate prompt components
3. Write my own __git_ps1_colorize_gitstring to do something else

How to tell __git_ps1 to call __git_ps1_colorize_gitstring

You have to do two things to get __git_ps1 to call the hook:

1. Set GIT_PS1_SHOWCOLORHINTS to some nonempty string. I set it to true, which is a little deceptive, because false would have worked as well.
2. Invoke __git_ps1 with two or more arguments.

Unfortunately, invoking __git_ps1 with two or more arguments changes its behavior in another way. It still computes a string, but it no longer prints the string. Instead, it computes the string and assigns it to PS1. This means that PS1="$(__git_ps1 arg arg….)" won't work properly: the next time the shell wants to prompt, it will evaluate PS1, which will call __git_ps1 arg arg…, which will set PS1 to some string like (the-branch). Then the next time the shell wants to print the prompt, it will evaluate PS1, which will be just some dead string like (the-branch), with nothing in it to call __git_ps1 again. So we need to use a different shell feature. Instead of setting PS1 directly, we set PROMPT_COMMAND. This command is run before the prompt is printed. Although this doesn't have anything to do directly with the prompt, the command can change the prompt. If we set PROMPT_COMMAND to invoke __git_ps1, and if __git_ps1 modifies PS1, the prompt will change.
Formerly I had had this: PS1='[$(_path) $(__git_ps1 "(%s)")]> ' but instead I needed to use: GIT_PS1_SHOWCOLORHINTS=true PROMPT_COMMAND='__git_ps1 "[$(_path) " " ] " "(%s)"' Here __git_ps1 is getting three arguments: "[$(_path) " " ] " "(%s)" __git_ps1 computes its description of the Git state and inserts it into the third argument in place of the %s. Then it takes the result of this replacement, appends the first argument on the front and the second on the back, and sets the prompt to the result. The shell will still invoke _path in the course of evaluating the first string, before passing it to __git_ps1 as an argument. Whew. How __git_ps1 communicates prompt components to __git_ps1_colorize_gitstring The end result of all this rigamarole is that __git_ps1 is now being called before every prompt, as before, but now it will also invoke __git_ps1_colorize_gitstring along the way. What does that actually get us? The internals of __git_ps1_colorize_gitstring aren't documented because I don't think this is a planned use case, and __git_ps1_colorize_gitstring isn't an advertised part of the interface. __git_ps1 does something to construct the prompt, possibly colorizing it in the process, but how it does the colorizing is forbidden knowledge. From looking at the code I can see that the colorizing is done by __git_ps1_colorize_gitstring, and I needed to know what was going on inside. The (current) interface is that __git_ps1 puts the various components of the prompts into a family of single-letter variables, which __git_ps1_colorize_gitstring modifies.
Here's what these variables do, as best as I have been able to ascertain:

- b contains a description of the current HEAD, either the current branch name or some other description
- c indicates if you are in a bare repository
- i indicates if changes have been recorded to the index
- p contains information about whether the current head is behind or ahead of its upstream branch
- r describes the rebase / merge / cherry-pick state
- s indicates if there is something in the stash
- u indicates whether there are untracked files
- w indicates whether the working tree is dirty
- z is the separator between the branch name and the other indicators

Oddly, the one thing I wanted to change is the only one that __git_ps1_colorize_gitstring doesn't modify: the b variable that contains the name or description of the current branch. Fortunately, it does exist and there's nothing stopping me from writing a replacement __git_ps1_colorize_gitstring that does modify it. Write a replacement for __git_ps1_colorize_gitstring to do something else So in the end all I needed was: __git_ps1_colorize_gitstring () { b=${b%%.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]} b=${b%%.mjd} } The ${b%%PAT} thing produces the value of the variable b, except that if the value ends with something matching the pattern PAT, that part is removed. So the first assignment trims a trailing .20210206 from the branch name, if there is one, and the second trims off a trailing .mjd. If I wanted to trim off the leading gh also I could use b=${b##gh}. There's probably some way to use this in addition to the standard __git_ps1_colorize_gitstring, rather than in place of it. But I don't know how. This was way harder to figure out than it should have been. [Other articles in category /prog] permanent link Katara and I went to visit the town of Gap, PA. In Gap there are so many eagles that they have these special bins for getting rid of the extras.
This is the famous self-portrait of Hildebert, a 12th century scribe in what is now the Czech Republic. In this picture, Hildebert is shaking his fist at a mouse, which is eating his lunch. There is quite a lot going on here! First off, Hildebert is carrying one of his quill pens behind his ear. This seems to me like a good way to get ink in your hair, and I wonder if medieval scribes often had smudges on their foreheads. I think the thing in his hand is a piece of bread. But what is on the table? I think the mouse is eating Hildebert's cheese (we can see the already-cut piece under the mouse's butt) and there seems to have been a small roast bird of some type, which the mouse has upset but which has not yet hit the floor. The table with the mouse is labeled Mensa hildeberti, "Hildebert's table", in case it was unclear just whose lunch was being stolen. Hildebert seems to be wearing a long garment with fancy matching sleeves and collar, and over that what looks like a chiton. I wonder if Hildebert really wore a chiton? On the left of the picture is a really interesting piece of equipment. Until I saw this picture, it had never occurred to me that the lap desk had been invented before I was born. But here it is, almost nine hundred years ago. And it certainly is a lap desk, having no legs. In this picture the lap desk is supported by a backward-headed lion, but in actual practice such luxuries are probably hard to come by, so Hildebert would have put the desk on his lap. The two long curvy things on the left edge of the lap desk are not legs. They are inkhorns: sawn-off animal horns, filled with ink. When you need to get more ink on your quill, you dip the end in the inkhorn. I had heard of inkhorns but until I saw this picture I had never understood how you used them: they won't stand up, and if you lay them down the ink will spill.
But Hildebert's picture makes it perfectly clear: the lap desk has a couple of round holes in it, and you slide the inkhorns into the holes until they stop. Very nice! Next to the inkhorns are two extra quills, and along the bottom edge of the lap desk there is a ridge to keep the paper or parchment from sliding into your lap. I am pretty sure that the lion is holding the desk by the bottom edge, so that it is presented to Hildebert sideways. Hildebert is too enraged by the mouse to care about this. Also on the desk is a booklet, in which (according to Wikipedia) Hildebert has written: Pessime mus, saepius me provocas ad iram. Ut te deus perdat I think I can make this out. (Medieval scribes used a great many abbreviations. For example, iram is written as "irã". Similarly, the hildeberti above the table is abbreviated to "hildebti". If you are interested, I discussed scribal abbreviations a couple of years ago.) Wikipedia's translation of this is: Most wicked mouse, you incite me to anger once too often. May God destroy you. I think the phrasing and the fist-shaking, directed at a mouse, are meant by Hildebert to be a humorous overreaction. Underneath Hildebert is a drawing of his colleague Everwin (EVERWINVS). Everwin seems to be painting some sort of decoration with a brush. Check out his fancy sleeves and matching socks! I am not sure what Hildebert is holding in his left hand or whether it intersects the lion's arm. My best guess is that it is Hildebert's table knife, and that the picture means to show it passing in front of the lion, not intersecting the lion. Many thanks to Marnanel Thurman for bringing this to my attention. [Other articles in category /art] permanent link A few weeks ago I was thinking about Snow White and in the course of doing that I looked up the original German version of 1812. (Snow White herself is story #53, on page 238.) 
The magic mirror is introduced this way: Die Königin … hatte auch einen Spiegel, vor den trat sie alle Morgen und fragte: … ("The queen… also had a mirror, before which she stood every morning and asked…") The mirror is simply einen Spiegel, a mirror, not a specifically magic mirror. That seems to have been a later interpolation. In the 1857 edition, it says Sie hatte einen wunderbaren Spiegel…. There is no wunderbaren in the original. I prefer the original. The mirror recites poetry; to say it is a magic mirror is superfluous. But on second thought, is it? There is another explanation: in the original version, perhaps the mirror is an ordinary one, and the queen is psychotic. Certainly nobody else hears the mirror speaking. And the queen tells the hunter not only to kill Snow White in the forest, but to bring back Snow White's lungs and liver, so that she may eat them. With salt! (die will ich mit Salz kochen und essen.) Now I prefer the original even more. The later version, which unequivocally states that the mirror is magic, is much less terrifying. I suppose the argument against this reading is that the mirror also provides the queen with real information: Snow White is still alive, and living with the seven dwarfs. I think the original text does imply that the queen was aware of the seven dwarfs, but how might she have known that Snow White was still alive? Well, she did eat the lungs and liver, which had actually come from a young wild boar (junger Frischling). Perhaps she was satisfied at first, but then there was something about the taste, or the texture, not quite what she expected… it gnawed at her for hours, and then in a flash of rage she realized what she had actually eaten… [ Addendum 20210202: In case you wanted to see it: (image of the 1812 page). Note, by the way, that in 1812 the umlaut marks in Königin etc. still looked like a small letter 'e'; they had not yet been reduced to diaereses. ] [ Addendum 20210207: another startling detail that was revised in the later editions.
] [ Addendum 20210321: The more I think about the queen's psychosis, the more obvious it seems that this is the correct explanation. ]
Darts, Dice, and Coins: Sampling from a Discrete Distribution

Last Major Update: December 29, 2011

Earlier this year, I asked a question on Stack Overflow about a data structure for loaded dice. Specifically, I was interested in answering this question: "You are given an n-sided die where side $i$ has probability $p_i$ of being rolled. What is the most efficient data structure for simulating rolls of the die?" This data structure could be used for many purposes. For starters, you could use it to simulate rolls of a fair, six-sided die by assigning probability $\frac{1}{6}$ to each of the sides of the die, or to simulate a fair coin by simulating a two-sided die where each side has probability $\frac{1}{2}$ of coming up. You could also use this data structure to directly simulate the total of two fair six-sided dice being thrown by having an 11-sided die (whose faces were 2, 3, 4, ..., 12), where each side was appropriately weighted with the probability that this total would show if you used two fair dice. However, you could also use this data structure to simulate loaded dice. For example, if you were playing craps with dice that you knew weren't perfectly fair, you might use the data structure to simulate many rolls of the dice to see what the optimal strategy would be. You could also consider simulating an imperfect roulette wheel in the same way. Outside the domain of game-playing, you could also use this data structure in robotics simulations where sensors have known failure rates. For example, if a range sensor has a 95% chance of giving the right value back, a 4% chance of giving back a value that's too small, and a 1% chance of handing back a value that's too large, you could use this data structure to simulate readings from the sensor by generating a random outcome and simulating the sensor reading in that case. The answer I received on Stack Overflow impressed me for two reasons.
First, the solution pointed me at a powerful technique called the alias method that, under certain reasonable assumptions about the machine model, is capable of simulating rolls of the die in $O(1)$ time after a simple preprocessing step. Second, and perhaps more surprisingly, this algorithm has been known for decades, but I had not once encountered it! Considering how much processing time is dedicated to simulation, I would have expected this technique to be better-known. A few quick Google searches turned up a wealth of information on the technique, but I couldn't find a single site that compiled together the intuition and explanation behind the technique. This writeup is my attempt to give a quick survey of various approaches for simulating a loaded die, ranging from simple techniques that are highly impractical to the very optimized and efficient alias method. My hope here is to capture different intuitions about the problem and how each highlights some new aspect of simulating loaded dice. For each approach, my goal is to explore the motivating idea, core algorithm, correctness proof, and runtime analysis (in terms of time, memory, and randomness required). Before I go into any of the specific details of the different techniques, let's first standardize our notation and terminology. In the introduction to this writeup, I used the term "loaded die" to describe a general scenario where there is a finite set of outcomes, each of which has some associated probability. Formally, this is termed a discrete probability distribution, and the problem of simulating the loaded die is called sampling from a discrete distribution. To describe our discrete probability distribution (loaded die), we will assume that we are given a set of n probabilities $p_0, p_1, ..., p_{n - 1}$ associated with outcomes $0, 1, ..., n - 1$.
Although the outcomes can be anything (heads/tails, numbers on a die, colors, etc.), for simplicity I'll assume that the outcome is some positive natural number that corresponds to the given index. Dealing with real numbers on a computer is a bit of a computational gray area. There are many fast algorithms whose speed is derived purely from the ability to, in constant time, compute the floor function of an arbitrary real number, and numerical inaccuracies in floating-point representations can entirely ruin certain algorithms. Consequently, before we embark on any discussion of algorithms for working with probabilities, which enter the dark world of real numbers, I should clarify what I assume the computer can and cannot do. In what follows, I will assume that all of the following operations can be done in constant time:

- Addition, subtraction, multiplication, division, and comparison of arbitrary real numbers. We will need to be able to do this in order to manipulate probabilities. This may seem like a very strong assumption, but if we assume that the precision of any real number is bounded by some polynomial of the machine word size (for example, a 64-bit double on a 32-bit machine), then I don't believe that this is too unreasonable.
- Generation of a uniform real number in the range [0, 1). In order to simulate randomness, we need some kind of random source. I assume that we can, in constant time, generate an arbitrary-precision real number. This is far beyond what could ever be actually done on a computer, but for the purposes of this discussion I think that it's fine to do so. If we are willing to accept some loss of precision by settling for an arbitrary IEEE-754 double in the range [0, 1), then we do indeed lose some precision, but are probably accurate enough for most applications.
- Computing the integer floor of a real number. This is reasonable if we assume that we are working with IEEE-754 doubles, but is in general not a reasonable request of a computer.
It is worth wondering whether or not it's unreasonable to assume that we can do all of these operations efficiently. In practice, we rarely have probabilities specified to a level of precision where the rounding error inherent in an IEEE-754 double will cause significant problems, and so we can get all of the above for free by just doing everything with IEEE doubles. However, if we are in an environment where the probabilities are specified exactly as high-precision rational numbers, then these constraints may be unreasonable.

Simulating a Fair Die

Before we generalize to the general case of rolling an arbitrarily loaded die, let's begin by starting with a simpler algorithm that will serve as a building block for the later algorithms: simulating a fair, n-sided die. For example, we may be interested in rolling a fair 6-sided die to play Monopoly or Risk, or flipping a fair coin (a 2-sided die), etc. In this special case, there is a simple, elegant, and efficient algorithm for simulating the outcome. The idea behind the algorithm is as follows. Suppose that we can generate truly random, uniformly-distributed real numbers in the range $[0, 1)$. We can visualize this range as follows: Now, if we want to roll an $n$-sided die, one way to do so would be to segment the range $[0, 1)$ into $n$ evenly-sized smaller regions, each of which has length $\frac{1}{n}$. This looks as follows: At this point, if we generate a randomly-chosen real number in the range $[0, 1)$, it will fall into exactly one of these smaller regions. From there, we can read off what the outcome of the die roll is by seeing what range it falls in. For example, if our randomly-chosen value fell at this location: We would say that the die rolled a 2 (assuming that we zero-index our die). Graphically, it's easy to see in which bucket the random value fell, but how can we encode this as an algorithm? This is where we use the fact that the die is a fair die.
Since all of the ranges have equal size, namely $\frac{1}{n}$, we can see what the largest value of i is such that $\frac{i}{n}$ is no greater than the randomly-generated value (call that value x). One observation is that if we are trying to find the maximum value of i such that $\frac{i}{n} \le x$, this is equivalent to finding the maximum value of $i$ such that $i \le xn$. But, by definition, this means that $i = \lfloor xn \rfloor$, the largest natural number no larger than xn. Consequently, this gives us the following (very simple) algorithm for simulating a fair, n-sided die:

Algorithm: Simulating a Fair Die
1. Generate a uniformly-random value $x$ in the range $[0, 1)$.
2. Return $\lfloor xn \rfloor$.

Using our computational assumptions from above, this algorithm runs in $O(1)$ time. This section has two takeaway points. First, we can segment the range $[0, 1)$ into pieces such that a uniformly-random real number in that range maps naturally back to one of the many discrete choices available to us. We will exploit this technique extensively throughout the rest of this writeup. Second, it can be complicated to determine which range a particular random value has fallen into, but if we know something about the partitions (in this case, that they all have the same size), it can be mathematically easy to determine which partition a particular point is in.

Simulating a Loaded Die with a Fair Die

Given an algorithm for simulating a fair die, can we adapt the algorithm to simulate a loaded die? The answer, interestingly, is yes, but it will come at a space premium. The intuition of the previous section suggests that in order to simulate a roll of a loaded die, all we need to do is split the range $[0, 1)$ into pieces, then determine which piece we have fallen into. However, in general this can be much more difficult than it might seem.
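Before looking at that difficulty, note that the fair-die algorithm above is only a couple of lines of code. A minimal Python sketch (the function name is mine):

```python
import math
import random

def fair_die(n):
    """Simulate one roll of a fair n-sided die, returning 0, 1, ..., n-1."""
    x = random.random()        # uniform real value in [0, 1)
    return math.floor(x * n)   # index of the length-1/n bucket that x fell into

# Roll a fair six-sided die many times; every face 0..5 should show up.
rolls = [fair_die(6) for _ in range(10_000)]
assert all(0 <= r < 6 for r in rolls)
```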
For example, suppose that we have a four-sided die with side probabilities of $\frac{1}{2}, \frac{1}{3}, \frac{1}{12},$ and $\frac{1}{12}$ (we can confirm that this is a legal probability distribution since $\frac{1}{2} + \frac{1}{3} + \frac{1}{12} + \frac{1}{12} = \frac{6}{12} + \frac{4}{12} + \frac{1}{12} + \frac{1}{12} = \frac{12}{12}$). If we partition the range $[0, 1)$ into four pieces of these given sizes, we get the following: Unfortunately, at this point we're stuck. Even if we knew a random number in the range $[0, 1)$, there is no simple mathematical trick we can use to automatically determine which partition that number falls into. This is not to say that it's extremely difficult to do so - as you'll see, there are many great tricks we can use - but none of them necessarily has the mathematical simplicity of the algorithm for rolling a fair die. However, we can adapt the technique for fair dice to work in this case. Let's take this particular case as an example. The probability of the sides of the dice coming up are $\frac{1}{2}, \frac{1}{3}, \frac{1}{12},$ and $\frac{1}{12}$. If we rewrite this such that all the terms have a common denominator, we get the values $\frac{6}{12}, \frac{4}{12}, \frac{1}{12},$ and $\frac{1}{12}$. We can therefore think about this problem as follows: rather than rolling a four-sided die with weighted probabilities, why not instead roll a 12-sided fair die where the faces have duplicate labels on them? Since we know how to simulate a fair die, this would be equivalent to cutting up the range $[0, 1)$ into twelve pieces like this: Then assigning them to the different outcomes like this: Now, simulating a die roll is extremely simple - we just roll this new fair die, then see what side comes up and read off the value it contains. This first step can be accomplished using the algorithm from above, which will give us back an integer in the range $0, 1, ..., 11$. 
To map that integer back to one of the sides of the original loaded die, we will store an auxiliary array of size twelve that maps each of these numbers back to the original outcome. Graphically, we can see this as follows: To formalize this as an algorithm, we will describe both the initialization step (coming up with the table) and the generation step (simulating throws of the random die). Both of these steps are important to consider in this algorithm and the algorithms that follow, since the setup time might be great. In the initialization step, we begin by finding the least common multiple of the denominators of all of the probabilities we are given for the sides of the dice (in our example, this was 12). The LCM is useful here because it corresponds to the smallest common denominator we could use for all of the fractions, and therefore the number of sides on the new, fair die that we will be rolling. Once we have this LCM (let's call it L), we need to determine how many sides of the new die will be distributed to each of the sides of the original, loaded die. In our example, the side with probability $\frac{1}{2}$ got 6 sides on the new die, since $\frac{1}{2} \times 12 = 6$. Similarly, the side with probability $\frac{1}{3}$ got 4 sides, since $\frac{1}{3} \times 12 = 4$. More generally, if L is the LCM of the denominators of the probabilities and $p_i$ is the probability of side $i$ of the die coming up, we would allocate $L \cdot p_i$ sides of the fair die to side $i$ of the original loaded die. Here is pseudocode for the above algorithm:

Algorithm: Simulating a Loaded Die with a Fair Die

Initialization:
1. Find the LCM of the denominators of the probabilities $p_0, p_1, ..., p_{n-1}$; call it $L$.
2. Allocate an array $A$ of size $L$ mapping outcomes of the fair die roll to the original die roll.
3. For each side $i$ of the original die, in any order: Set the next $L \cdot p_i$ entries of $A$ to be $i$.

Generation:
1. Generate a fair die roll from an $L$-sided die; call the side $S$.
2. Return $A[S]$.
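A sketch of this table-based approach in Python, assuming the probabilities arrive as exact `Fraction`s (the function names are mine, not part of the original writeup):

```python
import math
import random
from fractions import Fraction

def make_table(probs):
    """Initialization: build the array A mapping fair-die sides to outcomes."""
    # LCM of the denominators = number of sides of the fair die.
    # (math.lcm with multiple arguments requires Python 3.9+.)
    L = math.lcm(*(p.denominator for p in probs))
    table = []
    for side, p in enumerate(probs):
        table.extend([side] * int(p * L))  # L * p_i entries labeled with side i
    return table

def roll(table):
    """Generation: roll the L-sided fair die and look up the original outcome."""
    s = math.floor(random.random() * len(table))
    return table[s]

# The four-sided example from the text: probabilities 1/2, 1/3, 1/12, 1/12.
probs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 12), Fraction(1, 12)]
table = make_table(probs)
print(len(table), table.count(0), table.count(1))  # 12 6 4
```

Note that the probabilities must be exact rationals; as the analysis below points out, this approach is not suitable for probabilities given as floating-point doubles.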
This algorithm may be simple, but how efficient is it? The actual generation of die rolls is quite fast - each die roll requires $O(1)$ work to generate a random die roll using the earlier algorithm, plus an extra $O(1)$ work for the table lookup. This gives a total work requirement of $O(1)$. However, the initialization step may be extremely costly. In order for this algorithm to work, we need to allocate space for an array as large as the LCM of the denominators of all of the input fractions. For our example ($\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$), this was 12, but for other inputs it can be pathologically bad. As an example, consider the fractions $\frac{999999}{1000000}$ and $\frac{1}{1000000}$. The LCM of the denominators is one million, so our table would require one million entries! Unfortunately, it gets worse than this. In the previous example, we could at least say that we "expected" the algorithm to use a lot of memory, since the denominators of the fractions were each one million. However, we may end up with a set of probabilities for which the LCM is substantially greater than any individual denominator. As an example, consider the probabilities $\frac{1}{15}, \frac{1}{10}, \frac{5}{6}$. Here, the LCM of the denominators is 30, which is larger than any of the denominators themselves. The construction here works because $15 = 3 \times 5$, $10 = 2 \times 5$, and $6 = 2 \times 3$; in other words, each denominator is the product of two primes chosen out of a pool of three. Their LCM is therefore the product of all of those primes together, since each denominator has to divide the LCM. If we generalize this construction and consider any set of $k$ primes, then if we pick one fraction for each pairwise product of those primes, the LCM may be much larger than any individual denominator. In fact, one of the best upper bounds we can derive on the LCM would be $O(\prod_{i = 0}^n{d_i})$, where $d_i$ is the denominator of the $i$th probability.
This precludes this algorithm from being used in any practical setting where the probabilities are not known in advance, since the memory usage required to hold a table of size $O(\prod_{i = 0}^n{d_i})$ can easily be beyond what can be held in RAM. That said, in many cases this algorithm is well-behaved. If the probabilities are all identical, then all the probabilities we are given as input are $\frac{1}{n}$ for some $n$. The LCM of the denominators is then $n$, so the fair die we end up rolling will have $n$ sides and each of the sides of the original die will correspond to one side of the fair die. The initialization time is thus $O(n)$. Graphically, this would look as follows: This gives the following information about this algorithm:

Loaded Die from Fair Die
Initialization time: $\Theta(n)$ (best), $O(\prod_{i = 0}^n{d_i})$ (worst)
Generation time: $\Theta(1)$
Memory usage: $\Theta(n)$ (best), $O(\prod_{i = 0}^n{d_i})$ (worst)

Another important detail about this algorithm is that it assumes that we're given the probabilities nicely and conveniently as fractions with well-behaved denominators. If the probabilities are specified as IEEE-754 doubles, then this approach is likely to fail catastrophically due to small rounding errors; imagine if we have 0.25 and 0.250000000001 as probabilities! Thus this approach is probably best not attempted except in special cases where the probabilities are known to be well-behaved and specified in a format conducive to operations on rational numbers.

Simulating a Biased Coin

Our explanation of one simple random primitive (the fair die) led to a simple but potentially disastrously inefficient algorithm for simulating a loaded die. Perhaps exploring other simple random primitives might shed some more light on different approaches for solving this problem. A simple but surprisingly useful task is simulating a biased coin using a random generator. Given a coin with probability $p_{heads}$ of coming up heads, how would we simulate a flip of the biased coin?
One of the intuitions we developed earlier on was that we can partition the range $[0, 1)$ into a series of buckets such that when we pick a random value in the range, it ends up in some bucket with probability equal to the size of the bucket. To simulate a biased coin using a uniformly random value in the range $[0, 1)$, we could consider splitting the range $[0, 1)$ like this: And then generating a uniformly-random value in the range $[0, 1)$ to see what bucket it's contained in. Fortunately, since there is only one splitting point, it's quite easy to determine which bucket the point is contained in; if the value is less than $p_{heads}$, then the coin came up heads, and otherwise it came up tails. As pseudocode:

Algorithm: Simulating a Biased Coin
Generate a uniformly-random value $x$ in the range $[0, 1)$.
If $x < p_{heads}$, return "heads."
If $x \ge p_{heads}$, return "tails."

Since we can generate a uniformly-random value in the range $[0, 1)$ in $O(1)$ time and can do a real-valued comparison in $O(1)$ as well, this algorithm runs in $O(1)$ time.

Simulating a Fair Die with Biased Coins

From our earlier discussion, we know that it's possible to simulate a loaded die with a fair die, assuming that we're willing to pay a potential premium in space usage. Since we can think of a biased coin as a loaded two-sided die, this means that it's possible to simulate a biased coin using a fair die. Interestingly, it's also possible to do the opposite, and we can simulate a fair die with a biased coin. The construction is straightforward, elegant, and can easily be generalized to simulate a loaded die using a set of biased coins. The construction for simulating a biased coin worked by partitioning the range $[0, 1)$ into two regions - a "heads" region and a "tails" region - based on the probability of the coin coming up heads. We have already seen a similar trick used above to simulate a fair, $n$-sided die that worked by splitting the region $[0, 1)$ into $n$ evenly-sized regions.
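The biased-coin flip used as a primitive throughout this discussion is only a few lines of Python (a minimal sketch; the function name is my own):

```python
import random

def flip_biased_coin(p_heads):
    """Simulate one flip of a coin that lands heads with probability p_heads."""
    x = random.random()  # uniformly-random value in [0, 1)
    return "heads" if x < p_heads else "tails"
```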
For example, when rolling a four-sided die, we ended up with this partitioning: Now, suppose that we were interested in simulating a roll of this fair die, given that we have available to us a collection of biased coins. One way that we might think about this would be to imagine marching across these buckets from left to right, at each step asking whether or not we want to stop in the bucket we're currently contained in or to move on. For example, let's suppose that we want to pick one of these four buckets randomly. Beginning in the leftmost bucket, we would flip a biased coin that would indicate whether we should stop in this bucket or continue moving forward. Since we want to pick all of these buckets uniformly with probability $\frac{1}{4}$, we could do this by flipping a biased coin that comes up heads with probability $\frac{1}{4}$. If it comes up heads, we stop in this bucket. Otherwise, we move on to the next bucket. If the coin comes up tails, we end up in the second bucket and again want to ask whether or not we should choose this bucket or keep moving. Initially, you might think that we should flip another coin that comes up heads with probability $\frac{1}{4}$ to do this, but this would actually be incorrect! One way to see the flaw in this reasoning is to carry it to the extreme - if in each bucket we flip a coin that comes up heads with probability $\frac{1}{4}$, then there is a small chance that in each bucket the coin will come up tails and we will end up rejecting every bucket. Somehow, we need to keep increasing the probability of the coin coming up heads as we march across the buckets. At the extreme, if we end up in the last bucket, we need the coin to come up heads with probability $1$, since if we've rejected every preceding bucket the correct decision is to stop in the last bucket.
To determine the probability that our biased coin should come up heads once we've skipped the first bucket, notice that if we have indeed skipped the first bucket, there are only three buckets left. Since we're rolling a fair die we would want each of these three buckets to be chosen with probability $\frac{1}{3}$. Consequently, intuitively it seems like we should have the second coin come up heads with probability $\frac{1}{3}$. Using a similar argument, if we flip tails on the second bucket, then the coin we toss for the third bucket should come up heads with probability $\frac{1}{2}$, and the coin we toss for the final bucket should come up heads with probability $1$. This intuition leads us to the following algorithm. Note that we have not yet argued why this algorithm is correct; we'll do that in a second.

Algorithm: Simulating a Fair Die with Biased Coins
For $i = 0$ to $n - 1$:
Flip a biased coin with probability $\frac{1}{n - i}$ of coming up heads.
If it comes up heads, return $i$.

This algorithm is simple and in the worst case runs in $O(n)$ time. But how do we know that it's actually correct? To see this, we will need the following theorem: Theorem: The above algorithm outputs side $i$ with probability $\frac{1}{n}$ for any choice of $i$. Proof: Consider any fixed $n \ge 0$. We prove by strong induction that each of the $n$ sides has probability $\frac{1}{n}$ of being chosen. As our base case, we show that side $0$ of the die has probability $\frac{1}{n}$ of being chosen. But this is immediate from the algorithm - we choose side 0 if a biased coin with probability $\frac{1}{n}$ of coming up heads comes up heads, which means that we pick it with probability $\frac{1}{n}$. For the inductive step, assume that sides $0, 1, 2, ..., k - 1$ are each chosen with probability $\frac{1}{n}$ and consider the probability that side $k$ is chosen.
Side $k$ will be chosen iff the first $k$ sides are not chosen and the coin for side $k$, which comes up heads with probability $\frac{1}{n - k}$, indeed comes up heads. Because each of the first $k$ sides is chosen with probability $\frac{1}{n}$, and since only one side is ever chosen, the probability that one of the first $k$ sides is chosen is given by $\frac{k}{n}$. This means that the probability that the algorithm does not pick one of the first $k$ sides is given by $1 - \frac{k}{n} = \frac{n}{n} - \frac{k}{n} = \frac{n - k}{n}$. This means that the probability that we choose side $k$ is given by $\frac{n - k}{n} \frac{1}{n - k} = \frac{1}{n}$ as required, completing the induction. Thus each side of the die is chosen uniformly at random. Of course, this algorithm is fairly inefficient - we can simulate a roll of the fair die in $O(1)$ using our earlier technique! - but it can be used as a stepping stone into a reasonably efficient algorithm for simulating a loaded die with biased coins.

Simulating a Loaded Die with a Biased Coin

The above algorithm is interesting in that it gives a simple framework for simulating a die with a set of coins. We begin by flipping a coin to determine whether to pick the first side of the die or to move on to the remaining sides. In doing so, we have to be careful to scale up the remaining probabilities. Let's see how we might use this technique to simulate a roll of a loaded die. Let's use our example from above, with probabilities $\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$. This, if you'll recall, splits the range $[0, 1)$ as follows: Now, let's think about how we might simulate this loaded die using biased coins. We could start by flipping a coin that has probability $\frac{1}{2}$ of coming up heads to determine whether or not we should output side 0. If this coin comes up heads, great! We're done. Otherwise, we need to flip another coin to determine whether or not to pick the next side.
As before, even though the next side has probability $\frac{1}{3}$ of coming up, we do not want to flip a coin that comes up heads with probability $\frac{1}{3}$, since half of the probability mass was discarded when we didn't choose the $\frac{1}{2}$ side. In fact, since half of the probability mass is gone, if we renormalize the remaining probabilities at this point, we end up with the updated probabilities $\frac{2}{3}, \frac{1}{6}, \frac{1}{6}$. Thus the second coin should come up heads with probability $\frac{2}{3}$. If this coin also flips tails, then we have to choose between the two $\frac{1}{12}$ sides. Since at that point $\frac{5}{6}$ of the probability mass will be gone, we would renormalize the probabilities of the $\frac{1}{12}$ sides so that each is $\frac{1}{2}$, meaning the third coin would have probability $\frac{1}{2}$ of coming up heads. The final coin, if it's ever flipped, would have to come up heads with probability $1$, since it's the very last bucket. To recap, the probabilities for the coins would be:

First flip: $\frac{1}{2}$
Second flip: $\frac{2}{3}$
Third flip: $\frac{1}{2}$
Fourth flip: $1$

Although it may make intuitive sense where these numbers come from, to turn this into an algorithm we are going to need to come up with a formal construction for choosing the probabilities. The idea is as follows - at each step, we remember how much of the probability mass remains to be used up. At the beginning, before flipping any coins, this is $1$. After flipping the first coin, it's $1 - p_0$. After flipping the second coin, it's $1 - p_0 - p_1$. More generally, after flipping $k$ coins, the remaining probability mass is $1 - \sum_{i = 0}^{k - 1}{p_i}$.
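Putting this renormalization bookkeeping together, a minimal Python sketch (float probabilities assumed; the function name and the final round-off guard are my own additions):

```python
import random

def loaded_die_from_coins(probs):
    """Roll a loaded die by sweeping across the sides, flipping one biased
    coin per side with the remaining probability mass renormalized away."""
    mass = 1.0
    for i, p in enumerate(probs):
        # The coin for side i comes up heads with probability p / mass.
        if random.random() < p / mass:
            return i
        mass -= p  # discard side i's share of the probability mass
    return len(probs) - 1  # guard against floating-point round-off
```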
Whenever we flip a coin to determine whether or not to pick bucket $k$, we end up flipping a coin that comes up heads with probability equal to the fraction of the remaining probability occupied by the probability $p_k$, which is given by $\frac{p_k}{1 - \sum_{i = 0}^{k - 1}{p_i}}$. This gives us the following algorithm for simulating a loaded die with a set of biased coins (again, we'll prove correctness and runtime in a second):

Algorithm: Loaded Die from Biased Coins
Store the probabilities $p_i$ for later use.
Set $mass = 1$
For $i = 0$ to $n - 1$:
Flip a biased coin with probability $\frac{p_i}{mass}$ of coming up heads.
If it comes up heads, return $i$.
Otherwise, set $mass = mass - p_i$

While this may make some amount of intuitive sense, is it mathematically correct? Fortunately, the answer is yes due to a generalization of the above proof, which is given here: Theorem: The above algorithm outputs side $i$ with probability $p_i$ for any choice of $i$. Proof: Consider any fixed $n \ge 0$. We prove by strong induction that each of the $n$ sides has probability $p_i$ of being chosen. As our base case, we show that side $0$ of the die has probability $p_0$ of being chosen. We choose side $0$ if the very first coin flip comes up heads, which occurs with probability $\frac{p_0}{mass}$. Since $mass$ is initially $1$, this probability is $\frac{p_0}{1} = p_0$, so side 0 is chosen with probability $p_0$ as required. For the inductive step, assume that sides $0, 1, ..., k - 1$ are chosen with probabilities $p_0, p_1, ..., p_{k-1}$ and consider the probability that side $k$ is chosen. Side $k$ will be chosen iff the first $k$ sides are not chosen and the coin for side $k$, which comes up heads with probability $\frac{p_k}{mass}$, indeed comes up heads. Because each of the first $k$ sides is chosen with the correct probability, and since only one side is ever chosen, the probability that one of the first $k$ sides is chosen is given by $\sum_{i = 0}^{k - 1}{p_i}$.
This means that the probability that the algorithm does not pick one of the first $k$ sides is given by $1 - \sum_{i = 0}^{k - 1}{p_i}$. Now, the probability that the coin for side $k$ comes up heads is given by $\frac{p_k}{mass}$, and after $k$ iterations we can see by a quick induction that $mass = 1 - \sum_{i = 0}^{k - 1}{p_i}$. This means that the total probability that we pick side $k$ is given by $(1 - \sum_{i = 0}^{k - 1}{p_i})\frac{p_k}{1 - \sum_{i = 0}^{k - 1}{p_i}} = p_k$ as required, completing the induction. Now, let's consider the runtime complexity of this algorithm. The initialization time will be $\Theta(1)$ if we keep a shallow copy of the input probability array, but is more likely to be $\Theta(n)$ so that we can store our own version of the array (in case the caller wants to change it later on). Actually generating the result of the die roll might require us to flip $\Theta(n)$ coins in the worst case, but only requires a single flip in the best case. However, with a bit of thought it becomes clear that the input distribution heavily influences how many coin flips we will need. In the absolute best case, we have a probability distribution where all of the probability mass is centered on the first side of the die and the remaining probabilities are all zero. In that case, we need just one coin flip. In the absolute worst case, all the probability mass is centered on the very last side of the die and is zero everywhere else, in which case we'll need to flip $n$ coins to find the outcome. We can precisely and mathematically characterize the expected number of coin flips for this algorithm. Let's think of a random variable $X$ that represents the number of coin flips in any execution of this algorithm on some specific distribution. That is, $\mathbb{P}[X = 1]$ is the probability that the algorithm flips just one coin before terminating, $\mathbb{P}[X = 2]$ is the probability that the algorithm flips just two coins, etc.
In this case, the expected number of coin flips for our algorithm is given by the expected value of $X$, denoted $\mathbb{E}[X]$. By definition, we have that $$\mathbb{E}[X] = \sum_{i = 1}^n{i \cdot \mathbb{P}[X = i]}$$ So what is the value of $\mathbb{P}[X = i]$? Well, the algorithm will terminate after it chooses some side of the die. If it chooses side $0$, it will flip one coin to determine to stop there. If it chooses side $1$, it will flip two coins - one to recognize that it doesn't want to pick side $0$, and one to recognize that it does want to pick side $1$. More generally, if the algorithm chooses side $i$, it will flip $i + 1$ coins: $i$ to decide not to pick the previous $i$ sides, and one to decide to pick side $i$. Combined with the fact that we know that side $i$ is chosen with probability $p_i$, this means that $$\mathbb{E}[X] = \sum_{i = 1}^n{i \cdot \mathbb{P}[X = i]} = \sum_{i = 1}^n{i \cdot p_{i - 1}} = \sum_{i = 1}^n{((i - 1) p_{i - 1} + p_{i - 1})} = \sum_{i = 1}^n{((i - 1) p_{i - 1})} + \sum_{i = 1}^n{p_{i - 1}}$$ In the last simplification, notice that the first term is equivalent to $\sum_{i = 0}^{n-1}{i \cdot p_i}$, which is equal to $\mathbb{E}[p]$, the expected outcome of the die roll! Moreover, the second term is $1$, since it's the sum of the total probabilities. This means that $\mathbb{E}[X] = \mathbb{E}[p] + 1$. That is, the expected number of coin flips is one plus the expected value of the die roll!

Loaded Die from Biased Coins
Initialization time: $\Theta(n)$
Generation time: $\Theta(1)$ (best), $\Theta(n)$ (worst)
Memory usage: $\Theta(n)$

Generalizing Biased Coins: Simulating Loaded Dice

In the above example, we were able to efficiently simulate a biased coin because there was only one partition point that we needed to consider. How efficiently can we generalize this idea up to loaded dice, where the number of sides may be arbitrarily large? If you'll notice, a biased coin is exactly the same as a loaded die that just has two sides.
Consequently, we can think of a biased coin as just a special case of the more general problem we're interested in solving. When solving the biased coin problem, we split the range $[0, 1)$ into two regions - one for "heads" and one for "tails" - and then used the fact that there was just one splitting point to find the bucket we belong to. If we have an $n$-sided die, then we will have multiple buckets and, therefore, multiple splitting points. For example, suppose that we have a seven-sided die with probabilities $\frac{1}{4}, \frac{1}{5}, \frac{1}{8}, \frac{1}{8}, \frac{1}{10}, \frac{1}{10}, \frac{1}{10}$. If we were to partition the range $[0, 1)$ into seven pieces, we would do so as follows: Notice where these splits are located. The first partition begins at $0$ and ends at $\frac{1}{4}$. The second partition begins at $\frac{1}{4}$ and ends at $\frac{1}{4} + \frac{1}{5} = \frac{9}{20}$. More generally, if the probabilities are $p_0, p_1, ..., p_{n - 1}$, the partitions would be the ranges $[0, p_0), [p_0, p_0 + p_1), [p_0 + p_1, p_0 + p_1 + p_2),$ etc. That is, bucket $i$ is delimited by the range $$[\sum_{j = 0}^{i - 1}{p_j}, \sum_{j = 0}^{i}{p_j})$$ Notice that the difference between these two values is $p_i$, so the total area of the bucket is $p_i$ as required. Now that we know where the partitions are, if we were to pick a uniformly-random value $x$ in the range $[0, 1)$, how would we determine which range it fell into? Using our biased coin algorithm as a starting point, one idea would be as follows: starting with the endpoint of the first bucket, walk upward across the partition endpoints until we find one greater than the value of $x$. If we do this, we will have located the first bucket containing the point $x$, and we have our value. For example, if we picked the random value $x = \frac{27}{40}$, we would execute the following search: From which we could conclude that the die rolled a 3, zero-indexed.
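This left-to-right walk over the partition endpoints can be sketched in Python (the function name is mine, and the final return guards against floating-point round-off):

```python
import random

def roll_by_linear_scan(probs):
    """Map a uniform value in [0, 1) onto a side of the loaded die by walking
    the bucket endpoints left to right until one exceeds x."""
    x = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p  # right endpoint of bucket i
        if x < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point round-off
```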
This linear scan algorithm would give an $O(n)$-time algorithm for finding the side of the die that was rolled. However, we can dramatically improve this runtime by using the following observation: the sequence of endpoints of the buckets forms an ascending sequence (since we're always adding in more and more probabilities, none of which can be less than zero). Consequently, we are trying to answer the following question: given an ascending sequence of values and some test point, find the first value in the sequence strictly greater than the test point. This is a perfect spot to use a binary search! For example, here's an execution of binary search over the above array to find what bucket the value $x = \frac{39}{40}$ belongs to: This gives us a $\Theta(\log n)$ algorithm for mapping a uniformly-random value in the range $[0, 1)$ to a side of the die that was rolled. Moreover, this requires only $\Theta(n)$ preprocessing time to build up the table of endpoints; we simply compute the partial sums of the probabilities all the way up. This algorithm is sometimes called roulette wheel selection because it picks a random bucket using a technique similar to a roulette wheel - throwing a ball into the range and seeing where it lands. In pseudocode, the algorithm looks like this:

Algorithm: Roulette Wheel Selection
Allocate an array $A$ of size $n$
Set $A[0] = p_0$.
For each probability $i$ from $1$ to $n - 1$:
Set $A[i] = A[i - 1] + p_i$
Generate a uniformly-random value $x$ in the range $[0, 1)$
Using a binary search, find the index $i$ of the smallest element in $A$ larger than $x$.
Return $i$.

The comparison between this algorithm and the earlier one is quite impressive:

Roulette Wheel Selection
Initialization time: $\Theta(n)$
Generation time: $\Theta(\log n)$
Memory usage: $\Theta(n)$

It is clear that we now have a much better algorithm than what we started with.
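Roulette wheel selection fits naturally on top of Python's standard bisect module (a sketch; the function names are my own, and float probabilities are assumed):

```python
import random
from bisect import bisect_right

def make_wheel(probs):
    """Initialization: compute the running partial sums of the probabilities.

    endpoints[i] is the right endpoint of bucket i, so the list is ascending
    and its last entry is (up to round-off) 1.0.
    """
    endpoints = []
    total = 0.0
    for p in probs:
        total += p
        endpoints.append(total)
    return endpoints

def spin_wheel(endpoints):
    """Generation: binary-search for the first endpoint greater than x."""
    x = random.random()
    i = bisect_right(endpoints, x)
    return min(i, len(endpoints) - 1)  # clamp in case of round-off
```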
Discretizing the probabilities may have initially seemed promising, but this new approach based on a continuous value and binary search appears to be much better. However, it is still possible to improve upon these bounds by using a clever set of hybrid techniques, as we'll see in a minute. One interesting detail about this algorithm is that while using a binary search guarantees worst-case $O(\log n)$ time for random generation, it also eliminates the possibility of faster lookups; that is, the generation time is now $\Omega(\log n)$ as well. Is it possible to do better than this? It turns out that the answer is yes. Suppose that we switch from using a simple binary search over the list of cumulative probabilities to using a binary search tree. For example, given the above set of probabilities, we might build the following binary search tree over their cumulative distribution: Now, if we wanted to simulate a roll of the die, we can generate a uniformly-distributed number in the range $[0, 1)$, then look up which range it lies in using this BST. Since this is a balanced binary search tree, the best-case lookup time is $O(1)$ and the worst-case lookup time is $O(\log n)$. However, if we know more about the probability distribution, it may be possible to do much better than this. For example, suppose that our probabilities are $\frac{99}{100}, \frac{1}{600}, \frac{1}{600}, \frac{1}{600}, \frac{1}{600}, \frac{1}{600}, \frac{1}{600}$. That is, the probability distribution is extremely skewed, with almost all of the probability mass concentrated on a single side. We could build a balanced BST for these probabilities, as shown here: While this binary search tree is perfectly balanced, it's not a very good binary search tree for our application. Since we know that 99 times out of 100 the random value is going to be in the range $[0, \frac{99}{100})$, it doesn't make any sense to store the node for that range where it's currently located.
In fact, doing this would mean that almost all of the time we'd end up doing two unnecessary comparisons against the blue and yellow nodes. Since with very high probability we would want the node for the large range to be checked first, it makes sense to unbalance the tree in a way that makes the average case substantially better at the expense of the remaining cases. This is shown here: Now, we are very likely to terminate the search after immediately finding the bucket we want on our first try. In the very unlikely event that the bucket we want is contained in the sliver $[\frac{99}{100}, 1)$, we degrade gracefully to the rest of the tree, which is indeed well-balanced. More generally, we are interested in solving this problem: Given a set of probabilities, find the binary search tree of those probabilities that minimizes the expected lookup time. Fortunately, this problem is extremely well-studied and is called the optimal binary search tree problem. There are many algorithms for solving this problem; it is known that an exact solution can be found in time $O(n^2)$ using dynamic programming, and there exist good linear-time algorithms that can find approximate solutions. Additionally, the splay tree data structure, a self-balancing binary search tree, can be used to get to within a constant factor of the optimal solution. Interestingly, the best-case behavior for these optimized binary search trees occurs when the probability distributions are extremely skewed, since we can just move the nodes containing the bulk of the probability mass near the root of the tree, and their worst case is when the distribution is balanced, since in that case the tree has to be wide and shallow. This is the opposite of the behavior of the earlier algorithm that uses a fair die to simulate a loaded die! In the best case, we have a loaded die in which one side always comes up (that is, it occurs with probability 1, and each other side occurs with probability 0).
This is an extreme exaggeration of our earlier example, but would cause the search to always terminate after one lookup. In the worst case, all the probabilities are even, and we have to do a standard BST lookup. This gives the following:

Optimal Roulette Wheel Selection
Initialization time: $O(n^2)$
Generation time: $\Theta(1)$ (best), $O(\log n)$ (worst)
Memory usage: $\Theta(n)$

So far, we have seen two primitives that have, in some way, helped us build algorithms for simulating loaded dice: the fair die and the biased coin. Using purely a fair die, we came up with an (albeit impractical) algorithm for simulating a loaded die, and beginning with the intuition for biased coins we were able to invent a fast algorithm for simulating loaded dice. Is it possible to combine the two approaches together to build an algorithm built on both fair dice and biased coins? The answer is a resounding "yes," and in fact the resulting algorithm is superior to both of the above approaches. Up to this point, we have been visualizing the range $[0, 1)$ and the probabilities of the dice faces as a one-dimensional range. Both of the above algorithms work by picking some point in the range $[0, 1)$ and mapping it onto a line segment whose length corresponds to some probability. The longer we make our line segments, the higher the probability that a given segment is chosen. But what if instead of thinking in one dimension, we think in two? What if instead of thinking of the probability $p_i$ as the length of a line segment, we think of it as the area of a rectangle? Let's begin by returning to our older example of the probabilities $\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$.
Let's visualize these probabilities as rectangles of width $w$ (for some arbitrary $w > 0$) and height $p_i$ (and thus total area $w \cdot p_i$): Notice that the total area of these rectangles is, collectively, $w$, since the area is $$\sum_{i = 0}^{n - 1}{w p_i} = w \sum_{i = 0}^{n - 1}{p_i} = w$$ Now, suppose that we draw a bounding box around these rectangles, whose width is $4w$ (because there are four rectangles) and whose height is $\frac{1}{2}$ (since the tallest rectangle has height $\frac{1}{2}$): We can then think of this rectangle as being split into five regions - four regions corresponding to the different probabilities, and one region representing unused space. Given this partition, we can think of an algorithm for simulating random rolls of the die as a game of darts. Suppose that we throw a (perfectly uniformly-distributed) dart at this target. If it hits the unused space, we'll remove the dart and throw it again, repeating until we hit one of the rectangles. Since higher probabilities correspond to larger rectangles, the higher the probability that a particular face on the die comes up, the higher the probability that we eventually hit its rectangle. In fact, if we condition on the fact that we actually hit some rectangle, we have the following: $$\mathbb{P}[\mbox{hit rectangle for side i} | \mbox{hit some rectangle}] = \frac{\mbox{area of rectangle for i}}{\mbox{total rectangle area}} = \frac{w p_i}{w} = p_i$$ In other words, once we finally hit some rectangle with our uniformly-random dart, we will pick the rectangle for side $i$ of the loaded die with probability $p_i$, precisely the probability that we want it to have! In other words, if we can find some efficient way of simulating throwing random darts at this rectangle, we will have an efficient way for simulating a roll of the random die. 
One way we could think about throwing darts at this rectangle would be to pick two uniformly random values in the range $[0, 1)$, scale them to the appropriate width and height, then check what region was under the dart. However, this poses the same problem we had before when trying to determine which one-dimensional bucket a random value would fall into. Fortunately, there is a truly beautiful series of observations that makes it simple, if not trivial, to determine where we hit. As a first observation, note that we've shown that the widths of these rectangles can be arbitrarily chosen, so long as they all have the same width. The heights, of course, depend on the probabilities of the dice faces. However, if we were to uniformly scale all of the heights by some positive real number $h$, then the relative areas of all of the rectangles would be the same. In fact, for any positive real number $h$, we have that the total area of all the rectangles, once their heights are scaled by $h$, is given by $$\sum_{i = 0}^{n - 1}{w h p_i} = w h \sum_{i = 0}^{n - 1}{p_i} = w h$$ So now consider the probability of choosing any individual rectangle, conditioning on the fact that we hit some rectangle at all. Using similar math as before, we get the following: $$\mathbb{P}[\mbox{hit rectangle for side i} | \mbox{hit some rectangle}] = \frac{\mbox{area of rectangle for i}}{\mbox{total rectangle area}} = \frac{w h p_i}{w h} = p_i$$ So, in fact, there is no change to the probability of choosing any individual rectangle as long as we scale them linearly and uniformly. Since we can choose any positive scaling factor that we'd like, what if we scaled these rectangles so that the height of the bounding box is always 1?
Since the height of the bounding box is defined by the maximum value of $p_i$ over the input probabilities, we could begin by scaling every one of the rectangles by a factor of $\frac{1}{p_{max}}$, where $p_{max}$ is the maximum probability of all the input probabilities. This makes the height of the bounding box 1. Similarly, since we are allowed to pick any arbitrary width for the boxes, let's pick a width of 1. This means that given $n$ probabilities, the total width of the bounding box is $n$ and the total height is 1. This is shown here: We're now ready to think about how we might throw a random dart at this rectangle and determine what we've hit. The key insight is that we can break the rectangle down so that instead of consisting of several smaller rectangles and an oddly-shaped open space, the region is cut apart into a collection of $2n$ rectangles, two for each of the $n$ input probabilities. This is shown here: Notice how this rectangle is formed. For each side of the loaded die, we have one column of width 1 and height 1 cut into two spaces - a "yes" half-space corresponding to the rectangle for that side, and a "no" half-space corresponding to the remainder of the column. Now, let's think about how we might throw a dart at this. A perfectly uniform dart tossed at this rectangle would have an $x$ and a $y$ component. Here, the $x$ component corresponds to which column the dart lands in, and the $y$ component, which must be in the range $[0, 1)$, corresponds to how high up the column we are. The choice of the $x$ component influences which side of the loaded die we're considering, and the choice of the $y$ component determines whether or not we pick that side. But wait a minute - we've seen these two ideas before! Choosing the $x$ coordinate, which corresponds to a column, is equivalent to rolling a fair die to decide which column to pick.
Choosing the $y$ coordinate corresponds to flipping a biased coin to determine whether to choose the side or roll again! This observation is so important that we should make it extra clear: Choosing a random point in this rectangle is equivalent to rolling a fair die and flipping a biased coin. In fact, this result can be thought of in a much more powerful sense. In order to simulate a loaded die, we build a set of biased coins, one for each side of the die, and then roll a fair die to determine which coin to flip. Based on the die roll, if the appropriate coin comes up heads, we pick the given side, and if the coin comes up tails, we roll the die again and repeat. Let's recap the important points so far. First, the dimensions of these rectangles are as follows - for each side, the height of the "yes" rectangle is given by $\frac{p_i}{p_{max}}$, and the height of the "no" rectangle is given by $\frac{p_{max} - p_i}{p_{max}}$. This normalizes the total heights of the rectangles to be 1. Second, each rectangle has width $1$, though honestly this value doesn't matter. Finally, our algorithm is as follows: until we pick some outcome, roll the fair die to determine which column we are in (in other words, which biased coin to flip). Next, flip the appropriate biased coin. If it comes up heads, choose the outcome corresponding to the chosen column. Otherwise, repeat this process. Algorithm: Fair Die/Biased Coin Loaded Die Find the maximum value of $p_i$; call it $p_{max}$. Allocate an array $Coins$ of length $n$, corresponding to the heights of the "yes" rectangles in each row. Set $Coins[i] = \frac{p_i}{p_{max}}$ Until a value is found: Roll a fair, n-sided die and get an index $i$ in the range $[0, n)$. Flip a biased coin that comes up heads with probability $Coins[i]$. If the coin comes up heads, return $i$. Let's analyze the complexity of this algorithm. 
In the initialization step, it takes time O(n) to find the maximum probability, and then an additional O(n) time to allocate and populate the array $Coins$, so the total initialization time is O(n). In the generation step, in the best case we end up flipping heads on our very first coin, terminating in O(1). But how many iterations are required on expectation? To find this value, let's compute the probability that we actually end up choosing some side after a single iteration. Since the coins don't all have the same probability of coming up heads, this will depend on which coin actually is chosen. Fortunately, since we pick each coin with identical probability (namely, $\frac{1}{n}$), the math becomes much easier. Moreover, since we only end up flipping one coin, the events of each coin being chosen and coming up heads are all mutually exclusive, so the total probability that some coin is chosen, flipped, and comes up heads is given by the sum of the probabilities of picking each individual coin and having that individual coin coming up heads. We know that the probability that the coin for side $i$ comes up heads is given by $\frac{p_i}{p_{max}}$, so the total probability that some side is chosen is given by $$\sum_{i = 0}^{n - 1}{(\frac{1}{n} \frac{p_i}{p_{max}})} = \frac{1}{n}\sum_{i = 0}^{n - 1}{\frac{p_i}{p_{max}}} = \frac{1}{n \cdot p_{max}}\sum_{i = 0}^{n - 1}{p_i} = \frac{1}{n \cdot p_{max}}$$ If this is the probability that some coin is chosen on any one iteration, then the number of iterations follows a geometric distribution, and the expected number of iterations is given by the reciprocal of this fraction, which is $n \cdot p_{max}$. But what exactly does this mean? This depends very much on the choice of $p_{max}$. At one extreme, $p_{max}$ might be equal to $1$ (that is, the die always comes up the same way every time). In this case, the expected number of iterations is equal to $n$, meaning that on expectation we would need to roll the fair die $n$ times.
This makes sense, since the only way we would choose a side is if we were to pick the biased coin for the one side that always comes up heads, since each other side has a coin that never comes up heads at all. At the other extreme, the minimum value of $p_{max}$ is $\frac{1}{n}$, since if it were any lower than this the total probability of all sides would be less than one. If $p_{max} = \frac{1}{n}$, then the expected number of flips is 1. This too makes sense. If $p_{max} = \frac{1}{n}$, then each side has the same probability of being chosen (namely, $\frac{1}{n}$), so when we normalize the heights by $\frac{1}{p_{max}}$, every coin comes up heads with probability 1. Thus the die roll to choose which coin to flip will effectively be determining the outcome, since the coin always comes up heads and we never have to repeat ourselves. It's interesting that the expected number of flips depends solely on the value of $p_{max}$ and not any of the other probabilities involved, but if we return to our graphical intuition this does make sense. The total area of the rectangle at which we're shooting darts is always $n$, since we normalize the heights to be 1. Moreover, the total area held by the rectangles representing "yes" answers is given by $\frac{1}{p_{max}}$, since each rectangle has width 1 and height normalized by multiplying by $\frac{1}{p_{max}}$. This means that the ratio of the total area of the "yes" rectangles to the total area of the overall rectangle is $\frac{1}{n \cdot p_{max}}$. In other words, the space used up by the "no" rectangles depends purely on the value of $p_{max}$. It can be spread around or distributed however we choose, but ultimately its area is the same and the odds of some dart hitting it are independent of how it's spread.
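The whole scheme, along with an empirical check of the $n \cdot p_{max}$ bound, fits in a few lines. The following Python is a minimal sketch under stated assumptions: all function names are ours, not from any library, and the input probabilities are assumed to sum to 1.

```python
import random

def make_coins(probs):
    """Precompute the heads probability p_i / p_max for each side."""
    p_max = max(probs)
    return [p / p_max for p in probs]

def roll_loaded_die(coins, rng=random):
    """Roll a fair die to pick a coin, flip it, and retry on tails."""
    n = len(coins)
    while True:
        i = rng.randrange(n)          # fair n-sided die: pick a column
        if rng.random() < coins[i]:   # biased coin for side i
            return i                  # heads: commit to side i
        # tails: fall through and roll the fair die again

def average_rounds(probs, trials, seed=0):
    """Average number of die-roll/coin-flip rounds per sample."""
    rng = random.Random(seed)
    coins = make_coins(probs)
    n = len(coins)
    total = 0
    for _ in range(trials):
        rounds = 1
        while rng.random() >= coins[rng.randrange(n)]:
            rounds += 1
        total += rounds
    return total / trials
```

For the running example $\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$, the empirical frequencies converge to the input probabilities, and the average number of rounds comes out near $n \cdot p_{max} = 4 \cdot \frac{1}{2} = 2$.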
Comparing this algorithm to the others gives us this information: the fair die/biased coin loaded die has $\Theta(n)$ initialization time, generation time of $\Theta(1)$ in the best case and $\Theta(n)$ (expected) in the worst case, and $\Theta(n)$ memory usage. In the best case, this algorithm is better than the binary search algorithm from above, requiring just one coin flip. However, its worst-case behavior is exponentially worse. Is it possible to eliminate this worst-case behavior? The Alias Method The previous technique has excellent best-case behavior, generating a random roll using a single fair die roll and coin flip. On expectation, its worst-case behavior is much worse, though, potentially requiring a linear number of die rolls and coin flips. The reason for this is that, unlike the previous techniques, the algorithm may "miss" and have to repeatedly iterate until it reaches a decision. Graphically, this is because it works by throwing darts at a target that may contain a large amount of empty space not assigned to any outcome. If there were a way to eliminate all of that empty space such that every piece of the target was covered by a rectangle corresponding to some side of the loaded die, then we could just throw a single dart at it and read off the result. A particularly clever insight we might have is to adjust the height of the rectangle such that instead of having the height match the greatest probability, it matches the average probability. Let's consider our above example. Here, our probabilities are $\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$. Since there are four probabilities, the average probability must be $\frac{1}{4}$. What would happen if we tried normalizing the height of the box to $\frac{1}{4}$, the average, rather than $\frac{1}{2}$, the maximum? Let's see what happens. We begin with rectangles of width $1$ whose heights are equal to the initial probabilities: Now, we scale all of these rectangles so that a probability of $\frac{1}{4}$ would have height 1.
This works by multiplying each probability by four, yielding this setup: At this point, let's draw a $1 \times 4$ rectangle on top of this image. This will represent the target we'll be shooting at: As you can see, this doesn't quite work out, since the rectangles for $\frac{1}{2}$ and $\frac{1}{3}$ don't neatly fit into the box. But what if we allowed ourselves to cut the rectangles into smaller pieces? That is, what if we cut some of the space for the $\frac{1}{2}$ rectangle off and move it into the empty space above the space for one of the $\frac{1}{12}$ rectangles? This would give this setup, which still has the vertical bars hanging over, but not by too much: One way to completely eliminate the remaining overhang would be to move the extra pieces from the $\frac{1}{2}$ and $\frac{1}{3}$ bars into the empty space, but there's actually a better solution. Let's begin by moving enough of the $\frac{1}{2}$ bar out of the first column and into the third to completely fill in the remaining gap. This will leave a small gap in the first column, but will close the other gap: Finally, we can tuck the extra overhang from the second column into the first, producing this final rectangle: The resulting rectangle has several excellent properties. First, the total areas of the rectangles representing each side of the loaded die are unchanged from the original; all we've done is cut those rectangles into pieces and move them around. Since the areas of the original rectangles were distributed proportionally to the original probability distribution, the total area dedicated to each side of the die still represents the right probability. Second, notice that this new rectangle has no free space in it, meaning that any time we toss a dart at it, we are guaranteed to hit something that will give us an ultimate answer, not empty space that requires another dart toss.
This means that a single dart toss suffices to generate our random value. Finally, and most importantly, note that each column has at most two different rectangles in it. This means that we can retain our intuition from before - we roll a die to determine which biased coin to toss, then toss the coin. The difference this time is what the coin toss means. A toss of heads means that we pick one side of the die, and a toss of tails now means that we should pick some other side of the die (rather than rolling again). At a high level, the alias method works as follows. First, we create rectangles representing each of the different probabilities of the dice sides. Next, we cut those rectangles into pieces and rearrange them so that we completely fill in a rectangular target such that each column has a fixed width and contains rectangles from at most two different sides of the loaded die. Finally, we simulate rolls of the die by tossing darts randomly at the target, which we can do by a combination of a fair die and biased coins. But how do we even know that it's possible to cut the original rectangles apart in a way that allows each column to contain at most two different probabilities? This does not seem immediately obvious, but amazingly it's always possible to do this. Moreover, not only can we cut the rectangles into pieces such that each column contains at most two different rectangles, but we can do so in a way where one of the rectangles in each column is the rectangle initially placed in that column. If you'll notice, in the above rectangle rearrangement, we always cut a piece of a rectangle and moved it into another column, and never entirely removed a rectangle from its original column. This means that each column in the final arrangement will consist of some rectangle corresponding to the probability initially assigned there, plus (optionally) a second rectangle pulled from some other column. 
This second rectangle is often called the alias of the column, since the remaining probability of the column is used as an "alias" for some other probability. The use of the term "alias" here gives rise to the name "alias method." Before we go into the proof that it's always possible to distribute the probabilities in this way, we should take a quick second to sketch out how the algorithm actually works. Because each column of the resulting arrangement always contains some (potentially zero-height!) piece of the original rectangle from that column, to store the (potentially) two different rectangles occupying a column, implementations of the alias method typically work by storing two different tables: a probability table $Prob$ and an alias table $Alias$. Both of these tables have size $n$. The probability table stores, for each column, the probability within that column that the original rectangle will be chosen, and the alias table stores the identity of the second rectangle (if any) contained in that column. That way, generating a random roll of the die can be done as follows. First, using a fair die, choose some column $i$ uniformly at random. Next, flip a random coin with probability $Prob[i]$ of coming up heads. If the coin flips heads, output that the die rolled $i$, and otherwise output that the die rolled $Alias[i]$. For example, here are the probability and alias tables for the above configuration: Proving Alias Tables Exist We now need to formally prove that it is always possible to construct the $Alias$ and $Prob$ tables from above.
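Concretely, the generation half is tiny. Here is a hedged Python sketch (the function name is ours), along with one valid $Prob$/$Alias$ pair for the running example $\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$ worked out by hand. Treat these particular table values as an assumption: the exact tables depend on which pairings the construction makes, and any valid pairing works.

```python
import random

def alias_draw(prob, alias, rng=random):
    """One dart toss: a fair die picks a column, then a biased coin picks
    the column's own side (heads) or its alias (tails)."""
    i = rng.randrange(len(prob))   # fair die: which column?
    if rng.random() < prob[i]:     # biased coin for that column
        return i                   # heads: the column's own side
    return alias[i]                # tails: the side stored as the alias

# One valid table for probabilities 1/2, 1/3, 1/12, 1/12
# (column 0 never uses its alias, so its alias entry is arbitrary).
prob = [1.0, 2/3, 1/3, 1/3]
alias = [0, 0, 1, 0]
```

Sampling with these tables reproduces the target distribution; the proof that follows shows that a valid table like this always exists.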
In order to prove that this is always possible, we need to show that it is possible to do the following: Construct $(n \cdot p_i) \times 1$ rectangles for each of the probabilities $p_i$, cut them horizontally into smaller pieces, and distribute them into $n$ columns such that each column has height $1$, no column contains more than two rectangles, and some rectangle corresponding to side $i$ is placed in column $i$. Before going into the proof that it's always possible to do this, let's work through an example. Suppose that we have the four probabilities $\frac{1}{2}, \frac{1}{3}, \frac{1}{12}, \frac{1}{12}$ as above. This is a collection of four probabilities ($k = n = 4$) whose sum is $1 = \frac{4}{4}$. Although we saw above how to fill in the alias table by experimentation, let's instead try walking through this construction more explicitly by starting with a completely empty table and then filling it in. We begin by scaling all of these probabilities by a factor of four, giving us these probabilities and this empty table: Now, notice that of the four rectangles that we have to distribute, two of them ($\frac{1}{3}, \frac{1}{3}$) are less than 1. This means that they won't completely fill up a column and will need some other probability to fill in the remainder. Let's pick one of the two (say, the yellow one) and put it into its appropriate column: Now, we need to somehow make up the difference in the top of the column. To do this, we'll notice that two of the rectangles that have yet to be distributed have heights greater than 1 (namely, $2$ and $\frac{4}{3}$). Let's pick one of these two arbitrarily; here, let's use the $\frac{4}{3}$. We then distribute enough of the $\frac{4}{3}$ into the column to fill it completely; this ends up using $\frac{2}{3}$ of the $\frac{4}{3}$ up, as shown here: Now, notice what our setup looks like.
We now have three rectangles whose total area is $3$ and three open columns, so it seems like it should be possible to distribute those rectangles into the three columns. To do so, we'll use the same intuition as before. Notice that there is at least one rectangle whose height is less than $1$, so we'll pick one arbitrarily (let's say that we grab the $\frac{2}{3}$ rectangle) and place it into its column: We now need to top off the column, so we'll pick some probability that's at least 1 and use it to make up the rest of the column. There's only one choice here (using the $2$), so we'll pull $\frac{1}{3}$ off of the $2$ and put it atop the column: And we're now down to two rectangles whose total area is two. We now repeat this process by finding some rectangle whose height is at most 1 (here, the $\frac{1}{3}$) and putting it into its column: And then we find some rectangle of height at least $1$ to top off the column, using our only choice of the $\frac{5}{3}$: Now we have just one rectangle remaining, and it has area 1. We can thus finish the construction by just putting that rectangle in its own column: And voilà! We've filled in the table. Notice that the general pattern behind this construction is as follows: Find some rectangle that has height at most 1 and place it into its own column, setting the $Prob$ table to the height of that rectangle. Find some rectangle that has height at least 1 and use it to top off the column, setting the $Alias$ table to correspond to the side of the die represented by the rectangle. Can we prove that this general construction is always possible? That is, we don't end up getting "stuck" when distributing probabilities this way? Fortunately, the answer is yes. The intuition behind this is that we've scaled all of the probabilities such that the average of the new probabilities is now 1 (because originally it was $\frac{1}{n}$, and we multiplied everything by $n$).
We know that the minimum of all the scaled probabilities must be no greater than the average and that the maximum of all the scaled probabilities must be no less than the average, so when we first start off there always must be at least one element at most 1 (namely, the smallest of the scaled probabilities) and one element at least one (namely, the largest of the scaled probabilities). We can thus pair these elements together. But what about once we've removed these two? Well, when we do this, we end up removing one probability from the total and decreasing the total sum of the scaled probabilities by one. This means that the new average hasn't changed, since the average scaled probability is one. We can then repeat this procedure over and over again until eventually we've paired up all the elements. We can formalize this argument in the following theorem: Theorem: Given $k$ width-one rectangles of heights $h_0, h_1, ..., h_{k-1}$ such that $\sum_{i=0}^{k-1}{h_i} = k$, there is a way of cutting the rectangles and distributing them into $k$ columns, each of which has height 1, such that each column contains at most two different rectangles and the $i$th column contains at least one piece of the $i$th rectangle. Proof: By induction. As a base case, if $k = 1$, then we have just one rectangle and its height must be 1. We can therefore assign it to the $0$th column. Thus each column has height 1, contains at most two rectangles, and the $0$th column contains at least one piece of the $0$th rectangle. For the inductive step, assume that for some natural number $k$ the theorem holds and consider any $k + 1$ rectangles of width $1$ and heights $h_0, h_1, ..., h_{k}$ such that $\sum_{i = 0}^{k}{h_i} = k + 1$. We first claim that there is some height $h_l$ such that $h_l \le 1$ and some different height $h_g$ (such that $l \ne g$) such that $h_g \ge 1$. 
To see this, assume for the sake of contradiction that there is no $h_l$ with $h_l \le 1$; this would mean that $h_i > 1$ for all natural numbers $i$ in the range $0 \le i \le k$. But then we have that $k + 1 = \sum_{i = 0}^k{h_i} > \sum_{i=0}^k{1} = k + 1$, which is clearly impossible. Thus there is some index $l$ such that $h_l \le 1$. Now, suppose for the sake of contradiction that there is no other height $h_g$ (with $l \ne g$) such that $h_g \ge 1$. Then we must have that each other $h_g < 1$, which would (by similar logic) mean that $\sum_{i=0}^{k}{h_i} < k + 1$, a contradiction. Consequently, we have that $h_l \le 1$ and $h_g \ge 1$. Now, consider the following construction. Place $h_l$ into column $l$, and fill the remaining $1 - h_l$ space in the $l$th column with a piece of the rectangle $h_g$ (such space must exist, since $0 \le 1 - h_l \le 1$ and $h_g \ge 1$). This completely fills the column. We are now left with a collection of $k$ different pieces of rectangles whose total sum is $k$, since we removed $1$ total area from the rectangles, whose initial total sum was $k + 1$. Moreover, we have completely filled column $l$, so we will never try placing any more pieces of the rectangle there. Thus, by the inductive hypothesis, we can assign the remaining $k$ rectangles into $k$ columns while satisfying the above conditions. Combined with the fact that we have now filled column $l$, this means that we have a way of filling all the columns while satisfying the constraints. This completes the induction. This is a constructive proof that says that not only can we always build the alias table, but that the above algorithm of finding a rectangle of height at most one and pairing it with a rectangle of height at least one will always succeed. From here, we can start devising faster and faster algorithms for computing alias tables. 
Generating Alias Tables Using just what we have said above, we can get a pretty good algorithm for simulating loaded die rolls using the alias method. The initialization works by repeatedly scanning the input probabilities to find a value at most 1 and a value at least 1, combining them together to fill a column:

Algorithm: Naive Alias Method

Initialization:
1. Multiply each probability $p_i$ by $n$.
2. Create arrays $Alias$ and $Prob$, each of size $n$.
3. For $j = 1 \mbox{ to } n - 1$:
    1. Find a probability $p_l$ satisfying $p_l \le 1$.
    2. Find a probability $p_g$ (with $l \ne g$) satisfying $p_g \ge 1$.
    3. Set $Prob[l] = p_l$.
    4. Set $Alias[l] = g$.
    5. Remove $p_l$ from the list of initial probabilities.
    6. Set $p_g := p_g - (1 - p_l)$.
4. Let $i$ be the last probability remaining, which must have weight 1. Set $Prob[i] = 1$.

Generation:
1. Generate a fair die roll from an $n$-sided die; call the side $i$.
2. Flip a biased coin that comes up heads with probability $Prob[i]$.
3. If the coin comes up "heads," return $i$. Otherwise, return $Alias[i]$.

The generation step of this algorithm is exactly the same as the method described above, and runs in $\Theta(1)$. The initialization step, however, requires multiple iterations. First, we need to spend $\Theta(n)$ time scaling each probability by a factor of $n$, and need $O(n)$ time to allocate the two arrays. The inner loop executes $\Theta(n)$ times, on each iteration doing $O(n)$ work to scan the array, remove one of the array elements, and update the probabilities. This gives a total of $O(n^2)$ initialization work. If we consider this algorithm in context, we have the following: the naive alias method has $O(n^2)$ initialization time, $\Theta(1)$ generation time, and $\Theta(n)$ memory usage. Compared to the other efficient simulation techniques, this naive alias method has a large initialization cost, but can then simulate die rolls extremely efficiently.
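As a sketch, the naive construction translates to Python along these lines. The names are ours, and for clarity this version ignores the floating-point pitfalls discussed later and assumes the input probabilities sum to 1.

```python
def naive_alias_tables(probs):
    """O(n^2) construction: repeatedly scan for a height <= 1 and a
    height >= 1, and use the large one to top off the small one's column."""
    n = len(probs)
    scaled = [p * n for p in probs]      # scale so the average height is 1
    remaining = list(range(n))           # sides whose columns aren't filled
    prob = [0.0] * n
    alias = [0] * n
    while len(remaining) > 1:
        # linear scans: one height at most 1 and a different height at least 1
        l = next(i for i in remaining if scaled[i] <= 1)
        g = next(i for i in remaining if i != l and scaled[i] >= 1)
        prob[l] = scaled[l]              # the "yes" piece of column l
        alias[l] = g                     # the piece of g that tops it off
        scaled[g] -= 1 - scaled[l]       # g gave up (1 - p_l) of its area
        remaining.remove(l)
    prob[remaining[0]] = 1.0             # the last rectangle has height 1
    return prob, alias
```

A quick consistency check: the mass each side receives across all columns (its own $Prob$ entry plus $1 - Prob[j]$ for every column $j$ that aliases to it) should reproduce its original probability.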
If we could somehow reduce the initialization cost to something lower (say, $O(n)$), then this technique would be strictly better than all of the other techniques employed here. One simple way to reduce the initialization cost is to use a better data structure for storing the heights as we go. In the naive version, we use an unsorted array to hold all the probabilities, meaning that it takes $O(n)$ work to locate the two probabilities we want. A better alternative would be to use a balanced binary search tree to hold the values. This way, we could locate the values $p_g$ and $p_l$ in $O(\log n)$ time by finding the maximum and minimum values in the tree. Deleting $p_l$ could be done in $O(\log n)$ time, and updating the probability of $p_g$ could also be done in $O(\log n)$ time by simply removing it from the tree and reinserting it. This gives the following algorithm:

Algorithm: Alias Method

Initialization:
1. Create arrays $Alias$ and $Prob$, each of size $n$.
2. Create a balanced binary search tree $T$.
3. Insert $n \cdot p_i$ into $T$ for each probability $i$.
4. For $j = 1 \mbox{ to } n - 1$:
    1. Find and remove the smallest value in $T$; call it $p_l$.
    2. Find and remove the largest value in $T$; call it $p_g$.
    3. Set $Prob[l] = p_l$.
    4. Set $Alias[l] = g$.
    5. Set $p_g := p_g - (1 - p_l)$.
    6. Add $p_g$ to $T$.
5. Let $i$ be the last probability remaining, which must have weight 1. Set $Prob[i] = 1$.

The generation step is unchanged. Now, our algorithm's initialization is much faster. Creating $Alias$ and $Prob$ still takes $O(n)$ time each, and adding the probabilities to the BST $T$ will take $\Theta(n \log n)$ time. From there, we do $\Theta(n)$ iterations of filling in the table, each of which takes $O(\log n)$ work. This gives an overall runtime of $O(n \log n)$ for the initialization, as seen here: this version of the alias method has $O(n \log n)$ initialization time, $\Theta(1)$ generation time, and $\Theta(n)$ memory usage. However, there is an algorithm that runs even faster than this approach. It's remarkably simple, and is perhaps the cleanest of all of the algorithms for implementing the alias method. This algorithm was originally described in the paper "A Linear Algorithm For Generating Random Numbers With a Given Distribution" by Michael Vose, and has become the standard algorithm for implementing the alias method.
The idea behind Vose's algorithm is to maintain two worklists, one containing the elements whose height is less than 1 and one containing the elements whose height is at least 1, and to repeatedly pair the first elements of each worklist. On each iteration, we consume the element from the "small" worklist, and potentially move the remainder of the element from the "large" worklist into the "small" worklist. The algorithm maintains several invariants:

1. The elements of the "small" worklist are all less than 1.
2. The elements of the "large" worklist are all at least 1.
3. The sum of the elements in the worklists is always equal to the total number of elements.

For simplicity, each worklist does not store the actual probability, but rather some pointer back to the original probability list saying which side of the loaded die is being referenced. Given these invariants, the algorithm is given below:

Algorithm: (Unstable) Vose's Alias Method

Caution: This algorithm suffers from numerical inaccuracies. A more numerically sound algorithm is given later.

1. Create arrays $Alias$ and $Prob$, each of size $n$.
2. Create two worklists, $Small$ and $Large$.
3. Multiply each probability by $n$.
4. For each scaled probability $p_i$:
    1. If $p_i < 1$, add $i$ to $Small$.
    2. Otherwise ($p_i \ge 1$), add $i$ to $Large$.
5. While $Small$ is not empty:
    1. Remove the first element from $Small$; call it $l$.
    2. Remove the first element from $Large$; call it $g$.
    3. Set $Prob[l] = p_l$.
    4. Set $Alias[l] = g$.
    5. Set $p_g := p_g - (1 - p_l)$.
    6. If $p_g < 1$, add $g$ to $Small$.
    7. Otherwise ($p_g \ge 1$), add $g$ to $Large$.
6. While $Large$ is not empty:
    1. Remove the first element from $Large$; call it $g$.
    2. Set $Prob[g] = 1$.

Given the three above invariants, the first part of this algorithm (everything except the last loop) should be reasonably self-explanatory: we continuously pair some small element from $Small$ with a large element from $Large$ as normal, then add the remainder of the large element to the appropriate worklist. The last loop in the algorithm requires some explanation.
Once we have exhausted all of the elements from the $Small$ list, there will be at least one element left over in the $Large$ list (since if every element was in $Small$, the sum of the elements would have to be less than the number of remaining elements, violating the last invariant). Since every element of $Large$ is at least 1, and since the sum of the $k$ elements in $Large$ must be equal to $k$, this means that every element in $Large$ must be exactly equal to 1, since otherwise the total would be too large. This final loop thus sets every large element's probability to be 1 so that the columns containing the large element are all equal to 1. In this algorithm, the type of worklist does not matter. Vose's original paper uses stacks for the worklist because they can be efficiently implemented using arrays, but we could use a queue instead if we'd like. For simplicity, though, we'll use a stack. Before doing an analysis of the algorithm, let's first trace through an example to see how it works. Let's consider an example of using the seven probabilities $\frac{1}{4}, \frac{1}{5}, \frac{1}{8}, \frac{1}{8}, \frac{1}{10}, \frac{1}{10}, \frac{1}{10}$. To highlight the fact that the algorithm doesn't sort the probabilities or require them to be sorted, let's order them arbitrarily as $\frac{1}{8}, \frac{1}{5}, \frac{1}{10}, \frac{1}{4}, \frac{1}{10}, \frac{1}{10}, \frac{1}{8}$. The algorithm begins by adding these elements to two work stacks, as shown here: We now place the top of the $Small$ stack into its slot, moving the magenta rectangle into its final position: Now, we use the top of the $Large$ stack (the cyan rectangle) to fill in the rest of the column. Since $\frac{7}{4} - \frac{1}{8} = \frac{13}{8} \ge 1$, we leave the cyan block atop the $Large$ stack, as shown here: We then repeat this process. 
We move the rectangle on top of the $Small$ stack into its column, then top off the difference with the top of the $Large$ stack: And once more: When we repeat this process next, we'll find that while we can use the cyan block to cover the slack space in the table, doing so ends up making the cyan block have height less than one. Consequently, we move the cyan block atop the small stack, as shown here: Now, when we process the $Small$ worklist, we end up putting the cyan block in its place, then using the yellow block to fill in the slack: We then process the $Small$ stack to put the orange block into position, topping it off with the yellow block: And finally, since the $Small$ stack is empty, we put the yellow block into its own column and are done. We now have a well-formed alias table for these probabilities. A Practical Version of Vose's Algorithm Unfortunately, the above algorithm, as written, is not numerically stable. On an idealized machine that can do arbitrary-precision real number computations it's fine, but if you were to try running it using IEEE-754 doubles, it may end up completely failing. There are two sources of inaccuracy that we need to deal with before moving on: The computation to determine whether or not a probability belongs in the $Small$ or $Large$ worklist may be inaccurate. Specifically, it may be possible that scaling up the probabilities by a factor of $n$ has caused probabilities equal to $\frac{1}{n}$ to end up being slightly less than $1$ (thus ending up in the $Small$ list rather than the $Large$ list). The computation that subtracts the appropriate probability mass from a larger probability is not numerically stable and may introduce significant rounding errors. This may end up putting a probability that should be in the $Large$ list into the $Small$ list instead. 
The combination of these two factors means that we may end up with the algorithm accidentally putting all of the probabilities into the $Small$ worklist instead of the $Large$ worklist. As a result, the algorithm may end up failing because it expects the $Large$ worklist to be nonempty when the $Small$ worklist is nonempty. Fortunately, fixing this ends up not being particularly difficult. First, we will update the inner loop of the algorithm so that it terminates whenever either of the two worklists is empty, so we don't accidentally end up looking at nonexistent elements from the $Large$ worklist. Second, when one worklist is empty, we'll set the remaining probabilities of the elements in the other worklist to all be $1$, since, mathematically, this should only occur if all of the remaining probabilities are precisely equal to $1$. Finally, we'll replace the computation that updates the large probabilities with a slightly more stable computation. This is shown here:

Algorithm: Vose's Alias Method

1. Create arrays $Alias$ and $Prob$, each of size $n$.
2. Create two worklists, $Small$ and $Large$.
3. Multiply each probability by $n$.
4. For each scaled probability $p_i$:
    1. If $p_i < 1$, add $i$ to $Small$.
    2. Otherwise ($p_i \ge 1$), add $i$ to $Large$.
5. While $Small$ and $Large$ are not empty: ($Large$ might be emptied first)
    1. Remove the first element from $Small$; call it $l$.
    2. Remove the first element from $Large$; call it $g$.
    3. Set $Prob[l] = p_l$.
    4. Set $Alias[l] = g$.
    5. Set $p_g := (p_g + p_l) - 1$. (This is a more numerically stable option.)
    6. If $p_g < 1$, add $g$ to $Small$.
    7. Otherwise ($p_g \ge 1$), add $g$ to $Large$.
6. While $Large$ is not empty:
    1. Remove the first element from $Large$; call it $g$.
    2. Set $Prob[g] = 1$.
7. While $Small$ is not empty: (This is only possible due to numerical instability.)
    1. Remove the first element from $Small$; call it $l$.
    2. Set $Prob[l] = 1$.

All that's left to do now is analyze the algorithm's complexity. Seeding the worklists takes a total of $\Theta(n)$ time, because we add each element to exactly one of the worklists. The inner loop does $\Theta(1)$ work per iteration, since it needs to remove two elements from the worklists, update two arrays, and add one element back to a worklist. It can't execute more than $O(n)$ times, since each iteration decreases the number of elements (collectively) in the worklists by one by eliminating the smaller probability. The last two loops can each execute at most $O(n)$ times, since there are at most $O(n)$ elements in the $Large$ and $Small$ worklists.
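Putting the pieces together, the stable construction can be sketched in self-contained Python as follows. The function name is ours, the worklists are plain stacks as in Vose's paper, and the input probabilities are assumed to sum to 1.

```python
def vose_alias_tables(probs):
    """Linear-time alias-table construction with the numerically
    guarded loop conditions and the stabler update for p_g."""
    n = len(probs)
    prob = [0.0] * n
    alias = [0] * n
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1]   # heights < 1
    large = [i for i, s in enumerate(scaled) if s >= 1]  # heights >= 1
    while small and large:            # Large might be emptied first
        l = small.pop()
        g = large.pop()
        prob[l] = scaled[l]           # column l's own piece
        alias[l] = g                  # topped off by a piece of g
        scaled[g] = (scaled[g] + scaled[l]) - 1   # stabler than g -= (1 - l)
        (small if scaled[g] < 1 else large).append(g)
    for g in large:                   # leftovers should be exactly 1
        prob[g] = 1.0
    for l in small:                   # reachable only via rounding error
        prob[l] = 1.0
    return prob, alias
```

On the seven probabilities from the worked trace, the mass each side receives across all columns matches its input probability, and every element passes through the worklists only a bounded number of times.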
This gives a total runtime of $\Theta(n)$, which (as seen below) is as good as we're going to get:

Vose's Alias Method: $\Theta(n)$ initialization time, $\Theta(1)$ generation time, $\Theta(n)$ memory usage.

Phew! We've covered a lot of ground here! We've explored several different methods for simulating loaded dice, beginning with a very simple set of techniques and concluding with extremely fast and efficient algorithms. Each method shows off a different set of techniques, and I find the final version (Vose's alias method) to be one of the most interesting and elegant algorithms I have ever come across. If you are interested in seeing code for Vose's alias method, including a quick summary of what complications arise in practice due to numerical inaccuracy, I have a Java implementation of the alias method available at the Archive of Interesting Code. If you have any questions, comments, or corrections, please feel free to contact me at [email protected].
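As a companion to that Java implementation, here is a hedged Python sketch of the stabilized table construction and the two-step sampling. The function names are my own, not from the article, and the worklist handling follows the practical version described above.

```python
import random

def build_alias_table(probs):
    """Build (Prob, Alias) tables using the numerically stable update."""
    n = len(probs)
    prob, alias = [0.0] * n, [0] * n
    scaled = [p * n for p in probs]                 # scale each probability by n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:                          # stop when either list empties
        l, g = small.pop(), large.pop()
        prob[l], alias[l] = scaled[l], g
        scaled[g] = (scaled[g] + scaled[l]) - 1.0   # stable update of p_g
        (small if scaled[g] < 1.0 else large).append(g)
    while large:                                    # leftovers are (nearly) 1
        prob[large.pop()] = 1.0
    while small:                                    # only due to rounding error
        prob[small.pop()] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    """Fair die roll, then a biased coin flip."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

A handy sanity check: each outcome $i$ keeps mass $Prob[i]/n$ and receives $(1 - Prob[j])/n$ from every $j$ aliased to it, which should reproduce the original distribution.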
Estimates on the dimension of a global attractor for a semilinear dissipative wave equation on $\mathbb R^N$

Nikos I. Karachalios (Department of Statistics and Actuarial Science, University of the Aegean, Karlovassi 83200, Samos, Greece) and Nikos M. Stavrakakis (Department of Mathematics, National Technical University, Zografos Campus 15780, Athens, Greece)

Discrete & Continuous Dynamical Systems, November 2002, 8(4): 939-951. doi: 10.3934/dcds.2002.8.939. Received April 2001; Revised May 2002; Published July 2002.

We discuss estimates of the Hausdorff and fractal dimension of a global attractor for the semilinear wave equation $u_{t t} +\delta u_t -\phi (x)\Delta u + \lambda f(u) = \eta (x)$, $x \in \mathbb R^N$, $t \geq 0$, with the initial conditions $u(x,0) = u_0 (x)$ and $u_t(x,0) = u_1 (x)$, where $N \geq 3$, $\delta >0$ and $(\phi (x))^{-1}:=g(x)$ lies in $L^{N/2}(\mathbb R^N)\cap L^\infty (\mathbb R^N)$. The energy space $\mathcal X_0=\mathcal D^{1,2}(\mathbb R^N) \times L_g^2(\mathbb R^N)$ is introduced to overcome the difficulties related to the non-compactness of operators, which arise in unbounded domains. The estimates on the Hausdorff dimension are in terms of given parameters, due to an asymptotic estimate for the eigenvalues $\mu$ of the eigenvalue problem $-\phi(x)\Delta u=\mu u$, $x \in \mathbb R^N$.

Keywords: Dynamical systems, hyperbolic equations, unbounded domains, attractors, Hausdorff dimension, generalized Sobolev spaces.

Mathematics Subject Classification: 35B40, 35B41, 35L15, 37L3.
City-scale holographic traffic flow data based on vehicular trajectory resampling

Data Descriptor

Yimin Wang1,2, Yixian Chen1,2, Guilong Li1,2, Yuhuan Lu1, Zhaocheng He1,2, Zhi Yu1,2 & Weiwei Sun3

Scientific Data volume 10, Article number: 57 (2023)

Despite abundant accessible traffic data, research on traffic flow estimation and optimization still faces a dilemma between detail and integrity in measurement. A dataset of city-scale continuous vehicular trajectories with the finest resolution and full integrity, known as holographic traffic data, would be a breakthrough, for it could reproduce every detail of the traffic flow evolution and reveal the personal mobility patterns within the city. Due to the high coverage of Automatic Vehicle Identification (AVI) devices in Xuancheng city, we constructed one month of continuous trajectories for about 80,000 vehicles per day, with accurate intersection passing times and no travel-path estimation bias. With such holographic traffic data, it is possible to reproduce every detail of the traffic flow evolution. We present a set of traffic flow data based on resampling the holographic trajectories, covering the whole city, including stationary average speed and flow data at 5-minute intervals and dynamic floating car data (FCD).

Measurement(s): speed. Technology Type(s): interpolation imputation technique. Sample Characteristic - Location: Xuancheng City Prefecture.

Background & Summary

The hologram technology1 uses continuous media to record the optical information of objects, whose three-dimensional light field can be reproduced afterward. Analogously, in this paper, the holographic data of the traffic flow is defined as the global information of all vehicles' dynamics, i.e., the trajectories of each vehicle in the traffic flow.
The ability to reproduce accurate traffic flow on a city-wide scale has significant implications for real-world traffic control, path planning, and decision-making processes. Therefore, trajectory reconstruction is essential, considering the limitations of directly observed data. The most intuitive way to get trajectory data might be object recognition from a high-angle camera, as in the well-known NGSIM dataset2. However, further data enhancement procedures are needed to overcome the measurement error, such as data filtering3,4 and traffic dynamics-based model calibration5. Considering the price and the installation coverage, high-angle cameras are more suitable for local scenarios. On the contrary, FCD has the advantage of spatial-temporal coverage and the ability to track individual trajectories, which is better for creating city-wide scenarios. Such FCD could be generated by various mobile sensors, such as GPS, RFID, or automated vehicles' built-in sensors. In this case, the challenge is reconstructing the non-equipped vehicles' trajectories in the traffic flow. Using the "first-in-first-out" principle at signalized intersections and traffic wave theory, one can reconstruct the trajectories of each vehicle based on the partial observation of the floating cars6. With the development of connected and automated vehicles (CAVs), the method could also be used in the mixed traffic flow of human-driven vehicles and CAVs7,8. However, the reconstructed data's accuracy depends on the floating cars' sampling rate, and the rate changes during the day, which makes the data uncertain. On the other hand, an AVI9 device is able to capture the identity and the timestamp of vehicles passing a specific checkpoint on the road. With the growing number of traffic cameras, AVI detectors are implemented at almost every intersection in Chinese cities.
Thanks to the widely distributed AVI detectors on the road network, one can obtain timestamped location sequences of all vehicles. With such comprehensively identified traffic data, it is possible to generate holographic trajectories by enriching the details of traffic flow dynamics. This paper presents a method to reconstruct vehicle trajectories from discrete series of AVI observations. Based on the reconstructed trajectories, we propose a sampling method on traffic flow data to simulate detection processes from both the Eulerian and Lagrangian views of traffic flow observation, such as traffic count detection by loop detectors and real-time position detection by floating cars. Moreover, the proposed methods are implemented in Xuancheng, China. With 97% of intersections equipped with AVI devices, the system captures almost every vehicular movement on the road network, producing 4 million records daily. In this sense, Xuancheng might be known as the first city with full-coverage, round-the-clock insight into vehicular trips. Considering the risk of personal information leakage, researchers are encouraged to collect cross-sectional aggregated data and limited vehicular trajectories through a supervised interactive virtual traffic measurement service. Such resampled traffic data could support a variety of transportation-related research. For instance, 1) consistent multi-source detection data could be resampled from the holographic dataset for data fusion research; 2) mobility patterns could be found from fully sampled individual trip data; 3) optimal traffic detector deployment plans could be tested by placing custom virtual detectors on the data platform. AVI technology is widely used in traffic enforcement cameras to automatically identify vehicles involved in traffic violations10, saving substantial human effort in recognizing license plates from raw images.
Generally, active AVI detection identifies and records every vehicle passing the checkpoint11, even those not involved in traffic violations. Thus, each vehicle on the road network generates a trajectory constituted by a series of identifying records known as license plate recognition (LPR) data12. However, in the early days, the AVI deployment coverage and license recognition accuracy were not sufficient to obtain precise travel paths. Hence, much research focused on the macroscopic profile of the traffic flow, such as origin-destination (OD) reconstruction13,14 and speed profile estimation15. With the significant development of dynamic AVI technology and the wide deployment of AVI cameras, it is possible to reconstruct the closed travel chain using successive LPR records16,17. Moreover, deep learning algorithms such as graph neural networks (GNNs) have been employed to reduce uncertainties in identifying vehicles in recent research18,19. Although the above methods provide plausible solutions to trip reconstruction, path estimation errors are introduced due to the limited AVI coverage. The estimation accuracy mainly depends on the coverage rate, that is, the proportion of AVI-equipped intersections in the whole road network. Higher coverage of AVI-equipped intersections implies that there are fewer trip paths to reconstruct. With the benefit of this high coverage, we can get promising results from simple and effective reconstruction algorithms. The generic workflow for generating the holographic trajectories and the related resampled data is depicted in Fig. 1. Two main procedures (P1 & P2) turn the discrete raw LPR data into continuous trajectories through the workflow. Trip measurement turns the partially observable LPR data into segmental trip data with determined paths on a constructed full-sensing road network (FSRN). Then trajectory reconstruction interpolation is applied to each segment to form the holographic traffic flow data20.
Finally, one can run virtual traffic detection (P3) on the holographic trajectories and resample various traffic flow data. Data processing workflow. Road network description To avoid path estimation error, the trajectory reconstruction is conducted on a well-defined road network onto which the LPR data are mapped. This paper describes the physical road network (PRN) as a directed graph, denoted as \(G^{\ast }(N^{\ast }, S^{\ast })\). The other related notation is in Table 2. There should be at most one trip path for any series of LPR records to guarantee no path estimation bias, i.e., \(m({{\boldsymbol{a}}}^{s},t)\in \{0,1\}\). Let \({N}^{A}\) be the set of the AVI-equipped intersections. It is clear that \({N}^{A}\subseteq {N}^{\ast }\). Under the ideal circumstance that \({N}^{A}={N}^{\ast }\), all the trip paths on the physical network can be observed. When \({N}^{A}\subset {N}^{\ast }\), it is still possible to capture all of the trips, as long as the following full-sensing condition is satisfied. Definition 0.1 (Full-sensing road network (FSRN)) A full-sensing road network (FSRN) is a road network graph in which, among all the paths between any two different AVI-equipped intersections, there is no more than one path passing through non-AVI-equipped intersections. It guarantees that the path between two consecutive LPR records is determined. Details of the full-sensing theorem are in Appendix A. This theorem demonstrates that it is unnecessary to deploy an AVI detector at each intersection to achieve the full-sensing condition. Let an LPR record be \({{\bf{a}}}_{I}=({t}_{I},I)\), containing the timestamp and indicator of a vehicle passing Node I. Then the record of the trip \({r}_{I,J}\) consists of a series of spatial-temporal locations, i.e., \({{\boldsymbol{a}}}_{I,J}=\{{{\boldsymbol{a}}}_{I},{{\boldsymbol{a}}}_{A},...,{{\boldsymbol{a}}}_{J}\}\). For example, for the consecutive LPR records \(\{{{\boldsymbol{a}}}_{B},{{\boldsymbol{a}}}_{D}\}\) in Fig. 2, the path \({r}_{B,D}=\{B,E,D\}\) can be determined regardless of missing detection.
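The full-sensing condition of Definition 0.1 can be checked mechanically: for every ordered pair of AVI-equipped nodes, count the directed paths whose intermediate nodes are all non-equipped, and require at most one. The sketch below is illustrative only; the toy graphs are assumptions loosely modeled on the Fig. 2 example, not the Xuancheng network.

```python
def count_hidden_paths(graph, avi, src, dst):
    """Count directed src->dst paths whose intermediate nodes are all non-AVI."""
    count = 0
    stack = [(src, {src})]
    while stack:
        node, visited = stack.pop()
        for nxt in graph.get(node, []):
            if nxt == dst:
                count += 1
            elif nxt not in visited and nxt not in avi:
                stack.append((nxt, visited | {nxt}))
    return count

def is_full_sensing(graph, avi):
    """True iff every AVI pair has at most one path through non-AVI nodes."""
    return all(count_hidden_paths(graph, avi, s, d) <= 1
               for s in avi for d in avi if s != d)
```

With E as the only non-equipped node between B and D, the check passes; adding a second hidden route (say via an extra node F) makes it fail.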
Demonstration of the road network and AVI deployment. Generally speaking, if the PRN fails the full-sensing condition, the challenge is to construct an FSRN according to the locations of the AVI-equipped intersections. The idea is to extract an FSRN from the physical network by eliminating some road segments and intersections. A trip on the PRN would then be divided into two kinds of parts: on-FSRN parts and off-FSRN parts. For instance, a trip \({r}_{A,I}=\{A,B,J,K,F,I\}\) in Fig. 2 would be divided into \({r}_{A,B}=\{A,B\}\), \({r}_{B,F}=\{B,J,K,F\}\), and \({r}_{F,I}=\{F,I\}\), where \({r}_{B,F}\) is the off-FSRN part. Furthermore, the closed traffic zone is constructed to keep the off-FSRN parts within a particular area. Definition 0.2 (Closed traffic zone) A closed traffic zone is an area bounded by FSRN road segments, such that for any non-FSRN segments in the zone, their connected segments are also within the zone area. In this way, a trip on the physical road network may be represented by several parts on the FSRN, separated by stays or movements within the traffic zones. The trip \({r}_{A,I}\) mentioned above could be represented by the inter-zone movements \({r}_{A,B}\) and \({r}_{F,I}\), and the inner-zone activity \({r}_{B,F}\). Related details can be found in Appendix B. In order to capture vehicular movements at as high a resolution as possible from fixed-location AVI sensing, the challenge is to minimize the area of the traffic zones by constructing a suitable sensing network under the constraint of the full-sensing criterion. Additionally, the more intersections are AVI-equipped, the more closely the FSRN resembles the PRN and the more detailed the activities that can be captured, i.e., \({N}^{A}\to {N}^{\ast }\), \(FSRN\to PRN\). The current sensing network of Xuancheng city is shown in Fig. 3. In Xuancheng city, the AVI installation rate among the intersections is 97%. Road network and AVI distribution of Xuancheng city.
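Splitting a PRN trip into on-FSRN movements and inner-zone activity, as in the \(r_{A,I}\) example above, amounts to cutting the node sequence wherever segment membership changes. A minimal sketch (the FSRN segment set is assumed given; node names follow the Fig. 2 example):

```python
def split_trip(path, fsrn_segments):
    """Split a node path into (on_fsrn, subpath) pieces by segment membership.

    path: list of node labels along the trip.
    fsrn_segments: set of directed (upstream, downstream) pairs on the FSRN.
    """
    pieces = []
    for a, b in zip(path, path[1:]):
        on = (a, b) in fsrn_segments
        if pieces and pieces[-1][0] == on:
            pieces[-1][1].append(b)       # extend the current run
        else:
            pieces.append([on, [a, b]])   # start a new on/off run
    return [(on, nodes) for on, nodes in pieces]
```

On the example trip {A, B, J, K, F, I} with only (A, B) and (F, I) on the FSRN, this yields the three parts \(r_{A,B}\), \(r_{B,F}\), and \(r_{F,I}\) described above.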
Despite such an almost ideal trip observation in Xuancheng, trajectory reconstruction is still a problem of interpolating between the passing times observed at the upstream and downstream ends of a road segment. For trajectories, the turning direction at each intersection can easily be inferred from downstream LPR records, while the exact lanes can hardly be recognized. Consider a series of AVI records from the network in Fig. 2, \({{\boldsymbol{a}}}_{A,F}=\{{{\boldsymbol{a}}}_{A},{{\boldsymbol{a}}}_{B},{{\boldsymbol{a}}}_{C},...,{{\boldsymbol{a}}}_{F}\}\). We can infer that the vehicle passed straight through Intersection B because \({s}_{A,B}\) and \({s}_{B,C}\) are in the same direction. And the vehicle was most likely in the right-turning stream passing Intersection C for a similar reason. The lane-level information, unfortunately, lacks confidence due to complications such as shared left-turn/through lanes and even occasional detection errors by the AVI cameras. Thus, the traffic flow dynamics are described by the turning streams at each intersection, rather than by the different lanes on the road segments21. For vehicular dynamics within the road segment \({s}_{I,J}\) of the trip, the trajectory \(x(t)\) between aI and aJ can be calculated as follows: $$x(t)={\int }_{{t}_{I}}^{t}v(t)dt,\ t\in ({t}_{I},{t}_{J})$$ Since traffic flow dynamics are adapted to the stream level, a vehicular location on the FSRN at time t consists of the linear reference from the road segment's upstream end and the turning direction. The direction is described as u(t), representing the traffic stream from the current segment to the next segment of the trip.
u(t) during the trip from I to O is described as follows, $$u(t)=\left\{\begin{array}{ll}\{I,A,B\} & {t}_{I}\le t < {t}_{A}\\ \{A,B,\ast \} & {t}_{A}\le t < {t}_{B}\\ \vdots & \\ \{\ast ,O,P\} & {t}_{* }\le t\le {t}_{O}\end{array}\right.$$ where \(\{I,A,B\}\) denotes the direction from segment \({s}_{I,A}\) to segment \({s}_{A,B}\) during \(({t}_{I},{t}_{A})\). Note that the last observed segment is s*, O. The turning direction {*, O, P} is inferred from the traffic stream, and the trajectory on the downstream segment \({s}_{O,P}\) is beyond the scope of reconstruction. To sum up, for an observation (aI, O) on the FSRN, there is a determined trip path \(\left({r}_{I,O}\right)\) where I and O are not adjacent. And one can infer the segment-level location of the vehicle, denoted as s(t). $$s(t)=\left\{\begin{array}{ll}{s}_{I,A} & {t}_{I}\le t < {t}_{A}\\ {s}_{A,\ast } & {t}_{A}\le t < {t}_{\ast }\\ \vdots & \\ {s}_{\ast ,O} & {t}_{\ast }\le t\le {t}_{O}\end{array}\right.$$ Thus, the network-wide stream-level continuous trajectory is represented as \(\{s(t),u(t),x(t)\}\), while the segment-level trajectory is represented by s(t). For instance, assume the length of each segment in Fig. 2 is 400 m and a vehicle moves at a speed of 20 m/s on the path {B, C, F}; then the trajectory records with a 10-second time step are shown in Table 1. Note that at t = 20 s, the vehicle is at intersection C, which is both the downstream end of BC (x = 400) and the upstream end of CF (x = 0). Table 1 Demonstration of trajectory records. Table 2 Description of notations. Trip measurement As shown in the workflow (Fig. 1), a trip dividing algorithm is required to get trip-based spatial-temporal series. The basic procedure is determining whether two consecutive records belong to the same trip. This paper uses the travel time of a vehicle passing two consecutive AVI-equipped intersections ni and ni + 1 as a spatial-temporal accessibility criterion.
Here the index number of the intersection implies its sequence in the trip. $$H\left({l}_{i,i+1},{t}_{{n}_{i}},{t}_{{n}_{i+1}}\right)=\left\{\begin{array}{ll}1 & {l}_{i,i+1}/{v}_{min} > \left({t}_{{n}_{i+1}}-{t}_{{n}_{i}}\right)\\ 0 & else\end{array}\right.\quad {n}_{i},{n}_{i+1}\in {N}^{A}$$ where \({l}_{i,i+1}\) is the length of segment \({s}_{{n}_{i},{n}_{i+1}}\), \({v}_{min}\) is the minimal travel speed, and \({t}_{{n}_{i}}\) is the passing time of record \({{\boldsymbol{a}}}_{{n}_{i}}\). \(H=1\) indicates that records \({{\boldsymbol{a}}}_{{n}_{i}}\) and \({{\boldsymbol{a}}}_{{n}_{i+1}}\) belong to one trip, while \(H=0\) means at least one stay occurred between \({{\boldsymbol{a}}}_{{n}_{i}}\) and \({{\boldsymbol{a}}}_{{n}_{i+1}}\). It is common to reconstruct vehicular trajectories at signalized intersections using traffic wave theory. In these studies5,6,7,8, it is assumed that the time at which a vehicle passes the intersection is observable. However, as shown in Fig. 2, not every passing point in trip r can be recorded by AVI detectors, e.g., \({r}_{B,D}=\{B,E,D\}\). In other words, the observation could be a subset of the trip records, i.e., \({{\boldsymbol{a}}}^{o}=\{{{\boldsymbol{a}}}_{{n}_{i}}| {n}_{i}\in {N}^{A}\},{{\boldsymbol{a}}}^{o}\subseteq {\boldsymbol{a}}\). Under such circumstances, where a trip path contains non-AVI-equipped intersections, the following algorithm is introduced to infer the possible passing times. It considers the non-AVI passing points and the accessibility criterion in Eq. 4 (details in Appendix C). The idea is that one can use Eq. 4 to judge accessibility on segment \({s}_{{n}_{k},{n}_{k+1}}\) between the green light phases \({\tau }_{k}=[{g}_{k}^{start},{g}_{k}^{end}]\) and \({\tau }_{k+1}=[{g}_{k+1}^{start},{g}_{k+1}^{end}]\).
$$G\left({\tau }_{k},{\tau }_{k+1}\right)=H\left({l}_{k,k+1},{g}_{k}^{end},{g}_{k+1}^{start}\right)$$ For trip r = {ni, ni+1,…,nk,…,nj}, we can iteratively search accessible downstream green light phases into a set Tk, as depicted in Fig. 4a. The downstream searching process runs for \(i+1\le k\le j\) and generates the potential passing graph \({P}_{i,j}(T,E)\), in which the edges indicate two consequent passing phases. For each accessible phase node in layer Tj, we can pick the candidates \({T}_{j}^{\ast }\) for which \({t}_{j}\in [{g}_{j}^{start},{g}_{j}^{end}]\) fits (black dots in Fig. 4b). Then remove the other phase nodes and their connecting edges (dotted lines in Fig. 4b) from the graph as follows. $${E}_{j-1,j}^{\ast }=\left\{{e}_{j-1,j}| {\tau }_{j}\in {T}_{j}^{\ast }\right\}$$ Illustrations of inference for passing green light phases of intersections from i to j. By updating the candidate phases and edges from the downstream end to the upstream end, we can trim the graph into an accessible passing graph \({P}_{i,j}^{\ast }\left(T,E\right)\) for the path from node ni to nj. Then the passing moments can be determined with the speed-density information given by the leading and following vehicles, as mentioned in Appendix D. Note that AVI detectors might fail to recognize a small portion of the passing vehicles due to poor visual conditions. For instance, assuming a missing observation aA on trip \({\boldsymbol{a}}=[{{\boldsymbol{a}}}_{B},{{\boldsymbol{a}}}_{A},{{\boldsymbol{a}}}_{D}]\) in Fig. 2, the passing-time inference algorithm would be applied to path \([{n}_{B},{n}_{E},{n}_{D}]\), since it is the only path between B and D without any AVI-equipped intersections. If the signals at E did not fit, such a situation would cause a trip chain disconnection \(\left({P}_{i,j}^{\ast }=(\phi ,\phi )\right)\). Otherwise, it would be a false match. Therefore, the accuracy of the AVI detection is important to the trip measurement.
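The layered forward search and backward trimming described above can be sketched as two passes over the green windows of the internal intersections. The indexing convention, window format, and \(v_{min}\) value below are assumptions for illustration, not the paper's implementation.

```python
def prune_phase_graph(lengths, phases, t_i, t_j, v_min=2.0):
    """Forward-search accessible green phases, then trim backward (Eqs. 4-6).

    lengths[k]: length (m) of the segment between intersections k and k+1,
                for intersections numbered 0..K along the path.
    phases:     dict {k: [(g_start, g_end), ...]} for internal intersections
                1..K-1.
    t_i, t_j:   observed passing times at intersections 0 and K.
    Returns the surviving green windows for each internal intersection.
    """
    K = len(lengths)
    layers = {0: [(t_i, t_i)], K: [(t_j, t_j)]}
    for k in range(1, K):  # forward pass: Eq. 5 against the previous layer
        layers[k] = [ph for ph in phases[k]
                     if any(lengths[k - 1] / v_min > ph[0] - prev[1]
                            for prev in layers[k - 1])]
    for k in range(K - 1, 0, -1):  # backward trim toward the fixed exit time
        layers[k] = [ph for ph in layers[k]
                     if any(lengths[k] / v_min > nxt[0] - ph[1]
                            for nxt in layers[k + 1])]
    return [layers[k] for k in range(1, K)]
```

With one hidden intersection, a window that is forward-reachable but cannot reach the observed downstream passing time is pruned, leaving the unique candidate phase.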
Vehicular trajectories reconstruction The traffic streams consist of the vehicles making the same turn on a road segment. The dynamics within a stream are described as stop-and-go waves caused by the signal periods at the downstream end. A demonstration of vehicular trajectories in the traffic stream is shown in Fig. 5. The green and red bars at x = xj represent green and red phases in the signal cycles. Furthermore, the wave's speed is determined by the vehicle queuing state and releasing state of the traffic flow, i.e., $$w=-\,{q}_{m}/\,({k}_{j}-{k}_{m})$$ where qm is the capacity, km is the density under capacity, and kj is the jammed density. In order to calculate the vehicular trajectories in Eq. 1, such as those of the 5 vehicles in Fig. 5, the solution of v(t) is formulated as a piecewise function. $$v(t)=\left\{\begin{array}{ll}{v}_{1} & {t}_{i}\le t < {t}_{1}\\ 0 & {t}_{1}\le t < {t}_{2}\\ \vdots & \\ 0 & {t}_{k-1}\le t < {t}_{k}\\ {v}_{j} & {t}_{k}\le t\le {t}_{j}\end{array}\right.$$ Demonstration of backward trajectory reconstruction from the leaving time at xj to the entry time at xi on a road segment. To obtain the solutions, a backward procedure of trajectory reconstruction is proposed for each passing vehicle, calculating from the downstream to the upstream of the traffic flow. Hence, the reconstruction begins at the last signal period and iterates over signal cycles. In other words, v(t) is calculated from vj to v1. Each iteration starts with observations of the passing vehicles in the current period and the remaining ones from the former iteration, resulting in new reconstruction states for these vehicles. For instance, Iteration 2 in Fig. 5 contains remaining vehicles (veh3, 4) and passing vehicle(s) (veh2). At the end of the iteration, veh4's trajectory has been constructed, while the trajectories of (veh2, 3) remain incomplete and are passed to Iteration 3. The key is to distinguish queued vehicles from non-queued ones.
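The wave speed of Eq. 7 and the piecewise integration behind Eqs. 1 and 8 reduce to a few lines of arithmetic; the parameter values in the sketch below are illustrative assumptions, not calibrated Xuancheng values.

```python
def wave_speed(q_m, k_m, k_j):
    """Eq. 7: backward wave speed w = -q_m / (k_j - k_m).

    q_m: capacity flow (veh/s), k_m: density at capacity (veh/m),
    k_j: jammed density (veh/m). Negative: the wave travels upstream.
    """
    return -q_m / (k_j - k_m)

def position(segments, t):
    """Integrate a piecewise-constant v(t) (Eqs. 1 and 8).

    segments: list of (t_start, t_end, v) pieces covering the trip,
    ordered in time. Returns x(t) measured from the segment entry point.
    """
    x = 0.0
    for t0, t1, v in segments:
        if t <= t0:
            break
        x += v * (min(t, t1) - t0)  # contribution of this constant-speed piece
    return x
```

A stop-and-go trajectory such as "drive, queue, discharge" is then just three pieces: nonzero speed, zero speed, nonzero speed.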
Then we can complete the trajectories of the non-queued vehicles, leaving the queued ones to the subsequent iterations. Details of the reconstruction method are in Appendix 4. Virtual traffic flow detection With the holistically reconstructed trajectories, the hologram of city-scale mobility can be acquired. Note that such a high-resolution individual mobility dataset implies a high risk of personal information being abused. Thus, direct access to the generated raw trajectories is restricted. As an alternative, numerical traffic flow detection is applied. In reality, the traffic flow can be observed from both Eulerian and Lagrangian perspectives. Analogously, the reconstructed dataset supports both cross-sectional and vehicular detection. Numerical stationary detection For stationary observation, traditional loop data can be simulated by counting the crossings of trajectory curves with the horizontal line at the loop's location, shown as the blue dashed line in Fig. 6. Moreover, the occupancy and velocity can be measured according to the loop's length. Additionally, segmental measurement could be employed, which detects the instantaneous density (on the orange line) and the space-mean speed (in the orange frame) of the traffic flow, shown as the orange dashed line in Fig. 6. The missing rate is introduced in the loop data resampling process to simulate the systematic detection error in realistic circumstances. Each vehicle count is treated as a Bernoulli trial with the missing rate as the probability of failure. By manipulating the aggregation interval of the loop detectors, we can observe the different characteristics of the evolving traffic flow. Under a short interval, the characteristics of short-term traffic flow appear, showing dynamic state-changing phenomena. In contrast, under a long interval, the detected flow-density states cluster more tightly, revealing the equilibrium of the traffic flow.
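The crossing count with a Bernoulli missing rate described above can be sketched as follows; the trajectory format (time-position samples) and the rate value are assumptions for illustration, not the resampling software's interface.

```python
import random

def loop_counts(trajectories, loop_x, interval, horizon, miss_rate=0.05):
    """Aggregate virtual loop counts per time interval.

    trajectories: list of [(t, x), ...] samples per vehicle, sorted by t.
    A vehicle is counted in the interval in which its trajectory crosses
    loop_x; each count fails independently with probability miss_rate.
    Assumes each trajectory crosses the loop line at most once.
    """
    bins = [0] * (horizon // interval)
    for traj in trajectories:
        for (t0, x0), (t1, x1) in zip(traj, traj[1:]):
            if x0 < loop_x <= x1:  # upward crossing of the loop line
                # linear interpolation of the crossing time
                tc = t0 + (t1 - t0) * (loop_x - x0) / (x1 - x0)
                if random.random() >= miss_rate:  # Bernoulli detection trial
                    bins[int(tc // interval)] += 1
                break
    return bins
```

Setting `miss_rate=0` makes the counts deterministic, which is convenient for checking the crossing logic before adding noise.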
Illustration of virtual traffic flow detection, including loop detection (blue dashed line) and floating car detection (red solid line). Virtual floating car detection The sample rate controls the penetration of the vehicular trajectory resampling, resulting in the red trajectories in Fig. 6. To balance data utility and personal privacy protection, only the trajectories of commercial vehicles are included in the dataset. The proportion of commercial vehicles is about 4.5% to 7%, depending on the time of day. Moreover, all license numbers are substituted with unique and irreversible hash codes. Data Records We provide three types of data to support different research interests: 1) short-term anonymized original LPR data; 2) long-term encrypted reconstructed holographic trajectory data; and 3) long-term resampled traffic data, including loop data and FCD based on the holographic trajectories. All of the data are available at the Figshare22 repository. We limit the original LPR data because of the risk of personal information leakage, even though the data are anonymized. With the travel characteristics revealed in the long-term holographic trajectories, one could still recover personal identities using additional data, such as parking lot data. Hence, it is necessary to encrypt the trajectory data. However, the long-term resampled traffic data could be used as the primary support for the related research, which should meet most needs. For supplemental use, others can customize their detector settings and implement virtual traffic flow detection using the attached resampling software and the encrypted holographic trajectories. For those interested in the reconstruction method, the short-term anonymized original LPR data could be used for validation. Details of the three types of data are described as follows. The city-scale loop data and FCD are the one-month-long resampling results of the Xuancheng holographic data in Sept. 2020.
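The irreversible plate anonymization mentioned above can be done with a salted hash; the salt value, truncation length, and function name here are assumptions for illustration, not the dataset's actual scheme.

```python
import hashlib

def anonymize_plate(plate, salt):
    """Map a license plate to a stable, irreversible pseudonym.

    The same (plate, salt) pair always yields the same ID, so trips by one
    vehicle remain linkable; without the secret salt, the small plate space
    cannot simply be enumerated to invert the mapping.
    """
    digest = hashlib.sha256((salt + plate).encode("utf-8")).hexdigest()
    return digest[:16]  # shortened ID, still effectively collision-free here
```

Keeping the salt secret (and rotating it per release) is what makes the pseudonyms unlinkable across differently salted datasets.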
The link-based graph is given in Table 3 for road network description, including all 578 road segments of the city. The loop dataset provides the 5-minute aggregated flow-speed data, as shown in Table 4. The FCD, described in Table 5, includes the trajectories of 500 commercial vehicles, sampled every 10 seconds. Their unique IDs can be found in the data repositories. Table 3 Road network data attributes. Table 4 Loop data attributes. Table 5 FCD attributes. The encrypted holographic trajectories cannot be accessed directly; however, one can obtain self-customized results by using the attached resampling software. The usage can be found in the following Usage Notes, and the source code of the software is available; see Code Availability. The short-term original LPR data for reconstruction validation are shown in Table 6, while the source code of the reconstruction can be found in Code Availability. The LPR data are collected from 7:00 to 8:00 on a workday morning in Xuancheng. Table 6 LPR data attributes. Technical Validation The generated traffic flow profile of the morning peak is revealed in Fig. 7. The number of passing vehicles is visualized by the heat map. It presents the radial distribution of the traffic flow. To demonstrate the validity of the generated data, we compared the data against different sources to test their consistency. Also, the characteristics of the generated data are analyzed. Several data profiles are drawn from the flow-based perspective and the trip-based perspective, respectively. Morning peak traffic in Xuancheng city. The width of the blue shades represents the number of vehicles. Road segments (Zhaoting, Baocheng, Xuanhua, and Aofeng Rd.) and intersections (N4694 & N4724) are to be validated. Flow-based perspective The flow-based validation includes comparing the traffic flow data on the red-marked roads against another observation and analyzing the generated fundamental diagram.
Figure 8a depicts the resampled count numbers and the manual results of the southern in-coming stream at intersection N4724. The resampled data at intersections N4694 and N4724 are compared to on-site manual observations, considering vehicles from each in-coming road segment from 11:00 to 12:00 on Sept. 15th, 2020. The correlation coefficient is 0.748 with RMSE = 4.3 veh/min, which shows the consistency.

Validation from the flow-based perspective.

Furthermore, the network-wide travel time data are compared to the dynamically estimated results from the Amap API and the Baidu Map API. Because Amap and Baidu Map use different strategies, we propose a different comparison for each. The Amap API provides travel time estimations for specific paths, with a limit on the total number of paths, so we compare the results on Aofeng Rd., Zhaoting Rd., Baocheng Rd., and Xuanhua Rd., the main roads of the network (see Fig. 7). The Baidu API, on the other hand, allows speed inquiries for each road segment in a specific area, but only the speed data under congested traffic conditions are recorded; we therefore compare the results during peak hours. Since travel time estimation algorithms usually apply smoothing filters and intersection delays, the estimated results are likely to differ from the raw detected ones. For the Amap API, weekly averaged and zero-mean normalized travel time series are used. Figure 8b shows the result on Zhaoting Rd., demonstrating the daily deviation of travel time from the average. Generally speaking, the overall average daily travel time is similar to the result estimated by Amap, with a correlation coefficient of 0.749. For the Baidu API, hourly averaged and zero-mean normalized travel time series are used. The average speed is weighted by the length and lane numbers of the road segments. The correlation coefficient is 0.738 (see Fig. 8c).
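The consistency metrics used above (zero-mean normalization, the correlation coefficient, and RMSE) can be computed with a few lines of standard Python. The count values below are invented for illustration, not values from the dataset.

```python
from math import sqrt
from statistics import mean

def zero_mean(series):
    """Zero-mean normalize a travel time (or count) series."""
    m = mean(series)
    return [x - m for x in series]

def corr(x, y):
    """Pearson correlation coefficient between two series."""
    xm, ym = zero_mean(x), zero_mean(y)
    num = sum(a * b for a, b in zip(xm, ym))
    den = sqrt(sum(a * a for a in xm) * sum(b * b for b in ym))
    return num / den

def rmse(x, y):
    """Root mean squared error between two series."""
    return sqrt(mean((a - b) ** 2 for a, b in zip(x, y)))

resampled = [12, 15, 22, 30, 28, 18]  # hypothetical counts, veh/min
manual = [10, 16, 20, 33, 27, 17]
r, err = corr(resampled, manual), rmse(resampled, manual)
```

Note that zero-mean normalization does not change the correlation coefficient itself; it only makes series with different baselines (e.g. raw detections vs. map-API estimates) comparable on the same plot.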
Due to the differences in lane numbers between segments and the varying green occupancy ratio of each signalized intersection, the fundamental diagram is adapted into a space-integrated form to describe the network-wide characteristics of the traffic (Fig. 8d,e). The fundamental diagram can be integrated over time and space because macroscopic traffic characteristics are aggregated measures over vehicles, time, and space23. The density term is therefore changed from the number of vehicles per kilometer (veh/km) to the number of vehicles on a road segment (veh), and the flow term (veh/h) is changed to hourly vehicle-kilometers (veh·km/h). The network-wide number of vehicles on the road and the hourly vehicle-kilometers can be represented as the sums of the corresponding segment-level quantities over the road network. Since speed is an averaged quantity, however, the ratio of the vehicle-kilometers to the number of vehicles en route keeps the same physical meaning as the average speed of the whole traffic flow. We take a snapshot of the whole network every 30 seconds to count the number of vehicles on the roads and their average speed; the vehicle-kilometers are then the product of the number of vehicles and the average speed. Figure 8d,e show the 10-minute moving averages of the snapshot samples, in which the white areas indicate denser clusters. As shown in Fig. 8d, the speed-density diagram during the day shift (from 5 A.M. to 7 P.M.) differs from the night shift (from 7 P.M. to 5 A.M.). Note that at the same average speed, there are fewer vehicles at night than during the day; similarly, with the same number of vehicles on the network, the travel speed is lower at night. This suggests that under dimmer lighting conditions vehicles may move more slowly, and the performance of the AVI equipment may be affected.
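The space-integrated quantities described above — vehicles on the network and hourly vehicle-kilometers as sums of segment-level quantities, with the network average speed as their ratio — can be sketched from one 30-second snapshot. The per-segment figures below are invented for illustration.

```python
def network_snapshot(segments):
    """Aggregate per-segment counts into network-wide quantities.

    segments: iterable of (n_vehicles, avg_speed_kmh) pairs, one per
    road segment, from a single snapshot of the network.
    Returns (vehicles on network, vehicle-km per hour, average speed):
    the first two are sums of segment-level quantities, while the
    average speed is their ratio, as in the text.
    """
    n_total = sum(n for n, _ in segments)
    veh_km_per_h = sum(n * v for n, v in segments)
    avg_speed = veh_km_per_h / n_total if n_total else 0.0
    return n_total, veh_km_per_h, avg_speed

# A tiny three-segment snapshot (counts and speeds are made up).
n, q, v = network_snapshot([(10, 30.0), (5, 20.0), (0, 0.0)])
```

Each snapshot yields one (density, flow, speed) point; the 10-minute moving average in Fig. 8d,e would then smooth a stream of such points.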
Furthermore, the numbers of vehicles range from roughly 200 to 1000 at night, while they are around 1800 during the day. As for the flow-density diagram in Fig. 8e, the vehicle-kilometers during the day are slightly above those at night, which is consistent with the speed-density diagram.

Trip-based perspective

The trip-based analysis focuses on the spatial-temporal distribution of travel demand, based mainly on the spatial-temporal concentration of individual trips. In this paper, the level of spatial concentration of individual travelers is evaluated by the number of different origin-destination zones (ODZ) in a month, while the level of temporal concentration is determined by the number of different departure time sections (DTS). As each trip is related to a specific traffic zone bounded by road segments, the number of different ODZ is easily counted. Since departure time is a continuous variable, we run a DBSCAN clustering algorithm on the departure times of each traveler's trips to automatically generate discrete departure time sections. Note that some vehicles, such as taxis, have random origin-destination points and departure times, which leads to a long-tailed distribution of ODZ and DTS, as depicted in Fig. 9a. To avoid this long-tail phenomenon, we take the 85th percentile of the number of DTS and ODZ as the indicators of spatial-temporal concentration.

Validation from the trip-based perspective.

Figure 9c shows the weekday departure time distribution of people with different DTS. One can recognize a typical "Work-Home" commute pattern for DTS = 2, which has much higher peaks during commute times. The curve of DTS = 4 suggests a "Work-Other-Work-Home" pattern and produces a midday traffic peak that does not exist in the DTS = 2 or DTS = 3 curves. As for DTS = 3, a noticeable peak at around 20:00 indicates a "Work-Other-Home" pattern.
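The idea of clustering continuous departure times into discrete sections can be sketched with a simplified one-dimensional stand-in for DBSCAN: sorted times are split wherever the gap exceeds `eps`, and groups smaller than `min_pts` are dropped as noise. The paper does not give its clustering parameters, so `eps`, `min_pts`, and the sample departures here are all assumptions.

```python
def departure_time_sections(departures_min, eps=30, min_pts=3):
    """Group departure times (minutes after midnight) into sections.

    A 1-D simplification of DBSCAN: consecutive sorted departures
    within `eps` minutes of each other join the same cluster; clusters
    with fewer than `min_pts` members are discarded as noise.
    """
    pts = sorted(departures_min)
    clusters, current = [], [pts[0]]
    for t in pts[1:]:
        if t - current[-1] <= eps:
            current.append(t)
        else:
            clusters.append(current)
            current = [t]
    clusters.append(current)
    return [c for c in clusters if len(c) >= min_pts]

# A commuter-like record: a morning group, one isolated midday trip
# (noise), and an evening group (all values invented).
sections = departure_time_sections(
    [420, 425, 430, 435, 700, 1020, 1025, 1030])
```

Two surviving sections here would correspond to DTS = 2, the "Work-Home" pattern described in the text.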
For DTS = other, there are four comparable peaks at around 7:30, 11:30, 14:00, and 17:30, representing generally frequent departure times. Since DTS = 2, 3, 4 show interpretable mobility patterns, the temporally concentrated travelers are defined as those whose 85th-percentile DTS is in {2, 3, 4}. Note that these patterns have up to four different OD zones; likewise, the spatially concentrated travelers are defined as those whose 85th-percentile ODZ is less than 5. Figure 9d shows the Lorenz curve of monthly travel distance for all travelers, where the cumulative proportion of travel distance is plotted against the cumulative proportion of individuals24. It reveals that the mobility distribution on the road network follows the same pattern as other business behaviors: the commercial vehicles in the top 1% of the population account for nearly 20% of the cumulative travel distance. Some trips are predictable due to the travelers' regular characteristics, such as those of the commuters, the spatially concentrated travelers, and the temporally concentrated travelers. Furthermore, we can estimate the movements of commercial vehicles since they are under surveillance. These four types of travelers are defined as regular travelers whose patterns are recognizable. Figure 9b is a pie chart of population and travel distance for the different traveler types, including commuters, commercial vehicles, temporally concentrated travelers, and spatially concentrated travelers. In summary, the regular travelers make up 37% of all travelers but account for 45% of the total travel distance. Thus, once these 37% of regular travelers are well modeled, we can reproduce nearly half of the trips, and the other half might be generated with random methods.

Usage Notes

As mentioned above, we provide three types of data. The short-term LPR data and the long-term resampled traffic data can be downloaded for static data usage.
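The Lorenz-curve statistic quoted above (the top 1% of travelers sharing nearly 20% of the distance) is one point on the curve and can be computed as follows; the distance list below is synthetic.

```python
def lorenz_top_share(distances, top_frac=0.01):
    """Fraction of the total travel distance covered by the top
    `top_frac` share of travelers (one point on the Lorenz curve)."""
    ranked = sorted(distances, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / sum(ranked)

# One heavy traveler and 99 light ones: the top 1% covers ~50% here,
# an extreme toy case of the concentration Fig. 9d illustrates.
share = lorenz_top_share([100.0] + [1.0] * 99, top_frac=0.01)
```

Sweeping `top_frac` from 0 to 1 and plotting the complementary cumulative shares reproduces the full Lorenz curve of Fig. 9d.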
On the other hand, the encrypted holographic trajectories can be used for interactive measurement of the traffic flow: users can modify the virtual detecting environment and obtain customized virtual detection results. In this way, we can offer user-customized, round-the-clock, long-term traffic flow data at the most satisfactory resolution without exposing personal trajectories.

Static dataset usage

The road network file can be imported into the PostGIS database or other supported GIS systems through QGIS. The loop data of each road segment can be used for studying large-scale traffic data prediction, and by combining the FCD with the loop data, users can examine various data fusion models. Moreover, the FCD data processing script can help aggregate individual floating car samples into segmental travel times. As for the LPR data, each row of the dataset is a pair of consecutive records captured by the AVI detectors; one can rebuild the route between these two records with the road network.

Interactive measurement usage

The resampling software is a command-line tool that implements virtual traffic flow detection on the encrypted trajectories. Users can tweak the settings in the running properties file and get resampled traffic data directly in the local output files. In the properties file, users can set the road sections ("ftNode") and time window ("fTime", "tTime") of the measurement and define the parameters of loop and floating car detection. Users can switch floating car detection on or off by setting the "needFCD" property to "true" or "false", and "fcdSamplingSec" denotes the FCD sampling period (in seconds). Loop detectors are identified by an ID ("loopId") and detect on the specified road segment ("ftNode"). The loop's position is determined by the "position" property, which denotes the distance from the downstream end of the road. Missing rate ("missingRate") and aggregation interval ("interval") settings are also available.
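The property names mentioned above could be combined into a file like the following sketch. The key names come from the text, but the values, units, and layout are purely illustrative assumptions, not taken from the shipped software.

```properties
# Measurement scope (segment and time window are illustrative)
ftNode=4694-4724
fTime=2020-09-15 07:00:00
tTime=2020-09-15 08:00:00

# Floating car detection
needFCD=true
fcdSamplingSec=10

# Virtual loop detector: position is the distance (metres, assumed
# unit) from the downstream end of the road segment
loopId=loop_01
position=50
missingRate=0.05
interval=300
```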
The software runs on Linux, Windows, and macOS using different launchers; the command is as simple as "osLauncher java -jar /path/to/resampling_software -d /path/of/holoData -c /path/of/properties_file". Other details can be found in the "README" file.

A. Full-sensing theorem

Among all the paths between any two different AVI intersections in the study area, if there is no more than one path whose intermediate intersections are all non-AVI-equipped, then the trip path for the LPR record is determined, i.e.,

Theorem A.1. \(\forall i,j\in {N}^{A}\), let \({R}_{i,j}=\{{r}_{i,j}\mid {r}_{i,j}\cap {N}^{A}=\{i,j\}\}\). If \(n\left({R}_{i,j}\right)\in \{0,1\}\), then \(\forall p,q\in {N}^{\ast }\), \(m\left({A}_{p,q}\right)\in \{0,1\}\). $$\begin{array}{l}m\left({A}_{p,q}\right)=\prod _{i,j\in {A}_{p,q}}n\left({R}_{i,j}\right)\\ \because n\in \left\{0,1\right\}\\ \therefore \prod n\in \left\{0,1\right\}\end{array}$$

B. Closed zone theorem

If a traffic zone is bounded by FSRN road segments, and for any non-FSRN segment in the zone its connected segments are also within the zone, then a trip on the physical road network (PRN) can be represented as parts on the full-sensing road network (FSRN) separated by inner-zone activities, i.e.,

Theorem B.1. Let \({r}_{o,d}^{\ast }=\left\{o,i,i+1,i+2,...,i+m,d\right\}\) be a trip on a physical road network, and let Z be a closed traffic zone on the corresponding full-sensing road network such that \({s}_{i,i+1}\subset \bar{Z}\). If for all \(m\ge 1\), \({s}_{i+m-1,i+m}\) lies on the non-FSRN, then \(\forall m\ge 1\), \({s}_{i+m-1,i+m}\subset \bar{Z}\). (\(\bar{Z}\) denotes the closure of area Z.)

Proof. Suppose \({s}_{i+k-1,i+k}\subset \bar{Z}\).
According to Definition 0.2, $$\begin{array}{l}{s}_{i+m-1,i+m}\subset \bar{Z},\left(m=k\right)\\ \because {s}_{i+k-1,i+k}\cap {s}_{i+k,i+k+1}=\{i+k\}\ne \phi \\ \therefore {s}_{i+k,i+k+1}\subset \bar{Z}\\ \Rightarrow {s}_{i+m-1,i+m}\subset \bar{Z},(m=k+1)\\ \because {s}_{i,i+1}\subset \bar{Z},(m=1)\\ \Rightarrow {s}_{i+m-1,i+m}\subset \bar{Z},(m\ge 1)\end{array}$$

C. Passing-time inference algorithm

(Algorithm: Passing Time Inference.)

D. Details of trajectory reconstruction

As shown in Fig. 10a, there are two different circumstances to deal with in queuing discrimination. The common idea is that a low reconstructed travel speed indicates queuing behavior, since a vehicle does not move during the queuing process. For vehicles leaving at x_j, the travel speed is simply determined by the slope between the entry point (A) and the leaving point (B), as shown in Fig. 10a. For vehicles from former iterations, since the exit point (G) remains unknown, the intersection (F) of the wave μτ and the stopping position \(\overline{FH}\) is chosen as the reference point; the adapted travel speed is then determined by Point E and Point F. In particular, when only the green light period is available instead of the exact entry time, the end of the green light period is used as the reference point.

Details of trajectory reconstruction.

After the independent queuing discrimination, the result might show that several vehicles are assumed to be queuing before the current green light period. For instance, let Vehicles 1 and 3 be the low-speed vehicles depicted in Fig. 10b. Since there is no more than one stop wave during one signal period, the queuing vehicles must be in front of the other ones. Considering this one-wave constraint, the last low-speed vehicle is taken as the last queuing vehicle; in this case, Vehicles 1, 2, and 3 are marked as queuing vehicles, and their stopped positions are calculated according to their leaving order.
The stop position of the i-th vehicle is formulated as follows, $${P}_{\tau }^{i}=\frac{i-1}{{k}_{j}}$$ where kj is the jam density. The passing speed is determined by the stopped position and the exit point. The travel speed of the non-queued vehicles, on the other hand, is calculated from the passing information. For vehicles with specific entry and leaving points, such as Vehicle 4, the reconstructed trajectory is the straight passing line. For vehicles with one exact passing point, such as Vehicles 5 and 7, the travel speed is given by the speed-density model26, $$v={v}_{f}\cdot {\left(1-{\left(\frac{k}{{k}_{j}}\right)}^{\beta }\right)}^{\alpha }$$ where vf is the free flow speed, and α = 1.0, β = 0.05 according to related research27,28. In this way, the travel speed is based on the local density, representing the traffic dynamics of the road segment, and the trajectories are fixed by one passing point and the running speed. Finally, for vehicles without exact observations, the speed is also calculated by the same speed-density model, and the endpoint is chosen randomly subject to the constraints of the preceding and following vehicles (see Fig. 10b).

To further describe the details of data processing in our method, we also provide code and instructions for reproducing the presented results25. In general, files ending with ".py" are supporting Python module files, files ending with ".ipynb" are Jupyter Notebook instructions, and the files under the folder "measurement" are the source code of the resampling software. The instruction files demonstrate the whole data processing workflow in Fig. 1, including trip measurement, trajectory reconstruction, virtual traffic flow detection, and data validation, and can be used to better understand the modeling and validation steps.

This study proposes a resampling method of vehicular trajectories using LPR data.
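The two formulas above can be sketched directly. The paper fixes only α = 1.0 and β = 0.05; the free flow speed and jam density values below are placeholder assumptions.

```python
def stop_position(i, k_j):
    """Stopped position P_tau^i = (i - 1) / k_j of the i-th queuing
    vehicle, measured from the stop line, with jam density k_j in
    veh/km (so the result is in km)."""
    return (i - 1) / k_j

def speed_density(k, v_f=60.0, k_j=150.0, alpha=1.0, beta=0.05):
    """Speed-density model v = v_f * (1 - (k / k_j)**beta)**alpha used
    for non-queued vehicles; v_f (km/h) and k_j (veh/km) here are
    assumed values, while alpha and beta follow the text."""
    return v_f * (1.0 - (k / k_j) ** beta) ** alpha
```

The model behaves as expected at the extremes: speed equals the free flow speed at zero density and drops to zero at the jam density, decreasing monotonically in between.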
A city-scale, unbiased holographic trajectory dataset is reconstructed. It is validated by its consistency with other data sources on travel time and demonstrated through the macroscopic characteristics of the fundamental diagram; the correlation coefficients of travel time range from about 0.688 to 0.749. Moreover, with the anonymous interactive measurement, users can acquire multiple kinds of traffic data at the individual level without the risk of personal information abuse. This dataset and the accompanying tool can support related research goals such as data fusion, mobility pattern recognition, and sensor network optimization.

Gabor, D. A new microscopic principle. Nature 161, 777–778, https://doi.org/10.1038/161777a0 (1948). U.S. Department of Transportation Federal Highway Administration. Next Generation Simulation (NGSIM) vehicle trajectories and supporting data [dataset]. Provided by ITS DataHub through data.transportation.gov. https://doi.org/10.21949/1504477 (2016). Punzo, V., Borzacchiello, M. T. & Ciuffo, B. On the assessment of vehicle trajectory data accuracy and application to the Next Generation Simulation (NGSIM) program data. Transportation Research Part C: Emerging Technologies 19, 1243–1262 (2011). Zhang, T. & Jin, P. J. A longitudinal scanline based vehicle trajectory reconstruction method for high-angle traffic video. Transportation Research Part C: Emerging Technologies 103, 104–128 (2019). Montanino, M. & Punzo, V. Trajectory data reconstruction and simulation-based validation against macroscopic traffic patterns. Transportation Research Part B: Methodological 80, 82–106 (2015). Sun, Z. & Ban, X. J. Vehicle trajectory reconstruction for signalized intersections using mobile traffic sensors. Transportation Research Part C: Emerging Technologies 36, 268–283 (2013). Wang, Y., Wei, L. & Chen, P. Trajectory reconstruction for freeway traffic mixed with human-driven vehicles and connected and automated vehicles.
Transportation Research Part C: Emerging Technologies 111, 135–155 (2020). Chen, X., Yin, J., Tang, K., Tian, Y. & Sun, J. Vehicle trajectory reconstruction at signalized intersections under connected and automated vehicle environment. IEEE Transactions on Intelligent Transportation Systems (2022). Bernstein, D. & Kanaan, A. Y. Automatic vehicle identification: technologies and functionalities. Journal of Intelligent Transportation System 1, 191–204 (1993). Sun, Z., Jin, W.-L. & Ritchie, S. G. Simultaneous estimation of states and parameters in Newell's simplified kinematic wave model with Eulerian and Lagrangian traffic data. Transportation Research Part B: Methodological 104, 106–122 (2017). Yu, R., Abdel-Aty, M. A., Ahmed, M. M. & Wang, X. Utilizing microscopic traffic and weather data to analyze real-time crash patterns in the context of active traffic management. IEEE Transactions on Intelligent Transportation Systems 15, 205–213 (2013). Zhan, X., Li, R. & Ukkusuri, S. V. Lane-based real-time queue length estimation using license plate recognition data. Transportation Research Part C: Emerging Technologies 57, 85–102 (2015). Asakura, Y., Hato, E. & Kashiwadani, M. Origin-destination matrices estimation model using automatic vehicle identification data and its application to the Han-Shin expressway network. Transportation 27, 419–438, https://doi.org/10.1023/A:1005239823771 (2000). Zhou, X. & Mahmassani, H. S. Dynamic origin-destination demand estimation using automatic vehicle identification data. IEEE Transactions on Intelligent Transportation Systems 7, 105–114, https://doi.org/10.1109/TITS.2006.869629 (2006). Mo, B., Li, R. & Zhan, X. Speed profile estimation using license plate recognition data. Transportation Research Part C: Emerging Technologies 82, 358–378 (2017). Rao, W., Wu, Y.-J., Xia, J., Ou, J. & Kluger, R. Origin-destination pattern estimation based on trajectory reconstruction using automatic license plate recognition data.
Transportation Research Part C: Emerging Technologies 95, 29–46 (2018). Khare, V. et al. A novel character segmentation-reconstruction approach for license plate recognition. Expert Systems with Applications 131, 219–239 (2019). Tong, P., Li, M., Li, M., Huang, J. & Hua, X. Large-scale vehicle trajectory reconstruction with camera sensing network. Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM 188–200, https://doi.org/10.1145/3447993.3448617 (2021). Li, Y., Yu, R., Shahabi, C. & Liu, Y. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations (2018). Wang, Y., Yang, X., Liang, H. & Liu, Y. A review of the self-adaptive traffic signal control system based on future traffic environment. Journal of Advanced Transportation 2018 (2018). Robertson, D. TRANSYT: A Traffic Network Study Tool. RRL Report (Road Research Laboratory, 1969). Wang, Y. et al. City-scale holographic traffic flow data based on vehicular trajectory resampling. Figshare https://doi.org/10.6084/m9.figshare.c.5796776.v1 (2022). Ni, D. Traffic Flow Theory: Characteristics, Experimental Methods, and Numerical Techniques (Butterworth-Heinemann, 2015). Wittebolle, L. et al. Initial community evenness favours functionality under selective stress. Nature 458, 623–626 (2009). Wang, Y., Li, G., Lu, Y., He, Z. & Yu, Z. City-scale holographic traffic flow data set of Xuancheng. https://github.com/sysuits/City-Scale-Holographic-Traffic-Flow-Data-based-on-Vehicular-Trajectory-Resampling (2021). May, A. & Keller, H. E. Non-integer car-following models. Highway Research Record 199, 19–32 (1967). Ben-Akiva, M., Bierlaire, M., Burton, D., Koutsopoulos, H. N. & Mishalani, R. Network State Estimation and Prediction for Real-Time Traffic Management. Networks and Spatial Economics 1, 293–318 (2001). Xu, Y., Song, X., Weng, Z. & Tan, G.
An Entry Time-based Supply Framework (ETSF) for mesoscopic traffic simulations. Simulation Modelling Practice and Theory 47, 182–195, https://doi.org/10.1016/j.simpat.2014.06.006 (2014). The work was done at the SYSU Research Center of ITS in the context of the collaboration with the Joint Research and Development Laboratory of Smart Policing in Xuancheng Public Security. The research was also supported by the National Natural Science Foundation of China (No. U1811463). Research Center of Intelligent Transportation System, Sun Yat-sen University, Guangzhou, 510006, People's Republic of China: Yimin Wang, Yixian Chen, Guilong Li, Yuhuan Lu, Zhaocheng He & Zhi Yu. Guangdong Provincial Key Laboratory of Intelligent Transportation System, Guangzhou, 510275, People's Republic of China: Yimin Wang, Yixian Chen, Guilong Li, Zhaocheng He & Zhi Yu. Guangdong Fundway Technology Co., Ltd., Guangzhou, 510220, People's Republic of China: Weiwei Sun. Z.Y. conceived of the presented idea. Y.W. developed the theoretical framework and performed the computations. Y.C. and Y.L. contributed to the technical details of the theory. G.L. conducted part of the experiments. Z.H. supervised the findings of this work. All authors discussed the results and contributed to the final manuscript. Correspondence to Zhaocheng He or Zhi Yu. Wang, Y., Chen, Y., Li, G. et al. City-scale holographic traffic flow data based on vehicular trajectory resampling. Sci Data 10, 57 (2023). https://doi.org/10.1038/s41597-022-01850-0
Kang-Ling Liao 1, Chih-Wen Shih 2, and Chi-Jer Yu 2

Mathematical Biosciences Institute, Ohio State University, Columbus, OH 43210, USA
Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan 300

The key to Marotto's theorem on chaos for multi-dimensional maps is the existence of a snapback repeller. For practical applications of the theory, locating a computable repelling neighborhood of the repelling fixed point has thus become the key issue. For some multi-dimensional maps $F$, basic information about $F$ is not sufficient to indicate the existence of a snapback repeller for $F$. In this investigation, for a repeller $\bar{\bf z}$ of $F$, we start by estimating the repelling neighborhood of $\bar{\bf z}$ under $F^{k}$ for some $k ≥ 2$, by a theory built on the first or second derivative of $F^k$. By employing Interval Arithmetic computation, we locate a snapback point ${\bf z}_0$ in this repelling neighborhood and examine the nonzero determinant condition for the Jacobian of $F$ along the orbit through ${\bf z}_0$. With this new approach, we are able to conclude the existence of snapback repellers under the valid definition, and hence chaotic behavior, in a discrete-time predator-prey model, a population model, and the FitzHugh nerve model.

Keywords: Snapback repeller, homoclinic orbit, chaos, interval arithmetic.

Mathematics Subject Classification: Primary: 37C29, 37C70, 34C28.

Citation: Kang-Ling Liao, Chih-Wen Shih, Chi-Jer Yu. The snapback repellers for chaos in multi-dimensional maps. Journal of Computational Dynamics, 2018, 5 (1&2) : 81-92. doi: 10.3934/jcd.2018004
April 2017, 11(2): 277-304. doi: 10.3934/ipi.2017014

Applications of CGO solutions to coupled-physics inverse problems

Ilker Kocyigit 1, Ru-Yu Lai 2, Lingyun Qiu 3, Yang Yang 4, and Ting Zhou 5

Department of Mathematics, University of Michigan, Ann Arbor, MI 48109-1043, USA
School of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA
Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis, MN 55455, USA
Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA
Department of Mathematics, Northeastern University, 360 Huntington Ave., Boston, MA 02115, USA

Received December 2015. Published March 2017.

Fund Project: R.-Y. L. was partly supported by the AMS-Simons Travel Grants. T.Z. was supported by NSF grant DMS-1501049 and an Alfred P. Sloan Research Fellowship (FR-2015-65641).

This paper surveys inverse problems arising in several coupled-physics imaging modalities for both medical and geophysical purposes. These include Photo-acoustic Tomography (PAT), Thermo-acoustic Tomography (TAT), Electro-Seismic Conversion, Transient Elastography (TE), and Acousto-Electric Tomography (AET). These inverse problems typically consist of multiple inverse steps, each of which corresponds to one of the wave propagations involved. The review focuses on the steps known as inverse problems with internal data, in which the complex geometrical optics (CGO) solutions to the underlying equations turn out to be useful in showing uniqueness and stability in determining the desired information.

Keywords: Coupled-physics imaging modalities, internal data, CGO solutions, uniqueness, stability.

Mathematics Subject Classification: Primary: 35R30, 65N21; Secondary: 74J25.

Citation: Ilker Kocyigit, Ru-Yu Lai, Lingyun Qiu, Yang Yang, Ting Zhou.
Applications of CGO solutions to coupled-physics inverse problems. Inverse Problems & Imaging, 2017, 11(2): 277-304. doi: 10.3934/ipi.2017014

References

[1] S. Acosta and C. Montalto, Multiwave imaging in an enclosure with variable wave speed, Inverse Problems, 31 (2015), 065009, 12pp. doi: 10.1088/0266-5611/31/6/065009.
[2] G. Alessandrini, Stable determination of conductivity by boundary measurements, App. Anal., 27 (1988), 153-172. doi: 10.1080/00036818808839730.
[3] H. Ammari, E. Bonnetier, Y. Capdeboscq, M. Tanter and M. Fink, Electrical impedance tomography by elastic deformation, SIAM Journal on Applied Mathematics, 68 (2008), 1557-1573. doi: 10.1137/070686408.
[4] H. Ammari, E. Bossy, V. Jugnon and H. Kang, Mathematical models in photo-acoustic imaging of small absorbers, SIAM Review.
[5] G. Bal, Hybrid inverse problems and internal functionals, Inside Out II, MSRI Publications, 60 (2013), 325-368.
[6] G. Bal, Cauchy problem and ultrasound modulated EIT, Analysis and PDE, 6 (2013), 751-775. doi: 10.2140/apde.2013.6.751.
[7] G. Bal, Hybrid inverse problems and systems of partial differential equations, Contemp. Math., 615 (2014), 15pp. doi: 10.1090/conm/615/12289.
[8] G. Bal, C. Bellis, S. Imperiale and F. Monard, Reconstruction of moduli in isotropic linear elasticity from full-field measurements, Inverse Problems, 30 (2014), 125004, 22pp. doi: 10.1088/0266-5611/30/12/125004.
[9] G. Bal, E. Bonnetier, F. Monard and F. Triki, Inverse diffusion from knowledge of power densities, Inverse Problems and Imaging, 7 (2013), 353-375. doi: 10.3934/ipi.2013.7.353.
[10] G. Bal and C. Guo, Imaging of complex-valued tensors for two-dimensional Maxwell's equations, accepted by Journal of Inverse and Ill-posed Problems.
[11] G. Bal and C. Guo, Reconstruction of complex-valued tensors in the Maxwell system from knowledge of internal magnetic fields, Inverse Problems and Imaging, 8 (2014), 1033-1051. doi: 10.3934/ipi.2014.8.1033.
[12] G. Bal, C. Guo and F. Monard, Imaging of anisotropic conductivities from current densities in two dimensions, SIAM J. Imaging Sci., 7 (2014), 2538-2557. doi: 10.1137/140961754.
[13] G. Bal, C. Guo and F. Monard, Inverse anisotropic conductivity from internal current densities, Inverse Problems, 30 (2014), 025001, 21pp. doi: 10.1088/0266-5611/30/2/025001.
[14] G. Bal, C. Guo and F. Monard, Linearized internal functional for anisotropic conductivities, Inverse Problems and Imaging, 8 (2014), 1-22. doi: 10.3934/ipi.2014.8.1.
[15] G. Bal and F. Monard, Inverse diffusion problems with redundant internal information, Inverse Problems and Imaging, 6 (2012), 289-313. doi: 10.3934/ipi.2012.6.289.
[16] G. Bal, F. Monard and G. Uhlmann, Reconstruction of a fully anisotropic elasticity tensor from knowledge of displacement fields, SIAM J. Applied Math., 75 (2015), 2214-2231. doi: 10.1137/151005269.
[17] G. Bal and K. Ren, Multi-source quantitative PAT in diffusive regime, Inverse Problems, 27 (2011), 075003.
[18] G. Bal and K. Ren, On multi-spectral quantitative photoacoustic tomography, Inverse Problems, 28 (2012), 025010.
[19] G. Bal, K. Ren, G. Uhlmann and T. Zhou, Quantitative thermo-acoustics and related problems, Inverse Problems, 27 (2011), 055007, 15pp. doi: 10.1088/0266-5611/27/5/055007.
[20] G. Bal and G. Uhlmann, Inverse diffusion theory for photoacoustics, Inverse Problems, 26 (2010), 085010. doi: 10.1088/0266-5611/26/8/085010.
[21] G. Bal and G. Uhlmann, Reconstruction of coefficients in scalar second-order elliptic equations from knowledge of their solutions, Comm. on Pure and Applied Math., 66 (2013), 1629-1652. doi: 10.1002/cpa.21453.
[22] G. Bal and T. Zhou, Hybrid inverse problems for a system of Maxwell's equations, Inverse Problems, 30 (2014), 055013, 17pp. doi: 10.1088/0266-5611/30/5/055013.
[23] G. Bal and F. Monard, Inverse anisotropic diffusion from power density measurements in two dimensions, Inverse Problems, 28 (2012), 084001, 20pp. doi: 10.1088/0266-5611/28/8/084001.
[24] G. Bal and F. Monard, Inverse anisotropic conductivity from power density measurements in dimensions $n\geq 3$, Comm. Partial Differential Equations, 38 (2013), 1183-1207. doi: 10.1080/03605302.2013.787089.
[25] M. A. Biot, Theory of propagation of elastic waves in a fluid-saturated porous solid. I. Low-frequency range, Journal of the Acoustical Society of America, 28 (1956), 168-178. doi: 10.1121/1.1908239.
[26] M. A. Biot, Theory of propagation of elastic waves in a fluid-saturated porous solid. II. High-frequency range, Journal of the Acoustical Society of America, 28 (1956), 179-191. doi: 10.1121/1.1908241.
[27] A. L. Bukhgeim and G. Uhlmann, Recovering a potential from partial Cauchy data, Comm. in PDE, 27 (2002), 653-668. doi: 10.1081/PDE-120002868.
[28] K. E. Butler, R. D. Russell, A. W. Kepic and M. Maxwell, Measurement of the seismoelectric response from a shallow boundary, Geophysics, 61 (1996), 1769-1778. doi: 10.1190/1.1444093.
[29] A. P. Calderón, On an inverse boundary value problem, Seminar on Numerical Analysis and its Applications to Continuum Physics (Río de Janeiro), Soc. Brasil. Mat., Río de Janeiro, (1980), 65-73.
[30] Y. Capdeboscq, J. Fehrenbach, F. de Gournay and O. Kavian, Imaging by modification: Numerical reconstruction of local conductivities from corresponding power density measurements, SIAM J. Imaging Sci., 2 (2009), 1003-1030. doi: 10.1137/080723521.
[31] P. Caro, P. Ola and M. Salo, Inverse boundary value problem for Maxwell equations with local data, Comm. in PDE, 34 (2009), 1425-1464. doi: 10.1080/03605300903296272.
[32] P. Caro and K. M. Rogers, Global uniqueness for the Calderón problem with Lipschitz conductivities, Forum of Mathematics, 4 (2016), e2, 28pp. doi: 10.1017/fmp.2015.9.
[33] J. Chen and M. de Hoop, The inverse problem for electroseismic conversion: Stable recovery of the conductivity and the electrokinetic mobility parameter, Inverse Problems and Imaging, 10 (2016), 641-658. doi: 10.3934/ipi.2016015.
[34] J. Chen and Y. Yang, Quantitative photo-acoustic tomography with partial data, Inverse Problems, 28 (2012), 115014, 15pp. doi: 10.1088/0266-5611/28/11/115014.
[35] J. Chen and Y. Yang, Inverse problem of electro-seismic conversion, Inverse Problems, 29 (2013), 115006, 15pp. doi: 10.1088/0266-5611/29/11/115006.
[36] P. G. Ciarlet, Mathematical Elasticity, Studies in Math. and its Appl.
[37] D. Colton and L. Päivärinta, The uniqueness of a solution to an inverse scattering problem for electromagnetic waves, Arch. Rational Mech. Anal., 119 (1992), 59-70. doi: 10.1007/BF00376010.
[38] B. T. Cox, S. R. Arridge and P. C. Beard, Estimating chromophore distributions from multiwavelength photoacoustic images, J. Opt. Soc. Am. A, 26 (2009), 443-455. doi: 10.1364/JOSAA.26.000443.
[39] B. T. Cox, J. G. Laufer and P. C. Beard, The challenges for quantitative photoacoustic imaging, Proc. of SPIE, 777 (2009), 717713. doi: 10.1117/12.806788.
[40] A. Douglis and L. Nirenberg, Interior estimates for elliptic systems of partial differential equations, Comm. Pure Appl. Math., 8 (1955), 503-508. doi: 10.1002/cpa.3160080406.
[41] G. Eskin and J. Ralston, On the inverse boundary value problem for linear isotropic elasticity, Inverse Problems, 18 (2002), 907-921. doi: 10.1088/0266-5611/18/3/324.
[42] D. D. S. Ferreira, C. Kenig, M. Salo and G. Uhlmann, Limiting Carleman weights and anisotropic inverse problems, Invent. Math., 178 (2009), 119-171. doi: 10.1007/s00222-009-0196-4.
[43] D. Finch, S. K. Patch and Rakesh, Determining a function from its mean values over a family of spheres, SIAM J. Math. Anal., 35 (2004), 1213-1240. doi: 10.1137/S0036141002417814.
[44] A. R. Fisher, A. J. Schissler and J. C. Schotland, Photoacoustic effect for multiply scattered light, Phys. Rev. E, 76 (2007), 036604. doi: 10.1103/PhysRevE.76.036604.
[45] B. Gebauer and O. Scherzer, Impedance-acoustic tomography, SIAM Journal of Applied Mathematics, 69 (2008), 565-576. doi: 10.1137/080715123.
[46] B. Haberman, Unique determination of a magnetic Schrödinger operator with unbounded magnetic potential from boundary data, preprint. doi: 10.1093/imrn/rnw263.
[47] B. Haberman and D. Tataru, Uniqueness in Calderón's problem with Lipschitz conductivities, Duke Math. J., 162 (2013), 497-516. doi: 10.1215/00127094-2019591.
[48] M. Haltmeier, O. Scherzer, P. Burgholzer and G. Paltauf, Thermoacoustic computed tomography with large planar receivers, Inverse Problems, 20 (2004), 1663-1673. doi: 10.1088/0266-5611/20/5/021.
[49] S. C. Hornbostel and A. H. Thompson, Waveform design for electroseismic exploration, SEG Technical Program Expanded Abstracts, (2005), 557-560. doi: 10.1190/1.2144380.
[50] Y. Hristova, P. Kuchment and L. Nguyen, Reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media, Inverse Problems, 24 (2008), 055006, 25pp. doi: 10.1088/0266-5611/24/5/055006.
[51] M. Ikehata, A remark on an inverse boundary value problem arising in elasticity, preprint.
[52] C. Kenig, J. Sjöstrand and G. Uhlmann, The Calderón problem with partial data, Ann. Math., 165 (2007), 567-591. doi: 10.4007/annals.2007.165.567.
[53] K. Knudsen, M. Lassas, J. L. Mueller and S. Siltanen, Regularized D-bar method for the inverse conductivity problem, Inverse Problems and Imaging, 3 (2009), 599-624. doi: 10.3934/ipi.2009.3.599.
[54] I. Kocyigit, Acousto-electric tomography and CGO solutions with internal data, Inverse Problems, 28 (2012), 125004, 20pp. doi: 10.1088/0266-5611/28/12/125004.
[55] P. Kuchment and L. Kunyansky, Mathematics of thermoacoustic tomography, Euro. J. Appl. Math., 19 (2008), 191-224. doi: 10.1017/S0956792508007353.
[56] P. Kuchment and L. Kunyansky, Synthetic focusing in ultrasound modulated tomography, Inverse Problems and Imaging, 4 (2010), 665-673. doi: 10.3934/ipi.2010.4.665.
[57] P. Kuchment and L. Kunyansky, 2D and 3D reconstructions in acousto-electric tomography, Inverse Problems, 27 (2011), 055013, 21pp. doi: 10.1088/0266-5611/27/5/055013.
[58] L. Kunyansky, B. Holman and B. T. Cox, Photoacoustic tomography in a rectangular reflecting cavity, Inverse Problems, 29 (2013), 125010, 20pp. doi: 10.1088/0266-5611/29/12/125010.
[59] L. Kunyansky and L. Nguyen, A dissipative time reversal technique for photo-acoustic tomography in a cavity, SIAM J. Imaging Sciences, 9 (2016), 748-769. doi: 10.1137/15M1049683.
[60] R.-Y. Lai, Uniqueness and stability of Lamé parameters in elastography, Journal of Spectral Theory, 4 (2014), 841-877. doi: 10.4171/JST/88.
[61] C. H. Li, M. Pramanik, G. Ku and L. V. Wang, Image distortion in thermoacoustic tomography caused by microwave diffraction, Phys. Rev. E, 77 (2008), 031923. doi: 10.1103/PhysRevE.77.031923.
[62] Y. B. Lopatinskii, On a method of reducing boundary problems for a system of differential equations of elliptic type to regular equations, Ukrain. Mat. Ž., 5 (1953), 123-151.
[63] J. R. McLaughlin, N. Zhang and A. Manduca, Calculating tissue shear modulus and pressure by 2D log-elastographic methods, Inverse Problems, 26 (2010), 085007, 25pp. doi: 10.1088/0266-5611/26/8/085007.
[64] O. V. Mikhailov, M. W. Haartsen and N. Toksoz, Electroseismic investigation of the shallow subsurface: Field measurements and numerical modeling, Geophysics, 62 (1997), 97-105. doi: 10.1190/1.1444150.
[65] O. V. Mikhailov, J. Queen and N. Toksoz, Using borehole electroseismic measurements to detect and characterize fractured (permeable) zones, SEG Technical Program Expanded Abstracts, (1997), 1981-1984. doi: 10.1190/1.1885835.
[66] A. I. Nachman, Reconstructions from boundary measurements, The Annals of Mathematics, 128 (1988), 531-576. doi: 10.2307/1971435.
[67] G. Nakamura and G. Uhlmann, Global uniqueness for an inverse boundary problem arising in elasticity, Invent. Math., 118 (1994), 457-474. doi: 10.1007/BF01231541.
[68] G. Nakamura and G. Uhlmann, Erratum: Global uniqueness for an inverse boundary problem arising in elasticity, Invent. Math., 152, 205-207.
[69] P. Ola, L. Päivärinta and E. Somersalo, An inverse boundary value problem in electrodynamics, Duke Math. J., 70 (1993), 617-653. doi: 10.1215/S0012-7094-93-07014-7.
[70] P. Ola and E. Somersalo, Electromagnetic inverse problems and generalized Sommerfeld potentials, SIAM J. Appl. Math., 56 (1996), 1129-1145. doi: 10.1137/S0036139995283948.
[71] S. Patch and O. Scherzer, Photo- and thermo-acoustic imaging, Inverse Problems, 23 (2007), 1-10. doi: 10.1088/0266-5611/23/6/S01.
[72] S. R. Pride, Governing equations for the coupled electromagnetics and acoustics of porous media, Phys. Rev. B, 50, 15678-15696.
[73] S. R. Pride and M. W. Haartsen, Electroseismic wave properties, J. Acoust. Soc. Am., 100 (1996), 1301-1315. doi: 10.1121/1.416018.
[74] J. Ripoll and V. Ntziachristos, Quantitative point source photoacoustic inversion formulas for scattering and absorbing medium, Phys. Rev. E, 71, 031912.
[75] J. E. Santos, F. I. Zyserman and P. M. Gauzellino, Numerical electroseismic modeling: A finite element approach, Applied Mathematics and Computation, 218 (2012), 6351-6374. doi: 10.1016/j.amc.2011.12.003.
[76] V. Serov, Inverse fixed energy scattering problem for the generalized nonlinear Schrödinger operator, Inverse Problems, 28 (2012), 025002, 11pp. doi: 10.1088/0266-5611/28/2/025002.
[77] V. A. Solonnikov, Overdetermined elliptic boundary-value problems, J. Math. Sci., 1 (1973), 477-512. doi: 10.1007/BF01084589.
[78] P. Stefanov and G. Uhlmann, Thermoacoustic tomography with variable sound speed, Inverse Problems, 25 (2009), 075011, 16pp. doi: 10.1088/0266-5611/25/7/075011.
[79] P. Stefanov and Y. Yang, Multiwave tomography in a closed domain: Averaged sharp time reversal, Inverse Problems, 31 (2015), 065007, 23pp. doi: 10.1088/0266-5611/31/6/065007.
[80] P. Stefanov and Y. Yang, Multiwave tomography with reflectors: Landweber's iteration, arXiv:1603.07045.
[81] J. Sylvester and G. Uhlmann, A global uniqueness theorem for an inverse boundary value problem, Ann. of Math., 125 (1987), 153-169. doi: 10.2307/1971291.
[82] A. H. Thompson, Electromagnetic-to-seismic conversion: Successful developments suggest viable applications in exploration and production, SEG Technical Program Expanded Abstracts, (2005), 554-556. doi: 10.1190/1.2144379.
[83] A. H. Thompson and G. A. Gist, Geophysical applications of electrokinetic conversion, Leading Edge, 12 (1993), 1169-1173. doi: 10.1190/1.1436931.
[84] A. H. Thompson, S. C. Hornbostel, J. S. Burns, T. J. Murray, R. A. Raschke, J. C. Wride, P. Z. McCammon, J. R. Sumner, G. H. Haake, M. S. Bixby, W. S. Ross, B. S. White, M. Zhou and P. K. Peczak, Field tests of electroseismic hydrocarbon detection, SEG Technical Program Expanded Abstracts, (2005), 565-568. doi: 10.1190/1.2144382.
[85] R. R. Thompson, The seismic electric effect, Geophysics, 1 (1936), 327-335.
doi: 10.1190/1.1437119.
[86] G. Uhlmann, Developments in inverse problems since Calderón's foundational paper, Harmonic Analysis and Partial Differential Equations, The University of Chicago Press, Chicago, (1999), 295-345.
[87] G. Uhlmann, Electrical impedance tomography and Calderón's problem, Inverse Problems, 25 (2009), 123011, 39pp. doi: 10.1088/0266-5611/25/12/123011.
[88] G. Uhlmann and J.-N. Wang, Complex spherical waves for the elasticity system and probing of inclusions, SIAM J. Math. Anal., 38 (2007), 1967-1980. doi: 10.1137/060651434.
[89] G. Uhlmann and J.-N. Wang, Reconstruction of discontinuities in systems, Journal of Physics: Conference Series, 73 (2007), 012024. doi: 10.1088/1742-6596/73/1/012024.
[90] B. S. White, Asymptotic theory of electroseismic prospecting, SIAM J. Appl. Math., 65 (2005), 1443-1462. doi: 10.1137/040604108.
[91] B. S. White and M. Zhou, Electroseismic prospecting in layered media, SIAM J. Appl. Math., 67 (2006), 69-98. doi: 10.1137/050633603.
[92] M. Xu and L. V. Wang, Photoacoustic imaging in biomedicine, Rev. Sci. Instr., 77 (2006), 041101. doi: 10.1063/1.2195024.
[93] R. J. Zemp, Quantitative photoacoustic tomography with multiple optical sources, Applied Optics, 49 (2010), 3566-3572. doi: 10.1364/AO.49.003566.
[94] H. Zhang and L. V. Wang, Acousto-electric tomography, SPIE, 5320 (2004), 145-149. doi: 10.1117/12.532610.

Summary table: modality, equation and data, CGO solution, and results.

Section 2: Quantitative PAT (second step of PAT).
Equation: $-\nabla\cdot\gamma\nabla u+\sigma u=0$.
Data (full boundary illuminations): $u|_{\partial\Omega}\mapsto \sigma u|_{\Omega}$.
CGO solution: $u = e^{i \zeta\cdot x}(1+\psi_{\zeta})$ to the reduced equation $(\Delta+q)u=0$.
Result: uniqueness and stability in determining $(\gamma,\sigma)$ (see [20]).
Data (partial boundary illuminations): $u|_{\Gamma}\mapsto\sigma u|_{\Omega}$.
CGO solution: $u=e^{\frac{1}{h}(\varphi+i\psi)}\big(a+r\big)+z$ with $\text{supp}~u|_{\partial\Omega}\subset\Gamma$.
Result: uniqueness and stability in determining $(\gamma,\sigma)$ (see [34]).

Section 3: Electro-Seismic Conversion.
Equations: Maxwell's equations $\nabla\times E = i\omega\mu_{0} H$, $\nabla\times H = (\sigma - i\varepsilon\omega)E$.
Data: $\nu\times E|_{\partial\Omega}\mapsto LE|_{\Omega}$.
CGO solution: $E= e^{i\zeta\cdot x}(\eta + R_\zeta)$.
Result: uniqueness and stability in determining $(L,\sigma)$ (see [35]).

Section 4: Transient Elastography.
Equation: elasticity system $\nabla\cdot (\lambda(\nabla\cdot u)I+2S(\nabla u)\mu)+k^2 u=0$.
Data: $u|_{\partial\Omega}\mapsto u|_{\Omega}$.
CGO solution: $U = e^{i\zeta\cdot x} (C_0(x,\theta) p(\theta\cdot x)+O(\tau^{-1}))$ to the Schrödinger equation with external Yang-Mills potentials.
Result: uniqueness and stability in determining the Lamé parameters $(\lambda,\mu)$ (see [60]).

Section 5: Acousto-Electric Tomography.
Step 1 equation: conductivity equation $\nabla \cdot (\gamma \nabla u) = 0$.
Step 1 data: $m\mapsto(\Lambda_{\gamma_m}-\Lambda_{\gamma})(u|_{\partial\Omega})$, where $\Lambda_\gamma$ is the Dirichlet-to-Neumann map for $\gamma$ and $\gamma_m=(1+m)\gamma$.
CGO solution: $u={\gamma}^{-1/2}e^{i\zeta \cdot x} (1 + \psi_{\zeta})$ to the conductivity equation.
Result: reconstruction of $\sqrt{\gamma}\nabla u|_{\Omega}$ using CGOs (see [54]) or of $\gamma |\nabla u|^2|_{\Omega}$ (see [9]).
Step 2 data: $u|_{\partial\Omega}\mapsto \sqrt{\gamma}\nabla u|_{\Omega}$ or $u|_{\partial\Omega}\mapsto \gamma |\nabla u|^2 |_{\Omega}$.
CGO solution: same as Step 1 above.
Result: uniqueness and stability in determining $\gamma$ (see [9,54]).

Section 6: Quantitative TAT.
Equation: scalar Schrödinger equation $(\Delta+q)u=0$, where $q=k^2+ik\sigma(x)$.
Data: $u|_{\partial\Omega}\mapsto \sigma|u|^2|_{\Omega}$.
CGO solution: $u = e^{ i \zeta \cdot x}( 1 + \psi_\zeta)$.
Result: uniqueness and stability in determining $\sigma$ (see [19]).
Equation: Maxwell system $-\nabla\times\nabla\times E+qE=0$, where $q=k^2n+ik\sigma$.
Data: $\nu\times E|_{\partial\Omega}\mapsto \sigma|E|^2|_{\Omega}$.
CGO solution: $E=\gamma_0^{-1/2} e^{i\zeta\cdot x}\big(\eta_\zeta+R_\zeta\big)$, where $\gamma_0=q/\kappa^2$.
Result: stability in determining $q$ (see [22]).
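For orientation, the CGO ansatz shared by the entries above follows the standard construction going back to Sylvester and Uhlmann [81]; the sketch below uses the table's notation and only records the well-known structure, not any specific estimate from the paper.

```latex
% Complex geometrical optics (CGO) ansatz for $(\Delta+q)u=0$:
% pick a complex frequency $\zeta\in\mathbb{C}^n$ with $\zeta\cdot\zeta=0$,
% so that the exponential is harmonic:
\[
  \Delta e^{i\zeta\cdot x} \;=\; -(\zeta\cdot\zeta)\,e^{i\zeta\cdot x} \;=\; 0,
\]
% and seek solutions of the form
\[
  u(x) \;=\; e^{i\zeta\cdot x}\bigl(1+\psi_{\zeta}(x)\bigr),
  \qquad
  \|\psi_{\zeta}\|_{L^{2}(\Omega)} \;=\; O\bigl(|\zeta|^{-1}\bigr)
  \quad\text{as } |\zeta|\to\infty,
\]
% so that for large $|\zeta|$ the solution behaves like the harmonic
% exponential, which is what makes uniqueness and stability arguments work.
```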
Siwei Sun

Rotational Cryptanalysis From a Differential-Linear Perspective - Practical Distinguishers for Round-reduced FRIET, Xoodoo, and Alzette
Yunwen Liu, Siwei Sun, Chao Li
Abstract: The differential-linear attack, combining the power of the two most effective techniques for symmetric-key cryptanalysis, was proposed by Langford and Hellman at CRYPTO 1994. From the exact formula for evaluating the bias of a differential-linear distinguisher (JoC 2017), to the differential-linear connectivity table (DLCT) technique for dealing with the dependencies in the switch between the differential and linear parts (EUROCRYPT 2019), and to the improvements in the context of cryptanalysis of ARX primitives (CRYPTO 2020), we have seen significant development of the differential-linear attack during the last four years. In this work, we further extend this framework by replacing the differential part of the attack with rotational-xor differentials. Along the way, we establish the theoretical link between rotational-xor differentials and linear approximations, revealing that it is nontrivial to directly apply the closed formula for the bias of the ordinary differential-linear attack to rotational differential-linear cryptanalysis. We then revisit rotational cryptanalysis from the perspective of differential-linear cryptanalysis and generalize Morawiecki et al.'s technique for analyzing Keccak, which leads to a practical method for estimating the bias of a (rotational) differential-linear distinguisher in the special case where the output linear mask is a unit vector. Finally, we apply the rotational differential-linear technique to the permutations involved in FRIET, Xoodoo, Alzette, and SipHash. This gives significant improvements over existing cryptanalytic results, or offers explanations for previous experimental distinguishers without a theoretical foundation. To confirm the validity of our analysis, all distinguishers with practical complexities are verified experimentally.
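To make the central quantity concrete, here is a minimal toy sketch (my own illustration, not taken from the paper) that exhaustively estimates the bias of a differential-linear approximation across a single 4-bit S-box layer; the PRESENT S-box is used purely as a stand-in.

```python
# Toy sketch: empirical bias of a differential-linear approximation
# mask . (S(x) ^ S(x ^ diff)) over a single 4-bit S-box (here: the
# PRESENT S-box, used only as an example; nothing here is the paper's
# rotational variant, which replaces the differential part entirely).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(x: int) -> int:
    """Parity (xor of all bits) of the integer x."""
    return bin(x).count("1") & 1

def dl_bias(sbox, diff: int, mask: int) -> float:
    """Bias of mask . (S(x) xor S(x ^ diff)) over all inputs x,
    i.e. Pr[the masked output difference has even parity] - 1/2."""
    hits = sum(parity(mask & (sbox[x] ^ sbox[x ^ diff])) == 0
               for x in range(len(sbox)))
    return hits / len(sbox) - 0.5

# The strongest single-S-box differential-linear pair (|bias|, diff, mask):
best = max((abs(dl_bias(SBOX, d, m)), d, m)
           for d in range(1, 16) for m in range(1, 16))
```

For real primitives the bias cannot be enumerated like this; the paper's contribution is precisely a practical estimation method for the multi-round, rotational setting.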
Automatic Search of Meet-in-the-Middle Preimage Attacks on AES-like Hashing
Zhenzhen Bao, Xiaoyang Dong, Jian Guo, Zheng Li, Danping Shi, Siwei Sun, Xiaoyun Wang
Abstract: The Meet-in-the-Middle (MITM) preimage attack is highly effective in breaking the preimage resistance of many hash functions, including but not limited to the full MD5, HAVAL, and Tiger, and reduced SHA-0/1/2. It was also shown to be a threat to hash functions built on block ciphers like AES by Sasaki in 2011. Recently, such attacks on AES hashing modes evolved from merely using the freedom of choosing the internal state to also exploiting the freedom of choosing the message state. However, detecting such attacks, especially the evolved variants, is difficult. In previous works, the search space of the configurations of such attacks was limited so that manual analysis remained practical, which results in sub-optimal solutions. In this paper, we remove the artificial limitations of previous works, formulate the essential ideas of the construction of the attack in well-defined ways, and translate the problem of searching for the best attacks into optimization problems under constraints in Mixed-Integer Linear Programming (MILP) models. The MILP models capture a large solution space of valid attacks, and their objectives are attack configurations with minimized computational complexity. With such MILP models and an off-the-shelf solver, it is efficient to search for the best attacks exhaustively. As a result, we obtain the first attacks against the full (5-round) and an extended (5.5-round) version of Haraka-512 v2, and 8-round AES-128 hashing modes, as well as improved attacks covering more rounds of Haraka-256 v2 and other members of the AES and Rijndael hashing modes.
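The core meet-in-the-middle principle that these searches configure can be illustrated with a deliberately tiny example; the toy round function g and the 8-bit keys below are hypothetical and unrelated to AES or to the paper's MILP models.

```python
# Toy meet-in-the-middle demo: recover (k1, k2) for a toy double
# "encryption" c = g(g(p, k1), k2) with two 8-bit keys. A forward table
# of 2^8 entries plus 2^8 backward computations replaces 2^16 trials.
def g(x: int, k: int) -> int:
    """Toy invertible 8-bit round: xor key, rotate left by 3, add key."""
    x ^= k
    x = ((x << 3) | (x >> 5)) & 0xFF
    return (x + k) & 0xFF

def g_inv(y: int, k: int) -> int:
    """Inverse of g: subtract key, rotate right by 3, xor key."""
    y = (y - k) & 0xFF
    y = ((y >> 3) | (y << 5)) & 0xFF
    return y ^ k

def mitm(p: int, c: int):
    """All key pairs (k1, k2) with g(g(p, k1), k2) == c."""
    forward = {}
    for k1 in range(256):                 # forward half: 2^8 work
        forward.setdefault(g(p, k1), []).append(k1)
    matches = []
    for k2 in range(256):                 # backward half: 2^8 work
        mid = g_inv(c, k2)                # meet in the middle
        for k1 in forward.get(mid, []):
            matches.append((k1, k2))
    return matches

p, (k1, k2) = 0x3A, (0x5C, 0xD1)
c = g(g(p, k1), k2)
assert (k1, k2) in mitm(p, c)
```

The paper's setting is far richer (neutral words, internal-state and message freedom, preimages rather than keys), but the cost asymmetry shown here, two half-searches joined by a table lookup, is the same underlying idea.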
Misuse-Free Key-Recovery and Distinguishing Attacks on 7-Round Ascon (ToSC)
Raghvendra Rohit, Kai Hu, Sumanta Sarkar, Siwei Sun
Abstract: Being one of the winning algorithms of the CAESAR competition and currently a second-round candidate of the NIST lightweight cryptography standardization project, the authenticated encryption scheme Ascon (designed by Dobraunig, Eichlseder, Mendel, and Schläffer) has withstood extensive self and third-party cryptanalysis. The best known attack on Ascon could only penetrate up to 7 (out of 12) rounds due to Li et al. (ToSC Vol. I, 2017). However, it violates the data limit of $2^{64}$ blocks per key specified by the designers. Moreover, the best known distinguishers of Ascon in the AEAD context reach only 6 rounds. To fill these gaps, we revisit the security of 7-round Ascon in the nonce-respecting setting without violating the data limit as specified in the design. First, we introduce a new superpoly-recovery technique named partial polynomial multiplication, in which computations take place between the so-called degree-$d$ homogeneous parts of the involved Boolean functions for a $2d$-dimensional cube. We apply this method to 7-round Ascon and present several key-recovery attacks. Our best attack can recover the 128-bit secret key with a time complexity of about $2^{123}$ 7-round Ascon permutations and requires $2^{64}$ data and $2^{101}$ bits of memory. Also, based on division properties, we identify several 60-dimensional cubes whose superpolies are constant zero after 7 rounds. We further improve the cube distinguishers for 4, 5, and 6 rounds. Although our results are far from threatening the security of the full 12-round Ascon, they provide new insights into the security analysis of Ascon.

Meet-in-the-Middle Attacks Revisited: Key-recovery, Collision, and Preimage Attacks
Xiaoyang Dong, Jialiang Hua, Siwei Sun, Zheng Li, Xiaoyun Wang, Lei Hu
Abstract: At EUROCRYPT 2021, Bao et al. proposed an automatic method for systematically exploring the configuration space of meet-in-the-middle (MITM) preimage attacks. We further extend it into a constraint-based framework for finding exploitable MITM characteristics in the context of key-recovery and collision attacks by taking the subtle peculiarities of both scenarios into account. Moreover, to perform attacks based on MITM characteristics with nonlinearly constrained neutral words, which have not been seen before, we present a procedure for deriving the solution spaces of neutral words without solving the corresponding nonlinear equations or increasing the overall time complexities of the attack. We apply our method to concrete symmetric-key primitives, including SKINNY, ForkSkinny, Romulus-H, Saturnin, Grøstl, Whirlpool, and hashing modes with AES-256. As a result, we identify the first 23-round key-recovery attack on SKINNY-$n$-$3n$ and the first 24-round key-recovery attack on ForkSkinny-$n$-$3n$ in the single-key model. Moreover, improved (pseudo) preimage or collision attacks on round-reduced Whirlpool, Grøstl, and hashing modes with AES-256 are obtained. In particular, employing the new representation of the AES key schedule due to Leurent and Pernot (EUROCRYPT 2021), we identify the first preimage attack on 10-round AES-256 hashing.

Automatic Classical and Quantum Rebound Attacks on AES-like Hashing by Exploiting Related-key Differentials
Xiaoyang Dong, Zhiyu Zhang, Siwei Sun, Congming Wei, Xiaoyun Wang, Lei Hu
Abstract: Collision attacks on AES-like hashing (hash functions constructed by plugging AES-like ciphers or permutations into the famous PGV modes or their variants) can be reduced to the problem of finding a pair of inputs respecting a differential of the underlying AES-like primitive whose input and output differences are the same. The rebound attack due to Mendel et al.
is a powerful tool for achieving this goal, whose quantum version was first considered by Hosoyamada and Sasaki at EUROCRYPT 2020. In this work, we automate the process of searching for the configurations of rebound attacks by taking related-key differentials of the underlying block cipher into account with the MILP-based approach. In the quantum setting, our model guide the search towards characteristics that minimize the resources (e.g., QRAM) and complexities of the resulting rebound attacks. We apply our method to Saturnin-hash, Skinny, and Whirlpool and improved results are obtained. Massive Superpoly Recovery with Nested Monomial Predictions 📺 Abstract Kai Hu Siwei Sun Yosuke Todo Meiqin Wang Qingju Wang Determining the exact algebraic structure or some partial information of the superpoly for a given cube is a necessary step in the cube attack -- a generic cryptanalytic technique for symmetric-key primitives with some secret and public tweakable inputs. Currently, the division property based approach is the most powerful tool for exact superpoly recovery. However, as the algebraic normal form (ANF) of the targeted output bit gets increasingly complicated as the number of rounds grows, existing methods for superpoly recovery quickly hit their bottlenecks. For example, previous method stuck at round 842, 190, and 892 for \trivium, \grain, and \kreyvium, respectively. In this paper, we propose a new framework for recovering the exact ANFs of massive superpolies based on the monomial prediction technique (ASIACRYPT 2020, an alternative language for the division property). In this framework, the targeted output bit is first expressed as a polynomial of the bits of some intermediate states. For each term appearing in the polynomial, the monomial prediction technique is applied to determine its superpoly if the corresponding MILP model can be solved within a preset time limit. 
Terms unresolved within the time limit are further expanded as polynomials of the bits of some deeper intermediate states with symbolic computation, whose terms are again processed with monomial predictions. The above procedure is iterated until all terms are resolved. Finally, all the sub-superpolies are collected and assembled into the superpoly of the targeted bit. We apply the new framework to \trivium, \grain, and \kreyvium. As a result, the exact ANFs of the superpolies for 843-, 844- and 845-round \trivium, 191-round \grain and 894-round \kreyvium are recovered. Moreover, with the help of the M\"{o}bius transform, we present a novel key-recovery technique based on superpolies involving \textit{all} key bits by exploiting the sparse structures, which leads to the best key-recovery attacks on the targets considered. Lightweight Iterative MDS Matrices: How Small Can We Go? 📺 Abstract Shun Li Siwei Sun Danping Shi Chaoyun Li Lei Hu As perfect building blocks for the diffusion layers of many symmetric-key primitives, the construction of MDS matrices with lightweight circuits has received much attention from the symmetric-key community. One promising way of realizing low-cost MDS matrices is based on the iterative construction: a low-cost matrix becomes MDS after raising it to a certain power. To be more specific, if $A^t$ is MDS, then one can implement $A$ instead of $A^t$ to achieve the MDS property at the expense of an increased latency of $t$ clock cycles. In this work, we identify the exact lower bound of the number of nonzero blocks for a 4 × 4 block matrix to be potentially iterative-MDS. Subsequently, we show that the theoretically lightest 4 × 4 iterative MDS block matrix (whose entries or blocks are 4 × 4 binary matrices) with minimal nonzero blocks costs at least 3 XOR gates, and a concrete example achieving the 3-XOR bound is provided. Moreover, we prove that there is no hope for previous constructions (GFS, LFS, DSI, and sparse DSI) to beat this bound.
Since the circuit latency is another important factor, we also consider the lower bound of the number of iterations for certain iterative MDS matrices. Guided by these bounds and based on the ideas employed to identify them, we explore the design space of lightweight iterative MDS matrices with other dimensions and report on improved results. Whenever we are unable to find better results, we try to determine the bound of the optimal solution. As a result, the optimality of some previous results is proved. Differential Attacks on CRAFT Exploiting the Involutory S-boxes and Tweak Additions 📺 Abstract Hao Guo Siwei Sun Danping Shi Ling Sun Yao Sun Lei Hu Meiqin Wang CRAFT is a lightweight tweakable block cipher proposed at FSE 2019, which allows countermeasures against Differential Fault Attacks to be integrated into the cipher at the algorithmic level with ease. CRAFT employs a lightweight and involutory S-box and linear layer, such that the encryption function can be turned into decryption at a low cost. Besides, the tweakey schedule algorithm of CRAFT is extremely simple, where four 64-bit round tweakeys are generated and repeatedly used. Due to a combination of these features which makes CRAFT exceedingly lightweight, we find that some input difference at a particular position can be preserved through any number of rounds if the input pair follows certain truncated differential trails. Interestingly, in contrast to traditional differential analysis, the validity of this invariant property is affected by the positions where the constant additions take place. We use this property to construct "weak-tweakey" truncated differential distinguishers of CRAFT in the single-key model. Subsequently, we show how the tweak additions allow us to convert these weak-tweakey distinguishers into ordinary secret-key distinguishers based on which key-recovery attacks can be performed. 
Moreover, we show how to construct MILP models to search for truncated differential distinguishers exploiting this invariant property. As a result, we find a 15-round truncated differential distinguisher of CRAFT and extend it to a 19-round key-recovery attack with $2^{60.99}$ data, $2^{68}$ memory, $2^{94.59}$ time complexity, and success probability 80.66%. Also, we find a 14-round distinguisher with probability $2^{-43}$ (experimentally verified), a 16-round distinguisher with probability $2^{-55}$, and a 20-round weak-key distinguisher ($2^{118}$ weak keys) with probability $2^{-63}$. Experiments on round-reduced versions of the distinguishers show that the experimental probabilities are sometimes higher than predicted. Finally, we note that our result is far from threatening the security of the full CRAFT. On the Security Margin of TinyJAMBU with Refined Differential and Linear Cryptanalysis 📺 Abstract Dhiman Saha Yu Sasaki Danping Shi Ferdinand Sibleyras Siwei Sun Yingjie Zhang This paper presents the first third-party security analysis of TinyJAMBU, which is one of 32 second-round candidates in NIST's lightweight cryptography standardization process. TinyJAMBU adopts an NLFSR based keyed-permutation that computes only a single NAND gate as a non-linear component per round. The designers evaluated the minimum number of active AND gates, however such a counting method neglects the dependency between multiple AND gates. There also exist previous works considering such dependencies with stricter models, however those are known to be too slow. In this paper, we present a new model that provides a good balance of efficiency and accuracy by only taking into account the first-order correlation of AND gates that frequently occurs in TinyJAMBU. With the refined model, we show a 338-round differential with probability $2^{-62.68}$ that leads to a forgery attack breaking 64-bit security. This implies that the security margin of TinyJAMBU with respect to the number of unattacked rounds is approximately 12%.
We also show a differential on full 384 rounds with probability $2^{-70.64}$, thus the security margin of full rounds with respect to the data complexity, namely the gap between the claimed security bits and the attack complexity, is less than 8 bits. Our attacks also point out structural weaknesses of the mode that essentially come from the minimal state size to be lightweight. Quantum Collision Attacks on AES-like Hashing with Low Quantum Random Access Memories 📺 Abstract Xiaoyang Dong Siwei Sun Danping Shi Fei Gao Xiaoyun Wang Lei Hu At EUROCRYPT 2020, Hosoyamada and Sasaki proposed the first dedicated quantum attack on hash functions -- a quantum version of the rebound attack exploiting differentials whose probabilities are too low to be useful in the classical setting. This work opens up a new perspective toward the security of hash functions against quantum attacks. In particular, it tells us that the search for differentials should not stop at the classical birthday bound. Despite these interesting and promising implications, the concrete attacks described by Hosoyamada and Sasaki make use of large quantum random access memories (qRAMs), a resource whose availability in the foreseeable future is controversial even in the quantum computation community. Without large qRAMs, these attacks incur significant increases in time complexities. In this work, we reduce or even avoid the use of qRAMs by performing a quantum rebound attack based on differentials with non-full-active super S-boxes. Along the way, an MILP-based method is proposed to systematically explore the search space of useful truncated differentials with respect to rebound attacks. As a result, we obtain improved attacks on \aes-\texttt{MMO}, \aes-\texttt{MP}, and the first classical collision attacks on 4- and 5-round \grostl-\texttt{512}.
Interestingly, the use of non-full-active super S-box differentials in the analysis of \aes-\texttt{MMO} gives rise to new difficulties in collecting enough starting points. To overcome this issue, we consider attacks involving two message blocks to gain more degrees of freedom, and we successfully compress the qRAM demand of the collision attacks on \texttt{AES}-\texttt{MMO} and \texttt{AES}-\texttt{MP} (EUROCRYPT 2020) from $2^{48}$ to a range from $2^{16}$ to $0$, while still maintaining a comparable time complexity. To the best of our knowledge, these are the first dedicated quantum attacks on hash functions that slightly outperform Chailloux, Naya-Plasencia, and Schrottenloher's generic quantum collision attack (ASIACRYPT 2017) in a model where large qRAMs are not available. This work demonstrates again how a clever combination of classical cryptanalytic technique and quantum computation leads to improved attacks, and shows that the direction pointed out by Hosoyamada and Sasaki deserves further investigation. An Algebraic Formulation of the Division Property: Revisiting Degree Evaluations, Cube Attacks, and Key-Independent Sums 📺 Abstract Kai Hu Siwei Sun Meiqin Wang Qingju Wang Since it was proposed in 2015 as a generalization of integral properties, the division property has evolved into a powerful tool for probing the structures of Boolean functions whose algebraic normal forms are not available. We capture the most essential elements for the detection of division properties from a pure algebraic perspective, proposing a technique named as {\it monomial prediction}, which can be employed to determine the presence or absence of a monomial in the product of the coordinate functions of a vectorial Boolean function $\bs f$ by counting the number of the so-called {\it monomial trails} across a sequence of simpler functions whose composition is $\bs f$. 
Under the framework of the monomial prediction, we formally prove that most algorithms for detecting division properties in previous literature raise no false alarms but may miss. We also establish the equivalence between the monomial prediction and the three-subset bit-based division property without unknown subset presented at EUROCRYPT 2020, and show that these two techniques are perfectly accurate. This algebraic formulation gives more insights into division properties and inspires new search strategies. With the monomial prediction, we obtain the {\it exact} algebraic degrees of \TRIVIUM up to 834 rounds for the first time. In the context of cube attacks, we are able to explore a larger search space in limited time and recover the exact algebraic normal forms of complex superpolies with the help of a divide-and-conquer strategy. As a result, we identify more cubes with smaller dimensions, leading to improvements of some near-optimal attacks against 840-, 841- and 842-round \TRIVIUM. Quantum Circuit Implementations of AES with Fewer Qubits 📺 Abstract Jian Zou Zihao Wei Siwei Sun Ximeng Liu Wenling Wu We propose some quantum circuit implementations of AES with the following improvements. Firstly, we propose some quantum circuits of the AES S-box and S-box$^{-1}$, which require fewer qubits than prior work. Secondly, we reduce the number of qubits in the zig-zag method by introducing the S-box$^{-1}$ operation in our quantum circuits of AES. Thirdly, we present a method to reduce the number of qubits in the key schedule of AES. While the previous quantum circuits of AES-128, AES-192, and AES-256 need at least 864, 896, and 1232 qubits respectively, our quantum circuit implementations of AES-128, AES-192, and AES-256 only require 512, 640, and 768 qubits respectively, where the number of qubits is reduced by more than 30\%.
Constructing Low-latency Involutory MDS Matrices with Lightweight Circuits 📺 Abstract Shun Li Siwei Sun Chaoyun Li Zihao Wei Lei Hu MDS matrices are important building blocks providing diffusion functionality for the design of many symmetric-key primitives. In recent years, continuous efforts have been made on the construction of MDS matrices with small area footprints in the context of lightweight cryptography. Just recently, Duval and Leurent (ToSC 2018/FSE 2019) reported some 32 × 32 binary MDS matrices with branch number 5, which can be implemented with only 67 XOR gates, whereas the previously known lightest ones of the same size cost 72 XOR gates. In this article, we focus on the construction of lightweight involutory MDS matrices, which are even more desirable than ordinary MDS matrices, since the same circuit can be reused when the inverse is required. In particular, we identify some involutory MDS matrices which can be realized with only 78 XOR gates with depth 4, whereas the previously known lightest involutory MDS matrices cost 84 XOR gates with the same depth. Notably, the involutory MDS matrix we find is much smaller than the AES MixColumns operation, which requires 97 XOR gates with depth 8 when implemented as a block of combinatorial logic that can be computed in one clock cycle. However, with respect to latency, the AES MixColumns operation is superior to our 78-XOR involutory matrices, since the AES MixColumns can be implemented with depth 3 by using more XOR gates. We prove that the depth of a 32 × 32 MDS matrix with branch number 5 (e.g., the AES MixColumns operation) is at least 3. Then, we enhance Boyar's SLP-heuristic algorithm with circuit depth awareness, such that the depth of its output circuit is limited. Along the way, we give a formula for computing the minimum achievable depth of a circuit implementing the summation of a set of signals with given depths, which is of independent interest.
We apply the new SLP heuristic to a large set of lightweight involutory MDS matrices, and we identify a depth 3 involutory MDS matrix whose implementation costs 88 XOR gates, which is superior to the AES MixColumns operation with respect to both lightweightness and latency, and enjoys the extra involution property. Correlation of Quadratic Boolean Functions: Cryptanalysis of All Versions of Full $\mathsf {MORUS}$ 📺 Abstract Danping Shi Siwei Sun Yu Sasaki Chaoyun Li Lei Hu We show that the correlation of any quadratic Boolean function can be read out from its so-called disjoint quadratic form. We further propose a polynomial-time algorithm that can transform an arbitrary quadratic Boolean function into its disjoint quadratic form. With this algorithm, the exact correlation of quadratic Boolean functions can be computed efficiently. We apply this method to analyze the linear trails of $$\mathsf {MORUS}$$ (one of the seven finalists of the CAESAR competition), which are found with the help of a generic model for linear trails of $$\mathsf {MORUS}$$-like key-stream generators. In our model, any tool for finding linear trails of block ciphers can be used to search for trails of $$\mathsf {MORUS}$$-like key-stream generators. As a result, a set of trails with correlation $$2^{-38}$$ is identified for all versions of full $$\mathsf {MORUS}$$, while the correlations of previously published best trails for $$\mathsf {MORUS}$$-640 and $$\mathsf {MORUS}$$-1280 are $$2^{-73}$$ and $$2^{-76}$$ respectively (ASIACRYPT 2018). This significantly improves the complexity of the attack on $$\mathsf {MORUS}$$-1280-256 from $$2^{152}$$ to $$2^{76}$$.
These new trails also lead to the first distinguishing and message-recovery attacks on $$\mathsf {MORUS}$$-640-128 and $$\mathsf {MORUS}$$-1280-128 with surprisingly low complexities around $$2^{76}$$. Moreover, we observe that the condition for exploiting these trails in an attack can be more relaxed than previously thought, which shows that the new trails are superior to previously published ones in terms of both correlation and the number of ciphertext blocks involved. Programming the Demirci-Selçuk Meet-in-the-Middle Attack with Constraints Abstract Danping Shi Siwei Sun Patrick Derbez Yosuke Todo Bing Sun Lei Hu Cryptanalysis with SAT/SMT, MILP and CP has increased in popularity among symmetric-key cryptanalysts and designers due to its high degree of automation. So far, this approach covers differential, linear, impossible differential, zero-correlation, and integral cryptanalysis. However, the Demirci-Selçuk meet-in-the-middle ($$\mathcal {DS}$$-$$\mathsf {MITM}$$) attack is one of the most sophisticated techniques that has not been automated with this approach. By an in-depth study of Derbez and Fouque's work on $$\mathcal {DS}$$-$$\mathsf {MITM}$$ analysis with dedicated search algorithms, we identify the crux of the problem and present a method for automatic $$\mathcal {DS}$$-$$\mathsf {MITM}$$ attack based on general constraint programming, which allows the cryptanalysts to state the problem at a high level without having to say how it should be solved. Our method is not only able to enumerate distinguishers but can also partly automate the key-recovery process. This approach makes the $$\mathcal {DS}$$-$$\mathsf {MITM}$$ cryptanalysis more straightforward and easier to follow, since the resolution of the problem is delegated to off-the-shelf constraint solvers and therefore decoupled from its formulation. We apply the method to SKINNY, TWINE, and LBlock, and we get the currently known best $$\mathcal {DS}$$-$$\mathsf {MITM}$$ attacks on these ciphers.
Moreover, to demonstrate the usefulness of our tool for the block cipher designers, we exhaustively evaluate the security of $$8! = 40320$$ versions of LBlock instantiated with different word permutations in the F functions. It turns out that the permutation used in the original LBlock is one of the 64 permutations showing the strongest resistance against the $$\mathcal {DS}$$-$$\mathsf {MITM}$$ attack. The whole process is accomplished on a PC in less than 2 h. The same process is applied to TWINE, and similar results are obtained. Cryptanalysis of AES-PRF and Its Dual 📺 Abstract Patrick Derbez Tetsu Iwata Ling Sun Siwei Sun Yosuke Todo Haoyang Wang Meiqin Wang A dedicated pseudorandom function (PRF) called AES-PRF was proposed by Mennink and Neves at FSE 2018 (ToSC 2017, Issue 3). AES-PRF is obtained from AES by using the output of the 5-th round as the feed-forward to the output state. This paper presents extensive security analysis of AES-PRF and its variants. Specifically, we consider unbalanced variants where the output of the s-th round is used as the feed-forward. We also analyze the security of "dual" constructions of the unbalanced variants, where the input state is used as the feed-forward to the output of the s-th round. We apply an impossible differential attack, a zero-correlation linear attack, a traditional differential attack, a zero-correlation linear distinguishing attack and a meet-in-the-middle attack on these PRFs and reduced-round versions. We show that AES-PRF is broken whenever s ≤ 2 or s ≥ 6, or reduced to 7 rounds, and Dual-AES-PRF is broken whenever s ≤ 4 or s ≥ 8. Our results on AES-PRF improve the initial security evaluation by the designers in various ways, and our results on Dual-AES-PRF give the first insight to its security.
Analysis of AES, SKINNY, and Others with Constraint Programming Abstract Siwei Sun David Gerault Pascal Lafourcade Qianqian Yang Yosuke Todo Kexin Qiao Lei Hu Searching for different types of distinguishers is a common task in symmetric-key cryptanalysis. In this work, we employ the constraint programming (CP) technique to tackle such problems. First, we show that a simple application of the CP approach proposed by Gerault et al. leads to the solution of the open problem of determining the exact lower bound of the number of active S-boxes for 6-round AES-128 in the related-key model. Subsequently, we show that the same approach can be applied in searching for integral distinguishers, impossible differentials, and zero-correlation linear approximations, in both the single-key and related-(twea)key model. We implement the method using the open source constraint solver Choco and apply it to the block ciphers PRESENT, SKINNY, and HIGHT (ARX construction). As a result, we find 16 related-tweakey impossible differentials for 12-round SKINNY-64-128, based on which we construct an 18-round attack on SKINNY-64-128 (one target version for the crypto competition https://sites.google.com/site/skinnycipher announced at ASK 2016). Moreover, we show that in some cases, when equipped with proper strategies (ordering heuristic, restart and dynamic branching strategy), the CP approach can be very efficient. Therefore, we suggest that the constraint programming technique should become a convenient tool in the hands of symmetric-key cryptanalysts.
MILP-Based Automatic Search Algorithms for Differential and Linear Trails for Speck Kai Fu Meiqin Wang Yinghua Guo Siwei Sun Lei Hu Automatic Security Evaluation and (Related-key) Differential Characteristic Search: Application to SIMON, PRESENT, LBlock, DES(L) and Other Bit-Oriented Block Ciphers Siwei Sun Lei Hu Peng Wang Kexin Qiao Xiaoshuang Ma Ling Song
Carrier to noise ratio in satellite communication (non-GSO vs GSO satellite networks). The ROP defines: how the different types of carriers are categorized according to the class of emission (item C.7.a of Annex 2 in Appendix 4); which criteria to apply for different combinations of carrier types; and the interference adjustment factor to consider for different combinations of carrier types. Request PDF | Estimation of link carrier-to-noise ratio in satellite communication systems | In a satellite network, the transmit power levels of satellite terminals have to be carefully controlled. Determine the power efficiency of your broadcasting signal. The total C/N ratio will depend on the uplink as well as the downlink of the system. In satellite communication systems, we are always working with weak signals, because of the large distances involved. Assuming that the signal bandwidth B is equal to the noise bandwidth B_N, we obtain equation (4). 3.13 Combined Uplink and Downlink C/N Ratio. The complete satellite circuit consists of an uplink and a downlink, as sketched in Fig. 3. Fundamentals of Satellite Communications, Part 2: Link Analysis, Transmission, Path Loss, and Reception (Howard Hausman, MITEQ, Inc., 05/10/2007). Independent carriers to a satellite are assigned a center frequency and a bandwidth (FDM). SNR = signal-to-noise ratio; the achievable bit rate is limited by S/N. Carrier-to-Noise Ratio: CNR is generally accepted to be a pre-detection measurement, that is, one made at RF. From the perspective of analog TV channels, CNR is the difference, in decibels, between the amplitude of a TV channel's visual carrier (for example, a carrier level of +32 dBmV) and the rms amplitude of system noise in a specified bandwidth. In satellite communications, the carrier-to-noise-density ratio (C/N0) is the ratio of the carrier power C to the noise power density N0, expressed in dB-Hz.
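The dB-Hz definition just quoted is a one-line computation; a minimal sketch (the power values below are illustrative only, not from the quoted sources):

```python
import math

def cn0_dbhz(carrier_power_w, noise_density_w_per_hz):
    """Carrier-to-noise-density ratio C/N0 = 10*log10(C / N0), in dB-Hz."""
    return 10 * math.log10(carrier_power_w / noise_density_w_per_hz)

# Illustrative: a 1 pW carrier against a 1e-20 W/Hz noise density.
print(round(cn0_dbhz(1e-12, 1e-20), 1))  # 80.0
```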
When considering only the receiver as a source of noise, it is called the carrier-to-receiver-noise-density ratio. Satellite communication multiple choice questions and answers: quiz on satellite communication objective questions. The carrier-to-noise ratio for a satellite depends upon the effective isotropic radiated power (Satellite Communication Trivia Questions and Answers PDF). Satellite System Noise: the received power in a satellite link is very small (a couple of pW), so amplification must be used to bring the signal strength up to an acceptable level. The main source of noise is the random thermal motion of electrons, plus thermal-like noise from antenna radiation. The total noise power is $P_N = k T_N B_N$, where $k$ is Boltzmann's constant, $1.38\times 10^{-23}$ J/K. Carrier power robbing due to noise (dB): -7.59; power sharing with other carriers (dB): -10.00; net EOC EIRP per carrier (dBm): 33.49 (Figure 1: power lost to noise in the SAR transponder). II. Noise Power Ratio - DCPR Transponder: the GOES-R DCPR transponder is designed for 250 (equivalent) simultaneous shared circuits in an FDMA/TDMA configuration. Estimation of link carrier-to-noise ratio in satellite communications (October 2005). Abbreviations: N/C, noise-to-carrier ratio; NGSO, non-geostationary satellite orbit; NIB, non-interference basis; NPRM, notice of proposed rulemaking. Section 6: Broadcasting and Broadcasting-Satellite Services; 6.1 Introduction. A satellite communication link requires line-of-sight (LoS) communication, but theoretically three equidistant satellites can cover the Earth. The carrier-to-noise ratio and related parameters used to define communications link design and performance are developed based on the basic link and system noise parameters introduced earlier. Carrier-to-noise ratio - Wikipedia. http://ewh.ieee.org/sb/pikes_peak/ctu: a sample calculation of the carrier power to noise power ratio (CNR). Energy-per-bit to noise-density ratio.
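The thermal noise formula $P_N = k T_N B_N$ quoted above can be evaluated directly; a small sketch (the 290 K temperature and 36 MHz bandwidth are assumed example values, not from the quoted sources):

```python
import math

BOLTZMANN = 1.38e-23  # Boltzmann's constant, J/K

def noise_power_watts(noise_temp_k, bandwidth_hz):
    """Thermal noise power P_N = k * T_N * B_N, in watts."""
    return BOLTZMANN * noise_temp_k * bandwidth_hz

# Example: 290 K reference temperature over a 36 MHz transponder bandwidth.
p_n = noise_power_watts(290.0, 36e6)
print(f"{10 * math.log10(p_n / 1e-3):.1f} dBm")  # -98.4 dBm
```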
Variable definitions chart (variable, definition, units). Modulation modifies an RF carrier signal so that it contains the input signal information (amplitude, frequency). Communication satellite: noise will be introduced on the uplink at the satellite receiver input. Denoting the noise power per unit bandwidth by $P_{NU}$ and the average carrier power at the same point by $P_{RU}$, the carrier-to-noise-density ratio on the uplink is $(C/N_0)_U = P_{RU}/P_{NU}$. It is important to note that power levels, and not decibels, are being used here. Carrier-to-noise ratio, also known as CNR and C/N, is a signal-to-noise ratio of a modulated signal. In simple terms, it is a measure of the received carrier strength in relation to the strength of the noise received (Margaret Rouse, Search Networking, CARRIER-TO-NOISE RATIO (CNR OR C/N), n.d.). The carrier-to-noise ratio for a satellite depends upon: (A) effective isotropic radiated power; (B) bandwidth; (C) free-space path losses; (D) all of them. 1. A satellite link has the same carrier-to-noise ratio in the uplink as in the downlink. The overall carrier-to-noise ratio of the link will be: A. always less than that of the individual links; B. always greater than that of the individual links; C. equal to the individual one; D. unpredictable. Solution: the overall carrier-to-noise ratio of a satellite link is always less than that of the individual links. A function of the satellite communication system designer, or system engineer, is to interface between the source of system requirements (i.e., the user) and the sources of performance data.
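The quiz solution above reflects a standard rule: in linear (power-ratio) units, the reciprocals of the uplink and downlink C/N add, so the total is always below either one. A minimal sketch of that combination (not taken from any of the quoted sources):

```python
import math

def combined_cn_db(cn_uplink_db, cn_downlink_db):
    """Combine uplink and downlink C/N values given in dB.

    Reciprocals add in linear power-ratio units:
    1/(C/N)_total = 1/(C/N)_U + 1/(C/N)_D
    """
    cn_u = 10 ** (cn_uplink_db / 10)   # dB -> power ratio
    cn_d = 10 ** (cn_downlink_db / 10)
    cn_total = 1 / (1 / cn_u + 1 / cn_d)
    return 10 * math.log10(cn_total)

# Equal uplink and downlink C/N: the total sits 3 dB below either one.
print(round(combined_cn_db(20.0, 20.0), 2))  # 16.99
```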
basic system parameters and the signal- or carrier-to-noise ratio (CNR) on a given transmission path. Contents: 12.5 System Noise; 12.5.1 Antenna Noise; 12.5.2 Amplifier Noise Temperature; 12.5.3 Amplifiers in Cascade; 12.5.4 Noise Factor; 12.5.5 Noise Temperature of Absorptive Networks; 12.5.6 Overall System Noise Temperature; 12.6 Carrier-to-Noise Ratio; 12.7 The Uplink; 12.7.1 Saturation Flux Density. The carrier-to-noise ratio is central in determining the performance of a satellite communications channel: the more noise a channel can tolerate, the better the quality of the link, making the system more reliable. Satellite Communication Link Budget Optimization Using PSO & Cuckoo Search Algorithm: the C/N ratio needs to be computed only once, since it can be referred to the input of any block; the C/N ratio before and after each block is the same. III.II C/N ratio for the uplink station (Figure 2: overall satellite communication system [11]). $\left[\frac{C}{N_0}\right]$ is the carrier-to-noise-density ratio, and $\left[\frac{G}{T}\right]$ is the satellite receiver G/T ratio, with units of dB/K. Here, the losses represent the satellite receiver feeder losses; the losses which depend upon frequency are all taken into consideration. The gain-to-noise-temperature ratio is the vital parameter for qualifying the uplink operation of a given satellite network; this is akin to the effective isotropic radiated power for the downlink. Figs. 4 and 5 show the effect of noise-figure drifts on the G/T ratio of a space satellite employing a 5.3 dB noise figure. The output signal-to-noise ratio of an FM receiver is valid only if the carrier-to-noise ratio measured at the discriminator input is high compared to unity. It is observed that as the input noise is increased so that the carrier-to-noise ratio decreases, the FM receiver breaks.
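The bracketed decibel quantities above combine additively in a link budget: the uplink carrier-to-noise density is commonly written $[C/N_0] = [\mathrm{EIRP}] + [G/T] - [\mathrm{Losses}] - 10\log_{10}(k)$. A hedged sketch of that sum (the EIRP, G/T, and loss values below are assumed for illustration, not from the quoted sources):

```python
import math

BOLTZMANN_DBW = 10 * math.log10(1.38e-23)  # ~ -228.6 dBW/K/Hz

def uplink_cn0_dbhz(eirp_dbw, g_over_t_dbk, losses_db):
    """Uplink carrier-to-noise-density ratio in dB-Hz:
    [C/N0] = [EIRP] + [G/T] - [Losses] - 10*log10(k)."""
    return eirp_dbw + g_over_t_dbk - losses_db - BOLTZMANN_DBW

# Hypothetical numbers: 52 dBW EIRP, -3 dB/K satellite G/T, 207 dB total losses.
print(round(uplink_cn0_dbhz(52.0, -3.0, 207.0), 1))  # 70.6
```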
At first, individual clicks are heard in the output. The calculation of the carrier-to-noise ratio in a satellite link is based on equations for the received signal power $P_r$ and the receiver noise power: $P_r = \mathrm{EIRP} + G_r - L_p - L_a - L_{ta} - L_{ra}$ (dBW), where $\mathrm{EIRP} = 10\log_{10}(P_t G_t)$ dBW, the free-space path loss is $L_p = 20\log_{10}(4\pi R/\lambda)$ dB, $L_a$ is the attenuation in the atmosphere, and $L_{ta}$, $L_{ra}$ are the losses associated with the transmitting and receiving antennas. The link carrier-to-noise (C/N) estimation process plays a key role in facilitating such adaptive techniques. The C/N estimation process for a typical satellite communication system is presented and analyzed in this paper. It is shown that estimating the C/N is particularly challenging when the received signal C/N ratio is large. Carrier Power to Noise Power Ratio (CNR) - YouTube. Optimizing Satellite Communications Using DoubleTalk® Carrier-in-Carrier & CDM-625 Advanced Satellite Modem (December 2010). Margin requirements: typical interfering-signal cancellation is 28 to 35 dB (depending on the product); the residual interfering signal appears as noise, causing a slight degradation of the Eb/No. The technical parameters of the satellite concerned (transponder bandwidth, gain, and sensitivity) also enter into the link budget. The result is a single number for the uplink or downlink: the carrier-to-noise ratio (C/N). This value determines the data rate that can be achieved over the satcom link with a specified modulation method. Explain what is meant by carrier-to-noise ratio. At the input to a receiver the received carrier power is 400 pW and the system noise temperature is 450 K. Calculate the carrier-to-noise density ratio in dBHz.
Given that the bandwidth is 36 MHz, calculate the carrier-to-noise ratio in decibels. The lower the noise figure of the LNA, the lower the system temperature. The antenna temperature depends on the elevation angle from the earth station to the satellite. G/T (gain to system noise temperature) is the figure of merit of any receiving system: it is the ratio of the gain of the system to the system noise temperature, $G/T = G - 10\log_{10}(T_{sys})$ [dB/K]. Contents: Noise; Noise Density $N_0$; Carrier-to-Noise $C/N$; Figure of Merit $G/T$; Bit Energy-to-Noise $E_b/N_0$; Link Equations; VII. Satellite Link Budget Analysis; VIII. Application; Future Expansion; Conclusions; Further Research; References. Objectives: differentiate between noise and distortion; calculate the total gain in a communication system given the power gain or attenuation of its component parts; state what is meant by signal-to-noise ratio; select and apply the equation $\mathrm{SNR}_{\mathrm{dB}} = 10\log_{10}(P_S/P_N) = 20\log_{10}(V_S/V_N)$. Short note on combined uplink and downlink C/N ratio.
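The textbook exercise above (400 pW received carrier, 450 K system noise temperature, 36 MHz bandwidth) can be checked numerically; a minimal sketch:

```python
import math

k = 1.38e-23          # Boltzmann's constant, J/K
C = 400e-12           # received carrier power, W
T = 450.0             # system noise temperature, K
B = 36e6              # noise bandwidth, Hz

N0 = k * T                          # noise power density, W/Hz
cn0_dbhz = 10 * math.log10(C / N0)  # carrier-to-noise-density ratio, dB-Hz
cn_db = cn0_dbhz - 10 * math.log10(B)  # carrier-to-noise ratio, dB

print(round(cn0_dbhz, 1), round(cn_db, 1))  # 108.1 32.5
```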
A satellite telemetry link operating in S-band uses frequency modulation to transmit th [Figure residue — receiver processing chain: Carrier+Noise → symbol rate → transmitted bit rate → FEC-decoded data rate → composite information rate, with Viterbi/sequential/TCM/Turbo decoder stages and an example Eb/No ladder: 6.5 dB; 5.40 dB − 3 dB (QPSK) = 2.40 dB; 2.40 dB + 1.25 dB (3/4 rate) = 3.65 dB; 3.65 dB + 0.51 dB (126,112,7) = 4.16 dB.] Example 1: calculate the ratio of noise power to carrier power. 6. The bandwidth required for a modulated carrier depends on: a. the carrier frequency; b. the signal-to-noise ratio; c. the signal-plus-noise to noise ratio; d. the baseband frequency range. ANS: D 7. When two or more signals share a common channel, it is called: a. sub-channeling; b. signal switching; c. SINAD; d. multiplexing. ANS: D 8. TDM stands. It is measured by the signal-to-noise ratio, SNR, where S is the power of the received signal and N the noise power, at baseband, which means at the band occupied by the signal after demodulation. We can also describe the signal level against the noise level with the carrier-to-noise-density ratio, C/N0, which represents the SNR in a 1 Hz bandwidth. Research on carrier noise ratio calculation method in satellite communication link Zhu, Manjie 2010-04-02 00:00:00 ABSTRACT In satellite communication systems, in order to overcome the loss which long-distance space transmission brings and provide a reliable and high-quality means of communication to the earth station, the RF carrier. SIGNAL-TO-NOISE (S/N) RATIO The signal-to-noise ratio (S/N) (a.k.a. SNR) in a receiver is the signal power in the receiver divided by the mean noise power of the receiver. All receivers require the signal to exceed the noise by some amount.
Usually if the signal power is less than or just equal to the noise power, it is not detectable. In satellite communication systems, in order to overcome the loss which long-distance space transmission brings and provide a reliable and high-quality means of communication to the earth station, the RF carrier power of the received signals should be much greater than the noise power; that is, the satellite link must satisfy the carrier-to-noise ratio requirement. Carrier-to-Noise Ratio — TV: 8–12 dB; BER = 10^-6: 12 dB (QPSK modulation scheme). Isotropic radiator power flux density: power per unit area, PFD = Pt/(4πR²) [W/m²]. PFD is a regulated parameter: all satellites have a maximum limit of PFD, with limitations enforced by signatory nations. This allows control of interference, important especially due to LEO. Carrier to Noise Ratio: Overview and Applications in als in the broadband satellite communication network and stored in a database als of the receiver and is a measure of the performance of the satellite link. Thus, it is expressed as: On substituting the values of PR and PN in the above equation carrier-to-noise ratio and phase noise. But receiver noise generated in the sidebands of the local oscillator driving the mixer can get added by the mixer. Such added noise increases the noise figure of the receiver. Noise figure has nothing to do with modulation or demodulation. It is independent of the modulation format and of th 12.5.4 Noise factor 362; 12.5.5 Noise temperature of absorptive networks 363; 12.5.6 Overall system noise temperature 365; 12.6 Carrier-to-Noise Ratio 366; 12.7 The Uplink 367; 12.7.1 Saturation flux density 368; 12.7.2 Input backoff 370; 12.7.3 The earth station HPA 371; 12.8 Downlink 371; 12.8.1 Output back-off 373; 12.8.2 Satellite TWTA output 37 The low-noise amplification must be provided at the cable input in order to maintain a satisfactory signal-to-noise ratio.
An LNA at the indoor end of the cable would be of little use, because it would also amplify the cable thermal noise. Signal-to-noise ratio is discussed in more detail in Sec. 12.5. Of course, having to mount the LNB outsid The uplink of a satellite circuit is the link in which the earth station is transmitting the signal and the satellite is receiving it. The carrier-to-noise ratio of the uplink is given as [C/N0]U = [EIRP]U + [G/T]U − [losses]U + 228.6 = transmitter EIRP − (uplink path losses + uplink rain attenuation) + satellite G/T + 228.6 (dB). Satellite Communication; Link Budgeting Explained; (dBHz) The overall value is obtained by combining the carrier-to-noise ratio of the downlink with the carrier-to-noise ratio of the uplink. It can be calculated as follows. Find the overall carrier-to-noise ratio of the satellite: uplink C/N ratio = 61.95, downlink C/N ratio = 48.51. Solution: the overall ratio is dominated by the (weaker) downlink. Quadrature amplitude modulation (QAM) is the name of a family of digital modulation methods and a related family of analog modulation methods widely used in modern telecommunications to transmit information. It conveys two analog message signals, or two digital bit streams, by changing (modulating) the amplitudes of two carrier waves, using the amplitude-shift keying (ASK) digital modulation. and terrestrial communication is presented in this article. Transmission of signals over a satellite communication link Communication link between the satellite and earth station is dependent on various propagation and associated losses which are either constant or vary with weather conditions. Role of receiver noise The resulting carrier-to-noise density ratio given by equation (1) is that which appears at the satellite receiver. Now the uplink equation can be modified in terms of .
Saturation flux density; Earth station HPA. Saturation flux density: we know the travelling wave tube amplifier (TWTA) in a satellite transponder exhibits power output saturation 4 Provide an in-depth treatment of satellite communication systems operation and planning 5 To analyze the various methods of satellite access System Noise, Carrier-to-Noise Ratio, The Uplink, Saturation flux density, Input backoff, Downlink, Output back-off, Combined Uplink and Downlink C/N Ratio The maximum value of the overall (C/N)o ratio of 11.9 dB in the earth station receiver occurs when the transponder backoff is set to 10.6 dB. Problems 2 through 5 all involve a satellite and earth stations with the same specifications. Five earth stations share one transponder of a 6/4 GHz satellite. The satellite and eart Used for satellite and local communications frequency signal (carrier) on a continuous basis - signal-to-noise ratio at receiver - modulation scheme Flynn/Katz - SDR July 1, 2010. Performance of a Radio Link In analog systems, performance is subjective. The PDF is Gaussian, and σ (RMS) can be used to explicitly describe the Gaussian PDF. 2. BER, SNR, and Noise Budget Before the different noise contributions are discussed, the overarching issues of bit-error-rate (BER) and how it relates to noise are discussed. Signal-to-noise ratio (SNR) and the concept of a noise budget are then introduced. 2.1. Satellite Communication - Link Budget - Tutorialspoin In this chapter, let us calculate signal-to-noise ratios and figures of merit of various modulated waves, which are demodulated at the receiver. Signal to Noise Ratio. Signal-to-Noise Ratio (SNR) is the ratio of the signal power to noise power. The higher the value of SNR, the greater will be the quality of the received output imum signal power requires (C b) the equivalent gain, noise figure and temperature of the system c) the signal power, noise power and signal-to-noise ratio at input and output of the system. 9.
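The combined uplink/downlink figure asked for above (uplink C/N = 61.95 dB, downlink C/N = 48.51 dB) follows from the fact that the noise contributions add in linear units, so the reciprocals of the linear C/N values add. A minimal sketch:

```python
import math

def combined_cn_db(cn_up_db, cn_down_db):
    """Combine uplink and downlink C/N (both in dB): reciprocals
    of the linear ratios add, so the result sits just below the
    weaker of the two links."""
    up = 10 ** (cn_up_db / 10)
    down = 10 ** (cn_down_db / 10)
    return 10 * math.log10(1 / (1 / up + 1 / down))

overall = combined_cn_db(61.95, 48.51)
print(round(overall, 2))  # ≈ 48.32 dB, slightly below the downlink value
```

Because the uplink here is ~13 dB stronger, it contributes almost nothing to the total noise, which is why the overall figure is essentially the downlink C/N.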
One of the Mariner spacecraft transmitted to earth from a distance of 1.6 × 10^8 km. The carrier frequency was 2.3 GHz, the power was 17 W and the antenna gain was 27 dB performance of the communications link. It accounts for all the gains and losses of the link under a specific set of conditions, and the result of the analysis is a set of figures of merit that characterizes the quality of the link. The most common figures of merit are: • SNR (signal-to-noise ratio) • Spectral efficiency (bits per second/Hz ed, given. Satellite Communications Pages 351 - 400 - Flip PDF carrier-to-noise density (C/No) ratios. Carrier-to-noise density ratio is a general figure-of-merit parameter for a satellite communications link. Frequency stability data also was taken at one-day intervals to look at long-term stability and its agreement with extrapolated short-term stability results The carrier-to-noise ratio for a satellite depends upon: A) bandwidth; B) free-space path losses; C) effective isotropic radiated power; D) all of them. The angle subtended by the earth at a geostationary communication satellite is .. degrees: A) 15.34; B) 60; C) 120; D) 17.34. (Marked answer: A) 15.34.) Coaching in Satellite Communications part 1 from Board Exam Questions in Electronic System and Technologies (EST), Communications Books. Calculate the carrier-to-noise ratio at the receiver, for a bandwidth of 1 MHz. 30.6 dB. 48. If a satellite has a total transmitter power (Pt) of 1000 W, determine the energy per bit (Eb) for a. Signal-to-noise ratio itself: this is obviously the basic specification. For an HF radio communications receiver, typically one might expect to see a figure in the region of 0.5 microvolts for a 10 dB S/N in a 3 kHz bandwidth for SSB or Morse. For AM a figure of 1.5 microvolts for a 10 dB S/N in a 6 kHz bandwidth at 30% modulation might be.
is free to allocate the relative powers of Carrier 1 and Carrier 2 in whatever way is desired, as long as the PSD of the composite carrier is equal to the target PSD. Environmental factors such as rain affect the power levels and signal-to-noise ratios (SNRs) of the carriers received at the satellite, an Vijay K. Garg, in Wireless Communications & Networking, 2007 6.6 Capacity of a DS-CDMA System. The capacity of a DS-CDMA system depends on the processing gain, Gp (the ratio of spreading bandwidth, Bw, to information rate, R), the bit energy-to-interference ratio, Eb/I0, the voice duty cycle, vf, the DS-CDMA omnidirectional frequency reuse efficiency, ηf, and the number of sectors, G. A GEO satellite orbits at a fixed longitudinal location at an altitude of about 36,000 km above the equator. The transponders on the satellite provide a signal boost and frequency translation of signals for the ground terminals. The antennas on the satellite are designed to provide the required communications coverage to the terminals on the ground C/N0 is the carrier-to-noise-density ratio, sometimes known as C/kT, because the noise density is Boltzmann's constant k multiplied by the noise temperature T. This picture shows a signal displayed on a spectrum analyser Question: V. Carrier-to-Noise Ratio (C/N) (3 marks) One of the objectives of any satellite communication system is to meet a minimum carrier-to-noise (C/N) ratio for a specified percentage of time. Suppose we have a 4 GHz receiver with the following gains and noise temperatures: RF gain Grf = 23 dB, RF amplifier noise temperature Trf = 50 K, mixer gain Gm = 0 dB. generally accepted spec level of -24 dBc. Beyond the simple two-carrier intermodulation test, modern system designers use two other ratios. The first test is the Noise Power Ratio (NPR) of an amplifier, defined as the difference between a theoretical infinite carrier source with a notch input, and the notch noise after amplification a. the carrier frequency. b.
the signal-to-noise ratio. c. the signal-plus-noise to noise ratio. For satellite communications, _____ noise can be a serious problem. View Answer: Answer: solar. The input to an amplifier has a signal-to-noise ratio of 100 dB and an output signal-to-noise ratio of 80 dB. Find NF, both in dB and as a ratio Satellite Data Communications Link Requirements for a Precision C/N Ratio Generation at Specific Carrier Frequencies Enables New Testing Capabilities for Multi-Signal Data Streams in Satellite Communications Noisecom, a Wireless Telecom Group company, announces a new CNG-EbNo programmable precision carrier-to-noise (C/N) generator with an integrated spectrum analyzer designed for communication systems data streams that include multiple carriers. Effects of Noise on Communication Systems ELEC 350 Fall 2007 • Carrier-to-noise ratio • Baseband signal-to-noise ratio • Received power Phase Modulation (PM): phase change of the angle of the carrier is proportional to intelligence amplitude. Frequency Modulation (FM): frequency change of the angle of the carrier is proportional to intelligence amplitude. • FM developed as alternative to AM in 1931 - over 10 years after AM commercial broadcast started - goal was to develop a system less susceptible to external noise picku INTRODUCTION TO OPTICAL COMMUNICATION SYSTEMS The high carrier frequency of the optical carrier also has some drawbacks, especially as it relates, through the speed of light, to the optical wavelength (Table 1.1). improved signal-to-noise ratio goes (as will be discussed in section 11.4 of Chapter 11) Minimum carrier-to-noise ratio values (CNR, C/N) for DVB noise ratio in a communication system.
If we receive a signal with average power Psig, and the average noise power level is Pnoise, then the SNR is simply SNR = Psig/Pnoise, or SNR (dB) = 10·log10(Psig/Pnoise). We distinguish between random noise and noise due to interferers or distortion generated by the amplifie and refers to the ratio of the carrier power and the noise power per unit bandwidth. For the GPS L1 C/A signal, one can consider the received signal power as the power of the original unmodulated carrier power (at the point of reception in a receiver) that has been spread by the spreading (ranging) codes when transmitted from a satellite. We ca The received microwave power involved in satellite links is typically very small (of the order of a few 100 picowatts). This means that specially designed earth stations that keep system noise to a minimum (preserving C/N, the carrier-to-noise ratio) are used to transmit/receive satellite communications Research on carrier noise ratio calculation method in 45. Express in decibels the ratio of noise powers 50 W to 10 W. a. 7 dB b. 21 dB c. 14 dB d. 3.5 dB 46. What do you call the noise coming from the sun and stars? a. Black-body noise b. Space noise c. Galactic noise d. All of these 47. A satellite has a noise figure of 1.6 dB. Find its equivalent noise temperature. a. 139 K b. 192 K c. Satellite communication (a tutorial) 1. • GEO satellites require more power for communications • The signal-to-noise ratio for GEOs is worse because of the distances involved • A few GEOs can cover most of the surface of the earth • Note that polar regions cannot be seen by GEOs Carrier Parameters • Performance. Third-order intermodulation products fall at 2f1−f2 and 2f2−f1, where f1 is the frequency of carrier #1 and f2 is the frequency of carrier #2. It can occur at both the E/S and the satellite. Intermodulation Interference Cause: the U/L power level of each carrier is set so high that intermodulation occurs; the U/L power level is increased without considering the possibility of intermodulatio Ref_4.pdf - Satellite Communications Link Design for ..
This provides a more accurate depiction of the health of the wireless signals as it takes the RF environment and ambient noise levels into account. For instance, a received signal of -65 dBm can be considered good at a location that has a noise floor of -90 dBm (SNR 25 dB) but not so much at a location with a noise floor of -80 dBm (SNR 15 dB) Thermal Noise Noise is assumed to be independent of frequency. Thermal noise present in a bandwidth of B hertz (in watts): N = kTB; or, in decibel-watts: N = 10log k + 10log T + 10log B = −228.6 dBW + 10log T + 10log B. Noise Temperature Noise temperature is more useful in satellite communication systems; it is best to convert noise figure to noise temperature, T: T = T0(NF − 1), where NF is a linear ratio, not in decibels, and T0 is the reference temperature (290 K). carrier noise to the baseband signal. Eb/No = carrier power (dBm) − noise power density (dBm/Hz) − 10log(Fb), where Fb is the bit rate in Hz, and Nyquist filtering is used. Carrier-to-Noise Ratio or C/N is often used to describe the modulated S/N of a digital radio. The Eb/No can be derived from C/N, assuming that the noise about th Total noise in dBm: Pn (dBm) = −174 + 10log10 B + NF = 10log10(kTBF × 1000), i.e., Pn = kTBF × 1000 milliwatts. Some typical noise figures (Device — Noise Figure NF — Noise Factor F): Satellite Receiver, 1 dB, 1.26; Cell Phone, 5 dB, 3.16; Wi‑Fi, 8 dB, 6.31; Spectrum Analyzer, 25 dB, 316. The carrier is similar to a DVB-S or DVB-S2 carrier which carries several MPEG TV and audio programmes. The carrier on the satellite is made up of a sequence of joined-together pulses to make a continuous signal. Each pulse is a symbol. According to the modulation method, each symbol represents 1, 2, 3, etc. bits of transmission-rate data Remember that the signal-to-noise ratio, sometimes referred to as the S/N ratio, is usually quoted in decibels — the difference between signal and noise levels rather than a linear ratio. So the bigger the number, the better.
Most experts recommend an SNR of at least 20 dB just for data — surfing the web, looking up charts and other related traffic. [ASK/on-off keying:] one binary digit is represented by the presence of a carrier, at constant amplitude; the other binary digit is represented by the absence of the carrier, where the carrier signal is A cos(2πfc t). That is, s(t) = A cos(2πfc t) for binary 1 and s(t) = 0 for binary 0. Very susceptible to noise. Used to transmit digital data over optical fiber.
CommonCrawl
How to get a Presentation of a Group $\newcommand{\R}{\mathbf R}$ Let $G$ be the group of homeomorphisms of $\R^2$ generated by $g$ and $h$, where $g(x, y)=(x+1, y)$ and $h(x, y)=(-x, y+1)$. We want to show that $G\cong \langle a, b|\ b^{-1}aba\rangle$. I tried the following: Define a map $f:\langle a, b\rangle \to G$ which sends $a$ to $g$ and $b$ to $h$. Then it can be checked that $b^{-1}aba$ lies in the kernel of $f$. So $f$ factors through $\langle a, b|\ b^{-1}aba\rangle$ to give a map $\bar f: \langle a, b|\ b^{-1}aba\rangle\to G$. What I am unable to show is that $\bar f$ is injective. Also, here we were already given a presentation which we had to show is isomorphic to $G$. If it were not given, then is there a general way to get one? group-theory free-groups — caffeinemachine
The relation $b^{-1}ab = a^{-1}$ allows you to write every element of $G$ as $a^ib^j$ for some $i,j \in {\mathbb Z}$. So to show that $\bar{f}$ is injective, it is sufficient to show that these elements all map onto distinct elements of $G$, which you should be able to do. (Finding a normal form for the elements of a group defined by a finite presentation is a standard technique for proving that the group is isomorphic to some other, more concrete group.) – Derek Holt May 25 '16 at 18:00
A different approach would be to use some algebraic topology: look at the quotient space of the group acting on the plane; the fundamental group of that space will be your group (this works only because the group acts nicely), and you can use van Kampen's theorem to calculate a presentation. Check out 1.2 and 1.3 in Hatcher's algebraic topology book. – Paul Plummer May 26 '16 at 15:53
It would be nice if I could receive any feedback on my answer, thanks! – janmarqz Apr 9 '18 at 15:07
@janmarqz I apologize for not noticing. I got a notification but I thought it is an old post so I did not see the page carefully.
I didn't realize that a new answer was added. Give me some time to read your answer. I am actually quite busy right now with my PhD work. Thanks. – caffeinemachine Apr 9 '18 at 15:38
What you said is already feedback :) thanks again – janmarqz Apr 9 '18 at 16:17
Direct calculations with $$\left(\begin{array}{c}x\\y\end{array}\right) \stackrel{g}\longmapsto \left(\begin{array}{c}x+1\\y\end{array}\right)\ \mbox{and} \ \left(\begin{array}{c}x\\y\end{array}\right) \stackrel{h}\longmapsto\left(\begin{array}{c}-x\\y+1\end{array}\right),$$ give you $$ \left( \begin{array}{c} x\\ y \end{array} \right) \stackrel{g^{-1}}\longmapsto \left( \begin{array}{c} x-1\\ y \end{array} \right) \ \mbox{and} \ \left( \begin{array}{c} x\\ y \end{array} \right) \stackrel{h^{-1}}\longmapsto \left( \begin{array}{c} -x\\ y-1 \end{array} \right), $$ respectively. But also $gh=hg^{-1}$, because $$\left(\begin{array}{c}x\\y\end{array}\right) \stackrel{h}\longmapsto\left(\begin{array}{c}-x\\y+1\end{array}\right) \stackrel{g}\longmapsto\left(\begin{array}{c}-x+1\\y+1\end{array}\right),$$ and $$\left(\begin{array}{c}x\\y\end{array}\right) \stackrel{g^{-1}}\longmapsto\left(\begin{array}{c}x-1\\y\end{array}\right) \stackrel{h}\longmapsto\left(\begin{array}{c}-x+1\\y+1\end{array}\right),$$ and this implies that $h^{-1}ghg=e$. Take all the reduced words in the letters $g,h,g^{-1},h^{-1}$; each can be brought into the canonical form $g^mh^n$ by taking into account that $gh=hg^{-1}$. So the subgroup $\langle\{g,h\}\rangle$ (the subgroup generated by $g,h$) has the presentation $$\langle g,h\ |\ h^{-1}ghg=e\rangle.$$ janmarqz
I do not see how this gives us that $G$ is isomorphic to $\langle a, b| b^{-1}aba\rangle$. It seems that you have also shown that $f$ is surjective. Am I missing something? – caffeinemachine Apr 10 '18 at 1:49
@caffeinemachine.
Another example of how to read a presentation for a subgroup inside another group: the matrix $j= \left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right) $ is an element of the group $SL_2(\Bbb Z)$ and satisfies $ \left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right)^n = \left( \begin{array}{cc} 1&n\\ 0&1 \end{array} \right), $ so the subgroup $\langle\{ j\}\rangle$ has the presentation $$\langle j\ |\quad\rangle\cong \Bbb Z$$ of the free cyclic group of rank one. – janmarqz Apr 16 '18 at 17:44
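The relation used in the answer — that $h^{-1}ghg$ acts as the identity on $\R^2$, and that $gh = hg^{-1}$ — can also be checked numerically. A minimal sketch in Python (the function names are illustrative, not from the thread):

```python
# The two generating homeomorphisms of R^2 and their inverses.
def g(p):
    x, y = p
    return (x + 1, y)

def g_inv(p):
    x, y = p
    return (x - 1, y)

def h(p):
    x, y = p
    return (-x, y + 1)

def h_inv(p):
    x, y = p
    return (-x, y - 1)

# Check the relation h^{-1} g h g = identity on sample points.
for p in [(0.0, 0.0), (1.5, -2.0), (-3.25, 7.0)]:
    assert h_inv(g(h(g(p)))) == p

# The rewriting rule gh = hg^{-1}, which puts every word into the
# canonical form g^m h^n.
p = (0.5, 0.5)
assert g(h(p)) == h(g_inv(p))
print("relations verified")
```

This does not replace the injectivity argument, of course — it only confirms that the relator really does lie in the kernel of $f$.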
CommonCrawl
Mapping disparities in education across low- and middle-income countries Local Burden of Disease Educational Attainment Collaborators Nature volume 577, pages 235–238 (2020) Educational attainment is an important social determinant of maternal, newborn, and child health1,2,3. As a tool for promoting gender equity, it has gained increasing traction in popular media, international aid strategies, and global agenda-setting4,5,6. The global health agenda is increasingly focused on evidence of precision public health, which illustrates the subnational distribution of disease and illness7,8; however, an agenda focused on future equity must integrate comparable evidence on the distribution of social determinants of health9,10,11. Here we expand on the available precision SDG evidence by estimating the subnational distribution of educational attainment, including the proportions of individuals who have completed key levels of schooling, across all low- and middle-income countries from 2000 to 2017. Previous analyses have focused on geographical disparities in average attainment across Africa or for specific countries, but—to our knowledge—no analysis has examined the subnational proportions of individuals who completed specific levels of education across all low- and middle-income countries12,13,14. By geolocating subnational data for more than 184 million person-years across 528 data sources, we precisely identify inequalities across geography as well as within populations. Education, as a social determinant of health, is closely linked to several facets of the Sustainable Development Goals (SDGs) of the United Nations2. In addition to the explicit focus of SDG 4 on educational attainment, improved gender equality (SDG 5) and maternal, newborn, and child health (SDG 3) have well-documented associations with increased schooling15,16,17. In 2016, after years of deprioritization, aid to education reached its highest level since 2002 (ref. 18).
Despite this shift, only 22% of aid to basic education—defined as primary and lower-secondary—went to low-income countries in 2016, compared to 36% in 2002 (ref. 19). This reflects a persistent pattern in which the distribution of aid does not align with the greatest need, even at the national level. Beyond international aid, domestic policy is also a crucial tool for expanding access to education, especially at higher levels. However, policy-makers often do not have access to a rigorous evidence base at a subnational level. This analysis presents the subnational distribution of education to support the growing evidence base of precision public health data, which shows widespread disparity of health outcomes as well as their social determinants. Mapping education across gender Despite widespread improvement in educational attainment since 2000, gender disparity persists in 2017 in many regions. Figure 1 illustrates the mean number of years of education and the proportion of individuals with no primary school attainment for men and women of reproductive age (15–49 years) in 2017. The average educational attainment is very low across much of the Sahel region of sub-Saharan Africa, consistent with previously published data14. In 2017, there was a large gender disparity in many regions, with men attaining higher average education across central and western sub-Saharan Africa and South Asia. Considerable variation remains between the highest- and lowest-performing administrative units within countries in 2017. For Uganda in 2017, this indicator ranged from 1.9 years of education (95% uncertainty interval, 0.8–3.0 years) in rural Kotido to 11.1 years (10.1–12 years) in Kampala, the capital city. Figure 1b, d displays the proportion of men and women aged 15–49 years who have not completed primary school.
By considering the variation within populations in different locations, these maps help to identify areas with large populations in the vulnerable lower end of the attainment distribution. We estimated large improvements in the proportions of individuals who have completed primary school in Mexico and China. However, across much of the world women in this age group failed to complete primary school at a much higher rate than their male counterparts. Fig. 1: Average educational attainment and proportion of individuals with no completed primary education at the first administrative level and absolute difference between women and men aged 15–49 years. a–d, Mean educational attainment for women (a) and men (c) and the proportion of individuals with no primary school education for women (b) and men (d) aged 15–49 years in 2017. Maps were produced using ArcGIS Desktop 10.6. Despite continued lack of gender parity in education among the reproductive age group, vast progress towards parity has been made among the 20–24 age group. Extended Data Fig. 2 further examines gender parity in 2000 and 2017. This figure highlights two additional advantages of our analytic framework. First, we examined a younger group aged 20–24 years. Although education in this group is less directly relevant to maternal, newborn, and child health than education in the full window of reproductive age, these estimates allowed us to capture how the landscape of education has shifted over time (that is, across successive cohorts) and is therefore more likely to pick up improvements to access and retention in education systems that have been made since 2000. Second, we illustrate the probability that this estimated ratio is credibly different from 1 (parity between sexes) given the full uncertainty in our data and model. 
In 2000, we estimated that men completed schooling at a higher rate than women across much of the world, particularly for primary school education (that is, the probability that the parity ratio is greater than 1 was over 95%). This was true in most countries for both primary and secondary completion rates, but especially so in Burundi, Angola, Uganda, and Afghanistan (Extended Data Fig. 2a, c). By 2017, many countries moved significantly towards parity in both secondary and primary completion rates with the exception of large regions within central and western sub-Saharan Africa (Extended Data Fig. 2b, d). Inequalities within and between countries The subnational estimates of attainment presented here enable a closer examination of within-country inequality and associated trends over time. Figure 2 plots the national change in secondary attainment rates for women aged 20–24 years with the index of dissimilarity across second administrative-level units in 2017. The index of dissimilarity is an intuitive measure of geographical inequality that can be interpreted as the percentage of women with secondary attainment that would have to move in order to equalize secondary rates across all subnational districts. We estimated that countries that experienced more national progress over the period tended to be more spatially equal in 2017. However, the top-right quadrant of the graph highlights several countries that experienced substantial national progress yet remain some of the most geographically unequal countries today. Fig. 2: National progress in secondary attainment rates for women aged 20–24 years compared with the national index of dissimilarity in 2017. a, Change in secondary attainment rates for women age 20–24 years between 2000 and 2017 compared with the national index of dissimilarity in 2017 (simple linear regression lines are included). b, Map of the national index of dissimilarity in 2017. Maps were produced using ArcGIS Desktop 10.6. 
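The index of dissimilarity described above can be computed directly from district-level counts; a minimal sketch with invented numbers (the four districts and their counts are hypothetical, for illustration only):

```python
def dissimilarity_index(attained, not_attained):
    """Index of dissimilarity across districts: the share of
    'attained' individuals who would have to move between districts
    to equalize attainment rates everywhere."""
    A = sum(attained)
    B = sum(not_attained)
    return 0.5 * sum(abs(a / A - b / B) for a, b in zip(attained, not_attained))

# Hypothetical district counts of women aged 20-24 with / without
# completed secondary school.
attained     = [900, 400, 100,  50]
not_attained = [100, 600, 900, 950]

D = dissimilarity_index(attained, not_attained)
print(round(D, 3))  # → 0.622
```

The index is 0 when every district has the same attainment rate and approaches 1 as attainment concentrates in a few districts, matching the interpretation given in the text.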
We further examined national progress between 2000 and 2017 in two such countries, India and Nigeria, where rates of secondary attainment increased from 10.9% (8.5–12.5%) to 37.2% (33.6–41.1%) and from 11.5% (6.2–18.3%) to 45.0% (37.0–52.5%), respectively (Fig. 3). The geographical distribution between two cohorts—women aged 20–24 years in 2000 and 2017—was analysed by examining all proportions simultaneously (Fig. 3a, b). We estimate that there has been a massive shift towards primary and secondary completion coupled with greater geographical variability in completion rates (that is, spread of the dots that represent subnational units in the legend). The majority of the 2017 cohort living in the northwest and northeast of India never completed secondary school. Urban centres in the south, such as Bangalore and Mumbai, have seen considerable progress compared with more rural regions. In Nigeria, we estimate substantial national improvement; however, the country remained one of the most spatially unequal in 2017 (Fig. 3d, e). The more-urban south, particularly around Lagos, experienced much faster progress than the more-rural north. The implications of the population distribution were explored by decomposing the improvement in the national rate of secondary completion since 2000 for each country into the additive contributions of rate changes at the second administrative level (Fig. 3c, f). This demonstrates that national progress was largely driven by improvements in populous urban regions (particularly Maharashtra, India, and Lagos, Nigeria), underscoring the importance of how subnational progress (or lack thereof) contributes differentially to narratives surrounding national change. Fig. 3: Attainment rates and contributions to national change in secondary rates for women aged 20–24 years in India and Nigeria, 2000–2017. a, b, Attainment rates for women aged 20–24 years in 2000 (a) and 2017 (b) at the second administrative level in India. 
c, Additive contributions of changes in the attainment rates at the second administrative level to change in the rate at the national level between 2000 and 2017 in India. d, e, Attainment rates for women aged 20–24 years in 2000 (d) and 2017 (e) at the second administrative level in Nigeria. f, Additive contributions of changes in the attainment rates at the second administrative level to change in the rate at the national level between 2000 and 2017 in Nigeria. On all ternary maps, the 'Zero' category includes all individuals with either no schooling or some primary schooling without completion. Maps were produced using ArcGIS Desktop 10.6.

Discussion and limitations

We have built on previous modelling efforts that focused on the geographical distribution of average education14 by extending our estimation to the distribution of attainment, highlighting not only average attainment but also the proportions of individuals who completed key levels of schooling that are central to policy efforts. As we demonstrate, throughout much of the world women lag behind their male counterparts, and there is significant heterogeneity across subnational regions. Countries such as South Africa, Peru, and Colombia have seen tremendous improvement since 2000 in the proportion of the young adult population who have completed secondary school. As this trend continues, it will be important to focus not only on attainment but also on quality of education. However, many young women across the world still faced obstacles to attaining even a basic level of education in 2017 (Extended Data Fig. 3). This represents a missed opportunity for the global health community to focus on a well-studied determinant of maternal, newborn, and child health. Even with only marginal returns to health in the short term, studies suggest that, on average, communities will also see increased human capital, social mobility, and less engagement in child marriage or early childbearing20,21.
Children and adolescents do not complete formal schooling for many reasons. Many factors differentially affect girls, such as cost, late or no school enrolment, forced withdrawal of married adolescents, and the social influence of family members concerning the traditional roles of girls and women4,20,22,23. A critical step is acknowledging that commercialization in the area of education typically leads to higher inequity24. Treating public education as a societal good by increasing access, particularly in underserved rural communities, reduces inequality. Identifying areas that are stagnating or worsening, particularly in the realm of basic education for young women across the world, is an important first step to targeted, long-term reform efforts that will ultimately have widespread benefits for equity in health and development. Many recent international calls to improve the social determinants of health have stated that measurement of inequity within countries is critical to understanding and tracking the problem, noting that geography is an increasingly important dimension of inequity24,25,26. Where people are born greatly determines their life chances, and continuing to consider development and human capital formation on a national level is insufficient24. The goal of this analysis is to identify local areas that may have experienced negligible improvements, but further rigorous research is required to contextualize these patterns within the unique mix of structural obstacles that each community faces. There are many indirect costs for attending school and each disadvantaged area that we identify in our analysis may experience them in different ways. These include the demand for children to work, the opportunity or monetary costs of attending school, distance to school, lack of compulsory education requirements, high fees for attendance, political instability, and many other forces. 
Overcoming these obstacles to improve educational attainment alone will not necessarily result in a more-educated and healthy population for each country, as highly educated individuals may be more likely to emigrate, resulting in 'brain drain'. This is especially true for countries that have been economically crippled over the past two decades and may lack the economic capacity to absorb a more highly educated labour force. Opening access to education will need to be coupled with economic reforms, both internationally and domestically, if countries are to fully experience dividends in human capital and health. Over the next decade of the SDG agenda, it will be important to maintain the progress that has been made in reprioritising investment in education systems. There remains an alarming lack of distributional accountability in aid, especially to basic education, for which most funding is not going to the countries that need it most19. Connections between educational attainment and health offer promising opportunities for co-financing initiatives. For example, USAID recently invested US$90 million of HIV funding in the construction of secondary schools in sub-Saharan Africa. Global health leaders have noted the need to invest in precise data systems and eliminate data gaps to effectively target resources, develop equitable policy, and track accountability7. Our analysis provides a robust evidence base for such decision-making and advocacy. Decades of research on the effect of basic education on maternal, newborn, and child health position this issue squarely in the purview of the global health agenda. It is crucial for the global health community to invest in long-term, sustainable improvement in the underlying distribution of human capital, as this is the only way to truly influence health equity across generations.
Using a Bayesian model-based geostatistical framework and synthesizing geolocated data from 528 household and census datasets, this analysis provides subnational estimates of the mean number of years of education and the proportion of the population who attained key levels of education for women of reproductive age (15–49 years), women aged 20–24 years, and equivalent male age bins between 2000 and 2017 in 105 low- and middle-income countries (LMICs). Countries were selected for inclusion in this analysis using the socio-demographic index (SDI) published in the Global Burden of Disease (GBD) study27. The SDI is a measure of development that combines education, fertility, and poverty. Countries in the middle, lower-middle, or low SDI quintiles were included, with several exceptions. Albania, Bosnia, and Moldova were excluded despite middle SDI status owing to geographical discontinuity with other included countries and a lack of available survey data. Libya, Malaysia, Panama, and Turkmenistan were included despite higher-middle SDI status to create better geographical continuity. We did not analyse American Samoa, Federated States of Micronesia, Fiji, Kiribati, Marshall Islands, Samoa, Solomon Islands, or Tonga, where no available survey data could be sourced. Analytical steps are described below, and additional details can be found in the Supplementary Information.

We compiled a database of survey and census datasets that contained geocoding of subnational administrative boundaries or GPS coordinates for sampled clusters. These included datasets from 528 sources (see Supplementary Table 2). These sources comprised at least one data source for all but two countries on our list of LMICs: Western Sahara and French Guiana. We chose to exclude these two countries from our analysis; 42 of the 105 included countries have only subnational administrative-level data. We extracted demographic, education, and sample design variables.
The coding of educational attainment varies across survey families. In some surveys, the precise number of years of attainment is not provided, with attainment instead aggregated into categories such as 'primary completion' or 'secondary completion'. In such cases, individuals who report 'primary completion' may have gone on to complete some portion of secondary education, but these additional years of education are not captured in the underlying dataset. Previous efforts to examine trends in mean years of education have either assumed that no additional years of education were completed (that is, primary education only) or have used the midpoint between primary and secondary education as a proxy28. Trends in the single-year data, however, demonstrate that such assumptions introduce bias in the estimation of attainment trends over time and space, as differences in actual drop-out patterns or binning schema can lead to biased mean estimates29. For this analysis, we used a recently developed method that selects a training subset of similar surveys across time and space to estimate the unobserved single-year distribution of binned datasets29. In comprehensive tests of cross-validation that leveraged data for which the single-year distributions are observed, this algorithmic approach significantly reduces bias in summary statistics estimated from datasets with binned coding schemes compared to alternatives such as the standard-duration method28. The years in all coding schemes were mapped to the country- and year-specific references in the UNESCO International Standard Classification of Education (ISCED) for comparability30. We used a top coding of 18 years on all data; this is a common threshold in many surveys that have a cap and it is reasonable to assume that the importance of education for health outcomes (and other related SDGs) greatly diminishes after what is the equivalent of 2 to 3 years of graduate education in most systems. 
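To see why such binning assumptions bias the mean, consider a toy single-year distribution (hypothetical counts; this illustrates the problem, not the reassignment algorithm of ref. 29):

```python
import numpy as np

# One hypothetical cluster: 40 respondents stopped exactly at primary
# completion (6 years), 10 completed two further years (8 years), and 50
# completed secondary (12 years).
years = np.repeat([6, 8, 12], [40, 10, 50])
true_mean = years.mean()                                # 9.2

# Binned coding collapses 6-11 years into a single 'primary completion' bin.
primary_bin = (years >= 6) & (years < 12)

# Assumption A: no years beyond the completed level -> biased low.
floor_est = np.where(primary_bin, 6, years).mean()      # 9.0

# Assumption B: midpoint between primary (6) and secondary (12) -> biased high.
midpoint_est = np.where(primary_bin, 9, years).mean()   # 10.5

print(true_mean, floor_est, midpoint_est)
```

Either fixed rule misses the actual drop-out pattern within the bin, which is why the training-subset approach that estimates the unobserved single-year distribution performs better.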
Data were aggregated to mean years of education attained and the proportions achieving key levels of education. The levels chosen were the proportion with zero years, the proportion with less than primary school (1–5 years of education), the proportion with at least primary school (6–11 years of education), and the proportion achieving secondary school or higher (12 or more years of education). A subset of the data for a smaller age bin (20–24 years) was also examined to more closely track temporal shifts. Equivalent age bins were aggregated for both women and men to examine disparities in mean years of attainment by sex. Where GPS coordinates were available, data were aggregated to a specific latitude and longitude assuming a simple random sample, as the cluster is the primary sampling unit for the stratified-design survey families, such as the Demographic and Health Survey (DHS) and Multiple Indicator Cluster Survey (MICS). Where geographical information was available only at the level of administrative units, data were aggregated with appropriate weighting according to their sample design. Design effects were estimated using a package for analysing complex survey data in R31.

Spatial covariates

To leverage strength from locations with observations to the entire spatiotemporal domain, we compiled several 5 × 5-km2 raster layers of possible socioeconomic and environmental correlates of education (Supplementary Table 5 and Supplementary Fig. 6). Acquisition of temporally dynamic datasets, where possible, was prioritized to best match our observations and thus predict the changing dynamics of educational attainment. We included nine covariates indexed at the 5 × 5-km2 level: access to roads, nighttime lights (time-varying), population (time-varying), growing season, aridity (time-varying), elevation, urbanicity (time-varying), irrigation, and year (time-varying). More details, including plots of all covariates, can be found in the Supplementary Information.
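The respondent-level aggregation described at the start of this section can be sketched as follows; the respondents, weights, and year cut-offs below are hypothetical, and the real analysis computed design effects with the R survey package31:

```python
import numpy as np

# Hypothetical respondents in one administrative unit: attained years of
# schooling and survey design weights.
years   = np.array([0, 6, 6, 12, 12, 12, 8])
weights = np.array([1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.0])

mean_attainment = np.average(years, weights=weights)

# Weighted proportions for the four attainment levels used in the paper.
levels = {
    "zero":              years == 0,
    "less_than_primary": (years >= 1) & (years <= 5),
    "primary":           (years >= 6) & (years <= 11),
    "secondary_plus":    years >= 12,
}
proportions = {name: np.average(mask, weights=weights)
               for name, mask in levels.items()}
print(mean_attainment, proportions)   # level proportions sum to 1
```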
Our primary goal is to provide educational attainment predictions across LMICs at a high (local) resolution, and our methods provide the best out-of-sample predictive performance at the expense of inferential understanding. To select covariates and capture possible nonlinear effects and complex interactions between them, an ensemble covariate modelling method was implemented32. For each region, three submodels were fitted to our outcomes using all of our covariate data: generalized additive models, boosted regression trees, and lasso regression. Each submodel was fit using fivefold cross-validation to avoid overfitting, and the out-of-sample predictions from across the five folds were compiled into a single comprehensive set of predictions from that model. Additionally, the same submodels were also run using 100% of the data, and a full set of in-sample predictions was created. The three sets of out-of-sample submodel predictions were fed into the full geostatistical model as predictors when performing the model fit. The in-sample predictions from the submodels were used as the covariates when generating predictions using the fitted full geostatistical model. This methodology maximizes out-of-sample predictive performance at the expense of the ability to provide statistical inference on the relationships between the predictors and the outcome. A recent study has shown that this ensemble approach can improve predictive validity by up to 25% over an individual model32. More details on this approach can be found in the Supplementary Information. The primary goal of using the stacking procedure in our analyses was to maximize the predictive power of the raster covariates by capturing the nonlinear effects and complex interactions between covariates to optimize model performance. It has previously been suggested32 that the primary purpose of the submodel predictions is to improve the mean function of the Gaussian process.
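A minimal sketch of this stacking scheme, with scikit-learn stand-ins (gradient-boosted trees for BRT, plus lasso regression; the GAM submodel is omitted and plain OLS replaces the geostatistical second stage, so this is illustrative only):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for the cluster-level outcome and raster covariates.
X, y = make_regression(n_samples=300, n_features=9, noise=10.0, random_state=0)

submodels = [GradientBoostingRegressor(random_state=0), Lasso(alpha=1.0)]

# Out-of-sample predictions from fivefold CV become the stacked covariates,
# avoiding the leakage that in-sample predictions would introduce at fit time.
Z_oos = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in submodels])

# The second stage learns weights on the submodel predictions.
stacker = LinearRegression().fit(Z_oos, y)

# For prediction, submodels are refit on 100% of the data and their
# in-sample predictions feed the fitted second stage.
Z_full = np.column_stack([m.fit(X, y).predict(X) for m in submodels])
y_hat = stacker.predict(Z_full)
print(Z_oos.shape, y_hat.shape)
```

The split between out-of-sample predictions for fitting and full-data predictions for generating estimates mirrors the two-pass procedure described in the text.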
Although we have determined a way to include the uncertainty from two of our submodels (lasso regression and generalized additive models (GAM)), we have not determined a way to include uncertainty from the boosted regression tree (BRT) submodel in our final estimates. Whereas GAM and lasso regression seek to fit a single model that best describes the relationship between the response variable and some set of predictors, the BRT method fits a large number of relatively simple models for which the predictions are then combined to give robust estimates of the response. Although this feature of the BRT model makes it a powerful tool for analysing complex data, quantifying the relative uncertainty contributed by each simple model as well as the uncertainty from the complex interactions of the predictor variables is challenging33,34. It is worth noting, however, that our out-of-sample validation indicates that the 95% coverage is fairly accurate (that is, it ranges closely around 95%), as shown in the figures and table of Supplementary Information section 4.3.2. This indicates that we are not misrepresenting the uncertainty in our final estimates.

Geostatistical model

Gaussian and binomial data are modelled within a Bayesian hierarchical modelling framework using a spatially and temporally explicit hierarchical generalized linear regression model to fit the mean number of years of educational attainment and the proportion of the population who achieved key levels of schooling in 14 regions across all LMICs as defined in the GBD study (Extended Data Fig. 1). This means that we fit 14 independent models for each indicator (for example, the proportion of women with zero years of schooling). The GBD study design sought to create regions on the basis of three primary criteria: epidemiological homogeneity, sociodemographic similarity, and geographical contiguity27.
Fitting our models by these regions has the advantage of allowing for some non-stationarity and non-isotropy in the spatial error term, compared with modelling a single spatiotemporal random-effect structure over the entire modelling region of all LMICs. For each Gaussian indicator, we modelled the mean number of years of attainment in each survey cluster, d. Survey clusters are precisely located by their GPS coordinates and year of observation, which we map to a spatial raster location i at time t. We model the mean number of years of attainment as Gaussian data given fixed precision τ and a scaling parameter sd (defined by the sample size in the observed cluster). As we may have observed multiple data clusters within a given location i at time t, we refer to the mean attainment, μ, within a given cluster d by its indexed location i and time t as μi(d),t(d). $${{\rm{edu}}}_{d}|{\mu }_{i(d),t(d)},{s}_{d},\tau \, \sim \,{\rm{Normal}}({\mu }_{i(d),t(d)},\tau {s}_{d})\,\forall \,{\rm{observed}}\,{\rm{clusters}}\,d$$ $${\mu }_{i,t}=\,{\beta }_{0}+{{\bf{X}}}_{i,t}{\boldsymbol{\beta }}+{Z}_{i,t}+{{\epsilon }}_{{\rm{ctr}}(i)}+{{\epsilon }}_{i,t}\,\forall \,i\in {\rm{spatial}}\,{\rm{domain}}\,\forall \,t\in {\rm{time}}\,{\rm{domain}}$$ For each binomial indicator, we modelled the number of individuals at a given attainment level in each survey cluster, d. We observed the number of individuals reporting a given attainment level as binomial count data Cd among an observed sample size Nd. As we may have observed multiple data clusters within a given location i at time t, we refer to the probability of attaining that level, p, within a given cluster d by its indexed location i and time t as pi(d),t(d).
$${C}_{d}|{p}_{i(d),t(d)},\,{N}_{d}\sim {\rm{Binomial}}({p}_{i(d),t(d)},\,{N}_{d})\,\forall \,{\rm{observed}}\,{\rm{clusters}}\,d$$ $${\rm{logit}}({p}_{i,t})=\,{\beta }_{0}+{{\bf{X}}}_{i,t}{\boldsymbol{\beta }}+{Z}_{i,t}+{{\epsilon }}_{{\rm{ctr}}(i)}+{{\epsilon }}_{i,t}\,\forall \,i\in {\rm{spatial}}\,{\rm{domain}}\,\forall \,t\in {\rm{time}}\,{\rm{domain}}$$ We used a continuation-ratio modelling approach to account for the ordinal data structure of the binomial indicators35. To do this, the proportion of the population with zero years of education was modelled using a binomial model. The proportion with less than primary education was modelled as those with less than primary education among those with more than zero years of education. The same method was followed for the proportion of the population completing primary education. The proportion achieving secondary school or higher was estimated as the complement of the sum of the three binomial models. The remaining parameter specification was consistent between all indicators in both binomial and Gaussian models: $$\mathop{\sum }\limits_{h=1}^{3}{\beta }_{h}\,=1$$ $${{\epsilon }}_{{\rm{ctr}}}\sim {\rm{iid}}\,{\rm{Normal}}(0,\,{\gamma }^{2})$$ $${{\epsilon }}_{i,t}\sim {\rm{iid}}\,{\rm{Normal}}(0,\,{\sigma }^{2})$$ $${\bf{Z}}\sim {\rm{GP}}(0,{\Sigma }^{{\rm{space}}}\otimes {\Sigma }^{{\rm{time}}})$$ $${\Sigma }^{{\rm{space}}}=\,\frac{{\omega }^{2}}{\varGamma (\nu ){2}^{\nu -1}}\times {(\kappa D)}^{\nu }\times {{\rm K}}_{\nu }(\kappa D)$$ $${\Sigma }_{j,\,k}^{{\rm{time}}\,}={\rho }^{|k-j|}$$ For indices d, i, and t, *(index) is the value of * at that index. The probabilities pi,t represent both the annual proportions at the space–time location and the probability that an individual had that level of attainment given that they lived at that particular location.
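The continuation-ratio structure implies that the four attainment proportions can be recovered from the three conditional probabilities; a sketch (the exact conditioning is inferred from the description above, and the numbers are hypothetical):

```python
# Map three conditional (continuation-ratio) probabilities to the four
# unconditional attainment proportions.
def continuation_ratio(q_zero, q_ltp, q_prim):
    """q_zero: P(zero years)
    q_ltp:  P(less than primary | more than zero years)
    q_prim: P(primary but not secondary | at least primary)
    """
    p_zero = q_zero
    p_ltp = (1 - q_zero) * q_ltp
    p_prim = (1 - q_zero) * (1 - q_ltp) * q_prim
    p_sec = 1 - (p_zero + p_ltp + p_prim)   # complement of the three models
    return p_zero, p_ltp, p_prim, p_sec

print(continuation_ratio(0.10, 0.25, 0.40))  # proportions sum to 1
```

Because each stage conditions on having passed the previous one, the four proportions are guaranteed to be non-negative and to sum to one.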
The annual probability pi,t of each indicator (or μi,t for the mean indicators) was modelled as a linear combination of the three submodels (GAM, BRT, and lasso regression), rasterized covariate values Xi,t, a correlated spatiotemporal error term Zi,t, country random effects \({{\epsilon }}_{{\rm{ctr}}(i)}\) with one unstructured country random effect fit for each country in the modelling region and all sharing a common variance parameter, γ2, and an independent nugget effect \(\,{{\epsilon }}_{i,t}\) with variance parameter σ2. Coefficients βh in the three submodels h = 1, 2, 3 represent their respective predictive weighting in the mean logit link, while the joint error term Zi,t accounts for residual spatiotemporal autocorrelation between individual data points that remains after accounting for the predictive effect of the submodel covariates, the country-level random effect \({{\epsilon }}_{{\rm{ctr}}(i)}\), and the nugget independent error term, \(\,{{\epsilon }}_{i,t}\). The purpose of the country-level random effect is to capture spatially unstructured, unobserved country-specific variables, as there are often sharp discontinuities in educational attainment between adjacent countries due to systematic differences in governance, infrastructure, and social policies. The residuals Zi,t are modelled as a three-dimensional Gaussian process (GP) in space–time centred at zero and with a covariance matrix constructed from a Kronecker product of spatial and temporal covariance kernels. The spatial covariance Σspace is modelled using an isotropic and stationary Matérn function36, and the temporal covariance Σtime as an annual autoregressive (AR1) function over the 18 years represented in the model. In the stationary Matérn function, Γ is the Gamma function, Kν is the modified Bessel function of the second kind of order ν > 0, κ > 0 is a scaling parameter, D denotes the Euclidean distance, and ω2 is the marginal variance.
The scaling parameter, κ, is defined as \(\kappa =\sqrt{8\nu }/\delta \), where δ is a range parameter (approximately the distance at which the covariance function approaches 0.1) and ν is a scaling constant, which is set to 2 rather than fit from the data37,38. This parameter is difficult to fit reliably, as documented by many other analyses37,39,40 that likewise set it to 2. The number of rows and the number of columns of the spatial Matérn covariance matrix are equal to the number of spatial mesh points for a given modelling region. In the AR1 function, ρ is the autocorrelation function (ACF), and k and j are points in the time series where |k − j| defines the lag. The number of rows and the number of columns of the AR1 covariance matrix are equal to the number of temporal mesh points (18). The number of rows and the number of columns of the space–time covariance matrix, Σspace ⊗ Σtime, for a given modelling region are equal to the number of spatial mesh points × the number of temporal mesh points. This approach leveraged the residual correlation structure of the data to more accurately predict estimates for locations with no data, while also propagating the dependence in the data through to the uncertainty estimates41. The posterior distributions were fit using computationally efficient and accurate approximations in R-INLA (integrated nested Laplace approximation) with the stochastic partial differential equations (SPDE) approximation to the Gaussian process residuals, using R version 3.5.142,43,44,45. The SPDE approach using INLA has been demonstrated elsewhere, including in the estimation of health indicators, particulate air matter, and population age structure10,11,46,47. Uncertainty intervals were generated from 1,000 draws (that is, statistically plausible candidate maps)48 created from the posterior-estimated distributions of modelled parameters.
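A numerical sketch of this space-time covariance construction (Matérn spatial kernel with κ = √(8ν)/δ and ν = 2, Kronecker product with an annual AR1 kernel); the mesh locations and parameter values below are illustrative:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv

def matern_cov(coords, delta, omega2=1.0, nu=2.0):
    """Stationary, isotropic Matern covariance with kappa = sqrt(8*nu)/delta."""
    kappa = np.sqrt(8 * nu) / delta
    D = cdist(coords, coords)                    # Euclidean distances
    kd = np.where(D > 0, kappa * D, 1.0)         # placeholder where D = 0
    C = omega2 / (gamma(nu) * 2 ** (nu - 1)) * kd ** nu * kv(nu, kd)
    np.fill_diagonal(C, omega2)                  # limit as D -> 0 is omega2
    return C

def ar1_cov(T, rho):
    t = np.arange(T)
    return rho ** np.abs(t[:, None] - t[None, :])

rng = np.random.default_rng(1)
S_space = matern_cov(rng.uniform(0, 10, (5, 2)), delta=5.0)  # 5 mesh points
S_time = ar1_cov(18, rho=0.9)                                # 18 annual points
S = np.kron(S_space, S_time)
print(S.shape)   # (90, 90): spatial mesh points x temporal mesh points
```

The Kronecker dimensions match the statement in the text: the space-time matrix has (spatial mesh points × temporal mesh points) rows and columns.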
Additional details regarding model and estimation processes can be found in the Supplementary Information. To transform grid cell-level estimates into a range of information that is useful to a wide constituency of potential users, these estimates were aggregated from the 1,000 candidate maps up to district, provincial, and national levels using 5 × 5-km2 population data49. This aggregation also enabled the calibration of estimates to national GBD estimates for 2000–2017. This was achieved by calculating the ratio of the posterior mean national-level estimate from each candidate map draw in the analysis to the posterior mean national estimates from GBD, and then multiplying each cell in the posterior sample by this ratio. A comparison of national-level estimates from this analysis with GBD estimates can be found in Supplementary Table 44. To illustrate how subnational progress has contributed differentially to national progress (Fig. 3), we decomposed the improvement in the national rate of secondary completion since 2000 for each country into the additive contributions of rate changes at the second administrative level, where C is the national secondary rate change, N is the total number of second-level administrative units, ci is the population proportion in administrative unit i, and ri is the rate of secondary attainment in administrative unit i. $$C=\,\mathop{\sum }\limits_{i=1}^{N}[({c}_{i,2017}{r}_{i,2017})-({c}_{i,2000}{r}_{i,2000})]$$ Although the model can predict at all locations covered by available raster covariates, all final model outputs for which land cover was classified as 'barren or sparsely vegetated' were masked, on the basis of the most recently available Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data (2013), as well as areas in which the total population density was less than 10 individuals per 1 × 1-km2 pixel in 201550. This step has led to improved understanding when communicating with data specialists and policy-makers.
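A worked sketch of this decomposition for a hypothetical country with three second-level units (the shares and rates are invented):

```python
import numpy as np

# Population proportions (c) and secondary attainment rates (r) by unit.
c_2000 = np.array([0.50, 0.30, 0.20]); r_2000 = np.array([0.20, 0.10, 0.05])
c_2017 = np.array([0.55, 0.30, 0.15]); r_2017 = np.array([0.50, 0.30, 0.10])

# Additive contribution of each unit to the national rate change C.
contrib = c_2017 * r_2017 - c_2000 * r_2000
national_change = contrib.sum()

# The contributions sum exactly to the change in the population-weighted
# national rate: 0.38 - 0.14 = 0.24 here.
print(contrib, national_change)
```

Because each term couples the unit's rate with its population share, a populous unit with modest rate gains can contribute more to C than a small unit with large gains, which is the point made about Maharashtra and Lagos.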
Models were validated using source-stratified fivefold cross-validation. To offer a more stringent analysis by respecting some of the source and spatial correlation in the data, holdout sets were created by combining sets of data sources (for example, entire survey- or census-years). Model performance was summarized by the bias (mean error), total variance (root-mean-square error) and 95% data coverage within prediction intervals, and the correlation between observed data and predictions. All validation metrics were calculated on the predictions from the fivefold cross-validation. Where possible, estimates from these models were compared against other existing estimates. Furthermore, measures of spatial and temporal autocorrelation pre- and post-modelling were examined to verify correct recognition, fitting, and accounting for the complex spatiotemporal correlation structure in the data. All validation procedures and corresponding results are provided in the Supplementary Information. Our analysis is not without several important limitations. First, almost all data collection tools conflate gender and sex and we therefore do not capture the full distribution of sex or gender separately in our data. We refer throughout to the measurement of 'gender (in)equality', following the usage in SDG 5. Second, it is extremely difficult to quantify quality of education on this scale in a comparable way. Quality is ultimately a large part of the SDG agenda and of utmost importance to achieving equity in opportunity for social mobility. However, many studies across diverse low- and middle-income settings have linked attainment, even very low levels, to measurable improvement in maternal and child health17. As our analysis highlights with the proportional indicators, there are still many subnational regions across the world where large proportions do not complete primary school. A third limitation is that we are unable to measure or account for migration. 
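The four validation summaries named above can be computed as follows (the observed and predicted values here are synthetic; the real analysis used holdout sets built from entire survey- or census-years):

```python
import numpy as np

def validation_metrics(y_obs, y_pred, lo, hi):
    """Bias (mean error), RMSE, empirical 95% coverage, and correlation."""
    err = y_pred - y_obs
    return {
        "bias": err.mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "coverage95": np.mean((y_obs >= lo) & (y_obs <= hi)),
        "corr": np.corrcoef(y_obs, y_pred)[0, 1],
    }

rng = np.random.default_rng(0)
y_obs = rng.uniform(0, 1, 500)                     # held-out observations
y_pred = y_obs + rng.normal(0, 0.05, 500)          # model predictions
m = validation_metrics(y_obs, y_pred, y_pred - 0.1, y_pred + 0.1)
print(m)
```

Coverage is the fraction of held-out observations that fall inside the model's 95% prediction interval; a well-calibrated model should score close to 0.95, which is the check discussed for the BRT uncertainty question above.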
A concept note released from the forthcoming Global Education Monitoring Report 2019 focuses on how migration and displacement affects schooling51. Our estimates of the modelled outcome, educational attainment for a particular space–time–age–sex, are demonstrated to be statistically unbiased (Supplementary Information section 4.3); however, interpretation of any change in attainment as a change in the underlying education system could potentially be biased by the effects of migration. It is possible that geographical disparities reflect changes in population composition rather than changes in the underlying infrastructure or education system. Pathways for this change are complex and may be voluntary. Those who manage to receive an education in a low-attainment area may have an increased ability to migrate and choose to do so. This change may also be involuntary, particularly in politically unstable areas where displacement may make geographical changes over time difficult to estimate. A shifting population composition is a general limitation of many longitudinal ecological analyses, but the spatially granular nature of the analyses used here may be more sensitive to the effects of mobile populations. Our analysis is purely predictive but draws heavily in its motivation from a rich history of literature on the role of education in reducing maternal mortality, improving child health, and increasing human capital. Studies have also demonstrated complex relationships between increased education and a myriad of positive health outcomes, such as HIV risk reductions and spillover effects to other household members52,53. The vast majority of these studies are associational and recent attempts at causal analyses have provided more-mixed evidence54,55,56. 
Although causal analyses of education are very difficult and often rely on situational quasi-experiments, associational analyses using the most comprehensive datasets demonstrate consistent support for the connection between education and health17,57. Looking towards future analyses, it will be important to study patterns of change in these data and how they overlap with distributions of health. Lastly, our estimates cannot be seen as a replacement for proper data collection systems, especially for tracking contemporaneous change. Our analysis of uncertainty at high resolution may be used to inform investment in more robust data systems and collection efforts, especially if the ultimate goal is to measure and track progress in the quality of schooling.

Further information on research design is available in the Nature Research Reporting Summary linked to this paper.

The findings of this study are supported by data that are available in public online repositories, data that are publicly available upon request from the data provider, and data that are not publicly available owing to restrictions by the data provider, which were used under license for the current study, but may be available from the authors upon reasonable request and with permission of the data provider. A detailed table of data sources and availability can be found in Supplementary Table 2. Interactive visualization tools are available at https://vizhub.healthdata.org/lbd/education. All maps presented in this study are generated by the authors; no permissions are required for publication. Administrative boundaries were retrieved from the Global Administrative Unit Layers (GAUL) dataset, implemented by FAO within the CountrySTAT and Agricultural Market Information System (AMIS) projects58.
Land cover was retrieved from the online Data Pool, courtesy of the NASA EOSDIS Land Processes Distributed Active Archive Center (LP DAAC), USGS/Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota50. Lakes were retrieved from the Global Lakes and Wetlands Database (GLWD), courtesy of the World Wildlife Fund and the Center for Environmental Systems Research, University of Kassel59,60. Populations were retrieved from WorldPop49,61. All maps were produced using ArcGIS Desktop 10.6. Our study follows the Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER). All code used for these analyses is available online at http://ghdx.healthdata.org/record/ihme-data/lmic-education-geospatial-estimates-2000-2017, and at http://github.com/ihmeuw/lbd/tree/edu-lmic-2019. UNESCO. Meeting our commitments to gender equality in education. Global Education Monitoring Report. https://unesdoc.unesco.org/ark:/48223/pf0000261593 (2018). United Nations. Transforming our World: the 2030 Agenda for Sustainable Development (UN, 2015). Lim, S. S. et al. Measuring human capital: a systematic analysis of 195 countries and territories, 1990–2016. Lancet 392, 1217–1234 (2018). Yousafzai, M. & Lamb, C. I am Malala: The Girl Who Stood Up for Education and Was Shot by The Taliban (Weidenfeld & Nicolson, 2013). Gates, M. The Moment of Lift: How Empowering Women Changes The World (Flatiron Books, 2019). United Nations. Youth and the 2030 Agenda for Sustainable Development (UN, 2018). Annan, K. Data can help to end malnutrition across Africa. Nature 555, 7 (2018). Horton, R. Offline: in defence of precision public health. Lancet 392, 1504 (2018). Dowell, S. F., Blazes, D. & Desmond-Hellmann, S. Four steps to precision public health. Nature 540, 189–191 (2016). Osgood-Zimmerman, A. et al. Mapping child growth failure in Africa between 2000 and 2015. Nature 555, 41–47 (2018). Golding, N. et al. 
Mapping under-5 and neonatal mortality in Africa, 2000–15: a baseline analysis for the Sustainable Development Goals. Lancet 390, 2171–2182 (2017).
12. Bosco, C. et al. Exploring the high-resolution mapping of gender-disaggregated development indicators. J. R. Soc. Interface 14, 20160825 (2017).
13. Roberts, D. A. et al. Benchmarking health system performance across regions in Uganda: a systematic analysis of levels and trends in key maternal and child health interventions, 1990–2011. BMC Med. 13, 285 (2015).
14. Graetz, N. et al. Mapping local variation in educational attainment across Africa. Nature 555, 48–53 (2018).
15. Caldwell, J. C. How is greater maternal education translated into lower child mortality? Health Transit. Rev. 4, 224–229 (1994).
16. Caldwell, J. C. Education as a factor in mortality decline: an examination of Nigerian data. Popul. Stud. 33, 395–413 (1979).
17. Gakidou, E., Cowling, K., Lozano, R. & Murray, C. J. Increased educational attainment and its effect on child mortality in 175 countries between 1970 and 2009: a systematic analysis. Lancet 376, 959–974 (2010).
18. UNESCO. UNESCO Operational Definition of Basic Education. Thematic Framework (UNESCO, 2007).
19. UNESCO. Aid to Education: A Return to Growth? (UNESCO, 2018).
20. LeVine, R. A., LeVine, S., Schnell-Anzola, B., Rowe, M. L. & Dexter, E. Literacy and Mothering: How Women's Schooling Changes the Lives of the World's Children (Oxford Univ. Press, 2012).
21. Abel, G. J., Barakat, B., Kc, S. & Lutz, W. Meeting the Sustainable Development Goals leads to lower world population growth. Proc. Natl Acad. Sci. USA 113, 14294–14299 (2016).
22. UNESCO. Reducing Global Poverty Through Universal Primary and Secondary Education. Out-of-School Children, Adolescents and Youth: Global Status and Trends. Policy Paper 32/Fact Sheet 44 (2017).
23. Jejeebhoy, S. J. Women's Education, Autonomy, and Reproductive Behaviour: Experience from Developing Countries (Clarendon, 1995).
24. Marmot, M., Friel, S., Bell, R., Houweling, T. A. & Taylor, S.
Closing the gap in a generation: health equity through action on the social determinants of health. Lancet 372, 1661–1669 (2008).
25. Kim, J. Y. World Bank Group President Jim Yong Kim Speech at the 2017 Annual Meetings Plenary. http://www.worldbank.org/en/news/speech/2017/10/13/wbg-president-jim-yong-kim-speech-2017-annual-meetings-plenary-session (13 October 2017).
26. UNESCO. Is real progress being made in the equitable provision of education? #PISAresults. http://www.iiep.unesco.org/en/real-progress-being-made-equitable-provision-education-pisaresults-3915 (UNESCO, 2017).
27. GBD 2016 Causes of Death Collaborators. Global, regional, and national age-sex specific mortality for 264 causes of death, 1980–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet 390, 1151–1210 (2017).
28. Barro, R. J. & Lee, J. W. A new data set of educational attainment in the world, 1950–2010. J. Dev. Econ. 104, 184–198 (2013).
29. Friedman, J., Graetz, N. & Gakidou, E. Improving the estimation of educational attainment: new methods for assessing average years of schooling from binned data. PLoS ONE 13, e0208019 (2018).
30. UNESCO. ISCED Mappings (UNESCO, 2016); http://uis.unesco.org/en/isced-mappings
31. Lumley, T. survey: analysis of complex survey samples. R package v.3.36. https://cran.r-project.org/web/packages/survey.pdf (2019).
32. Bhatt, S. et al. Improved prediction accuracy for disease risk mapping using Gaussian process stacked generalization. J. R. Soc. Interface 14, 20170520 (2017).
33. Elith, J., Leathwick, J. R. & Hastie, T. A working guide to boosted regression trees. J. Anim. Ecol. 77, 802–813 (2008).
34. Leathwick, J., Elith, J., Francis, M., Hastie, T. & Taylor, P. Variation in demersal fish species richness in the oceans surrounding New Zealand: an analysis using boosted regression trees. Mar. Ecol. Prog. Ser. 321, 267–281 (2006).
35. Hosmer, D. W. & Lemeshow, S. in Applied Logistic Regression 289–305 (Wiley, 2013).
36. Stein, M. L. Interpolation of Spatial Data (Springer, 1999).
37. Lindgren, F. & Rue, H. Bayesian spatial modelling with R-INLA. J. Stat. Softw. 63, 1–25 (2015).
38. Lindgren, F., Rue, H. & Lindström, J. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. J. R. Stat. Soc. B 73, 423–498 (2011).
39. Rozanov, Y. A. in Markov Random Fields 55–102 (Springer, 1982).
40. Whittle, P. On stationary processes in the plane. Biometrika 41, 434–449 (1954).
41. Diggle, P. & Ribeiro, P. J. Model-based Geostatistics (Springer, 2007).
42. Rue, H., Martino, S. & Chopin, N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J. R. Stat. Soc. B 71, 319–392 (2009).
43. Rue, H. et al. Bayesian Computing with INLA (2014); http://www.r-inla.org/
44. Blangiardo, M., Cameletti, M., Baio, G. & Rue, H. Spatial and spatio-temporal models with R-INLA. Spat. Spatiotemporal Epidemiol. 7, 39–55 (2013).
45. Krainski, E. T., Lindgren, F., Simpson, D. & Rue, H. The R-INLA Tutorial on SPDE Models (2017); https://inla.r-inla-download.org/r-inla.org/tutorials/spde/spde-tutorial.pdf
46. Cameletti, M., Lindgren, F., Simpson, D. & Rue, H. Spatio-temporal modeling of particulate matter concentration through the SPDE approach. Adv. Stat. Anal. 97, 109–131 (2013).
47. Alegana, V. A. et al. Fine resolution mapping of population age-structures for health and development applications. J. R. Soc. Interface 12, 20150073 (2015).
48. Patil, A. P., Gething, P. W., Piel, F. B. & Hay, S. I. Bayesian geostatistics in health cartography: the perspective of malaria. Trends Parasitol. 27, 246–253 (2011).
49. Tatem, A. J. WorldPop, open data for spatial demography. Sci. Data 4, 170004 (2017).
50. Friedl, M. et al. MCD12Q1 v006. MODIS/Terra+Aqua Land Cover Type Yearly L3 Global 500 m SIN Grid (NASA EOSDIS Land Processes DAAC, 2019); https://doi.org/10.5067/MODIS/MCD12Q1.006
51. Global Education Monitoring Report. Concept Note for the 2019 Global Education Monitoring Report on Education and Migration (2017).
52. Behrman, J. A. The effect of increased primary schooling on adult women's HIV status in Malawi and Uganda: universal primary education as a natural experiment. Soc. Sci. Med. 127, 108–115 (2015).
53. De Neve, J.-W., Fink, G., Subramanian, S. V., Moyo, S. & Bor, J. Length of secondary schooling and risk of HIV infection in Botswana: evidence from a natural experiment. Lancet Glob. Health 3, e470–e477 (2015).
54. McCrary, J. & Royer, H. The effect of female education on fertility and infant health: evidence from school entry policies using exact date of birth. Am. Econ. Rev. 101, 158–195 (2011).
55. Karlsson, O., De Neve, J.-W. & Subramanian, S. V. Weakening association of parental education: analysis of child health outcomes in 43 low- and middle-income countries. Int. J. Epidemiol. 48, 83–97 (2019).
56. De Neve, J.-W. & Fink, G. Children's education and parental old age survival — quasi-experimental evidence on the intergenerational effects of human capital investment. J. Health Econ. 58, 76–89 (2018).
57. Pamuk, E. R., Fuchs, R. & Lutz, W. Comparing relative effects of education and economic resources on infant mortality in developing countries. Popul. Dev. Rev. 37, 637–664 (2011).
58. FAO-UN. The Global Administrative Unit Layers (GAUL) (2015); http://www.fao.org/geonetwork/srv/en/metadata.show?id=12691
59. World Wildlife Fund. Global Lakes and Wetlands Database, Level 3 (2004); https://www.worldwildlife.org/pages/global-lakes-and-wetlands-database
60. Lehner, B. & Döll, P. Development and validation of a global database of lakes, reservoirs and wetlands. J. Hydrol. 296, 1–22 (2004).
61. WorldPop. Data Types (accessed 7 July 2017); https://www.worldpop.org/project/list
62. Channan, S., Collins, K. & Emanuel, W. Global Mosaics of the Standard MODIS Land Cover Type Data (University of Maryland and Pacific Northwest National Laboratory, 2014).

Acknowledgements

This work was primarily supported by grant OPP1132415 from the Bill & Melinda Gates Foundation. N.G.
is the recipient of a training grant from the National Institute of Child Health and Human Development (T32 HD-007242-36A1). A list of participants and their affiliations appears in the online version of the paper.

These authors contributed equally: Nicholas Graetz, Lauren Woyczynski

These authors jointly supervised this work: Emmanuela Gakidou, Simon I. Hay

Institute for Health Metrics and Evaluation, University of Washington, Seattle, WA, USA
Nicholas Graetz, Lauren Woyczynski, Katherine F. Wilson, Jason B. Hall, Natalia V. Bhattacharjee, Roy Burstein, Michael L. Collison, Michael A. Cork, Farah Daoud, Nicole Davis Weaver, Aniruddha Deshpande, Laura Dwyer-Lindgren, Lucas Earl, Nathaniel J. Henry, Bernardo Hernández Prado, Damaris K. Kinyoki, Aubrey J. Levine, Benjamin K. Mayala, Ali H. Mokdad, Jonathan F. Mosser, Christopher J. L. Murray, David M. Pigott, Robert C. Reiner Jr, Nafis Sadat, Lauren E. Schaeffer, Megan F. Schipp, Amber Sligar, John C. Wilkinson, Emmanuela Gakidou & Simon I. Hay

Department of Population and Family Health, Jimma University, Jimma, Ethiopia
Kalkidan Hassen Abate

Department of Neurology, Cairo University, Cairo, Egypt
Foad Abd-Allah

Department of Medicine, University College Hospital, Ibadan, Nigeria
Oladimeji M.
Adebayo

School of Medicine, Cardiff University, Cardiff, UK
Victor Adekanmbi

Department of Community Medicine, Zabol University of Medical Sciences, Zabol, Iran
Mahdi Afshari

School of Community Health Sciences, University of Nevada, Reno, NV, USA
Olufemi Ajumobi

National Malaria Elimination Program, Federal Ministry of Health, Abuja, Nigeria

Duke Global Health Institute, Duke University, Durham, NC, USA
Tomi Akinyemiju

Department of Population Health Sciences, Duke University, Durham, NC, USA

Evidence Based Practice Center, Mayo Clinic Foundation for Medical Education and Research, Rochester, MN, USA
Fares Alahdab

Internal Medicine Department, Washington University in St Louis, St Louis, MO, USA
Ziyad Al-Aly

Clinical Epidemiology Center, VA Saint Louis Health Care System, Department of Veterans Affairs, St Louis, MO, USA

Center for Health Systems Research, National Institute of Public Health, Cuernavaca, Mexico
Jacqueline Elizabeth Alcalde Rabanal & Doris D. V. Ortega-Altamirano

Qazvin University of Medical Sciences, Qazvin, Iran
Mehran Alijanzadeh

Health Management and Economics Research Center, Iran University of Medical Sciences, Tehran, Iran
Vahid Alipour, Jalal Arabloo, Samad Azari & Aziz Rezapour

King Saud University, Riyadh, Saudi Arabia
Khalid Altirkawi

Department of Health Management, Policy and Economics, Kerman University of Medical Sciences, Kerman, Iran
Mohammadreza Amiresmaili

Faculty of Medicine, Mansoura University, Mansoura, Egypt
Nahla Hamed Anber

Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
Catalina Liliana Andrei

Social Determinants of Health Research Center, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
Mina Anjomshoa

Department of Health Policy and Administration, University of the Philippines Manila, Manila, The Philippines
Carl Abelardo T.
Antonio

Department of Applied Social Sciences, Hong Kong Polytechnic University, Hong Kong, China

School of Health Sciences, Birmingham City University, Birmingham, UK
Olatunde Aremu

Monitoring Evaluation and Operational Research Project, ABT Associates Nepal, Lalitpur, Nepal
Krishna K. Aryal

Preventive Medicine and Public Health Research Center, Iran University of Medical Sciences, Tehran, Iran
Mehran Asadi-Aliabadi & Maziar Moradi-Lakeh

Department of Health Informatics, University of Ha'il, Ha'il, Saudi Arabia
Suleman Atique

Department of Statistics and Econometrics, Bucharest University of Economic Studies, Bucharest, Romania
Marcel Ausloos, Claudiu Herteliu & Adrian Pana

Indian Institute of Public Health, Public Health Foundation of India, Gurugram, India
Ashish Awasthi & Sanjay Zodpey

The Judith Lumley Centre, La Trobe University, Melbourne, Victoria, Australia
Beatriz Paulina Ayala Quintanilla

General Office for Research and Technological Transfer, Peruvian National Institute of Health, Lima, Peru

Public Health Risk Sciences Division, Public Health Agency of Canada, Toronto, Ontario, Canada
Alaa Badawi

Department of Nutritional Sciences, University of Toronto, Toronto, Ontario, Canada

Faculty of Medicine, Alexandria University, Alexandria, Egypt
Joseph Adel Mattar Banoub

School of Psychology, University of Auckland, Auckland, New Zealand
Suzanne Lyn Barker-Collo

Mary MacKillop Institute for Health Research, Australian Catholic University, Melbourne, Victoria, Australia
Ester Cerin

Department of Community Medicine, Gandhi Medical College Bhopal, Bhopal, India

Jazan University, Jazan, Saudi Arabia

Nuffield Department of Population Health, University of Oxford, Oxford, UK
Derrick A.
Bennett

Department of Statistical and Computational Genomics, National Institute of Biomedical Genomics, Kalyani, India
Krittika Bhattacharyya

Department of Statistics, University of Calcutta, Kolkata, India

Department of Global Health, Global Institute for Interdisciplinary Studies, Kathmandu, Nepal
Suraj Bhattarai

Centre for Global Child Health, University of Toronto, Toronto, Ontario, Canada
Zulfiqar A. Bhutta

Centre of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan

Social Determinants of Health Research Center, Babol University of Medical Sciences, Babol, Iran
Ali Bijani

Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Ranica, Italy
Boris Bikbov

Center for Neuroscience, Instituto de Investigaciones Científicas y Servicios de Alta Tecnología (INDICASAT AIP), Panama, Panama
Gabrielle Britton

School of Public Health and Health Systems, University of Waterloo, Waterloo, Ontario, Canada
Zahid A. Butt

Al Shifa School of Public Health, Al Shifa Trust Eye Hospital, Rawalpindi, Pakistan

Department of Population and Health, Metropolitan Autonomous University, Mexico City, Mexico
Rosario Cárdenas

Institute of Public Health, University of Porto, Porto, Portugal
Félix Carvalho

Applied Molecular Biosciences Unit, University of Porto, Porto, Portugal

Colombian National Health Observatory, National Institute of Health, Bogota, Colombia
Carlos A.
Castañeda-Orjuela

Epidemiology and Public Health Evaluation Group, National University of Colombia, Bogota, Colombia

Gorgas Memorial Institute for Health Studies, Panama, Panama
Franz Castro

School of Public Health, University of Hong Kong, Hong Kong, China
Ester Cerin

College of Medicine, National Taiwan University, Taipei, Taiwan
Jung-Chen Chang

Medical Research Council Lifecourse Epidemiology Unit, University of Southampton, Southampton, UK
Cyrus Cooper

Department of Rheumatology, University of Oxford, Oxford, UK

Department of Epidemiology and Biostatistics, University of South Carolina, Columbia, SC, USA
Rajat Das Gupta

James P. Grant School of Public Health, Brac University, Dhaka, Bangladesh
Mehedi Hasan & Ipsita Sutradhar

Heidelberg Institute of Global Health, Heidelberg University, Heidelberg, Germany
Jan-Walter De Neve, Babak Moazen & Shafiu Mohammed

Department of Global Health and Infection, Brighton and Sussex Medical School, Brighton, UK
Kebede Deribe

School of Public Health, Addis Ababa University, Addis Ababa, Ethiopia

School of Nutrition, Food Science and Technology, Hawassa University, Hawassa, Ethiopia
Beruk Berhanu Desalegn

Department of Midwifery, Debre Markos University, Debre Markos, Ethiopia
Melaku Desta

Faculty of Veterinary Medicine and Zootechnics, Autonomous University of Sinaloa, Culiacan Rosales, Mexico
Daniel Diaz

Health Research Section, Nepal Health Research Council, Kathmandu, Nepal
Meghnath Dhimal

Center of Complexity Sciences, National Autonomous University of Mexico, Mexico City, Mexico

Department of Midwifery, Debre Berhan University, Debre Berhan, Ethiopia
Mesfin Tadese Dinberu

Deputy of Research and Technology, Ministry of Health and Medical Education, Tehran, Iran
Shirin Djalalinia

United Nations World Food Programme, New Delhi, India
Manisha Dubey

Faculty of Medicine, University of Belgrade, Belgrade, Serbia
Eleonora Dubljanin

Department of Internal Medicine, Bahia School of Medicine and Public Health, Salvador, Brazil
Andre R. Durães

Medical Board, Roberto Santos General Hospital, Salvador, Brazil

Department of Health Metrics Sciences, School of Medicine, University of Washington, Seattle, WA, USA
Laura Dwyer-Lindgren & Benn Sartorius

Epidemiology Department, Florida International University, Miami, FL, USA
Mohammad Ebrahimi Kalan

Department of Public Health Sciences, Karolinska Institutet, Stockholm, Sweden
Ziad El-Khatib

World Health Programme, Université du Québec en Abitibi-Témiscamingue, Rouyn-Noranda, Quebec, Canada

Center of Communicable Disease Control, Ministry of Health and Medical Education, Tehran, Iran
Babak Eshrati

School of Public Health, Arak University of Medical Sciences, Arak, Iran

Babol University of Medical Sciences, Babol, Iran
Mahbobeh Faramarzi

College of Medicine, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
Mohammad Fareed

Department of Psychology, Federal University of Sergipe, Sao Cristovao, Brazil
Andre Faro

Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
Seyed-Mohammad Fereshtehnejad

Division of Neurology, University of Ottawa, Ottawa, Ontario, Canada

REQUIMTE/LAQV, University of Porto, Porto, Portugal
Eduarda Fernandes

Psychiatry Department, Kaiser Permanente, Fontana, CA, USA
Irina Filip

Department of Health Sciences, A. T. Still University, Mesa, AZ, USA

Department of Population Medicine and Health Services Research, Bielefeld University, Bielefeld, Germany
Florian Fischer

Department of Dermatology, Kobe University, Kobe, Japan
Takeshi Fukumoto

Gene Expression & Regulation Program, The Wistar Institute, Philadelphia, PA, USA

Ramón de la Fuente Muñiz National Institute of Psychiatry, Mexico City, Mexico
Jose A. García

Unit of Academic Primary Care, University of Warwick, Coventry, UK
Paramjit Singh Gill

Adelaide Medical School, University of Adelaide, Adelaide, South Australia, Australia
Tiffany K.
Gill

Nursing and Health Sciences Department, University of Massachusetts Boston, Boston, MA, USA
Philimon N. Gona

Department of Biostatistics and Epidemiology, University of Oklahoma, Oklahoma City, OK, USA
Sameer Vali Gopalani

Department of Health and Social Affairs, Government of the Federated States of Micronesia, Palikir, Federated States of Micronesia

School of Medicine, Boston University, Boston, MA, USA
Ayman Grada

School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
Yuming Guo & Shanshan Li

Department of Epidemiology and Biostatistics, Zhengzhou University, Zhengzhou, China

Academics and Research Department, Rajasthan University of Health Sciences, Jaipur, India
Rajeev Gupta

Department of Medicine, Mahatma Gandhi University of Medical Sciences & Technology, Jaipur, India

Department of Anthropology, University of Delhi, Delhi, India
Vipin Gupta

Department of Pharmacology, Tehran University of Medical Sciences, Tehran, Iran
Arvin Haj-Mirzaian & Arya Haj-Mirzaian

Obesity Research Center, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
Arya Haj-Mirzaian

Department of Family and Community Medicine, Arabian Gulf University, Manama, Bahrain
Randah R. Hamadeh

School of Health and Environmental Studies, Hamdan Bin Mohammed Smart University, Dubai, United Arab Emirates
Samer Hamidi

Department of Public Health, Mizan-Tepi University, Tepi, Ethiopia
Hamid Yimam Hassen & Andualem Henok

Unit of Epidemiology and Social Medicine, University Hospital Antwerp, Antwerp, Belgium

School of Public Health, Curtin University, Perth, Western Australia, Australia
Delia Hendrie & Ted R. Miller

Department of Pediatrics, University of Texas Austin, Austin, TX, USA
Michael K.
Hole

Department of Pharmacology and Therapeutics, Dhaka Medical College, Dhaka, Bangladesh
Naznin Hossain

Department of Pharmacology, Bangladesh Industrial Gases Limited, Tangail, Bangladesh

Department of Computer Engineering, Islamic Azad University, Tehran, Iran
Mehdi Hosseinzadeh

Computer Science Department, University of Human Development, Sulaimaniyah, Iraq

Department of Epidemiology and Health Statistics, Central South University, Changsha, China
Guoqing Hu

Department of Community Medicine, University of Ibadan, Ibadan, Nigeria
Olayinka Stephen Ilesanmi

Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
Seyed Sina Naghibi Irvani

Institute for Physical Activity and Nutrition, Deakin University, Burwood, Victoria, Australia
Sheikh Mohammed Shariful Islam

Department of Epidemiology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
Neda Izadi

Department of Health Care and Public Health, Sechenov First Moscow State Medical University, Moscow, Russia
Mihajlo Jakovljevic

Department of Community Medicine, Banaras Hindu University, Varanasi, India
Ravi Prakash Jha

Environmental Research Center, Duke Kunshan University, Kunshan, China
John S. Ji

Nicholas School of the Environment, Duke University, Durham, NC, USA

Department of Ophthalmology, Heidelberg University, Heidelberg, Germany
Jost B.
Jonas

Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Beijing, China

Social Determinants of Health Research Center, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
Zahra Jorjoran Shushtari

Department of Family Medicine and Public Health, University of Opole, Opole, Poland
Jacek Jerzy Jozwiak

Department of Forensic Medicine and Toxicology, All India Institute of Medical Sciences, Jodhpur, India
Tanuj Kanchan

Hematology-Oncology and Stem Cell Transplantation Research Center, Tehran University of Medical Sciences, Tehran, Iran
Amir Kasaeian

Pars Advanced and Minimally Invasive Medical Manners Research Center, Iran University of Medical Sciences, Tehran, Iran

Research Center for Environmental Determinants of Health, Kermanshah University of Medical Sciences, Kermanshah, Iran
Ali Kazemi Karyani, Meghdad Pirsaheb, Fatemeh Rajati, Satar Rezaei, Ehsan Sadeghi & Kiomars Sharafi

ODeL Campus, University of Nairobi, Nairobi, Kenya
Peter Njenga Keiyoro

CSIR-Indian Institute of Toxicology Research, Council of Scientific & Industrial Research, Lucknow, India
Chandrasekharan Nair Kesavachandran

Department of Public Health, Jordan University of Science and Technology, Irbid, Jordan
Yousef Saleh Khader

Social Determinants of Health Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
Morteza Abdullatif Khafaie

Epidemiology and Biostatistics Department, Health Services Academy, Islamabad, Pakistan
Ejaz Ahmad Khan

Department of Medical Parasitology, Cairo University, Cairo, Egypt
Mona M. Khater

Clinical Epidemiology Unit, Lund University, Lund, Sweden
Aliasghar A. Kiadaliri

Research and Data Solutions, Synotech Consultants, Nairobi, Kenya
Daniel N. Kiirithio

School of Medicine, Xiamen University Malaysia, Sepang, Malaysia
Yun Jin Kim

Department of Nutrition, Simmons University, Boston, MA, USA
Ruth W.
Kimokoti

School of Health Sciences, Kristiania University College, Oslo, Norway
Adnan Kisa

Independent Consultant, Jakarta, Indonesia
Soewarta Kosen

CIBERSAM, San Juan de Dios Sanitary Park, Sant Boi De Llobregat, Spain
Ai Koyanagi

Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain

Department of Anthropology, Panjab University, Chandigarh, India
Kewal Krishan

Department of Social and Preventive Medicine, University of Montreal, Montreal, Quebec, Canada
Barthelemy Kuate Defo

Department of Demography, University of Montreal, Montreal, Quebec, Canada

Department of Psychiatry, University of Nairobi, Nairobi, Kenya
Manasi Kumar

Division of Psychology and Language Sciences, University College London, London, UK

International Institute for Population Sciences, Mumbai, India
Pushpendra Kumar

Department of Community and Family Medicine, University of Baghdad, Baghdad, Iraq
Faris Hasan Lami

School of Nursing, Hong Kong Polytechnic University, Hong Kong, China
Paul H. Lee

Department of Medical Statistics and Epidemiology, Sun Yat-sen University, Guangzhou, China
Yu Liao

Alliance for Improving Health Outcomes Inc, Quezon City, The Philippines
Jaifred Christian F. Lopez

Department of Medicine, University of Malaya, Kuala Lumpur, Malaysia
Lee-Ling Lim

Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Shatin, China

Department of Dentistry, Radboud University, Nijmegen, The Netherlands
Stefan Listl

Section for Translational Health Economics, Heidelberg University Hospital, Heidelberg, Germany

Department of Epidemiology and Biostatistics, University of the Philippines Manila, Manila, The Philippines
Jaifred Christian F.
Lopez

Department of Public Health, Trnava University, Trnava, Slovakia
Marek Majdan

Community-Based Participatory-Research Center (CBPR), Tehran University of Medical Sciences, Tehran, Iran
Reza Majdzadeh

Knowledge Utilization Research Center (KURC), Tehran University of Medical Sciences, Tehran, Iran

Department of Primary Care and Public Health, Imperial College London, London, UK
Azeem Majeed & Salman Rawaf

Digestive Diseases Research Institute, Tehran University of Medical Sciences, Tehran, Iran
Reza Malekzadeh, Akram Pourshams, Gholamreza Roshandel, Hamideh Salimzadeh & Sadaf G. Sepanlou

Non-communicable Diseases Research Center, Shiraz University of Medical Sciences, Shiraz, Iran

Department of Epidemiology and Biostatistics, Tehran University of Medical Sciences, Tehran, Iran
Mohammad Ali Mansournia

Campus Caucaia, Federal Institute of Education, Science and Technology of Ceará, Caucaia, Brazil
Francisco Rogerlândio Martins-Melo

Public Health Department, Botho University-Botswana, Gaborone, Botswana
Anthony Masaka

Division of Plastic Surgery, University of Washington, Seattle, WA, USA
Benjamin Ballard Massenburg

Department of Epidemiology and Biostatistics, University of California San Francisco, San Francisco, CA, USA
Kala M. Mehta

Peru Country Office, United Nations Population Fund (UNFPA), Lima, Peru
Walter Mendoza

Center for Translation Research and Implementation Science, National Institutes of Health, Bethesda, MD, USA
George A. Mensah

Department of Medicine, University of Cape Town, Cape Town, South Africa
Jean Jacques Noubiap

Breast Surgery Unit, Helsinki University Hospital, Helsinki, Finland
Tuomo J. Meretoja

University of Helsinki, Helsinki, Finland

Clinical Microbiology and Parasitology Unit, Dr Zora Profozic Polyclinic, Zagreb, Croatia
Tomislav Mestrovic

University Centre Varazdin, University North, Varazdin, Croatia

Pacific Institute for Research & Evaluation, Calverton, MD, USA
Ted R.
Miller

Achutha Menon Centre for Health Science Studies, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India
G. K. Mini

Global Institute of Public Health (GIPH), Ananthapuri Hospitals and Research Centre, Trivandrum, India

Faculty of Internal Medicine, Kyrgyz State Medical Academy, Bishkek, Kyrgyzstan
Erkin M. Mirrakhimov

Department of Atherosclerosis and Coronary Heart Disease, National Center of Cardiology and Internal Disease, Bishkek, Kyrgyzstan

Institute of Addiction Research (ISFF), Frankfurt University of Applied Sciences, Frankfurt, Germany
Babak Moazen

Department of Food Technology, College of Agriculture, Salahaddin University-Erbil, Erbil, Iraq
Dara K. Mohammad

Department of Medicine Huddinge, Karolinska Institutet, Stockholm, Sweden

Department of Information Technology, University of Human Development, Sulaimaniyah, Iraq
Aso Mohammad Darwesh

Health Systems and Policy Research Unit, Ahmadu Bello University, Zaria, Nigeria
Shafiu Mohammed

Non-communicable Diseases Research Center, Tehran University of Medical Sciences, Tehran, Iran
Farnam Mohebi

Iran National Institute of Health Research, Tehran University of Medical Sciences, Tehran, Iran

Clinical Epidemiology and Public Health Research Unit, Burlo Garofolo Institute for Maternal and Child Health, Trieste, Italy
Lorenzo Monasta & Luca Ronfani

Department of Public Health Medicine, University of Kwazulu-Natal, Durban, South Africa
Yoshan Moodley

Health Sciences Research Center, Mazandaran University of Medical Sciences, Sari, Iran
Mahmood Moosazadeh

Social Determinants of Health Research Center, Kurdistan University of Medical Sciences, Sanandaj, Iran
Ghobad Moradi

Department of Epidemiology and Biostatistics, Kurdistan University of Medical Sciences, Sanandaj, Iran

Department of Mathematical Sciences, University of Bath, Bath, UK
Paula Moraga

International Laboratory for Air Quality and Health, Queensland University of Technology, Brisbane, Queensland, Australia
Lidia
Morawska

Department of Surgery, University of Washington, Seattle, WA, USA
Shane Douglas Morrison

Department of Health Management and Economics, Tehran University of Medical Sciences, Tehran, Iran
Seyyed Meysam Mousavi

Health Management Research Center, Baqiyatallah University of Medical Sciences, Tehran, Iran

Department of Pediatric Medicine, Nishtar Medical University, Multan, Pakistan
Ghulam Mustafa

Department of Pediatrics & Pediatric Pulmonology, Institute of Mother & Child Care, Multan, Pakistan

Cancer Research Center, Tehran University of Medical Sciences, Tehran, Iran
Azin Nahvijou

Department of Epidemiology & Biostatistics, Kermanshah University of Medical Sciences, Kermanshah, Iran
Farid Najafi & Yahya Salimi

Suraj Eye Institute, Nagpur, India
Vinay Nangia

Cochrane South Africa, South African Medical Research Council, Cape Town, South Africa
Duduzile Edith Ndwandwe

General Surgery, Carol Davila University of Medicine and Pharmacy Bucharest, Bucharest, Romania
Ionut Negoi

General Surgery, Emergency Hospital of Bucharest, Bucharest, Romania

Anatomy and Embryology, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
Ruxandra Irina Negoi

Cardiology, Cardio-Aid, Bucharest, Romania

Department of Biological Sciences, University of Embu, Embu, Kenya
Josephine W. Ngunjiri

Institute for Global Health Innovations, Duy Tan University, Hanoi, Vietnam
Cuong Tat Nguyen

Center of Excellence in Behavioral Medicine, Nguyen Tat Thanh University, Ho Chi Minh City, Vietnam
Long Hoang Nguyen & Giang Thu Vu

Public Health Department, Universitas Negeri Semarang, Kota Semarang, Indonesia
Dina Nur Anggraini Ningrum

Graduate Institute of Biomedical Informatics, Taipei Medical University, Taipei City, Taiwan

Mazandaran University of Medical Sciences, Sari, Iran
Malihe Nourollahpour Shiadeh

Faculty of Medicine & Health Sciences, Stellenbosch University, Cape Town, South Africa
Peter S.
Nyasulu

UCIBIO, University of Porto, Porto, Portugal
Felix Akpojene Ogbo

Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada
Andrew T. Olagunju

Department of Psychiatry, University of Lagos, Lagos, Nigeria

Centre for Healthy Start Initiative, Lagos, Nigeria
Bolajoko Olubukunola Olusanya & Jacob Olusegun Olusanya

Department of Pharmacology and Therapeutics, University of Nigeria Nsukka, Enugu, Nigeria
Obinna E. Onwujekwe

Center for Population Health Research, National Institute of Public Health, Cuernavaca, Mexico
Eduardo Ortiz-Panozo

School of Health and Welfare, Jönköping University, Jönköping, Sweden

Division of Mental and Physical Health, Norwegian Institute of Public Health, Bergen, Norway
Simon Øverland

Department of Psychosocial Science, University of Bergen, Bergen, Norway

Department of Respiratory Medicine, Jagadguru Sri Shivarathreeswara Academy of Health Education and Research, Mysore, India
Mahesh P. A.

Health Outcomes, Center for Health Outcomes & Evaluation, Bucharest, Romania
Adrian Pana

Augenpraxis Jonas, Heidelberg University, Heidelberg, Germany
Songhomitra Panda-Jonas

Regional Medical Research Centre, Indian Council of Medical Research, Bhubaneswar, India
Sanghamitra Pati

Department of Paediatrics, University of Melbourne, Melbourne, Victoria, Australia
George C. Patton

Population Health, Murdoch Children's Research Institute, Melbourne, Victoria, Australia

Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Bergamo, Italy
Norberto Perico & Giuseppe Remuzzi

Department of Economics and Business, University of Groningen, Groningen, The Netherlands
Maarten J.
Postma University Medical Center Groningen, University of Groningen, Groningen, The Netherlands Department of Nephrology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India Population Studies, International Institute for Population Sciences, Mumbai, India Parul Puri Non-communicable Diseases Research Center, Alborz University of Medical Sciences, Karaj, Iran Mostafa Qorbani College of Medicine, University of Central Florida, Orlando, FL, USA Amir Radfar College of Graduate Health Sciences, A. T. Still University, Mesa, AZ, USA Thalassemia and Hemoglobinopathy Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran Fakher Rahim Metabolomics and Genomics Research Center, Tehran University of Medical Sciences, Tehran, Iran Sina Trauma and Surgery Research Center, Tehran University of Medical Sciences, Tehran, Iran Vafa Rahimi-Movaghar & Payman Salamati Department of Public Health and Mortality Studies, International Institute for Population Sciences, Mumbai, India Mohammad Hifz Ur Rahman Policy Research Institute, Kathmandu, Nepal Chhabi Lal Ranabhat Institute for Poverty Alleviation and International Development, Yonsei University, Wonju, South Korea WHO Collaborating Centre for Public Health Education and Training, Imperial College London, London, UK David Laith Rawaf University College London Hospitals, London, UK Academic Public Health, Public Health England, London, UK Salman Rawaf Translational Health Research Institute, Western Sydney University, Penrith, New South Wales, Australia Andre M. N. 
Renzaho School of Social Sciences and Psychology, Western Sydney University, Penrith, New South Wales, Australia Research Directorate, Nihon Gakko University, Fernando De La Mora, Paraguay Carlos Rios-González Research Direction, Universidad Nacional de Caaguazú, Coronel Oviedo, Paraguay Department of Clinical Research, Federal University of Uberlândia, Uberlândia, Brazil Leonardo Roever Golestan Research Center of Gastroenterology and Hepatology, Golestan University of Medical Sciences, Gorgan, Iran Gholamreza Roshandel Infectious Diseases and Tropical Medicine Research Center, Babol University of Medical Sciences, Babol, Iran Ali Rostami Centro de Investigación Palmira, Agrosavia, Palmira, Colombia Enrico Rubagotti Department of Ocean Science and Engineering, Southern University of Science and Technology, Shenzhen, China Kermanshah University of Medical Sciences, Kermanshah, Iran Yahya Safari Department of Psychiatry, All India Institute of Medical Sciences, New Delhi, India Rajesh Sagar Department of Pathology, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia Nasir Salam Social Development and Health Promotion Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran Yahya Salimi & Moslem Soofi Department of Entomology, Ain Shams University, Cairo, Egypt Abdallah M. Samy Department of Surgery, Marshall University, Huntington, WV, USA Juan Sanabria Department of Nutrition and Preventive Medicine, Case Western Reserve University, Cleveland, OH, USA Institute of Social Medicine, University of Belgrade, Belgrade, Serbia Milena M. 
Santric Milicevic Centre-School of Public Health and Health Management, University of Belgrade, Belgrade, Serbia Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK Benn Sartorius Surgery Department, Hamad Medical Corporation, Doha, Qatar Brijesh Sathian Faculty of Health & Social Sciences, Bournemouth University, Bournemouth, UK University of Alabama at Birmingham, Birmingham, AL, USA Arundhati R. Sawant Dr D. Y. Patil University, Pune, India Department of Psychology, University of Alabama at Birmingham, Birmingham, AL, USA David C. Schwebel Department of Food Science and Nutrition, Jigjiga University, Jigjiga, Ethiopia Anbissa Muleta Senbeta Independent Consultant, Karachi, Pakistan Masood Ali Shaikh School of Medicine, Dezful University of Medical Sciences, Dezful, Iran Mehran Shams-Beyranvand School of Medicine, Alborz University of Medical Sciences, Karaj, Iran Chronic Diseases (Home Care) Research Center, Hamadan University of Medical Sciences, Hamadan, Iran Morteza Shamsizadeh University School of Management and Entrepreneurship, Delhi Technological University, New Delhi, India Department of Pulmonary Medicine, Fudan University, Shanghai, China Jun She Centre for Medical Informatics, University of Edinburgh, Edinburgh, UK Aziz Sheikh Division of General Internal Medicine, Harvard University, Boston, MA, USA National Institute of Infectious Diseases, Tokyo, Japan Mika Shigematsu Department of Health Education & Promotion, Kermanshah University of Medical Sciences, Kermanshah, Iran Soraya Siabani School of Health, University of Technology Sydney, Sydney, New South Wales, Australia Brasília University, Brasília, Brazil Dayane Gabriele Alves Silveira Department of the Health Industrial Complex and Innovation in Health, Federal Ministry of Health, Brasília, Brazil Department of Epidemiology, University of Alabama at Birmingham, Birmingham, AL, USA Jasvinder A. 
Singh Department of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA Department of Epidemiology, School of Preventive Oncology, Patna, India Dhirendra Narain Sinha Department of Epidemiology, Healis Sekhsaria Institute for Public Health, Mumbai, India Centre for Fertility and Health, Norwegian Institute of Public Health, Bergen, Norway Vegard Skirbekk Department of Pediatrics, King Saud University, Riyadh, Saudi Arabia Badr Hasan Sobaih & Mohamad-Hani Temsah Pediatric Department, King Khalid University Hospital, Riyadh, Saudi Arabia Hospital Universitario de la Princesa, Autonomous University of Madrid, Madrid, Spain Joan B. Soriano Centro de Investigación Biomédica en Red Enfermedades Respiratorias (CIBERES), Madrid, Spain Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, UK Ireneous N. Soyiri Hull York Medical School, University of Hull, Hull, UK Division of Community Medicine, International Medical University, Kuala Lumpur, Malaysia Chandrashekhar T. 
Sreeramareddy Department of Nursing, Muhammadiyah University of Surakarta, Kartasura, Indonesia Agus Sudaryanto Department of Public Health, China Medical University, Taichung, Taiwan Department of Community Medicine, Ahmadu Bello University, Zaria, Nigeria Mu'awiyyah Babale Sufiyan Neurology Department, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India PN Sylaja Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India Department of Medicine, University of Valencia, Valencia, Spain Rafael Tabarés-Seisdedos Carlos III Health Institute, Biomedical Research Networking Center for Mental Health Network (CIBERSAM), Madrid, Spain Department of Pediatrics, Hawassa University, Hawassa, Ethiopia Birkneh Tilahun Tadesse International Vaccine Institute, Seoul, South Korea College of Medicine, Alfaisal University, Riyadh, Saudi Arabia Mohamad-Hani Temsah Department of Anesthesiology, Perioperative, and Pain Medicine, University of Virginia, Charlottesville, VA, USA Abdullah Sulieman Terkawi Department of Anesthesiology, King Farah Medical City, Riyadh, Saudi Arabia Department of Medical Microbiology, University of Gondar, Gondar, Ethiopia Belay Tessema Department of Epidemiology and Biostatistics, University of Gondar, Gondar, Ethiopia Zemenu Tadesse Tessema Department of Public Health and Community Medicine, Central University of Kerala, Kasaragod, India Kavumpurathu Raman Thankappan Faculty of Health Sciences, Jagiellonian University Medical College, Krakow, Poland Roman Topor-Madry The Agency for Health Technology Assessment and Tariff System, Warsaw, Poland Department of Pathology and Legal Medicine, University of São Paulo, Ribeirão Preto, Brazil Marcos Roberto Tovani-Palone Department of Health Economics, Hanoi Medical University, Hanoi, Vietnam Bach Xuan Tran Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore Lorainne Tudor Car Gomal Center of Biochemistry and 
Biotechnology, Gomal University, Dera Ismail Khan, Pakistan Irfan Ullah TB Culture Laboratory, Mufti Mehmood Memorial Teaching Hospital, Dera Ismail Khan, Pakistan Division of Health Sciences, University of Warwick, Coventry, UK Olalekan A. Uthman Argentine Society of Medicine, Ciudad de Buenos Aires, Argentina Pascual R. Valdez Velez Sarsfield Hospital, Buenos Aires, Argentina Psychosocial Injuries Research Center, Ilam University of Medical Sciences, Ilam, Iran Yousef Veisani Department of Medical and Surgical Sciences, University of Bologna, Bologna, Italy Francesco S. Violante Occupational Health Unit, Sant'Orsola Malpighi Hospital, Bologna, Italy Department of Health Care Administration and Economics, National Research University Higher School of Economics, Moscow, Russia Vasily Vlassov Department of Global Health and Population, Harvard University, Boston, MA, USA Sebastian Vollmer Department of Economics, University of Göttingen, Göttingen, Germany Foundation University Medical College, Foundation University Islamabad, Islamabad, Pakistan Yasir Waheed Department of Psychiatry, University of São Paulo, São Paulo, Brazil Yuan-Pang Wang Institute of Health and Society, University of Oslo, Oslo, Norway Andrea Sylvia Winkler Department of Neurology, Technical University of Munich, Munich, Germany School of Population Health & Environmental Sciences, King's College London, London, UK Charles D. A. 
Wolfe NIHR Biomedical Research Centre, Guy's and St Thomas' Hospital and Kings College London, London, UK Department of Diabetes and Metabolic Diseases, University of Tokyo, Tokyo, Japan Tomohide Yamada Wolkite University, Wolkite, Ethiopia Alex Yeshaneh Centre for Suicide Research and Prevention, University of Hong Kong, Hong Kong, China Paul Yip Department of Social Work and Social Administration, University of Hong Kong, Hong Kong, China School of Allied Health Sciences, Addis Ababa University, Addis Ababa, Ethiopia Engida Yisma Department of Psychopharmacology, National Center of Neurology and Psychiatry, Tokyo, Japan Naohiro Yonemoto Health Economics & Finance, Global Health, Jackson State University, Jackson, MS, USA Mustafa Z. Younis School of Medicine, Tsinghua University, Peking, China Prevention of Cardiovascular Disease Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran Mahmoud Yousefifard Global Health Institute, Wuhan University, Wuhan, China Chuanhua Yu Department of Epidemiology and Biostatistics, Wuhan University, Wuhan, China Department of Medicine, Monash University, Melbourne, Victoria, Australia Sojib Bin Zaman Maternal and Child Health Division, International Centre for Diarrhoeal Disease Research, Bangladesh, Dhaka, Bangladesh George Warren Brown School, Washington University in St Louis, St Louis, MO, USA Jianrong Zhang School of Public Health, Wuhan University of Science and Technology, Wuhan, China Yunquan Zhang Hubei Province Key Laboratory of Occupational Hazard Identification and Control, Wuhan University of Science and Technology, Wuhan, China , Kalkidan Hassen Abate , Foad Abd-Allah , Oladimeji M. Adebayo , Victor Adekanmbi , Mahdi Afshari , Olufemi Ajumobi , Tomi Akinyemiju , Fares Alahdab , Ziyad Al-Aly , Jacqueline Elizabeth Alcalde Rabanal , Mehran Alijanzadeh , Vahid Alipour , Khalid Altirkawi , Mohammadreza Amiresmaili , Nahla Hamed Anber , Catalina Liliana Andrei , Mina Anjomshoa , Carl Abelardo T. 
Antonio , Olatunde Aremu , Krishna K. Aryal , Mehran Asadi-Aliabadi , Suleman Atique , Marcel Ausloos , Ashish Awasthi , Beatriz Paulina Ayala Quintanilla , Alaa Badawi , Joseph Adel Mattar Banoub , Suzanne Lyn Barker-Collo , Anthony Barnett , Neeraj Bedi , Derrick A. Bennett , Krittika Bhattacharyya , Suraj Bhattarai , Zulfiqar A. Bhutta , Ali Bijani , Boris Bikbov , Gabrielle Britton , Zahid A. Butt , Rosario Cárdenas , Félix Carvalho , Carlos A. Castañeda-Orjuela , Franz Castro , Ester Cerin , Jung-Chen Chang , Cyrus Cooper , Rajat Das Gupta , Jan-Walter De Neve , Kebede Deribe , Beruk Berhanu Desalegn , Melaku Desta , Meghnath Dhimal , Daniel Diaz , Mesfin Tadese Dinberu , Shirin Djalalinia , Manisha Dubey , Eleonora Dubljanin , Andre R. Durães , Mohammad Ebrahimi Kalan , Ziad El-Khatib , Babak Eshrati , Mahbobeh Faramarzi , Mohammad Fareed , Andre Faro , Seyed-Mohammad Fereshtehnejad , Eduarda Fernandes , Irina Filip , Florian Fischer , Takeshi Fukumoto , Jose A. García , Paramjit Singh Gill , Tiffany K. Gill , Philimon N. Gona , Sameer Vali Gopalani , Ayman Grada , Yuming Guo , Rajeev Gupta , Vipin Gupta , Arvin Haj-Mirzaian , Arya Haj-Mirzaian , Randah R. Hamadeh , Samer Hamidi , Hamid Yimam Hassen , Delia Hendrie , Andualem Henok , Michael K. Hole , Naznin Hossain , Mehdi Hosseinzadeh , Guoqing Hu , Olayinka Stephen Ilesanmi , Seyed Sina Naghibi Irvani , Sheikh Mohammed Shariful Islam , Neda Izadi , Mihajlo Jakovljevic , Ravi Prakash Jha , John S. Ji , Jost B. Jonas , Zahra Jorjoran Shushtari , Jacek Jerzy Jozwiak , Tanuj Kanchan , Amir Kasaeian , Ali Kazemi Karyani , Peter Njenga Keiyoro , Chandrasekharan Nair Kesavachandran , Yousef Saleh Khader , Morteza Abdullatif Khafaie , Ejaz Ahmad Khan , Mona M. Khater , Aliasghar A. Kiadaliri , Daniel N. Kiirithio , Yun Jin Kim , Ruth W. Kimokoti , Adnan Kisa , Soewarta Kosen , Ai Koyanagi , Kewal Krishan , Barthelemy Kuate Defo , Manasi Kumar , Pushpendra Kumar , Faris Hasan Lami , Paul H. 
Lee , Shanshan Li , Yu Liao , Lee-Ling Lim , Stefan Listl , Jaifred Christian F. Lopez , Marek Majdan , Reza Majdzadeh , Azeem Majeed , Reza Malekzadeh , Mohammad Ali Mansournia , Francisco Rogerlândio Martins-Melo , Anthony Masaka , Benjamin Ballard Massenburg , Kala M. Mehta , Walter Mendoza , George A. Mensah , Tuomo J. Meretoja , Tomislav Mestrovic , Ted R. Miller , G. K. Mini , Erkin M. Mirrakhimov , Dara K. Mohammad , Aso Mohammad Darwesh , Shafiu Mohammed , Farnam Mohebi , Lorenzo Monasta , Yoshan Moodley , Mahmood Moosazadeh , Ghobad Moradi , Maziar Moradi-Lakeh , Paula Moraga , Lidia Morawska , Shane Douglas Morrison , Seyyed Meysam Mousavi , Ghulam Mustafa , Azin Nahvijou , Farid Najafi , Vinay Nangia , Duduzile Edith Ndwandwe , Ionut Negoi , Ruxandra Irina Negoi , Josephine W. Ngunjiri , Cuong Tat Nguyen , Long Hoang Nguyen , Dina Nur Anggraini Ningrum , Jean Jacques Noubiap , Malihe Nourollahpour Shiadeh , Peter S. Nyasulu , Felix Akpojene Ogbo , Andrew T. Olagunju , Bolajoko Olubukunola Olusanya , Jacob Olusegun Olusanya , Obinna E. Onwujekwe , Doris D. V. Ortega-Altamirano , Eduardo Ortiz-Panozo , Simon Øverland , Mahesh P. A. , Adrian Pana , Songhomitra Panda-Jonas , Sanghamitra Pati , George C. Patton , Norberto Perico , Maarten J. Postma , Swayam Prakash , Parul Puri , Mostafa Qorbani , Amir Radfar , Fakher Rahim , Vafa Rahimi-Movaghar , Mohammad Hifz Ur Rahman , Chhabi Lal Ranabhat , David Laith Rawaf , Salman Rawaf , Giuseppe Remuzzi , Andre M. N. Renzaho , Aziz Rezapour , Carlos Rios-González , Leonardo Roever , Luca Ronfani , Ali Rostami , Enrico Rubagotti , Yahya Safari , Rajesh Sagar , Nasir Salam , Payman Salamati , Yahya Salimi , Abdallah M. Samy , Juan Sanabria , Milena M. Santric Milicevic , Brijesh Sathian , Arundhati R. Sawant , David C. Schwebel , Anbissa Muleta Senbeta , Sadaf G. 
Sepanlou , Masood Ali Shaikh , Mehran Shams-Beyranvand , Morteza Shamsizadeh , Kiomars Sharafi , Rajesh Sharma , Jun She , Aziz Sheikh , Mika Shigematsu , Soraya Siabani , Dayane Gabriele Alves Silveira , Jasvinder A. Singh , Dhirendra Narain Sinha , Vegard Skirbekk , Badr Hasan Sobaih , Moslem Soofi , Joan B. Soriano , Ireneous N. Soyiri , Chandrashekhar T. Sreeramareddy , Agus Sudaryanto , Mu'awiyyah Babale Sufiyan , Ipsita Sutradhar , PN Sylaja , Rafael Tabarés-Seisdedos , Birkneh Tilahun Tadesse , Mohamad-Hani Temsah , Abdullah Sulieman Terkawi , Belay Tessema , Zemenu Tadesse Tessema , Kavumpurathu Raman Thankappan , Roman Topor-Madry , Marcos Roberto Tovani-Palone , Bach Xuan Tran , Lorainne Tudor Car , Irfan Ullah , Olalekan A. Uthman , Pascual R. Valdez , Yousef Veisani , Francesco S. Violante , Vasily Vlassov , Sebastian Vollmer , Giang Thu Vu , Yasir Waheed , Yuan-Pang Wang , Andrea Sylvia Winkler , Charles D. A. Wolfe , Tomohide Yamada , Alex Yeshaneh , Paul Yip , Engida Yisma , Naohiro Yonemoto , Mustafa Z. Younis , Mahmoud Yousefifard , Chuanhua Yu , Sojib Bin Zaman , Jianrong Zhang , Yunquan Zhang , Sanjay Zodpey S.I.H. and N.G. conceived and planned the study. K.W. and J.H. extracted, processed, and geo-positioned the data. L.W. and N.G. carried out the statistical analyses. All authors provided intellectual inputs into aspects of this study. N.G., L.W., J.H., and L.E. prepared figures and tables. N.G. wrote the manuscript with assistance by S.B.M., and all authors contributed to subsequent revisions. Correspondence to Simon I. Hay. Peer review information Nature thanks M. Dolores Ugarte and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Extended data figures and tables Extended Data Fig. 1 Modelling regions based on geographical and SDI regions from the GBD. Modelling regions were defined as follows. 
Andean South America, Central America and the Caribbean, central sub-Saharan Africa, East Asia, eastern sub-Saharan Africa, Middle East, North Africa, Oceania, Southeast Asia, South Asia, southern sub-Saharan Africa, Central Asia, Tropical South America, and western sub-Saharan Africa. Regions in grey were not included in our models due to high-middle and high SDIs27. The map was produced using ArcGIS Desktop 10.6. Extended Data Fig. 2 Probability that the ratio of men to women aged 20–24 years who attained primary and secondary education is >1 in 2000 and 2017. a–d, Probability that ratio is >1 (for example, men complete at a higher rate than women) for attaining primary education (a, b) and secondary education (c, d), aggregated to first administrative-level units in 2000 (a, c) and 2017 (b, d). Maps were produced using ArcGIS Desktop 10.6. Extended Data Fig. 3 Average educational attainment and proportion with no primary school at the first administrative level and absolute difference between women and men aged 20–24 years. a–d, Average educational attainment for women (a) and men (c) and proportion with no primary school for women (b) and men (d) aged 20–24 years in 2017. e, f, The absolute difference in average educational attainment between men and women aged 20–24 years in 2017 (e) and proportion of individuals with no primary school education (f). Maps reflect administrative boundaries, land cover, lakes and population; grey-coloured grid cells were classified as 'barren or sparsely vegetated' and had fewer than ten people per 1 × 1-km2 grid cell49,58,59,60,62, or were not included in these analyses. Interactive visualization tools are available at https://vizhub.healthdata.org/lbd/education. Maps were produced using ArcGIS Desktop 10.6. 
.Guidelines for Accurate and Transparent Health Estimates Reporting Compliance Checklist, Supplementary Discussion, Supplementary Text on data, methods, and covariates, Model descriptions, Supplementary References, Supplementary Sections 4.3 and 4.3.2, and Supplementary Tables 2, 3, and 44. Graetz, N., Woyczynski, L., Wilson, K.F. et al. Mapping disparities in education across low- and middle-income countries. Nature 577, 235–238 (2020). https://doi.org/10.1038/s41586-019-1872-1 Issue Date: 09 January 2020 Nature menu
Probing spermiogenesis: a digital strategy for mouse acrosome classification

Alessandro Taloni, Francesc Font-Clos, Luca Guidetti, Simone Milan, Miriam Ascagni, Chiara Vasco, Maria Enrica Pasini, Maria Rosa Gioria, Emilio Ciusani, Stefano Zapperi & Caterina A. M.
La Porta

An Author Correction to this article was published on 14 November 2018.

Classification of morphological features in biological samples is usually performed by a trained eye, but the increasing amount of available digital images calls for semi-automatic classification techniques. Here we explore this possibility in the context of acrosome morphological analysis during spermiogenesis. Our method combines feature extraction from three-dimensional reconstructions of confocal images with principal component analysis and machine learning. The method could be particularly useful in cases where the amount of data does not allow for direct inspection by a trained eye. Spermatogenesis is a dynamic process during which undifferentiated diploid stem cells mature into differentiated haploid cells called spermatozoa. Mammalian spermatogenesis occurs within the seminiferous tubules and consists of three phases: a mitotic phase, in which spermatogonia divide mitotically; a meiotic phase, in which spermatocytes divide to form haploid round spermatids; and a third phase, called spermiogenesis, in which spermatids undergo morphological changes, including acrosome formation, chromatin condensation and flagellum development, resulting in the formation of spermatozoa1,2,3,4,5,6,7. A key element of spermiogenesis is the mammalian sperm acrosome, an exocytotic vesicle present on the apical surface of the head8,9 whose correct formation is crucial for the successful fertilization of the egg10. Acrosomal biogenesis takes place at the initial step of spermiogenesis and can be divided into four phases that cumulatively complete in about two weeks in the mouse and one month in humans8,9,10,11,12,13,14,15.
In rodent spermatids, proacrosomal vesicles (granules) containing a variety of proteins assemble and fuse to form a single spherical acrosomal granule at the center of the acrosomal vesicle during the Golgi phase. At the cap phase, the acrosomal granule forms a head-cap-like structure that gradually enlarges to cover the nucleus. The head cap continues to elongate, outlining the dorsal edge and protruding apically at the acrosome phase, and the structure of the acrosome is finally completed at the end of the maturation phase12. Our current understanding of human reproduction is increasing thanks to the use of Assisted Reproductive Techniques (ART), and many studies aim to find better ways to select viable sperm16. Even though many aspects of sperm formation have been investigated, only a few studies report quantitative measurements of sperm and its components, mainly focusing on whole sperm heads17,18. Since infertility is a common problem for men, it would be useful to devise standard parameters that could help in ART. The correct formation of the acrosome is crucial for physiological reproductive capability, and quantification of the ratio between spermatids and spermatozoa can be a valid support for the correct prognosis of diseases linked to an impaired biogenesis of sperm cells. Conventional strategies for studying mammalian spermiogenesis usually try to characterize specific morphological features thought to play a key role in the development of the cells into spermatozoa, with the aim of targeting them for possible prognostic/therapeutic strategies. The morphological analysis of spermatozoa is usually performed by a trained eye, but due to the increasing amount of digital images stored, it is becoming important to develop automatic techniques for classification and diagnosis. In this respect, there is still a pressing need to develop reliable automated methods for cell morphology assessment.
While objective tools for sperm motility assessment exist19, current automatic methods for sperm morphology assessment are still inaccurate and difficult to use20. Hence, subjective sperm cell morphology assessment remains the standard in laboratories, but it results in large variability in the outcome. Machine learning-based intelligent systems could play a pivotal role in reaching this goal. Such a method starts from an input feature matrix, containing characteristic values of designated positive and negative samples, and trains prediction models by learning the patterns in the feature matrix. The final goal is then to be able to automatically classify a data set with unknown labels. In this paper, we present a machine learning approach to classify, in a quantitative and semi-automatic way, important morphometric characteristics of mammalian acrosomes during spermatogenesis. We start from a three-dimensional digital reconstruction of confocal images of acrosomes, from which we extract a discretized mesh representing the surface of each acrosome. We then compute a series of morphological parameters such as volume, surface area and local curvatures. These morphological parameters represent the features that are then analyzed through machine learning and principal component analysis. We illustrate the method by analyzing acrosomes from spermatids and spermatozoa, obtained from the seminiferous tubules of young mice, which are known to have different shapes. The ground truth is established by direct classification by eye, and the results are compared with automatic methods based on machine learning. Here we develop a new method combining computational science, quantitative biology and machine learning to classify acrosomes, distinguishing spermatids from spermatozoa in a semi-automatic way and obtaining robust quantitative morphological observables.
To this end, we carry out a 3D reconstruction of the surfaces of acrosomes of spermatids and spermatozoa from sexually mature healthy mice maintained in vitro for a few days. Quantifying differences in the fractions of spermatids and spermatozoa could be useful for detecting in advance important pathological conditions related to sterility and could have an impact on ART17,18. In order to maximize the number of acrosomes for the analysis, we carried out the 3D reconstruction of the acrosomes in cells extracted from seminiferous tubules and imaged at different times, either immediately (time T0) or after being maintained in vitro overnight (time T1). An analysis by electron microscopy shows that the overall architecture is preserved between T0 and T1 (Fig. 1), and we did not record any statistical difference in the quantitative parameters extracted from the confocal images. Transmission electron micrograph of mouse seminiferous epithelium. Adult testis tubules, obtained as described in the Materials and Methods section, were fixed either immediately (time T0) or after 1 day in culture (time T1). (a,b) At T0, a well-preserved tubular basal compartment of a stage VII tubule shows normal Sertoli cells (S), spermatogonia (Sg), primary spermatocytes (Sc) and spermatids (Sd). ×3500–4800. (c,d) At T1, the tubular basal compartment shows some signs of cellular degeneration (*). ×4800. The detailed procedure for the reconstruction of the acrosome surfaces is discussed in the Materials and Methods section. Figure 2 shows two typical examples of meshes obtained by 3D reconstruction of the acrosome membranes. The analysis of each acrosome yields a set of morphological characteristics (parameters): the acrosome's volume V, its surface area Σ, the sphericity Ψ, the average mean and Gaussian curvatures (\(\overline{M}\) and \(\overline{G}\), respectively) and their relative fluctuations (\(\frac{{\rm{\Delta }}M}{\overline{M}}\) and \(\frac{{\rm{\Delta }}G}{\overline{G}}\), respectively).
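These morphological parameters are computed from the reconstructed triangle meshes. As a minimal illustration (not the authors' actual pipeline), the volume, surface area and sphericity of any closed, consistently oriented triangle mesh can be obtained with NumPy alone, using the standard sphericity definition \(\Psi = \pi^{1/3}(6V)^{2/3}/\Sigma\) (assumed here to match the paper's Eq. 1); the regular octahedron below is a hypothetical stand-in for a real acrosome mesh:

```python
import numpy as np

def mesh_volume_area(verts, faces):
    """Volume (divergence theorem) and surface area of a closed triangle mesh."""
    v0, v1, v2 = (verts[faces[:, i]] for i in range(3))
    # Signed tetrahedron volumes against the origin, summed over all faces
    vol = np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()
    return vol, area

def sphericity(volume, area):
    """Psi = pi^(1/3) * (6V)^(2/3) / Sigma; equals 1 for a perfect sphere."""
    return np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / area

# Toy mesh: a regular octahedron standing in for an acrosome surface
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
faces = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
                  [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])
V, S = mesh_volume_area(verts, faces)
print(V, S, sphericity(V, S))  # V ≈ 1.333, Σ ≈ 6.928, Ψ ≈ 0.846 (< 1, as expected)
```

The sphericity below 1 reflects the octahedron's deviation from a sphere, exactly the kind of signal the paper uses to separate rounder spermatid acrosomes from elongated spermatozoon acrosomes.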
Averaging these morphological parameters (denoted 〈…〉) over the subpopulations of spermatids and spermatozoa gives the values reported in Fig. 3. Moreover, we also report the p-values from a Kolmogorov-Smirnov (KS) test that compares the entire spermatid and spermatozoon cumulative distributions. Acrosome surface 3D reconstruction. Panel (a): the round spermatid acrosome is singled out within one of the fields of a 3D confocal stack of the experimental slide. The spermatid surface is identified thanks to the SP56 marker of its acrosomal matrix (in green). Panel (b): the Active Contour plugin reconstructs the acrosome mesh by providing the three-dimensional segmented surface closest to the acrosome lipid-bilayer membrane. For a 3D rendering of the acrosome mesh see Supplementary Video S1. Panel (c): acrosome mesh with the local Gaussian curvature superimposed on each mesh node. The color code runs from blue (low Gaussian curvature) to red (high Gaussian curvature). Panel (d): acrosome mesh with the local mean curvature superimposed on each mesh node. The color code runs from blue (low mean curvature) to red (high mean curvature). Panel (e): the spermatozoon acrosome is singled out within the confocal stack field and identified thanks to the SP56 marker of its acrosomal matrix (in green). Panel (f): the Active Contour plugin reconstructs the acrosome mesh by providing the three-dimensional segmented surface closest to the acrosome lipid-bilayer membrane. Notice the typical hairpin shape. For a 3D rendering of the acrosome mesh see Supplementary Video S2. Panel (g): acrosome mesh with the local Gaussian curvature superimposed on each mesh node. Color code as in panel (c). Panel (h): acrosome mesh with the local mean curvature superimposed on each mesh node. Color code as in panel (d). Statistical analysis: average values. Average values of the morphological parameters for spermatid (green) and spermatozoon acrosomes (red).
We also report the p-value from a KS test on top of each morphological parameter. These data show that the acrosomes in spermatozoa are, on average, nearly 50% larger than those in spermatids, and similar differences are recorded for the surface areas. This is not surprising, since volume and surface area are strongly correlated, as illustrated in Fig. 4. In particular, volumes and surfaces follow the general law \(\langle\Sigma\rangle \sim \langle V\rangle^{2/3}\), as expected from simple dimensional considerations. Features plot. Overall view of the distribution of five morphological features (\(\overline{G}\), ΔG/\(\overline{G}\), Σ, V, Ψ) and their bivariate relations. Diagonal panels: normed histograms (semi-transparent filled bins) and kernel density estimates (solid colored lines) corresponding to the log-transformed data. Lower-diagonal panels: scatter plots in logarithmic coordinates. Notice that the x-axes are shared within columns. The diagonal panels are in units of density (not shown). During spermiogenesis, acrosomes of spermatids are typically more spherical than those capping the spermatozoa nuclei. The spherical shape is probably reminiscent of an early vesicle form. We recover this observation by measuring the sphericity Ψ (Eq. 1) of each acrosome in both populations. By definition, Ψ = 1 corresponds to a perfectly spherical shape, while smaller values indicate eccentricity and/or asymmetry of the surface. The mean values reported in Fig. 3 confirm that acrosomes from spermatids indeed tend to be more spherical than those from spermatozoa (see also the reconstructed meshes in Fig. 2). This difference is statistically significant (p = 1.06 × 10−6). To further characterize the morphology, we have considered surface curvatures. The Gaussian curvature, defined in Eq. 2, is positive for spheres, negative for hyperboloids and zero for planes. Hence, the sign of the Gaussian curvature indicates whether a surface is locally convex or saddle-like.
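The p-values quoted here come from two-sample Kolmogorov-Smirnov tests comparing the full cumulative distributions of each feature across the two subpopulations. A minimal sketch with SciPy, using synthetic sphericity values in place of the measured ones (the sample sizes, means and spreads below are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-cell sphericity values for the two subpopulations;
# real values would come from the reconstructed acrosome meshes
psi_spermatids = rng.normal(0.90, 0.04, size=60)
psi_spermatozoa = rng.normal(0.80, 0.06, size=45)

# Two-sample KS test on the entire cumulative distributions
stat, p = stats.ks_2samp(psi_spermatids, psi_spermatozoa)
print(f"KS statistic = {stat:.3f}, p-value = {p:.2e}")
```

A small p-value, as in the figure, indicates that the two empirical distributions are unlikely to come from the same underlying population, even when their individual values overlap.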
We have measured the average Gaussian curvature \(\overline{G}\) per cell, as defined in Eq. 5. The average value 〈\(\overline{G}\)〉 clearly shows that spermatids tend to have a more convex acrosome membrane compared with spermatozoa (see Fig. 3, p = 1.14 × 10⁻²). The mean curvature, defined in Eq. 3, is zero for a plane, constant for a sphere and, more generally, positive for convex surfaces and negative for concave ones. Fig. 3 shows that, as in the case of the Gaussian curvature, acrosomes from spermatids appear more spherical than those from spermatozoa (p = 4.15 × 10⁻²). In addition to the average values of the Gaussian and mean curvature, we also consider their standard deviations, which display significant differences between spermatids and spermatozoa (p = 2.19 × 10⁻⁶ for the Gaussian curvature and p = 4.64 × 10⁻³ for the mean curvature). In summary, the quantitative morphological analysis reveals clear, statistically significant differences between spermatids and spermatozoa. These differences, however, arise at the population level and do not necessarily translate into a successful automated classification at the individual-cell level. This is clear from the plots in Fig. 4, where we report the bivariate relations and distributions for five morphological features. Notice that, while these features all give rise to significant differences in the average parameters (Fig. 3), there is an important overlap in the individual values for spermatids and spermatozoa. To overcome these problems, we decided to investigate whether machine learning and principal component analysis could provide reliable information at the single-cell level and, more importantly, serve to build a predictive semi-quantitative method. Fig. 4 shows that the data display more uniform-like densities in logarithmic space (lower-diagonal panels) than in the original linear space (Supplementary Fig. S1). Hence the SVM classification is performed in logarithmic space. 
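The per-feature p-values quoted above come from two-sample Kolmogorov-Smirnov comparisons of the spermatid and spermatozoon distributions. A minimal sketch with scipy's ks_2samp; the feature values below are synthetic stand-ins, not the measured data (only the group sizes, 158 and 51, match the study):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-ins for one morphological feature (e.g. sphericity):
# 158 spermatids vs 51 spermatozoa, matching the study's group sizes.
spermatids = rng.normal(loc=0.80, scale=0.05, size=158)
spermatozoa = rng.normal(loc=0.65, scale=0.08, size=51)

# ks_2samp compares the two empirical cumulative distributions.
statistic, p_value = ks_2samp(spermatids, spermatozoa)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.2e}")
```

A small p-value indicates that the two cumulative distributions differ, which is how the per-feature p-values reported in Fig. 3 are obtained.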
Having more uniform densities over the feature space is desirable for SVM classification, because penalties for misclassification are weighted according to their distance to the decision boundary. Figure 5 shows the projection onto the first two principal components of the dataset, both in linear and in logarithmic space. Although certain differences in the distribution of values for spermatids and spermatozoa can be appreciated, these differences are clearly insufficient to define non-overlapping clusters. In other words, the two subpopulations cannot be distinguished by eye in a PCA projection of the 7-feature dataset. This is, indeed, what motivated us to use an SVM in the full 7-dimensional feature space. PCA projection. Projection of the seven morphological features onto their first two principal components (see Methods section), computed both in linear space (left panel) and in logarithmic space (right panel). Although some differences between spermatids and spermatozoa are apparent, no clear clusters arise. Our results are summarized in Table 1. The values of the class accuracy (defined in Eq. 16) show that the SVM classification algorithm gets the correct answer in 73% of trials (74% of trials for spermatid and 69% of trials for spermatozoon acrosomes; equivalent ROC AUC statistic 0.76). Although an average classification accuracy of 73% would not suffice for a potential automated acrosome classification method, it is definitely beyond what a random or a constant classifier would achieve, marking the existence of a signal that could potentially be further exploited. In addition, it is interesting to notice the consistency with which cells are correctly classified/misclassified: 71% of all cells are correctly classified on at least 85% of the algorithm runs, i.e. r_{a=0.85} = 0.71. If the value of a is raised to 0.99, this figure drops only to 68%, i.e. r_{a=0.99} = 0.68. 
In other words, there is a large subset of the data that is almost always correctly classified, and a smaller subset of the data that is misclassified most of the time. This can be better seen in Fig. 6, where the cell accuracy has been used to color a scatter plot of the data. We have visually inspected the distribution of features, and found that misclassified cells lie in regions of mixed spermatid/spermatozoa density, while correctly classified ones tend to lie in regions of more unequal spermatid/spermatozoa density. Therefore, it appears there is no more obvious information left, and further exploiting classification results to enhance the SVM would result in over-fitting. Table 1 Summary of results of the SVM classification: class-averaged accuracy A_C (Eq. 16); ratio of cells with classification accuracy equal to or greater than 0.85 and 0.99, r_{0.85}, r_{0.99}; and area under the curve for the receiver operating characteristic (ROC AUC). SVM analysis. Left panel: spermatid acrosomes (green dots) and spermatozoon acrosomes (red dots) plotted in the Volume-Sphericity plane. Right panel: same data, colored according to the value of the classification accuracy A_C (Eq. 16) obtained with the SVM: spermatids are colored from totally white (0% accuracy) to totally green (100% accuracy), while spermatozoa are colored from totally white (0% accuracy) to totally red (100% accuracy). Notice that a perfect classifier would render both panels identical. The two small images above the colorbar are example confocal images (a red coloring filter was applied to the spermatozoa image for clarity). The small triangular markers in the colorbar mark the class-level accuracy values (see Table 1). The choice of SVM among other classifiers reflects its simplicity and the fact that it handles class imbalance well. In particular, we compared our results with those obtained with a Random Forest (RF) classifier using either class weights or downsampling to correct for class imbalance. 
In the first case, we obtain 92% accuracy for spermatids, but only 27% for spermatozoa. In the second case, we achieve 69% accuracy for spermatids and 57% for spermatozoa. Therefore, the SVM gives better results than RF, probably due to how class imbalance is handled. In conclusion, we have proposed a general strategy to classify acrosomes from spermatids and spermatozoa according to their morphological features. The method starts from a three-dimensional reconstruction of the surface of the acrosome from confocal images and extracts a set of morphological parameters from the reconstructed surface. These parameters are then analyzed by machine learning and compared with the ground truth provided by a direct assessment by eye. The method we propose could help assist the analysis of spermatozoa during spermiogenesis, especially in the presence of large quantities of data where direct classification by eye is not feasible. Future studies along these lines should aim at finding automated tools to distinguish between a normal cap and a cap with distortions of the outer and inner acrosomal membranes, to identify damage to the acrosomal matrix, or to estimate the fraction of sperm cells with a loosened cap after freezing-thawing. This could help solve the relevant clinical issue of quantifying the percentage of sperm cells with a normal acrosome and therefore assess fertility. Animals and culture medium Sexually mature CD1 male mice (four to five months) were purchased from Charles River (Calco, Italy). Mice were kept in controlled conditions and all procedures conformed to Italian law (D. Lgs n. 2014/26, implementation of the 2010/63/UE) and were approved by the Animal Welfare Body of the University of Milan and by the Italian Minister of Health. Isolation of single cells from testis Testes were isolated and decapsulated in 0.1 M phosphate buffer. 
The seminiferous tubules were gently placed onto a small cube made of 1.5% agarose and soaked in culture medium for more than 24 h to replace water. The amount of medium was adjusted in order to cover half to four fifths of the height of the agarose cubes. Tubules were maintained overnight in an incubator at 34 °C, 5% CO2 and controlled humidity in the following culture medium: RPMI (Euroclone), 10% Fetal Bovine Serum (FBS) (Euroclone), 2 mM stable L-glutamine (Euroclone), antibiotic-antimycotic solution (A5955, Sigma-Aldrich). Seminiferous tubules were picked up from the agarose cubes (Sigma) and fixed in 2% paraformaldehyde dissolved in PBS pH 7.2–7.4 for 10 min. A single fixed tubule was laid down onto a slide and covered with a coverslip, and gentle pressure was applied in order to allow cells to come out of the seminiferous tubule. Slides were then frozen in liquid nitrogen for further analyses. Acrosome staining The slides were rinsed with ice-cold phosphate buffered saline (PBS) 1X for 5 min at room temperature (RT), fixed with cold 100% methanol for 15 min at −20 °C, then incubated with 10% goat serum in PBS for 1 h at RT. The slides were incubated with anti-sperm protein sp56 antibody (7C5; 1:150, Life Technologies MA1-10866) overnight at 4 °C and then incubated with anti-mouse IgM/G/A (H + L) 488 secondary antibody (1:250, Millipore AP501F) for 1 h at RT. The slides were mounted with ProLong Gold Antifade reagent with DAPI (Life Technologies P36935). At least 60 stack images were acquired with a Leica SP2 laser scanning confocal microscope (63X). Transmission electron microscopy The seminiferous tubules were fixed in loco with 2.5% glutaraldehyde (electron microscopy grade) in 0.1 M phosphate buffer (PB) pH 7.2 for 3 h at room temperature. The tubules were then mounted between two layers of 1.5% agarose (Sigma) of about 2 mm in height, which was cut into small cubes 2 × 2 × 3 mm in size and postfixed in 2% osmium tetroxide in 0.1 M PB overnight at 4 °C. 
The samples were dehydrated in a graded ethanol series and embedded in epoxy resin. Semithin sections (1 μm) were stained with toluidine blue in borax and examined by light microscopy. Ultrathin sections (70 nm) were cut using a diamond knife on a Reichert Ultracut ultramicrotome, mounted on Cu/Rh grids (200 mesh), contrasted with uranyl acetate and lead citrate, and examined and photographed with a Zeiss 902 transmission electron microscope operating at 80 kV. The exposed films were developed according to common photographic techniques, captured with an Epson V700 Photo scanner at a final resolution of 600 dpi and appropriately calibrated for contrast and brightness (see Fig. 1). 3D acrosome reconstruction by immunofluorescence images of sp56 A 3D reconstruction of the acrosome obtained from confocal images of sperm cells stained with anti-sp56 was performed with the ICY software tools (http://icy.bioimageanalysis.org/). Briefly, confocal stacks (at least 80–90 stacks) were first pre-processed to extract the individual cell images. Images were picked in diverse fields of the slide, in order to consider all the different stages that are present in a single portion of the tubule and not to overestimate the presence of cells in a particular stage of differentiation. A minimum of twenty cells were scored and analyzed for each slide. Two subpopulations in the seminiferous tubules were considered: round spermatids and spermatozoa. The former represent the early stage of spermiogenesis and are identified by the presence of one or two spots of condensed heterochromatin in a spheroidal nucleus. The latter show compact chromatin, an acrosome with a hooked shape and the presence of the flagellum, according to previous papers1, 7. Cells were singled out by tracing a region of interest (ROI) around every acrosome in each subpopulation. Subsequently, this ROI was cropped using the Fast crop tool. 
Hence, our analysis could take advantage of single high-resolution images for any acrosome under consideration. The 3D ROIs of individual acrosomes were also refined using the HK-Means plugin (http://icy.bioimageanalysis.org/plugin/HK-Means). This method performs an N-class thresholding based on a K-Means classification of the image histogram. The acrosome membrane reconstruction was obtained by the segmentation technique implemented in the 3D Active Contour plugin (http://icy.bioimageanalysis.org/plugin/Active_Contours)21. The algorithm at the basis of this plugin performs three-dimensional segmentation and tracking, using a triangular mesh optimized over the original signal as a target. In Fig. 2 and in movie M1 (see the Supplementary Information) the 3D reconstructions of a typical spermatid and a spermatozoon acrosome are displayed. The three-dimensional renderings of the meshes (in Fig. 2 and in Supplementary Video S1 and Supplementary Video S2) were performed with Paraview (http://www.paraview.org/). Single cell data analysis Once each three-dimensional acrosome mesh was reconstructed, we proceeded to measure its volume (V) and surface area (Σ) using Meshlab tools (http://meshlab.sourceforge.net/). The acrosome sphericity is calculated according to the definition $${\rm{\Psi }}=\frac{{\pi }^{1/3}{(6V)}^{2/3}}{{\rm{\Sigma }}}$$ The local Gaussian and mean curvatures were calculated by a custom python code, which makes extensive use of the vtk libraries (http://www.vtk.org/). Typical images of a spermatid and a spermatozoon acrosome mesh, with superimposed local curvature (blue-to-red) color maps, are reported in Fig. 2(c,d) and Fig. 2(g,h) for the Gaussian and mean curvature, respectively. We label every node on a single mesh by i (with 1 ≤ i ≤ N); the local mean and Gaussian curvature fields on each node are therefore denoted as M_i and G_i, respectively. 
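Since Ψ equals 1 exactly for a sphere and is smaller for any other shape, the sphericity definition above is easy to check numerically. A minimal sketch (the function name is ours, not from the authors' code):

```python
import math

def sphericity(volume: float, surface: float) -> float:
    """Psi = pi^(1/3) * (6V)^(2/3) / Sigma: 1 for a sphere, <1 otherwise."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface

# Sanity check on a unit sphere: V = 4*pi/3 and Sigma = 4*pi give Psi = 1.
v_sphere = 4 / 3 * math.pi
s_sphere = 4 * math.pi
print(sphericity(v_sphere, s_sphere))   # ~1.0

# A cube of side 1 (V = 1, Sigma = 6) is less sphere-like:
print(sphericity(1.0, 6.0))             # (pi/6)^(1/3) ~ 0.806
```

In the paper, V and Σ come from the reconstructed mesh (via Meshlab), and Ψ is then one of the seven per-cell features.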
The local curvature of a surface entails the notion of principal curvatures, k_1, k_2, defined as the smallest and largest one-dimensional curvatures at a point. The Gaussian curvature is defined as $${G}_{i}={k}_{1}^{i}{k}_{2}^{i}$$ where the index i runs over the nodes of an acrosome mesh (see Fig. 2(c,g)). The mean curvature is instead defined as the average of the principal curvatures: $${M}_{i}=\frac{{k}_{1}^{i}+{k}_{2}^{i}}{2},$$ and has dimensions of inverse length. The average values of the mean and Gaussian curvature over the acrosome surface are defined as $$\overline{M}=\frac{\sum _{i=1}^{N}{M}_{i}}{N}$$ $$\overline{G}=\frac{\sum _{i=1}^{N}{G}_{i}}{N}.$$ Besides the average local curvature per cell (Eqs 4 and 5, respectively), we also define the relative fluctuations of the local mean curvature of individual acrosomes as $$\frac{{\rm{\Delta }}M}{\overline{M}}=\sqrt{\frac{\sum _{i=1}^{N}{({M}_{i}-\overline{M})}^{2}}{(N-1){\overline{M}}^{2}}},$$ and similarly for the Gaussian curvature $$\frac{{\rm{\Delta }}G}{\overline{G}}=\sqrt{\frac{\sum _{i=1}^{N}{({G}_{i}-\overline{G})}^{2}}{(N-1){\overline{G}}^{2}}}.$$ The statistical analysis is performed by averaging a set of 7 morphological parameters (V, Σ, Ψ, \(\overline{M}\), \(\overline{G}\), \(\frac{{\rm{\Delta }}M}{\overline{M}}\), \(\frac{{\rm{\Delta }}G}{\overline{G}}\)) over the statistical ensemble of the spermatid and spermatozoon acrosome subpopulations, composed of 158 spermatids and 51 spermatozoa. The statistical significance was evaluated using Kolmogorov-Smirnov tests as implemented in the python library scipy (https://www.scipy.org/). Our code is available at https://github.com/ComplexityBiosystems/. Principal Component Analysis (PCA) We use Principal Component Analysis (PCA)22 as implemented in the open-source python library scikit-learn (https://scikit-learn.org/stable). 
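A minimal sketch of the projection used in the paper's Fig. 5, with a synthetic 209 × 7 feature matrix standing in for the real (V, Σ, Ψ, curvature) data; as in the analysis, the features are log-transformed before the decomposition:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic stand-in for the 209 cells x 7 morphological features,
# strictly positive so the log-transform is well defined.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(209, 7))

# Log-transform, then project onto the first two principal components.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(np.log(X))

print(X_2d.shape)                        # (209, 2)
print(pca.explained_variance_ratio_)     # variability captured by PC1, PC2
```

The resulting X_2d is the 2D scatter used for visual inspection of possible clusters.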
PCA is a very popular visualization and dimensionality-reduction technique based on the singular-value decomposition of the features-samples matrix. The decomposition yields a new space of uncorrelated features, where each new feature, or principal component, is a linear combination of the original features. The principal components are of interest because (i) they are uncorrelated and (ii) the first principal component accounts for as much variability of the original data as possible, the second one accounts for as much of the remaining variability as possible, and so on. In this way, by projecting the data onto the first few principal components we preserve most of the variability of the data while keeping the number of features low. In this manuscript we use PCA as a visualization technique, to rule out the existence of "obvious" clusters in the dataset. By projecting the data onto the two first principal components, we obtain the 2-dimensional scatter plot that best represents the original data in terms of explained variability. Support Vector Machine (SVM) We first give a brief mathematical introduction to the algorithm behind SVMs, and then discuss the implementation for our problem. SVMs are a set of widely-used machine learning algorithms, highly popular for their simplicity and the fact that they yield good results in many cases. Here we use the simplest version, an SVM with a linear kernel. In essence, the algorithm boils down to finding the hyperplane h parametrized by \(\overrightarrow{w},b\), $$h:=\{\overrightarrow{x}\in {{\mathbb{R}}}^{d}:\overrightarrow{w}\cdot \overrightarrow{x}+b=0\}$$ that best separates the data \({\overrightarrow{x}}_{i}\) into the known classes y_i ∈ {−1, 1}. In mathematical terms, the problem is cast as an optimization problem with constraints, which is easily solved via Lagrange multipliers. 
In particular, one needs to find \(\overrightarrow{w},b\), ξ_i that minimize $$\frac{1}{2}{\Vert \overrightarrow{w}\Vert }^{2}+C\sum _{i}{\xi }_{i}$$ under the constraints $${y}_{i}(\overrightarrow{w}\cdot {\overrightarrow{x}}_{i}+b)+{\xi }_{i}\ge 1\quad \quad \forall i$$ where ξ_i ≥ 0 are auxiliary variables that allow for misclassification (a penalty proportional to the distance to the decision boundary is set for misclassified points), and C sets a global weight for the misclassification penalty. We refer the reader interested in the mathematical details to ref. 22. The hyperplane h is determined using only a subset of the data, called the training set; the labels of the rest of the data, called the test set, are then predicted as $$y\equiv {\rm{sign}}\,(\overrightarrow{w}\cdot {\overrightarrow{x}}_{ts}+b)$$ where y is the predicted label of a point \({\overrightarrow{x}}_{ts}\) in the test set. There are many more involved strategies to split the data into different sets for training and prediction. The interested reader will find good introductory material in ref. 22 and references therein. Our data are given by the seven morphological features of the acrosomes and the acrosome subpopulation to which each cell belongs (spermatids/spermatozoa). That is, each cell is represented by a pair (\({\overrightarrow{x}}_{i},{y}_{i}\)) with \({\overrightarrow{x}}_{i}\in {{\mathbb{R}}}^{7}\) a vector containing its morphological information, and y_i ∈ {−1, 1} a subpopulation class label, where −1 encodes "Spermatid" and 1 encodes "Spermatozoa". We use the python implementation of Support Vector Machines provided by the machine-learning library scikit-learn (https://scikit-learn.org/stable). In particular, we use the function "sklearn.svm.SVC()". 
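A minimal, self-contained sketch of this setup; the data below are synthetic (in the paper, X would hold the seven log-transformed morphological features of the 209 cells):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic 7-dimensional data for the two classes:
# -1 = "Spermatid" (158 cells), +1 = "Spermatozoa" (51 cells).
X = np.vstack([rng.normal(-1.0, 1.0, (158, 7)),
               rng.normal(+1.0, 1.0, (51, 7))])
y = np.array([-1] * 158 + [1] * 51)

clf = SVC(kernel="linear", C=1.0)   # linear kernel, as in the text
clf.fit(X, y)

# Recover the hyperplane parameters w, b of Eq. 8 and apply Eq. 11:
w, b = clf.coef_[0], clf.intercept_[0]
pred = np.sign(X @ w + b)
print((pred == y).mean())           # training accuracy on separable data
```

For a linear kernel, `clf.coef_` and `clf.intercept_` expose exactly the \(\overrightarrow{w}\) and b of the hyperplane, so the sign rule of Eq. 11 reproduces `clf.predict`.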
Given the difference in sample size of the two groups (158 spermatids and 51 spermatozoa, see the Materials and Methods), it is important to set the keyword "class_weight" to "balanced", which effectively sets statistical weights in the computation of the error term inversely proportional to the observed class frequencies. We use 10-fold cross-validation, which means that, for each run of the algorithm, the data are randomly split into ten groups: nine are used to train the SVM, i.e. to determine the parameters \(\overrightarrow{w},b\) of the hyperplane, and one is used for prediction. This is repeated ten times, once for each group, so that in the end each datapoint has received one predicted label. Given the stochasticity in splitting the data, we average results over N_r = 1000 runs of the algorithm. Increasing N_r does not improve the results. In summary, for each run of the algorithm, the output is a predicted label, "Spermatid" or "Spermatozoa", for each of the 209 acrosomes, which we then compare with the ground truth. If the predicted label corresponds to the true nature of the acrosome, we assign a binary value of 1; otherwise, we assign 0. We thus obtain a binary matrix B_{ij} of size 209 × 1000, where each row represents a cell and each column a run of the algorithm. We define the cell accuracy a_i as the fraction of times a specific cell i was correctly classified, $${a}_{i}=\frac{1}{{N}_{r}}\sum _{j=1}^{{N}_{r}}{B}_{ij}.$$ We then define the average class accuracy A_C as the average of a_i over all cells i of a given class C, where C can be either spermatids or spermatozoa, $${A}_{C}=\frac{1}{|C|}\sum _{i\in C}{a}_{i},$$ where |C| is the size of the acrosome subpopulation, i.e. |C| = 158 for spermatids and |C| = 51 for spermatozoa. 
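The accuracy bookkeeping described above reduces to simple row and column averages over the binary matrix B. A sketch with a simulated B (the per-cell success rates are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated outcome matrix B (209 cells x 1000 runs): B[i, j] = 1 if cell i
# was correctly classified on run j of the cross-validated SVM.
n_cells, n_runs = 209, 1000
p_correct = rng.uniform(0.3, 1.0, size=n_cells)   # invented per-cell rates
B = (rng.random((n_cells, n_runs)) < p_correct[:, None]).astype(int)

# Cell accuracy a_i and class accuracy A_C (Eq. 16); here the first 158
# rows play the role of the spermatid class.
a = B.mean(axis=1)
A_spermatids = a[:158].mean()

# r_a: fraction of cells in the class with accuracy above threshold a.
def r_a(accuracies, threshold):
    return (accuracies > threshold).mean()

print(A_spermatids, r_a(a[:158], 0.85), r_a(a[:158], 0.99))
```

Raising the threshold can only shrink the set of cells that clears it, so r_{a=0.99} ≤ r_{a=0.85} by construction.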
Notice that A_C also corresponds to the average over algorithm runs j = 1…N_r of the per-run class accuracy, $${A}_{C}=\frac{1}{|C|}\sum _{i\in C}{a}_{i}$$ $$\quad =\,\frac{1}{|C|}\sum _{i\in C}(\frac{1}{{N}_{r}}\sum _{j=1}^{{N}_{r}}{B}_{ij})$$ $$\quad =\,\frac{1}{{N}_{r}}\sum _{j=1}^{{N}_{r}}(\frac{1}{|C|}\sum _{i\in C}{B}_{ij}).$$ Finally, we define the quantity r_a as the ratio of cells above a certain accuracy a in a given class C, i.e. $${r}_{a}=\frac{|\{i\in C,{a}_{i} > a\}|}{|C|}$$ For instance, if one takes a value of a = 0.99, then r_{a=0.99} indicates the (relative) number of cells that would be correctly classified with probability equal to or higher than 99%. All the custom codes are available at https://github.com/ComplexityBiosystems/. A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper. Parvinen, M. & Ruokonen, A. Endogenous steroids in the rat seminiferous tubules. Comparison of the stages of the epithelial cycle isolated by transillumination-assisted microdissection. Journal of Andrology 3, 211–220 (1982). McCarrey, J. R. et al. Differential transcription of pgk genes during spermatogenesis in the mouse. Developmental Biology 154, 160–168 (1992). De Rooij, D. G. & Grootegoed, J. A. Spermatogonial stem cells. Current Opinion in Cell Biology 10, 694–701 (1998). Eddy, E. M. Male germ cell gene expression. Recent Progress in Hormone Research 57, 103–128 (2002). Wiederkehr, C. et al. GermOnline, a cross-species community knowledgebase on germ cell differentiation. Nucleic Acids Research 32, D560–D567 (2004). Zhao, G.-Q. & Garbers, D. L. Male germ cell specification and differentiation. Developmental Cell 2, 537–547 (2002). Vasco, C., Zuccotti, M., Redi, C. A. & Garagna, S. Identification, isolation, and RT-PCR analysis of single stage-specific spermatogenetic cells obtained from portions of seminiferous tubules classified by transillumination microscopy. 
Molecular Reproduction and Development 76, 1173–1177 (2009). Leblond, C. & Clermont, Y. Spermiogenesis of rat, mouse, hamster and guinea pig as revealed by the "periodic acid-fuchsin sulfurous acid" technique. American Journal of Anatomy 90, 167–215 (1952). Yanagimachi, R. Mammalian fertilization. The Physiology of Reproduction 1, 189–317 (1994). Berruti, G. Towards defining an 'origin'—the case for the mammalian acrosome. In Seminars in Cell & Developmental Biology, vol. 59, 46–53 (Elsevier, 2016). Lin, Y.-N., Roy, A., Yan, W., Burns, K. H. & Matzuk, M. M. Loss of zona pellucida binding proteins in the acrosomal matrix disrupts acrosome biogenesis and sperm morphogenesis. Molecular and Cellular Biology 27, 6794–6805 (2007). Abou-Haila, A. & Tulsiani, D. R. Mammalian sperm acrosome: formation, contents, and function. Archives of Biochemistry and Biophysics 379, 173–182 (2000). Kierszenbaum, A. L. & Tres, L. L. The acrosome-acroplaxome-manchette complex and the shaping of the spermatid head. Archives of Histology and Cytology 67, 271–284 (2004). Buffone, M. G., Foster, J. A. & Gerton, G. L. The role of the acrosomal matrix in fertilization. International Journal of Developmental Biology 52, 511–522 (2004). Kanemori, Y. et al. Biogenesis of the sperm acrosome is regulated by pre-mRNA alternative splicing of Acrbp in the mouse. Proceedings of the National Academy of Sciences 113, E3696–E3705 (2016). de Boer, P., de Vries, M. & Ramos, L. A mutation study of sperm head shape and motility in the mouse: lessons for the clinic. Andrology 3, 174–202 (2015). Rijsselaere, T., Van Soom, A., Maes, D. & Nizanski, W. Computer-assisted sperm analysis in dogs and cats: an update after 20 years. Reproduction in Domestic Animals 47, 204–207 (2012). Yániz, J., Soler, C. & Santolaria, P. Computer assisted sperm morphometry in mammals: a review. Animal Reproduction Science 156, 1–12 (2015). Katz, D. & Davis, R. Automatic analysis of human sperm motion. Journal of Andrology 8, 170–181 (1987). 
Rijsselaere, T., Van Soom, A., Hoflack, G., Maes, D. & de Kruif, A. Automated sperm morphometry and morphology analysis of canine semen by the Hamilton-Thorne analyser. Theriogenology 62, 1292–1306 (2004). Dufour, A., Thibeaux, R., Labruyere, E., Guillen, N. & Olivo-Marin, J.-C. 3-D active meshes: fast discrete deformable models for cell tracking in 3-D time-lapse microscopy. IEEE Transactions on Image Processing 20, 1925–1937 (2011). Barber, D. Bayesian Reasoning and Machine Learning (Cambridge University Press, 2012). A.T., F.F.C. and S.Z. are supported by ERC Advanced grant SIZEFFECTS. Center for Complexity and Biosystems University of Milano, via Celoria 16, 20133, Milano, Italy Alessandro Taloni, Luca Guidetti, Simone Milan, Stefano Zapperi & Caterina A. M. La Porta Department of Physics, University of Milano, Via Celoria 16, 20133, Milano, Italy Alessandro Taloni & Stefano Zapperi CNR-Consiglio Nazionale delle Ricerche, ISC, Via dei Taurini 19, 00185, Roma, Italy Alessandro Taloni ISI Foundation, Via Chisola 5, 10126, Torino, Italy Francesc Font-Clos, Simone Milan & Stefano Zapperi Department of Environmental Science and Policy, University of Milano, via Celoria 26, 20133, Milano, Italy Luca Guidetti, Simone Milan & Caterina A. M. La Porta Department of Biosciences University of Milano, via Celoria 26, 20133, Milano, Italy Miriam Ascagni, Maria Enrica Pasini & Maria Rosa Gioria Istituto Neurologico Carlo Besta, Via Celoria, 11, 20133, Milano, Italy Chiara Vasco & Emilio Ciusani Department of Applied Physics, Aalto University, P.O. Box 11100, FIN-00076, Aalto, Espoo, Finland Stefano Zapperi CNR-Consiglio Nazionale delle Ricerche, ICMATE, Via Roberto Cozzi 53, 20125, Milano, Italy Francesc Font-Clos A.T., F.F.C., L.G., S.M. analyzed data. L.G., M.A., C.V., M.E.P., M.R.G., E.C., C.A.M.L.P. 
performed experiments. S.Z. and C.A.M.L.P. designed and coordinated the project. A.T., F.F.C., S.Z., C.A.M.L.P. wrote the paper. Correspondence to Caterina A. M. La Porta. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Taloni, A., Font-Clos, F., Guidetti, L. et al. Probing spermiogenesis: a digital strategy for mouse acrosome classification. Sci Rep 7, 3748 (2017). https://doi.org/10.1038/s41598-017-03867-7
Utilising identifier error variation in linkage of large administrative data sources Katie Harron (ORCID: orcid.org/0000-0002-3418-2856)1, Gareth Hagger-Johnson2, Ruth Gilbert3 & Harvey Goldstein4 Linkage of administrative data sources often relies on probabilistic methods using a set of common identifiers (e.g. sex, date of birth, postcode). Variation in data quality on an individual or organisational level (e.g. by hospital) can result in clustering of identifier errors, violating the assumption of independence between identifiers required for traditional probabilistic match weight estimation. This potentially introduces selection bias to the resulting linked dataset. We aimed to measure variation in identifier error rates in a large English administrative data source (Hospital Episode Statistics; HES) and to incorporate this information into match weight calculation. We used 30,000 randomly selected HES hospital admissions records of patients aged 0–1, 5–6 and 18–19 years, for 2011/2012, linked via NHS number with data from the Personal Demographic Service (PDS; our gold-standard). We calculated identifier error rates for sex, date of birth and postcode and used multi-level logistic regression to investigate associations with individual-level attributes (age, ethnicity, and gender) and organisational variation. We then derived: i) weights incorporating dependence between identifiers; ii) attribute-specific weights (varying by age, ethnicity and gender); and iii) organisation-specific weights (by hospital). Results were compared with traditional match weights using a simulation study. Identifier errors (where values disagreed in linked HES-PDS records) or missing values were found in 0.11% of records for sex and date of birth and in 53% of records for postcode. Identifier error rates differed significantly by age, ethnicity and sex (p < 0.0005). 
Errors were less frequent in males, in 5–6 year olds and 18–19 year olds compared with infants, and were lowest for the Asian ethnic group. A simulation study demonstrated that substantial bias was introduced into estimated readmission rates in the presence of identifier errors. Attribute- and organisational-specific weights reduced this bias compared with weights estimated using traditional probabilistic matching algorithms. We provide empirical evidence on variation in rates of identifier error in a widely-used administrative data source and propose a new method for deriving match weights that incorporates additional data attributes. Our results demonstrate that incorporating information on variation by individual-level characteristics can help to reduce bias due to linkage error. Linkage of administrative data is an important tool for service evaluation and research, as individual-level information can be combined in a relatively cost-effective and timely manner compared with conventional data collection models. Most administrative data sources were not developed with linkage in mind, posing unique challenges for identifying the same individual in different sources [1]. Typographical errors, missing values and identifiers that change over time can prevent records from matching and lead to linkage error (false-matches and missed-matches) [2, 3]. Even low error rates can lead to biased results, particularly when records from particular types of individuals or organisations are less likely to link successfully than others. Such 'differential' linkage can lead to a form of bias in analysis, for example when specific groups of records are misclassified or excluded from the linked dataset [4–8]. For linkage of data sources that do not contain a reliable unique identifier, probabilistic methods are commonly used [9, 10]. 
Probabilistic linkage makes use of variables such as sex, date of birth and postcode to create a match weight for classifying records as matches or non-matches. Match weights are traditionally based on the Fellegi-Sunter approach using conditional probabilities derived from estimated rates of errors in identifiers: the probability that identifiers agree given records belong to the same subject (m-probability), and the probability that identifiers agree given records belong to different subjects (u-probability) [11, 12]. Conditional probabilities can be derived from 'training' data, i.e. a sub-sample of data where the true match status for each record pair is known (supervised matching) [13]. If no training data are available, probabilities are typically estimated using statistical methods, such as the expectation-maximisation (EM) algorithm [14]. There are several problems associated with the calculation of probabilistic match weights using the traditional approach. Firstly, match weights are calculated assuming that identifier errors occur randomly within a dataset, and that the probability of an identifier error is unrelated to any other characteristic (age, ethnicity etc.) [11]. However, this assumption is often invalid: data quality is often associated with individual-level characteristics and can also vary on an organisational level [15]. These associations are typically ignored, unless these characteristics are incorporated into a blocking scheme with match weights being produced separately for each block. Secondly, match weights are typically calculated by summing the logarithms of m- and u-probability ratios across identifiers. This requires the assumption that identifier errors are independent (i.e. agreement on year of birth is independent of agreement on forename) - an assumption that often fails and can lead to misclassification of record pairs [16]. 
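The traditional Fellegi-Sunter weighting described here can be sketched in a few lines; the m- and u-probabilities below are purely illustrative values, not estimates from HES or PDS:

```python
import math

def fs_weights(m: float, u: float):
    """Fellegi-Sunter log2 weights for a single identifier:
    agreement weight log2(m/u), disagreement weight log2((1-m)/(1-u))."""
    return math.log2(m / u), math.log2((1 - m) / (1 - u))

# Illustrative probabilities: P(agree | match) and P(agree | non-match).
identifiers = {
    "sex":           (0.999, 0.5),     # agrees by chance half the time
    "date of birth": (0.995, 0.001),
    "postcode":      (0.700, 0.0001),  # often changes over time
}

for name, (m, u) in identifiers.items():
    agree_w, disagree_w = fs_weights(m, u)
    print(f"{name:>13}: agree {agree_w:+6.2f}, disagree {disagree_w:+6.2f}")

# The traditional composite match weight sums the per-identifier weights --
# which is exactly the independence assumption criticised in the text.
total_if_all_agree = sum(fs_weights(m, u)[0] for m, u in identifiers.values())
```

Agreement on a discriminating identifier (rarely shared by chance) contributes a large positive weight, while disagreement contributes a negative one; record pairs are then classified by comparing the summed weight against thresholds.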
One approach to overcome these problems is to estimate match weights jointly over a set of identifiers (the agreement pattern), thus overcoming the need for independence between identifiers. It is also possible to calculate match weights allowing dependence on individual- and/or organisational-level covariates. Although characteristics of the identifying variables, such as the frequency of common or rare surnames, are often incorporated into match weight calculation, this has not been the case for individual characteristics that are not used for matching (e.g. ethnicity) or at an organisational level (e.g. by hospital). The present study aims first to provide empirical evidence on the associations between identifier error rates and individual characteristics in a national administrative data source (Hospital Episode Statistics; HES). Quality of identifier recording in HES is likely to be representative of other administrative sources, i.e. those where identifiers are input using a range of IT systems, and so information on identifier error rates will be relevant to linkage of other large administrative data sources. Secondly, we develop methods to estimate match weights without relying on the independence assumption, and incorporating individual or organisational-level attributes, and evaluate these weights as alternatives to traditional probabilistic match weights. Hospital Episode Statistics (HES) is an administrative data source containing information on all admissions to NHS hospitals in England. Linkage of HES is coordinated through NHS Digital (previously known as the Health and Social Care Information Centre) [17]. The HES extract used for this study had previously been linked with a reference (gold-standard) dataset of records extracted from the Personal Demographic Service (PDS), which is also coordinated by NHS Digital (http://systems.digital.nhs.uk/demographics/pds). PDS contains the latest demographic details corresponding to a given NHS number. 
PDS also contains historical information such as previous addresses and is used for the NHS number tracing service (known as the Demographics Batch Service) and to provide identifiers for the NHS Patient Spine. Linkage with PDS reference data allowed us to quantify identifier errors. In this study, we define identifier error as discrepancies between PDS and HES, e.g. where identifiers had been recorded incorrectly, had legitimately changed over time (e.g. postcode) or were missing in HES. For the purposes of this study, we defined our true (reference) match status by agreement or disagreement of NHS number between HES and PDS. We used a random sample of 10,000 record pairs from HES inpatient data linked with PDS, for the financial year 1st April 2011 to 31st March 2012, for each of three cohorts defined by date of birth: i) infants aged <1 year; ii) children aged 5–6 years; and iii) young adults aged 18–19 years. For each age cohort, the set of matches was created by identifying the PDS record associated with the NHS number on each HES record (n = 10,000 matches). The set of non-matches was created by identifying all PDS records with different NHS numbers to each HES record. This resulted in (10,000 × 10,000) − 10,000 = 99,990,000 non-matches for each age cohort. However, the majority of these non-matches did not agree on any identifier, or only agreed on sex, and so were excluded from consideration. This resulted in around 30,000 non-matches for each age cohort. The data used for this study comprised patterns of agreement/disagreement between date of birth, sex and postcode in HES-PDS linked pairs, but contained no actual identifiers. Agreement patterns were aggregated by age cohort, sex and ethnic group.

Identifier error rates

We estimated identifier error rates for sex, date of birth and postcode, based on the number of times these identifiers disagreed in matched HES-PDS records.
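The candidate-pair arithmetic described above is worth a quick sanity check. This is a back-of-the-envelope sketch, not the study's code; the variable names are ours:

```python
# Sketch of the match/non-match pair counts described above (illustrative
# only; the study then filtered non-matches on agreement patterns).
n_hes, n_pds = 10_000, 10_000

matches = n_hes                      # one PDS record per HES NHS number
all_pairs = n_hes * n_pds            # every HES-PDS combination
non_matches = all_pairs - matches    # pairs with differing NHS numbers

print(non_matches)  # 99,990,000 candidate non-match pairs per age cohort
```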
We modelled the risk of identifier error using logistic regression with a set of attribute predictors recorded in HES (ethnicity, age and sex). We used a multi-level model with hospital as a random effect to explore organisational-level variation. Dependence between pairwise identifiers was also tested using multi-level logistic regression models in Stata [18].

Probabilistic match weights

Traditional probabilistic match weights (assuming independence between identifiers)

We derived conditional probabilities for sex, date of birth and postcode based on the observed error rates for each identifier. Probabilities were derived from the number of times an identifier agreed or disagreed in pairs of matched HES-PDS records, e.g. for sex:
$$m\text{-probability} = m_{sex} = P(\text{agree on sex} \mid M)$$
$$u\text{-probability} = u_{sex} = P(\text{agree on sex} \mid U)$$
where $M$ represents a match and $U$ represents a non-match. Missing values were treated as disagreement. Match weights were then derived by summing the log-ratio of m- and u-probabilities over all $k$ identifiers, i.e.
$$W = \sum_k \log_2\left(\frac{m_k}{u_k}\right) = \log_2\left(\frac{m_{sex}}{u_{sex}}\right) + \log_2\left(\frac{m_{dob}}{u_{dob}}\right) + \log_2\left(\frac{m_{postcode}}{u_{postcode}}\right)$$

Match weights incorporating dependence between identifiers

Each HES-PDS record pair was associated with an agreement pattern $\varphi$ representing agreement or disagreement on the joint set of three identifiers {sex, date of birth, postcode}. For binary agreement (agree = 1; disagree = 0), there are $2^3 = 8$ possible agreement patterns for sex, date of birth and postcode: {1,1,1}, {1,1,0}, …, {0,0,0}. Conditional probabilities were derived jointly over all identifiers for each observed agreement pattern, e.g.
for agreement on sex and date of birth and disagreement on postcode, represented as {110}:
$$m\text{-probability} = m_{\varphi} = P(\text{agree on sex and date of birth, disagree on postcode} \mid M) = P(\varphi = \{110\} \mid M)$$
$$u\text{-probability} = u_{\varphi} = P(\text{agree on sex and date of birth, disagree on postcode} \mid U) = P(\varphi = \{110\} \mid U)$$
Match weights were then derived as:
$$W = \log_2\left(\frac{m_{\varphi}}{u_{\varphi}}\right)$$

Attribute-specific and organisational-specific match weights

We derived attribute-specific match weights using the procedures described above, but now for each combination of characteristics as recorded in PDS (age cohort, sex, ethnic group; N combinations = 36). This process is distinct from blocking, in that agreement on any of these attributes is not required for linkage (and attribute-specific weights can be calculated for variables not used within the linkage, e.g. ethnic group). Organisational-specific match weights were derived by calculating m- and u-probabilities separately for each hospital (N hospitals = 388). Attribute-specific and organisational-specific match weights were calculated in the traditional manner (i.e. assuming independence between identifiers), as it was not possible to stratify each agreement pattern by age, sex and ethnicity due to low numbers.

We performed a simulation study to determine the effect of the identifier-independence assumption and the value of incorporating attribute information into match weight calculation. Our scenario was linkage of hospital admissions records containing sex, date of birth, postcode, and NHS number. The aim was to estimate readmission rates by linking multiple hospital records for the same individual over time.
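Both weight calculations above can be sketched in a few lines. The m- and u-probabilities below are made-up illustrative values, not the HES-PDS estimates; the point is the mechanics: per-identifier log-ratios are summed under the independence assumption (using the standard $(1-m)/(1-u)$ ratio for disagreements), while the pattern-based weight uses a single joint probability ratio.

```python
import math

# Illustrative m- and u-probabilities (made-up values, not the study's):
# P(identifier agrees | match) and P(identifier agrees | non-match).
m = {"sex": 0.999, "dob": 0.999, "postcode": 0.47}
u = {"sex": 0.50, "dob": 0.001, "postcode": 0.001}

def traditional_weight(agreement):
    """Fellegi-Sunter weight under the independence assumption: sum the
    log2 ratios per identifier, using (1-m)/(1-u) where it disagrees."""
    total = 0.0
    for k, agrees in agreement.items():
        ratio = m[k] / u[k] if agrees else (1 - m[k]) / (1 - u[k])
        total += math.log2(ratio)
    return total

def pattern_weight(pattern, m_phi, u_phi):
    """Weight from the joint probability of a whole agreement pattern,
    so no independence between identifiers is assumed."""
    return math.log2(m_phi[pattern] / u_phi[pattern])

# Agreement pattern {sex, dob, postcode} = {1,1,0}: postcode disagrees.
w_independent = traditional_weight({"sex": True, "dob": True, "postcode": False})

# Hypothetical joint probabilities, estimated directly from labelled pairs.
m_phi = {(1, 1, 0): 0.52}
u_phi = {(1, 1, 0): 0.0004}
w_joint = pattern_weight((1, 1, 0), m_phi, u_phi)

print(round(w_independent, 2), round(w_joint, 2))
```

When errors are truly independent the two weights roughly coincide; the pattern-based weight diverges from the sum exactly when identifier errors co-occur.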
Where there was a match between hospital records, this indicated that an individual had been admitted multiple times within the study period. Individuals with only a single hospital record and no matches were admitted only once during the study year.

Data generating mechanism

For each simulation, we created our 'matches' by randomly sampling agreement patterns (with replacement) from matched pairs in the HES-PDS extract, retaining distributions of age, sex and ethnicity from the original data. We created our 'non-matches' by sampling agreement patterns from non-matches in the HES-PDS extract. Sampling of matches and non-matches was stratified by age, sex and ethnicity, in order to reflect differences in readmission rates observed in the literature [19]. This approach avoided any distributional assumptions about identifier error rates for date of birth, sex or postcode, and also preserved associations between identifiers and individual characteristics. Since, by design, the original HES-PDS extract only included records that agreed on NHS number, we introduced NHS number identifier error rates representative of those observed in the literature [20, 21]. We used several scenarios to determine the effect of different NHS number error rates on results:
i) NHS number was randomly missing or incorrect in 30% of records.
ii) NHS number was randomly missing or incorrect in 0.5% of records.
iii) NHS number was missing or incorrect in 30% of records overall, but was twice as likely to contain errors if there were errors in any of the other identifiers (sex, date of birth or postcode).
iv) NHS number was missing or incorrect in 30% of records overall, but errors were distributed with the same pattern as errors in ethnicity (as observed in the HES-PDS extract).
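The error-injection scenarios above can be mimicked along these lines. This is a sketch with made-up record dictionaries and the scenario-3 rule (doubled error rate when another identifier disagrees); the study sampled real HES-PDS agreement patterns rather than these toy records:

```python
import random

def inject_nhs_error(record, rng, base_rate=0.30, dependent=False):
    """Return True if the record's NHS number should be made missing/wrong.

    With dependent=True, the error rate is doubled when any other
    identifier (sex, dob, postcode) already disagrees, as in the third
    scenario described above.
    """
    rate = base_rate
    if dependent and not all(record[k] for k in ("sex", "dob", "postcode")):
        rate = min(1.0, 2 * base_rate)
    return rng.random() < rate

rng = random.Random(42)
clean = {"sex": True, "dob": True, "postcode": True}
errored = {"sex": True, "dob": False, "postcode": True}

n = 100_000
r_clean = sum(inject_nhs_error(clean, rng, dependent=True) for _ in range(n)) / n
r_err = sum(inject_nhs_error(errored, rng, dependent=True) for _ in range(n)) / n
print(round(r_clean, 2), round(r_err, 2))  # roughly 0.30 vs 0.60
```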
For each simulation, records were rank ordered by match weight, and a cut-off threshold for classifying records as matches was chosen by determining the maximum weight or probability that would not exceed a false-match rate of 1% (or 99% specificity). It was possible to fix this threshold since the true match status was known in the simulated data, although this would not be possible in real data. Results from three approaches were averaged over 500 simulated datasets and compared with those from traditional match weights: i) match weights incorporating dependence between identifiers (based on agreement patterns), ii) attribute-specific match weights (based on 36 different combinations of characteristics) and iii) organisational-specific match weights (based on 388 hospitals). We compared sensitivity (i.e. the proportion of true matches that were identified) between methods and compared estimated readmission rates from each method with the 'true' readmission rate within 12 months (8.8%) in the simulated data. We assessed the performance of each method by measuring bias, i.e. the percentage difference between estimated and true readmission rates. Identifier errors (including missing values) were found in 0.11% of records for sex and date of birth, and in 53% of records for postcode. In these data, there was no evidence of dependence between postcode and date of birth or sex (p = 0.266 and 0.187 respectively from the multi-level logistic regression model). Although the error rate for date of birth was low, errors in this variable were more likely to occur in records where there was also an error in sex (p = 0.021). The probability of identifier error (disagreement of identifier values between HES and PDS) differed significantly according to age (p < 0.0001), ethnicity (p = 0.0005) and sex (p < 0.0001) (Fig. 1). 
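One reading of the threshold rule described above (choose the cut-off that keeps the false-match rate at or below 1% while accepting as many pairs as possible, which is only feasible here because true match status is known in the simulation) can be sketched as:

```python
def choose_threshold(pairs, max_false_match_rate=0.01):
    """Pick the lowest weight cut-off whose accepted pairs keep the
    false-match rate (false matches / accepted pairs) within the target.

    `pairs` is a list of (weight, is_true_match) tuples with known truth,
    as in a simulation where true match status is available.
    """
    for threshold in sorted({w for w, _ in pairs}):
        accepted = [(w, t) for w, t in pairs if w >= threshold]
        if not accepted:
            return None
        fmr = sum(not t for _, t in accepted) / len(accepted)
        if fmr <= max_false_match_rate:
            # Lowest passing threshold accepts the most pairs, so it
            # maximises sensitivity among thresholds meeting the target.
            return threshold
    return None

# Tiny illustrative example (weights and labels are made up).
pairs = [(12.0, True), (11.5, True), (10.0, True), (9.5, False),
         (9.0, True), (5.0, False), (4.0, False)]
print(choose_threshold(pairs))
```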
Identifier errors occurred less frequently in records from females compared with males (odds ratio 0.84; 95% CI 0.81-0.86); and were lowest for Asian ethnicity (odds ratio 0.89; 95% CI 0.84-0.94 compared with White ethnicity). Across all identifiers, errors occurred less frequently in 5–6 year olds and 18–19 year olds compared with infants (odds ratios 0.39; 95% CI 0.37-0.40 and 0.37; 95% CI 0.36-0.39 respectively). However, patterns differed according to the identifier: sex was more likely to be correct in 18–19 year olds than infants, but the pattern was reversed for date of birth (Fig. 1). Fig. 1 Percentage of HES-PDS linked records with disagreeing or missing identifiers according to age, ethnicity and sex. The larger identifier error rates in postcode reflect that postcode was missing for 83% of records for infants aged 0–1 years. Multi-level logistic regression showed there was substantial variation on an organisational level, although no particular hospital provider had a significantly higher error rate than the overall mean (Fig. 2). Fig. 2 Variation in identifier error rates by hospital provider (n = 167). Each dot represents one hospital (hospitals with <500 matches were excluded). Inner lines = 95% control limits; outer lines = 99.8% control limits. Absolute values differed for traditional match weights, match weights incorporating dependence between identifiers, and attribute-specific match weights (Table 1). The ordering of weights (and therefore of record pairs) was the same using both traditional weights and weights incorporating dependence. However, for attribute-specific weights, ordering differed according to individual characteristics. For example, for 0–1 year olds, agreement on date of birth and sex only had a higher weight than agreement on sex and postcode only, but these weights were reversed for the older age groups. Variation in attribute-specific match weights reflected underlying identifier error rates.
For example, the match weight for agreement on date of birth and sex but disagreement on postcode was 9.2 for infants, but 7.9 for 5–6 year olds, reflecting the fact that postcode was more likely to be missing in infant records. Table 1 Traditional match weights, match weights incorporating dependence between identifiers, and attribute-specific match weights according to agreement pattern {date of birth, sex, postcode}. Record pairs with no agreement on any identifiers, or where only sex agreed (agreement patterns {000} and {010}), were assumed to be non-matches and excluded. Sensitivity of linkage varied from 79% using traditional match weights and match weights incorporating dependence, to 97% using attribute-specific match weights. With an error rate of 30% in NHS number, all methods underestimated the 'true' overall readmission rate of 8.8%, except for the organisation-specific match weights (Table 2). Traditional match weights and match weights incorporating dependence provided similar results; organisational- and attribute-specific match weights performed best overall. Table 2 Simulation study results: estimated readmission rates. The 'true' readmission rate was 8.8%. Bias in estimated readmission rates was highest when NHS number errors were more likely to occur in records with at least one other identifier error (21% bias using traditional weights or weights incorporating dependence, 3% using attribute-specific weights, 0.1% using organisational-specific weights). Errors in date of birth were highest in records with missing ethnicity, lower in the White group compared to Mixed, Asian, or Black, and there were no errors in the 'Other' category (Fig. 1). When this distribution was applied to NHS number errors, bias varied accordingly: little bias was introduced to the estimated readmission rate for the 'Other' group, but estimates for the Missing group were substantially biased (Figs. 3–4).
Attribute- or organisational-specific weights performed well at handling these dependencies, with an overall bias of 1% and 0.2% respectively. Fig. 3 Simulation study results: estimated readmission rates by ethnicity, according to NHS number error rate distribution. Fig. 4 Simulation study results: absolute bias in estimated readmission rates for NHS number error associated with ethnicity. Results for traditional match weights fall behind those for weights incorporating dependence between identifiers. Our study provides empirical evidence on variation in identifier error rates by individual characteristics in a widely-used and extensive administrative data source. This information will be valuable for other researchers assessing the feasibility of linkage with administrative data sources, particularly where no training data are available, as the identifier error rates observed in HES will provide an appropriate starting point for estimating m- and u-probabilities in other similar datasets. We provide methods for incorporating dependence between identifiers, and variation in identifier errors by individual and organisational-level characteristics, into match weight calculation. Our simulation study demonstrated that match weights incorporating individual characteristics or organisational variation were effective at reducing bias associated with errors in linkage, particularly when errors are distributed non-randomly. Results from our simulation study support a large body of literature showing that substantial bias can be introduced into results of analyses based on data containing linkage errors [22–25]. This is particularly important when error is non-random, i.e. dependent on individual-level characteristics, and when there are a large number of missed-matches (e.g. with linkage requiring exact matching of identifiers). Evidence from previous studies highlights that the most vulnerable groups are those most likely to be affected by linkage error [2, 26].
In our study, readmission rates estimated using linkage with traditional match weights were underestimated, due to low sensitivity when fixing the false-match rate at 1%. In practice, false-match rates are often lower than 1%, corresponding with a lower sensitivity (where false-matches are avoided, at the expense of missed-matches) and a greater risk of under-estimated readmission rates [15, 27]. Bias was greatest for Mixed, Asian, or Black ethnic groups, meaning that relative comparisons by ethnicity would be biased using match weights derived by the traditional method. However, we show that attribute- or organisational-specific match weights, incorporating information on variation in identifier errors, can substantially reduce bias associated with linkage error. Additional methods for handling linkage error, such as incorporating match weights into analysis in a multiple imputation framework, could be used to reduce bias further [23, 28]. Incorporating information on individual or organisational characteristics, or dependence between identifiers, into match weight estimation is a relatively simple process, given a large enough sample from which to estimate the relevant parameters. In practice, detailed information on identifier error rates is not always available and parameters are often derived from a sample of data. Where a large enough sample on which to base estimates of error rates is not available, it would be possible to incorporate characteristics into latent class models such as the Expectation-Maximisation (EM) algorithm, which can be used to estimate conditional probabilities for the traditional Fellegi-Sunter approach [14, 29]. The value of incorporating information on record attributes is likely to be most evident in linkage of large-scale administrative datasets, particularly where records are grouped, for example by organisation or region.
However, our study used a relatively simple design of linkage within one longitudinal dataset, and further evaluation is required to understand performance and practicalities of the method in large, complex linkages involving multiple files. There is limited evidence on how the failure of the assumption of independence affects linkage quality over and above the calculation of match weights. Tromp et al. (2008) found that dependence between highly correlated identifiers (such as expected date of birth and actual date of birth) had a negative impact on match weights and that this resulted in an incorrect ranking of record pairs ordered by match weight [16]. Similarly, Herzog et al. (2010) found that match weights assigned to non-dependent identifiers were too low in the presence of dependent identifiers [30]. Methods for accounting for dependence between identifiers have also been shown to improve the quality of linkage [31]. Others believe that the impact of dependence between identifiers is small, and that the failure of the independence assumption can be ignored [32, 33]. In our study, ordering of record pairs based on weights incorporating dependence between identifiers was the same as with traditional match weights, mainly due to a lack of strong dependence between errors in sex, postcode and date of birth observed in HES. However, incorporating dependence into match weights may become more important in data where obvious dependencies do exist, although handling dependence between a large number of identifiers may become impractical. A major strength of this study was the use of a large, generalizable administrative data source that is frequently linked with other datasets and used for commissioning and monitoring of the NHS in England and for research. Our study demonstrates the usefulness of the PDS as a reference dataset.
There is a lack of published information available on PDS but it holds potential for developing a better understanding of the mechanisms underlying identifier errors, for improving data linkage methods, and for validating identifiers in HES [34]. However, our study was limited by the assumption that agreement on NHS number between HES and PDS indicated that records belonged to the same individual. In reality, NHS number is not always a reliable identifier for linkage [20]. If well-completed NHS number is indicative of good data quality more generally, we may have underestimated identifier error rates through our study design. In addition, we based our extract on date of birth, and so excluded all records where date of birth was missing. We also used a one year study period, and therefore would not have captured changes in postcode over time. Inspection of PDS reveals that 55% of children have at least two postcodes in their first year of life and 69% have at least two postcodes by age 5/6 (19% have four or more different postcodes by this age). In our simulation study, we fixed our threshold at a false-match rate of 1%. In practice, choice of appropriate thresholds can be difficult, and is typically chosen based on a sample of manually-reviewed records, or using synthetic data [35]. Incorporating information on individual characteristics or organisational variation into match weight calculation can reduce bias associated with errors in linkage, particularly when errors are distributed non-randomly. Continued improvement of linkage methods will allow more efficient exploitation of administrative data sources, reduce bias associated with linkage of imperfect identifiers and improve the reliability and transparency of analysis based on linked data. 
This will improve the ability of those working in government and health policy, who frequently use research data generated from administrative data sources to inform health policy, to make informed decisions on patient care and health systems. Evaluation of services for specific age or ethnic groups can be important for policy, but as our study shows, results for specific groups can be biased if associated linkage error is not addressed. Careful consideration should be given to the trade-off between bespoke linkage strategies for each study (that prioritise the quality of linkage) versus routine linkage systems that maximise efficiency and security. In order for data users to understand the limitations of linked data sources, it is vital that information on linkage quality and error rates is made available on release of linked data. Data providers need to improve transparency about data processing before, during and after linkage.

Abbreviations: HES: Hospital Episode Statistics; NHS: National Health Service; PDS: Personal Demographic Service

References

Benchimol EI, et al. The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) Statement. PLoS Med. 2015;12(10):e1001885. Bohensky M, et al. Data linkage: A powerful research tool with potential problems. BMC Health Serv Res. 2010;10(1):346–52. Harron K, et al. Opening the black box of record linkage. J Epidemiol Community Health. 2012;66(12):1198. Neter J, Maynes E, Ramanathan R. The effect of mismatching on the measurement of response error. J Am Stat Assoc. 1965;60(312):1005–27. Lariscy JT. Differential Record Linkage by Hispanic Ethnicity and Age in Linked Mortality Studies: Implications for the Epidemiologic Paradox. J Aging Health. 2011;23(8):1263–84. Jasilionis D, et al. Ethnic mortality differentials in Lithuania: contradictory evidence from census-linked and unlinked mortality estimates. J Epidemiol Community Health. 2011;66(6):e7. Schmidlin K, et al.
Impact of unlinked deaths and coding changes on mortality trends in the Swiss National Cohort. BMC Med Inform Decis Mak. 2013;13(1):1–11. Ford JB, Roberts CL, Taylor LK. Characteristics of unmatched maternal and baby records in linked birth records and hospital discharge data. Paediatr Perinat Epidemiol. 2006;20(4):329–37. Clark D. Practical introduction to record linkage for injury research. Inj Prev. 2004;10(3):186–91. Sayers A, et al. Probabilistic record linkage. Int J Epidemiol. 2015;45(3):954–64. Fellegi IP, Sunter AB. A theory for record linkage. J Am Stat Assoc. 1969;64(328):1183–210. Jaro M. Probabilistic linkage of large public health data files. Stat Med. 1995;14(5–7):491–8. Christen P. A two-step classification approach to unsupervised record linkage. In: Proceedings of the sixth Australasian conference on Data mining and analytics, Volume 70. Darlinghurst: Australian Computer Society, Inc.; 2007. Yancey W. Improving EM algorithm estimates for record linkage parameters. In: Joint Statistical Meetings - Section on Survey Research Methods; 2004. Hagger-Johnson G, et al. Identifying false matches in anonymised hospital administrative data without patient identifiers. Health Serv Res. 2014;50(4):1162–78. Tromp M, et al. Ignoring dependency between linking variables and its impact on the outcome of probabilistic record linkage studies. J Am Med Inform Assoc. 2008;15(5):654–60. Health and Social Care Information Centre. Methodology for creation of the HES Patient ID (HESID). 2014. StataCorp. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP; 2015. Wijlaars LPMM, et al. Who comes back with what: a retrospective database study on reasons for emergency readmission to hospital in children and young people in England. Arch Dis Child. 2016; in press. Aldridge RW, et al. Accuracy of Probabilistic Linkage Using the Enhanced Matching System for Public Health and Epidemiological Studies. PLoS One. 2015;10(8):e0136179. Hippisley-Cox J.
Validity and completeness of the NHS Number in primary and secondary care: electronic data in England 1991–2013. In Project Report. University of Nottingham; 2013. Moore CL, et al. A new method for assessing how sensitivity and specificity of linkage studies affects estimation. PLoS One. 2014;9(7):e103690. Harron K, et al. Evaluating bias due to data linkage error in electronic healthcare records. BMC Med Res Methodol. 2014;14(1):36. Brenner H, Schmidtmann I. Effects of record linkage errors on disease registration studies. Method Inf Med. 1998;37(1):69–74. Krewski D, et al. The effect of record linkage errors on risk estimates in cohort mortality studies. Surv Methodol. 2005;31(1):13–21. Bohensky M. Chapter 4: Bias in data linkage studies. In: Harron K, Dibben C, Goldstein H, editors. Methodological Developments in Data Linkage. London: Wiley; 2015. Hagger-Johnson G, et al. Data linkage errors in hospital administrative data when applying a pseudonymisation algorithm to paediatric intensive care records. BMJ Open. 2015;5(8):e008118. Goldstein H, Harron K, Wade A. The analysis of record-linked data using multiple imputation with data value priors. Stat Med. 2012;31(28):3481–93. Winkler WE. Chapter 2: Probabilistic linkage. In: Harron K, Dibben C, Goldstein H, editors. Methodological Developments in Data Linkage. London: Wiley; 2015. Herzog TH, Scheuren F, Winkler WE. Record linkage. WIREs Comput Stat. 2010;2(5):535–43. Daggy J, et al. A practical approach for incorporating dependence among fields in probabilistic record linkage. BMC Med Res Methodol. 2013;13(1):97. Herzog T, Scheuren F, Winkler W. Data quality and record linkage techniques. New York: Springer Verlag; 2007. Winkler WE. The state of record linkage and current research problems. 1999. NHS England. Understanding the impact of data quality on data linkage. 2016. Winglee M, Valliant R, Scheuren F. A case study in record linkage. Surv Methodol. 2005;31(1):3–11. 
The authors would like to thank Wataru Suzuki and Trevor Anders from NHS Digital for their help with this collaboration. We are grateful to NHS Digital for enabling this work by allowing GHJ to access data within NHS Digital. The authors would also like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the Data Linkage and Anonymisation programme, where initial findings were presented and discussed. Katie Harron is funded by the Wellcome Trust (grant number 103975/Z/14/Z). This work was also supported by the Administrative Data Research Centre for England funded by the ESRC (grant number ES/L007517/1) and the initial work for this study was undertaken with funding from the ESRC NCRM (grant number ES/F035098/1). This work was discussed at the Isaac Newton Institute for Mathematical Sciences, Cambridge, supported by EPSRC grant no EP/K032208/1. Access to Hospital Episode Statistics and the Personal Demographic Service requires approval from NHS Digital; these data cannot be made publicly available. Simulated aggregate datasets are available from the corresponding author on request. KH and HG conceived the study; GHJ performed initial analyses of data, KH wrote the first draft of the manuscript, analysed aggregate data, and performed the simulation study. RG and HG critically reviewed the manuscript. All authors read and approved the final manuscript. The authors have no competing interests to declare. Hospital Episode Statistics were made available by NHS Digital (at the time of the study, named the Health and Social Care Information Centre). As the analysis was a service evaluation to improve the quality of service provided by NHS Digital, which did not directly involve participants in research, the study was exempt from UK ethics approval. GHJ conducted initial analyses internally at NHS Digital; aggregate results tables were then shared with co-authors (with no small cell sizes). 
The study design and results were shared with NHS Digital staff during meetings between January and May 2016.

London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK: Katie Harron
Administrative Data Research Centre for England, UCL, 222 Euston Road, London, NW1 2DA, UK: Gareth Hagger-Johnson
Administrative Data Research Centre for England and UCL Great Ormond Street Institute of Child Health, 30 Guilford Street, London, WC1N 1EH, UK: Ruth Gilbert
University of Bristol, Administrative Data Research Centre for England and UCL Great Ormond Street Institute of Child Health, 30 Guilford Street, London, WC1N 1EH, UK: Harvey Goldstein

Correspondence to Katie Harron.

Harron, K., Hagger-Johnson, G., Gilbert, R. et al. Utilising identifier error variation in linkage of large administrative data sources. BMC Med Res Methodol 17, 23 (2017). https://doi.org/10.1186/s12874-017-0306-8

Keywords: Record linkage; Linkage error; Linkage evaluation
What are some examples of a mathematical result being counterintuitive?

As I procrastinate studying for my Maths Exams, I want to know what are some cool examples of where math counters intuition. My first and favorite experience of this is Gabriel's Horn, which you see in an intro Calc course, where the figure has finite volume but infinite surface area (I later learned of Koch's snowflake, which is a 1d analog). I just remember doing out the integrals for it and thinking that it was unreal. I later heard the remark that you can fill it with paint, but you can't paint it, which blew my mind. Also, philosophically/psychologically speaking, why does this happen? It seems that our intuition often guides us and is often correct for "finite" things, but when things become "infinite" our intuition flat-out fails.

Tags: intuition, big-list. Asked by Steven-Owen.

Comment: Why does it happen? Because our intuition is developed by dealing with finite things: it is quite unsurprising that we are surprised by phenomena specific to infinite objects! This is exactly the same as the fact that our bodies are trained to move and act under the effect of gravity, so when we are in space we become clumsy and need to retrain. Intuition is not fixed: if you study phenomena associated to infinite objects, you develop an intuition for that, and presumably people working with large cardinals, or strange objects like graphs with chromatic number $\aleph_8$ or Banach-Tarski partitions of a sphere, after a while find them just as intuitive as you and me find the formula for the area of a triangle. Intuition is, in most situations, just a name we put on familiarity.

Comment: Philosophically / psychologically speaking, human brains weren't adapted for intuiting mathematical truths. The fact that we can repurpose our brains to do mathematics at all (beyond counting etc.) is astonishing.
As for Gabriel's horn, I don't think this is a good example: see math.stackexchange.com/a/14634/232 . $\endgroup$ – Qiaochu Yuan $\begingroup$ I think remarks like "you can fill it with paint, but you can't paint it" are actually not helpful. In trying to appeal to our everyday intuition, they get in the way of mathematical understanding. Of course, you can't paint Gabriel's Horn (its surface area is infinite) but you can't fill it with paint either (because paint molecules have a finite size, and Gabriel's Horn gets infinitely thin). Or, more prosaically, you can't fill Gabriel's Horn with paint because it's a mathematical idealisation that doesn't exist in the physical world. $\endgroup$ – Chris Taylor $\begingroup$ "In mathematics you don't understand things. You just get used to them." ---John von Neumann. $\endgroup$ – Nate Eldredge Here's a counterintuitive example from The Cauchy Schwarz Master Class, about what happens to cubes and spheres in high dimensions: Consider an $n$-dimensional cube with side length 4, $B=[-2,2]^n$, with radius-1 spheres placed inside it at every corner of the smaller cube $[-1,1]^n$, i.e. the set of spheres centered at coordinates $(\pm 1,\pm 1, \dots, \pm 1)$ that all just barely touch their neighbors and the walls of the enclosing box. Place another sphere $S$ at the center of the box at 0, large enough so that it just barely touches all of the other spheres in each corner. Below is a diagram for dimensions n=2 and n=3. Does the box always contain the central sphere? (I.e., is $S \subset B$?) Surprisingly, no! The radius of the blue sphere $S$ actually diverges as the dimension increases: the corner-sphere centers lie at distance $\sqrt{n}$ from the origin, so $S$ has radius $\sqrt{n}-1$, which grows without bound. The crossover point is dimension n=9, where the central sphere just barely touches the faces of the red box, as well as each of the 512(!) spheres in the corners. In fact, in high dimensions nearly all of the central sphere's volume is outside the box.
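The radius claim can be checked numerically. A minimal sketch (the function name is mine), using the fact that the corner-sphere centers lie at distance $\sqrt{n}$ from the origin and each corner sphere has radius 1:

```python
import math

def central_sphere_radius(n):
    """Radius of the central sphere in n dimensions: the corner-sphere
    centers sit at distance sqrt(n) from the origin, and each corner
    sphere has radius 1, so the central sphere has radius sqrt(n) - 1."""
    return math.sqrt(n) - 1

# The box [-2, 2]^n has half-width 2, so the central sphere reaches the
# faces exactly when sqrt(n) - 1 >= 2, i.e. from dimension n = 9 on.
for n in (2, 3, 9, 10, 100):
    r = central_sphere_radius(n)
    print(n, round(r, 3), "inside box" if r < 2 else "touching/outside")
```

At n=9 the radius is exactly 2, matching the crossover described above, and the number of corner spheres there is $2^9 = 512$.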
Nick Alger $\begingroup$ But the volume of the box diverges just as well. As you increase dimensions shouldn't you expect everything to just keep growing? $\endgroup$ – Steven-Owen $\begingroup$ 1) This is not counterintuitive, one can see what happens comparing cases $n=2$ and $n=3$, the relative difference in volumes between the blue sphere and the box is less. 2) The $2^n$ spheres always have radius 1 while the diagonal of the box increases. 3) The fact that a sphere bounded by the vertex of a box can get out of the box in any dimension. 3 facts that make this result perfectly logical! $\endgroup$ – Gaston Burrull $\begingroup$ @Steven-Owen but notice that the distance from the origin to the center of each cube face remains constant. $\endgroup$ – Thomas Ahle $\begingroup$ This example shows how important it is that we think outside the box! :-) $\endgroup$ – Asaf Karagila ♦ $\begingroup$ The alignment from n=2 to n=3 is different. I am trying to imagine the alignment in n=4 and above. $\endgroup$ – Xonatron It's somewhat counterintuitive that simple symmetric random walks in 1 dimension and in 2 dimensions return to the origin with probability 1. Once one has absorbed that fact, it may be somewhat counterintuitive that the same thing is not true in higher dimensions. (see Proving that 1- and 2-d simple symmetric random walks return to the origin with probability 1, Examples of results failing in higher dimensions, and Pólya's Random Walk Constant) $\begingroup$ This one makes my Collatz bat sense tingle $\endgroup$ – samerivertwice As some other people said, "intuition is highly subjective". Different people think about problems in different ways. That said, there are many, many counter-intuitive results in mathematics. This is why people demand rigorous proof! ;-) Almost any result involving probability. Humans suck at probability! (E.g., the birthday paradox: The probability that anyone in the room shares the same birthday as you is very small, unless you have a lot of people.
But the probability that anybody in the room shares a birthday is very high. Way higher than you'd imagine...) Almost any result involving infinite sets. Infinity doesn't behave how you'd expect at all! ("Infinity" actually comes in different sizes. $\mathbb{Q}$ is the same size as $\mathbb{N}$, despite being a superset of it. Subtracting an infinite set from an infinite set can yield a result of positive finite size. Etc.) Several results about things which are impossible to compute. (E.g., the halting problem looks like it should be really, really easy, but it's actually impossible. Rice's theorem also sounds completely ludicrous. The busy beaver function is non-computable, regardless of how easy it looks. And so forth.) Fractal geometry contains a few results which break people's minds. (E.g., a polygon which has infinite perimeter and zero area. A Julia set where every point simultaneously touches three basins of attraction. A connected curve with no derivatives...) I could probably think of more, given enough time... MathematicalOrchid $\begingroup$ Rice's theorem does sound completely ludicrous. $\endgroup$ – superAnnoyingUser $\begingroup$ "a polygon which has infinite perimeter and zero area", like a very long and slim rectangle? $\endgroup$ $\begingroup$ "the halting problem looks like it should be really, really easy" I don't find this true at all. When I first heard about it my thought process was "hmm... you could approach that by... wait, no, that wouldn't work... huh, that's really hard." $\endgroup$ – Challenger5 The topological manifold $\mathbb{R}^n$ has a unique smooth structure up to diffeomorphism... as long as $n \neq 4$. However, $\mathbb{R}^4$ admits uncountably many exotic smooth structures. Jesse Madnick $\begingroup$ The only dimension for which $\mathbb{R}^n$ admits exotic smooth structures is $n = 4$... I just can't get over it. $\endgroup$ – Jesse Madnick $\begingroup$ @JessMadnich: Why is this?
When or how does "4" enter the proof? $\endgroup$ – Nikolaj-K $\begingroup$ Interesting coincidence, the only dimension for which $\mathbb{R}^n$ admits a (non-commutative) skew field structure, compatible with the multiplication of $\mathbb{R}$, is also $n=4$. $\endgroup$ – N. S. $\begingroup$ There are no coincidences in mathematics - only reasons too abstract for us to have spotted yet :) $\endgroup$ $\begingroup$ @NikolajK There are very distinct techniques used for $n \leq 3$ and $n \geq 5$. The key to the former is that in low dimensions, smooth manifolds are the same as topological manifolds; every top. man. has a unique smooth structure. In high dimensions this is a largely algebraic story; relevant parts were written by Kirby and Siebenmann. See this answer. In low dimensions we can't use algebra as Whitehead's trick doesn't work, and the special cases for $n \leq 3$ depend on certain homotopy groups vanishing, which only happens in small dimensions. $\endgroup$ The Monty Hall Problem is another finite example which most people find highly counter-intuitive. I believe even Erdős refused to believe its solution was correct for a while. Fixee $\begingroup$ Here is a reference to the story about Erdős, but I agree with this guy's interpretation: "I doubt Erdős was really confused. The Monty Hall problem is complicated because usually the person explaining it tries to make it complicated by leaving out necessary information." $\endgroup$ – Dan Brumleve $\begingroup$ I heard the story from a mathematician who was actually there when Erdös learned about the problem (Ken Binmore). Erdös was confused about the problem. $\endgroup$ – Michael Greinecker $\begingroup$ I've heard that most of the confusion was caused by faulty or conflicting statements of the problem. $\endgroup$ – rschwieb $\begingroup$ For me, everything becomes crystal clear if the number of doors is changed from 3 to 100.
Then saying that switching doesn't make a difference is akin to saying you have good chances at guessing a secret number between 1 and 100 on your first try. $\endgroup$ – Alex R. $\begingroup$ I was going to mention the Monty Hall problem as well. Other examples in probability are the waiting time paradox and Benford's law for leading digits. For contingency tables in statistics there is Simpson's paradox. Probability has a wealth of counterintuitive examples. $\endgroup$ – Michael R. Chernick It is possible to define a curve which fills every point of a two-dimensional square (or, more generally, an $n$-dimensional hypercube). Such curves are called space-filling curves, or sometimes Peano curves. More precisely, there is a continuous surjection from the interval $I$ onto the square $I\times I$. This is related to the (also counter-intuitive?) result of Cantor, that the cardinality of the number of points in the unit interval is the same as that of the unit square, or indeed any finite-dimensional manifold. $\begingroup$ The particular one pictured here is called a Hilbert Curve. $\endgroup$ – robjohn ♦ Suppose we are tossing a fair coin. Then the expected waiting time for heads-heads is 6 throws, but the expected waiting time for tails-heads is 4 throws. This is very counterintuitive to me because the events heads-heads and tails-heads have the same probability, namely $\tfrac{1}{4}$. The general result is the following: Suppose we are throwing a coin that has probability $p$ for heads and probability $q=1-p$ for tails. Let $V_{\text{HH}}$ be the first time we encounter two heads in a row and $V_{\text{TH}}$ be the first time we encounter tails followed by heads, i.e. $$ V_{\text{HH}}(\omega)=\min\{n\geq 2\mid \omega\in H_{n-1}\cap H_n\},\\ V_{\text{TH}}(\omega)=\min\{n\geq 2\mid \omega\in H_{n-1}^c\cap H_n\}, $$ where $H_n$ is the event that we see heads in the $n$'th throw. Then $$ E[V_{\text{HH}}]=\frac{1+p}{p^2},\\ E[V_{\text{TH}}]=\frac{1}{pq}.
$$ Putting $p=q=\tfrac{1}{2}$ we see that if our coin is a fair coin then $E[V_{\text{HH}}]=6$ and $E[V_{\text{TH}}]=4$. Stefan Hansen $\begingroup$ Presumably the 'intuition' comes from realising that the possible two-toss combinations that don't end the game (namely, TT and HT) both end with tails - one more toss can end the game on TH, but you need at least two tosses to end the game on HH. $\endgroup$ $\begingroup$ I'm not sure I follow. The waiting time for TH would yield the same result as HT but TH does not end with tails. $\endgroup$ – Stefan Hansen $\begingroup$ He was misinterpreting the game as a two-player game where the players wait for "their" pattern to occur before the other which is also a well-known counterintuitive result, also related to intransitive probabilities. $\endgroup$ – Phira $\begingroup$ It's interesting to examine why this is. If you analyze the case of 4 coin flips, for example, you'll observe that the pattern for TT and HT both come up 12 times as expected (counting duplicates that occur in a single outcome). However, HT appears in 11 possible outcomes (it is duplicated once in HTHT), whereas TT appears in only 8 outcomes (it is duplicated twice in HTTT and TTTH, and 3 times in TTTT). $\endgroup$ – Briguy37 Whether something is intuitive or counterintuitive is a very subjective matter. Lots of results are counterintuitive if you don't have the correct intuition. But here's one elementary result of my own that you may find counterintuitive. Suppose $N$ players are to conduct a knockout tournament. Their starting positions, on the leaves of a rooted binary tree, are chosen randomly, all such assignments being equally likely. When two players are at the children of an unoccupied node, they play a game and the winner (ties are not allowed) advances to that node. The winner of the tournament is the player who reaches the root. 
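For a fair coin, these waiting times can also be computed deterministically with the well-known overlap (correlation) formula: the expected number of tosses until a pattern first appears equals the sum of $2^k$ over every $k$ for which the pattern's length-$k$ prefix equals its length-$k$ suffix. A minimal sketch (the function name is mine):

```python
def expected_wait(pattern):
    """Expected number of fair-coin tosses until `pattern` first appears,
    via the overlap (correlation) formula: sum 2^k over every k such that
    the pattern's length-k prefix equals its length-k suffix."""
    L = len(pattern)
    return sum(2 ** k for k in range(1, L + 1)
               if pattern[:k] == pattern[-k:])

print(expected_wait("HH"))   # 6  (overlaps at k=1 and k=2: 2 + 4)
print(expected_wait("TH"))   # 4  (overlap only at k=2)
print(expected_wait("HHH"))  # 14
```

The self-overlap of "HH" at $k=1$ is exactly what makes it slower to appear than "TH", even though both patterns have probability $\tfrac{1}{4}$ on any two given tosses.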
We assume that in any game between two given players $i$ and $j$, the probability that $i$ wins is a given number $p_{ij}$, independent of past history. These probabilities are assumed to satisfy strong stochastic transitivity, which means that if $p_{ij} \ge 1/2$ then $p_{ik} \ge p_{jk}$ for all $k$, i.e. if $i$ wins against $j$ at least half the time, then $i$ does at least as well as $j$ against any other player. Thus the probabilities $p_{ij}$ generate a consistent ordering of the players by ability. Now it seems intuitive that under these conditions, better players have a better chance of winning the tournament. Indeed, it was conjectured that this was the case. However, it is not true, as I proved: "Stronger Players Need Not Win More Knockout Tournaments", Journal of the American Statistical Association 76 (1981) 950-951: http://www.tandfonline.com/doi/abs/10.1080/01621459.1981.10477747 Robert Israel $\begingroup$ Is there an in depth explanation of that available that's not behind a paywall? $\endgroup$ – Dan Is Fiddling By Firelight $\begingroup$ I haven't seen either paper, but the abstract of the Chen and Hwang paper Stronger players win more balanced knockout tournaments says that your counterintuitive result applies only for unbalanced tournaments. Is your counterintuitive result essentially that the strongest player might have to play more games than a weaker player? If so, the result seems much less surprising than it did at first. $\endgroup$ – MJD $\begingroup$ It's more than that. In the particular example I found, one player gets a "bye" into the final round. The most probable way for one of the two weakest players (4 and 5) to win the tournament is to not only get that "bye" but to play one of the 13 identical players labelled 2 (whom both have probability $\epsilon$ of beating) rather than player 1 (whom they have no chance of beating). Player 2 is the only one who has a chance against player 1. 
$\endgroup$ – Robert Israel $\begingroup$ If the worst player (5) gets the bye, player 4 is more likely to beat player 3 in the first round than player 5 would have; in the second round player 4 or 5 would then lose for sure, but player 3 would have had a chance to advance against 2. So 5 getting the bye increases player 1's chance of facing player 2 rather than 3 in the third round, and this is what gives 5 a better chance of winning the tournament than 4. $\endgroup$ $\begingroup$ @DanNeely Come on, it's only $43! $\endgroup$ – I. J. Kennedy Choose a natural number, for example $n=8$. Then pick a base, for example $b=2$, and finally select another natural number called the bump factor, for example $B=1000$. Then construct a sequence of natural numbers as follows: The first term of the sequence is simply $n$ written in expanded base $b$. $$m_{0}=2^{2+1}=8$$ The second term is obtained from the first by bumping the base $b$ by a factor of $B$ and then subtracting $1$ from the result. $$m_{1}=2000^{2000+1}-1=\sum_{k=0}^{2000}1999\cdot2000^{k}>10^{10^3}$$ The third term is obtained from the second by bumping the new base ($2000$) by a factor of $B$ and then subtracting $1$ from the result. Denoting $d=2\cdot 10^{6}$ we have $$m_{2}=1999d^{d}+1999d^{1999}+\cdots+1999d+1998>10^{10^7}$$ Continuing in this fashion we denote $e=2\cdot10^{9}$ and the next term is $$m_{3}=1999e^{e}+1999e^{1999}+\cdots+1999e+1997>10^{10^{10}}.$$ The next term $m_{4}$ has over 24 trillion decimal digits. Intuition tells us that the sequence $(m_{r})$ goes to infinity, and very fast. However, this is not the case. Surprisingly, the sequence will reach $0$ in finitely many steps. That is, there is an $r\in \mathbb{N}$ for which $m_{r}=0$. The sequence we constructed is an example of a Goodstein sequence, and the fact that it terminates is a very particular case of Goodstein's Theorem. This theorem is counterintuitive for two reasons. First because of what the theorem concludes.
Roughly speaking, it states that any sequence of natural numbers of the type constructed above (i.e. a Goodstein sequence) will always terminate. Second, because of what is required to prove it. Goodstein's theorem is a fairly elementary statement about natural numbers (i.e. formulated within the Peano Axioms of Arithmetic) and yet its proof cannot be carried out using only these axioms. It requires infinite ordinals. I also think the Kakeya Needle Problem is worth mentioning (see http://mathworld.wolfram.com/KakeyaNeedleProblem.html). To me it is counter-intuitive that there is no smallest set in which a needle of unit length can be freely rotated. Unless it has to be convex, of course. cimrg.joe $\begingroup$ This is great; I hadn't heard of that problem before. $\endgroup$ – joriki $\begingroup$ I heard about it in a Fractal Geometry course. Funny result. In general, I think fractal geometry takes some getting used to. E.g. the notion of Hausdorff dimension. It is somewhat counterintuitive to me that a set, such as the Cantor Set, can have irrational Hausdorff dimension (here ln 2/ ln 3), even if it is a subset of R, and has Lebesgue measure 0. $\endgroup$ – torbonde $\begingroup$ This is great indeed. I finally can put a name on this problem. Mine was years ago with a (rather strange) car that had to park in an (even stranger) parking lot. @TorBonde: I don't see how it is related to fractal geometry? $\endgroup$ – jmad $\begingroup$ Well, it might not be directly related to fractal geometry, I guess. I heard of it in a fractal geometry course. The thing about it is, that it is possible to rotate the needle in a set of Lebesgue measure 0, but Hausdorff dimension 2. The latter takes fractal geometry to prove. $\endgroup$
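The termination of Goodstein sequences can be watched directly, for tiny seeds only, using the standard variant of the construction above in which the base is bumped by 1 each step rather than by a factor of $B=1000$. A sketch (the helper names are mine):

```python
def bump(n, b):
    """Rewrite n in hereditary base-b notation and replace every
    occurrence of b (including inside the exponents) by b + 1."""
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n:
        digit = n % b
        if digit:
            total += digit * (b + 1) ** bump(exponent, b)
        n //= b
        exponent += 1
    return total

def goodstein(n, base=2):
    """Return the Goodstein sequence of n, bumping the base by 1 each
    step and subtracting 1, until the sequence hits 0."""
    seq = [n]
    while n > 0:
        n = bump(n, base) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3))  # [3, 3, 3, 2, 1, 0]
```

Even in this tame bump-by-one variant, `goodstein(4)` already takes an astronomical number of steps to terminate, so only very small seeds are worth running.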
By definition if $x$ has continued fraction $x=a_0+\frac{1}{a_1+\frac{1}{a_2+\ldots}}$, then for almost all $x$, $\lim_{n\rightarrow\infty} (a_1a_2\cdots a_n)^{1/n}\approx 2.685$ $\begingroup$ This is cool but I'm not sure what intuition it violates. $\endgroup$ $\begingroup$ You do see this for decimal expansions. By the strong law of large numbers, if $x$ has decimal expansion $a_0.a_1 a_2 a_3 \ldots$, then for almost all $x$, $\lim_{n \to \infty} (1/n) (a_1 + \cdots + a_n) = 4.5$. $\endgroup$ – Michael Lugo $\begingroup$ Khinchin's result has nothing to do with base 10. The continued fraction expansion of a number does not depend on what base you are using to write your numbers. $\endgroup$ – Johan $\begingroup$ @Sam: think of it like this: for "random" $x$, the terms of the continued fraction are also "random". Heuristically, the relevant fact is something to the effect that the early terms in the sequence don't substantially affect the distribution of the later terms. $\endgroup$ $\begingroup$ @Sam, my intuition is that the value should not be the same for all numbers (essentially because of the infinite freedom we have in creating numbers), and also that it should be difficult to find two numbers for which the value is not the same (because the function suppresses gross details). So in that sense it is not really surprising that almost all values are the same, although Khinchin's constant is definitely an enigma (e.g. might it be rational?). I think the fact that it is not known to be irrational is even less intuitive than its existence. $\endgroup$ Really interesting question, I have some examples that many people find counterintuitive. The set $\mathbb Q$ of rational numbers has the same cardinality as the set of natural numbers $\mathbb N$, although $\mathbb N$ is strictly contained in $\mathbb Q$. Similarly many people find it to be counterintuitive that even numbers are equal in cardinality to the naturals (i.e.
the sets $\{2n \mid n \in \mathbb N\}$ and $\mathbb N$ have the same cardinality). The set $\mathbb R$ has cardinality strictly greater than the set $\mathbb N$ (and so also than the set $\mathbb Q$) (so there's not just one type of infinity). Another good example of a counterintuitive fact is the Banach-Tarski paradox stating that a ball can be decomposed into a finite number of pieces which can be glued together to build up two balls identical to the first one (I say that this is a paradox because the axiom of choice is clearly true :D). If other examples come to my mind I'll add them later. ineff $\begingroup$ +1 for the Banach-Tarski paradox, it's the first that came to mind when I read the question. I think that it is counter-intuitive because intuition would say that any 3d object has volume. But no well-defined volume can be assigned to these pieces. $\endgroup$ – ypercubeᵀᴹ $\begingroup$ "The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" (According to Wikipedia, this is a quote from someone called Jerry Bona). $\endgroup$ Here are a few counter-intuitive results that have surprised me at one point or another: Impossible Constructions using Straightedge and Compass. Not all regular $n$-gons are constructible with straightedge and compass. Gödel's Incompleteness Theorems. Certain non-trivial arithmetics cannot be both complete and consistent. Exotic spheres. In certain dimensions there are spheres which are homeomorphic but not diffeomorphic to the standard sphere. Kuratowski's Closure-Complement Theorem. The largest number of distinct sets obtainable by repeatedly applying closure and complement to a given starting subset of a topological space is 14. Dehn's Answer to Hilbert's Third Problem. The cube and regular tetrahedron are not scissor-congruent. Löb's or Curry's paradox: If this sentence is true, then Germany borders China.
Logic says this means Germany borders China (or anything you want to put after the then). $\begingroup$ This got a lot more interesting after I thought about it for a minute! It's different from "this sentence is false". $\endgroup$ – Nick Alger $\begingroup$ @NickAlger Not really, instead of just "paradox" it is "If true fact, then paradox." as in: If Germany does not border China, then this sentence is false. $\endgroup$ $\begingroup$ What does it mean for the sentence to be true? Sentences of the form if p then q, are true (or provable) in my naive sense if I can get you from p to q using "logic," however, "if this sentence is true, then Q" is confusing. $\endgroup$ $\begingroup$ @Phira: Your statement is just the same as the original with a Contrapositive added: original S <=> S=>F ; yours S <=> ~F=>~S. And note the result is both "S and F are true", not "no solution" as with a true paradox like "This sentence is false", i.e. S <=> ~S $\endgroup$ – Mark Hurd $\begingroup$ @ricky: in formal logic, "if $P$ then $Q$" is true if either $Q$ is true or $Q$ is false and $P$ is false. (Another way to say this is that it can only be false if $P$ is true but $Q$ is false.) $\endgroup$ The existence of countable connected Hausdorff spaces is (to me) counterintuitive. (Just one example; I could think of others . . . . .) Later edit: A Hausdorff space is a topological space in which, for every pair of points $x$ and $y$, there are open neighborhoods of $x$ and $y$ that do not intersect each other, i.e. $x$ and $y$ can be in a certain sense separated from each other. A connected space is a topological space that cannot be broken into separate components having no proximity to each other. Imagine two disks remote from each other. No sequence of points in one disk can approach a point in the other as a limit. That's a space that is not connected.
Countable means either finite or countably infinite, as opposed to uncountably infinite, and that means one can list all the points in a sequence: $x_1,x_2,x_3,\ldots$. The sequence may be infinite, but each term in the sequence has only finitely many terms before it. So figure out what a countable connected Hausdorff space is based on all that. Michael Hardy $\begingroup$ It would be interesting if you expanded. However, it sounds like quite advanced theory. $\endgroup$ – Pedro Tamaroff ♦ $\begingroup$ @Peter: see here $\endgroup$ – t.b. $\begingroup$ It doesn't require any background beyond a semester of point-set topology. $\endgroup$ – Michael Hardy $\begingroup$ So the one-point space with your favourite topology is counterintuitive? Interesting... $\endgroup$ – jmc $\begingroup$ @jmc : I actually meant a countably infinite connected Hausdorff space. $\endgroup$ I didn't think of this until today, but it's an important thing that I, and many other people, find completely mindboggling. Let's consider properties, like "is red" or "has kidneys" or "has a heart". Now there's a certain sense in which two properties might be the same even though they don't look the same, which is that they might be true of exactly the same entities. For example, it might turn out that everything that has kidneys also has a heart and vice versa, so that even though the two properties have different meanings (kidneys are not the same as hearts), they amount to the same thing in practice. Mathematics is of course full of such properties; consider for example the property ${\mathcal O}_1$ of being expressible in the form $2n+1$ for some integer $n$, and the property ${\mathcal O}_2$ of being expressible in the form $S_{n+1} - S_n$ for some pair of consecutive squares. Many theorems are of this type, that two seemingly different properties are actually the same. So let's try to abstract away the senses of properties, leaving only the classes of things that possess them.
We'll say that there are these entities called sets which are abstractions of properties. Things belong to a set exactly if they possess the property of which the set is the extension: For every property $P(x)$, there is a corresponding set $\{x : P(x)\}$ of exactly those entities $x$ for which $P(x)$ is true. An entity $y$ is a member of a set $\{x : P(x)\}$ if, and only if, $P(y)$ is true. That seems utterly straightforward and utterly unexceptionable, and yet, it is utterly wrong. There are completely mundane properties for which there is no corresponding set of all the entities with the property. What? Who ordered that? $\begingroup$ Could you give some examples? Are you talking about something akin to Russell's paradox? Thanks! $\endgroup$ $\begingroup$ @jake: I am talking about Russell's paradox. Take $P(x) = x\not\in x$, $P(x) = (x\in x\implies 2+2=5)$, or $P(x) = \lnot\exists y: y\in x\wedge x\in y$. None of these properties has an extension. $\endgroup$ There are a number of results of the form "Proposition P fails in dimension $d$" where P holds in lower dimensions, many of which can seem counterintuitive until you understand higher dimensional phenomena. Here's an elementary one, which many people on this site won't find counterintuitive but some might. Consider the question "What is the maximum number of vertices a polyhedron in $\mathbb{R}^d$ can have such that there is a segment joining every pair of vertices which is an edge of the polyhedron?" For $d=2$, the answer is obviously 3, with a triangle. It's not difficult to see that a tetrahedron is optimal for $d=3$. Intuition suggests that the $d$-simplex is optimal based on this. But for $d=4$, in fact, there is no maximum number. There are polyhedra in $\mathbb{R}^4$ with arbitrarily many vertices and an external edge joining each pair of vertices.
If you take any finite collection of points on the moment curve $\{(t,t^2,t^3,t^4)\, | \, t>0\}$, the segment joining any two of the points is a face of the convex hull of the collection. Once you have an intuition for higher dimensional geometry, this is obvious, but it can seem counterintuitive. A more advanced example, that I still find counterintuitive at times, is this: In $\mathbb{R}^d$ for $d=2,3$, given any polyhedron, one can move each of the vertices a small amount to obtain a combinatorially equivalent polyhedron with rational vertices. But in $d=4$ and higher there are polyhedra which can not be realized with rational coordinates. EDIT: I was asked to provide a reference. This is a well-known result in some circles, particularly in computational geometry, so it's covered in a number of locations. Marcel Berger's Geometry Revealed covers both of the above so-called counterintuitive statements, as well as the surprisingly nonobvious case $d=3$, in chapter 8, roughly page 550, and is a pretty easy read. If you don't have access to Springer, the paper Realization spaces of polytopes by Richter-Gebert is the most comprehensive treatment I know of, and probably any book citing this paper is quoting the result. Logan Maingi $\begingroup$ Another example is that the hypervolume of the n-sphere increases until $n=5$, and then decreases. Generalizing the factorial to $\Gamma$ in the hypervolume formula shows that the dimension with the largest sphere isn't even an integer! $\endgroup$ $\begingroup$ I never found this one to be as counterintuitive. Comparing hypervolumes of n-spheres is geometrically more meaningfully thought of as comparing the ratio of their hypervolumes to those of unit hypercubes (via dimensional analysis). But for me, the more natural thing was to compare the ratio of their hypervolumes to that of their circumscribing cubes, which then gives a monotonically decreasing sequence... 
$\endgroup$ – Logan M $\begingroup$ The question then becomes whether the sequence decreases faster than $2^{-n}$, and you can probably convince yourself that the sequence should decrease super-geometrically based on geometric intuition. If you look at it that way, then there's nothing mysterious about the volume formula. Unfortunately, this is how I first considered the problem, and so I never had the opportunity to be surprised by this result. $\endgroup$ $\begingroup$ That is a good point. When I first discovered this I thought it was really weird (maybe because I'm not a very visual thinker). Here is another one: a 2-dimensional random walk returns to the origin almost surely, but in 3 or more dimensions it may not! $\endgroup$ $\begingroup$ I have very little intuition for why random walks with infinite state spaces behave the way(s) they do. The 1d case is easy, but for everything else my only recourse is to actually do the calculation. For me the counterintuitive thing about this is mostly that I have no idea why, geometrically, the 2d random walk should have more in common with the 1d case than the 3d case. What is it that adding a third direction does to make the random walk non-recurrent, conceptually? I don't know; the only proof I know for this fact is just getting one's hands dirty with explicit calculations. $\endgroup$ Another elementary example: Connelly spheres, also known as flexible polyhedra. These are non-convex polyhedra, homeomorphic to a sphere, with triangular faces; the polyhedra can be deformed continuously, while the faces remain rigid. It took about 211 years to find a counterexample to Euler's conjecture that embedded polyhedra are rigid. See e.g. http://www.reocities.com/jshum_1999/polyhedra/introduction.htm $\begingroup$ Aigner and Ziegler's Proofs from the Book has a diagram showing you how to build this thing. I made one from paper and sticky tape in about 15 minutes, and it flexed beautifully. 
$\endgroup$ – TonyK $\begingroup$ But its volume doesn't change! (Connelly's Bellows Theorem) $\endgroup$ – JeffE Although well-known, I feel compelled to note the remarkable equation $$ e^{i\pi} + 1 = 0. $$ That five of mathematics' most well-known quantities are related in such a pleasantly simple way is astonishing and, to the uninitiated, is certainly not intuitive. Of course, once one knows about infinite series, their basic properties and how to define the trigonometric and exponential functions with them, deriving this equation is routine. But, without this knowledge, the above equation seems almost mystical. In fact, this equation is what first piqued my own interest in mathematics. ItsNotObvious $\begingroup$ $\pi$ is not a natural constant and that 1 is weird on the left side, so the more realistic equation is $e^{i \frac{\tau}{2}} = -1$. Not so "deep" any more. $\endgroup$ – isarandi A famous example of a counterintuitive fact in statistics is the James-Stein phenomenon. Suppose $X_1,\ldots,X_m$ are independent normally distributed random variables with expected values $\mu_1,\ldots,\mu_m$. One wishes to estimate $\mu_1,\ldots,\mu_m$ based on observation of $X_1,\ldots,X_m$. If instead of using $(X_1,\ldots,X_m)$ as the estimator of $(\mu_1,\ldots,\mu_m)$, one uses the James-Stein estimator $$ \left(1-\frac{(m-2)\sigma^2}{X_1^2+\cdots+X_m^2}\right)(X_1,\ldots,X_m) $$ (where $\sigma^2$ is the common variance) then the mean square error is smaller, regardless of the value of $(\mu_1,\ldots,\mu_m)$. And the James-Stein estimator is demonstrably not even an admissible estimator, in the decision-theoretic sense. Thus the obvious estimator is inferior to one that is inferior to some admissible estimators. One is "shrinking toward the origin", and it should be apparent that it doesn't matter which point you take to be the origin.
In practice one should take the point toward which one shrinks to be the best prior guess about the value of $(\mu_1,\ldots,\mu_m)$. The reason for the non-admissibility is that sometimes $(m-2)\sigma^2/(X_1^2+\cdots+X_m^2)$ is more than $1$, so that the sign gets reversed. That's too extreme by any standards. A piecewise-defined estimator that shrinks toward the origin but no further than the origin is superior in the mean-squared-error sense. In the '80s and '90s, Morris L. Eaton showed that the fact that this works if $m\ge 3$ but not if $m\le 2$ (apparent from the "$m-2$" in the numerator) is really the same fact as the fact that random walks are recurrent in dimension $\le 2$ and transient in dimension $\ge 3$, which I think was discovered about a hundred years ago.

If I remember correctly, that last fact about random walks was discovered by G. Polya in the 1920s, so a little less than 100 years! – kjetil b halvorsen

I think Smale's paradox (sphere eversion) is pretty counterintuitive. Video: http://www.youtube.com/watch?v=R_w4HYXuo9M Also check out Wikipedia's list of mathematical paradoxes. ("'Paradox' here has the sense of 'unintuitive result', rather than 'apparent contradiction'.")

Dan Brumleve

Intuition is a really subjective and personal matter. A further problem with such a list is that many of the proofs require some use of the axiom of choice. On the other hand, not assuming the axiom of choice can be equally reasonable, and here is a short list of how things might break down completely:

The real numbers can be a countable union of countable sets.
There might be no free ultrafilters at all (on any set).
The rational numbers might have at least two non-isomorphic algebraic closures.
The natural numbers with the discrete topology might not be a Lindelöf space.
Some results in ZFC which are completely unintuitive the first time you hear them:

While being perfectly definable, the set $\mathcal P(\mathbb N)$ can differ greatly between models of ZFC; or an even worse formulation: there are models $M\subseteq N\subseteq V$ such that $N$ has more reals than $M$ and $V$ has more reals than $N$, but the number of real numbers in $M$ and $V$ is the same.
There is a polynomial with integer coefficients which has a rational root if and only if ZFC is inconsistent.
Every model of ZFC is one class forcing from being $L[a]$ where $a$ is a real number; and every model is one set forcing away from being $HOD[A]$ for some set $A$.
The union of countably many disjoint open intervals might have uncountably many boundary points (e.g. the complement of the Cantor set in $[0,1]$).

Both lists are infinitely long, and I could probably ramble about the first list for several days. The point, as I said at first, is that what we take for granted as intuitive can change greatly between two people of different mathematical education, mathematical culture, and usual axiomatic system (which is essential for "results"). One strange result about mathematicians is a direct corollary of the first result in the second list: people are used to thinking that there is only one universe, only one fixed way to handle sets. While it is true that for the working mathematician this is often a reasonable approach, set theorists deal with models of set theory, much like group theorists deal with models of group theory. Somehow everyone is flabbergasted when they are told (for the first time, if not more) that there are many models of ZFC with a different number of sets of natural numbers in each model; but no one falls off their chair when they are told that some fields have more irrational numbers than others...
Asaf Karagila

I think a puzzle at calculus level is the following: given a real number $x$ and a conditionally convergent series, the series can be rearranged so that its sum is $x$.

Maesumi

The fact that one can easily prove the existence of uncountably infinite (as opposed to countably infinite) sets is counterintuitive to me. Not the fact that uncountably infinite sets exist, but the fact that the proof is so simple. I was astonished when I first learned of it. I was in ninth grade. I think it was in a book by Vilenkin that I read the proof. Similarly the fact that one can easily prove that the square root of $2$ is irrational. I hadn't expected that to be so simple. And the mere existence of irrational numbers seems counterintuitive: why shouldn't fractions be enough to fill up all the space between integers?

The fact that for any infinite set $A$ there is a bijection between $A$ and $A \times A$ is very counterintuitive to me...

N. S.

I think the following has (surprisingly) not been pointed out already: http://en.wikipedia.org/wiki/List_of_paradoxes#Mathematics As a general rule, paradoxes (counterintuitive truths) are very important in mathematics and there are many books dedicated to them. 1 and 2 are famous examples. The Monty Hall problem and the Banach-Tarski paradox even have books dedicated to them, and each is the subject of ongoing research. Paradoxes arise when simplification does not work, when usual assumptions do not hold. Of course this will depend on the person thinking about the phenomenon, on her experience. A topologist is well aware of counterexamples in her field, so she would not find them paradoxical anymore. Also, I am not sure the Blue-eyed Islanders Paradox has been mentioned here. It has received much internet attention recently, foremost thanks to Terence Tao; cf. also xkcd.

I should perhaps comment that counterexamples are not necessarily paradoxical, but only often so.
They may be interesting because they are intuitive but hard to prove, or because they are really counterintuitive, like a continuous but nowhere differentiable function. This example also shows that what was once paradoxical is now the basis for fractal thinking, thus something very natural for many analysts. It also shows that paradoxical examples are important for overcoming simplifications that may block deep advances. – plm

The concentration of measure phenomenon on the sphere: if $A\subset\mathcal{S}^{n-1}$ is a measurable set on the sphere with $\lambda(A)=1/2$ and $A_\epsilon$ is an epsilon neighborhood of $A$ on $\mathcal{S}^{n-1}$, then $$\lambda(A_\epsilon)\geq 1-\frac{2}{e^{n\epsilon^2/2}}$$ So say you take $A$ to be a cap on the sphere and fix a small $\epsilon$. As the dimension of the sphere increases, eventually the $\epsilon$ enlargement of $A$ will have almost the entire area of the sphere! Playing with the upper and lower caps and the corresponding enlargements, one sees that area is concentrated around the equator. Imagine you have a lawnmower and you cut the grass moving along the equator. What percentage of the sphere do you mow? Well, in 3 dimensions, not that much. But as you cut the grass on higher and higher dimensional spheres, moving centered along the equator, the surface area covered becomes almost 100% of the entire area of the sphere! This result felt pretty counterintuitive to me the first time I saw it.

Also the closely related result, Dvoretzky's theorem, which states that a random slice through a convex set in high dimensions is almost certain to be approximately circular! math.lsa.umich.edu/~barvinok/total710.pdf

The Weierstrass function. It showed that a function can be continuous everywhere but differentiable nowhere. This was (and still is) counterintuitive.
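The equator/lawnmower claim in the concentration-of-measure answer above is easy to check numerically: sample uniform points on $\mathcal{S}^{n-1}$ by normalising Gaussian vectors and count how many land within $\epsilon$ of the equator. A sketch (the dimensions, $\epsilon$, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def equator_band_fraction(n, eps, samples=4000):
    """Fraction of uniform points on S^{n-1} with |x_1| <= eps,
    i.e. within the thin band the 'lawnmower' cuts around the equator."""
    x = rng.normal(size=(samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # now uniform on the sphere
    return np.mean(np.abs(x[:, 0]) <= eps)

low = equator_band_fraction(3, 0.1)      # ordinary sphere: about 10% mowed
high = equator_band_fraction(1000, 0.1)  # high dimension: nearly everything
print(low, high)
```

For the ordinary sphere ($n=3$) the first coordinate is uniform on $[-1,1]$, so the band covers only about $\epsilon$ of the area, while in dimension 1000 the first coordinate concentrates like $N(0,1/n)$ and the same thin band captures almost all of it.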
Perhaps the Banach–Tarski paradox: given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of non-overlapping pieces (i.e. subsets), which can then be put back together in a different way to yield two identical copies of the original ball. The reassembly process involves only moving the pieces around and rotating them, without changing their shape. http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox

Already quoted. – Giorgio Mossa

... in ineff's answer. – Martin Sleziak

How about the Löwenheim–Skolem theorem? One of its consequences is that the field of real numbers has a countable model.

Bill Cook

Not exactly. The completeness property is second order. The countable model will be a real-closed field, but not complete.
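The James-Stein improvement described earlier in the thread can also be verified by simulation: for $m\ge 3$ the shrinkage estimator beats the obvious estimator in mean squared error. A sketch with an arbitrarily chosen mean vector and known $\sigma^2=1$ (the numbers are illustrative, not special):

```python
import numpy as np

rng = np.random.default_rng(1)
m, sigma2, trials = 10, 1.0, 5000
mu = np.linspace(-2, 2, m)            # arbitrary true means

# Each row is one observation X ~ N(mu, sigma^2 I_m)
X = rng.normal(mu, np.sqrt(sigma2), size=(trials, m))

# James-Stein shrinkage factor, applied row by row
shrink = 1 - (m - 2) * sigma2 / np.sum(X**2, axis=1, keepdims=True)
js = shrink * X

mse_obvious = np.mean(np.sum((X - mu)**2, axis=1))   # risk of the MLE, about m
mse_js = np.mean(np.sum((js - mu)**2, axis=1))       # strictly smaller
print(mse_obvious, mse_js)
```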
Open Access Review

The Reliability of Willingness-to-Pay Responses in the Contingent Valuation Method: A New Single-Sample Approach

Bradley Jorgensen *
La Trobe University, Melbourne, Australia
* Correspondence: Bradley Jorgensen
Academic Editor: Grigorios L. Kyriakopoulos
Special Issue: Environmental Economics and Management
Received: October 24, 2022 | Accepted: January 12, 2023 | Published: January 18, 2023
Recommended citation: Jorgensen B. The Reliability of Willingness-to-Pay Responses in the Contingent Valuation Method: A New Single-Sample Approach. Adv Environ Eng Res 2023; 4(1): 009; doi:10.21926/aeer.2301009.

Abstract

The Contingent Valuation (CV) Method, like other stated preference techniques, seeks to measure economic preferences for public goods. Throughout the development and application of this non-market procedure, the accuracy of the measured preferences has been front-and-centre among practitioners and potential users. The most important issue of debate has been the extent to which the method can reliably measure economic preferences. In this article, a new methodology is described that enables multiple indicators of latent preferences. Multiple-indicator CV (MCV) enables the application of reliability analyses that are well established in psychology and sociology and that represent the foundation of evaluating the measurement of latent variables. Furthermore, with the new MCV approach, the reliability of measurement at the individual level can be assessed in a single administration of the MCV survey, thereby alleviating any need for longitudinal methodologies or for comparison of mean estimates with other valuations of the same ecosystem service or public good, should these be available. Once adequate reliability is established, the multiple-indicator framework supports the estimation of mean values via existing econometric techniques. With greater confidence in the reliability of measured contingent values, the interpretation of validity tests is enhanced.
Keywords: Stated preferences; contingent valuation; reliability; validity; willingness to pay

1. Introduction

Environmental economics has developed the contingent valuation (CV) method to establish the value of ecosystem services. The relevance of the method stems from the absence of markets in which transactions can be observed and used to infer the maximum amount that individuals are willing to pay (WTP) for a change in a public good, or the minimum compensation they are willing to accept (WTA) for it. Transactions involving buyers and sellers represent observed behaviour from which economic preferences – a latent (or unobservable) variable [1] – can be understood. Without a means of estimating preferences, efficiency analyses such as the benefit-cost test cannot be undertaken.

Accurately measuring latent variables is not straightforward because they are unobservable constructs. The accuracy of the values of latent variables that are inferred from observed indicators depends upon the degree to which the measured data coincide with the unobservable construct. One source of influence on indicators is random measurement error [2]. This type of error in the observed data reflects solely chance variation. Random sources of variation serve to decrease the correlation between the latent variable and its indicators. To index the degree of random error in observed indicators, the term reliability is used [3]. Reliability refers to the extent to which an indicator consistently provides the same result under the same conditions. This consistency does not mean that individuals must give the same response on every occasion; rather, it requires that the rank-order of individual responses remains the same from one valuation occasion to the next. Larger amounts of random variance decrease an indicator's reliability and contribute to its inconsistent performance in situations where consistency is expected.
If researchers can reduce random noise in indicators (e.g., by designing unambiguous survey questions; by providing adequate time to respond to a survey; by providing respondents with a comfortable, distraction-free environment when implementing the survey; etc.), the reliability of an indicator should increase and the latent variable the researcher intended to measure (e.g., economic preferences) is more likely to be captured. This article introduces a new approach to the valuation of economic preferences that facilitates the straightforward assessment of measurement reliability. In the following section, a brief history of reliability testing in contingent valuation is provided, with reference to more substantial, earlier reviews of this literature by Jorgensen et al. [4] and Jorgensen [5]. Next, recent theoretical contributions to understanding the reliability of contingent values are discussed with reference to any significant strengths and limitations. Methods of testing the reliability of measured observations are discussed in Section 4, with particular reference to those methods directly applicable when multiple indicators are available. A new valuation approach – multiple-indicator contingent valuation (MCV) – is presented in Section 5, which enables the application of standard reliability assessment procedures because of its use of multiple-indicator measurement models. Readers are introduced to the approach in a step-by-step manner focusing on its implementation in applied settings. Section 6 briefly reviews the steps required to estimate the measurement model and calculate the required reliability coefficients. Conclusions are provided in the final section of the article.

2. Reliability Assessment of Willingness-to-Pay (WTP) Responses in Contingent Valuation

Previous research on the reliability of measurements of WTP suffers from significant methodological flaws that make its conclusions uninformative about random error [4,6]. These errors continue to appear in recent research on the temporal stability of WTP responses [7,8,9]. The most frequently used means of estimating reliability in this literature has been the test-retest method, in which WTP is measured at two time points from the same sample of individuals. Consistency here is not limited to individuals providing the same response on two or more separate occasions. Rather, it refers to the consistency of the relative rank-order of individual responses on separate occasions. Responses on one occasion can demonstrate perfect reliability if their correlation with responses on a second occasion equals one. The correlation between responses obtained at the two testing occasions constitutes the reliability coefficient. The test-retest correlation indexes the degree of reliability given that random sources of error variance are uncorrelated with systematic sources of variance. Correlations close to zero indicate low reliability and values close to ±1.0 reflect high reliability. In the test-retest studies that are methodologically sound, most reliability coefficients are in the order of 0.50 to 0.70, suggesting moderate levels of reliability [4]. Since the review of reliability studies provided by Jorgensen et al. [4], some test-retest studies have continued to show modest levels of reliability [10]. Higher reliability coefficients, ranging from 0.51 to 0.94, have been reported in several studies with relatively small samples of between 47 and 97 respondents [11,12,13]. All of these examples were conducted in health contexts, valuing goods such as WTP to avoid 16 different health states [13], WTP to obtain a vaccine [12], and WTP for perfect health [11].
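The test-retest logic described above can be made concrete with a small simulation: if observed WTP is a stable latent amount plus occasion-specific random noise, the correlation between the two occasions recovers the ratio of true-score variance to total variance. A sketch (the variance figures are arbitrary, chosen so that the designed reliability is 0.70):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
true_var, error_var = 7.0, 3.0   # designed reliability = 7 / (7 + 3) = 0.70

# Stable latent preference, plus independent noise at each occasion
latent_wtp = rng.normal(50, np.sqrt(true_var), n)
occasion1 = latent_wtp + rng.normal(0, np.sqrt(error_var), n)
occasion2 = latent_wtp + rng.normal(0, np.sqrt(error_var), n)

# The test-retest reliability coefficient is the Pearson correlation
r = np.corrcoef(occasion1, occasion2)[0, 1]
print(round(r, 2))   # close to 0.70
```

Note that the simulation builds in the very assumption the article questions: that the latent preference is stable between occasions. If it is not, the correlation confounds unreliability with genuine preference change.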
Measuring reliable preferences is a challenging task given that people likely have little experience in assigning a monetary value to changes in public goods, and in a manner that is consistent with the notion of economic preference [14]. Several researchers have noted the lability of these newly created preferences [15,16] and contend that they are constructed in the very process of measurement [17]. Others have questioned whether the potential instability in WTP responses is the product of survey respondents having no attitude or opinion toward paying a certain amount for a public good [18,19]. If the assumption that economic preferences for public goods are stable over time cannot be sustained, one must question the utility of test-retest reliability, with its reliance on the temporal stability of a concept. Venkatachalam [20] reported that a significant number of CV studies have been carried out without any concern for the reliability of the results. But if preference indicators are not reliable measures, they have limited utility for hypothesis testing in validity trials [4]. This is because noise factors conceal causal effects on the dependent variable [21]. Demonstrating that individual WTP responses are reliable would provide some evidence that they are at least meaningful to respondents [19].

3. Applying the Concept of Reliability to Assess the Accuracy of Nonmarket Value Estimates

Mitchell and Carson [22] argued that researchers are required to "demonstrate that the individual willingness-to-pay amounts are not simply random responses" (p. 213). To this end, they suggest that researchers show that their measure of WTP has a coefficient of determination of at least 0.15 when regressed on a small set of key independent variables. This exceptionally liberal cut-off is considerably lower than the conventional criterion of 0.70 in other areas of measurement in the social sciences.
Researchers may seek to achieve the highest coefficient of determination possible but, as Mitchell and Carson state, "High $R^2$s for equations based on the kitchen sink approach, in which explanatory power is maximized by the indiscriminate use of as many variables as possible, are not particularly meaningful especially in smaller samples…" (p. 213). However, with this statement the authors recommend limiting the reliability analysis to maintain some semblance of validity. In the end, the suggestion restricts researchers from understanding either the reliability or the validity of their measurements. Bishop and Boyle [23] note that assessments to date of the "accuracy" (i.e., the reliability and validity) of non-market values have been insufficient. They suggest that changing the way in which reliability is applied to non-market valuation research can lead to concomitant improvements in the quality of research. This seems a reasonable goal since it is apparent that understandings of reliability and its application to value measurement need to improve. However, the framework proposed by Bishop and Boyle focuses on the reliability of the mean value estimate calculated from the distribution of WTP/WTA values generated from a single valuation exercise. As a result, the assessment of reliability of the mean presupposes the existence of a distribution of mean values derived from several valuations of the exact same ecosystem service. The coefficient of reliability in this approach is the standard error of the grand mean, and "the larger the standard error, the less reliable the estimate" (p. 562). Bishop and Boyle [23] illustrate their conceptualisation of reliability in the following manner: "An archer is shooting arrows at a target. Reliability of the archer depends on whether the arrows are tightly grouped or scattered about on the target."
(p. 560) In this illustration, the archer represents different valuation exercises, with the underlying logic being that WTP/WTA estimates from different valuations of the same public good should hit the same mark on the target. In standard approaches to reliability, based on classical test theory, focus rests with the unit of analysis, that is, the entity from which the measured response was obtained. In contingent valuation, that unit of analysis is the individual and not the sample from which the average WTP/WTA value is derived. Furthermore, there is no requirement according to classical test theory to equate larger variance of the distribution of individual- or group-level responses with lower reliability. Rather, responses should be relatively free of random error variance. Measurements at the individual or group level can display significant variance due to systematic errors (i.e., measurement bias). Questions of systematic sources of variance are issues of validity and not reliability. The approach to reliability proposed by Bishop and Boyle [23] requires access to several mean WTP/WTA estimates of the same ecosystem service or public good. Moreover, even studies focusing on the same good are likely to differ on a myriad of contextual factors, including unmeasured characteristics of the participants, the time the valuations were undertaken, and characteristics of the ecosystem providing the service being valued. This places significant limitations on the application of the approach, since finding a set of comparable studies is likely to be difficult to achieve. The above limitations notwithstanding, the biggest problem with the approach to reliability offered by Bishop and Boyle [23] is that it does not address the reliability of measurement, that is, the degree of random error variance in the measured WTP/WTA responses of the individuals that provided them. Analysis at the mean level does not provide insights into the reliability of individual responses.
Furthermore, as will become evident later in this article, it is unnecessary to dispense with classical notions of reliability for the sake of approaches that retreat from the unit of analysis at which the responses are generated (i.e., the individual level). Jorgensen et al. [4] proposed that, by employing multiple-indicator measurement models of WTP/WTA, reliability could be directly estimated for both the individual indicators and their combination. The authors recognized that the development of indicators that are responses to a bid offer that varies over subsamples of respondents (e.g., dichotomous choice WTP) is likely to be challenging. However, they suggested several avenues toward this end that might prove fruitful but, at the time of publication, required significant foundational research. In the following section, a new approach to valuation that facilitates assessments of measurement accuracy is introduced. Like Jorgensen et al. [4], the method employs several indicators to measure the latent preference variable, but the indicators are simply standard WTP/WTA responses to questions that vary by bid amount. That is, significant development work to determine matters such as the best question-wording and bid vector design has already been conducted. This new approach is detailed in the remainder of the article.

4. Multiple-indicator Options for Reliability Assessment

Psychology has developed a general approach for measuring unobservable variables that relies on asking several survey questions designed to measure the same latent variable [24,25]. The benefit of multiple indicators exists because researchers have access to more than one indicator of the latent variable. Oftentimes, these indicators are combined to create a single measure with the idea that the random errors contained in each individual indicator will offset one another [26].
However, apart from any benefit that may or may not arise from scale development [27], the central benefit for current purposes is that multiple indicators enable reliability assessment in a straightforward manner. That is, by using several indicators of latent preferences, no assumption of temporal stability is required and the reliability coefficient can be calculated from a single sample. No re-test sample is necessary, and stability over time of the latent concept is not assumed. Multiple-indicator reliability assessment is based on the notion of internal consistency rather than stability over time. Reliability, therefore, is frequently assessed by examining the degree of correlation among multiple indicators, such that they are consistent with one another and measure the same thing [28]. Larger correlations between items suggest better reliability because random error in measurements can attenuate correlations under certain conditions [29] or compromise the interpretation of hypothesis tests in other ways [30]. Cronbach's alpha (α) [31] is the most frequently used reliability coefficient for multiple-indicator data, and the Kuder-Richardson formulae (KR-20 and KR-21) are often used when the indicators are dichotomous [32]. However, there are a variety of ways to assess the reliability of multiple indicators and much debate about their relative strengths and weaknesses [33,34,35]. But there are practical strategies available to suit most kinds of data [36]. Interested readers are referred to this literature and to Jorgensen et al. [4] for an empirical example, as the remainder of this discussion will focus on describing the procedures required to undertake WTP measurement using MCV.

5. Multiple-indicator Contingent Valuation (MCV)

5.1 The Valuation Questions

CV surveys have employed a variety of WTP and willingness-to-accept (WTA) response formats: open-ended, dichotomous choice, payment card, etc.
MCV is an extension of the discrete-response, take-it-or-leave-it measurement strategy developed by [37]. Survey respondents are randomly assigned a value from a vector of prices (or bids) and then asked if they are willing to pay the price to gain or avoid the proposed change in the public good. MCV also employs a price vector but does not involve randomly assigning one value to each participant.1 Rather, all prices are presented to each respondent as separate WTP questions, and the order in which these take-it-or-leave-it WTP questions are presented to participants is randomised. By randomly ordering the presentation of the questions for each individual, question-order effects such as assimilation and contrast effects are controlled [41]. The number of WTP questions needed to support estimation of a reliability coefficient depends upon the assessment approach taken. Cronbach's alpha, for example, is a simple model that assumes uncorrelated errors among indicators [42]. Alpha is based on the average correlation between indicators such that, in principle, it can be calculated with just two measures, although the statistic would be based on just one correlation coefficient in that instance. More general confirmatory factor analysis measurement models require four or more indicators to support an assessment of measurement reliability concerning a single latent variable [43]. This results in an over-identified measurement model for which all the parameters can be estimated.2 An over-identified model enables an analysis of how well each indicator measures latent preferences and therefore of the reliability of the WTP measures [44]. While at least four WTP indicators are required for the measurement model to be over-identified, more indicators may be necessary to include in an MCV survey. In the take-it-or-leave-it approach, it is likely that responses to the lowest and highest bids will not be normally distributed (see, for example, Cooper and Loomis [39]).
While measurement models can be estimated with non-normal data, researchers will need to screen the WTP responses for extreme departures from normality. In situations where the standard deviations of WTP indicators reflect little variation in responses, researchers may need to restrict the reliability assessment to those indicators that meet whatever statistical assumptions their analysis requires. However, modern structural equation modelling software is capable of dealing with many types of data, including dichotomous distributions, non-normal ordinal distributions, and non-normal latent variables [45].

5.2 Specifying the Intended Behaviour

Clearly specifying the intended behaviour in the valuation question is an important design consideration to reduce any perceived ambiguity and increase the likelihood that the intention will correspond with future behaviour [46,47]. Presseau et al. recommend specifying the required behaviour using a framework encompassing five domains: Action, Actor, Context, Target, Time (AACTT). For example, the action relevant to contingent valuation is paying, voting, or accepting compensation. The actors applicable to a WTP behavioural intention are those persons who perform the action of paying, voting, or accepting compensation. Actors in CV surveys have frequently been individual respondents and households from a relevant population. The target in a valuation question is the focus of the behaviour and the object of valuation (i.e., the change in the public good). Context refers to the physical, emotional, or social conditions accompanying performance of the behaviour. In CV, the physical conditions and, to some extent, the social and emotional conditions too are likely linked to the contextual properties of the payment vehicle (e.g., entry charges to national parks, license fees for access to a fishing area, rates to local government, etc.). Finally, the time domain refers to when the behaviour will be performed.
This might be the next time the individual visits a national park or pays for a fishing license. By clearly specifying the MCV behaviour in this way, ambiguity is reduced by making explicit who (actor) does what (action), when (time) and where (context), for what objective (target).

5.3 Selecting a Response Format

The response options for each WTP/WTA question can be either dichotomous (YES/NO) or polychotomous. Polychotomous response scales to WTP questions appear in the CV literature in studies of response uncertainty [38,48] and commitment [49]. Anchors on the ordinal, polychotomous scale can vary depending upon how the WTP question is framed. In general, ordinal scales are preferred to dichotomous scales because the former indicate not only the valence of the response but its intensity as well. Several WTP questions and response options are possible from a multi-indicator perspective, and some of these are presented in Table 1. All the examples in Table 1 include a price which is varied over subsamples. Moreover, all examples follow the AACTT structure discussed earlier, and the last question follows the referendum (voting) format while the remainder target payment behaviour. All these examples are amenable to an MCV approach because they comprise the same action, actor, context, target, and time. The different wording of each question necessitates different anchors (or labels) to define the response options. While the anchors are somewhat intuitive given the wording of the questions, there is also a strong methodological literature to guide researchers if needed (e.g., [50,51,52]). One way of selecting anchors is to pre-test an open-ended version of the elicitation question on a pilot sample, with frequently cited responses representing potential anchors in the MCV survey. Question-wording that elicits relatively high rates of "don't know" responses warrants further investigation and revision if it proves to be ambiguous for some reason.
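The survey design described in Section 5.1, in which every respondent answers a take-it-or-leave-it question for every bid in an independently randomised order, can be sketched as follows; the bid vector and question wording here are placeholders rather than recommendations:

```python
import random

BIDS = [5, 10, 20, 40, 80]   # placeholder price vector (at least four bids)

def mcv_questionnaire(rng):
    """One respondent's WTP questions: every bid appears once, in random order."""
    order = BIDS[:]           # copy so the master vector is untouched
    rng.shuffle(order)        # per-respondent randomisation controls
                              # assimilation/contrast order effects
    return [{"bid": bid,
             "text": (f"Would you be willing to pay ${bid} more for your "
                      f"fishing license next month if the money was used to "
                      f"eradicate Rainbow Smelt from Emerald Lake?")}
            for bid in order]

rng = random.Random(7)
questions = mcv_questionnaire(rng)
print([q["bid"] for q in questions])   # same bids, respondent-specific order
```

Because every respondent answers every bid, each bid question yields a column of responses that can serve as one indicator in the measurement model.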
While most of the example scales in Table 1 have a label for each response option, some authors recommend fewer labels [53], but there are advantages and disadvantages to both formats [52]. Pretesting alternative response formats can highlight any uncertainties for respondents. Perhaps more important than the labelling of response options is avoiding the use of "off-scale" options, for example, labelling the midpoint of an agree-disagree response scale as "don't know" or "unsure". These responses conflate a respondent's level of certainty in paying with their level of support for the change in the public good. Similarly, a midpoint of "no answer" might indicate that the respondent refuses to answer the question for some reason rather than signalling a response of mid-level agreement. In these cases, practitioners would be better served either leaving the midpoint unlabelled or labelling it as "neither agree nor disagree" [54]. Researchers may choose to vary the wording of the elicitation question within the MCV survey (as evident in Examples 1 to 4). These four questions are all examples of payment intentions and share the same AACTT components. While the price and wording can vary across elicitation questions, the response scale and anchors need to be consistent if researchers wish to estimate mean or median WTP values. But for the purpose of reliability assessment, there is no reason why the response scale labels could not also differ across elicitation questions. The need for consistency when calculating mean and median WTP will become evident later in the article, when the discussion shifts from reliability assessment to value estimation.
Table 1 WTP questions and response scales.
1. "I intend to pay $x more for my fishing licence next month to have Rainbow Smelt eradicated from Emerald Lake." Anchors: Somewhat disagree; Somewhat agree.
2. "Would you be willing to pay $x more for your fishing license next month if the money was used to eradicate Rainbow Smelt from Emerald Lake?" Anchors: Definitely no; Probably no; Maybe yes, maybe no; Probably yes; Definitely yes.
3. "How likely is it that you would pay an additional $x next month for your fishing license if the money was used to eradicate Rainbow Smelt from Emerald Lake?" Anchors: Very unlikely; Fairly unlikely; Neither likely nor unlikely; Fairly likely.
4. "How willing are you to pay an additional $x next month for your fishing licence if the money was used to eradicate Rainbow Smelt from Emerald Lake?" Anchors: Very unwilling; Fairly unwilling; Neither willing nor unwilling; Fairly willing; Very willing.
5. "If I was asked to vote in a referendum on a proposal to increase the cost of a fishing licence next month by an additional $x so that the money raised could be used to eradicate Rainbow Smelt from Emerald Lake, I would …" Anchors: Definitely not support it; Probably not support it; Might or might not support it; Probably support it; Definitely support it; Reject it; Accept it.
In sum, researchers employing MCV might use just one type of WTP question and response scale and repeat it with only the price bid changing. Alternatively, different types of WTP questions might be used while varying the price offered. However researchers decide to structure the valuation question and whatever response scales they choose, there should be as many questions as there are bids in the price vector, and at least four bids, questions, and responses.
6. Model-based Reliability Estimation
As noted earlier in the discussion, there are several methods to estimate the reliability of multiple indicators, and some of these depend upon the properties of the response data (e.g., its metric, distribution, etc.). To illustrate, a confirmatory factor analysis model-based approach will be described (see also [4]).
Model-based approaches to reliability are superior to simpler approaches such as Cronbach's α because the relationship between the theoretical latent variable and its indicators is made explicit [33]. Restrictive assumptions, such as uncorrelated errors among indicators, can be formally examined and subjected to hypothesis testing. In this sense, reliability assessment is a by-product of a specific measurement model rather than a "stand-alone" enterprise [34].
Figure 1 Unidimensional measurement model with congeneric indicators.
A simple measurement model is presented in Figure 1 that shows a single latent preference variable ($\xi_{j}$) and six endogenous observed WTP indicators ($x_{i}$). Slope parameters ($\lambda_{ij}$) link the observed variables to the latent variable and are represented in the lambda ($\Lambda$) matrix. The linear regression coefficients also include a vector of intercepts ($\tau_{i}$). The residual variances ($\delta_{i}$) of each indicator are contained in the theta matrix ($\Theta$) and may comprise both random and systematic error variance not explained by the latent variable. Shared systematic residual variance accounts for correlations between the errors of two or more indicators, which are represented as off-diagonal elements ($\theta_{ij}$) in the theta matrix. Therefore, observed indicators are represented as a linear function of a latent variable, an intercept, and a stochastic error term: \[ x_{i}=\tau_{i}+\lambda_{ij} \xi_{j}+\delta_{i} \] The parameters of the measurement model can be estimated using a variety of commercially available software. The results provide all the information necessary for calculating several reliability coefficients. The reliability of individual WTP indicators is given by the coefficient of determination from the regressions on the latent variable [44]: \[ R^{2}_{x_{i}}=\lambda^{2}_{ij} \phi_{j} /\left(\lambda^{2}_{ij} \phi_{j}+\delta_{i}\right) \] where $\phi_{j}$ is the variance of the latent variable $\xi_{j}$.
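The indicator-reliability formula above is simple enough to compute directly from model output; the sketch below restates it in code. The function name and the example values (a standardised solution with latent variance 1, loading 0.8, residual variance 0.36) are illustrative, not taken from the article.

```python
def indicator_reliability(lam, phi, delta):
    """R^2 of one indicator: true-score variance over total variance,
    i.e. lam^2 * phi / (lam^2 * phi + delta)."""
    true_var = lam ** 2 * phi
    return true_var / (true_var + delta)

# Standardised example: loading 0.8, latent variance 1.0, residual 0.36
# gives a reliability of approximately 0.64.
r2 = indicator_reliability(0.8, 1.0, 0.36)
```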
To obtain a reliability coefficient for the combination of WTP indicators in Figure 1, researchers can calculate a reliability coefficient that has its origins in the work of [55,56,57]. The coefficient is variously referred to as construct reliability, composite reliability, or coefficient omega [58] and is calculated with the following formula: \[ \rho_{c}=\left(\sum \lambda_{ij}\right)^{2} /\left[\left(\sum \lambda_{ij}\right)^{2}+\sum \delta_{j}\right] \] In the formula above, $\left(\sum \lambda_{ij}\right)^{2}$ is the true-score variance and $\sum \delta_{j}$ is the error variance. All the information necessary to calculate $\rho_{c}$ is generated from the estimation of the measurement model. If this model-based coefficient shows that the indicators are sufficiently reliable (i.e., $\rho_{c}>0.70$ by convention), then researchers can proceed to the estimation of mean WTP. Reliability assessed within an MCV framework means that researchers know whether their measurement data are reliable, rather than having to generalise from test-retest studies undertaken in sometimes completely different contexts.
6.1 Estimating the Measurement Model
The model described in Figure 1 was estimated using the data simulation function in Mplus 8.8 [59]. The range of reliability estimates available from the environmental economics literature was used to set the indicator loadings and error variances. The review by Jorgensen et al. [4] provides reliability coefficients ranging from 0.30 to 0.95. The following values were selected for the purpose of illustrating how reliability coefficients can be estimated from a measurement model like the one in Figure 1: 0.30, 0.40, 0.59, 0.71, 0.79, and 0.95. The model estimates are shown in Figure 2. Several goodness-of-fit indices are provided upon which to assess the adequacy of the model.
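To make the composite-reliability formula concrete, the sketch below applies it to standardised loadings implied by the illustrative R² values just listed (taking λ = √R² and δ = 1 − R²). Because these are reconstructed quantities rather than the actual Mplus estimates, the result only approximates the coefficient reported in the next section.

```python
def composite_reliability(loadings, resid_vars):
    """Coefficient omega: (sum of loadings)^2 divided by itself plus the
    summed residual variances."""
    true_score_var = sum(loadings) ** 2
    return true_score_var / (true_score_var + sum(resid_vars))

r2 = [0.30, 0.40, 0.59, 0.71, 0.79, 0.95]   # illustrative indicator reliabilities
loadings = [r ** 0.5 for r in r2]            # standardised loadings (assumed)
resid_vars = [1.0 - r for r in r2]           # residual variances (assumed)
rho_c = composite_reliability(loadings, resid_vars)   # close to 0.90
```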
For example, the model chi-square (degrees of freedom) is 7.97 (8), suggesting that the variance-covariance matrix estimated from the model is a close fit to the actual sample variance-covariance matrix. The reliability coefficient for each indicator is its R² value; these were set to values consistent with Jorgensen et al. [4]. These values reflect the standardised loadings ($\lambda_{ij}$) and error variances ($\delta_{j}$) of their respective indicators. The composite reliability ($\rho_{c}$) can be calculated from the loadings and error variances using the equation from the previous section. This equation produced a reliability coefficient of 0.90. This $\rho_{c}$ estimate is marginally larger than Cronbach's alpha (α = 0.89) calculated with SPSS 28 [60] from the correlation matrix simulated from the parameters of the measurement model. Part of the output provided in the estimation of alpha is the identification of indicators responsible for low values. In this illustration, the output suggested that the removal of WTP$198 would serve to increase alpha by a trivial 0.01 to a value of 0.90. That is, by two different procedures, the reliability estimates, if not identical, are extremely close. As the simulated modelling has illustrated, reliability coefficients can be easily calculated for individual indicators and for a set of indicators. The latent structure of the WTP responses can also be subjected to hypothesis tests to identify the number of latent preference variables required to explain observed responses. This is done by estimating a range of models varying in the number of latent variables posited and the pattern of indicator loadings on those latent variables (see, for example, Jorgensen and Stedman [61]). As noted by Jorgensen et al. ([4], p. 50), the model-based approach to reliability (and validity): "…requires CV practitioners to think more about the constructs that they wish to measure.
Specifying and testing particular models informs questions of reliability and validity at the same time."
Figure 2 Results of the simulated model estimation.
While the parameter estimates in Figure 2 were obtained using simulated data, interested readers can generate real-world data by simply following the procedures and examples presented in earlier sections of this article.
7. Estimating Mean WTP
7.1 Random Selection of WTP Responses
MCV data can be re-configured to be identical to standard take-it-or-leave-it data, which are easily used to estimate mean and median WTP values. To establish a take-it-or-leave-it basis for estimating mean WTP, a randomly assigned bid and associated WTP response are required for every case in the sample. If the researcher has a vector of prices that contains five bids, for example, and there are five corresponding WTP questions randomised for each individual, then only one bid and response from each participant is required to derive a mean WTP value (see Figure 3). To create a subsample in which all participants have just one bid and one WTP response, researchers can randomly select an individual's WTP response from the five questions he or she was presented with during surveying. The process of randomly selecting WTP responses from each individual needs to be consistent with the size of the subsamples for which the bids were varied. In standard CV, it is usually the case that the same number of participants receives each bid from the price vector. This process leads to a data set in which each participant is associated with the response to just one of the five WTP questions they were initially asked. Across the sample, an equal number of participants is associated with each bid, since the bid (and not the WTP response itself) was the focus of the random selection process. The result is a dataset that has the same form as might be produced from a standard take-it-or-leave-it CV survey.
Figure 3 Data transformation process.
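The Figure 3 transformation can be sketched in a few lines, assuming the responses sit in a respondents-by-bids array: one (bid, response) pair is kept per respondent while the number of respondents per bid stays equal. The function name and the balanced-assignment strategy are illustrative, not prescribed by the article.

```python
import numpy as np

def to_take_it_or_leave_it(responses, bids, rng):
    """Keep one (bid, response) pair per respondent, balanced across bids.

    responses: (n_respondents, n_bids) array, one answer per bid question.
    bids:      1-D array, the price vector.
    Returns the assigned bid and the matching response for each respondent.
    """
    n, k = responses.shape
    assert n % k == 0, "equal subsample sizes need n divisible by the number of bids"
    # Assign each bid to exactly n/k randomly chosen respondents.
    assigned = rng.permutation(np.repeat(np.arange(k), n // k))
    return bids[assigned], responses[np.arange(n), assigned]
```

Each run of this selection yields a dataset with the same structure as a standard take-it-or-leave-it survey.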
For the estimation of mean WTP from dichotomous response data, the regression techniques are well documented and researched [62,63]. Researchers who have used polychotomous response scaling have also adopted a variety of regression approaches depending upon their objectives [38,48,49,64]. However, among these studies, recoding the polychotomous categories into a dichotomous response variable frequently occurs. As noted previously in the discussion, this strategy, while preserving the valence of the responses, loses information about the intensity of the response. Therefore, instead of rescaling the WTP variable, researchers might follow Obeng and Aguilar [65], who use ordered logistic regression to estimate mean WTP from their polychotomous responses. However, adopting this approach requires that the data satisfy the assumption of parallel regressions across the levels of the polychotomous WTP variable [66].
7.2 Obtaining a Distribution of WTP Sample Means
MCV offers researchers not only a single-sample way of estimating the reliability of WTP measurements; it also enables the estimation of a probability distribution of the WTP mean simply by repeatedly resampling from the data. This latter benefit arises from the randomised, multiple-indicator innovation that is characteristic of MCV. Recall from Figure 3 that one WTP response was randomly selected (within price levels) from a possible six responses. Repeating this process, obtaining different responses on each occasion, creates a distribution of WTP means calculated from a single MCV dataset. If data from six WTP questions were elicited from 100 respondents, the total number of combinations and permutations approaches one million. Given a sufficient number of observations and participants, the distribution of WTP means can provide a standard error of the grand mean, thereby providing researchers with an estimate of the uncertainty associated with this WTP estimate.
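The resampling idea above can be sketched as follows: the within-bid random selection is repeated many times, a single-bid mean-WTP estimator is applied to each replicate, and the spread of the resulting means is summarised. The estimator is left as a caller-supplied placeholder, since the article allows several regression approaches; all names are illustrative, and the sketch assumes the respondent count is divisible by the number of bids.

```python
import numpy as np

def wtp_mean_distribution(responses, bids, estimator, n_reps=1000, seed=0):
    """Grand mean and spread of mean-WTP estimates over repeated selections.

    responses: (n_respondents, n_bids) array of answers, one per bid question.
    estimator: callable(assigned_bids, selected_responses) -> mean WTP.
    """
    n, k = responses.shape
    rng = np.random.default_rng(seed)
    means = np.empty(n_reps)
    for rep in range(n_reps):
        # Balanced random assignment of one bid (and its response) per respondent.
        assigned = rng.permutation(np.repeat(np.arange(k), n // k))
        means[rep] = estimator(bids[assigned], responses[np.arange(n), assigned])
    return means.mean(), means.std(ddof=1)   # grand mean and spread of replicate means
```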
8. Reliability and Validity
It is crucial to establish the reliability of a variable prior to undertaking validity tests because reliability is a necessary but not sufficient condition for validity [3]. A variable that is not reliable cannot be valid. Failed validity tests can therefore be due to unreliability rather than to any theoretical explanation underpinning the validity test [4,67]. MCV can establish reliability using a small set of indicators and statistical models that can accommodate different types of data, including non-normal distributions. Given an adequate reliability coefficient, researchers might proceed to assessing validity prior to the estimation of mean and median WTP values. On the occasions that those validity tests fail, researchers will have confidence in discounting poor measurement reliability as an explanation. The nature of validity tests in CV will ultimately depend upon how firmly researchers are committed to the logical requirements of rationality as a validity criterion. The challenge of determining trade-off rates using hypothetical markets is considerable, and the reliable and valid measurement of well-defined and continuous preferences is essential. However, rational choice theory is not a necessary basis of validity in MCV. Rather, practitioners are free to scrutinise their data against whatever theory they choose. The reliable and valid measurement of behavioural intentions (e.g., WTP) supports accurate predictions of actual behaviour, but not at the cost of the theoretical axioms that describe economic preferences. When WTP is viewed as a behavioural intention, factors that contribute to its reliability (e.g., protest responses) but are dismissed from a neoclassical economic perspective (because they are assumed to arise from non-compensatory preferences) are, nonetheless, completely valid motivations of actual behaviour [68,69].
Furthermore, there is a well-established literature identifying the factors and conditions that influence the likelihood that intentions will translate into actual performance of the behaviour [46,70,71,72]. Researchers might ask individuals if they are willing to pay a given amount of money for a fishing license next month if the funds raised were used to eradicate rainbow smelt from Emerald Lake, with the validity criterion being actual payment under the conditions explicit in the question (i.e., action, actor, context, target, and time). This behavioural criterion changes researchers' determinations of the valid sources of variance in WTP responses, such that some types of bias once considered reliable but not valid influences in CV may be reconsidered as both reliable and valid. Contingent valuation research, and stated preference research in general, has enthusiastically pursued the identification and avoidance of biases in preference measurement while affording comparatively little effort to establishing the extent to which the method can provide consistent individual responses. Past research on reliability has been fraught with methodological and conceptual limitations that have rendered its interpretation difficult. Furthermore, the demands of drawing a re-test sample (of the same individuals in the initial sample) in tests of the reliability of individual WTP responses limit its proliferation in the literature, in addition to the method's assumption that labile preferences are stable over time. A new approach to measuring preferences – Multiple-indicator Contingent Valuation (MCV) – enables the assessment of reliability in several ways and with the same sample from which benefit estimates are derived. Assessment of reliability from a single sample alleviates the need to undertake more resource-intensive methodologies such as test-retest reliability.
This innovation will support an increase in reliability studies, since every application of MCV can produce at least one estimate of measurement reliability. With this renewed interest in reliability assessment, the field can substantially improve its understanding of the conditions that influence the reliability of preference indicators, because independent variables can be manipulated in the same research design that generates the reliability coefficient. Whatever way the field of non-market valuation progresses, it is imperative that evidence of the reliability of WTP responses is properly generated and that these results are convincingly scrutinised so that policymakers can comprehend their meaning and implications in the contexts where they are to be applied. The author did all the research work of this study. The author has declared that no competing interests exist. Bollen KA. Latent variables in psychology and the social sciences. Annu Rev Psychol. 2002; 53: 605-634. [CrossRef] Costner HL. Theory, deduction, and rules of correspondence. Am J Sociol. 1969; 75: 245-263. [CrossRef] Meyer P. Understanding measurement: Reliability. New York: Oxford University Press; 2010. [CrossRef] Jorgensen BS, Syme GJ, Smith LM, Bishop BJ. Random error in willingness to pay measurement: A multiple indicators, latent variable approach to the reliability of contingent values. J Econ Psychol. 2004; 25: 41-59. [CrossRef] Jorgensen BS. The determinants of assigned value: A social psychological approach to welfare measurement. Perth: Curtin University; 1996. Jorgensen BS. Perceived justice and the economic valuation of the environment: A role for fair decision-making procedures. In: Towards an environment research agenda: A second selection of papers. London: Palgrave Macmillan; 2003. pp. 146-161. [CrossRef] He J, Zhang B. Current air pollution and willingness to pay for better air quality: Revisiting the temporal reliability of the contingent valuation method. Environ Resour Econ.
2021; 79: 135-168. [CrossRef] Perni Á, Barreiro-Hurlé J, Martínez-Paz JM. Contingent valuation estimates for environmental goods: Validity and reliability. Ecol Econ. 2021; 189: 107144. [CrossRef] Williams G. The temporal stability of WTP estimates for the emissions reduction using the contingent valuation survey in Queensland, Australia. J Environ Econ Policy. 2022. Doi: 10.1080/21606544.2022.2149628. [CrossRef] Onwujekwe O, Fox‐Rushby J, Hanson K. Inter‐rater and test–retest reliability of three contingent valuation question formats in south‐east Nigeria. Health Econ. 2005; 14: 529-536. [CrossRef] Mavrodi AG, Chatzopoulos SA, Aletras VH. Examining willingness-to-pay and zero valuations for a health improvement with logistic regression. Inquiry. 2021; 58: 00469580211028102. [CrossRef] Shiell A, Hawe P. Test-retest reliability of willingness to pay. Eur J Health Econ. 2006; 7: 173-178. [CrossRef] Smith RD. The reliability of willingness to pay for changes in health status. Appl Health Econ Health Policy. 2004; 3: 35-38. [CrossRef] Kahneman D. Comments on the contingent valuation method. In: Valuing environmental goods: An assessment of the contingent valuation method. Totowa: Roweman and Allanheld; 1986. pp. 185-193. Kahneman D, Ritov I, Schkade D, Sherman SJ, Varian HR. Economic preferences or attitude expressions? An analysis of dollar responses to public issues. In: Elicitation of preferences. Dordrecht: Springer; 1999. pp. 203-242. [CrossRef] Kahneman D, Ritov I, Jacowitz KE, Grant P. Stated willingness to pay for public goods: A psychological perspective. Psychol Sci. 1993; 4: 310-315. [CrossRef] Lichtenstein S, Slovic P. The construction of preference. New York: Cambridge University Press; 2006. [CrossRef] Jorgensen BS, Syme GJ, Nancarrow BE. The role of uncertainty in the relationship between fairness evaluations and willingness to pay. Ecol Econ. 2006; 56: 104-124. [CrossRef] Schuman H. The sensitivity of CV outcomes to CV survey methods. 
In: The contingent valuation of environmental resources: Methodological issues and research needs. Cheltenham: Edward Elgar Publishing; 1996. pp. 75-96. Venkatachalam L. The contingent valuation method: A review. Environ Impact Assess Rev. 2004; 24: 89-124. [CrossRef] Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin Company; 2002. Mitchell RC, Carson RT. Using surveys to value public goods: The contingent valuation method. Washington: RFF Press; 2013. [CrossRef] Bishop RC, Boyle KJ. Reliability and validity in nonmarket valuation. Environ Resour Econ. 2019; 72: 559-582. [CrossRef] Curtis RF, Jackson EF. Multiple indicators in survey research. Am J Sociol. 1962; 68: 195-204. [CrossRef] Sullivan JL, Feldman S. Multiple indicators: An introduction. Thousand Oaks: SAGE; 1979. [CrossRef] Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. McGraw-Hill series in psychology. New York: McGraw-Hill; 1994. Drolet AL, Morrison DG. Do we really need multiple-item measures in service research? J Serv Res. 2001; 3: 196-204. [CrossRef] Streiner DL. Starting at the beginning: An introduction to coefficient alpha and internal consistency. J Pers Assess. 2003; 80: 99-103. [CrossRef] Rigdon EE. Demonstrating the effects of unmodeled random measurement error. Struct Equ Modeling. 1994; 1: 375-380. [CrossRef] Loken E, Gelman A. Measurement error and the replication crisis. Science. 2017; 355: 584-585. [CrossRef] Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951; 16: 297-334. [CrossRef] Kuder GF, Richardson MW. The theory of the estimation of test reliability. Psychometrika. 1937; 2: 151-160. [CrossRef] Bentler PM. Alpha, dimension-free, and model-based internal consistency reliability. Psychometrika. 2009; 74: 137-143. [CrossRef] Sijtsma K. On the use, the misuse, and the very limited usefulness of Cronbach's alpha. Psychometrika. 2009; 74: 107-120.
[CrossRef] Viladrich C, Angulo-Brunet A, Doval E. A journey around alpha and omega to estimate internal consistency reliability. Ann Psychol. 2017; 33: 755-782. [CrossRef] Trizano-Hermosilla I, Alvarado JM. Best alternatives to Cronbach's alpha reliability in realistic conditions: Congeneric and asymmetrical measurements. Front Psychol. 2016; 7: 769. [CrossRef] Bishop RC, Heberlein TA. Measuring values of extramarket goods: Are indirect measures biased? Am J Agric Econ. 1979; 61: 926-930. [CrossRef] Alberini A, Boyle K, Welsh M. Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty. J Environ Econ Manage. 2003; 45: 40-62. [CrossRef] Cooper JC, Hanemann M, Signorello G. One-and-one-half-bound dichotomous choice contingent valuation. Rev Econ Stat. 2002; 84: 742-750. [CrossRef] Kanninen BJ, Kriström B. Sensitivity of willingness-to-pay estimates to bid design in dichotomous choice valuation models: Comment. Land Econ. 1993; 69: 199-202. [CrossRef] Rasinski KA, Lee L, Krishnamurty P. Question order effects. In: APA handbook of research methods in psychology, vol 1: Foundations, planning, measures, and psychometrics. Washington: American Psychological Association; 2012. pp. 229-248. [CrossRef] Miller MB. Coefficient alpha: A basic introduction from the perspectives of classical test theory and structural equation modeling. Struct Equ Modeling. 1995; 2: 255-273. [CrossRef] Reilly T, O'Brien RM. Identification of confirmatory factor analysis models of arbitrary complexity: The side-by-side rule. Sociol Methods Res. 1996; 24: 473-491. [CrossRef] Bollen KA. Structural equations with latent variables. Toronto: John Wiley & Sons; 1989. [CrossRef] Wall MM, Guo J, Amemiya Y. Mixture factor analysis for approximating a nonnormally distributed continuous latent factor with continuous and dichotomous observed variables. Multivariate Behav Res. 2012; 47: 276-313. [CrossRef] Fishbein M, Ajzen I.
Predicting and changing behavior: The reasoned action approach. New York: Psychology Press; 2010. [CrossRef] Presseau J, McCleary N, Lorencatto F, Patey AM, Grimshaw JM, Francis JJ. Action, actor, context, target, time (AACTT): A framework for specifying behaviour. Implement Sci. 2019; 14: 102. [CrossRef] Ready RC, Whitehead JC, Blomquist GC. Contingent valuation when respondents are ambivalent. J Environ Econ Manage. 1995; 29: 181-196. [CrossRef] Ropicki AJ, Larkin SL, Adams CM. Seafood substitution and mislabeling: WTP for a locally caught grouper labeling program in Florida. Mar Resour Econ. 2010; 25: 77-93. [CrossRef] Converse JM, Presser S. Survey questions: Handcrafting the standardized questionnaire. Thousand Oaks: SAGE Publications, Inc.; 1986. Fink A. The survey handbook. 2nd ed. Thousand Oaks: SAGE Publications, Inc.; 2003. Robinson SB, Leonard KF. Designing quality survey questions. 1st ed. Los Angeles: SAGE Publications, Inc.; 2018. Darbyshire P, McDonald H. Choosing response scale labels and length: Guidance for researchers and clients. Australas J Mark Res. 2004; 12: 17-26. [CrossRef] Shishido K, Iwai N, Yasuda T. Designing response categories of agreement scales for cross-national surveys in East Asia: The approach of the Japanese General Social Surveys. Jpn J Soc. 2009; 18: 97-111. [CrossRef] Jöreskog KG. Statistical analysis of sets of congeneric tests. Psychometrika. 1971; 36: 109-133. [CrossRef] McDonald RP. The theoretical foundations of principal factor analysis, canonical factor analysis, and alpha factor analysis. Br J Math Stat Psychol. 1970; 23: 1-21. [CrossRef] Werts CE, Linn RL, Jöreskog KG. Intraclass reliability estimates: Testing structural assumptions. Educ Psychol Meas. 1974; 34: 25-33. [CrossRef] Cho E. Making reliability reliable: A systematic approach to reliability coefficients. Organ Res Methods. 2016; 19: 651-682. [CrossRef] Muthén LK, Muthén BO. Mplus user's guide.
8th ed. Los Angeles: Muthén & Muthén; 1998. IBM Corp. IBM SPSS Statistics for Windows, version 28.0. Armonk: IBM Corp; 2021. Jorgensen BS, Stedman RC. Sense of place as an attitude: Lakeshore owners' attitudes toward their properties. J Environ Psychol. 2001; 21: 233-248. [CrossRef] Cameron TA. A new paradigm for valuing non-market goods using referendum data: Maximum likelihood estimation by censored logistic regression. J Environ Econ Manage. 1988; 15: 355-379. [CrossRef] Hanemann WM. Welfare evaluations in contingent valuation experiments with discrete responses. Am J Agric Econ. 1984; 66: 332-341. [CrossRef] Whitehead JC, Huang JC, Blomquist GC, Ready RC. Construct validity of dichotomous and polychotomous choice contingent valuation questions. Environ Resour Econ. 1998; 11: 107-116. [CrossRef] Obeng EA, Aguilar FX. Value orientation and payment for ecosystem services: Perceived detrimental consequences lead to willingness-to-pay for ecosystem services. J Environ Econ Manage. 2018; 206: 458-471. [CrossRef] Brant R. Assessing proportionality in the proportional odds model for ordinal logistic regression. Biometrics. 1990; 46: 1171-1178. [CrossRef] Jorgensen BS, Wilson MA, Heberlein TA. Fairness in the contingent valuation of environmental public goods: Attitude toward paying for environmental improvements at two levels of scope. Ecol Econ. 2001; 36: 133-148. [CrossRef] Jorgensen BS, Syme GJ, Bishop BJ, Nancarrow BE. Protest responses in contingent valuation. Environ Resour Econ. 1999; 14: 131-150. [CrossRef] Jorgensen BS, Syme GJ, Lindsey G. Discussion and closure: Market models, protest bids, and outliers in contingent valuation. J Water Resour Plan Manag. 1995; 121: 400-402. [CrossRef] Sheeran P. Intention-behavior relations: A conceptual and empirical review. Eur Rev Soc Psychol. 2002; 12: 1-36. [CrossRef] Sheeran P, Webb TL. The intention-behavior gap. Soc Personal Psychol Compass. 2016; 10: 503-518. [CrossRef] Jorgensen BS, Boulet M, Hoek AC.
A level-of-analysis issue in resource consumption and environmental behavior research: A theoretical and empirical contradiction. J Environ Manage. 2020; 260: 110154. [CrossRef]
Simultaneous dual-contrast multi-phase liver imaging using spectral photon-counting computed tomography: a proof-of-concept study
Daniela Muenzel1, Heiner Daerr2, Roland Proksa2, Alexander A. Fingerle1, Felix K. Kopp1, Philippe Douek3, Julia Herzen4, Franz Pfeiffer1,4,5, Ernst J. Rummeny1 & Peter B. Noël1,4
To assess the feasibility of dual-contrast spectral photon-counting computed tomography (SPCCT) for liver imaging. We present an SPCCT in-silico study for simultaneous mapping of the complementary distribution in the liver of two contrast agents (CAs) injected intravenously in sequence: a gadolinium-based contrast agent and an iodine-based contrast agent. Four types of simulated liver lesions with a characteristic arterial and portal venous pattern (haemangioma, hepatocellular carcinoma, cyst, and metastasis) are presented. A material decomposition was performed to reconstruct quantitative iodine and gadolinium maps. Finally, a multi-dimensional classification algorithm for automatic lesion detection is presented. Our simulations showed that with a single SPCCT scan and an adapted contrast injection protocol, it was possible to reconstruct contrast-enhanced images of the liver with arterial distribution of the iodine-based CA and portal venous distribution of the gadolinium-based CA. The characteristic patterns of contrast enhancement were visible in all liver lesions. The approach allowed for automatic detection and classification of liver lesions using a multi-dimensional analysis. Dual-contrast SPCCT should be able to visualise the characteristic arterial and portal venous enhancement with a single scan, allowing for automatic lesion detection and characterisation with a reduced radiation exposure. Based on our simulations, dual-contrast liver imaging with spectral photon-counting computed tomography is feasible. A dual-contrast injection protocol (first gadolinium-based, second iodine-based) enables simultaneous multi-phase scanning.
The arterial contrast enhancement is visible thanks to the iodine-based contrast agent, while the portal venous enhancement is visible thanks to the gadolinium-based contrast agent. Dual-contrast liver SPCCT imaging could help reduce radiation exposure. Computed tomography (CT) is a standard imaging technique for detection and characterisation of focal liver lesions, the majority of which are benign (cysts and haemangiomas). In clinical routine it is important to differentiate benign from malignant lesions, as the liver is a preferential organ for metastases of many primary tumours and benign and malignant lesions may coexist. Accurate diagnosis of hepatocellular carcinoma (HCC) is also important, as it is the most common primary malignancy in the liver and the second leading cause of cancer-related mortality in the world [1,2,3]. With respect to oncological liver diagnostics, conventional contrast-enhanced CT is the workhorse in the clinical day-to-day routine. A controversial discussion concerns dynamic phases (native, early and late arterial, portal venous, equilibrium) in the setting of different liver pathologies [4,5,6]. Today, a standard CT liver protocol typically includes an arterial and portal venous contrast-enhanced series and, when thought to be necessary, a native unenhanced scan. Detection and characterisation of liver lesions on conventional CT is based on differences in the attenuation values between lesions and the normal liver parenchyma. This difference occurs due to a tissue-dependent uptake of contrast agent (CA). In contrast to standard energy-integrating detectors, spectral photon-counting computed tomography (SPCCT) provides additional information on the energy spectrum of x-ray photons after passing through the patient and can thus detect spectral differences in the attenuation. This technology enables material decomposition, i.e., the analysis of different material components within the examined object.
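The material decomposition mentioned above can be illustrated as a small linear inversion: given the per-energy-bin sensitivities of the two contrast materials, iodine and gadolinium concentrations follow from a least-squares fit to the measured bin attenuations. The matrix values below are made-up placeholders for illustration, not physical attenuation coefficients; only the gadolinium K-edge energy (50.2 keV) is a physical constant.

```python
import numpy as np

# Rows: energy bins of the photon-counting detector; columns: [iodine, gadolinium].
# Sensitivity values are illustrative placeholders, not measured coefficients.
A = np.array([[3.0, 2.1],
              [1.8, 2.6],   # e.g. a bin above the gadolinium K-edge at 50.2 keV
              [1.1, 1.5]])

def decompose(bin_measurements):
    """Least-squares estimate of (iodine, gadolinium) concentration
    from the per-bin attenuation measurements."""
    conc, *_ = np.linalg.lstsq(A, bin_measurements, rcond=None)
    return conc
```

Forward-simulating known concentrations with `A @ conc` and then calling `decompose` recovers them, which is the basic consistency check behind quantitative iodine and gadolinium maps.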
The aim of our study was to explore the feasibility of a new approach for liver imaging using two CAs with different x-ray attenuation properties for an improved visualisation and automated detection and characterisation of lesions: a dual-contrast single-scan SPCCT protocol. More specifically, our aim was to investigate whether this can be achieved by a specifically adjusted injection protocol, with arterial contrast distribution of one CA and portal venous enhancement of another CA. To achieve this goal, we present the potential of a sequential injection procedure, including a first intravenous injection of a gadolinium-based CA (CA1) and a second injection of an iodine-based CA (CA2), with a specific time gap depending on the individual circulation time. The scan acquisition is simulated at the time point of portal venous distribution for CA1 and at the time of arterial phase for CA2. This retrospective in-silico study was conducted in accordance with the guidelines of the local institutional review board. Anonymised CT datasets of one participant with a healthy liver (n = 1) and four patients, each of them with characteristic liver lesions (HCC, haemangioma, cyst, and metastasis), were used as a template for the numerical experiment. The diagnosis of liver lesions was confirmed by biopsy for HCC and metastasis, while follow-up studies including magnetic resonance imaging confirmed the diagnosis of haemangioma and cyst. Dual-contrast injection protocol For dual-contrast liver imaging, we defined a dedicated injection protocol (Fig. 1) with a sequential application of two different CAs. The attenuation curve of CA1 (gadolinium-based CA) within a region of interest (ROI) in the abdominal aorta was used as a test bolus to determine the individual patient-specific timing of the blood circulation and the maximum of arterial contrast enhancement.
This information is essential to precisely forecast the time point of maximal arterial enhancement of the liver by CA2 (iodine-based CA). Injection protocol and timing characteristics for dual-contrast agent enhanced multi-phase SPCCT of the liver. The portal venous phase is reached within T3 (~70 s) following CA injection in humans, whereas the exact time point of arterial distribution (T1–T0) depends on the individual patient blood flow. For dual-contrast SPCCT imaging, synchronised portal venous distribution of CA1 and arterial distribution of CA2 at one time point (= T3) is necessary. The sequence is started at T0, where CA1 is injected. At T3, CA1 shows a portal venous distribution. The time period T1–T0 defines the time necessary for enhancement in arterial phase. Then, CA2 is injected at T2 to ensure the arterial phase of CA2 at T3. At T3, the SPCCT scan is performed, with an arterial distribution of CA2 and a portal venous contrast of CA1. Blue dotted line, arterial distribution of CA1; blue line, portal venous distribution of CA1; red line, arterial distribution of CA2 At time point T0, CA1 (Magnograf®, Bayer Pharma AG, Berlin, Germany, administered at a dose of 0.2 mL/kg body weight, with an injection rate of 4 mL/s) is injected, generating a maximal arterial enhancement of the liver at time point T1, followed by injection of CA2 at T2 (Iomeron 370, administered at a dose of 1 mL/kg body weight, with an injection rate of 4 mL/s). The time difference ΔT = T1 – T0 represents the period from CA injection to maximum of arterial enhancement. Sequential injection of CA1 and CA2 leads to a dual-phase dual-contrast distribution at the time point T3 = T2 + ΔT, with CA1 in the portal venous and CA2 in the arterial phase (see Fig. 1). At time point T3, the SPCCT examination is performed to simultaneously assess the contrast distribution of both CAs in the liver at different phases.
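The timing relations above (T3 = T2 + ΔT, with a ~70-s portal venous delay) can be sketched as a small calculation. The function name and the example bolus delay below are illustrative, not part of the published protocol:

```python
def dual_contrast_schedule(t_arterial_peak: float, t_portal: float = 70.0):
    """Injection times for the dual-contrast protocol.

    t_arterial_peak: patient-specific delay dT = T1 - T0 from injection to
                     maximal arterial enhancement (measured via test bolus).
    t_portal:        delay to portal venous distribution (~70 s in humans).

    Returns (T0, T2, T3): CA1 injection, CA2 injection, and scan time.
    """
    T0 = 0.0                   # CA1 (gadolinium-based) injected
    T3 = T0 + t_portal         # scan time: CA1 now in portal venous phase
    T2 = T3 - t_arterial_peak  # CA2 (iodine-based) injected so its arterial
                               # peak coincides with T3
    return T0, T2, T3

# Example with an assumed measured bolus delay of 18 s:
T0, T2, T3 = dual_contrast_schedule(18.0)
```

For a shorter individual circulation time, T2 simply moves closer to T3, which is the point of the test-bolus measurement.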
Numerical SPCCT experiment The numerical experiment was set up to be as realistic as possible by modelling both the energy-dependent attenuation characteristics of the patients and the actual physical performance of an SPCCT scanner. The number of detected photons in a detector pixel depends mainly on three major components: the x-ray tube spectrum, the detector response and the attenuation properties of the patient [7]. In order to mimic an SPCCT acquisition, a realistic x-ray tube spectrum model [8] and a realistic detector response model (measurements at the European Synchrotron Radiation Facility [ESRF] in 2014, not published) were utilised. The resulting photon counts for each x-ray were calculated using the approach presented by Roessl and Proksa [9]. Poisson-distributed noise was added to the calculated photon counts. The simulation parameters match an already existing SPCCT prototype, where axial scans over 360° are obtained with a tube current of 50 mA, a tube voltage of 120 kVp, scanner rotation time of 1 s and 2400 projections per rotation. The noise threshold in the detector modelling was set to 30 keV and all images were reconstructed on a voxel grid of 0.39 × 0.39 × 0.25 mm [10]. Preparation of numerical liver phantom and liver lesions To obtain the attenuation characteristics of a real patient, we used an axial slice of a multi-phase liver CT scan and substituted the liver with a synthetic liver model. For this, we segmented the CT image by thresholding the data into a bone and soft tissue image (Fig. 2), followed by manual segmentation and replacement with a synthetic liver model with homogeneous CT values of 50 HU. A total of four common liver lesions were selected from the clinic's patient database by an experienced radiologist (D.M., nine years of experience). Preparation of the patient-inspired in-silico synthetic liver model for the numerical SPCCT experiments.
a Original CT image of a healthy liver, (b) segmented bone, (c) segmented soft tissues, and (d) liver part of the original CT image. The liver is removed from the soft tissue image and is replaced in the synthetic liver phantom with a homogenous attenuation of 50 HU. Characteristics of arterial and portal venous contrast enhancement of these lesions were considered as follows (Fig. 3). The cyst does not enhance in either the arterial or the portal venous phase, with a CT value of about 0 HU. HCC typically shows an arterial hyper-perfusion in the arterial contrast phase (+30 HU compared to liver enhancement) and a washout phenomenon in the venous phase (−15 HU compared to liver enhancement). Haemangioma typically presents with a 'closing iris' pattern of enhancement, with a peripheral nodular hyper-vascularisation in the arterial phase (+75 HU compared to liver enhancement) and a centripetal contrast filling in the venous phases (+55 HU compared to liver enhancement). The metastasis (typically from colorectal cancer) shows a peripheral rim-like enhancement in the arterial phase (100 HU), which is further increased in the portal venous phase images (120 HU). The core of the metastasis was simulated with no enhancement in arterial phase images and little uptake in the portal venous phase (35 HU). The liver lesions were added into the homogenous synthetic liver model, each with three different sizes: 5 mm, 10 mm and 20 mm. The 20 mm and 10 mm liver lesions were positioned in the right lobe of the liver in segment VIII and segment VII, and the 5 mm lesions were positioned in segment II according to Couinaud's system of liver anatomy [11]. The enhancement of healthy liver was 60 HU in the arterial and 90 HU in the portal venous phase. Selected shapes for the four liver lesions used in this study. The grey values indicate the distribution of the contrast uptake during the arterial and portal phase.
Light grey stands for a high contrast uptake and dark grey for a low contrast uptake. The HCC typically presents with a homogenous, high uptake during the arterial phase followed by washout in the portal phase. The haemangioma typically shows the 'closing iris' structure in the transition from arterial to portal venous distribution. The cyst does not take up CA in either the arterial or the portal venous phase. The metastasis has a small rim with a high CA uptake during both phases and a core with no enhancement during arterial phase and a low enhancement during the portal phase. Material decomposition We employed a projection-based maximum-likelihood method for the material decomposition into water, CA1, and CA2 [7]. This method uses the photon x-ray spectrum, the linear attenuation coefficients of water, CA1, and CA2 together with the spectral detector response function [7, 9]. The decomposition algorithm estimates the material composition that best fits the simulated noisy photon counts for each projection and detector pixel. More precisely, the multi-bin data were pre-processed and an integrated conventional CT image was obtained. After further corrections, a maximum likelihood material decomposition of the attenuation into a water, iodine, and gadolinium material basis was performed (see below). To test the feasibility of spectral CT liver imaging with respect to its clinical applicability, we carefully adjusted the dose to be comparable to that used in conventional CT liver imaging protocols. Therefore, a conventional CT simulation using an energy-integrating detector of the chosen dataset was carried out to determine the nominal x-ray dose yielding an image noise of approximately 20 HU in the liver using the same reconstruction parameters as for the SPCCT simulation [10]. Image reconstruction and processing The outcome of the material decomposition is a set of three material projection datasets (CA1, CA2, and water).
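The decomposition used in the study is a projection-based maximum-likelihood fit to the binned counts [7, 9]. As a much-simplified stand-in, the sketch below solves the linearised problem per detector reading with least squares: the matrix values are illustrative placeholders, not tabulated attenuation coefficients, and the function names are our own:

```python
import numpy as np

# Illustrative (not tabulated) basis attenuation values for five energy bins.
# Rows = energy bins, columns = (water, iodine, gadolinium) basis materials.
A = np.array([
    [0.30, 30.0,  9.0],
    [0.25, 12.0, 25.0],   # gadolinium column jumps near its ~50 keV K-edge
    [0.22,  8.0, 14.0],
    [0.20,  6.0,  9.0],
    [0.19,  4.5,  6.5],
])

def decompose(line_integrals: np.ndarray) -> np.ndarray:
    """Least-squares estimate of (water, iodine, gadolinium) basis amounts
    from multi-bin attenuation line integrals (linearised forward model)."""
    x, *_ = np.linalg.lstsq(A, line_integrals, rcond=None)
    return x

# Forward-project a known composition and recover it (noise-free check).
truth = np.array([20.0, 0.05, 0.02])
measured = A @ truth
recovered = decompose(measured)
```

In the noise-free, full-column-rank case the least-squares solution recovers the true composition; the published method instead maximises the Poisson likelihood of the raw counts, which handles noise correctly.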
The anti-correlated noise in the material images is suppressed by an iterative image-based statistical de-noising algorithm [19]. Pixel-by-pixel based image analyses (e.g. cluster analysis or support vector machine) can be applied to the final material images to make best use of the simultaneously acquired arterial and portal venous phases. The three materials CA1, CA2, and water can be understood as a three-dimensional (3D) vector space. Each point \( \overrightarrow{x} \) in this space represents a different combination of water, CA1, and CA2. Image pixels belonging to the same tissue type form clusters in the 3D vector space. If the clusters do not overlap, or overlap only partially, the different types of tissue can be identified. We model a cluster for tissue type t (e.g. HCC, metastasis…) by a 3D joint normal distribution, as follows: $$ {p}^t\left(\overrightarrow{x}\right)={\prod}_{i=\left\{ water, CA1, CA2\right\}}\frac{1}{\sqrt{2\pi }{\sigma}_i^t}\mathit{\exp}\left(-\frac{1}{2}{\left(\frac{x_i-{\mu}_i^t}{\sigma_i^t}\right)}^2\right), $$ where: \( x_{i} \) with i = {water, CA1, CA2} are the entries of \( \overrightarrow{x} \) describing a point in the 3D vector space; \( {\mu}_i^t \) with i = {water, CA1, CA2} are the coordinates of the centre of the cluster for tissue type t; and \( {\sigma}_i^t \) with i = {water, CA1, CA2} are the standard deviations of the cluster along the material axes. Correlated noise between the material images was suppressed to a level at which it can be neglected in \( {p}^t\left(\overrightarrow{x}\right) \). The value \( {p}^t\left(\overrightarrow{x}\right) \) describes the likelihood for a point \( \overrightarrow{x} \) to belong to the tissue type t. Displaying \( {p}^t\left(\overrightarrow{x}\right) \) for each image pixel yields the likelihood map for tissue type t. The scatter plot and the likelihood representation of the decomposed material data were used to prove the lesion detection capability.
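The cluster model above translates directly into a per-pixel computation. A minimal NumPy sketch (the function name and the array layout are our own, assumed conventions):

```python
import numpy as np

def likelihood_map(maps: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Per-pixel likelihood p^t(x) for one tissue type t.

    maps:  array of shape (3, H, W) holding the water, CA1 and CA2 images.
    mu:    length-3 cluster centre for tissue type t.
    sigma: length-3 cluster standard deviations for tissue type t.
    Returns an (H, W) likelihood map.
    """
    mu = np.asarray(mu)[:, None, None]
    sigma = np.asarray(sigma)[:, None, None]
    # Independent 1D Gaussians along the three material axes ...
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
    p = norm * np.exp(-0.5 * ((maps - mu) / sigma) ** 2)
    # ... multiplied over i = {water, CA1, CA2}, as in the formula above.
    return p.prod(axis=0)
```

Computing such a map for each tissue type t and taking the arg-max per pixel would give the automatic classification described in the text.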
The material decomposition allowed us to differentiate between liver enhancement of CA1 and CA2, as shown in Figs. 4 and 5, where we show the distribution of both CAs within the liver parenchyma, in addition to a conventional CT image, for comparison. The iodine images present with high concentration values in the aorta and no visible contrast enhancement of the liver parenchyma, as is typical for the arterial phase. In the gadolinium images, the portal venous distribution of CA1 results in a homogenous enhancement of the liver. SPCCT imaging of benign liver lesions, exemplified by cysts (a–e) and haemangiomas (f–j). a, f Conventional CT image. b Iodine-enhanced image, showing cysts without any contrast uptake in the arterial phase. c Gadolinium-enhanced image showing cysts in portal venous phase. g, h Iodine- and gadolinium-enhanced image of haemangiomas, with a typical closing-iris pattern. d Likelihood map showing the 20-mm, 10-mm, and 5-mm diameter cysts. i For haemangiomas, the smallest lesion was missed in the calculated likelihood map. e, j Scatter plots illustrate the results of integrative analysis of SPCCT images with regard to the content of iodine and gadolinium as well as the non-contrast fraction (= water) for all lesions and all sizes without significant overlap of liver parenchyma (light grey) and lesion (dark grey). For this, two ROIs in the three material images were evaluated; one ROI was placed in the lesion (dark grey markers) and the other in healthy liver tissue (light grey markers). Image windowing: a and f, level/window 50/300 HU; b and g, level/window 25/100 μmol/cc of iodine-based CA; c and h, level/window 25/100 μmol/cc of gadolinium-based CA. SPCCT imaging of malignant liver lesions, exemplified by HCC (a–e) and metastasis (f–j). a, f Conventional CT image. b Iodine-based image, showing HCC with a typical hyper-enhancement in arterial phase.
c Gadolinium-based image of HCC in portal venous phase, which shows only subtle early washout with hypodense presentation in the largest lesion. g, h Iodine- and gadolinium-based image of metastases, with a typical ring-like enhancement in the arterial phase (g), increased in the portal venous phase (h). d Likelihood map for HCC, which shows HCC in the right lobe but misses the small lesion in the left lobe, due to artifacts along the liver contour. i For metastases, the likelihood map clearly illustrated all three lesions. e, j The scatter plots show an overlap for the fractionised analysis for iodine, gadolinium and water for HCC (e, dark grey), whereas all metastases (j, dark grey) are clearly separated from liver parenchyma (e, j, light grey). For this, two ROIs in the three material images were evaluated, one placed in the lesion (dark grey markers) and the other one in healthy liver tissue (light grey markers). Image windowing: a and f, level/window 50/300 HU; b and g, level/window 25/100 μmol/cc of iodine-based CA; c and h, level/window 25/100 μmol/cc of gadolinium-based CA. Cysts and haemangiomas, two typical benign liver lesions, usually present with a characteristic behaviour in arterial and portal venous phases. Fig. 4 outlines the results for these benign conditions. Cysts appear with lower concentration values compared to normal liver parenchyma in the iodine (Fig. 4b) and gadolinium (Fig. 4c) maps, indicating that there is no uptake of CA1 and CA2 in the arterial and portal venous phases. The likelihood map illustrates the cystic lesions of all sizes within the normal liver. Two separate clusters can be identified in the scatter plot for cysts and for liver tissue for all sizes of liver lesions (Fig. 4e). SPCCT images of haemangiomas reveal the typical peripheral nodular enhancement in the arterial phase, followed by a centripetal contrast filling of the lesion in portal venous phase (Fig. 4).
All three lesions are visible in standard CT, iodine, and gadolinium images. However, the likelihood map for haemangiomas does not indicate the 5-mm lesion in the left lobe of the liver. Scatter diagrams of haemangiomas and normal liver showed no overlap for all sizes, indicating an excellent discrimination of liver lesions. Figure 5 summarises the results of SPCCT for two common malignant liver lesions, HCC (Fig. 5a–e) and peripheral hyper-vascular metastases (Fig. 5f–j). SPCCT illustrates arterial hyper-enhancement by CA2, followed by washout with a hypodense presentation compared to normal liver tissue in the portal phase images for 20-mm and 10-mm lesions. The 5-mm lesion, however, can only be identified on the attenuation CT and the iodine image in the arterial phase. Scatter plots show two distinct clusters for HCC and for liver tissue for the lesion sizes of 20 mm and 10 mm. For the smallest lesions, both clusters were positioned close together, resulting in a slight overlap of SPCCT imaging features of HCC and normal liver (Fig. 5e). The simulated metastases with a ring-like enhancement in the arterial phase, and particularly in the portal venous phase, were clearly visible in all images, including the likelihood map, for all sizes of lesions. In addition, the corresponding scatter plots illustrate an excellent delineation of metastases and healthy liver parenchyma, without relevant overlap between the imaging features in both tissues. In this study we present our first results of an in-silico simulation of simultaneous dual-contrast multi-phase SPCCT liver imaging. A dedicated perfusion protocol was defined to simultaneously assess the arterial and portal venous distribution of two CAs. We evaluated the typical presentation of the arterial and portal venous contrast enhancement for four characteristic types of liver lesions.
Further, the likelihood maps, based on the information of both CAs in different dynamic phases, illustrated areas with high probability for the respective lesion within normal liver tissue. This new dual-contrast SPCCT protocol therefore shows a potential for a clinically relevant simultaneous assessment of the arterial and portal venous contrast enhancement of liver lesions at one time point using a single SPCCT scan acquisition, together with a virtually unenhanced image. This approach goes beyond the current capability of dual-energy systems, which can be used to acquire simultaneous iodine and gadolinium images [12], but not a third additional virtual unenhanced image simultaneously. In SPCCT, the x-ray photon is directly converted into an electrical pulse signal, whose amplitude is indicative of the energy of the initial x-ray photon [13,14,15,16,17]. Without considering K-edge absorption, the energy-dependent attenuation of the human body can be modelled as a linear combination of a set of materials [18] and is then typically expressed in a basis decomposition of two materials or, in terms of the underlying physical interaction processes, of the photoelectric effect and Compton scattering. If K-edges are taken into account, some materials feature an abrupt increase of the attenuation due to resonant ionisation of a K-shell electron at a certain energy (K-edge) within the energy range relevant for imaging, roughly from 40 keV up to 120 keV. This characteristic attenuation behaviour at the K-edge can be used to perform a specific material decomposition with a high signal-to-noise ratio. Gadolinium features a K-edge at 50.24 keV and thus is an ideal candidate for K-edge CT imaging, even better than iodine, which is the standard CA for contrast-enhanced CT scanning with a relatively low K-edge energy at 33.17 keV.
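The K-edge behaviour described above, a smooth decay of attenuation with energy interrupted by an abrupt upward jump at the K-shell binding energy, can be illustrated with a deliberately simple toy model. All coefficients below are illustrative, not tabulated cross-sections:

```python
def toy_attenuation(energy_kev: float, k_edge_kev: float) -> float:
    """Toy K-edge model: attenuation decays roughly like E^-3 (photoelectric-
    like behaviour) but jumps upward once the photon energy exceeds the
    K-shell binding energy. Toy values only, not real material data."""
    base = (30.0 / energy_kev) ** 3
    # Illustrative jump factor at the K-edge:
    return base * (5.0 if energy_kev >= k_edge_kev else 1.0)

# Gadolinium (K-edge 50.24 keV): attenuation just above the edge exceeds
# attenuation just below it, despite the higher photon energy.
below = toy_attenuation(50.0, 50.24)
above = toy_attenuation(50.5, 50.24)
```

It is this discontinuity, landing conveniently in the 40–120 keV imaging range for gadolinium, that makes K-edge material decomposition with energy-resolving detectors possible.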
SPCCT allows for a valid differentiation between iodine and gadolinium, even though iodine-based and gadolinium-based CAs can have the same density values on standard CT [10]. In SPCCT, material decomposition delivers a set of basis images, which typically features higher noise than an integrated image. But the noise in the decomposed images is anti-correlated so that, using recent reconstruction methods, this noise can efficiently be reduced [19]. Such de-noising methods not only lower the noise, but also yield a higher contrast resolution. Notably, liver parenchyma receives a dual blood supply from the portal vein and hepatic artery. Normal liver parenchyma is mainly perfused via the portal vein, whereas HCCs are mostly supplied with blood from the hepatic artery. Therefore, information on perfusion patterns has an important impact on diagnostic decision-making [20]. Thus, a dedicated liver protocol includes at least one arterial and one portal venous contrast phase. Individual criteria can also require a native unenhanced scan, e.g. for the detection of calcifications and in patients with a history of transarterial chemoembolisation with iodised oil [21]. According to the Dose Index Registry Report from the American College of Radiology, the median radiation dose in terms of dose-length product for a standard single CT acquisition of the abdomen is 445 mGy × cm [22]. As a consequence, a triple-scan examination of the liver leads to a radiation dose of around 1335 mGy × cm. Dual-contrast SPCCT of the liver offers detailed three-phase contrast information with radiation exposure similar to that needed for a single CT scan of the abdomen. With standard single-energy CT, a differentiation between normal liver parenchyma and focal liver lesions is achieved by comparing the changes in contrast enhancement between arterial and portal venous phase.
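The dose figures quoted above can be checked directly; the variable names below are our own:

```python
DLP_SINGLE_ABDOMEN = 445   # mGy*cm, median DLP per abdominal CT scan [22]
PHASES = 3                 # native + arterial + portal venous acquisitions

# A conventional triple-scan liver examination accumulates roughly:
triple_scan_dlp = PHASES * DLP_SINGLE_ABDOMEN   # 1335 mGy*cm

# A single-scan dual-contrast SPCCT acquisition stays near one scan's DLP,
# i.e. about one third of the triple-scan exposure.
saving_fraction = 1 - DLP_SINGLE_ABDOMEN / triple_scan_dlp
```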
In the past, poorly aligned multi-phase CT examination protocols made it difficult to assess these changes in contrast enhancement of the same anatomic areas, and post-processing techniques (including motion-correction of repetitive CT scans) were essential for reliable results of extracted perfusion parameters [23, 24]. Using the method presented here, all dynamic information is extracted from a single SPCCT scan and hence is inherently co-registered (on a voxel-by-voxel basis), circumventing any previous issues due to subsequent multi-phase scans or motion artifacts (e.g. those determined by different depths of breathing). This further allows for a pixel-by-pixel evaluation of contrast enhancement of a lesion, as advanced analyses using scatter plots and likelihood maps require a perfect spatial match of the vector space components. This perfect match can be provided by the protocol proposed here. Of note, we have used a gadolinium-based agent as first CA and an iodine-based agent as second CA. In future studies it will be necessary to address the question whether this order is the preferred one or whether an iodine-based agent as first CA and a gadolinium-based agent as second CA might be advantageous. One future perspective of our approach could be adding a delayed dynamic phase, a clinically relevant issue particularly for characterising haemangioma and HCC [25]. While technologically feasible, such an approach would require a third CA. This, however, is presently a challenge, as iodine-based CAs and gadolinium-based CAs are the only available CAs with a potential for this use. Our study has limitations. First, we used numerical experiments to evaluate the imaging features in dual-contrast liver SPCCT. Although the simulations were performed using sophisticated tools and incorporated most physical effects in the SPCCT system, the findings have to be confirmed in an in-vivo study with a real SPCCT system in the future.
Second, there is currently very little clinical experience with the subsequent injection of iodine-based CA and gadolinium-based CA in humans. Therefore, pharmacological testing of potential adverse side effects will be necessary before our proposed contrast injection protocol can be used in patients. Third, there is a general limitation with respect to the minimum lesion size assessable with this approach, as the contrast uptake and characteristic patterns cannot be studied when the lesion becomes too small. Finally, there are pharmacodynamic considerations regarding the long-term deposition of gadolinium in patients, which is presently the subject of intense discussion in the community. In conclusion, we proved the concept of single-scan dual-CA CT as a new approach for K-edge SPCCT imaging, which exploits the spectral information for simultaneously assessing the gadolinium-based and iodine-based liver enhancement in different dynamic phases. Our simulation results have shown that we can successfully detect and visually discriminate the clinical presentation of four typical liver lesions by simultaneously evaluating unenhanced, arterial, and portal venous phases at one time point by a single scan, with a potential for dose reduction in clinical practice. Abbreviations: CA, Contrast agent; HCC, Hepatocellular carcinoma; SPCCT, Spectral photon-counting computed tomography McGlynn KA, Petrick JL, London WT (2015) Global epidemiology of hepatocellular carcinoma: an emphasis on demographic and regional variability. Clin Liver Dis 19:223–238 Cogley JR, Miller FH (2014) MR imaging of benign focal liver lesions. Radiol Clin North Am 52:657–682 Belghiti J, Cauchy F, Paradis V et al (2014) Diagnosis and management of solid benign liver lesions. Nat Rev Gastroenterol Hepatol 11:737–749 Soyer P, Poccard M, Boudiaf M et al (2004) Detection of hypovascular hepatic metastases at triple-phase helical CT: sensitivity of phases and comparison with surgical and histopathologic findings.
Radiology 231:413–420 Portugaller HR, Stacher R, Komaz G et al (2002) The value of different spiral CT phases in the detection of liver metastases. Röfo 174:452–458 Kim T, Murakami T, Takahashi S et al (1999) Optimal phases of dynamic CT for detecting hepatocellular carcinoma: evaluation of unenhanced and triple-phase images. Abdom Imaging 24:473–480 Schlomka JP, Roessl E, Dorscheid R et al (2008) Experimental feasibility of multi-energy photon-counting K-edge imaging in pre-clinical computed tomography. Phys Med Biol 53:4031–4047 Tucker DM, Barnes GT, Chakraborty DP (1991) Semiempirical model for generating tungsten target x-ray spectra. Med Phys 18:211–218 Roessl E, Proksa R (2007) K-edge imaging in x-ray computed tomography using multi-bin photon counting detectors. Phys Med Biol 52:4679–4696 Muenzel D, Bar-Ness D, Roessl E et al (2017) Spectral photon-counting computed tomography: initial experience with dual contrast agent K-edge colonography in a colon phantom. Radiology 283:723–728 Couinaud C, Delmas A, Patel J (1957) Le foie: études anatomiques et chirurgicales. Masson, Paris, France Wang X, Meier D, Taguchi K et al (2011) Material separation in x-ray CT with energy resolved photon-counting detectors. Med Phys 38:1534–1546 Taguchi K, Iwanczyk JS (2013) Vision 20/20: Single photon counting x-ray detectors in medical imaging. Med Phys 40:100901 Pourmorteza A, Symons R, Sandfort V et al (2016) Abdominal imaging with contrast-enhanced photon-counting CT: first human experience. Radiology 279:239–245 Kalender WA, Kolditz D, Steiding C et al (2017) Technical feasibility proof for high-resolution low-dose photon-counting CT of the breast. Eur Radiol 27:1081–1086 Gutjahr R, Halaweish AF, Yu Z et al (2016) Human imaging with photon counting-based computed tomography at clinical dose levels: contrast-to-noise ratio and cadaver studies. 
Invest Radiol 51:421–429 Roessl E, Brendel B, Engel KJ et al (2011) Sensitivity of photon-counting based K-edge imaging in X-ray computed tomography. IEEE Trans Med Imaging 30:1678–1690 Alvarez RE, Macovski A (1976) Energy-selective reconstructions in X-ray computerized tomography. Phys Med Biol 21:733–744 Brown K, Zabic S, Shechter G (2015) Impact of spectral separation in dual-energy CT with anti-correlated statistical reconstruction. In: Proceedings of the 13th Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, p 491–494. (http://www.fully3d.org/2015/proceedings.html). Accessed 26 Oct 2017. Nishikawa H, Kita R, Kimura T et al (2014) Transcatheter arterial embolic therapies for hepatocellular carcinoma: a literature review. Anticancer Res 34:6877–6886 Duan F, Wang EQ, Lam MG et al (2016) Superselective chemoembolization of HCC: comparison of short-term safety and efficacy between drug-eluting LC beads, quadraspheres, and conventional ethiodized oil emulsion. Radiology 278:612–621 American College of Radiology (2006) Standardized dose index registry report user guide. http://www.acr.org/~/media/ACR/Documents/PDF/QualitySafety/NRDR/DIR/Standardized%20DIR%20Report%20User%20Guide.pdf. Accessed 3 Apr 2017 Ng CS, Chandler AG, Wei W et al (2011) Reproducibility of CT perfusion parameters in liver tumors and normal liver. Radiology 260:762–770 Chandler A, Wei W, Anderson EF et al (2012) Validation of motion correction techniques for liver CT perfusion studies. Br J Radiol 85:e514–e522 Ronot M, Vilgrain V (2014) Hepatocellular carcinoma: diagnostic criteria by imaging techniques. Best Pract Res Clin Gastroenterol 28:795–812 No specific funding was received for this article. Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, Technical University of Munich, Ismaningerstrasse 22, 81675, München, Germany Daniela Muenzel, Alexander A. Fingerle, Felix K. Kopp, Franz Pfeiffer, Ernst J. Rummeny & Peter B. 
Noël Philips GmbH Innovative Technologies, Research Laboratories, Hamburg, Germany Heiner Daerr & Roland Proksa Department of Interventional Radiology and Cardio-vascular and Thoracic Diagnostic Imaging, Louis Pradel University Hospital, Bron, France Philippe Douek Chair of Biomedical Physics, Department of Physics and School of BioEngineering, Technical University of Munich, Garching, Germany Julia Herzen, Franz Pfeiffer & Peter B. Noël Institute for Advanced Study, Technical University of Munich, Garching, Germany Franz Pfeiffer DM and PBN were guarantors of the integrity of the entire study; DM, HD, RP and PBN were responsible for study concepts/study design; all authors were responsible for data analysis/interpretation, for manuscript drafting or manuscript revision for important intellectual content, and for approval of the final version of submitted manuscript; HD and FKK were responsible for the literature search; all authors were responsible for manuscript editing. All authors read and approved the final manuscript. Correspondence to Daniela Muenzel. HD and RP are employees of Philips Healthcare. The remaining authors (DM, AF, FK, PD, JH, FP, ER and PN) have no financial disclosures and had complete, unrestricted access to the study data at all stages of the study. Muenzel, D., Daerr, H., Proksa, R. et al. Simultaneous dual-contrast multi-phase liver imaging using spectral photon-counting computed tomography: a proof-of-concept study. Eur Radiol Exp 1, 25 (2017). https://doi.org/10.1186/s41747-017-0030-5 Keywords: Dual-contrast computed tomography; Gadolinium-based contrast agent; Gadolinium mapping; Iodine-based contrast agent; Iodine mapping; Spectral photon-counting computed tomography (SPCCT)
On the non-Lipschitz stochastic differential equations driven by fractional Brownian motion Bin Pei1 & Yong Xu1 In this paper, we use a successive approximation method to prove the existence and uniqueness theorems of solutions to non-Lipschitz stochastic differential equations (SDEs) driven by fractional Brownian motion (fBm) with the Hurst parameter \(H\in(\frac{1}{2},1)\). The non-Lipschitz condition, which is motivated by a wider range of applications, is much weaker than the Lipschitz one. Because the stochastic integral with respect to fBm is no longer a martingale, we lose useful inequalities such as the Burkholder-Davis-Gundy inequality, which is crucial for SDEs driven by Brownian motion. This point motivates us to carry out the present study. Stochastic differential equations (SDEs) have been greatly developed and are well known to model diverse phenomena, including fluctuating stock prices, physical systems subject to thermal fluctuations, and the growth of populations [1–4]. Mathematical models driven by 'Gaussian white noise' have seen rapid development; however, such noise is not appropriate for real situations in which stochastic fluctuations exhibit long-range dependence. Owing to the long-range dependence of fBm, which was originally introduced by Hurst [5], Kolmogorov [6], and Mandelbrot [7], SDEs driven by fBm have been used as models for a number of practical problems in various fields, such as queueing theory, telecommunications, and economics [8–10]. In most cases, the coefficients of SDEs driven by fBm are assumed to satisfy the Lipschitz condition. The existence and uniqueness of solutions of SDEs driven by fBm under the Lipschitz condition have been studied by many scholars [11–14]. However, this Lipschitz condition appears considerably restrictive when one considers various applications in the real world.
For example, consider the hybrid square root process and the one-dimensional semi-linear SDEs with Markov switching. Such models appear widely in many branches of science, engineering, industry, and finance [15–17]. Therefore, it is important to obtain conditions weaker than the Lipschitz one under which SDEs still have unique solutions. Fortunately, many researchers have investigated SDEs under non-Lipschitz conditions and presented many meaningful results [18–22]. But, to the best of our knowledge, the existence and uniqueness of solutions of SDEs driven by fBm under a non-Lipschitz condition have not been considered. Since fBm is neither a semi-martingale nor a Markov process, we lose useful tools such as the Burkholder-Davis-Gundy inequality, which is crucial for SDEs driven by Brownian motion. Hence it is not easy to obtain the existence and uniqueness of solutions to non-Lipschitz SDEs with fBm. This point motivates us to carry out the present study. In the present paper we discuss SDEs with fBm under the non-Lipschitz condition. Using the successive approximation method, the existence and uniqueness theorems of solutions to the following non-Lipschitz SDE driven by fBm are proved: $$\begin{aligned} X( t) = X( 0) + \int_{0}^{t} {b\bigl( {s,X( s)} \bigr)}\,ds + \int_{0}^{t} {\sigma\bigl( {s,X( s)} \bigr)} \,d{B^{H}}( s),\quad t \in[ {0,T} ], \end{aligned}$$ (1.1) where the initial data \(X(0)=\xi\) is a random variable, \(0< T<\infty\), the process \(B^{H}(t)\) is the fBm with Hurst index \(H\in(\frac {1}{2},1)\) defined on a complete probability space \((\Omega,\mathcal {F}, \mathbb{P})\), and \(b( {t,X( t)}):[0,T] \times R \to R\) and \(\sigma( {t,X( t)}):[ {0,T} ] \times R \to R \) are measurable functions; \(\int_{0}^{t} \cdot{\,d{B^{H}}( s)}\) stands for the stochastic integral with respect to fBm. Let \(( {\Omega,\mathcal{F},\mathbb{P}})\) be a complete probability space.
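For readers who want to experiment numerically, the following sketch (our illustration, not part of the paper) shows one standard way to simulate a path of an equation of the form (1.1): the fBm is sampled exactly by a Cholesky factorization of its covariance \(R(s,t)=\frac{1}{2}(s^{2H}+t^{2H}-\vert t-s\vert^{2H})\), and the equation is then discretized with an Euler-type scheme. The coefficient functions `b` and `sigma` below are toy choices.

```python
import numpy as np

def fbm_path(n, T, H, rng):
    """Sample B^H on a uniform grid of [0, T] (n steps) via Cholesky
    factorization of the fBm covariance R(s,t) = (s^2H + t^2H - |t-s|^2H)/2."""
    t = np.linspace(T / n, T, n)                 # grid without t = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)                  # exact Gaussian sampling
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))  # B^H(0) = 0

def euler_fbm(x0, b, sigma, n, T, H, rng):
    """Euler-type discretization of dX = b(t, X) dt + sigma(t, X) dB^H(t)."""
    BH = fbm_path(n, T, H, rng)
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    for k in range(n):
        X[k + 1] = X[k] + b(k * dt, X[k]) * dt \
                        + sigma(k * dt, X[k]) * (BH[k + 1] - BH[k])
    return X

rng = np.random.default_rng(0)
# toy coefficients, for illustration only
X = euler_fbm(1.0, b=lambda t, x: -x, sigma=lambda t, x: 0.2,
              n=256, T=1.0, H=0.65, rng=rng)
```

Cholesky sampling costs \(O(n^{3})\); for long paths one would switch to circulant-embedding methods, but for a quick experiment at a few hundred grid points it is adequate.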
SDEs with respect to fBm have been interpreted via various stochastic integrals, such as the Wick integral, the Wiener integral, the Skorohod integral, and path-wise integrals [13, 23–26]. In this paper, we consider the path-wise integrals [27] with respect to fBm. Let \(\varphi:{{R}_{+} } \times{{R}_{+} } \to{{R}_{+} }\) be defined by $$\begin{aligned} \varphi( {t,s}) = H( {2H - 1}){\vert {t - s} \vert ^{2H - 2}},\quad t,s \in{{R}_{+} }, \end{aligned}$$ where H is a constant with \(\frac{1}{2} < H < 1\). Let \(g:{{R}_{+} } \to{R}\) be Borel measurable. $$\begin{aligned} L_{\varphi}^{2}( {{{R}_{+} }}) = \biggl\{ {g:\Vert g \Vert _{\varphi}^{2} = \int _{{{R}_{+} }} { \int_{{{R}_{+} }} {g( t)g( s)\varphi( {t,s})\,ds\,dt < \infty} } } \biggr\} . \end{aligned}$$ If we equip \(L^{2}_{\varphi}({R}_{+})\) with the inner product $$\begin{aligned} {\langle{{g_{1}},{g_{2}}} \rangle_{\varphi}} = \int_{{{R}_{+}}} { \int _{{{R}_{+}}} {{g_{1}}( t){g_{2}}( s) \varphi( {t,s})\,ds\,dt} },\quad {g_{1}},{g_{2}} \in L_{\varphi}^{2}( {{R_{+} }}), \end{aligned}$$ then \(L^{2}_{\varphi}({R}_{+})\) becomes a separable Hilbert space. Let \(\mathcal{S}\) be the set of smooth and cylindrical random variables of the form $$\begin{aligned} F(\omega) = f \biggl( \int_{0}^{T}\psi_{1}(t)\,dB^{H}_{t}, \ldots, \int _{0}^{T}\psi_{n}(t)\,dB^{H}_{t} \biggr), \end{aligned}$$ where \(n \ge1\), \(f \in\mathcal{C}_{b}^{\infty}( {{{R}^{n}}})\) (i.e. f and all its partial derivatives are bounded), and \({\psi_{i}} \in\mathcal{H}\), \(i = 1,2,\ldots, n\). \(\mathcal{H}\) is the completion of the measurable functions such that \(\Vert \psi \Vert _{\varphi}^{2} <\infty\) and \(\{\psi_{n}\}\) is a sequence in \(\mathcal{H}\) such that \(\langle \psi_{i}, \psi_{j}\rangle_{\varphi}=\delta_{ij}\). 
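The kernel φ is precisely the mixed second derivative of the fBm covariance, so for the indicator \(g=\mathbf{1}_{[0,T]}\) one should have \(\Vert g\Vert_{\varphi}^{2}=\int_{0}^{T}\int_{0}^{T}\varphi(t,s)\,ds\,dt=T^{2H}=\mathbb{E}[B^{H}(T)^{2}]\). A small numerical sanity check of this normalization (our own illustration, using SciPy's `quad` and flagging the integrable singularity at \(s=t\)):

```python
from scipy.integrate import quad

H = 0.7    # Hurst parameter in (1/2, 1)
T = 1.5

def phi(s, t):
    # phi(t, s) = H(2H - 1)|t - s|^(2H - 2), symmetric in (t, s)
    return H * (2 * H - 1) * abs(t - s) ** (2 * H - 2)

def inner(t):
    # the singularity at s = t is integrable since 2H - 2 > -1;
    # passing points=[t] tells QUADPACK where it sits
    val, _ = quad(phi, 0.0, T, args=(t,), points=[t], limit=200)
    return val

double_integral, _ = quad(inner, 0.0, T, limit=200)
# double_integral should be close to T ** (2 * H)
```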
The Malliavin derivative \(D_{t}^{H}\) of a smooth and cylindrical random variable \(F\in\mathcal{S}\) is defined as the \(\mathcal{H}\)-valued random variable: $$\begin{aligned} D_{t}^{H}F = \sum_{i = 1}^{n} {\frac{{\partial f}}{{\partial {x_{i}}}} \biggl( \int_{0}^{T}\psi_{1}(t)\,dB^{H}_{t}, \ldots, \int _{0}^{T}\psi_{n}(t)\,dB^{H}_{t} \biggr)} {\psi_{i}(t)}. \end{aligned}$$ Then, for any \(p\geq1\), the derivative operator \(D_{t}^{H}\) is a closable operator from \(L^{p}(\Omega)\) into \(L^{p}(\Omega;\mathcal{H})\). Next, we introduce the φ-derivative of F: $$\begin{aligned} D_{t}^{\varphi}F = \int_{{{R}_{+} }} {\varphi( {t,v})} D_{v}^{H} F\,dv. \end{aligned}$$ The elements of \(\mathcal{H}\) may not be functions but distributions of negative order. For this reason, it is convenient to introduce the space \(\vert \mathcal{H} \vert \) of measurable functions h on \([ {0,T} ]\) satisfying $$\begin{aligned} \Vert h \Vert _{\vert \mathcal{H} \vert }^{2} = \int_{0}^{T} { \int_{0}^{T} {\bigl\vert {h( t)} \bigr\vert \bigl\vert {h( s)}\bigr\vert \varphi( {t,s})\,ds\,dt < \infty}}. \end{aligned}$$ It is not difficult to show that \(\vert \mathcal{H} \vert \) is a Banach space with the norm \(\Vert {\cdot} \Vert _{\vert \mathcal{H} \vert }\). In addition, we denote by \(D_{t}^{H,k}\) the iteration of the derivative operator for any integer \(k\geq1\). The Sobolev space \({\mathbb {D}}^{k,p}\) is the closure of \(\mathcal{S}\) with respect to the norm, for any \(p\geq1\) (⨂ denotes the tensor product), $$\begin{aligned} \Vert F\Vert _{k,p}^{p}=\mathbb{E}\vert F\vert ^{p}+\mathbb{E}\sum_{j=1}^{k}{ \bigl\Vert D_{t}^{H,j}F\bigr\Vert _{\mathcal{H}^{\bigotimes j}}^{p}}. \end{aligned}$$ Similarly, for a Hilbert space U, we denote by \({\mathbb {D}}^{k,p}(U)\) the corresponding Sobolev space of U-valued random variables.
For any \(p>0\) we denote by \({\mathbb{D}}^{1,p}(\vert \mathcal{H}\vert )\) the subspace of \({\mathbb{D}}^{1,p}(\mathcal{H})\) formed by the elements h such that \(h\in \vert \mathcal{H}\vert \). More details as regards fBm can be found in Biagini et al. [14], Alòs, Mazet and Nualart [24], and Hu and Øksendal [9]. Let \(u(t)\) be a stochastic process in the space \({\mathbb{D}}^{1,2}(\vert \mathcal{H}\vert )\) satisfying $$\begin{aligned} \int_{0}^{T} { \int_{0}^{T} {\bigl\vert {D_{s}^{H}u( t)} \bigr\vert {{\vert {t - s} \vert }^{2H - 2}}\,ds\,dt} } < \infty; \end{aligned}$$ then the symmetric integral coincides with the forward and backward integrals ([14], p. 159). Definition 2 The space \(\mathcal{L}_{\varphi}[0,T]\) of integrands is defined as the family of stochastic processes \(u(t)\) on \([0,T]\), such that \(\mathbb{E}\Vert {u( t)} \Vert _{\varphi}^{2} < \infty\), \(u(t)\) is φ-differentiable, the trace of \(D_{s}^{\varphi}u( t)\) exists, \(0 \le s \le T\), \(0 \le t \le T\), and $$\begin{aligned} \mathbb{E} \int_{0}^{T} { \int_{0}^{T} {{{\bigl[ {D_{t}^{\varphi}u( s)} \bigr]}^{2}}\,ds\,dt} } < \infty, \end{aligned}$$ and for each sequence of partitions \(( {{\pi_{n}},n \in\mathbb{N}})\) such that \(\vert {{\pi_{n}}} \vert \to0\) as \(n \to\infty\), both $$\begin{aligned} \sum_{i = 0}^{n - 1} {\mathbb{E} \biggl[ { \int_{t_{i}^{( n)}}^{t_{i + 1}^{( n)}} { \int_{t_{j}^{( n)}}^{t_{j + 1}^{( n)}} {\bigl\vert {D_{s}^{\varphi}u^{\pi}\bigl(t_{i}^{( n)}\bigr) D_{t}^{\varphi}u^{\pi}\bigl(t_{j}^{( n)}\bigr) -D_{s}^{\varphi}{u(t)}D_{t}^{\varphi}{u(s)}} \bigr\vert \,ds\,dt} } } \biggr]} \end{aligned}$$ and $$\begin{aligned} \mathbb{E}\bigl[ {\bigl\Vert {{u^{\pi}} - u} \bigr\Vert _{\varphi}^{2}} \bigr] \end{aligned}$$ tend to 0 as \(n\to\infty\), where \({\pi_{n}} = t_{0}^{( n)} < t_{1}^{( n)} < \cdots < t_{n - 1}^{( n)} < t_{n}^{( n)} = T\).
Lemma 3 Let \(B^{H}(t)\) be a fBm with \(\frac{1}{2}< H<1\), and let \(u(t)\) be a stochastic process in \({{\mathbb{D}}^{1,2}}( {\vert \mathcal{H} \vert }) \cap {\mathcal{L}_{\varphi}}[ {0,T} ]\); then for every \(T<\infty\), $$\begin{aligned} \mathbb{E} { \biggl[ { \int_{0}^{T} {u( s)\,{d^{\circ}} {B^{H}}( s)} } \biggr]^{2}} \le2H{T^{2H - 1}} \mathbb{E} \biggl[ { \int_{0}^{T} {{{ \bigl\vert {u( s)}\bigr\vert }^{2}}\,ds} } \biggr] + 4T\mathbb{E} \int_{0}^{T} {{\bigl[ {D_{s}^{\varphi}u( s)} \bigr]}^{2}\,ds}. \end{aligned}$$ The detailed proof of Lemma 3 can be found in the authors' previous work [28–30]. In this paper, we always assume that the following non-Lipschitz condition, which was proposed by Yamada and Watanabe [22], is satisfied. Hypothesis 4 There exists a function \(\kappa( q)>0\), \(q > 0\), with \(\kappa( 0 )=0\), such that \(\kappa( q)\) is a continuous, non-decreasing, concave function and \(\int_{0 + } {\frac{{dq}}{{\kappa( q)}}} = + \infty\); \(b( {t,0})\) and \(\sigma( {t,0})\) are locally integrable with respect to t. Furthermore, for all \(t \in[ {0,T} ]\) and \(b( {t,\cdotp}),\sigma( {t,\cdotp}) \in {\mathcal{L}_{\varphi}}[ {0,T} ] \cap{\mathbb{D}^{1,2}}( {\vert \mathcal{H} \vert } )\), we have $$\begin{aligned}& \mathbb{E} {\bigl\vert {b( {t,X}) - b( {t,Y})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {\sigma( {t,X}) - \sigma( {t,Y})} \bigr\vert ^{2}} \\& \quad {} + \mathbb{E} {\bigl\vert {D_{t}^{\varphi}\bigl( {\sigma({t,X}) - \sigma( {t,Y})} \bigr)} \bigr\vert ^{2}} \le\kappa\bigl( {\mathbb{E} {{\vert {X - Y} \vert }^{2}}} \bigr). \end{aligned}$$ The above-mentioned Hypothesis 4 is the so-called non-Lipschitz condition. The non-Lipschitz condition has a variety of forms [31–34]; here we consider one of them. In particular, if we let \(\kappa( q) = K'q\), then the non-Lipschitz condition reduces to the Lipschitz condition. In other words, the non-Lipschitz condition is weaker than the Lipschitz condition.
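The integral condition \(\int_{0+}\frac{dq}{\kappa(q)}=+\infty\) is the heart of Hypothesis 4. As a quick numerical illustration (ours, not the paper's), take \(\kappa(q)=q\log(1/q)\) for small q, a classical Yamada-Watanabe choice: the tail integral \(\int_{\varepsilon}^{\mu}\frac{dq}{\kappa(q)}=\log\log(1/\varepsilon)-\log\log(1/\mu)\) grows without bound as \(\varepsilon\to0^{+}\), so the condition holds even though this κ is not Lipschitz at 0.

```python
import math

mu = 0.1  # threshold; any mu in (0, 1/e) works for this choice of kappa

def kappa(q):
    """kappa(q) = q log(1/q) near 0: continuous, concave, non-decreasing
    on [0, mu], with kappa(0) = 0."""
    return 0.0 if q == 0.0 else q * math.log(1.0 / q)

def tail_integral(eps):
    """Exact value of int_eps^mu dq / kappa(q) = loglog(1/eps) - loglog(1/mu)."""
    return math.log(math.log(1.0 / eps)) - math.log(math.log(1.0 / mu))

# the tail integral keeps growing as eps -> 0+, i.e. the integral diverges
vals = [tail_integral(10.0 ** (-k)) for k in (2, 5, 10, 50, 200)]
```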
Now, we give some concrete examples of the function κ. Let \(K'>0\) and let \(\mu\in\mathopen{]}0,1[\) be sufficiently small. Define $$\begin{aligned}& {\kappa_{1}} ( x) = K'x,\quad x \ge0,\\& {\kappa_{2}} ( x) = \textstyle\begin{cases} x\log({x^{ - 1}}),& 0 \le x \le\mu, \\ \mu\log({\mu^{ - 1}}) + \kappa'_{2} ( {\mu-}) ( {x - \mu}),& x > \mu, \end{cases}\displaystyle \\& {\kappa_{3}} ( x) = \textstyle\begin{cases} x\log({x^{ - 1}})\log\log({x^{ - 1}}),& 0 \le x \le\mu,\\ \mu\log({\mu^{ - 1}})\log\log({\mu^{-1}}) + \kappa'_{3} ( {\mu - }) ( {x - \mu}),& x > \mu, \end{cases}\displaystyle \end{aligned}$$ where \(\kappa'\) denotes the derivative of the function κ. They are all concave, non-decreasing functions satisfying \(\int_{0 + } {\frac{1}{{{\kappa_{i}}( { x})}}}\,dx = \infty\) (\(i = 1,2,3\)).

The main theorems

In this section, using an iteration of the Picard type, we discuss the solutions of non-Lipschitz SDEs with fBm. Let \({X_{0}}( t) \equiv \xi\) be a random variable with \(\mathbb{E}{\vert \xi \vert ^{2}} < + \infty\), and construct an approximate sequence of stochastic processes \(\{ X_{k}(t)\}_{k \geq1}\) as follows: $$\begin{aligned} {X_{k}}( t) = \xi + \int_{0}^{t} {b\bigl( {s,{X_{k - 1}}( s)} \bigr)}\,ds + \int _{0}^{t} {\sigma\bigl( {s,{X_{k - 1}}( s)} \bigr)} \,d^{\circ}{B^{H}}( s),\quad k = 1,2, \ldots. \end{aligned}$$ (3.1) Hereafter, we assume that \(1 \le T < + \infty\) without loss of generality. First, we give the following four key lemmas. The proofs of Lemma 5 and Lemma 6 are presented in the Appendix.
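To make the successive approximation scheme concrete, here is a small, purely illustrative Picard-type iteration in code. To keep it self-contained, the fBm driver is replaced by a smooth surrogate path (so the pathwise integral becomes an ordinary Riemann-Stieltjes sum), and the coefficients are toy choices; the point is only to watch the successive gaps \(\sup_{t}\vert X_{k+1}(t)-X_{k}(t)\vert\) shrink, which is what the lemmas below establish for the genuine scheme.

```python
import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)
dt = T / n
dg = np.diff(np.sin(3.0 * t))        # increments of surrogate driver g(t) = sin(3t)

b = lambda x: -x                      # toy drift
sigma = lambda x: 0.3 * np.cos(x)     # toy diffusion coefficient
xi = 1.0                              # initial value

def picard_step(X):
    """One iteration X_k -> X_{k+1} of the scheme
    X_{k+1}(t) = xi + int_0^t b(X_k) ds + int_0^t sigma(X_k) dg(s),
    with both integrals approximated by left-point sums."""
    Y = np.empty_like(X)
    Y[0] = xi
    Y[1:] = xi + np.cumsum(b(X[:-1]) * dt) + np.cumsum(sigma(X[:-1]) * dg)
    return Y

X = np.full(n + 1, xi)                # X_0(t) = xi
gaps = []
for _ in range(15):
    X_new = picard_step(X)
    gaps.append(np.max(np.abs(X_new - X)))
    X = X_new
# gaps shrinks rapidly (factorial-type decay), mirroring the Picard estimates
```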
Lemma 5 There exists a positive number K such that, for all \(b( {t,\cdot}),\sigma( {t,\cdot}) \in{\mathcal{L}_{\varphi}}[ {0,T} ] \cap{\mathbb{D}^{1,2}}( {\vert \mathcal{H} \vert })\) and \(t \in[ {0,T} ]\), we have $$\begin{aligned} \mathbb{E} {\bigl\vert {b( {t,X})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {\sigma( {t,X})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {D_{t}^{\varphi}\sigma( {t,X})} \bigr\vert ^{2}} \le K\bigl( {1 + \mathbb {E} {{\vert X \vert }^{2}}} \bigr). \end{aligned}$$ Lemma 6 Under the conclusion of Lemma 5, one can get $$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k}}( t)} \bigr\vert ^{2}} \le{C_{1}},\quad k = 1,2,\ldots,t \in[0,T], \end{aligned}$$ where \({C_{1}} = 3( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}})\exp( {12K{T^{2}}} )\). Lemma 7 If \(b(t,X)\) and \(\sigma(t, X)\) satisfy Hypothesis 4, then for \(t \in[0,T]\), \(n \ge1\), \(k \geq1\), we have $$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)} \bigr\vert ^{2}} \le{C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {{X_{n + k - 1}}( s) - X_{n - 1}( s)}\bigr\vert }^{2}}} \bigr)}\,ds \end{aligned}$$ and $$\begin{aligned} \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n+k}}( s) -{X_{n}}( s)} \bigr\vert ^{2}} \le{C_{3}}t, \end{aligned}$$ where \(C_{2}=8T\) and \(C_{3}\) is a constant.
Proof For \(0 \le s \le t\), we show that $$\begin{aligned}& \mathbb{E}\bigl\vert X_{n+k}( s) - X_{n}( s)\bigr\vert ^{2} \\& \quad \le2\mathbb{E}\biggl\vert \int_{0}^{s} \bigl(b\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - b\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) \bigr)\,d{s_{1}} \biggr\vert ^{2} \\& \qquad {} + 2\mathbb{E}\biggl\vert \int_{0}^{s} \bigl(\sigma\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - \sigma\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) \bigr) \,d^{\circ}{B^{H}}(s_{1}) \biggr\vert ^{2} \\& \quad \le8T\mathbb{E} \int_{0}^{t} \bigl[\bigl\vert b\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - b\bigl(s_{1},X_{n - 1}(s_{1})\bigr) \bigr\vert ^{2} \\& \qquad {} +\bigl\vert \sigma\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - \sigma\bigl( s_{1},X_{n -1}(s_{1})\bigr)\bigr\vert ^{2} \\& \qquad {}+\bigl\vert D_{s_{1}}^{\varphi}\bigl(\sigma \bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - \sigma \bigl(s_{1},X_{n - 1}(s_{1})\bigr)\bigr)\bigr\vert ^{2}\bigr]\,d{s_{1}} \\& \quad \le{C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {{X_{n + k - 1}}( s) -X_{n - 1}( s)} \bigr\vert }^{2}}} \bigr)}\,ds. \end{aligned}$$ Then it is easy to verify $$\begin{aligned} \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n+k}}( s) -{X_{n}}( s)} \bigr\vert ^{2}} \le& {C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{ \bigl\vert {{X_{n+ k - 1}}( s) - X_{n - 1}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& {C_{2}} \int_{0}^{t} {\kappa( {4{C_{1}}})}\,ds \le{C_{3}}t. \end{aligned}$$ This completes the proof of Lemma 7. □ Now choose \(0 < {T_{1}} \le T\) such that \({\kappa_{1}}( {{C_{3}}t}) \le{C_{3}}\) holds for \(t \in[ {0,{T_{1}}} ]\), where \({\kappa_{1}}( q) = {C_{2}}\kappa( q)\). We should note that in the following part, we first prove the main theorem, Theorem 9, in the time interval \([0,{T_{1}}]\), and then we extend the result to the whole interval \([0,T]\).
Fix \(k \geq1\) arbitrarily and define two sequences of functions \({\{ {{\phi_{n}}( t)} \}_{n = 1,2, \ldots}}\) and \({\{ {{{\tilde{\phi}}_{n,k}}( t)} \}_{n = 1,2, \ldots}}\), where $$\begin{aligned}& {\phi_{1}}( t)= {C_{3}}t, \\& {\phi_{n + 1}}( t) = \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n}}( s)} \bigr)}\,ds, \\& {\tilde{\phi}_{n,k}}( t) = \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)} \bigr\vert ^{2}},\quad n = 1,2, \ldots. \end{aligned}$$ Lemma 8 Under Hypothesis 4, $$\begin{aligned} 0 \le{\tilde{\phi}_{n,k}}( t) \le{\phi_{n}}( t) \le{\phi_{n - 1}}( t) \le \cdots \le{\phi_{1}}( t),\quad t \in[ {0,{T_{1}}} ], \end{aligned}$$ (3.4) for all positive integers n. Proof By Lemma 7, we have $$\begin{aligned} {\tilde{\phi}_{1,k}}( t) = \sup _{0 \le s \le t} \mathbb{E} { \bigl\vert {{X_{1+k}}( s) - {X_{1}}( s)} \bigr\vert ^{2}} \le{C_{3}}t = {\phi _{1}}( t),\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$ Then, since \({\kappa_{1}}( q) = {C_{2}}\kappa( q)\), \(\kappa( q) \) is a concave function and $$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k + 1}}( s) - {X_{1}}( s)} \bigr\vert ^{2}} \le\sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{k + 1}}( s) - {X_{1}}( s)}\bigr\vert ^{2}} = {\tilde{\phi}_{1,k}}( t),\quad 0 \le s \le t, \end{aligned}$$ it is easy to verify $$\begin{aligned} {{\tilde{\phi}}_{2,k}}( t) =& \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{2 + k}}( s) - {X_{2}}( s)} \bigr\vert ^{2}} \\ \le& {C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {{X_{k + 1}}( s) -{X_{1}}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{{ \tilde{\phi}}_{1,k}}( s)} \bigr)}\,ds \le \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{1}}( s)} \bigr)}\,ds \\ =& {\phi_{2}}( t) = \int_{0}^{t} {{\kappa_{1}}( {{C_{3}}s})}\,ds \\ \le& {C_{3}}t = { \phi_{1}}( t),\quad t \in[ {0,{T_{1}}} ].
\end{aligned}$$ That is to say, for \(n=2\), we have $$\begin{aligned} {\tilde{\phi}_{2,k}}( t) \le{\phi_{2}}( t) \le{ \phi_{1}}( t),\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$ Next, assume that (3.4) holds for some \(n \geq2\). Since, by this assumption, $$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)} \bigr\vert ^{2}} \le\sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)}\bigr\vert ^{2}} = {\tilde{\phi}_{n,k}}( t) \le{\phi_{n}}( t), \end{aligned}$$ it is easy to verify for \(n+1\) that $$\begin{aligned} {{\tilde{\phi}}_{n + 1,k}}( t) =& \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n + k + 1}}( s) - {X_{n + 1}}( s)} \bigr\vert ^{2}} \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( { \mathbb{E} {{\bigl\vert {{X_{n + k}}( s) -{X_{n}}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{{ \tilde{\phi}}_{n,k}}( s)} \bigr)}\,ds \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n}}( s)} \bigr)}\,ds = {\phi_{n + 1}}( t) \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n - 1}}( s)} \bigr)}\,ds = {\phi _{n}}( t),\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$ □ Theorem 9 Under Hypothesis 4, $$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$ By Theorem 9, we see that \(\{{X_{k}}(\cdotp)\}_{k\geq1}\) is a Cauchy sequence, and we define its limit as \(X( \cdotp)\). Then, letting \(k\to \infty\) in (3.1), we finally see that the solutions of (1.1) exist. Proof Step 1: In this step we shall show $$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le{T_{1}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$ By Lemma 8, we know that \({\phi_{n}}( t)\) decreases monotonically as \(n \to\infty\) and that \({\phi_{n}}( t)\) is a non-negative function on \([ {0,{T_{1}}} ]\). Therefore, we can define the limit function \(\phi( t)\) by \({\phi_{n}}( t) \downarrow\phi( t)\).
It is easy to verify that \(\phi( 0) = 0\) and that \(\phi( t)\) is a continuous function on \([ {0,{T_{1}}} ]\) [35]. According to the definition of \({\phi_{n}}( t)\) and \({\phi}( t)\), we obtain $$\begin{aligned} \phi( t) = \lim _{n \to\infty} {\phi_{n + 1}}( t) = \lim _{n \to\infty} \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n}}( s)} \bigr)}\,ds = \int_{0}^{t} {{\kappa_{1}}\bigl( {\phi( s)} \bigr)}\,ds,\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$ (3.5) Since \(\phi( 0) = 0\) and $$\begin{aligned} \int_{0 + } {\frac{{dq}}{{{\kappa_{1}}( q)}}} = \frac {1}{{{C_{2}}}} \int_{0 + } {\frac{{dq}}{{\kappa( q)}}} = + \infty, \end{aligned}$$ (3.5) implies \(\phi( t) \equiv0\), \(t\in[0,T_{1}]\). Therefore we obtain $$\begin{aligned} 0 \le\lim _{k,n \to\infty} \sup _{0 \le t \le{T_{1}}} \mathbb{E} { \bigl\vert {{X_{n + k}}( t) - {X_{n}}( t)} \bigr\vert ^{2}} = \lim _{k,n \to\infty} {\tilde{\phi}_{n,k}}( {{T_{1}}}) \le\lim _{n \to\infty} {\phi_{n}}( {{T_{1}}}) = 0, \end{aligned}$$ namely, $$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le{T_{1}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$ Step 2: Define $$\begin{aligned} T_{2} = \sup \Bigl\{ {\tilde{T} :\tilde{T} \in[ {0,T} ] \mbox{ and } \lim _{n,i \to\infty} \sup _{0 \le t \le\tilde{T}} \mathbb{E} \bigl\vert {X_{n}}( t) - {X_{i}}( t) \bigr\vert ^{2}= 0} \Bigr\} . \end{aligned}$$ Immediately, we observe \(0 < {T_{1}} \le T_{2} \le T\). Now, we shall show $$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$ Let \(\varepsilon>0\) be an arbitrary positive number. Choose \(S_{0}\) so that \(0 < {S_{0}} < \min( {T_{2},1})\) and $$\begin{aligned} {C_{4}} {S_{0}} < \frac{\varepsilon}{{10}}, \end{aligned}$$ where \({C_{4}} = 8K({1 + {K_{1}}( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}} )}){S_{0}}\). From the definition of \(T_{2}\), we have $$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T_{2} - {S_{0}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0.
\end{aligned}$$ Then, for large enough N, we observe $$\begin{aligned} \sup _{0 \le t \le T_{2} - {S_{0}}} \mathbb{E} {\bigl\vert {{X_{n}}(t) - {X_{i}}( t)} \bigr\vert ^{2}} < \frac{\varepsilon}{{10}},\quad n,i \ge N. \end{aligned}$$ On the other hand, one can get $$\begin{aligned} \sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) -{X_{i}}( t)} \bigr\vert ^{2}} \le& 3\sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb {E} {\bigl\vert {{X_{n}}( t) - {X_{n}}( {T_{2} - {S_{0}}})} \bigr\vert ^{2}} \\ &{}+3\mathbb{E} {\bigl\vert {{X_{n}}( {T_{2} - {S_{0}}}) - {X_{i}}( {T_{2} - {S_{0}}})}\bigr\vert ^{2}} \\ &{}+3\sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{i}}( {T_{2} - {S_{0}}}) - {X_{i}}( t)} \bigr\vert ^{2}} \\ =& 3{I_{1}} + 3{I_{2}} + 3{I_{3}}. \end{aligned}$$ Now, using Lemma 3, we obtain $$\begin{aligned} {I_{1}} =& \sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb {E} {\bigl\vert {{X_{n}}( t) - {X_{n}}( {T_{2} - {S_{0}}})} \bigr\vert ^{2}} \\ \le& 2{S_{0}} \mathbb{E} \int_{T_{2} - {S_{0}}}^{T_{2}} {{{\bigl\vert {b\bigl( s_{1},X_{n -1}(s_{1}) \bigr)} \bigr\vert }^{2}}} \,d{s_{1}} \\ &{}+ 4H{S_{0}}^{2H - 1} \mathbb{E} \int_{T_{2} - {S_{0}}}^{T_{2}} {{{\bigl\vert {\sigma \bigl(s_{1},X_{n - 1}(s_{1}) \bigr)} \bigr\vert }^{2}}} \,d{s_{1}} \\ &{}+ 8{S_{0}}\mathbb{E} \int_{T_{2} - {S_{0}}}^{T_{2}} {{{\bigl\vert {D_{s_{1}}^{\varphi}\sigma\bigl( s_{1},X_{n - 1}(s_{1}) \bigr)} \bigr\vert }^{2}}} \,d{s_{1}} \\ \le& 8{S_{0}} \int_{T_{2} - {S_{0}}}^{T_{2}} {K\bigl({1 + {K_{1}}\bigl( {1 + \mathbb {E} {{\vert \xi \vert }^{2}}} \bigr)} \bigr)} \,d{s_{1}} \\ \le& 8S_{0}^{2}K\bigl( {1 + {K_{1}}\bigl( {1 + \mathbb{E} {{\vert \xi \vert }^{2}}} \bigr)} \bigr). \end{aligned}$$ Therefore by (3.6) we have $$\begin{aligned} {I_{1}} \le\frac{\varepsilon}{{10}} \end{aligned}$$ $$\begin{aligned} {I_{3}} \le\frac{\varepsilon}{{10}}. 
\end{aligned}$$ Meanwhile, (3.7) implies $$\begin{aligned} {I_{2}} = \mathbb{E} {\bigl\vert {{X_{n}}( {T_{2} - {S_{0}}}) - {X_{i}}( {T_{2} - {S_{0}}})}\bigr\vert ^{2}} < \frac{\varepsilon}{{10}},\quad n,i \ge N. \end{aligned}$$ Now, putting (3.7)-(3.11) together, we have $$\begin{aligned} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) -{X_{i}}( t)} \bigr\vert ^{2}} \le& \sup _{0 \le t \le T_{2} - {S_{0}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} \\ &{}+ \sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} \\ \le& \frac{\varepsilon}{{10}} + 3{I_{1}} + 3{I_{2}} + 3{I_{3}} < \varepsilon. \end{aligned}$$ That is, $$\lim _{n,i \to\infty} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. $$ Step 3: Arguing by contradiction, we shall show that \(T_{2}=T\). Assume \(T_{2}< T\); then we can choose a sequence of numbers \({\{ {{a_{i}}} \} _{i = 1,2, \ldots}}\) such that \({a_{i}} \downarrow0\) (\({i \to + \infty }\)) and, for \(n > i \ge1\), $$\begin{aligned} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) -{X_{i}}( t)} \bigr\vert ^{2}} \le{a_{i}}. \end{aligned}$$ (3.12) We shall divide the step into several sub-steps. First, for \(n > i \ge1\), we shall show $$\begin{aligned} \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n}}( s) - {X_{i}}( s)} \bigr\vert ^{2}} \le3{a_{i}} + {C_{5}}t,\quad T_{2} + t \le T, \end{aligned}$$ where \({C_{5}} = 12TK({1 + {K_{1}}( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}})})\).
To show this, set $$\begin{aligned}& J_{1}^{( i)} = \mathbb{E} {\bigl\vert {{X_{n}}( {T_{2}}) - {X_{i}}( {T_{2}})} \bigr\vert ^{2}}, \\& J_{2}^{( i)}( t) = \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\biggl\vert { \int_{T_{2}}^{s} {\bigl( {b\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) - b\bigl( {{s_{1}},{X_{i - 1}}(s_{1})} \bigr)} \bigr)\,d{s_{1}}} } \biggr\vert ^{2}}, \\& J_{3}^{( i)}( t)= \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\biggl\vert { \int_{T_{2}}^{s} {\bigl( {\sigma\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) - \sigma\bigl( {{s_{1}},{X_{i - 1}}(s_{1})} \bigr)} \bigr)} \,d^{\circ}{B^{H}}(s_{1})} \biggr\vert ^{2}}. \end{aligned}$$ Then (3.12) implies \(J_{1}^{( i)} \le{a_{i}}\) and $$\begin{aligned} J_{2}^{i}( t) + J_{3}^{i}( t) \le& 4T \mathbb{E} \int_{T_{2}}^{T_{2}+ t} \bigl[ \bigl\vert b \bigl(s_{1},X_{n - 1}(s_{1})\bigr) - b \bigl(s_{1},X_{i - 1}(s_{1})\bigr) \bigr\vert ^{2} \\ &{}+\bigl\vert \sigma\bigl(s_{1},X_{n - 1}(s_{1}) \bigr) - \sigma\bigl(s_{1},X_{i - 1}(s_{1})\bigr) \bigr\vert ^{2} \\ &{}+ \bigl\vert D_{s_{1}}^{\varphi}\bigl(\sigma\bigl(s_{1},X_{n - 1}(s_{1})\bigr) - \sigma\bigl(s_{1},X_{i -1}(s_{1})\bigr) \bigr)\bigr\vert ^{2}\bigr]\,ds_{1} \\ \le& 4TK\bigl(1 + {K_{1}}\bigl( 1 + \mathbb{E}\vert \xi \vert ^{2}\bigr) \bigr)t. \end{aligned}$$ $$\begin{aligned} \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n}}( s) - {X_{i}}( s)} \bigr\vert ^{2}} \le& 3J_{1}^{( i)} + 3J_{2}^{( i)}( t) + 3J_{3}^{( i)}( t) \\ \le& 3{a_{i}} + {C_{5}}t,\quad T_{2} + t \le T. \end{aligned}$$ Next, we shall show an assertion which is analogous to Lemma 8. To state the assertion, we need to introduce several notations. Choose a positive number \(0 < \eta \le T - T_{2}\) and a positive integer \(j \geq1\), so that $$\begin{aligned} {C_{6}}\kappa( {3{a_{j}} + {C_{5}}t}) \le{C_{5}},\quad t \in[ {0,\eta} ],{\kappa _{2}}( q) = {C_{6}}\kappa( q), \end{aligned}$$ where \(C_{6}=12T\). 
Introduce the sequence of functions \({\{ {{\psi_{k}}( t)} \}_{k = 1,2, \ldots}}\), \(t \in[ {0,\eta} ]\), defined by $$\begin{aligned}& {\psi_{1}}( t) = 3{a_{j}} + {C_{5}}t, \\ & { \psi_{k + 1}}( t)= 3{a_{j + k}} + \int_{0}^{t} {{\kappa_{2}}\bigl( {{\psi _{k}}( s)} \bigr)\,ds} , \\& {\tilde{\psi}_{k,n}}( t)= \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j + k}}( s)} \bigr\vert ^{2}}. \end{aligned}$$ Now, the assertion to be proved is the following: $$\begin{aligned} {\tilde{\psi}_{k,n}}( t) \le{\psi_{k}}( t) \le{\psi_{k - 1}}( t) \le \cdots \le{\psi_{1}}( t),\quad t \in[ {0, \eta} ], \end{aligned}$$ for all positive integer k. Noticing that \({\kappa_{2}}( q)\) is a non-decreasing, concave function, and (3.13) holds, from this for \(k=1\), we work out $$\begin{aligned} {{\tilde{\psi}}_{1,n}}( t) =& \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n + 1}}( s) - {X_{j + 1}}( s)} \bigr\vert ^{2}} \\ \le& 3a_{j + 1} + {C_{6}} \mathbb{E} \int_{T_{2}}^{T_{2} + t} \bigl[ \bigl\vert b \bigl({s_{1}},X_{n}(s_{1})\bigr) - b\bigl( s_{1},X_{j}(s_{1}) \bigr) \bigr\vert ^{2} \\ & {}+ \bigl\vert \sigma\bigl( s_{1},X_{n}(s_{1}) \bigr) - \sigma\bigl(s_{1},X_{j}(s_{1})\bigr) \bigr\vert ^{2} \\ &{}+\bigl\vert D_{s_{1}}^{\varphi}\bigl( \sigma\bigl( s_{1},X_{n}(s_{1})\bigr) - \sigma\bigl(s_{1},X_{j}(s_{1})\bigr)\bigr)\bigr\vert ^{2}\bigr]\,d{s_{1}} \\ \le& 3{a_{j + 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( { \mathbb {E} {{\bigl\vert {{X_{n}}(s_{1}) - {X_{j}}(s_{1})} \bigr\vert }^{2}}} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}( {3{a_{j}} + {C_{5}} {s_{1}}} )\,d{s_{1}}} \le{\psi_{1}}( t),\quad t \in[ {0,\eta} ]. 
\end{aligned}$$ On the other hand, using (3.14) we arrive at $$\begin{aligned} {{\tilde{\psi}}_{2,n}}( t) \le& \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n + 2}}( s) - {X_{j + 2}}( s)} \bigr\vert ^{2}} \\ \le& 3{a_{j + 2}} + {C_{6}} \int_{T_{2}}^{T_{2} + t} {\kappa\bigl( {\mathbb {E} {{\bigl\vert {{X_{n + 1}}(s_{1}) - {X_{j + 1}}(s_{1})} \bigr\vert }^{2}}} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + 2}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{{ \tilde{\psi}}_{1,n}}(s_{1})} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{ \psi_{1}}(s_{1})} \bigr)\,d{s_{1}}} = {\psi_{2}}( t) \\ \le& 3{a_{j}}+{C_{5}}t = {\psi_{1}}( t),\quad t \in[ {0,\eta} ]. \end{aligned}$$ Then we have proved $$\begin{aligned} {\tilde{\psi}_{2,n}}( t) \le{\psi_{2}}( t) \le{ \psi_{1}}( t). \end{aligned}$$ Now assume that the assertion holds for some \(k \geq2\). Then, by an analogous argument, one can obtain $$\begin{aligned} {{\tilde{\psi}}_{k + 1,n}}( t) \le& 3{a_{j + k + 1}} + \int _{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( { \mathbb{E} {{\bigl\vert {{X_{n + k}}(s_{1})-{X_{j + k}}(s_{1})} \bigr\vert }^{2}}} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + k + 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{{ \tilde{\psi}}_{k,n}}(s_{1})} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + k}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{\psi _{k}}(s_{1})} \bigr)\,d{s_{1}}} = { \psi_{k + 1}}( t) \\ \le& 3{a_{j + k - 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{\psi _{k - 1}}(s_{1})} \bigr)\,d{s_{1}}} \\ =& { \psi_{k}}( t),\quad t \in[ {0,\eta} ]. \end{aligned}$$ Therefore, we obtain (3.15) for all k. By (3.15), we can define the function \(\psi( t)\) by \({\psi_{k}}( t) \downarrow\psi( t)\) (\({k \to\infty}\)). We observe that $$\begin{aligned} \psi( 0) =& \lim _{k \to\infty} {\psi_{k + 1}}( 0) \\ =& \lim _{k \to\infty} 3{a_{j + k}} = 0. \end{aligned}$$ It is easy to verify that \(\psi( t)\) is a continuous function on \([ {0,\eta} ]\).
Now by the definition of \({\psi_{k + 1}}( t)\) and \(\psi( t)\), we have $$\begin{aligned} \psi( t) =& \lim _{k \to\infty} {\psi_{k + 1}}( t) \\ =& \lim _{k \to\infty} \biggl[ {3{a_{j + k}} + \int_{0}^{t} {{\kappa_{2}}\bigl( {{ \psi_{k}}( s)} \bigr)\,ds} } \biggr] \\ =& \int_{0}^{t} {{\kappa_{2}}\bigl( {\psi( s)} \bigr)\,ds} . \end{aligned}$$ Since \(\psi( 0) = 0\), (3.16) implies \(\psi( t) = 0\), \(t \in[ {0,\eta} ]\). Therefore, we obtain $$\begin{aligned} \lim _{k \to\infty} {{\tilde{\psi}}_{k,n}}( t) =& \lim _{k \to\infty} \sup _{0 \le s \le T_{2} + t } \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j + k}}( s)}\bigr\vert ^{2}} \\ \le& \lim _{k \to\infty} \sup _{0 \le s \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j + k}}( s)}\bigr\vert ^{2}} \\ &{}+ \lim _{k \to\infty} \sup _{T_{2} \le s \le T_{2} + \eta} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j +k}}( s)} \bigr\vert ^{2}} \\ \le& \lim _{k \to\infty} {\psi_{k}}( \eta) = \psi( \eta) = 0, \end{aligned}$$ namely, $$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T_{2} + \eta} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$ But this conclusion contradicts the definition of \(T_{2}\). In other words, we have shown that \(T_{2} = T\). The proof of the existence of solutions of SDE (1.1) is complete. □ Theorem 10 Under Hypothesis 4, the path-wise uniqueness holds for (1.1), \(t\in[0,T]\). Proof Let \(X( t)\) and \(\tilde{X}( t)\) be two solutions of (1.1) on the same probability space with \(X( 0) = \tilde{X}( 0)\).
We observe $$\begin{aligned}& \mathbb{E} {\bigl\vert {X( t) - \tilde{X}( t)} \bigr\vert ^{2}} \\& \quad = \mathbb{E} {\biggl\vert { \int_{0}^{t} {\bigl( {b\bigl( {s,X( s)} \bigr) - b \bigl( {s,\tilde{X}( s)} \bigr)} \bigr)\,ds} + \int_{0}^{t} {\bigl( {\sigma\bigl( {s,X( s)} \bigr) - \sigma\bigl( {s,\tilde{X}( s)} \bigr)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\& \quad \le2\mathbb{E} {\biggl\vert { \int_{0}^{t} {\bigl({b\bigl( {s,X( s)} \bigr) - b \bigl( {s,\tilde{X}( s)} \bigr)} \bigr)\,ds} } \biggr\vert ^{2}} + 2 \mathbb{E} {\biggl\vert { \int_{0}^{t} {\bigl( {\sigma\bigl( {s,X( s)} \bigr) - \sigma\bigl( {s,\tilde{X}( s)} \bigr)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\& \quad \le8T\mathbb{E} \int_{0}^{t} \bigl(\bigl\vert {b\bigl( s, X( s) \bigr) - b\bigl( s,\tilde{X}( s) \bigr)\bigr\vert ^{2} + \bigl\vert \sigma\bigl( s,X( s) \bigr) - \sigma\bigl( s,\tilde{X}( s)\bigr)\bigr\vert }^{2} \\& \qquad {} + \bigl\vert D_{s}^{\varphi}\bigl( \sigma \bigl( s,X( s) \bigr) - \sigma\bigl( s,\tilde{X}( s) \bigr) \bigr) \bigr\vert ^{2}\bigr)\,ds. \end{aligned}$$ Combining the above inequalities with Hypothesis 4, one has $$\begin{aligned} \mathbb{E} {\bigl\vert {X( t) - \tilde{X}( t)} \bigr\vert ^{2}} \le8T \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {X( s) - \tilde{X}( s)} \bigr\vert }^{2}}} \bigr)}\,ds. \end{aligned}$$ (3.17) Then, noticing that \(\int_{0 + } {\frac{{dq}}{{\kappa( q)}}} = + \infty\), inequality (3.17) implies $$\begin{aligned} \mathbb{E} {\bigl\vert {X( t) - \tilde{X}( t)} \bigr\vert ^{2}} = 0,\quad t\in[0,T]. \end{aligned}$$ Since T is an arbitrary positive number, we obtain \(X( t) \equiv\tilde{X}( t)\) for all \(0\le t \le T\). Thus the path-wise uniqueness holds for (1.1). □

References

Øksendal, B: Stochastic Differential Equations. Springer, Berlin (2005)
Arnold, L: Stochastic Differential Equations, Theory and Applications.
Acknowledgements This work was supported by the NSF of China (11572247), the Fundamental Research Funds for the Central Universities and the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University. The authors would like to thank the referees for their helpful comments. Author information Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, 710072, China: Bin Pei & Yong Xu. Correspondence to Yong Xu.
Proof of Lemma 5 Since \(\kappa( q)\) is a concave and non-negative function, we can choose two positive constants \(a > 0\) and \(b > 0\) so that $$\kappa(q) \le a + bq,\quad q \ge0. $$ Then, by (2.1), we get $$\begin{aligned}& \mathbb{E} {\bigl\vert {\sigma( {t,X})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {b( {t,X})}\bigr\vert ^{2}} + \mathbb{E} { \bigl\vert {D_{t}^{\varphi}\sigma( {t,X})} \bigr\vert ^{2}} \\& \quad \le2\mathbb{E}\bigl( {{{\bigl\vert {\sigma( {t,0})} \bigr\vert }^{2}} + {{\bigl\vert {b( {t,0})}\bigr\vert }^{2}}} +{ \bigl\vert {D_{t}^{\varphi}\sigma( {t,0})} \bigr\vert ^{2}}\bigr)+ 2\mathbb{E} {\bigl\vert {\sigma( {t,X}) - \sigma( {t,0})} \bigr\vert ^{2}} \\& \qquad {} + 2\mathbb{E} {\bigl\vert {b( {t,X}) - b( {t,0})} \bigr\vert ^{2}} + 2\mathbb{E} {\bigl\vert {D_{t}^{\varphi}(\sigma( {t,X})-\sigma({t,0}))} \bigr\vert ^{2}} \\& \quad \le2\sup _{0 \le t \le T} \mathbb{E}\bigl( {{{\bigl\vert {\sigma( {t,0})} \bigr\vert }^{2}} + {{\bigl\vert {b( {t,0})} \bigr\vert }^{2}} + {{\bigl\vert {D_{t}^{\varphi}\sigma( {t,0})} \bigr\vert }^{2}}} \bigr) + 2\kappa\bigl( {\mathbb{E} {{\vert X \vert }^{2}}} \bigr) \\& \quad \le K\bigl( {1 + \mathbb{E} {{\vert X \vert }^{2}}} \bigr), \end{aligned}$$ where \(K = \max[ {2\sup _{0 \le t \le T} \mathbb {E}( {{{\vert {\sigma( {t,0})} \vert }^{2}} + {{\vert {b( {t,0})} \vert }^{2}} + {{\vert {D_{t}^{\varphi}\sigma( {t,0})} \vert }^{2}}}) + 2a,2b} ] < + \infty\). □ We now show, by mathematical induction, that $$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k}}( t)} \bigr\vert ^{2}} \le3\mathbb{E} {\vert \xi \vert ^{2}}\sum _{l = 0}^{k} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} + \sum_{l = 1}^{k} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} \end{aligned}$$ (A.1) holds for \(t \in[0,T]\), \(k = 1,2, \ldots\) .
Clearly, by Lemma 3 and Lemma 5, we arrive at $$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{1}}( t)} \bigr\vert ^{2}} \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 3 \mathbb{E} {\biggl\vert { \int_{0}^{t} {b\bigl( {s,{X_{0}}( s)} \bigr)}\,ds} \biggr\vert ^{2}} + 3\mathbb{E} {\biggl\vert { \int_{0}^{t} {\sigma\bigl( {s,{X_{0}}( s)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12T\mathbb{E} \int_{0}^{t} {\bigl( {{{ \bigl\vert {b \bigl({s,{X_{0}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {\sigma\bigl( {s,{X_{0}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {D_{s}^{\varphi}\sigma\bigl( {s,{X_{0}}( s)} \bigr)} \bigr\vert }^{2}}} \bigr)\,ds} \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12KTt\bigl( {1 + \mathbb{E} {{\vert \xi \vert }^{2}}} \bigr). \end{aligned}$$ Now, assume that (A.1) holds for k, then we have, for \(k+1\), $$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k + 1}}( t)} \bigr\vert ^{2}} \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 3\mathbb{E} {\biggl\vert { \int_{0}^{t} {b\bigl( {s,{X_{k}}( s)} \bigr)}\,ds} \biggr\vert ^{2}} + 3\mathbb{E} {\biggl\vert { \int_{0}^{t} {\sigma\bigl( {s,{X_{k}}( s)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12T\mathbb{E} \int_{0}^{t} {\bigl( {{{ \bigl\vert {b \bigl({s,{X_{k}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {\sigma\bigl( {s,{X_{k}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {D_{s}^{\varphi}\sigma\bigl( {s,{X_{k}}( s)} \bigr)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12KT \int_{0}^{t} {\bigl( {1 + \mathbb{E} {{\bigl\vert {{X_{k}}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& 3 \mathbb{E} {\vert \xi \vert ^{2}} + 12KT \int_{0}^{t} {\Biggl( {1 + 3\mathbb{E} {{\vert \xi \vert }^{2}}\sum_{l = 0}^{k} { \frac{{{{( {12KT})}^{l}}}}{{l!}}} {s^{l}} + \sum_{l = 1}^{k} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {s^{l}}} \Biggr)}\,ds \\ =& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12KTt + 3\mathbb{E} {\vert 
\xi \vert ^{2}}\sum_{l = 1}^{k + 1} { \frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} + \sum_{l = 2}^{k + 1} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} \\ =& 3\mathbb{E} {\vert \xi \vert ^{2}}\sum_{l = 0}^{k + 1} { \frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} + \sum_{l = 1}^{k + 1} {\frac {{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}}. \end{aligned}$$ Therefore, by induction, (A.1) holds for all k. Setting \({C_{1}} = 3( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}} )\exp( {12K{T^{2}}})\), (A.2) implies (3.2). □ Pei, B., Xu, Y.: On the non-Lipschitz stochastic differential equations driven by fractional Brownian motion. Adv. Differ. Equ. 2016, 194 (2016). https://doi.org/10.1186/s13662-016-0916-1 Keywords: fractional Brownian motion; existence and uniqueness; stochastic differential equations; non-Lipschitz condition
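The uniform bound \(C_{1} = 3(1 + \mathbb{E}\vert\xi\vert^{2})\exp(12KT^{2})\) dominates the right-hand side of (A.1) for every k and every \(t \in [0,T]\), since each partial sum of the exponential series is bounded by the full series. This can be checked numerically; a minimal sketch (the values of K, T and \(\mathbb{E}\vert\xi\vert^{2}\) are illustrative, not taken from the paper):

```python
import math

def bound_A1(K, T, t, k, E_xi_sq):
    """Right-hand side of (A.1):
    3*E|xi|^2 * sum_{l=0}^{k} (12KT)^l t^l / l!  +  sum_{l=1}^{k} (12KT)^l t^l / l!."""
    x = 12 * K * T * t
    s0 = sum(x**l / math.factorial(l) for l in range(k + 1))
    s1 = sum(x**l / math.factorial(l) for l in range(1, k + 1))
    return 3 * E_xi_sq * s0 + s1

def C1(K, T, E_xi_sq):
    """Uniform bound claimed in the appendix: 3*(1 + E|xi|^2) * exp(12*K*T^2)."""
    return 3 * (1 + E_xi_sq) * math.exp(12 * K * T * T)

# Illustrative values; the inequality holds for every k and every t in [0, T].
K, T, E_xi_sq = 0.5, 1.0, 2.0
c1 = C1(K, T, E_xi_sq)
assert all(bound_A1(K, T, t, k, E_xi_sq) <= c1
           for k in range(0, 20) for t in (0.0, 0.5, 1.0))
```

The check mirrors the closing step of the proof: \(3A\,S_{0} + S_{1} \le 3A e^{x} + e^{x} \le 3(1+A)e^{x}\), where \(S_{0}, S_{1}\) are the two partial sums and \(x = 12KT\cdot t \le 12KT^{2}\).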
Engaging with community-based public and private mid-level providers for promoting the use of modern contraceptive methods in rural Pakistan: results from two innovative birth spacing interventions Syed Khurram Azmat1,2,3, Waqas Hameed3, Hasan Bin Hamza4, Ghulam Mustafa3, Muhammad Ishaque3, Ghazunfer Abbas3, Omar Farooq Khan3, Jamshaid Asghar3, Erik Munroe5, Safdar Ali3, Wajahat Hussain3, Sajid Ali3, Aftab Ahmed3, Moazzam Ali6 & Marleen Temmerman1 Reproductive Health volume 13, Article number: 25 (2016) Family planning (FP) interventions aimed at reducing population growth have had negligible impact during the last two decades in Pakistan. Innovative FP interventions that help reduce the growing population burden are the need of the hour. Marie Stopes Society - Pakistan implemented an operational research project, 'Evidence for Innovating to Save Lives', to explore effective and viable intervention models that can promote healthy timing and spacing of pregnancy in rural and under-served communities of Sindh, Punjab and Khyber Pakhtunkhwa provinces of Pakistan. We conducted a quasi-experimental (pre- and post-intervention with control arm) study to assess the effectiveness of each of two intervention models, 1) the Suraj model (meaning 'Sun' in English), which uses social franchises (SF) along with a demand-side financing (DSF) approach using free vouchers, and 2) the Community Midwife (CMW) model, in promoting the use of modern contraceptive methods compared to respective controls. Baseline and endline cross-sectional household surveys were conducted, 24 months apart, recruiting 5566 and 6316 married women of reproductive age (MWRA) respectively. We used Stata® version 8 to estimate the net effect of the interventions on outcome indicators through difference-in-differences analysis.
Multivariate Cox proportional hazards regression analysis was used to assess the net effect of the intervention on current contraceptive use, keeping time constant and adjusting for other variables in the model. The Suraj model was effective in significantly increasing awareness of FP methods among MWRA by 14 percentage points, current contraceptive use by 5 percentage points and use of the long-term modern method, the intrauterine device (IUD), by 6 percentage points. The CMW model significantly increased contraceptive awareness by 28 percentage points, ever use of contraceptives by 7 percentage points and IUD use by 3 percentage points. Additionally, the Suraj intervention led to a 35 % greater prevalence (prevalence ratio: 1.35, 95 % CI: 1.22–1.50) of contraceptive use among MWRA. The Suraj intervention highlights the importance of embedding subsidized FP services within the communities of the beneficiaries. The CMW intervention also improved the use of long-term contraceptives. These findings indicate the necessity of designing and implementing FP initiatives involving local mid-level providers to expand contraceptive coverage in under-served areas. Population growth in Pakistan presents significant challenges. Contraceptive prevalence rates (CPR) and fertility rates have largely remained unchanged, or have shown slow and insufficient improvement, during the last two decades [1]. Currently Pakistan has an estimated population of over 190 million people [2] and is the sixth most populous country in the world [2, 3]. A high population burden in developing countries with limited resources, such as Pakistan, makes resource allocation to health and development all the more difficult in the presence of other competing necessities [3, 4]. The challenge of high population growth in Pakistan necessitates the deployment of innovative plans that are effective in curtailing the future increase in population.
There is a rural–urban differential in key fertility and family planning (FP) indicators in Pakistan. The total fertility rate (TFR) is high, recorded at 3.8 births per woman between 2010 and 2012 [5]. Urban–rural stratification indicates the TFR in rural areas (4.2 births per woman) to be considerably higher than in urban areas (3.2 births per woman) [5]. Additionally, the Pakistan Demographic and Health Survey (PDHS) 2012–13 reports a current CPR of 35 % for all contraceptive methods and a CPR of 26 % for modern method use, with a nearly 1.5-fold urban (44.8 %) to rural (30.7 %) differential [5]. A high TFR combined with traditionally low CPR levels has resulted in a high unmet need for contraception in Pakistan [6], indicated by the 20 % of currently married women of reproductive age (15–49) who desire to delay or limit their next birth [5]. The penetration of FP interventions in rural areas has remained lower than in urban settings, as demonstrated by the higher TFR and unmet need in rural areas [7]. With close to 63 % of the population living in rural areas in Pakistan [5, 8], there is considerable room for introducing FP interventions in targeted rural communities. The World Health Organization (WHO) recommends engaging the private sector in FP promotion, considering its role in health care, including reproductive health (RH) service delivery, in most settings [9]. In Pakistan's context, with less than half (45 %) of FP service provision occurring through the public sector, the private sector is meeting a significant proportion of contraceptive demand in the country [10]. However, the involvement of the private sector in FP promotion and delivery, although desirable, has limitations. The price and quality of family planning products, especially long-term products, vary and are a constraint for potential FP method users in low income countries [9].
The World Health Organization (WHO) has suggested that, in order to overcome the lack of contraceptive services in regions of the world, the implementation of contracting out, social franchising and voucher schemes is of value [9]. Social franchising (SF) in combination with demand-side financing (DSF) based free vouchers is an approach advocated to overcome financial constraints in order to increase access to, and uptake of, FP services [11]. Social franchises are mid-level private-provider networks and are considered to be effective business models with the potential to rapidly expand health services, promote access and contribute to national health goals [9]. Integration of FP service provision with existing public sector health service delivery mechanisms at the community level is an alternative approach, aiming to increase FP access and uptake for underserved communities. The National Maternal, Newborn and Child Health (MNCH) program of Pakistan aims to improve MNCH indicators by deploying community based health workers known as Community Midwives (CMWs) [12]. CMWs are selected from the communities in which they are most likely to stay and work [10]. These CMWs are trained to provide individualized care to pregnant women, monitor their physical, emotional and social well-being, take appropriate action within available resources, provide guidance to community members about maternal health issues, identify conditions necessitating referrals and make those referrals to the relevant practitioner [10]. The Lady Health Worker (LHW) program was assigned a role parallel to the MNCH-led CMW program; in other words, the LHWs' role was expanded to make childbirth/delivery referrals to CMWs as well as to facilitate the use of FP services by women in their catchment areas [12].
However, the available evidence suggested that the CMW program had difficulty showing significant improvements in maternal health indicators due to weak linkages between these two programs [12]. Integration of FP service provision with existing CMW-provided reproductive health services can possibly ensure a continuum of care for the recipients. The training of CMWs and their close proximity to women have the potential to improve contraceptive access and uptake. Enhancing the availability of these products and services in underserved areas is essential to improving national-level FP indicators such as the contraceptive prevalence rate, including modern contraceptive uptake, and to reducing unmet need. The delivery of these products and services has brought about improvements in FP method uptake in urban settings [5]. It is essential to devise ways and means that address this problem in rural, hard-to-reach and underserved areas. In order to produce evidence-based learning for policy and practice for maternal and newborn health in Pakistan, the Department for International Development (DFID) of the British Government and the Australian Agency for International Development (AusAID) jointly funded a Maternal and Newborn Health Programme in Pakistan called the 'Research and Advocacy Fund (RAF)', with the central objective of "Improved practices and supporting policies related to MNH affecting poor and marginalized people in Pakistan. The objective was to be achieved through large and small grants for Research and Advocacy in MNH; thereby linking evidence to policy and practice" [13].
Under this initiative, two intervention models were designed by Marie Stopes Society Pakistan as research initiatives for RAF funding, namely 1) social franchising in combination with demand-side financing led voucher schemes, and 2) integration of FP services, long-term methods in particular, with the existing reproductive health services provided by community midwives (CMWs) at the community level; both present an opportunity to test interventions aimed at improving FP indicators in hard-to-access remote areas [14, 15]. Recent evidence from Pakistan shows that family planning interventions incorporating social franchising in combination with a voucher scheme have been instrumental in raising awareness and enhancing the use of intrauterine devices (IUDs) in study areas [10]. Prior to recommending a similar scaling up of this approach at the national level, given the variation in social and health seeking practices in different geographical areas of Pakistan, it was important to assess whether these findings are replicable in other districts as well. In this context, therefore, Marie Stopes Society (MSS) - Pakistan implemented a 41-month (including 24 months of intervention) operational research project titled 'Evidence for Innovating to Save Lives' [14–17]. The project's aim was to explore effective and viable intervention models to promote healthy timing and spacing of pregnancies in rural and under-served communities of Sindh, Punjab and Khyber Pakhtunkhwa (KP) provinces in Pakistan [14–17]. Objectives of the research project The study was conducted 1) to assess and compare the effectiveness of an intervention model, a private provider partnership, i.e. the Suraj social franchise model, with a control group, and 2) to assess and compare the effectiveness of an intervention model, FP integration into the existing MNCH services provided by community midwives, with a control group, in promoting the use of modern contraceptive methods.
Box 1 Primary and secondary outcomes Intervention description a) Study setting: Intervention and control arms The project investigators employed a quasi-experimental (pre- and post-intervention with control) mixed-method research study with sequential implementation at the design level [14, 17]. The overall study design comprises two qualitative and two quantitative data collection components or surveys. The present paper describes only the quantitative 2a and 2b surveys, i.e. the baseline and endline comparison on selected indicators, as presented in the study design flow chart (Fig. 1): Overall study design flow chart The study was conducted in eight districts of Sindh, Punjab and KP provinces of Pakistan. Within the districts, Marie Stopes identified rural and under-served Union Councils (UCs) for inclusion in the study. Districts were selected based on key socioeconomic, demographic and reproductive health indicators (Table 1). Interventions were purposefully allocated: in Sindh, district Naushero Feroze was selected as an intervention (Suraj model) district and Nawabshah as a control district. In Punjab province, districts Pakpattan and Rajanpur were selected as intervention districts for the CMW model while district Khanewal was identified as an intervention district for the Suraj model, with district Bahawalpur serving as the control district. For KP, district Haripur served as an intervention (Suraj model) district while district Abbottabad was the control. Table 1 Comparability of intervention and control districts b) Suraj model - intervention arm MSS established a private health providers' network branded as 'Suraj' (meaning 'Sun' in English) in the intervention districts [10].
The model is a partnership between MSS and private local health service providers (mainly mid-level) for the provision of quality contraceptive services. Ten Suraj providers per district were selected. Each Suraj provider operated a health care facility covering a population ranging from 12,000 to 16,000 residing within a 3–4 km radius around the health facility. The Suraj providers were located at an average distance of 40–50 km from District Head Quarter (DHQ) hospitals. In order to minimize any spill-over effect between areas of Suraj providers, it was ensured that the minimum distance between any two providers was large enough. The selection and training of Suraj providers was a three-step process. First, mapping of districts was conducted to ascertain the existing number of health care facilities and providers in a given district. Second, providers were selected for training by arranging individual meetings with MSS field teams and collecting information on provider eligibility criteria (see Table 2) [10]. For the details of the Suraj intervention components, refer to Table 4. Table 2 Provider eligibility criteria - Suraj intervention model Third, Suraj SF providers were trained to improve their skills for the provision of quality FP services and to enable them to look after the business side of their ventures. c) CMW model - intervention arm In contrast to the Suraj model, the community midwife (CMW) intervention model was an arrangement between MSS and CMWs for the provision of quality contraceptive services in the community. We obtained a list of CMWs from the Maternal, Newborn and Child Health (MNCH) program, and ten CMW providers for each district were selected. Each CMW provider covered a population ranging from 7,000 to 12,000 residing within a 3–4 km radius around the facility operated by the provider. The CMW providers were located at an average distance of 40–70 km from the district headquarter hospital.
The selection of CMWs ensured a minimum distance between any two CMW providers in order to minimize any spill-over effects. The selection and training of CMW providers was also a three-step process, similar to that adopted for Suraj providers. CMW provider eligibility criteria are listed in Table 3. For the details of the CMW intervention components, refer to Table 4. Table 3 Provider eligibility criteria - CMW intervention model Table 4 Intervention components d) Control arm The recruitment of providers in control districts was a three-step process. First, mapping was initiated to get information on the existing number of health care and family planning (FP) facilities and providers in terms of distance and accessibility to women. Second, an MSS team comprising district and regional personnel identified the Union Councils (UCs) based on locally available records. Within each Union Council, an MSS team member met with different key stakeholders, such as pharmacists, drug stores, UC mayors, farmer-councilors, community based organizations, influential personalities and others, to capture key information on population, location of private providers, Union Council boundaries, number of schools, male and female literacy, and number of healthcare centers such as basic health units, rural health centers and tertiary care hospitals. Third, a series of meetings with each provider/facility was conducted to invite the providers to participate in the study. Providers were considered eligible for participation provided the following criteria were met: a) Health facility owned or staffed by a female; b) provider lived in the same community; c) provider was interested in providing family planning services; d) provider must have formal medical qualifications; e) there must be adequate facility infrastructure (e.g.
space to perform family planning services, availability of required instruments/equipment and essential amenities such as running water, electricity, sanitation and waste disposal facilities); and f) provider must be willing to adhere to the study protocol for control sites (i.e. record keeping and reporting). The providers in the control arm were not given any exposure to the study interventions. A total of 3 rural health centers, 10 basic health units and 14 CMWs were recruited for this study. Each facility/provider was located approximately 30 km away (in any direction) from the district headquarter hospital, in a predominantly rural area, and covered a population ranging between 8,000 and 12,000 for CMWs and between 35,000 and 40,000 for basic and rural health centers. The minimum distance between any two facilities/providers was large enough to avoid a spill-over effect. For the details of the intervention components, refer to Table 4. Study duration As mentioned earlier, the research project was a 41-month initiative commencing in October 2010 and ending in March 2014, including 24 months of intervention (i.e. service provision) [14, 17]. Endline evaluation study design Pre- (baseline) and post-intervention (endline) cross-sectional surveys were conducted to assess the impact of the interventions on the use of modern contraceptive methods. A baseline household survey was conducted prior to the implementation of the interventions (a benchmark for future evaluations of the project's key performance indicators). Towards the end of the project interventions, an endline cross-sectional household survey was conducted to gauge the impact of the two interventions by measuring the same set of indicators, including reproductive health and family planning awareness, behavior and practices of the respondents. a) Study participants At the baseline, married women of reproductive age (MWRA) between 15 and 49 years of age with at least one child less than 2 years of age were included in the study and interviewed.
The endline survey included two groups of MWRA: 1) MWRA between 15 and 49 years of age with at least one child less than 2 years of age, and 2) MWRA between 15 and 49 years of age irrespective of the number of children. MWRA who were mentally or physically handicapped and unable to give an interview, who refused to provide informed consent, or who were unmarried/separated/widowed were excluded. b) Sample size The overall sample size for the baseline survey was 5566, comprising 1995, 1435 and 2136 MWRA recruited from the Suraj, community midwife (CMW) and control catchment areas respectively. For the baseline, a minimum of 70 interviews were conducted per cluster or service provider catchment area. For the endline, sample size calculations were run separately for two groups based on the anticipated change in CPR: 1) MWRA and 2) MWRA with a child under the age of 2 years. The key indicator (contraceptive prevalence rate - CPR) being assessed in each required a separate sample size calculation. Sample size was calculated for treatment groups rather than districts. The calculation is presented below: $$ n=\frac{\mathrm{deff}\times \frac{(z_{\alpha}+z_{\beta})^{2}\,[\,p_{1}(1-p_{1})+p_{2}(1-p_{2})\,]}{\delta^{2}}}{1-l}=\frac{2\times \frac{(2.241+0.842)^{2}\,[\,0.303(1-0.303)+0.403(1-0.403)\,]}{0.1^{2}}}{1-0.1} $$ The sample size for MWRA with young children was based on a comparison between odds ratios. We took the most conservative measure, adjusted for a pooled p of 0.05, that resulted in non-overlapping CIs based on a 10 % increase from the mean baseline modern CPR figures in the intervention arms compared to a 2 % increase from the mean baseline modern CPR in controls. Table 5 below shows the estimated sample for the two different types of respondents by district.
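Plugging the displayed values into the formula gives the per-group sample size before rounding; a minimal sketch (parameter names are descriptive labels, not from the paper):

```python
def sample_size(deff, z_alpha, z_beta, p1, p2, delta, loss):
    """Two-proportion sample size:
    n = deff * (z_a + z_b)^2 * [p1(1-p1) + p2(1-p2)] / delta^2,
    inflated for anticipated loss/non-response by dividing by (1 - loss)."""
    core = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / delta ** 2
    return deff * core / (1 - loss)

# Values as shown in the displayed calculation above.
n = sample_size(deff=2, z_alpha=2.241, z_beta=0.842,
                p1=0.303, p2=0.403, delta=0.1, loss=0.1)
print(round(n))  # prints 954 (per-group size before any further adjustment)
```

In practice the result is rounded up and then split across clusters, as described in the text.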
Table 5 Names of districts and number of interviews c) Sampling strategy We used the probability proportional to population size (PPS) technique within each of the three study arms to select study participants. Each target area of the study districts was considered a separate stratum. Data collection was conducted within the same catchment population of the study sites for both the baseline and the endline surveys. Prior to data collection, all the households (within a 4–5 km radius) around each selected healthcare facility were independently allotted a unique identifier. A list of households with unique identifiers in the intervention and control areas comprised the sampling frame, from which households were selected using simple random sampling in the Statistical Package for the Social Sciences (SPSS) version 17.0. A household was considered the primary sampling unit at both the baseline and endline surveys. If more than one MWRA meeting the survey criteria was identified in a randomly selected household, only the first one (or, if she refused, the next one) was recruited for data collection. d) Data collection and management The baseline data were collected during March-July 2011, while the endline survey was conducted between July and August 2013. We adapted the questionnaire from the Pakistan Demographic and Health Survey (PDHS) 2006–07, with modifications to measure the use of any contraceptive method. The questionnaire was designed to capture information on socio-demographic characteristics, awareness of reproductive health (RH) and family planning (FP), FP practices, health seeking behaviours, health care access and FP needs of study participants. The questionnaires were translated into Urdu and pre-tested prior to the commencement of data collection. Completed questionnaires were checked for completeness and logical errors. Reliability checks helped ensure the consistency of the data. The Principal and Co-Investigators routinely visited the field to ensure the quality of data.
Forms were checked for logical errors, missing values, and unclear responses during those visits. All survey data were double-entered, using a specifically designed data entry programme in FoxPro version 6.0, to ensure data quality and minimize entry and logical errors. e) Data analysis We used SPSS software version 17.0 to analyse the data and generate tables from a list of survey variables for descriptive analysis. The analysis was performed for MWRA with a child less than 2 years of age, a sub-group of the sample. This was done to ensure comparability with the baseline information collected on a similar group of MWRA. Descriptive statistics were computed for socio-demographic variables and potential associated factors. Frequencies, proportions, means and standard deviations were obtained as appropriate. Where needed, continuous variables were categorized at meaningful cut-off points, and variables such as total number of children, years of education and age were grouped into commonly used categories. Stata version 8 was used to assess the effect of the interventions (Suraj SF model vs. control and CMW model vs. control) on outcome indicators through difference-in-difference (DID) analysis. Univariate DIDs were estimated using the following steps: a) first, we calculated the change (from baseline to endline) in the control arm and the change (from baseline to endline) in the intervention arm; b) we then estimated the net effect of the intervention by subtracting the change in the control arm from the change in the intervention arm. A similar procedure was followed for the other key indicators. We conducted multivariable analysis, to determine factors associated with current contraceptive use (dependent variable) in each intervention arm, using Cox proportional hazards regression with time held constant, adjusting for clusters.
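The two DID steps just described reduce to a double subtraction. In the sketch below the intervention-arm figures are the reported Suraj CPR values (34.0 % to 47.7 %), while the control-arm figures are hypothetical, chosen only so that the net effect matches the reported 5-point net increase:

```python
def did(interv_base, interv_end, control_base, control_end):
    """Difference-in-differences: change in the intervention arm
    minus change in the control arm."""
    return (interv_end - interv_base) - (control_end - control_base)

# Suraj CPR 34.0 -> 47.7 (reported); control 30.0 -> 38.7 (hypothetical)
net = did(34.0, 47.7, 30.0, 38.7)
print(round(net, 1))  # 5.0 percentage points
```

The same subtraction is applied indicator by indicator; significance testing and cluster adjustment were handled separately in Stata.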
Prevalence ratios with 95 % confidence intervals (CI) were computed for each independent variable, with the likelihood ratio test used to assess the significance of estimated regression coefficients. Variables with p < 0.25 on univariate analysis were considered for a stepwise multivariate analysis. The Wald statistic and likelihood ratio test were used to assess the significance of variables and models respectively, towards obtaining a parsimonious and meaningful model. The analysis was adjusted for independent variables such as age, education, province, number of children and socio-economic status. F: Ethics statement Verbal and written (participants' signature or thumb impression) informed consent was obtained from the study respondents. Personal identifiers were not recorded to ensure confidentiality. Designated authorized personnel kept completed hard copies of the questionnaires in safe keeping. The electronic version of the data was stored on password-protected computers. The project was approved by the Program Oversight Committee of the Research and Advocacy Fund (RAF). Ethical approval for the research study was provided by the National Bioethics Committee (NBC) of Pakistan (Ref no: 4-87/10/NBC-43/RDC/). Note: Brief information describing the design and methods of the study is also published in separate paper(s) [17]. We present findings for married women of reproductive age (MWRA), with a child less than 2 years of age, recruited at baseline (5566) and endline (2892). a) Socio-demographic characteristics Table 6 describes the socio-demographic characteristics of MWRA. The mean age of MWRA at the baseline and endline was 28.0 ± 5.5 and 29.1 ± 5.6 respectively. The average marriage age (age at first marriage) of MWRA at the two time points was around 20 years. Illiteracy proportions for MWRA demonstrated a drop of 8 % points at the endline (Table 6). The proportion of MWRA who reported working increased slightly, by 2.7 %, at the endline.
A concomitant increase in unskilled employment and agricultural work by their husbands is also noted (Table 6) and might be explained by an increase in seasonal agricultural work. Table 6 Socio-demographic characteristics of respondents b) Contraceptive methods awareness At the baseline, awareness was relatively lower in community midwives - CMW areas than Suraj areas; however, at endline awareness about pills, condoms, injectables, and IUDs increased to above 80 % in both the intervention arms, leading to a greater increase in overall awareness (from 61.3 % at baseline to 94.4 % at endline) in CMW areas than Suraj areas. In Suraj intervention areas, overall awareness about contraceptive methods improved from the baseline (77.6 %) to endline (97.6 %) (p < 0.001). The largest increase in awareness levels was reported for Intra Uterine Devices - IUDs (absolute percentage change: 29.2 %) followed by contraceptive pills (absolute percentage change: 25.2 %). Male sterilization and implants were the least known methods across the three study groups (Table 7). Table 7 Contraceptive method awareness among MWRA c) Ever and current contraceptive use Ever and current contraceptive use patterns for MWRA are described in Table 8. MWRA in the Suraj intervention arm reported a 13.7 % points increase in current contraceptive use from 34.0 % at baseline to 47.7 % at end line (p < 0.0001). Current contraceptive use in the CMW intervention arm increased from 17 % at baseline to 24.6 % at endline (p < 0.0001). Method mix indicates that the use of modern methods among the MWRA in the Suraj intervention model increased by 7.6 % (p < 0.0001) while it increased by 6.1 % (p < 0.0003) among MWRA in the CMW intervention model (Table 8). Table 8 Ever and current contraceptive use reported by MWRA d) Impact analysis Suraj intervention versus control At the endline, the CPR of the Suraj intervention group was 48 % and resulted in a net CPR increase of 5 % (p < 0.05) (Table 9). 
This net increase in CPR among MWRA of the Suraj intervention model can be explained by the significantly increased use of IUDs (6 %) (p < 0.001), and a concomitant significant reduction in use of withdrawal method (−1 % p < 0.001) and condoms (−3 % p < 0.001) (Table 9). A net increase of 14 % (p < 0.01) among MWRA who had heard about contraception demonstrates positive impact of the Suraj intervention model on contraceptive awareness (Table 9). Table 9 Difference-in-difference results for key indicators between Suraj intervention arm and control arm Community midwives - CMW intervention versus control The results demonstrate a significant positive effect of CMW intervention on contraceptive awareness, ever use, and use of modern long term contraceptive methods such as IUDs (Table 10). The net CPR in the CMW areas remained unchanged from the baseline to the endline (Table 10). However, modern methods usage showed a net significant increase of 3 % in IUD use. Additionally, a net decrease of 4 % in withdrawal usage was also observed in CMW intervention areas (Table 10). A similar positive effect of CMW intervention was observed on contraceptive awareness which increased by a net 28 % (p < 0.001). Table 10 Difference-in-difference results for key indicators between CMW intervention arm and control arm e) Factors associated with current contraceptive use Suraj intervention model MWRA in Suraj intervention arm had a 35 % greater prevalence (prevalence ratio (PR): 1.35, 95 % Confidence interval (CI): 1.22–1.50) of current contraceptive use compared to their counterparts in the control arm while adjusting for other factors (Table 11). 
Older MWRA (35+ years, PR: 1.21, 95 % CI:1.05–1.39), those with education (1–8 years [PR: 1.22, 95 % CI:1.07–1.38] and secondary to higher [PR: 1.43, 95 % CI: 1.26–1.62]) and greater number of children (3–4 [PR: 1.42, 95 % CI: 1.26–1.60] and 5+ [PR: 1.54, 95 % CI: 1.34–1.76]) were more likely to use contraception than their counterparts in the control arm (Table 11). Table 11 Multivariate-Cox proportional hazard analysis of factors associated with current contraceptive use among married women with at least one child < 2 years in Suraj intervention and control areas across Pakistan Community midwives - CMW intervention model MWRA aged more than 35 years had significantly increased prevalence ratios of current contraceptive use (PR: 1.36, 95 % CI: 1.07–1.72), followed by MWRA aged 31 to 35 years (PR: 1.25, 95 % CI: 1.00–1.54). The CPR showed significant increase with higher levels of education i.e. current contraceptive use was highest among MWRA having secondary or higher education (PR: 2.26, 95 % CI: 1.91–2.66) followed by MWRA with 1–8 years of education (PR: 1.86, 95 % CI: 1.59–2.18) compared to those without any education. MWRA with 5 or more children had significant increase in current contraceptive use (PR: 1.81, 95 % CI: 1.45–2.27) as compared to MWRA having no children or up to 2 children (Table 12). Table 12 Multivariate-Cox proportional hazard analysis of factors associated with current contraceptive use among married women with at least one child < 2 years in CMW intervention and control areas across Pakistan Findings of this quasi-experimental study demonstrate that both FP intervention models, i.e. 
the Suraj SF model along with demand-side financing vouchers, and the model integrating FP services with existing CMW providers at the community level, are effective in improving key FP indicators such as women's awareness of FP methods, ever use and current use of contraceptives, as well as the use of long-term contraceptives such as IUDs, in hard-to-reach remote areas of Pakistan. Our findings indicate that the Suraj SF intervention model was instrumental in eliciting a significant net increase of about 14 % (p < 0.001) in awareness about FP methods among MWRA. Previous evidence from Pakistan shows that FP interventions, such as social franchising incorporating FP service delivery through a voucher scheme, have been successful in raising FP awareness and the use of IUDs, including improvements in IUD continuation rates [10, 18, 19]. In 2012 in Pakistan, a social franchising initiative along with free contraceptive vouchers significantly increased the awareness of modern contraceptives among women by 5 % in intervention areas [10]. Increases in CPR are reportedly rooted in increased awareness of all contraceptive methods, especially modern methods, and of places to obtain them [20]. Additionally, we found a net increase of 5 % (p < 0.001) in current contraceptive use and a 28.5 % increase in ever use of modern contraceptives. The results corroborate an earlier study conducted in Pakistan in similar settings [10]. In our study, IUD use among MWRA in the Suraj SF intervention recorded a net increase of 6 % (p < 0.001), similar to a significant increase of 11.1 % previously recorded in the uptake of IUDs by MWRA when promoted with vouchers [10]. The significantly increased usage of IUDs in Suraj intervention areas may be explained by the accompanying decrease in the usage of the traditional method of withdrawal.
Regression analysis further identified that the Suraj SF intervention model led to a significantly greater (35 %) prevalence of current contraceptive use among MWRA compared with control. Current contraceptive use in Suraj intervention areas was also found to be associated with higher education, parity and socio-economic status. These observed changes indicate the effectiveness and reach of the Suraj interventions in increasing contraceptive use among MWRA in Suraj areas. From a programmatic viewpoint, an encouraging aspect of the increased knowledge and awareness of MWRA about contraceptive methods is the successful awareness-raising efforts by MSS Field Health Educators, who were the agents of information in this regard. This is encouraging programmatically since it shows that enhancing the capacity and efficacy of frontline workers can greatly impact perceptions about contraception that ultimately translate into increased and consistent usage of these methods [21, 22]. Increased awareness has also translated into understanding and demand creation for long-term rather than short-term contraceptive methods, indicated by enhanced IUD-related awareness and usage. The effectiveness of the Suraj SF intervention model is a critical finding due to the importance of long-term contraceptive use in ensuring desirable spacing between births and reducing unwanted pregnancies. The CMW model was successful in several aspects. We found that the CMW intervention model had a significant positive effect on contraceptive awareness and ever use. The CMW intervention model increased contraceptive awareness among MWRA by a net 28 % (p < 0.001) in the intervention arm from baseline to endline. The CMW model also resulted in an 8 % increase in CPR between baseline and endline. However, the net effect was nullified when CPR in the control arm was taken into account. We also found that while the CMW model did not significantly affect the overall CPR, i.e.
both traditional and modern methods combined, the model did significantly increase the use of the long-term contraceptive method, IUDs, among MWRA by a net 3 % (p < 0.05). The method mix of modern contraceptive use highlights a shift towards long-term contraceptive methods among MWRA in CMW areas. The CPR was associated with higher levels of education in the CMW intervention model, where MWRA with secondary or higher education had a 2.26 times greater prevalence of current contraceptive use. Considering the greater prevalence of current contraceptive use among MWRA who have at least 3 children, are educated and are older than 30 years in CMW intervention areas, it appears that these determinants are driving the 3 % net increase in IUD use in the CMW arm. Of the two intervention arms, the Suraj intervention model showed the most encouraging results. Current contraceptive use was the highest at 48 %, a proportion that is exceptionally encouraging since it is 13 % higher than the national average [5], pointing towards the effectiveness of the strategies adopted through this intervention. Women were generally appreciative of the quality of counseling in managing side effects, the resultant fewer clinic visits, and the availability of free FP (IUD) services [19]. They highly valued cleanliness, privacy and confidentiality, sterilization of instruments, and ease of communication with Suraj providers [19]. On the provider side, IUD insertion and infection-prevention training have been reported to enhance provider ability in providing IUD services while at the same time having a positive impact on their reputation in local communities [19]. Suraj providers have previously identified that the role of female and male community mobilizers is of critical importance in mobilizing the community and increasing their FP clientele [19].
The impact on contraceptive use by MWRA in Suraj areas and specifically the significant increase in IUD use by MWRA in both the Suraj and CMW intervention arms is indicative of a need to adopt similar strategies for public contraceptive promotion programs. In addition both intervention models also demonstrated high IUD method continuation rates [16, 17], providing a strong rationale for scaling up of Suraj as well as CMW intervention at the national level to promote modern contraception. An earlier study also documented similar improvements in the IUD continuation rates at 12-months period (18.8 %) after using Suraj model as an intervention along with free vouchers which is significantly lower than the national trend of 26 % [18]. However, this will entail comprehensive training of not only the health care providers, but community based mobilizers as well who have direct access to potential clients in targeted communities. This finding from our study corroborated a 2002 national survey that married women living within 5km of community-based workers who have direct access to potential clients were significantly more likely to use modern reversible methods than those with no access [23]. The Suraj voucher scheme has the potential to have a national level impact on FP service uptake however, three key factors will determine the reach of the voucher program (i) keeping management costs low, (ii) inducing a large demand-side response among the two low socio-economic quintiles, and (iii) achieving a quality of care that translates a greater number of facility-based deliveries into a reduction in maternal morbidity and mortality [24]. In addition to training and capacity building, financial incentives are an important factor in encouraging women to adopt contraceptive methods [25]. 
The findings emphasize that approaches like the Suraj model, when complemented with vouchers and community mobilization efforts, can improve the utilization of long-term contraceptive methods among rural and underserved women. Evidence suggests that financial incentives can enhance demand, as well as impact the quality and quantity of maternal health services [25]. This is possible because financial incentives can help overcome the health system and financial barriers that prevent women from accessing services and providers from delivering quality maternal care [24]. Vouchers deliver subsidies to individuals who otherwise would have to seek the services of an unskilled provider or most likely would not have sought care [26]. Social franchising complemented with targeted voucher schemes not only improves access to FP services but also helps reduce inequalities in health services and enables extremely poor or financially vulnerable population groups to avail themselves of these services [27]. Overall, both the Suraj and CMW intervention models demonstrated an increase in the use of the long-term contraceptive method, intrauterine devices (IUDs), among married women of reproductive age; moreover, recent evidence from a nested study within this same project confirms that Suraj and CMW providers are similarly capable of ensuring high IUD method continuation rates at different intervals [16, 17]. For example, at the 12-month interval, the cumulative probabilities of IUD continuation in the Suraj and CMW models were 85 % and 94 % respectively; likewise they were 82 % and 80 % at 24 months. The corresponding discontinuation rates are well below the national average [5, 17]. Hence, it is proposed that both the government and private sectors consider training community midwives, and engaging with non-regulated private-sector mid-level providers, to promote the use of IUDs in Pakistan, where IUD use is presently very low at 2.3 % [5].
The present findings also confirm the pressing need for trained and qualified female healthcare providers offering long-term reversible methods of contraception at local health facilities, instead of periodic fertility camps arranged by the government or private sector [16]. This need was identified during a pre-project qualitative inquiry/needs assessment (QUAL 1a; refer to Fig. 1) by the general population, men and women of reproductive age residing in the same project study areas/sites [27]. The project showed uptake and continuation of long-term IUDs, with attempts to address the access, affordability and availability of modern contraception. The project was also able to involve men, as identified in the needs assessment, in order to meet women's and couples' needs to fulfill their fertility and reproductive health objectives [27]. The results should be interpreted with caution. Quasi-experimental designs using pre- and post-intervention analysis can have some limitations. The study clients are not randomly assigned. However, pre-post intervention analysis with a control is internationally accredited for use in situations where controlled trials are not feasible for logistic, financial or other ethical reasons. This was a field project in a real-life situation, and the nature of the intervention, i.e. vouchers and provision of contraceptive services, made it difficult to blind the study participants. We ensured that there was no spill-over between different intervention areas by choosing areas at a minimum distance from each other. The difference in cultural background of participants from the intervention and control areas is a potential limitation. However, since the intervention and control areas are located within the same province, we believe the differences would be minimal, with a consequently limited impact on study findings.
Another potential limitation is the presence of competing health providers, providing family planning services, operating within the areas of project health providers. Selecting a health care facility for the project where no other service providers exist is difficult. To address this limitation we had a control group to assess the impact of routine practice in health facilities towards family planning. Therefore, we are confident that the increase in outcomes in our study was due to the project intervention(s). The findings of the study can be generalized to other settings with similar context. Besides taking into account the intervention details, replication will need to take into account the local cultural sensitivities as well as the local health system structure where the research is expected to be replicated. The successful implementation of Suraj intervention scheme highlights the importance of demand generation in tandem with provision of low cost family planning services embedded within the communities of the beneficiaries. The FP service integration with existing CMW providers approach also has some benefits in improving FP uptake (especially IUD) at the community level with increased probability of method continuation. Since the CMW interventions were not subsidized or free, the approach may be sustainable in the long term ensuring access to FP services for the underserved population segments. In addition, having dedicated and full-time community health workers or lady health workers (LHWs) for modern contraceptive services such as IUDs can facilitate connecting prospective and current users with the respective facility for building a strong referral system – either by increasing the existing LHWs numbers or introducing new cadre of FP field workers. 
It will be beneficial to conduct further research evaluating the FP integration approach in order to identify factors that can facilitate potential expansion of the approach to other areas of Pakistan, with an explicit focus on costing perspectives. Abbreviations: CMW: community midwife; DSF: demand-side financing; FCM: female community mobiliser; FP: family planning; IUD: intrauterine device; KP: Khyber Pakhtunkhwa; LARC: long-acting reversible contraceptives; MSS: Marie Stopes Society; MWRA: married women of reproductive age; TFR: total fertility rate. Jain AK, Mahmood A, Sathar ZA, Masood I. Reducing unmet need and unwanted childbearing: evidence from a panel survey in Pakistan. Studies in Family Planning. 2014;45(2):277–299. World Population Data Sheet 2014 [Internet]. PRB. 2014. Available from: http://www.prb.org/pdf14/2014-world-populationdata-sheet_eng.pdf. Bhutta ZA, Hafeez A, Rizvi A, Ali N, Khan A, Ahmad F, et al. Reproductive, maternal, newborn, and child health in Pakistan: challenges and opportunities. The Lancet. 2013;381(9884):2207–18. Anderson BO, Yip CH, Ramsey SD, Bengoa R, Barun S. Breast cancer in limited-resource countries: health care systems and public policy. Breast J. 2006;12(1):S54–69. National Institute of Population Studies Pakistan, Macro International Inc. Pakistan Demographic and Health Survey 2012–13. Islamabad: Government of Pakistan; 2014. Bongaarts J, Mir AM, Mahmood A. Policies for Capturing the Demographic Dividend in Pakistan. Pakistan: The Population Council; 2014. USAID. What Unmet Need for Family Planning means in Pakistan. Pakistan: Policy Briefs Series; 2012. World Development Indicators [Internet]. 2015 [cited December 2015]. Available from: http://data.worldbank.org/indicator/SP.DYN.CONU.ZS. WHO. Public policy and franchising reproductive health: current evidence and future directions. Geneva: World Health Organization, Department of Reproductive Health and Research; 2007. ISBN: 978 92 4 159602 1. Available from: http://apps.who.int/iris/bitstream/10665/43735/1/9789241596021_eng.pdf.
Azmat SK, Shaikh BT, Hameed W, Mustafa G, Hussain W, Asghar J, et al. Impact of social franchising on contraceptive use when complemented by vouchers: a quasi- experimental study in Rural Pakistan. PLoS One. 2013;8(9):e74260. Murray S, Hunter B, Bisht R, Ensor T, Bick D. Effects of demand-side financing on utilisation, experiences and outcomes of maternity care in low- and middle-income countries: a systematic review. BMC Pregnancy Childbirth. 2014;17(1):30. Sarfraz M, Hamid S. Challenges in delivery of skilled maternal care - experiences of community midwives in Pakistan. BMC pregnancy and childbirth. 2014;14:59. Maternal and Newborn Health Programme Research and Advocacy Fund, 2015, DIFD; [cited 2015 December 15]. Available from: http://r4d.dfid.gov.uk/Project/60901/Default.aspx MSS. RAF "EVIDENCE FOR INNOVATING TO SAVE LIVES" Karachi: Marie Stopes Society Pakistan; 2014 [cited 2015 December 15]. 'Evidence for Innovating to Save Lives' is a project funded by the Maternal and Newborn Health Programme – Research and Advocacy Fund (RAF) British Council Pakistan jointly funded by DFID and AusAID, and is implemented by Marie Stopes Society (MSS)]. Available from: http://mariestopespk.org/raf/research-project/ RAF, Quarterly Newsletter, Issue 7, January-March 2014 Islamabad: Maternal and Newborn Health Programme Research and Advocacy Fund (RAF), British Council Pakistan; [cited 2015 December 15]. Available from: http://r4d.dfid.gov.uk/pdf/outputs/RAF/RAF_Newsletter_Issue7.pdf Hirose A, Hall S, Memon Z, Hussein J. Bridging evidence, policy, and practice to strengthen health systems for improved maternal and newborn health in Pakistan. Health Res Policy Syst. 2015;13 Suppl 1:47. Hameed W, Azmat SK, Ishaque M, Hussain W, Mustafa G, Khan OF, et al. Continuation rates and reasons for discontinuation of intra-uterine device in three provinces of Pakistan: results of a 24-month prospective client follow-up. Health Res Policy Syst. 2015;13 Suppl 1:53. 
Azmat SK, Shaikh BT, Hameed W, Bilgrami M, Mustafa G, Ali M, et al. Rates of IUCD discontinuation and its associated factors among the clients of a social franchising network in Pakistan. BMC Women's Health. 2012;12(1):8. Azmat SK, Mustafa G, Hameed W, Asghar J, Ahmed A, Shaikh BT. Social franchising and vouchers to promote long-term methods of family planning in rural pakistan: a qualitative stocktaking with stakeholders. J Pak Med Assoc. 2013;63(4 Suppl 3):S46–53. Najafi-Sharjabad F, Yahya S, Zainiyah S, Abdul Rahman H, Hanafiah M, et al. Barriers of modern contraceptive practices among Asian women: A mini literature review. Global J Health Sci. 2013;5(5):181–192. Pearson M. Demand Side Financing for Health Care. London: DFID Health Systems Resource Centre; 2001. Nishtar NA, Sami N, Alim S, Pradhan N, Hasnain FU. Determinants of contraceptives use amongst youth: an exploratory study with family planning service providers in Karachi Pakistan. Global J Health Sci. 2013;5(3):1–8. Sultan M, Cleland JG, Ali MM. Assessment of a new approach to family planning services in rural Pakistan. Am J Public Health 2002;92:1168–72. Bellows BW, Conlon CM, Higges ES, Townsend JW, Nahed MG, Cavanaugh K, et al. A taxonomy and results from a comprehensive review of 28 maternal health voucher programs. J Health Popul Nutr. 2013;31:106–28. Morgan L, Staton ME, Higgs ES, Blaster RL, Bellows BW, Brandes N, et al. Financial incentives and maternal health: where do we go from here? J Health Popul Nutr. 2013;31:8–22. Grainger C, Gorter A, Okal J, Bellows B. Lessons from sexual and reproductive health voucher program design and function: a comprehensive review. Int J Equity Health. 2014;13:33. doi:10.1186/1475-9276-13-33. Mustafa G, Azmat SK, Hameed W, Ali S, Ishaque M, Hussain W, Ahmed A and Munroe E. Family Planning Knowledge, Attitudes, and Practices among Married Men and Women in Rural Areas of Pakistan: Findings from a Qualitative Need Assessment Study. 
Int J Reprod Med. 2015:190520. doi:10.1155/2015/190520 This research study, "Evidence for Innovating to Save Lives", is a project funded by the Maternal and Newborn Health Programme, Research and Advocacy Fund (RAF) - Pakistan. The study was implemented in the field in collaboration with Marie Stopes International (MSI) and MSS-Pakistan. This document is an output from a project funded by the UK Department for International Development (DFID) and the Australian Department of Foreign Affairs and Trade (DFAT) for the benefit of developing countries. The views expressed and information contained are not necessarily those of or endorsed by DFID, DFAT, the Maternal and Newborn Health Program, Research and Advocacy Fund (RAF), or WHO, which can accept no responsibility or liability for such views, for completeness or accuracy of the information, or any reliance placed on them. Department of Urogynecology, University of Ghent, Ghent, Belgium Syed Khurram Azmat & Marleen Temmerman The Hospital for Sick Children, Toronto, Ontario, Canada Syed Khurram Azmat Marie Stopes Society, Research, Monitoring and Evaluation Department, Technical Services, Karachi, Sindh, Pakistan Syed Khurram Azmat, Waqas Hameed, Ghulam Mustafa, Muhammad Ishaque, Ghazunfer Abbas, Omar Farooq Khan, Jamshaid Asghar, Safdar Ali, Wajahat Hussain, Sajid Ali & Aftab Ahmed Freelance Public Health Professional, Adelaide, South Australia, Australia Hasan Bin Hamza Research, Monitoring and Evaluation Department, Marie Stopes International, London, UK Erik Munroe Department of Reproductive Health and Research, World Health Organization, Geneva, Switzerland Moazzam Ali Correspondence to Syed Khurram Azmat. Authors' contribution Conceived and designed the experiments: SKA, W Hameed, GM and JA. Performed the experiments: SKA, WH, GM, GA, MI, OMF, SA, W Hussain and AA.
Analyzed the data: HBH, W Hameed, SKA, MI, W Hussain. Contributed materials/analysis tools: SKA, W Hameed, EM, and HBH. Wrote the manuscript: SKA, W Hameed, HBH, GM, OFK, JA, SA, EM, MA and MT. Intellectual contribution: EM, MA and MT. Authors GM and MI contributed equally. All authors read and approved the final manuscript. Azmat, S.K., Hameed, W., Hamza, H.B. et al. Engaging with community-based public and private mid-level providers for promoting the use of modern contraceptive methods in rural Pakistan: results from two innovative birth spacing interventions. Reprod Health 13, 25 (2016). https://doi.org/10.1186/s12978-016-0145-9 Keywords: Community midwives; Rural Pakistan
Proceedings of the American Mathematical Society Published by the American Mathematical Society, the Proceedings of the American Mathematical Society (PROC) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. On Fourier restriction and the Newton polygon by Ákos Magyar Proc. Amer. Math. Soc. 137 (2009), 615-625 Local $L^p\to L^2$ bounds are proved for the restriction of the Fourier transform to analytic surfaces of the form $S=(x,f(x))$ in $\mathbb {R}^3$. It is found that the range of exponents is determined by the so-called distance of the Newton polygon, associated to $f$, except when the principal quasi-homogeneous part of $f(x)$ contains a factor of high multiplicity. The proofs are based on the method of Phong-Stein and Rychkov, adapted to scalar oscillatory integrals. V. I. Arnol′d, S. M. Guseĭn-Zade, and A. N. Varchenko, Singularities of differentiable maps. Vol. I, Monographs in Mathematics, vol. 82, Birkhäuser Boston, Inc., Boston, MA, 1985. The classification of critical points, caustics and wave fronts; Translated from the Russian by Ian Porteous and Mark Reynolds. MR 777682, DOI 10.1007/978-1-4612-5154-5 Allan Greenleaf, Principal curvature and harmonic analysis, Indiana Univ. Math. J. 30 (1981), no. 4, 519–537. MR 620265, DOI 10.1512/iumj.1981.30.30043 Lars Hörmander, The analysis of linear partial differential operators. I, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 256, Springer-Verlag, Berlin, 1983. Distribution theory and Fourier analysis. MR 717035, DOI 10.1007/978-3-642-96750-4 V. N. Karpushkin, A theorem on uniform estimates for oscillatory integrals with a phase depending on two variables, Trudy Sem. Petrovsk.
10 (1984), 150–169, 238 (Russian, with English summary). MR 778884 D. H. Phong and E. M. Stein, The Newton polyhedron and oscillatory integral operators, Acta Math. 179 (1997), no. 1, 105–152. MR 1484770, DOI 10.1007/BF02392721 D. H. Phong and E. M. Stein, Oscillatory integrals with polynomial phases, Invent. Math. 110 (1992), no. 1, 39–62. MR 1181815, DOI 10.1007/BF01231323 D. H. Phong, E. M. Stein, and J. A. Sturm, On the growth and stability of real-analytic functions, Amer. J. Math. 121 (1999), no. 3, 519–554. MR 1738409 Vyacheslav S. Rychkov, Sharp $L^2$ bounds for oscillatory integral operators with $C^\infty$ phases, Math. Z. 236 (2001), no. 3, 461–489. MR 1821301, DOI 10.1007/PL00004838 Elias M. Stein, Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals, Princeton Mathematical Series, vol. 43, Princeton University Press, Princeton, NJ, 1993. With the assistance of Timothy S. Murphy; Monographs in Harmonic Analysis, III. MR 1232192 H. Schulz: On the decay of the Fourier transform of measures on hypersurfaces, generated by radial functions, and related restriction theorems (unpublished). Peter A. Tomas, A restriction theorem for the Fourier transform, Bull. Amer. Math. Soc. 81 (1975), 477–478. 
MR 358216, DOI 10.1090/S0002-9904-1975-13790-6
Ákos Magyar. Affiliation: Department of Mathematics, University of Georgia, Athens, Georgia 30602. Address at time of publication: Department of Mathematics, University of British Columbia, 1984 Mathematics Road, Room 121, Vancouver, British Columbia V6T 1Z2, Canada. MR Author ID: 318009. Email: [email protected]
Received by editor(s): August 20, 2007. Received by editor(s) in revised form: January 25, 2008. Published electronically: August 26, 2008.
Additional Notes: This research was supported in part by NSF Grant DMS-0456490.
Communicated by: Andreas Seeger. Journal: Proc. Amer. Math. Soc. 137 (2009), 615-625. MSC (2000): Primary 42B10; Secondary 43A32. DOI: https://doi.org/10.1090/S0002-9939-08-09510-5. MathSciNet review: 2448583
Mustafa from Wilson's School sent us this excellent and comprehensive solution. Michael, also from Wilson's, came to the same conclusion about the possible results when the dice lands on an edge: If two sides were showing, the following scores could be possible:

 +  1  2  3  4  5  6
 1  -  3  4  5  6  -
 2  -  -  5  6  -  8
 3  -  -  -  -  8  9
 4  -  -  -  -  9 10
 5  -  -  -  -  - 11

The reason there are no numbers in the lower-left half of the table is because 1+2 is the same as 2+1, etc. There is one 3, one 4, two 5's, two 6's, NO 7's (since the numbers adding up to 7 are on opposite sides of the dice), two 8's, two 9's, one 10 and one 11. Altogether there are 12 possibilities; 12 is divisible by the 3 family members and can therefore be split equally and fairly between them.
Matthew from The King's School in Grantham followed this up by showing how the numbers could be shared out. They could be allocated as follows:
'Person A': 3, 4, 10, 11. Total of 4 chances.
'Person B': 5, 6. Total of 4 chances.
'Person C': 8, 9. Total of 4 chances.
If you have 2 people at the table, the scores could be split as follows:
'Person A': 3, 4, 5, 6. Total of 6 chances.
'Person B': 8, 9, 10, 11. Total of 6 chances.
If you have 4 people at the table, the scores could be allocated as follows:
'Person A': 3, 6. Total of 3 chances.
'Person B': 4, 5. Total of 3 chances.
'Person C': 8, 10. Total of 3 chances.
'Person D': 9, 11. Total of 3 chances.
It is impossible to have 5 people at the table and share the totals out fairly, since 12 chances cannot be divided equally among 5 people. If you have 6 people at the table, the totals could be shared out fairly.
Rian and Ben from Waverley School agreed. Sum of scores on the edge: the chance of getting 3 is $\frac{1}{12}$, the chance of getting 10 is $\frac{1}{12}$, and similarly for each of the other scores. You can split them fairly: if person 1 gets a score of 3, 4, 10 or 11, the chances of winning are $\frac{4}{12}$; person 2 gets a score of 5 or 8, so the chances of winning are $\frac{4}{12}$; and person 3 gets 6 or 9, so the chances of winning are $\frac{4}{12}$.
Sum of scores at the corner: For this situation, you cannot split the scores fairly among three people, because each score is unique, so the probability of getting each number is $\frac{1}{8}$. There are 8 numbers and 3 people, so you can't split them fairly, but you can split them between 2, 4 or 8 people. Irene and Remminbi from Dulwich College in Beijing and Callum from Cholsey Primary School also sent us partial solutions to this problem.
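The edge and corner counts above are easy to verify with a short enumeration (a sketch; it assumes a standard die on which opposite faces sum to 7, so faces can share an edge or a corner exactly when no two of them are opposite):

```python
from itertools import combinations

# Standard die: opposite faces sum to 7, so adjacent face pairs are
# exactly the pairs that do NOT sum to 7.
faces = [1, 2, 3, 4, 5, 6]
edge_sums = [a + b for a, b in combinations(faces, 2) if a + b != 7]

print(sorted(edge_sums))  # [3, 4, 5, 5, 6, 6, 8, 8, 9, 9, 10, 11]
print(len(edge_sums))     # 12 pairs, and 7 never occurs

# A corner shows three mutually adjacent faces: no two of them opposite.
corner_sums = sorted(
    a + b + c
    for a, b, c in combinations(faces, 3)
    if a + b != 7 and a + c != 7 and b + c != 7
)
print(corner_sums)        # [6, 7, 9, 10, 11, 12, 14, 15]
print(len(corner_sums))   # 8 corners, each with a distinct sum
```

The output confirms the solutions: 12 edge outcomes with no 7, and 8 distinct corner sums, which is why the corner game splits fairly only among 2, 4 or 8 people.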
We cover 8,498 topics besides Differential Privacy; 151 resources relate to Differential Privacy.
Conferences related to Differential Privacy
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS): algorithms and data structures, computational complexity, cryptography, computational learning theory, economics and computation, parallel and distributed algorithms, quantum computing, computational geometry, computational applications of logic, algorithmic graph theory and combinatorics, optimization, randomness in computing, approximation algorithms, algorithmic coding theory, algebraic computation, and theoretical aspects of areas such as networks, privacy, information retrieval, computational biology, and databases. Papers presenting new and original research on theory of computation are sought.
Typical but not exclusive topics of interest include: algorithms and data structures, computational complexity, cryptography, computational learning theory, computational game theory, parallel and distributed algorithms, quantum computing, computational geometry, computational applications of logic, algorithmic graph theory and combinatorics, optimization, randomness in computing, approximation algorithms, algorithmic coding theory, algebraic computation, and theoretical aspects of areas such as networks, privacy, information retrieval, computational biology, and databases. Papers that broaden the reach of the theory of computing, or raise important problems that can benefit from theoretical investigation and analysis, are encouraged.
Mathematical research on the foundations of computer science, including algorithms, complexity, and their applications to other fields of computer science and other sciences. 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science (FOCS) The 53rd Annual Symposium on Foundations of Computer Science (FOCS 2012), sponsored by the IEEE Computer Society Technical Committee on Mathematical Foundations of Computing, will be held at the Hotel Zozo in Palm Springs, CA, October 20-23, 2012. A series of tutorial presentations will be given. Papers presenting new and original research on the theory of computation are sought, including papers that broaden the reach of computer science theory, or raise important problems. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS) The 52nd Annual Symposium on Foundations of Computer Science (FOCS 2011), sponsored by the IEEE Computer Society Technical Committee on Mathematical Foundations of Computing, will be held at the Hotel Zozo in Palm Springs, CA, October 23-25, 2011. A series of tutorial presentations will be given on October 22. Papers presenting new and original research on the theory of computation are sought, including papers that broaden the reach of computer science theory, or raise important problems. 2010 IEEE 51st Annual Symposium on Foundations of Computer Science (FOCS) The 51st Annual Symposium on Foundations of Computer Science (FOCS 2010), sponsored by the IEEE Computer Society Technical Committee on Mathematical Foundations of Computing, will be held at the Monte Carlo Hotel in Las Vegas, Nevada, October 24-26, 2010. A series of tutorial presentations will be given on October 23.
Papers presenting new and original research on the theory of computation are sought, including papers that broaden the reach of computer science theory, or raise important problems that can benefit from theoretical investigation and analysis. 2009 IEEE 50th Annual Symposium on Foundations of Computer Science - FOCS The 50th Annual Symposium on Foundations of Computer Science (FOCS 2009), sponsored by the IEEE Computer Society Technical Committee on Mathematical Foundations of Computing, will be held at the Renaissance Atlanta Hotel Downtown in Atlanta, GA, October 24-27, 2009. Papers presenting new and original research on theory of computation are sought. Typical but not exclusive topics of interest include: algorithms and data structures, computational complexity, cryptography, computational geometry, … 2007 IEEE 48th Annual Conference on Foundations of Computer Science - FOCS 2019 IEEE Information Theory Workshop (ITW) The scope of IEEE ITW2019 will be in all areas of Information Theory. Fields of interest include, but are not limited to, Information Theory for Cyber-Physical Systems, Modern Coding Theory, and Security, Privacy and Trust. Original papers on Information and Coding Theory are encouraged for submission.
The scope of submission includes, but is not limited to:
- Information Theory and its Applications
- Frontiers of Coding Theory and Practice
- Boundaries between Information Theory and Data Science, Biology and Signal Processing
- Network Information Theory
- Network Coding and Distributed Storage
- Information Theoretic Security
The scope of IEEE ITW2017 will be in all areas of information theory with special emphasis on the following:
• Information Theory for Content Distribution
 - Distributed data storage
 - Peer-to-peer network coded broadcasting
 - Coded caching for wireless and wireline transmissions
 - Delay-constrained communications
• Information Theory and Biology
 - Information theory and intercellular communication
 - Information theory and neuroscience
 - Information-theoretical analysis of biologically-inspired communication systems
• Information Theory and Quantum Communication
 - Quantum information
 - Quantum computation
 - Quantum cryptography
• Information Theory and Coding for Memories
 - Inter-cell interference in nonvolatile memories
 - Rank modulation and constrained codes for nonvolatile memories
Broad scope of information theory. The Information Theory Workshop 2015 in Jerusalem will cover all fields of information theory, with special emphasis on the interaction between information theory and computer science, signal processing and networks. ITW2014 is a forum for technical exchange among scientists and engineers working on the fundamentals of information theory. The agenda is broad and will cover the diverse topics that information theory presently impacts. There will be both invited and contributed sessions. 2013 IEEE Information Theory Workshop (ITW 2013) The scope of the workshop includes, but is not limited to, the following topics: BioInformatics, Communication, Machine Learning, Security, Spectrum Sharing, Coding, Compression, Networks, Signal Processing, Statistics.
The past decade has seen an exponential increase in the data stored in distributed locations in various forms including corporate & personal data, multimedia, and medical data in repositories. The grand challenge is to store, process and transfer this massive amount of data, efficiently and securely over heterogeneous communication networks. Algebraic Methods in Communications Technology Covers the most relevant topics in Information Theory and Coding Theory of interest to the most recent applications to wireless networks, sensor networks, and biology This workshop will take a brief look into the recent information theory past to commemorate the 60th anniversary of Shannon's landmark paper, and then proceed to explore opportunities for information theory research in quantum computation, biology, statistics, and computer science. 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) The conference solicits experimental and theoretical works on social network analysis and mining along with their application to real life situations. social networks and mining The conference will consider papers in all areas of social networks and mining. In recent years, social network research has advanced significantly; the development of sophisticated techniques for Social Network Analysis and Mining (SNAM) has been highly influenced by the online social Web sites, email logs, phone logs and instant messaging systems, which are widely analyzed using graph theory and machine learning techniques. People perceive the Web increasingly as a social medium that fosters interaction among people, sharing of experiences and knowledge, group activities, community formation and evolution. This has led to a rising prominence of SNAM in academia, politics, homeland security and business. This follows the pattern of known entities of our society that have evolved into networks in which actors are increasingly dependent on their structural embedding. 
The international conference on Advances in Social Network Analysis and Mining (ASONAM 2015) will primarily provide an interdisciplinary venue that will bring together practitioners and researchers from a variety of SNAM fields to promote collaborations and exchange of ideas and practices. ASONAM 2015 is intended to address important aspects with a specific focus on the emerging trends and industry needs associated with social networking analysis and mining. The conference solicits experimental and theoretical works on social network analysis and mining along with their application to real life situations. The IEEE/ACM International Conference on Advances in Social Network Analysis and Mining (ASONAM) provides a premier interdisciplinary forum to bring together researchers and practitioners from all social networking analysis and mining related fields for presentation of original research results, as well as exchange and dissemination of innovative, practical development experiences. ASONAM 2014 seeks to address important challenging problems with a specific focus on the emerging trends and industry needs associated with social networking analysis and mining. The conference solicits experimental and theoretical findings along with their real-world applications. General areas of interest to ASONAM 2014 include the design, analysis and implementation of social networking theory, systems and applications from computer science, mathematics, communications, business administration, sociology, psychology, anthropology, applied linguistics, biology and medicine. 2013 International Conference on Advances in Social Networks Analysis and Mining (ASONAM) People perceive the Web increasingly as a social medium that fosters interaction among people, sharing of experiences and knowledge, group activities, community formation and evolution. This has led to a rising prominence of Social Network Analysis and Mining (SNAM) in academia, politics, homeland security and business. 
The 2013 international conference on Advances in Social Network Analysis and Mining (ASONAM-13) will primarily provide an interdisciplinary venue that brings together practitioners and researchers from a variety of SNAM fields to promote collaborations and exchange of ideas and practices. The conference will address important aspects with a specific focus on the emerging trends and industry needs associated with social networking analysis and mining. The conference solicits experimental and theoretical works on social network analysis and mining with their application to real life situations. 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012) In recent years, social network research has advanced significantly; the development of sophisticated techniques for Social Network Analysis and Mining (SNAM) has been highly influenced by the online social Web sites, email logs, phone logs and instant messaging systems, which are widely analyzed using graph theory and machine learning techniques. People perceive the Web increasingly as a social medium that fosters interaction among people, sharing of experiences and knowledge, group activities, community formation and evolution. This has led to a rising prominence of SNAM in academia, politics, homeland security and business. The international conference on Advances in Social Network Analysis and Mining (ASONAM 2011) will primarily provide an interdisciplinary venue that will bring together practitioners and researchers from a variety of SNAM fields to promote collaborations and exchange of ideas and practices. ASONAM 2011 is intended to address important aspects with a specific focus on the emerging trends and industry needs associated with social networking analysis and mining.
The conference solicits experimental and theoretical works. In recent years, social network research has advanced significantly; the development of sophisticated techniques for Social Network Analysis and Mining (SNAM) has been highly influenced by the online social websites, email logs, phone logs and instant messaging systems, which are widely analyzed using graph theory and machine learning techniques. People perceive the web increasingly as a social medium that fosters interaction among people, sharing of experiences and knowledge, group activities, community formation and evolution. 2012 International Conference on Privacy, Security, Risk and Trust (PASSAT) The 4th International Conference on Information Privacy, Security, Risk and Trust will be held at Amsterdam, The Netherlands. The aim is to provide an international forum for information privacy, risk, trust, and security researchers and practitioners to explore solutions to profound challenges. 2011 IEEE Third Int'l Conference on Privacy, Security, Risk and Trust (PASSAT) / 2011 IEEE Third Int'l Conference on Social Computing (SocialCom) Cyber Security and Social Computing. 2009 IEEE International Conference on Privacy, Security, Risk and Trust (PASSAT) Privacy, risk, trust and security issues and exchange of recent progress. Most published Xplore authors for Differential Privacy: Jordi Soria-Comas, Josep Domingo-Ferrer, David Megías. Differential Privacy Protection Recommendation Algorithm Based on Student Learning Behavior, 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE), 2018. Traditional collaborative filtering recommendation algorithms based on learning resources use a large amount of students' personal and behavior information. This puts users' privacy at risk, since students' information can be mined by analyzing the recommendation results.
Considering that differential privacy theory can effectively protect user privacy through strict mathematical definitions and maximum background knowledge assumptions, this ... Secure Medical Data Collection via Local Differential Privacy, 2018 IEEE 4th International Conference on Computer and Communications (ICCC), 2018. As the volume of medical data mining increases, so does the need to preserve patient privacy. The exposure of medical data may degrade the level of health care service and reduce the trust of patients. Local Differential Privacy (LDP) was proposed to solve problems in the context of local privacy, under which data collectors can hardly get exact ... Deep Q-Network Based Route Scheduling for TNC Vehicles with Passengers' Location Differential Privacy, IEEE Internet of Things Journal. The transportation network company (TNC) services efficiently pair the passengers with the vehicles/drivers through mobile applications such as Uber, Lyft, Didi, etc. TNC services definitely facilitate the traveling of passengers, while it is equally important to effectively and intelligently schedule the routes of cruising TNC vehicles to improve TNC drivers' revenues. From the TNC drivers' side, the most critical question ... Multi-Party High-Dimensional Data Publishing under Differential Privacy, IEEE Transactions on Knowledge and Data Engineering. In this paper, we study the problem of publishing high-dimensional data in a distributed multi-party environment under differential privacy. In particular, with the assistance of a semi-trusted curator, the parties collectively generate a synthetic integrated dataset while satisfying $\varepsilon$-differential privacy. To solve this problem, we present a differentially private sequential update of Bayesian network (DP-SUBN) approach. In DP-SUBN, the parties ...
DRAKO: Differentially pRivate Algorithm to meet K-anonymity for Online portal service, 2018 IEEE International Conference on Big Data (Big Data), 2018. Digital data on the Web are nowadays regarded as significant sources of information for marketing and user profiling, etc. However, digital data are risky sources of privacy violation. To address privacy breaches, we can use differential privacy, which has become the de facto standard for privacy protection in statistical databases. However, problems need to be solved, including those related to noise ... Traditional collaborative filtering recommendation algorithms based on learning resources use a large amount of students' personal and behavior information. This puts users' privacy at risk, since students' information can be mined by analyzing the recommendation results. Considering that differential privacy theory can effectively protect user privacy through strict mathematical definitions and maximum background knowledge assumptions, this paper proposes a differential privacy collaborative filtering recommendation algorithm based on learner behavior similarity. By adding noise obeying the Laplace distribution to the learner behavior similarity matrix, the recommendation accuracy does not decrease, while students' privacy is protected effectively. As the volume of medical data mining increases, so does the need to preserve patient privacy. The exposure of medical data may degrade the level of health care service and reduce the trust of patients. Local Differential Privacy (LDP) was proposed to solve problems in the context of local privacy, under which data collectors can hardly obtain exact individual information. We present a secure medical data collection framework and apply our framework to synthetic data at different scales. Finally, we evaluate the performance of our work.
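The "noise obeying the Laplace distribution" that several of these abstracts rely on is the standard Laplace mechanism. A minimal sketch (the sensitivity and ε values are illustrative assumptions, not those of any cited paper):

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise.

    This satisfies epsilon-differential privacy for a query with the
    given L1 sensitivity.  The noise is drawn by inverse-CDF sampling
    of the Laplace distribution.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privately release a count query.  A count changes by at
# most 1 when one person is added or removed, so sensitivity = 1.
random.seed(0)
noisy = laplace_mechanism(1000, sensitivity=1.0, epsilon=0.5)
print(noisy)  # close to 1000; the noise scale here is 1/0.5 = 2
```

Smaller ε means stronger privacy but a larger noise scale, which is exactly the accuracy/privacy trade-off these papers negotiate.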
Our experimental results show that both privacy and validity of medical data are aligned. The transportation network company (TNC) services efficiently pair the passengers with the vehicles/drivers through mobile applications such as Uber, Lyft, Didi, etc. TNC services definitely facilitate the traveling of passengers, while it is equally important to effectively and intelligently schedule the routes of cruising TNC vehicles to improve TNC drivers' revenues. From the TNC drivers' side, the most critical question to address is how to reduce the cruising time, and improve the efficiency/earnings by using their own vehicles to provide TNC services. In this paper, we propose a deep reinforcement learning based TNC route scheduling approach, which allows the TNC service center to learn about the dynamic TNC service environment and schedule the routes for the vacant TNC vehicles. In particular, we jointly consider multiple factors in the complex TNC environment such as locations of the TNC vehicles, different time periods during the day, the competition among TNC vehicles, etc., and develop a deep Q-network (DQN) based route scheduling algorithm for vacant TNC vehicles based on distributed framework, which makes the server closer to the terminal users and accelerates the training speed. Furthermore, we apply the geo-indistinguishability scheme based on differential privacy to preserve the sensitive location information uploaded by the passengers. We evaluate the proposed algorithm's performance via simulations using open data sets from Didi Chuxing. Through extensive simulations, we show that the proposed scheme is effective in reducing the cruising time of vacant TNC vehicles and improving the earnings of TNC drivers. In this paper, we study the problem of publishing high-dimensional data in a distributed multi-party environment under differential privacy. 
In particular, with the assistance of a semi-trusted curator, the parties collectively generate a synthetic integrated dataset while satisfying $\varepsilon$-differential privacy. To solve this problem, we present a differentially private sequential update of Bayesian network (DP-SUBN) approach. In DP-SUBN, the parties and the curator collaboratively identify the Bayesian network $\mathbb{N}$ that best fits the integrated dataset in a sequential manner, from which a synthetic dataset can then be generated. The fundamental advantage of adopting the sequential update manner is that the parties can treat the intermediate results provided by previous parties as their prior knowledge to direct how to learn $\mathbb{N}$. The core of DP-SUBN is the construction of the search frontier, which can be seen as a priori knowledge to guide the parties to update $\mathbb{N}$. Leveraging the correlations of attribute pairs, we propose exact and heuristic methods to construct the search frontier. In particular, to privately quantify the correlations of attribute pairs without introducing too much noise, we first put forward a non-overlapping covering design (NOCD) method, and then devise a dynamic programming method for determining the optimal parameters used in NOCD. Through privacy analysis, we show that DP-SUBN satisfies $\varepsilon$-differential privacy. Extensive experiments on real datasets demonstrate that DP-SUBN offers desirable data utility with low communication cost. Digital data on the Web are nowadays regarded significant sources of information for marketing and user profiling, etc. However, digital data are risky sources of privacy violation. To address privacy breaches, we can use differential privacy, which has become the de facto standard for privacy protection in statistical databases. However, problems need to be solved, including those related to noise parameter configuration, even before differential privacy can be applied into the real world. 
In this study, we introduce a linkage attack that identifies a user who has different nicknames for each subservice on a huge online portal service. In addition, we propose a configuration technique for the upper bound of the noise parameter ε to prevent the linkage attack. We demonstrate the linkage attack with experiments using real-world online portal service data. Finally, we validate the proposed configuration technique. Adaptive Differential Privacy Interactive Publishing Model Based on Dynamic Feedback. Data publishing is meaningful and necessary; however, the datasets to be published contain much personal (and often sensitive) information, so privacy preservation has become an increasingly important problem in the big data era. Because of its strong mathematical foundation and provable, quantifiable privacy properties, differential privacy (DP) attracts the most interest and is becoming one of the most prevalent privacy models. Based on the differential privacy mechanism, this paper addresses the query restriction problem in the interactive privacy-preserving data publishing framework. An adaptive differential privacy interactive publishing model based on dynamic feedback (ADP M-DF) is proposed, and its workflow is presented in detail via a flow chart. A dynamic feedback scheme with an iterative algorithm is proposed to generate new privacy budget parameters, and some of its properties are discussed. Analysis shows that the new model performs well in practice and provides a better user query experience. An Improved Differential Privacy K-Means Algorithm Based on MapReduce. In order to solve the low clustering accuracy and local optimum problems of the traditional differential privacy k-means algorithm, this paper proposes an improved differential privacy k-means algorithm based on MapReduce.
The proposed algorithm uses Canopy to select the initial center point, and uses Laplace mechanism to realize the differential privacy protection. The simulation shows that the clustering results of the proposed algorithm outperform the traditional DP K-means in usability and convergence speed. Differentially Private Publication of Vertically Partitioned Data In this paper, we study the problem of publishing vertically partitioned data under differential privacy, where different attributes of the same set of individuals are held by multiple parties. In this setting, with the assistance of a semi-trusted curator, the involved parties aim to collectively generate an integrated dataset while satisfying differential privacy for each local dataset. Based on the latent tree model (LTM), we present a differentially private latent tree (DPLT) approach, which is, to the best of our knowledge, the first approach to solving this challenging problem. In DPLT, the parties and the curator collaboratively identify the latent tree that best approximates the joint distribution of the integrated dataset, from which a synthetic dataset can be generated. The fundamental advantage of adopting LTM is that we can use the connections between a small number of latent attributes derived from each local dataset to capture the cross-dataset dependencies of the observed attributes in all local datasets such that the joint distribution of the integrated dataset can be learned with little injected noise and low computation and communication costs. Extensive experiments on real datasets demonstrate that DPLT offers desirable data utility with low computation and communication costs. Output and Input Data Perturbations for Differentially Private Databases In today's ultra-connected world, the production and consumption of digital data has become immensely huge in volume. 
Differential privacy is a relatively new approach that attempts to provide strong privacy protection for users' data while still maintaining outside access to the data. However, the comparison of these methods in terms of performance and database suitability remains an open question. In this paper, we provide a comprehensive comparison of input and output data perturbations for differentially private databases and evaluate the results in terms of accuracy, privacy, efficiency and scalability. Privacy-preserving Distributed Data Fusion Based on Attribute Protection. Privacy-preserving distributed data fusion is a preprocessing step in data mining involving security models. In this paper, we present a method of implementing multi-party data fusion, wherein redundant attributes of the same set of individuals are stored by multiple parties. In particular, the merged data does not suffer from background attacks or other reasoning attacks, and individual attributes are not leaked. To achieve this, we present three algorithms that satisfy k-anonymity and differential privacy. Experimental results on real data sets suggest that the proposed algorithms can effectively preserve information in data mining tasks.
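The input-versus-output perturbation comparison is easy to illustrate on a simple mean query. A hedged sketch (the dataset, ε, and sensitivity bounds are illustrative assumptions, not the experimental setup of any paper above):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

random.seed(1)
data = [random.uniform(0.0, 1.0) for _ in range(10_000)]
n, eps = len(data), 1.0
true_mean = sum(data) / n

# Output perturbation (central model): noise the aggregate once.
# The mean of n values in [0, 1] has L1 sensitivity 1/n.
dp_mean_output = true_mean + laplace_noise((1.0 / n) / eps)

# Input perturbation (local model): noise every record before
# aggregation; each record in [0, 1] has sensitivity 1.
dp_mean_input = sum(x + laplace_noise(1.0 / eps) for x in data) / n

# At equal eps, output perturbation is typically far more accurate:
# its error is O(1/(n*eps)) versus O(1/(eps*sqrt(n))) for input noise.
print(abs(dp_mean_output - true_mean))
print(abs(dp_mean_input - true_mean))
```

The trade-off mirrored here is the one the abstracts discuss: input (local) perturbation needs no trusted curator but pays for it in accuracy.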
Renewal Reward Perspective on Linear Switching Diffusion Systems in Models of Intracellular Transport Maria-Veronica Ciocanel (ORCID: orcid.org/0000-0001-6859-4659), John Fricks, Peter R. Kramer & Scott A. McKinley Bulletin of Mathematical Biology, volume 82, Article number: 126 (2020) In many biological systems, the movement of individual agents is characterized as having multiple qualitatively distinct behaviors that arise from a variety of biophysical states. For example, in cells the movement of vesicles, organelles, and other intracellular cargo is affected by their binding to and unbinding from cytoskeletal filaments such as microtubules through molecular motor proteins. A typical goal of theoretical or numerical analysis of models of such systems is to investigate effective transport properties and their dependence on model parameters. While the effective velocity of particles undergoing switching diffusion dynamics is often easily characterized in terms of the long-time fraction of time that particles spend in each state, the calculation of the effective diffusivity is more complicated because it cannot be expressed simply in terms of a statistical average of the particle transport state at one moment of time. However, it is common that these systems are regenerative, in the sense that they can be decomposed into independent cycles marked by returns to a base state. Using decompositions of this kind, we calculate effective transport properties by computing the moments of the dynamics within each cycle and then applying renewal reward theory. This method provides a useful alternative large-time analysis to direct homogenization for linear advection–reaction–diffusion partial differential equation models. Moreover, it applies to a general class of semi-Markov processes and certain stochastic differential equations that arise in models of intracellular transport. 
Applications of the proposed renewal reward framework are illustrated for several case studies such as mRNA transport in developing oocytes and processive cargo movement by teams of molecular motor proteins. Microscale biological agents frequently change biophysical state, which results in significant changes in their movement behavior. Intracellular cargo, for example, switches among active transport, diffusive transport, and paused states, each resulting from different mechanochemical configurations of the cargo, cytoskeletal filaments, and the molecular motors that bind them (Hancock 2014; Bressloff and Newby 2013). Models for this behavior can be either deterministic (typically partial differential equations, PDEs) or stochastic (often continuous-time Markov chains, CTMCs, or stochastic differential equations, SDEs) depending on whether the investigation focuses on population properties (deterministic methods) or individual paths (stochastic methods). Each state is commonly characterized in terms of a mean velocity, fluctuations about the mean velocity, and a distribution of time spent in the state, sometimes but not always determined by classical reaction rate theory. Explicit solutions for these models are rarely available, so asymptotic or numerical methods are often deployed to investigate and characterize the model's predictions. The study of deterministic models often relies on numerical simulation using PDE integration methods (Wang et al. 2003; Cox and Matthews 2002; Trong et al. 2015), while stochastic models are simulated with Monte Carlo/Gillespie algorithms (Müller et al. 2008; Kunwar and Mogilner 2010; Müller et al. 2010; Allard et al. 2019) to generate individual trajectories that are then analyzed statistically. 
However, these computations can be quite costly, especially when one wants to understand how bulk transport properties (like effective velocity or diffusivity) depend on individual model parameters. When possible, asymptotic analysis allows for explicit approximation of transport properties, which can validate, complement, or even replace numerical simulations (Reed et al. 1990; Brooks 1999; Pavliotis 2005; Pavliotis and Stuart 2008; Popovic et al. 2011; McKinley et al. 2012; Bressloff and Xu 2015; Ciocanel et al. 2017). The long-term effective velocity of state-switching particles is often straightforward to compute, usually obtained by calculating the fraction of time spent in each state and correspondingly averaging the associated state velocities. On the other hand, this weighted average technique is not valid when calculating a particle's effective diffusivity, since this nonlinear quantity depends, via the Kubo formula (Kubo 1963), on the correlation of velocity at two different times. This in turn depends on the dynamics in the change in biophysical state beyond the stationary distribution of state. For example, randomness in the switching dynamics will produce a positive effective diffusivity even when the diffusivity within each state is zero. Generalizing some previous work (Brooks 1999; Hughes et al. 2011; Krishnan and Epureanu 2011; Hughes et al. 2012; Ciocanel et al. 2017), we consider this problem of computing effective diffusivity for a class of state-switching particle models that can be expressed in a framework where the sequence of states are given by a Markov chain, but the times spent in these states are not necessarily exponentially distributed as in a continuous-time Markov chain. Since we assume that the state process Markov chain is positive recurrent, the particle position process can be described as a regenerative increment process in a sense defined by Serfozo (2009), for example. 
That is to say, we consider processes that almost surely return to some base state at a sequence of (random) regeneration times such that the dynamics after a regeneration time are independent from those that occur before. As a result, we can decompose the process into what we refer to as cycles, in which the particle starts in a base state, undergoes one or more state changes, and then revisits the base state again. The dynamics within each cycle are independent of other cycles, and we can use the renewal reward theorem to perform asymptotic calculations by viewing the total displacement within each cycle as its reward and viewing the cycle durations as times between regenerations. An early application of the idea of computing effective particle velocity and diffusivity by decomposition and analysis of the dynamics in terms of independent cycles was to understand the large enhancement of (non-switching) particle diffusion in a tilted periodic potential (Reimann et al. 2001, 2002). Our primary motivating examples are related to intracellular transport. Some prominent recent investigations include the study of mRNA localization in oocyte development (Zimyanin et al. 2008; Trong et al. 2015; Ciocanel et al. 2017), cell polarization in the budding yeast (Bressloff and Xu 2015), neurofilament transport along axons (Jung and Brown 2009; Li et al. 2014), interactions of teams of molecular motor proteins (Klumpp and Lipowsky 2005; Müller et al. 2008; Kunwar and Mogilner 2010; Müller et al. 2010; Tjioe et al. 2019), and sliding of parallel microtubules by teams of cooperative identical motors (Allard et al. 2019). Microtubule-based transport of cargo is typically mediated by kinesin motors moving material to the cell periphery and by dynein motors carrying it to the nucleus. Understanding population-scale behaviors, such as protein localization, that arise from local motor interactions remains an open question. 
While multiple motor interactions are usually thought to be resolved through a tug-of-war framework (Müller et al. 2008), it has been observed that important predictions made by the tug-of-war framework are not consistent with in vivo experimental observations (Kunwar et al. 2011; Hancock 2014). The work presented in this paper can aid theoretical efforts to relate local motor cargo dynamics to predictions for large-scale transport. PDE Methods for Markovian Switching For hybrid switching diffusion processes (Yin and Zhu 2010), in which particles independently switch with continuous-time Markovian dynamics between states that have different velocities and/or diffusivities, the law of a particle can be expressed in terms of its associated forward Kolmogorov equations with an advection–reaction–diffusion structure: $$\begin{aligned} \frac{\partial \varvec{{u}}(y,t)}{\partial t} = A^\mathrm{T}\varvec{{u}} - V \partial _y \varvec{{u}} + D \varDelta \varvec{{u}}\,. \end{aligned}$$ Here, we will think of \(\varvec{{u}}\) as an \((N+1)\)-dimensional column vector (indexed from 0 to N) of the concentrations of particle populations in different dynamical states, which also obey the forward Kolmogorov equations with a different normalization. The dynamics are governed by matrices \(A, V, D \in {\mathbb {R}}^{(N+1) \times (N+1)}\), where V and D are diagonal matrices, with real constant diagonal entries \(v_0, v_1,\ldots ,v_N\) for V corresponding to the particle velocities in each state, and positive real constant diagonal entries \(d_0, d_1,\ldots ,d_N\) for D corresponding to the diffusion coefficients in each state. The matrix A is the transition rate matrix of the associated finite state continuous-time recurrent Markov chain (CTMC), J(t), which tracks the state of the particle at a given time. That is to say, each off-diagonal entry \(a_{ij}\) can be interpreted as the rate at which a particle in state i switches to state j. 
The diagonal entries of A are non-positive and correspond to the total rate out of a given state. The rows of A sum to zero. Assuming that the CTMC is irreducible, it follows that A admits a zero eigenvalue with algebraic and geometric multiplicity one, and the corresponding normalized zero-eigenvector \(\varvec{{\pi }}\) is the stationary distribution of J(t). Either quasi-steady-state reduction (Bressloff and Newby 2013) or homogenization theory (Pavliotis and Stuart 2008) can be used to reduce the complexity of the advection–reaction–diffusion system (1) to a scalar advection–diffusion equation of the form: $$\begin{aligned} \frac{\partial c(y,t)}{\partial t} = v_{\text {eff}} \partial _y c(y,t)+ D_{\text {eff}} \varDelta c(y,t)\,. \end{aligned}$$ with constant effective velocity \( v_{\text {eff}} \) and constant effective diffusivity \( D_{\text {eff}} \) for the particle concentration without regard to state \( c(y,t) = \sum _{i=0}^N u_i (y,t) \). Quasi-steady-state reduction assumes the stochastic switching dynamics occurs on a fast scale relative to the advection–diffusion dynamics, while homogenization theory applies at sufficiently large space and time scales relative to those characterizing the dynamical scales. These different asymptotic conditions give in general distinct results when the transport and switching rates have explicit dependence on space, but when, as in the present case, they are spatially independent, the formulas for the effective transport coefficients coincide. (This is because the time scale of advection/diffusion is linked purely to the spatial scale, so the large spatial-scale assumption of homogenization will perforce induce a time-scale separation between the switching and transport dynamics, as assumed in quasi-steady-state reduction.) 
The effective velocity is computed by averaging the velocity in each state, weighted by the stationary distribution of the particle states: $$\begin{aligned} v_{\text {eff}} = \varvec{{v}} \cdot \varvec{{\pi }}\,, \end{aligned}$$ where \(\varvec{{v}} = (v_0,v_1,\ldots ,v_N)^\mathrm{T}\). The effective diffusivity is given, from an equivalent long-time effective dynamical description for intracellular transport derived by Lyapunov–Schmidt reduction on the low wavenumber asymptotics of the Fourier transform of Eq. (1) in Ciocanel et al. (2017, 2018), by $$\begin{aligned} D_{\text {eff}}&= \varvec{{d}} \cdot \varvec{{\pi }} - \varvec{{v}} \cdot (\overline{A^\mathrm{T}})^{-1} (\varvec{{v}}\circ \varvec{{\pi }} -v_{\mathrm {eff}}\varvec{{\pi }}) \,, \end{aligned}$$ where $$\begin{aligned} \varvec{{v}} \circ \varvec{{\pi }} = (v(0)\pi (0),v(1)\pi (1),\ldots ,v(N)\pi (N))^\mathrm{T} \end{aligned}$$ denotes the Hadamard product (component-wise multiplication of vectors). Here \(\varvec{{d}} = (d_0,d_1,\ldots ,d_N)^\mathrm{T}\), and \( \overline{A^\mathrm{T}} \) is the restriction of \( A^\mathrm{T} \) to its range \( {{\,\mathrm{Ran}\,}}(A^\mathrm{T}) \) (vectors orthogonal to \( (1,1,\ldots ,1)^\mathrm{T} \)). Note that the inverse \( (\overline{A^\mathrm{T}})^{-1}\) is well defined since \( \overline{A^\mathrm{T}}\) is a full-rank matrix mapping \( {{\,\mathrm{Ran}\,}}(A^\mathrm{T}) \) to \( {{\,\mathrm{Ran}\,}}(A^\mathrm{T}) \), and its inversion in Eq. (3) is applied to a vector in \( {{\,\mathrm{Ran}\,}}(A^\mathrm{T}) \). We remark that the homogenization formula is often written (Cioranescu and Donato 1999; Pavliotis and Stuart 2008) in an equivalent adjoint form to Eq. (3), with a centering of the leading vector \( \varvec{{v}} \rightarrow \varvec{{v}} - v_{\mathrm {eff}} (1,1,\ldots ,1)^\mathrm{T}\) that renders the formula indifferent to the choice of how to invert \( A^\mathrm{T} \). 
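The formulas for \(v_{\text {eff}}\) and \(D_{\text {eff}}\) translate into a few lines of linear algebra. The sketch below (with hypothetical two-state parameters) uses the centered leading vector just described, which makes the result independent of which solution of the singular system \(A^\mathrm{T}x = \varvec{{v}}\circ \varvec{{\pi }} - v_{\mathrm {eff}}\varvec{{\pi }}\) is used, so a pseudoinverse can stand in for the restricted inverse:

```python
import numpy as np

def effective_transport(A, v, d):
    """Effective velocity and diffusivity of a switching diffusion:
    v_eff = v . pi, and D_eff from the centered form of the D_eff formula,
    where pi is the stationary distribution of the CTMC rate matrix A."""
    n = len(v)
    # stationary distribution: pi^T A = 0 with entries summing to one
    M = np.vstack([A.T, np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi = np.linalg.lstsq(M, rhs, rcond=None)[0]
    v_eff = v @ pi
    b = v * pi - v_eff * pi            # Hadamard product minus centering; lies in Ran(A^T)
    x = np.linalg.pinv(A.T) @ b        # one solution of the singular system A^T x = b
    D_eff = d @ pi - (v - v_eff) @ x   # centered leading vector v - v_eff * 1
    return v_eff, D_eff

# Hypothetical 2-state model: a diffusive state (velocity 0, diffusivity 0.5)
# and a transport state (velocity 3, diffusivity 0), switching rates 2 and 1.
A = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
v_eff, D_eff = effective_transport(A, np.array([0.0, 3.0]), np.array([0.5, 0.0]))
```

For these parameters the stationary distribution is \((1/3, 2/3)\), so \(v_{\text {eff}} = 2\), and the formula gives \(D_{\text {eff}} = 5/6\).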
The term \(\varvec{{d}} \cdot \varvec{{\pi }}\) above reflects the contributions to the asymptotic diffusivity from pure diffusion, while the second term captures the interactions between the advection and reaction terms. Applications of quasi-steady-state reduction to biophysical systems with state switching and diffusion can be found in Newby and Bressloff (2010b, 2010c), Bressloff and Newby (2011), Bressloff and Newby (2013), Bressloff and Xu (2015). Homogenization of Brownian motor models was conducted in Pavliotis (2005), Kramer et al. (2010). Summary of Method Based on Regeneration Cycles The foregoing methods (Pavliotis and Stuart 2008; Ciocanel et al. 2017; Bressloff and Newby 2013) rely on the fully Markovian structure of the dynamics, with the state-switching process in particular taking the form of a continuous-time Markov chain with exponentially distributed state durations. In this work, we consider a generalized framework in which we require only that the sequence of states visited form a discrete-time recurrent Markov chain, but do not require exponentially distributed state durations, so the state-switching process J(t) need not be a continuous-time Markov chain. Moreover, we allow for more general random spatial dynamics within a state that also need not be fully Markovian. Our framework and method of analysis rather require only a regenerative structure of the dynamics, with repeated returns to a base state, at which moments the future randomness is decoupled from the previous history. For simplicity in exposition, we will restrict attention to the spatially homogeneous setup where all switching statistics and transport coefficients within each state are independent of spatial location, and comment in the discussion in Sect. 6 on how our results might extend to the spatially inhomogeneous context. 
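To make the generalized framework concrete, the following sketch (all parameters hypothetical) simulates such a semi-Markov switching process: the state sequence is drawn from a discrete-time Markov chain with zero diagonal, but the dwell times are gamma distributed rather than exponential, so the state process is not a CTMC:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state semi-Markov model: embedded transition matrix P
# (zero diagonal), gamma-distributed dwell times, piecewise-constant velocities.
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])
shape = np.array([2.0, 1.5, 3.0])   # gamma shape per state (assumed)
scale = np.array([0.5, 1.0, 0.2])   # gamma scale per state (assumed)
v = np.array([1.0, -0.5, 0.0])      # velocity per state (assumed)

def sample_path(n_steps, j0=0):
    """Return visited states, dwell times, and displacements along one path."""
    states, taus, xis = [], [], []
    j = j0
    for _ in range(n_steps):
        tau = rng.gamma(shape[j], scale[j])   # non-exponential dwell time
        states.append(j)
        taus.append(tau)
        xis.append(v[j] * tau)                # ballistic displacement in state j
        j = rng.choice(3, p=P[j])             # embedded Markov chain step
    return np.array(states), np.array(taus), np.array(xis)

states, taus, xis = sample_path(100_000)
v_long_run = xis.sum() / taus.sum()   # long-run velocity estimate
```

The long-run velocity estimated this way converges to \(\sum_j \pi_j v_j {\mathbb {E}}\tau_j / \sum_j \pi_j {\mathbb {E}}\tau_j\), with \(\pi\) the stationary distribution of the embedded chain; for the parameters above this is about 0.076.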
We use renewal reward theory and a functional central limit theorem to derive effective drift and diffusion for these more general switching diffusion systems in terms of the analysis of a single regeneration cycle. The calculation framework also results in an expression for the expected run length of cargo undergoing switching diffusion. Our approach builds on previous applications of renewal reward processes modeling motor-stepping and chemical cycles of bead-motor assays (Krishnan and Epureanu 2011; Hughes et al. 2011, 2012; Miles et al. 2018; Shtylla and Keener 2015) and extends the technique to accommodate more complex models with dynamics depending on the amount of time spent in the current state, as described in Sect. 2. Given the renewal reward framework, the analysis of the model reduces to computing the correlated spatial displacement and time duration of each cycle, which we study in Sect. 3. We illustrate the usefulness of the probabilistic renewal reward techniques with several case studies. In Sect. 4, we show that our method of deriving effective velocity and diffusivity agrees with predictions in Ciocanel et al. (2017) arising from a Lyapunov–Schmidt reduction approach equivalent to homogenization for partial differential equations describing mRNA concentrations as in (1). In Sect. 5, we show that our method also agrees with previous theoretical and numerical analyses of transport properties for cargo pulled by teams of molecular motors. In the case of tug-of-war dynamics, with cargo transported by teams of opposite-directed motors, our framework provides predictions on the dependence of effective diffusivity on the ratio of stall to detachment force of the pulling motors. 
We also apply this method to a model accounting for increased reattachment kinetics when motors are already attached to the cargo and show that teams of opposite-directed motors have lower effective velocities but larger run lengths than teams consisting of the dominant motor only. Finally, we show that our effective diffusivity calculation agrees with stochastic simulations of sliding microtubule behavior driven by teams of bidirectional motors for a large range of load sensitivity. Since the method reduces to calculating the moments of times and spatial displacements in each dynamic state, this framework may also be useful for analyzing complex, higher-dimensional models [such as Bergman et al. (2018)], where cellular components interact in complicated ways, but moments of the dynamical changes in each state could be estimated numerically. As the experimental data on motor interactions develop rapidly, the framework proposed may prove useful in analyzing novel models and in understanding the dependence of effective transport properties on model parameters. Mathematical Framework and Examples The type of path we have in mind in this work is displayed in Fig. 1, a simulated continuous, stochastic process that switches between several stereotypical behaviors. Let the real-valued process \(\{X(t) : t \ge 0\}\) be the time-dependent position of a particle and let \(\{J(t) : t \ge 0\}\) denote the time-dependent underlying (e.g., biophysical) state, taking values from the finite state space \(S=\{0, 1, 2, \ldots , N\}\). Switches between the states take place at the random times \(\{t_k : k \in {\mathbb {N}}\}\), and we use \(\{J_k : k \in {\mathbb {N}}\}\) to denote the state during the kth time interval \([t_{k-1},t_k)\). We set \(t_0=0\) and \(J_1=J(0)\). We assume that the sequence of states \(\{J_k : k \in {\mathbb {N}}\}\) forms a time-homogeneous recurrent Markov chain with zero probability to remain in the same state on successive epochs. 
Given the state \(J_k\), the associated state duration \( t_{k}-t_{k-1}\) and spatial displacement \(X(t_{k}) - X(t_{k-1})\) are conditionally independent of all other random variables in the model (but not necessarily of each other). Moreover, the conditional joint distribution of \( t_{k}-t_{k-1} \) and \( X(t_{k})-X(t_{k-1}) \) given \( J_k \) depends only on the value of \(J_k\) and not separately on the index k. In other words, the dynamics of (J(t), X(t)) have a statistically time-homogeneous character. An example of the type of particle trajectory considered in this work. This simulated trajectory corresponds to the position of intracellular cargo (such as a vesicle) experiencing periods of active and diffusive transport. The path shown here illustrates switches between a forward transport state, a backward transport state, a stalled state, and a freely diffusing state in the framework of Eq. (5). The dashed vertical lines indicate random times \(\{t_k : k \in {\mathbb {N}}\}\) when there are switches in the biophysical state. The base "renewal" state is free diffusion, and the red dashed vertical lines correspond to times \(\{T_k : k \in {\mathbb {N}}\}\) when the system enters the base state. We denote the times spent in each state by \(\tau _k\), as detailed in Sects. 2.1 and 2.2. In the language of the paper, the red lines correspond to the regeneration times and the total spatial displacements and times between these regeneration times are the "rewards" and the cycle durations, respectively (see Sect. 
2.1) One general subclass of the processes considered can be expressed as follows: The random times \( \{t_k\}_{k=0}^{\infty } \) are generated by sampling \( t_{k}-t_{k-1} \) independently from their conditional marginal distributions given the Markov chain states \(J_k\), and then conditioned upon these random variables, the spatial process X(t) is governed by a stochastic differential equation with coefficients depending on the current state, the value of X upon entry into the current state, and the time since entry into the current state. That is, we express the conditional dynamics of X(t) as: $$\begin{aligned} \begin{aligned} \mathrm {d}X(t) = \sum _{k = 1}^\infty 1_{[t_{k-1},t_{k})} (t) \Big (\alpha _{J_k}\Big (X(t),X(t_{k-1}),t-t_{k-1}\Big ) \mathrm {d}t + \sqrt{2 d_{J_k}} \mathrm {d}W(t)\Big ), \end{aligned} \end{aligned}$$ where \(\alpha _j : {\mathbb {R}}^2\times {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) is a function that describes the drift (deterministic component of the dynamics) of a particle while in the state j, and \(d_j\) is the diffusivity in that state. (In general, the diffusive coefficients might also depend on the position of the particle and the recent history of the process, but we restrict ourselves to memoryless, additive noise for this discussion.) Consider, for example, the stochastic process associated with the PDE (1), for which we set the drift terms in (5) to be \(\alpha _j = v_j\), where \(v_j\) is the constant jth diagonal entry of the velocity matrix V in (1). The diffusion coefficients would correspond to the entries of the diagonal matrix D in (1). The process would switch between states as a CTMC with rate matrix A, which, through standard theory (Lawler 1995, Sec. 
3.2), induces a transition probability matrix for the sequence of states \( \{ J_k : k \in {\mathbb {N}}\}\): $$\begin{aligned} P_{ij}= {\left\{ \begin{array}{ll} \frac{A_{ij}}{{\bar{\lambda }}_i} &{} \text { for } i \ne j, \\ 0 &{} \text { for } i=j.\end{array}\right. } \end{aligned}$$ where \( {\bar{\lambda }}_i \equiv \sum _{j \in S {\setminus } \{i\}} A_{ij} \) is the total transition rate out of state i. The state duration in state i would be exponentially distributed with mean \( {\bar{\lambda }}_i^{-1} \). We can articulate a more biophysically explicit model of motor cargo dynamics that takes into account, for example, the fluctuations of the cargo around the unobserved motor trajectory. Such a process is depicted in Fig. 1 and is inspired by data from kymograph readouts of cargo movement as seen in Encalada et al. (2011) and analyzed by a tool developed by Neumann et al. (2017). For the simulation depicted in Fig. 1, there are four states: a forward processive state, a backward processive state, and a stalled state—each of which is characterized by having a drift with speed \(v_j\) plus Ornstein–Uhlenbeck type fluctuations [as described for example in Smith and McKinley (2018)]—and a freely diffusing state where the drift term equals zero. That is, \(\alpha _0 = 0\) for the freely diffusing state and for \(j > 0\), $$\begin{aligned} \alpha _j(y,y_0,t) = -\frac{\kappa }{\gamma } \big (y - (v_j t + y_0)\big )\,, \end{aligned}$$ where \(\kappa \) is a spring constant, \(\gamma \) is the viscous drag, and \(v_j\) is the velocity associated with the jth state. The term \((v_j t + y_0)\) indicates the theoretical position of a processive molecular motor that is simultaneously bound to the particle and to a microtubule. We note that there are at least two ways that the process X(t) can be considered to be non-Markovian and still fall within the set of models to which our results apply. 
The first, which is captured by the drift term (6), is that the process X(t) has memory in the sense that resolving X(t) on any interval in \((t_k, t_{k+1})\) depends on the value \(X(t_k)\). A second allowable non-Markovian dynamic can be obtained by choosing the state duration times \(t_k - t_{k-1}\) given state \(J_k \) to have a non-exponential distribution. As long as the stochastic process of states \(\{J_k\}\) is a time-homogeneous, positive recurrent Markov chain, the technique we present will apply. In Sect. 2.3, we share a few examples from the molecular motors literature that include detailed assumptions about the set of achievable states and transitions among them. We note that these examples vary in their assumptions about fluctuations about mean behavior. In some cases, the dynamics are assumed to be "piecewise deterministic," similar to the class of models studied by Brooks (1999) in which each state is characterized by a fixed velocity parameter \(\alpha _j = v_j\) with the state diffusivity \(d_j\) set to zero. In some of the other examples, fluctuations about the mean are included and would contribute to the long-term diffusivity as a result. Of course, fluctuations are always present in these dynamics (sometimes due to variability in the motor stepping, sometimes due to fluctuations in cargo position). There are natural ways to add these considerations to the models in Sect. 2.3 and express the dynamics within the framework of Eq. (5). Decomposition into Regenerative Cycles and Renewal Reward Structure Here we outline our procedure for calculating the effective velocity and diffusivity of particles undergoing switching dynamics. The strategy is to break the process into independent "cycles" that are marked by returns to a chosen base state. As shown in Krishnan and Epureanu (2011), the analysis using the renewal reward structure is not affected by this initial choice of base state. 
An elementary exposition of this "dissection principle" concept can be found in Resnick (1992, Sec. 2.5). We define these times of re-entry into the base state as regeneration times \(\{T_n\}\). In what follows, we will view the consecutive spatial displacements and time durations of the regenerative cycles to be the rewards and cycle durations of a classical renewal reward process (Cox 1962). Because the cycle statistics are independent and identically distributed after the first regeneration time \( T_1\), we define (in the sense of distribution) random variables for a generic cycle \(n\ge 2\): $$\begin{aligned} \begin{aligned} \varDelta X&{\mathop {=}\limits ^{{\mathcal {D}}}}X(T_{n}) - X(T_{n-1}); \qquad \varDelta T {\mathop {=}\limits ^{{\mathcal {D}}}}T_{n}-T_{n-1}; \quad \text { and} \\ M&{\mathop {=}\limits ^{{\mathcal {D}}}}\sup _{t\in [T_{n-1},T_{n}]} |X(t) - X(T_{n-1})|. \end{aligned} \end{aligned}$$ We rely on the functional central limit theorem (FCLT) presented in Serfozo (2009) for our asymptotic results. To this end, we define the quantities $$\begin{aligned} \begin{aligned} \mu&:= {\mathbb {E}}(\varDelta T); \quad a := \frac{{\mathbb {E}}(\varDelta X)}{{\mathbb {E}}(\varDelta T)}; \text { and} \\ \sigma ^2&:= \text {Var}(\varDelta X - a \varDelta T). \end{aligned} \end{aligned}$$ As in previous work on molecular motor systems (Hughes et al. 
2011, 2012), the FCLT justifies defining the effective (long-run) velocity and effective (long-run) diffusivity of the process X(t) in terms of properties of the regenerative increments as follows: $$\begin{aligned} v_{\text {eff}}&:= \lim _{t \rightarrow \infty } \frac{1}{t} X(t) = a = \frac{{\mathbb {E}}(\varDelta X)}{{\mathbb {E}}(\varDelta T)}; \end{aligned}$$ $$\begin{aligned} D_\text {eff}&:= \lim _{t \rightarrow \infty } \frac{1}{2t} \text {Var}(X(t)) \nonumber \\&\,= \frac{\sigma ^2}{2 \mu } = \frac{1}{2 {\mathbb {E}}(\varDelta T)} \Big (\text {Var}(\varDelta X) + v_{\text {eff}}^2 \text {Var}(\varDelta T) - 2 v_{\text {eff}} \text {Cov}(\varDelta X, \varDelta T)\Big ). \end{aligned}$$ In more technically precise terms, the FCLT states: For \(r \in {\mathbb {Z}}_+\), define \(Y_r(t) := (X(rt) - art)/(\sigma \sqrt{r/\mu }).\) If \(a, \mu , \sigma , {\mathbb {E}}(M)\), and \({\mathbb {E}}((\varDelta T)^2)\) are all finite, then \(\lim _{r \rightarrow \infty } Y_r = B\) in distribution for \(t \in [0,1]\), where \(\{B(t) : t \in [0,1]\}\) is a standard Brownian motion (Whitt 2002). Notation for Events within Each Regeneration Cycle The mathematical analysis in Sect. 3 focuses on calculation of the moments of the cycle duration and spatial displacement (reward) in an independent cycle of the process (introduced in Sect. 2.1). Here we introduce notation for events occurring within a single regeneration cycle. We denote the number of steps in the nth cycle by $$\begin{aligned} \eta ^{(n)} := \min \{k \ge 1 : J_{k+K_{n-1}+1} = 0 \}, \end{aligned}$$ where \(K_0 = 0\) and \(K_n = \sum _{i = 1}^n \eta ^{(i)}\). We will let \(\tau _{k}^{(n)} = t_{K_{n-1}+k}-t_{K_{n-1}+k-1}\) denote the times spent in each step of the nth cycle and \( \xi _{k}^{(n)} = X(t_{K_{n-1}+k})-X(t_{K_{n-1}+k-1}) \) denote the corresponding spatial displacements. 
The total time \(\varDelta T\) and displacement \(\varDelta X\) accrued in a cycle \( n\ge 2 \) before returning to the base state are then naturally the sum of these stepwise contributions: $$\begin{aligned} \varDelta T := \sum _{k=1}^{\eta ^{(n)}} \tau _k^{(n)} \text { and } \varDelta X := \sum _{k=1}^{\eta ^{(n)}} \xi _k^{(n)}. \end{aligned}$$ In what follows, we drop the superscript denoting the index n of the cycle, since the cycles have statistically independent and identically distributed behavior for \( n \ge 2\). We will decompose each cycle into what is accrued during the first step (\(\tau _1\) and \(\xi _1\)) associated with the visit to the base state, and what accrues in all subsequent steps in the cycle, which we label $$\begin{aligned} \varDelta {\tilde{T}} := \varDelta T - \tau _1 \text { and } \varDelta {{\tilde{X}}} := \varDelta X - \xi _1 \,. \end{aligned}$$ For each state \(j \in S\) of the underlying Markov chain, let \(\{\tau _k(j),\xi _k (j)\}_{k = 1}^\infty \) be a sequence of iid pairs of random variables drawn from the conditional joint distribution of durations and displacements occurring during a sojourn in state j. The rewards collected in each step can then be written as $$\begin{aligned} \tau _k = \sum _{j=0}^{N}\tau _k(j) 1_{\{J_k = j\}} \text { and } \xi _k = \sum _{j=0}^{N}\xi _k(j) 1_{\{J_k = j\}} \,. \end{aligned}$$ In the statements of our main theorems, it will be useful to have a notation for a vector of random variables with distributions for the time durations and spatial displacements that are associated with the states \(S = \{0, 1, \ldots , N\}\): $$\begin{aligned} \varvec{{\tau }} {\mathop {=}\limits ^{{\mathcal {D}}}}(\tau (0),\tau (1),\ldots ,\tau (N)) \text { and } \varvec{{\xi }} {\mathop {=}\limits ^{{\mathcal {D}}}}(\xi (0),\xi (1),\ldots ,\xi (N)). 
\end{aligned}$$ So, for any step number \(k \in {\mathbb {N}}\), we have that the vector \((\tau _k(0),\tau _k(1),\ldots ,\tau _k(N))\) is equal in distribution to the vector \(\varvec{{\tau }}\) and likewise for the spatial displacements. The four-state example illustrated in Fig. 1 is just one of many models for intracellular transport that is carried out by multiple molecular motors. To provide context for this framework and for our result in Sect. 3, Proposition 1, here we introduce several canonical examples from the literature where intracellular transport of cargo can be modeled as a stochastic process with regenerative increments. Often, cargo fluctuations are neglected in models when a motor cargo complex is in a processing state (Müller et al. 2008, 2010; Kunwar and Mogilner 2010). This is equivalent to taking a limit in which the cargo is effectively instantaneously restored by the motor cargo tether to a fixed mechanical equilibrium configuration with respect to the motor. (2-state advection–diffusion model of particle transport). Consider a 2-state advection–diffusion model for the dynamics of protein particles (such as mRNA) as illustrated in Ciocanel et al. (2017, Figure 3A), with a freely diffusing state and an active transport state. Assume that the times spent by the particles in each state are drawn from an exponential distribution $$\begin{aligned} \tau (0)&\sim \text {Exp}(\beta _2)\,,\\ \tau (1)&\sim \text {Exp}(\beta _1)\,. \end{aligned}$$ Here \(\beta _1\) and \(\beta _2\) are the transition rates between states and the notation \( \text {Exp} (r) \) denotes an exponential distribution with parameter r (equal to the inverse of the mean). 
The spatial displacement in each state is given by: $$\begin{aligned} \xi (0)&= \sqrt{2D\tau (0)}Z \,,\\ \xi (1)&= v \tau (1)\,, \end{aligned}$$ where D is the diffusion coefficient in the freely diffusing state, v is the speed in the active transport state, and Z is an independent standard normal random variable. Example 2 (4-state advection–reaction–diffusion model of particle transport) More realistic representations of the dynamics of cellular protein concentrations lead to considering the more complex 4-state model illustrated in Ciocanel et al. (2017, Figure 3B), where particles may diffuse, move in opposite directions, or be paused. The state durations are exponentially distributed, with the switching rates between dynamical states provided in Ciocanel et al. (2017, Figure 3B), and the spatial displacements in each state are given by: $$\begin{aligned} \xi (0)&= \sqrt{2D\tau (0)}Z \,,\\ \xi (1)&= v_+ \tau (1) \,,\\ \xi (2)&= v_- \tau (2) \,,\\ \xi (3)&= 0\,, \end{aligned}$$ with \(v_+\) the particle speed in the forward active transport state and \(v_-\) the particle speed in the backward active transport state. Example 3 (Cooperative models of cargo transport) Consider the cooperative transport models proposed in Klumpp and Lipowsky (2005) and Kunwar and Mogilner (2010), where processive motors move cargo in one direction along a one-dimensional filament. These models assume a maximum number N of motor proteins, firmly bound to the cargo, that may act simultaneously in pulling the cargo in a specified direction (see Klumpp and Lipowsky (2005, Figure 1) for model visualization). The biophysical state (dynamic behavior) is defined by the number \(0 \le n \le N \) of these motors that are bound to a filament and therefore actively contributing to transport. In a state with n motors attached to a filament, the cargo moves at a velocity \(v_n\), motors can unbind from the filaments with rate \(\epsilon _n\), or additional motors can bind to the filaments with rate \(\pi _n\).
The expressions for these transport model parameters are reproduced from Kunwar and Mogilner (2010), together with a nonlinear force–velocity relation: $$\begin{aligned} v_n(F)&= v \left( 1-\left( \frac{F}{nF_s}\right) ^w \right) \,, \end{aligned}$$ $$\begin{aligned} \epsilon _n(F)&= n \epsilon e^{F/(nF_d)} \,, \end{aligned}$$ $$\begin{aligned} \pi _n&= (N-n)\pi \,. \end{aligned}$$ Here v is the load-free velocity of the motor, \(\epsilon \) is the load-free unbinding rate, and \(\pi \) is the motor binding rate. F is the externally applied load force, \(F_s\) is the stall force, and \(F_d\) is the force scale of detachment. The exponent w determines the nature of the force–velocity relation considered, with \(w=1\) corresponding to a linear relation, \(w<1\) corresponding to a concave sub-linear force–velocity curve, and \(w>1\) corresponding to a convex super-linear force–velocity curve (Kunwar and Mogilner 2010). The times and displacements in each state n (with \(0 \le n \le N\) motors bound to the filaments) are therefore given by: $$\begin{aligned} \tau (n)&\sim \text {Exp}\big (r_{\mathrm {out}}(n)\big )\,, \nonumber \\ \xi (n)&= v_n(F) \tau (n) \,, \end{aligned}$$ where \(r_{\mathrm {out}}(n) = \epsilon _n(F)+\pi _n\) is the transition rate out of the state with n motors [see Klumpp and Lipowsky (2005, Figure 1)]. Example 4 (Tug-of-war models of cargo transport) Cargoes often move bidirectionally along filaments, driven by both plus- and minus-directed motors. For example, kinesin moves cargo toward the plus end of microtubules while dynein moves it toward the minus end. In Müller et al. (2008, 2010), the authors propose a model where a tug-of-war between motors drives cargo in opposite directions, with transport by several motors leading to an increase in the time the cargo remains bound to a microtubule and is pulled along a particular direction.
In these models, teams of at most \(N_+\) plus- and \(N_-\) minus-end motors are bound to the cargo, and the biophysical state is given by the pair of indices \( (n_+,n_-)\) with \( 0 \le n_+ \le N_+ \), \( 0\le n_- \le N_- \) indicating the number of plus and minus motors bound to the filament and thereby contributing actively to the transport (see Müller et al. (2008, Figure 1) for model visualization). A key assumption for this model is that motors interact when bound to the filament since opposing motors generate load forces, and motors moving in the same direction share the load. In addition, they assume that motors move with the same velocity as the cargo in any state (Müller et al. 2008, 2010). This model uses the following expressions for the transport parameters: $$\begin{aligned} v_c(n_+,n_-)&= \frac{n_+F_{s+}-n_-F_{s-}}{n_+F_{s+}/v_{f+} + n_-F_{s-}/v_{b-}} \,, \end{aligned}$$ $$\begin{aligned} \epsilon _{+}(n_+)&= n_+ \epsilon _{0+} e^{F_c/(n_+F_{d+})} \,, \end{aligned}$$ $$\begin{aligned} \pi _+(n_+)&= (N_+-n_+)\pi _{0+} \,. \end{aligned}$$ Here indices \(+\) and − refer to the plus- and minus-end directed motors under consideration. The model parameters are as follows: \(F_s\) is the stall force, \(F_d\) is the force scale for detachment, \(\epsilon _0\) is the load-free unbinding rate, \(\pi _0\) is the motor binding rate, \(v_f\) is the forward velocity of the motor (in its preferred direction of motion), and \(v_b\) is the slow backward velocity of the motor considered. Equation (16) applies for the case when \(n_+F_{s+}>n_-F_{s-}\) [stronger plus motors, Müller et al. (2008)], and an equivalent expression with \(v_{f+}\) replaced by \(v_{b+}\) and \(v_{b-}\) replaced by \(v_{f-}\) holds for \(n_+F_{s+}\le n_-F_{s-}\) (stronger minus motors). Equivalent expressions for the binding and unbinding rates hold for the minus-end directed motors.
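As a concrete reading of Eq. (16) and its stronger-minus counterpart, the sketch below implements the cargo velocity \(v_c(n_+,n_-)\) with the case switch between the two motor teams. This is an illustration only; the default parameter values are placeholders, not values from Müller et al. (2008).

```python
def v_c(n_p, n_m, F_sp=6.0, F_sm=5.0, v_fp=1.0, v_bp=0.01, v_fm=0.8, v_bm=0.01):
    """Cargo velocity with n_p plus and n_m minus motors engaged.

    The stronger team sets the direction: when n_p*F_sp > n_m*F_sm the plus
    motors move forward at (up to) v_fp and drag the minus motors backward at
    v_bm; otherwise the forward/backward roles of the two teams are exchanged.
    """
    num = n_p * F_sp - n_m * F_sm
    if num > 0:                                   # stronger plus motors, Eq. (16)
        den = n_p * F_sp / v_fp + n_m * F_sm / v_bm
    else:                                         # stronger minus motors
        den = n_p * F_sp / v_bp + n_m * F_sm / v_fm
    return num / den
```

With only one team bound, the formula collapses to that team's forward velocity (with the appropriate sign), which is a quick sanity check on the case switch.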
In the case of stronger plus motors, the cargo force \(F_c\) when pulled by \(n_+\) plus and \(n_-\) minus motors is given by Müller et al. (2008): $$\begin{aligned} F_c(n_+,n_-)&= \lambda n_+ F_{s+} + (1-\lambda )n_- F_{s-} \,, \nonumber \\ \lambda&= \frac{1}{1+ \frac{n_+F_{s+}v_{b-}}{n_-F_{s-}v_{f+}}}\,, \end{aligned}$$ with equivalent expressions for stronger minus motors as described above and in Müller et al. (2008). The times and displacements accumulated at each time step and in each state are defined as in Eq. (15) in Example 3.

Analysis within a Single Cycle

From standard renewal reward and functional central limit theorem results, which we detailed in Sect. 2, we have related the computation of effective velocity and diffusivity via Eqs. (7) and (8) to analyzing the first and second moments and correlation of the spatial displacement and time spent in each regeneration cycle. In this section, the main result is Proposition 1, which provides these statistics. We begin with Lemma 1, which recalls a standard recursion formula for the moments of the reward accumulated until hitting a designated absorbing state. We include the proof of this lemma for completeness and as an example of the moment-generating function approach we use in Lemma 2. In Proposition 1, we address the calculation of total displacement and time duration during the regeneration cycles described in Sect. 2. Let 0 be the base state that marks the beginning of a new renewal cycle. We denote the set of remaining states as \(S \backslash \{0\}\), and define \({\tilde{P}}\) as the \(N \times N\) substochastic matrix containing the probabilities of transition among these non-base states only. Generally, we use the symbol ~ to refer to a vector or a matrix whose components corresponding to the base state have been removed. Let R denote the total reward accumulated until the state process hits the base state. Note that the value of R depends on the initial state of the process.
In our motor transport examples, R corresponds to the time \(\varDelta {\tilde{T}}\) or the displacement \(\varDelta {\tilde{X}}\) accumulated after stepping away from the base state and before returning to the base state. Let \(\rho _k\) denote the reward accumulated at each time step, recalling that time increments are denoted \(\tau _k\) and displacement increments \(\xi _k\) in Sect. 2.2. By introducing random variables \( \rho _k (j) \) for \( j \in S\) and \( k \in {\mathbb {N}}\) that indicate the reward received at step k if the particle is in state j at that step, we can use indicator variables for the state to express: \(\rho _k =\sum _{j=1}^N \rho _k(j) 1_{\{J_k = j\}}\) and $$\begin{aligned} R = \sum _{k=1}^{\eta } \rho _k = \sum _{k=1}^{\eta } \sum _{j=1}^N \rho _k(j) 1_{\{J_k = j\}} \,. \end{aligned}$$ In the same way that we defined the distribution for the time durations and spatial displacements through the random vectors \(\varvec{{\tau }}\) and \(\varvec{{\xi }}\) in Eq. (11), we define the distribution of generic rewards through the vector of random rewards associated with each state: $$\begin{aligned} \varvec{{\tilde{\rho }}} = (\rho (1), \rho (2), \ldots ,\rho (N)). \end{aligned}$$ The tilde notation is used here for consistency with our convention that a tilde indicates the base state is excluded. When we need component-wise multiplication, we use the Hadamard power notation [see Eq. (4)]: $$\begin{aligned} \varvec{{\tilde{\rho }}}^{\circ n} = (\rho ^n(1), \rho ^n(2),\ldots ,\rho ^n(N)). \end{aligned}$$ We define the moment-generating functions of the reward collected until the state process hits the base state, and of the reward in state i, respectively, by the following vectors: $$\begin{aligned} \varvec{{\phi }}(s): \,\, \phi _i(s)&:= {\mathbb {E}}(e^{sR} \, | \,J_1=i)\,, \text {and} \nonumber \\ \varvec{{\psi }}(s): \,\, \psi _i(s)&:= {\mathbb {E}}(e^{s \rho (i)})\,.
\end{aligned}$$ Characteristic functions could alternatively be used to handle rewards whose higher moments are not all finite; the results for the low-order moments we calculate would be the same. Note that here and in the following, we will typically use index i to refer to states \(i \in S \backslash \{0\}\). In Lemma 1, \(J_1\) is the state in the initial step of the process. We seek a general recursion relation for \({\mathbb {E}}(R^n|J_1 = i)\) and denote the corresponding vector of moments for all \(i \in S \backslash \{0\}\) by \({\mathbb {E}}_{S \backslash \{0\}}(R^n)\). The following result is a variation on similar recursion formulas for rewards accumulated in Markov chains (Hunter 2008; Palacios 2009). Lemma 1 Let \(\{J_k\}_{k \ge 1}\) be a time-homogeneous, positive recurrent Markov chain with a transition probability matrix P (over a finite state space S) that has zeroes for each of its diagonal entries. Let the reward variables R and \(\varvec{{{\tilde{\rho }}}}\) be defined as in Eqs. (19) and (20), respectively. For \(n \in {\mathbb {N}}\), define the column vector $$\begin{aligned} {\mathbb {E}}_{S {\setminus } \{0\}}(R^n) := \big (E(R^n \, | \,J_1 = 1), \, \ldots \, , E(R^n \, | \,J_1 = N)\big ).
\end{aligned}$$ Then, this vector—the expected reward accumulated up to the first time that the state process \(\{J_k\}\) hits the base state 0—satisfies the recursion relation $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(R)= & {} (I-{\tilde{P}})^{-1} {\mathbb {E}}(\varvec{{{\tilde{\rho }}}}); \nonumber \\ {\mathbb {E}}_{S \backslash \{0\}}(R^n)= & {} (I-{\tilde{P}})^{-1} \left( {\mathbb {E}}(\varvec{{{\tilde{\rho }}}}^{\circ n}) + \sum _{m=1}^{n-1} \left( {\begin{array}{c}n\\ m\end{array}}\right) \mathrm {diag}({\mathbb {E}}(\varvec{{{\tilde{\rho }}}}^{\circ (n-m)})) \, {\tilde{P}} \, {\mathbb {E}}_{S \backslash \{0\}}(R^m) \right) .\nonumber \\ \end{aligned}$$ Here \( {\tilde{P}} \) is the substochastic matrix component of P excluding the base state 0, and \(\varvec{{\tilde{\rho }}}^{\circ n}\) is the Hadamard n-th power vector defined in Eq. (21). Let R, the reward accumulated until hitting the base state 0, be decomposed into the reward from the first and from subsequent steps as follows: \(R = \rho _1 + \check{R}\). We calculate the moment-generating function of R conditioned on the initial state \(J_1 = i\) as follows: $$\begin{aligned} \phi _i(s) :=&\, {\mathbb {E}}(e^{sR}\, | \,J_1=i) \nonumber \\ =&\, \sum _{j \in S} {\mathbb {E}}(e^{sR} \, | \,J_1=i,J_2=j) P_{ij} \nonumber \\ =&\, \sum _{j \in S} {\mathbb {E}}(e^{s\rho _1} e^{s\check{R}}\, | \,J_1=i,J_2=j) P_{ij}\nonumber \\ =&\, {\mathbb {E}}(e^{s \rho _1}\, | \,J_1=i) \left( {\mathbb {E}}(e^{s\check{R}}|J_2=0)P_{i0} + \sum _{j \in S \backslash \{0\}} {\mathbb {E}}(e^{s\check{R}} \, | \,J_2=j) P_{ij} \right) \nonumber \\ =&\, {\mathbb {E}}(e^{s \rho (i)}) \left( P_{i0} + \sum _{j \in S \backslash \{0\}} {\mathbb {E}}(e^{sR} \, | \,J_1=j) P_{ij} \right) \nonumber \\ =&\, \psi _i(s) \left( P_{i0} + \sum _{j \in S \backslash \{0\}} \phi _j(s) P_{ij} \right) , \end{aligned}$$ where \(\psi _i(s)\) is defined in Eq. (22). 
In the fourth line, we used the Markov property, and in the fifth line, we used the fact that $$\begin{aligned} (\check{R} \, | \,J_2=j) \sim (R \, | \,J_1=j)(1-\delta _{j0}) \end{aligned}$$ where \( \delta _{ij}\) is the Kronecker delta function. Defining $$\begin{aligned} f_i(s)&= \psi _i(s) P_{i0}\,, \; i \in S \backslash \{0\} \,,\\ G(s)&= \{ G(s,i,j); i,j \in S \backslash \{0\}: G(s,i,j)=\psi _i(s) P_{ij}\} \,, \end{aligned}$$ we can write Eq. (24) in matrix–vector form: $$\begin{aligned} \varvec{{\phi }}(s)&= \varvec{{f}}(s) + G(s) \varvec{{\phi }}(s)\,. \end{aligned}$$ Since the moments of the reward before hitting the base state can be calculated using $$\begin{aligned} {\mathbb {E}}(R^n\, | \,J_1=i) = \frac{\partial ^n}{\partial s^n} \phi _i(s)|_{s=0} \,, \end{aligned}$$ we calculate the derivatives: $$\begin{aligned} \frac{\partial ^n \varvec{{\phi }}(s)}{\partial s^n}&= \frac{\partial ^n \varvec{{f}}(s) }{\partial s^n} + \sum _{m=0}^{n}\left( {\begin{array}{c}n\\ m\end{array}}\right) \frac{\partial ^{n-m} G(s) }{\partial s^{n-m}} \frac{\partial ^m \varvec{{\phi }}(s) }{\partial s^m} \,. \end{aligned}$$ For the first moment (\(n=1\)), each component yields: $$\begin{aligned} \frac{\partial \phi _i(s)}{\partial s}&= P_{i0}{\mathbb {E}}(\rho _1\, | \,J_1=i) + \sum _{j \in S \backslash \{0\}} P_{ij} {\mathbb {E}}(\rho _1\, | \,J_1=i) + \sum _{j \in S \backslash \{0\}} P_{ij}\frac{\partial \phi _j(s)}{\partial s} \\&= {\mathbb {E}}(\rho (i)) + \sum _{j \in S \backslash \{0\}} P_{ij}\frac{\partial \phi _j(s)}{\partial s} \,. \end{aligned}$$ Evaluating at \(s=0\) for \(n=1\), we have $$\begin{aligned} E(R \, | \,J_1 = i) = E(\rho (i)) + \sum _{j \in S {\setminus }\{0\}} P_{ij} E(R \, | \,J_1 = j). \end{aligned}$$ Writing in vector form and solving for \(E_{S {\setminus } \{0\}}(R)\) yields the first part of Eq. (23).
For higher-order moments (\(n>1\)): $$\begin{aligned} \frac{\partial ^n \phi _i(s)}{\partial s^n}&= P_{i0}{\mathbb {E}}(\rho _1^n\, | \,J_1=i) + \sum _{j \in S \backslash \{0\}} P_{ij} {\mathbb {E}}(\rho _1^n\, | \,J_1=i) \\&\qquad + \sum _{m=1}^{n-1}\left( {\begin{array}{c}n\\ m\end{array}}\right) {\mathbb {E}}(\rho _1^{n-m}\, | \,J_1=i) \sum _{j \in S \backslash \{0\}} P_{ij} \frac{\partial ^m \phi _j(s)}{\partial s^m} + \sum _{j \in S \backslash \{0\}} P_{ij}\frac{\partial ^n \phi _j(s)}{\partial s^n} \\&= {\mathbb {E}}(\rho (i)^n) + \sum _{j \in S \backslash \{0\}} P_{ij}\frac{\partial ^n \phi _j(s)}{\partial s^n} \\&\qquad + \sum _{m=1}^{n-1}\left( {\begin{array}{c}n\\ m\end{array}}\right) {\mathbb {E}}(\rho (i)^{n-m}) \sum _{j \in S \backslash \{0\}} \!\! P_{ij} \frac{\partial ^m \phi _j(s)}{\partial s^m} \,. \end{aligned}$$ Evaluating at \(s=0\) gives the recursion relation expressed in the second part of Eq. (23). \(\square \) Corollary 1 Let \(\varvec{{\tau }}\) and \(\varvec{{\xi }}\) denote the vectors of state-dependent time duration and spatial displacements as defined in Eq. (11). Let \(\varDelta T\) and \(\varDelta X\) denote the total time elapsed and displacement accumulated by a state-switching particle up until its state process \(\{J_k\}_{k \ge 1}\) returns to the base state 0 [see Eqs. (9)]. Moreover, recall the first-step decomposition \(\varDelta T = \tau _1 + \varDelta {{\tilde{T}}}\) and \(\varDelta X = \xi _1 + \varDelta {{\tilde{X}}}\) (see Eqs. (10)). Suppose that the state process \(\{J_k\}_{k \ge 1}\) and its associated transition probability matrix P satisfy the assumptions of Lemma 1. 
Then, $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}})&= (I-{\tilde{P}})^{-1} {\mathbb {E}}(\varvec{{{\tilde{\tau }}}}) \,, \end{aligned}$$ $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}})&= (I-{\tilde{P}})^{-1} {\mathbb {E}}(\varvec{{{\tilde{\xi }}}}) \,, \end{aligned}$$ $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}}^2)&= (I-{\tilde{P}})^{-1} \left( {\mathbb {E}}(\varvec{{{\tilde{\tau }}}}^{\circ 2}) + 2 \mathrm {diag}({\mathbb {E}}(\varvec{{{\tilde{\tau }}}})){\tilde{P}} \, {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}}) \right) \,, \end{aligned}$$ $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}}^2)&= (I-{\tilde{P}})^{-1} \left( {\mathbb {E}}(\varvec{{{\tilde{\xi }}}}^{\circ 2}) + 2 \mathrm {diag}({\mathbb {E}}(\varvec{{{\tilde{\xi }}}})){\tilde{P}} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}}) \right) \,, \end{aligned}$$ where \(\varvec{{{\tilde{\tau }}}}\) and \(\varvec{{{\tilde{\xi }}}}\) are the vectors of time durations and spatial displacements excluding the base state. These results follow directly from Lemma 1, with \(\varDelta {{\tilde{T}}}\) and \(\varDelta {{\tilde{X}}}\), respectively, playing the role of the reward R. \(\square \) Lemma 2 Let \(\varvec{{\tau }}\), \(\varvec{{\xi }}\), \(\varDelta {\tilde{T}}\), \(\varDelta {\tilde{X}}\), P, and \(\{J_k\}_{k \ge 1}\) be defined as in Corollary 1. Then, $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}})&= (I-{\tilde{P}})^{-1}\left( {\mathbb {E}}(\varvec{{{\tilde{\tau }}}} \circ \varvec{{{\tilde{\xi }}}}) + \mathrm {diag}({\mathbb {E}}(\varvec{{{\tilde{\xi }}}})){\tilde{P}} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}})\right. \nonumber \\&\qquad + \left. \mathrm {diag}({\mathbb {E}}(\varvec{{{\tilde{\tau }}}})){\tilde{P}} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}}) \right) \,.
\end{aligned}$$ We use an argument similar to the re-arrangement of the moment-generating function in Eq. (25) in the proof of Lemma 1. Here we decompose the time and displacement into the first step after the base state and the subsequent steps: \(\varDelta {\tilde{T}} = \tau _2 + \check{T}\) and \(\varDelta {\tilde{X}} = \xi _2 + \check{X}\). Since we are interested in the cross-moment of the duration and displacement, we consider the following moment-generating function: $$\begin{aligned} \phi _{i}(r,s)&= {\mathbb {E}}(e^{s \varDelta {\tilde{X}}}e^{r \varDelta {\tilde{T}}}|J_1 = 0, J_2 = i) \nonumber \\&= \sum _{j \in S} {\mathbb {E}}(e^{s \varDelta {\tilde{X}}} e^{r \varDelta {\tilde{T}}}|J_1=0,J_2 = i, J_3 = j) P_{ij} \nonumber \\&= \sum _{j \in S} {\mathbb {E}}(e^{s\xi _2}e^{r\tau _2} e^{s \check{X}}e^{r \check{T}}|J_2=i,J_3=j) P_{ij} \nonumber \\&= {\mathbb {E}}(e^{s\xi _2}e^{r\tau _2}|J_2=i) \sum _{j \in S} {\mathbb {E}}(e^{s \check{X}} e^{r \check{T}}|J_2=i,J_3=j) P_{ij} \nonumber \\&= {\mathbb {E}}(e^{s\xi (i)}e^{r\tau (i)}) \sum _{j \in S} {\mathbb {E}}(e^{s \check{X}} e^{r \check{T}}|J_3=j) P_{ij} \nonumber \\&= \psi _i(r,s) P_{i0} + \psi _i(r,s) \sum _{j \in S \backslash \{0\}} \phi _j(r,s) P_{ij} \,, \end{aligned}$$ where \(\psi _i(r,s) = {\mathbb {E}}(e^{s\xi (i)}e^{r\tau (i)})\). 
For the calculation of the cross-term \({\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}})\), we note that \(\frac{\partial ^2 \phi _i}{\partial r \partial s}|_{s=r=0} = {\mathbb {E}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}}|J_2=i)\) and calculate: $$\begin{aligned} \frac{\partial ^2 \phi _i}{\partial r \partial s}&= \frac{\partial ^2 }{\partial r \partial s} \left( \psi _i(r,s) \sum _{j \in S \backslash \{0\}} \phi _j(r,s) P_{ij} + \psi _i(r,s) P_{i0} \right) \nonumber \\&= \frac{\partial }{\partial r} \left( \frac{\partial \psi _i(r,s)}{\partial s} \sum _{j \in S \backslash \{0\}} \phi _j(r,s) P_{ij} + \psi _i(r,s) \sum _{j \in S \backslash \{0\}} \frac{\partial \phi _j(r,s)}{\partial s} P_{ij} + \frac{\partial \psi _i(r,s) }{\partial s} P_{i0} \right) \nonumber \\&= \frac{\partial ^2 \psi _i(r,s)}{\partial s \partial r} \sum _{j \in S \backslash \{0\}} \phi _j(r,s) P_{ij} + \frac{\partial \psi _i(r,s)}{\partial s} \sum _{j \in S \backslash \{0\}} \frac{ \partial \phi _j(r,s)}{\partial r} P_{ij} \nonumber \\&\qquad + \frac{\partial \psi _i(r,s)}{\partial r} \sum _{j \in S \backslash \{0\}} \frac{\partial \phi _j(r,s)}{\partial s} P_{ij} + \psi _i(r,s) \sum _{j \in S \backslash \{0\}} \frac{\partial ^2 \phi _j(r,s)}{\partial s \partial r} P_{ij} + \frac{\partial ^2 \psi _i(r,s) }{\partial s \partial r} P_{i0} \,. \end{aligned}$$ Evaluating the above at \(s=r=0\) yields: $$\begin{aligned} {\mathbb {E}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}}|J_2=i)&= {\mathbb {E}}(\tau _2 \xi _2|J_2 = i) + {\mathbb {E}}(\xi _2|J_2 = i) \sum _{j \in S \backslash \{0\}} P_{ij} E(\varDelta {\tilde{T}}|J_2=j) \nonumber \\&\qquad \qquad + {\mathbb {E}}(\tau _2|J_2 = i) \sum _{j \in S \backslash \{0\}} P_{ij} {\mathbb {E}}(\varDelta {\tilde{X}}|J_2=j) \nonumber \\&\qquad \qquad + \sum _{j \in S \backslash \{0\}} P_{ij} {\mathbb {E}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}}|J_2=j) \,. 
\end{aligned}$$ In vector form, this reads $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}})&= {\mathbb {E}}(\varvec{{{\tilde{\tau }}}} \circ \varvec{{{\tilde{\xi }}}}) + \mathrm {diag}({\mathbb {E}}(\varvec{{{\tilde{\xi }}}})){\tilde{P}} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}}) \nonumber \\&\qquad + \mathrm {diag}({\mathbb {E}}(\varvec{{{\tilde{\tau }}}})){\tilde{P}} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}}) + {\tilde{P}}{\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}}), \end{aligned}$$ and solving for \({\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}})\) yields Eq. (30). \(\square \) An alternative derivation of Eq. (30) would be to use a polarization argument for the expectation of the product: $$\begin{aligned} {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}}) = \frac{1}{4} \left( {\mathbb {E}}_{S \backslash \{0\}}((\varDelta {{\tilde{X}}} + \varDelta {{\tilde{T}}})^2) - {\mathbb {E}}_{S \backslash \{0\}}((\varDelta {{\tilde{X}}}- \varDelta {{\tilde{T}}})^2)\right) \,. \end{aligned}$$ In this approach, the moment-generating function depending on both cycle time \(\varDelta {{\tilde{T}}}\) and cycle displacement \(\varDelta \tilde{X}\) introduced in Eq. (31) is not required, since Lemma 1 can be directly applied to give explicit formulas for the second moments of the rewards \(R=\varDelta {\tilde{X}} + \varDelta {\tilde{T}}\) and \(R=\varDelta {\tilde{X}} - \varDelta {\tilde{T}}\). We proceed to Proposition 1, which provides the quantities necessary to compute the effective velocity and diffusivity of the cargo dynamics using classical theory (see Eqs. (7) and (8) and the procedure in Sect. 2.1).
Proposition 1 (First- and second-order statistics of rewards in a renewal cycle) Consider a regenerative cycle of a discrete-time time-homogeneous recurrent Markov chain that takes its values in the discrete state space \(S=\{0,1,2,\ldots ,N\}\) with transition probability matrix P with zero diagonal entries, started at base state 0 and run until its first return to base state 0. The associated time \( \varDelta T\) and spatial displacement \( \varDelta X \) are defined as in Eq. (9). The random variables \(\tau (0)\) and \(\xi (0)\) have the distributions of the time duration and spatial displacement that are accumulated in the base state, and \(\varvec{{p}}^{(1)}\) is the vector of transition probabilities from the base state in the first step of a cycle, i.e., the first row of P. The moments of the cycle time and displacement rewards are then given by: $$\begin{aligned} {\mathbb {E}}(\varDelta T)&= {\mathbb {E}}(\tau (0)) + \varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}}) \,, \nonumber \\ {\mathbb {E}}(\varDelta X)&= {\mathbb {E}}(\xi (0)) + \varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}}) \,, \nonumber \\ \mathrm {Var}(\varDelta T)&= \mathrm {Var}(\tau (0)) + \varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}}^2) - (\varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}}))^2\,, \nonumber \\ \mathrm {Var}(\varDelta X)&= \mathrm {Var}(\xi (0)) + \varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}}^2) - (\varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{X}}))^2\,, \nonumber \\ \mathrm {Cov}(\varDelta X,\varDelta T)&= \mathrm {Cov}(\tau (0),\xi (0)) + \varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}} \varDelta {\tilde{X}}) \nonumber \\&\qquad - (\varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S \backslash \{0\}}(\varDelta {\tilde{T}}))(\varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S
\backslash \{0\}}(\varDelta {\tilde{X}})) , \end{aligned}$$ where the first, second, and cross-moments of the time \(\varDelta {{\tilde{T}}}\) and the displacement \(\varDelta {{\tilde{X}}}\) are given by Eqs. (26), (27), (28), (29), (30) in Corollary 1 and Lemma 2. With state 0 as base state, we decompose the cycle time into the time spent in the base state \(\tau _1 = \tau (0)\) and the time \(\varDelta {\tilde{T}}\) spent from leaving the base state until returning to the base state. Therefore, the total time in a cycle is given by \(\varDelta T = \tau _1 + \varDelta {\tilde{T}}\), and similarly, the total spatial displacement in a cycle is \(\varDelta X = \xi _1 + \varDelta {\tilde{X}}\). We apply the law of total expectation by conditioning on the state \(J_2\) that the process visits after the base state: $$\begin{aligned} {\mathbb {E}}(\varDelta T)&= {\mathbb {E}}({\mathbb {E}}(\varDelta T|J_2))\nonumber \\&= \sum _{i \in S {\setminus } \{0\}} {\mathbb {E}}(\varDelta T|J_2 = i)P_{0i} \nonumber \\&= \sum _{i \in S {\setminus } \{0\}} {\mathbb {E}}(\tau _1 + \varDelta {{\tilde{T}}}|J_2 = i)P_{0i} \nonumber \\&= {\mathbb {E}}(\tau (0)) + \sum _{i \in S {\setminus } \{0\}} {\mathbb {E}}(\varDelta {{\tilde{T}}}|J_2 = i)P_{0i} \nonumber \\&= {\mathbb {E}}(\tau (0)) + \varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S {\setminus } \{0\}} (\varDelta {{\tilde{T}}}) \,, \end{aligned}$$ where as before \(S \backslash \{0\}\) is the set of transient states and \(P_{0i}\) is the probability of switching from base state 0 to state i. A similar calculation applies to the first moment of the cycle reward \({\mathbb {E}}(\varDelta X)\). 
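The first-moment calculation above amounts to a single matrix inversion followed by a dot product with the first-step transition probabilities. A minimal numpy sketch of Eqs. (26)–(27) combined with the mean identities of Proposition 1 (the two-state numbers at the bottom are a hypothetical illustration, not data from the paper):

```python
import numpy as np

def cycle_means(P, mean_tau, mean_xi):
    """E(Delta T) and E(Delta X) via Corollary 1 and Proposition 1.

    P        : (N+1, N+1) transition matrix over S = {0, 1, ..., N}.
    mean_tau : E(tau(j)) for j in S; mean_xi likewise for xi(j).
    """
    P_tilde = P[1:, 1:]                       # transitions among S \ {0}
    p1 = P[0, 1:]                             # first row of P, base entry dropped
    M = np.linalg.inv(np.eye(len(p1)) - P_tilde)   # fundamental matrix (I - P~)^-1
    ET_tilde = M @ np.asarray(mean_tau)[1:]   # Eq. (26)
    EX_tilde = M @ np.asarray(mean_xi)[1:]    # Eq. (27)
    return mean_tau[0] + p1 @ ET_tilde, mean_xi[0] + p1 @ EX_tilde

# Hypothetical two-state check: E(Delta T) = 1/beta2 + 1/beta1, E(Delta X) = v/beta1.
beta1, beta2, v = 2.0, 1.0, 3.0
P = np.array([[0.0, 1.0], [1.0, 0.0]])
ET, EX = cycle_means(P, [1 / beta2, 1 / beta1], [0.0, v / beta1])
```

The same structure extends to the second and cross moments: each is one more solve against the fundamental matrix, with the right-hand sides given in Eqs. (28)–(30).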
For the second moments, we use the law of total variance as follows: $$\begin{aligned} \mathrm {Var}(\varDelta T)&= {\mathbb {E}}(\mathrm {Var}(\varDelta T|J_2)) + \mathrm {Var}({\mathbb {E}}(\varDelta T|J_2)) \nonumber \\&= {\mathbb {E}}(\mathrm {Var}(\tau _1 + \varDelta {{\tilde{T}}}|J_2)) + \mathrm {Var}({\mathbb {E}}(\tau _1 + \varDelta {{\tilde{T}}}|J_2)) \nonumber \\&= {\mathbb {E}}(\mathrm {Var}(\tau (0)) + \mathrm {Var}( \varDelta {{\tilde{T}}}|J_2)) + \mathrm {Var}({\mathbb {E}}(\tau (0)) + {\mathbb {E}}(\varDelta {{\tilde{T}}}|J_2)) \nonumber \\&= \mathrm {Var}(\tau (0)) + {\mathbb {E}}(\mathrm {Var}( \varDelta {{\tilde{T}}}|J_2)) + \mathrm {Var}({\mathbb {E}}(\varDelta {{\tilde{T}}}|J_2)) \nonumber \\&= \mathrm {Var}(\tau (0)) + \sum _{i \in S{\setminus } \{0\}} \mathrm {Var}( \varDelta {{\tilde{T}}}|J_2=i) P_{0i} \nonumber \\&\qquad + \sum _{i \in S{\setminus } \{0\}} ({\mathbb {E}}(\varDelta {{\tilde{T}}}|J_2=i))^2 P_{0i} - \left( \sum _{i \in S{\setminus } \{0\}} {\mathbb {E}}(\varDelta {{\tilde{T}}}|J_2=i)P_{0i} \right) ^2 \nonumber \\&= \mathrm {Var}(\tau (0)) + \sum _{i \in S{\setminus } \{0\}} {\mathbb {E}}( \varDelta {{\tilde{T}}}^2|J_2=i) P_{0i} - \left( \sum _{i \in S{\setminus } \{0\}} {\mathbb {E}}(\varDelta {{\tilde{T}}}|J_2=i)P_{0i} \right) ^2 \nonumber \\&= \mathrm {Var}(\tau (0)) + \varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S {\setminus } \{0\}}(\varDelta {{\tilde{T}}}^2) -(\varvec{{p}}^{(1)} \cdot {\mathbb {E}}_{S {\setminus } \{0\}}(\varDelta {{\tilde{T}}}))^2 \,, \end{aligned}$$ and similarly for \(\mathrm {Var}(\varDelta X)\). The covariance term can then be obtained from the variance formulas via the polarization identity. \(\square \)

Application to Models of Intracellular Transport

Proposition 1 and the calculation procedure in Sect. 2.1 can be applied to understand the long-term dynamics of protein intracellular transport described in Sect. 2 in Examples 1 and 2.
The effective velocity and diffusivity of proteins are key in understanding large timescale processes such as mRNA localization in frog oocytes (Ciocanel et al. 2017) and cell polarization in the budding yeast (Bressloff and Xu 2015).

2-State Advection–Diffusion Model of Particle Transport

In the following, we consider the 2-state advection–diffusion model for the dynamics of mRNA particles described in Example 1 and illustrated in Ciocanel et al. (2017, Figure 3A). We show how the calculations in Proposition 1 can be applied to determine the large-time effective velocity and diffusivity of the particles. In the 2-state model, the probability transition matrix is simply \(P = \begin{pmatrix} 0 &{}\quad 1\\ 1 &{}\quad 0 \end{pmatrix}.\) In this example, we take the diffusing state as the base state; however, our results do not depend on this choice of base state as mentioned in Sect. 2.1. The substochastic matrix of the probabilities of transition between the other states (Bhat and Miller 2002) is then simply the scalar \({\tilde{P}} = 0\) in this case, while the vector of transitions out of the base state is simply \( \varvec{{p}}^{(1)} = [1].\) The first and second moments of the cycle duration are given by Eqs. (26) and (28) with \((I-{\tilde{P}} )^{-1} = 1\). Similarly, the moments of the spatial displacement are given by Eqs. (27) and (29). In this model, we have that \(S\backslash \{0\}=\{1\}\) and \({\tilde{\tau }}_k^{\circ n} = \tau _k^n(1)\) for the time reward and \({\tilde{\xi }}_k^{\circ n} = \xi _k^n(1)\) for the spatial displacement reward in the active transport state.
In the 2-state system, these values are simply scalars: $$\begin{aligned} \begin{aligned} {\mathbb {E}}_1(\varDelta {\tilde{T}})&= {\mathbb {E}}(\tau (1)) = 1/\beta _1 \,, \\ {\mathbb {E}}_1(\varDelta {\tilde{X}})&= {\mathbb {E}}(\xi (1)) = v/\beta _1 \,,\\ {\mathbb {E}}_1(\varDelta {\tilde{T}} \varDelta {\tilde{X}})&= {\mathbb {E}}(\tau (1) \xi (1)) = 2v/\beta _1^2 \,,\\ {\mathbb {E}}_1(\varDelta {\tilde{T}}^2)&= {\mathbb {E}}(\tau (1)^2) = 2/\beta _1^2\,,\\ {\mathbb {E}}_1(\varDelta {\tilde{X}}^2)&= {\mathbb {E}}(\xi (1)^2) = 2v^2/\beta _1^2 \,. \end{aligned} \end{aligned}$$ The statistics of the cycle are therefore given by: $$\begin{aligned} \begin{aligned} {\mathbb {E}}(\varDelta T)&={\mathbb {E}}(\tau (0)) + {\mathbb {E}}_1(\varDelta {\tilde{T}}) \varvec{{p}}^{(1)}(1) = \frac{1}{\beta _2} + \frac{1}{\beta _1} \,, \\ {\mathbb {E}}(\varDelta X)&={\mathbb {E}}(\xi (0)) + {\mathbb {E}}_1(\varDelta {\tilde{X}}) \varvec{{p}}^{(1)}(1) = 0 + v/\beta _1 = \frac{v}{\beta _1} \,,\\ \text {Var}(\varDelta T)&= \text {Var}(\tau (0)) + {\mathbb {E}}_1(\varDelta {\tilde{T}}^2) \varvec{{p}}^{(1)}(1) - ({\mathbb {E}}_1(\varDelta {\tilde{T}}) \varvec{{p}}^{(1)}(1))^2 = \frac{1}{\beta _2^2} + \frac{1}{\beta _1^2}\,,\\ \text {Var}(\varDelta X)&= \text {Var}(\xi (0)) + {\mathbb {E}}_1(\varDelta {\tilde{X}}^2) \varvec{{p}}^{(1)}(1) - ({\mathbb {E}}_1(\varDelta {\tilde{X}}) \varvec{{p}}^{(1)}(1))^2 = \frac{2D}{\beta _2} + \frac{v^2}{\beta _1^2} \,,\\ \text {Cov}(\varDelta T,\varDelta X)&= \text {Cov}(\tau (0),\xi (0)) + {\mathbb {E}}_1(\varDelta {\tilde{T}} \varDelta {\tilde{X}}) \varvec{{p}}^{(1)}(1) \\&\qquad - \left( {\mathbb {E}}_1(\varDelta {\tilde{T}}) \varvec{{p}}^{(1)}(1)\right) \left( {\mathbb {E}}_1(\varDelta {\tilde{X}}) \varvec{{p}}^{(1)}(1)\right) = \frac{v}{\beta _1^2}\,. \end{aligned} \end{aligned}$$ Equations (7) and (8) then provide expressions for the effective velocity and diffusivity of the particles as in Hughes et al. 
(2011, 2012) and Whitt (2002): $$\begin{aligned} v_{\mathrm {eff}}&= \frac{{\mathbb {E}}(\varDelta X)}{{\mathbb {E}}(\varDelta T)} = v \frac{\beta _2}{\beta _1+\beta _2}\,, \nonumber \\ D_{\mathrm {eff}}&= \frac{1}{2 {\mathbb {E}}(\varDelta T)} (v_{\mathrm {eff}}^2 \text {Var}(\varDelta T) + \text {Var}(\varDelta X) - 2v_{\mathrm {eff}}\text {Cov}(\varDelta T,\varDelta X)) \nonumber \\&= D\frac{\beta _1}{\beta _1+\beta _2} + v^2 \frac{\beta _1 \beta _2}{(\beta _1+\beta _2)^3} \,. \end{aligned}$$ Note that the effective velocity is the speed in the transport state multiplied by the fraction of time the mRNA particles spend in the moving state. The effective diffusivity has a more complicated expression, but clearly shows how this quantity depends on each model parameter. These expressions agree with the results of Eqs. (2) and (3), as outlined in Ciocanel (2017).

4-State Advection–Reaction–Diffusion Model of Particle Transport

Our calculation procedure and Proposition 1 extend to more complicated and realistic models such as the 4-state model described in Example 2 and illustrated in Ciocanel et al. (2017, Figure 3B). By considering the stochastic transitions between dynamic states and the durations and displacements accumulated in each state, the effective velocity and diffusivity of cargo can be calculated in an intuitive way even for such complex models with many transition states. Since this approach requires calculating the inverse of the invertible matrix \(I-{\tilde{P}}\) (see Bhat and Miller 2002; Dobrow 2016) to determine the fundamental matrix, it is easily implemented in a software package such as Mathematica or MATLAB for symbolic derivation of the effective transport properties of models with multiple states [see sample code in the repository on GitHub (2019)]. In Fig. 2, we illustrate the good agreement of the results in Ciocanel et al. (2017) with our calculation procedure in Sect. 2.1 [Proposition 1 combined with Eqs.
(7) and (8)] based on 15 sets of parameters estimated in Ciocanel et al. (2017). In addition, we validate results from both approaches by carrying out numerical simulations of the particle transport process and empirically estimating the effective transport properties. In particular, we set up a Markov chain of the 4-state model in Ciocanel et al. (2017, Figure 3B). For each parameter set, we consider \(N_R = 500\) stochastic realizations of the dynamics, and for each iteration, we run the process until a fixed large time \(T_f = 5\times 10^4\), which keeps the computation feasible. We then estimate the effective velocity and diffusivity as follows: $$\begin{aligned} v_{\mathrm {eff}}&\approx \frac{(\sum _{i=1}^{N_R} X_i(T_f))/N_R}{T_f}\,, \\ D_{\mathrm {eff}}&\approx \frac{\left( \sum _{i=1}^{N_R}(X_i(T_f) - (\sum _{i=1}^{N_R}X_i(T_f))/N_R)^2\right) /(N_R-1)}{2T_f}\,, \end{aligned}$$ where \(X_i(T_f)\) are the simulated final positions of the particle at time \(T_f\) in iteration i. The different parameter sets (labeled by index) in Fig. 2a–b correspond to simulations using parameter estimates based on FRAP mRNA data from different frog oocytes in Gagnon et al. (2013), Ciocanel et al. (2017). The good agreement of the theoretical and simulated effective velocity and diffusivity shows that the analytical approach proposed is a good alternative to potentially costly simulations of the stochastic process up to a large time. The theoretical formulas for effective velocity and diffusivity are long-time asymptotic results, which raises the question of how well they apply at finite times. In Fig. 2c, d, we show the difference between the predicted and simulated effective velocity and diffusivity for each parameter set as a function of simulation time \( T_f \). The convergence rate for the renewal reward asymptotic theory relies on a large number of regeneration cycles. 
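The empirical estimators above can be illustrated on the simpler 2-state model, whose long-time limits are known in closed form. The following is a minimal sketch, with illustrative parameter values and far fewer, shorter realizations than the \(N_R = 500\), \(T_f = 5\times 10^4\) used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state parameters (not the paper's fitted values)
beta1, beta2 = 2.0, 3.0     # rates of leaving the transport / diffusion states
v, D = 1.0, 0.5             # transport speed and free diffusivity
N_R, T_f = 1000, 150.0      # fewer, shorter realizations than in the paper

def final_position():
    """One realization of the 2-state switching process, started diffusing."""
    t, x, diffusing = 0.0, 0.0, True
    while t < T_f:
        rate = beta2 if diffusing else beta1
        dt = min(rng.exponential(1.0 / rate), T_f - t)  # truncate at T_f
        x += rng.normal(0.0, np.sqrt(2.0 * D * dt)) if diffusing else v * dt
        t += dt
        diffusing = not diffusing
    return x

X = np.array([final_position() for _ in range(N_R)])
v_hat = X.mean() / T_f                 # empirical effective velocity
D_hat = X.var(ddof=1) / (2.0 * T_f)    # empirical effective diffusivity

# Long-time theory for comparison
v_true = v * beta2 / (beta1 + beta2)
D_true = D * beta1 / (beta1 + beta2) + v**2 * beta1 * beta2 / (beta1 + beta2)**3
```

With these parameters a realization spans roughly \(T_f/{\mathbb{E}}(\varDelta T) \approx 180\) regeneration cycles, so the finite-time estimates land close to the asymptotic values, consistent with the convergence discussion above.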
We can estimate the number of regeneration cycles in a simulation as \( T_f/{\mathbb {E}}(\varDelta T)\), with the expected cycle time \( {\mathbb {E}}(\varDelta T )\) computed for each parameter set from Eq. (32). For the five parameter sets used in Fig. 2c, d, the cycle time \({\mathbb {E}}(\varDelta T )\) is 114 s, 477 s, 135 s, 175 s, and 747 s, respectively. Across the 15 parameter sets considered, we found that 10 regeneration cycles were usually sufficient for the simulated finite-time velocity and diffusivity to be within 10% of the theoretical asymptotic value.

Fig. 2: a, b Effective velocity (a) and effective diffusivity (b) of particles switching between diffusion, bidirectional transport, and stationary states as in (Ciocanel et al. 2017, Figure 3B) for different parameter sets. Blue triangles correspond to predictions based on the homogenization or equivalent analysis (Ciocanel et al. 2017) of the corresponding PDEs [Eq. (1)], filled red dots correspond to estimates from multiple simulated realizations of the Markov chain, and yellow circles correspond to predictions based on analysis of the corresponding renewal process model combined with Proposition 1. c, d Difference between effective velocity (c) and effective diffusivity (d) as computed by renewal reward asymptotics and Monte Carlo simulations over a finite time \( T_f \). Results from five of the parameter sets from (a, b) are shown, and the axes are in log scale.

Application to Cooperative and Tug-of-War Models of Cargo Transport

The framework presented here also extends to models of cargo particles driven by changing numbers of motor proteins. The analytical calculation of transport properties of cargo pulled by motors in the same or opposite directions could replace or complement costly numerical simulations of individual cargo trajectories.
In the following, we consider both models of cooperative cargo transport with identical motors (Klumpp and Lipowsky 2005; Kunwar and Mogilner 2010) and tug-of-war models of bidirectional transport driven by identical or different motors moving in opposite directions (Müller et al. 2008, 2010). While not discussed here, this framework may also prove useful in analyzing stochastic models of nanoparticle transport in biogels, where states correspond to the number of occupied binding sites on a nanoparticle and to the number of molecular anchors crosslinking it to the polymer matrix (Newby et al. 2017).

Cooperative Models of Cargo Transport

We start by considering the cooperative transport models described in Sect. 2, Example 3, and studied by Klumpp and Lipowsky (2005) and Kunwar and Mogilner (2010), with processive motors that move along a one-dimensional microtubule and transport cargo in only one direction. The cargo movement is described in terms of the force-dependent velocities \(v_n\), unbinding rates \(\epsilon _n\), and binding rates \(\pi _n\) in each state with n motors simultaneously bound to the cargo and the microtubule [see Eqs. (12), (13), and (14)]. In this section, we use the kinetic parameters for conventional kinesin-1 provided in Klumpp and Lipowsky (2005) (see Table 1 in "Appendix"). Our calculation of the effective velocity of cargo agrees with the derivation in Klumpp and Lipowsky (2005), which uses the stationary solution of the master equation for the probabilities of the cargo being in each state (i.e., carried by n motors).
We note that there are two notions of effective velocity (and diffusivity) that can be used in studying this model: one is to calculate the effective velocity of the cargo averaged over the bound states only (the asymptotic velocity without detachment along a theoretical infinite length microtubule) (Klumpp and Lipowsky 2005; Kunwar and Mogilner 2010), and the second is to calculate the overall effective velocity that also accounts for periods of detachment from microtubules. For the \(N=2\) motors model, Klumpp and Lipowsky (2005) and Kunwar and Mogilner (2010) report the average velocity for bound cargo (first notion): $$\begin{aligned} v_{\mathrm {eff}} = v_1 \frac{\pi _0 \epsilon _2}{\pi _0 \epsilon _2+\pi _0\pi _1} + v_2 \frac{\pi _0 \pi _1}{\pi _0 \epsilon _2+\pi _0\pi _1}\,. \end{aligned}$$ Since we are interested in the overall effective velocity of the particles in the context of their full dynamics, we include the state where no motors are bound to the filament in our calculation, so that the effective velocity with respect to the overall dynamics is given by: $$\begin{aligned} v_{\mathrm {eff}}&= v_1 \frac{\pi _0 \epsilon _2}{\epsilon _1\epsilon _2 + \pi _0 \epsilon _2+\pi _0\pi _1} + v_2 \frac{\pi _0 \pi _1}{\epsilon _1\epsilon _2 + \pi _0 \epsilon _2+\pi _0\pi _1}\,. \end{aligned}$$ Using the calculation of the overall effective velocity in (34), we predict a similar dependence of the effective velocity under a range of force loads as in Klumpp and Lipowsky (2005) using the formula (33). The dashed curves in Fig. 3a, c agree with the behavior of sub- and super-linear motors under different load forces as reported in Kunwar and Mogilner (2010, Figure 2C-D), including the fact that sub-linear motors have lower effective velocities for any choice of the load force and for all maximum motor numbers N considered (A, \(w=0.5\)), while super-linear motors are faster and therefore have larger effective velocities than linear motors (C, \(w=2\)). 
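The difference between the two notions of effective velocity can be made concrete by evaluating Eqs. (33) and (34) side by side; the rate values below are illustrative, not the kinesin-1 parameters of Table 1.

```python
# Illustrative rates (not Table 1 values): binding pi0, pi1; unbinding eps1, eps2
v1, v2 = 1.0, 0.9        # cargo speeds with one and two bound motors
pi0, pi1 = 5.0, 5.0
eps1, eps2 = 1.0, 2.0

# Eq. (33): velocity averaged over the bound states only
Z_bound = pi0 * eps2 + pi0 * pi1
v_bound = (v1 * pi0 * eps2 + v2 * pi0 * pi1) / Z_bound

# Eq. (34): overall effective velocity, including the unbound (paused) state
Z_all = eps1 * eps2 + pi0 * eps2 + pi0 * pi1
v_overall = (v1 * pi0 * eps2 + v2 * pi0 * pi1) / Z_all
```

Including the fully detached state can only add zero-velocity weight (the extra term \(\epsilon_1\epsilon_2\) in the normalization), so the overall effective velocity is always below the bound-averaged one.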
Fig. 3: Effective velocity (a, c) and effective diffusivity (b, d) of cargo driven by a maximal number N of forward motor proteins as a function of the load force under various force–velocity exponents w (Eq. (12)). The stall force used for kinesin is \(F_s = 6\) pN (see Table 1 in "Appendix" for all parameters). Solid lines correspond to motors with a linear force–velocity relation, and dashed lines correspond to sub-linear motors with a convex-up velocity–force relation (top row) and, respectively, to super-linear motors with a concave force–velocity relation (bottom row).

The insight from our method lies in the prediction of the effective diffusivity as a function of load for each type of motor. Figure 3b, d shows that the \(N=1\) motor transport case has a large effective diffusivity under no load because of the switching between the paused and moving states. As the force load increases to the stall force \(F_s\), the velocity of the single motor state decreases to 0: \(v_1(F) = v\left( 1-F/F_s \right) \). Therefore, the active transport state switches to a stationary state at \(F=F_s=6\) pN, leading to decreased effective diffusivity as the cargo switches between dynamic states with similar behaviors. For \(N=2\) and \(N=3\), the calculation of the effective diffusivity allows us to revisit the cooperative transport models for a large range of load forces and observe a new phenomenon in the classical models of Klumpp and Lipowsky (2005), Kunwar and Mogilner (2010). The broader sweep of the load force parameter in Fig. 3b, d shows a non-monotonic dependence of the effective diffusivity on load force for all types of motors considered (linear, sub-linear, and super-linear), with an increase in effective diffusivity of cargo at low load forces and a decrease at large load forces.
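The sub-/super-linear ordering can be illustrated with a force–velocity relation of the form \(v(F) = v\,(1-(F/F_s)^w)\), which generalizes the linear \(v_1(F)\) shown above; Eq. (12) itself is not reproduced in this excerpt, so this exact functional form is an assumption of the sketch.

```python
# Assumed force-velocity relation with exponent w (Eq. (12) is not shown in
# this excerpt; v(F) = v*(1 - (F/F_s)**w) is a commonly used form that
# reduces to the linear relation v*(1 - F/F_s) above when w = 1):
F_s, v_max = 6.0, 1.0     # kinesin stall force (pN) and unloaded speed

def v_of_F(F, w):
    """Motor velocity under load F, clamped to zero at and beyond stall."""
    return v_max * max(0.0, 1.0 - (F / F_s) ** w)

F = 3.0  # an intermediate load below stall
v_sub, v_lin, v_sup = v_of_F(F, 0.5), v_of_F(F, 1.0), v_of_F(F, 2.0)
```

At any load below stall, the sub-linear motor (w = 0.5) is slowest and the super-linear motor (w = 2) fastest, matching the ordering of effective velocities reported above; all three stall together at \(F = F_s\).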
While it is not immediately clear what leads to this phenomenon, we conjecture that this observation may be a result of the balance between two competing effects: on the one hand, as the load increases, there is more detachment of motors [see (13)] and thus more frequent switching between transport and stationary states, leading to an increase in effective diffusivity; on the other hand, the increase in load force leads to a decrease in the speeds of the motor-driven cargo states [see (12)] and thus a decrease in effective diffusivity.

Tug-of-War Models of Cargo Transport

In Example 4 in Sect. 2, we consider the case where plus- and minus-directed motors can drive cargo bidirectionally along filaments. The cargo velocities \(v_c(n_+,n_-)\), unbinding rates \(\epsilon _{+/-}(n_+,n_-)\), and binding rates \(\pi _{+/-}(n_{+/-})\) depend on the number of plus motors \(n_+\) and minus motors \(n_-\) in each state.

Fig. 4: Effective velocity (a, c) and effective diffusivity (b, d) of cargo driven by maximum N forward and maximum N backward motor proteins as a function of kinesin-1 stall force \(F_s\); the detachment force is \(F_d = 3\) pN (see Table 1 in "Appendix" for all parameters). Panels (a, b) correspond to identical forward and backward motors with kinetic parameters for kinesin-1, and panels (c, d) correspond to kinesin-1 forward motors and conventional dynein backward motors.

Identical plus and minus motors. With kinesin parameters drawn from Müller et al. (2008) (see Table 1 in "Appendix"), we first calculate the transport properties of cargo in these models for identical plus and minus motors in equal numbers (\(N_+=N_-\)). We vary the stall force of the kinesin motor to determine whether the theoretical effective velocity and diffusivity capture the differences obtained in the numerical simulation studies of Müller et al. (2008, 2010) for weak motors (small stall to detachment force ratio \(f = F_s/F_d\)) and strong motors (large f).
As expected, the effective velocity in this symmetric case of identical motors is zero for all stall forces (see Fig. 4a). The predicted effective diffusivity in Fig. 4b shows that for weak motors, the effective diffusivity is small, and different maximum numbers of motors do not lead to significant differences. This is similar to the results in Müller et al. (2008), where the simulated cargo trajectories show small fluctuations and the probability distribution for the velocity has a single maximum peak corresponding to approximately equal numbers of plus and minus motors attached. However, for strong motors with a larger stall to detachment force ratio, the effective diffusivity increases considerably for all models. This is consistent with the observation in Müller et al. (2008) that strong motors lead to cascades of unbinding of minus motors until only plus motors stay bound (and vice versa), so that the spread of the cargo position is predicted to be larger. The larger motor numbers lead to a more significant increase in effective diffusivity as observed in Müller et al. (2010), where the simulated diffusion coefficient grows exponentially with motor numbers and therefore leads to a more productive search of target destinations throughout the domain (Müller et al. 2010). It is worth noting that the method we develop in Sect. 2.1 extends to cases where slow diffusive transport rather than pausing is observed in the unbound state (see Sect. 4 for another example with a diffusive state). As expected, when the cargo has an intrinsic diffusion coefficient that is nonzero, the effective velocity of the cargo does not change; however, the effective diffusivity is consistently larger than in the case where the unbound cargo is fully stationary (results not shown). Distinct plus and minus motors. 
When considering dynein as the minus-end-directed motor in the bidirectional transport model, we use the kinetic parameters estimated to fit Drosophila lipid droplet transport in Müller et al. (2008) (see Table 1 in "Appendix"). Figure 4c shows that the cargo is predicted to move in the forward (kinesin-driven) direction with a positive effective velocity. We again observe increased transport efficiency for larger numbers of motors. With increasing stall force, the velocity of individual runs in each state increases, and therefore the effective velocity increases and then plateaus. This asymmetric motor case also results in an effective diffusivity that decreases past a small stall force and then stabilizes (see Fig. 4d). Since the kinesin motor dominates the dynamics, there are fewer excursions backwards than in the case of identical motors, so that the effective diffusivity is an order of magnitude smaller. Larger teams of motors regularize the dynamics and display decreased effective diffusivity. We remark that an asymmetric tug-of-war may even occur in motor interactions of a single type, as recently observed in force-gliding assay experiments of kinesins moving microtubule cargoes in Tjioe et al. (2019). The analysis proposed here could be applied to models accounting for switching between different behaviors (states) of kinesin motors, such as "driving" kinesins pulling on a microtubule and "resisting" kinesins holding it back (Tjioe et al. 2019).

Fig. 5: Effective velocity (a, b) and expected run length (c, d) of cargo driven by maximum \(N_1\) forward (kinesin-1) and maximum \(N_2\) backward (dynein) motor proteins \((N_1,N_2)\) as a function of the reattachment factor \(\rho \). Panels (a, c) use conventional dynein motor kinetics as in Müller et al. (2008, 2010), while panels (b, d) use dynein–dynactin–BicD2 (DDB) complex parameters as in Ohashi et al. (2019); see Table 2 in "Appendix" for all parameters.
The run lengths in panels b and d are plotted on a log–log scale to allow for visualization of differences between the models considered. The definitions of effective velocity and run length used are provided in the text.

Reattachment in Models of Cargo Transport

In vitro experiments have suggested that binding rates of molecular motors at specific locations may be regulated by the concentration of the same or opposite-directed motors (Hancock 2014), as well as by the availability of microtubule filaments. To test the impact of reattachment kinetics in the standard transport models of Müller et al. (2008, 2010), we modify the binding rate in (18) to account for a higher likelihood of reattachment when a motor (of either type) is already attached to the microtubule: $$\begin{aligned} \pi _+(n_+,n_-)&= {\left\{ \begin{array}{ll} N_+\pi _{0+}, \quad \mathrm {\, \, if \, \, n_++n_-=0} \,,\\ (N_+-n_+)\rho \pi _{0+}, \quad \mathrm {\, \, \, \, else} . \end{array}\right. } \end{aligned}$$ Here \(\rho >0\) denotes the reattachment factor, and an equivalent expression holds for the binding rate of minus motors \(\pi _-(n_+,n_-) \). The value \(\rho =1\) corresponds to the binding kinetics in the previous sections, and \(\rho >1\) denotes an increased reattachment likelihood when other motors are attached. Figure 5 illustrates the effective velocity (panels a, b) and the expected cargo run length (Eq. (32) for \(\varDelta X\), panels c, d) for values of \(\rho \) ranging from 1 to 50, in the context of models labeled \((N_1,N_2)\) with transport driven by maximum \(N_1\) forward (kinesin-1) motors and \(N_2\) backward (dynein) motors. Here we report the overall effective velocity of the cargo according to the second definition in Sect. 5.1, which includes both attached and detached cargo states in the calculation.
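The modified binding rate displayed above can be written directly as a function of the state; the function name and the parameter values below are illustrative.

```python
# Plus-motor binding rate with reattachment factor rho, as displayed above
# (function name and numeric values are illustrative):
def pi_plus(n_plus, n_minus, N_plus, pi0, rho):
    if n_plus + n_minus == 0:
        return N_plus * pi0               # fully detached cargo: baseline rate
    return (N_plus - n_plus) * rho * pi0  # some motor bound: enhanced rate

N_plus, pi0, rho = 2, 5.0, 10.0
```

With \(\rho = 1\) this reduces to the standard rate \((N_+-n_+)\pi_{0+}\) of the previous sections, and a bound motor of either type (e.g., \(n_+ = 0\), \(n_- = 1\)) already triggers the enhanced reattachment rate.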
In addition, the mean run length is calculated as the mean total displacement over a cycle starting with all motors detached until the return to a completely detached state, namely \({\mathbb {E}}(\varDelta X)\) in Eq. (32). Note that the base state of complete detachment makes no contribution to the mean displacement. Classically in tug-of-war modeling, dynein has been viewed as a "weaker partner" than kinesin family motors. In this parameter regime (Müller et al. 2008, 2010), dynein has both a smaller stall force and a smaller critical detachment force than kinesin-1. As a result, when equal numbers of kinesin-1 and dynein are simultaneously attached, kinesin-1 dominates transport. However, it has recently been shown that it might not be realistic to consider dynein in the absence of its helper proteins, particularly dynactin and BicD2. Together, these form a complex referred to as DDB, and the associated parameter values (Ohashi et al. 2019) are much more "competitive" with kinesin-1 in a tug-of-war scenario (see Table 2 in "Appendix"). In Fig. 5, we display the effective velocity and expected run length for kinesin-1 versus dynein (panels a, c) and kinesin-1 versus DDB (panels b, d) dynamics. In Fig. 5a, the effective velocity of cargo driven by teams of motors approaches the effective speed predicted for kinesin-only motor teams (models (1, 0) and (2, 0)) for small values of \(\rho \), but then decreases as \(\rho \) becomes larger for conventional dynein motility. As observed in recent studies, activated dynein competes more efficiently with kinesin, and therefore the teams of opposite-directed motors are consistently slower than teams consisting of only the forward kinesin motor protein in Fig. 5b (Ohashi et al. 2019). The expected run lengths in Fig. 5c, d illustrate that teams of multiple motors are characterized by significantly increased processivity on microtubules as the reattachment factor becomes larger.
When considering conventional dynein, the difference in processive cargo motion between the cooperative and tug-of-war models is only observed at large values of the reattachment constant \(\rho \) (\(>10\), see Fig. 5c). This is because the backward motor (conventional dynein) in the Müller et al. (2008) model is weak, with a small detachment force, so that overcoming the large dynein unbinding rate requires large values of the reattachment factor. On the other hand, activated dynein in the DDB complex is a more equal competitor to kinesin, with predictions of the expected run length in Fig. 5d confirming the experimental observations of larger unloaded run lengths in Ohashi et al. (2019).

Fig. 6: Comparison of effective diffusivity estimates for parallel microtubules driven bidirectionally by kinesin motors as a function of the scaled load sensitivity \(\gamma \). a Stochastic simulations from Allard et al. (2019) are marked with blue stars, the first passage time approximation in Allard et al. (2019) with red circles, and the renewal reward calculation with yellow triangles. Following Allard et al. (2019), we only allow states with i kinesin motors moving forward and \(K-i\) kinesin motors moving backward, with \(0\le i \le K\) and \(K=35\) (see Table 3 in "Appendix" for all parameters). The vertical axis is plotted on a log scale to allow for visualization of differences between effective diffusivity estimated from simulations and analytical approximations. b Percent error of the first passage approximation (red circles) and of the renewal reward calculation (yellow triangles) with respect to the simulation results in Allard et al. (2019).

Microtubule Sliding Model

As a final example of the applicability of our method, we consider a recent investigation into microtubule motility and sliding by Allard et al. (2019).
The authors consider a continuous-time Markov chain model of the interaction of two parallel microtubules, cross-linked and moved by multiple identical kinesin motors. Depending on which microtubule the motor heads are attached to, they push the microtubule pair apart in one of the two directions (one of which is arbitrarily assigned to be "positive"). The model assumes that motor attachment to microtubules occurs quickly relative to detachment, allowing a reduced number of dynamic states with microtubules driven by i motors pushing in the positive direction and \(K-i\) motors pushing in the negative direction (where K is the maximal number of motors that fit in the overlap region between the two parallel microtubules). The detachment rates are therefore given by $$\begin{aligned} \kappa _i^+&= (K-i)\kappa _0 \exp {\left( \gamma \frac{i}{K}\right) } \,, \nonumber \\ \kappa _i^-&= i\kappa _0 \exp {\left( \gamma \frac{K-i}{K}\right) } \,, \end{aligned}$$ where \(\kappa _i^+\) is the rate at which a motor pulling in the negative direction is replaced by one pulling in the positive direction, \(\kappa _i^-\) is the rate at which a motor pulling in the positive direction is replaced by one pulling in the negative direction, \(\kappa _0\) is a force-free transition rate, and \(\gamma \) is a dimensionless load sensitivity defined as twice the stall force divided by the detachment force scale (Allard et al. 2019). The relative velocity of the parallel microtubules in each state is given by $$\begin{aligned} \varDelta v_i&= V_m \frac{2i-K}{K} \,, \end{aligned}$$ where \(V_m\) is the speed of a single motor (Allard et al. 2019). A main point of this study is that parallel microtubules may slide bidirectionally with respect to each other, with zero mean velocity due to symmetry; thus, the long-term microtubule transport is characterized by diffusive behavior. The effective diffusivity of the microtubule pair driven by a total of 35 motors is measured in Allard et al.
(2019) through fitting the slope of the mean squared displacement in stochastic simulations at long time (stars in Fig. 6a) and comparing to a theoretical approximation in terms of a first passage time problem (open circles in Fig. 6a). Our method based on renewal reward theory yields predictions of the effective diffusivity that are closer to the estimates derived from large-time stochastic simulations (marked with triangles in Fig. 6a). This is further illustrated in Fig. 6b, which shows the percentage error of the approximations with respect to the available simulation estimates in Allard et al. (2019). To make the relationship between effective diffusivity and load sensitivity clearer, we illustrate results for many intermediate values. Our proposed analytical framework also facilitates subsequent systematic asymptotic approximations to study the dependence on underlying biophysical parameters.

In this work, we consider examples from the intracellular transport literature where particles undergo switching dynamics. In particular, we are interested in determining the effective velocity and diffusivity as well as the expected run length of these particles as they switch between biophysical behaviors such as diffusion, active transport, and stationary states. We propose a method that is based on defining the underlying Markov chain of state switches and the independent cycles of the dynamics marked by returns to a chosen base state. Emphasizing the cyclic structure of the behavior allows us to treat the time durations and spatial displacements of particles in these regenerative cycles as the cycle durations and rewards in a renewal reward process. Through calculation of the statistics of cycle time and displacement, this robust framework provides a rigorous means to study how the asymptotic behavior of switching systems depends on model parameters.
We have restricted our considerations to the case where the switching dynamics and transport coefficients within each state have no spatial dependence. We expect, just as in QSS and homogenization methods, that the regenerative approach will also apply when the dependence between the switching and transport behavior is slowly varying in space, now giving an effective velocity and diffusivity that depend on space over the same large scale. In QSS, this requires a scale separation in which switching kinetics are fast relative to transport over spatial variations; see, for example, Newby and Bressloff (2010a) and Bressloff and Newby (2013). Homogenization will also work under these conditions, as well as for a broader class of models in which the transport coefficients within each state also depend on smaller spatial scales, where its results would depart from those of QSS. Both of these PDE-based approaches produce effective Fokker–Planck equations that give an unambiguous interpretation of spatially dependent diffusivity in the context of the equivalent stochastic description. We expect the regenerative cycle approach developed here should also apply in a generalization of the context in which QSS works, namely, when the spatial scale of transport over a regeneration cycle is small compared to the spatial variation in the switching kinetics and/or transport coefficients within each state. The sense in which to interpret the resulting spatially dependent diffusivity would presumably be the same as for QSS, but this would require a detailed derivation to investigate. The extension of our approach to compute effective transport when the switching and/or state-dependent transport coefficients depend on small as well as large spatial scales is more challenging because regeneration of the stochastic process would require not only a return to the base state but also to the same values of the transport coefficients.
Similarly, while the presence of spatial boundaries can be handled in a QSS framework via a boundary layer analysis of the forward Kolmogorov equation (Zmurchok et al. 2017), spatial boundaries would be a rather difficult challenge for the regeneration cycle analysis because the hitting of the boundary during a cycle would at least ostensibly disrupt its generic statistical properties. In particular, our extension to switching dynamics that are not fully Markovian moves the analysis outside of the framework of partial differential equations, so singular perturbation analysis as in QSS and homogenization cannot be so flexibly applied. We have emphasized the application of the regenerative cycle methodology to effective transport coefficients, particularly for canonical tug-of-war models describing the transport of cargo by teams of molecular motor proteins. Previous investigations of the effective transport of cargo in these multi-state models have considered individual trajectories of the dynamics, computed using Monte Carlo simulations with the Gillespie algorithm (Müller et al. 2008; Kunwar and Mogilner 2010; Müller et al. 2010). These studies determine the effective velocity of the particles analytically by calculating the distribution of the number of bound motors from the stationary solution of the master equation (Klumpp and Lipowsky 2005). However, determining the effective diffusivity in these studies relied on numerical simulations. Our method proposes a faster and explicit investigation of the impact of model parameters on the effective diffusivity. For instance, Fig. 4 (top right) captures the different behavior of identical motor teams involved in tug-of-war dynamics when the ratio of stall to detachment force is small (weak motors with small effective diffusivity) versus large (strong motors with increasing effective diffusivity). This observation is consistent with simulations in Müller et al. 
(2008), where the large force ratios correspond to a dynamic instability in which only one motor type is primarily bound at the end of an unbinding cascade (Müller et al. 2008, 2010). Multiple experiments summarized in Hancock (2014) have shown that inhibition of one motor type reduces transport in both directions in several systems, suggesting a "paradox of co-dependence" in bidirectional cargo transport. Several mechanisms accounting for this paradox have been proposed, including the microtubule tethering mechanism recently explored in Smith and McKinley (2018). The hypothesis for this mechanism is that motors switch between directed active transport and a weak binding or diffusive state. The recent experimental study in Feng et al. (2018) suggests that teams of kinesin-1 motors coordinate transport using help from the dynamic tethering of kinesin-2 motors. This work shows that when kinesin-1 motors detach, tethering of kinesin-2 to the microtubule ensures that cargo stays near the filament to allow for subsequent reattachment (Feng et al. 2018). Our approach allows us to assess the dependence of the dynamics on a potentially increased reattachment rate for cargo that is already bound to the filament by at least one motor (Fig. 5). Implementing this change in the standard binding models of Müller et al. (2008, 2010) and Kunwar and Mogilner (2010) for both kinesin-1/dynein and kinesin-1/DDB dynamics shows a decrease in overall effective velocity, but very large increases in potential run length. This would be consistent with the paradox in that experimentalists would observe more kinesin-directed activity when the reattachment rate is sufficiently high. We have made MATLAB and Mathematica sample code available for the calculation of effective velocity, diffusivity, and run lengths in a cooperative model of one-directional transport (as discussed in Sect. 5.1) and a tug-of-war model of bidirectional transport (as discussed in Sect. 5.2) (GitHub 2019).
The code for these examples can be readily adapted to allow for a general probability transition matrix for the state dynamics, together with the probability distributions for the times and displacement in each state, to extend to other models of the processive movement of molecular motors and cargo transport. As the theory of how motors coordinate to transport cargo continues to develop at a rapid pace, the analysis developed here will provide a tool for new models accounting for tethered and weakly binding states with stochastic transitions whose rates do not depend on spatial position. The framework also extends to complex models with diffusion and binding reactions in higher dimensions, where the moments of times and spatial displacements in each state may be estimated using simulation and the calculation of effective transport quantities reduces to calculation of these moments. Allard J, Doumic M, Mogilner A, Oelz D (2019) Bidirectional sliding of two parallel microtubules generated by multiple identical motors. J Math Biol 79:1–24 Bergman JP, Bovyn MJ, Doval FF, Sharma A, Gudheti MV, Gross SP, Allard JF, Vershinin MD (2018) Cargo navigation across 3d microtubule intersections. Proc Natl Acad Sci 115(3):537–542 Bhat UN, Miller GK (2002) Elements of applied stochastic processes, vol 3. Wiley-Interscience, Hoboken Bressloff PC, Newby JM (2011) Quasi-steady-state analysis of two-dimensional random intermittent search processes. Phys Rev E 83(6):061139 Bressloff PC, Newby JM (2013) Stochastic models of intracellular transport. Rev Mod Phys 85(1):135 Bressloff PC, Xu B (2015) Stochastic active-transport model of cell polarization. SIAM J Appl Math 75(2):652–678 Brooks EA (1999) Probabilistic methods for a linear reaction-hyperbolic system with constant coefficients. Ann Appl Probab 9:719–731 Ciocanel MV (2017) Modeling intracellular transport during messenger RNA localization in Xenopus oocytes. Ph.D. 
thesis, Brown University Ciocanel V, Kreiling JA, Gagnon JA, Mowry KL, Sandstede B (2017) Analysis of active transport by fluorescence recovery after photobleaching. Biophys J 112(8):1714–1725 Ciocanel MV, Sandstede B, Jeschonek SP, Mowry KL (2018) Modeling microtubule-based transport and anchoring of mRNA. SIAM J Appl Dyn Syst 17(4):2855–2881 Cioranescu D, Donato P (1999) An introduction to homogenization. Oxford University Press, New York Cox DR (1962) Renewal theory. Methuen, London Cox SM, Matthews PC (2002) Exponential time differencing for stiff systems. J Comput Phys 176(2):430–455 Dobrow RP (2016) Introduction to stochastic processes with R. Wiley, New York Encalada SE, Szpankowski L, Xia Ch, Goldstein LS (2011) Stable kinesin and dynein assemblies drive the axonal transport of mammalian prion protein vesicles. Cell 144(4):551–565 Feng Q, Mickolajczyk KJ, Chen GY, Hancock WO (2018) Motor reattachment kinetics play a dominant role in multimotor-driven cargo transport. Biophys J 114(2):400–409 Gagnon JA, Kreiling JA, Powrie EA, Wood TR, Mowry KL (2013) Directional transport is mediated by a dynein-dependent step in an RNA localization pathway. PLOS Biol 11(4):e1001551 GitHub (2019) Sample Matlab and Mathematica code for effective velocity and diffusivity calculation. https://github.com/scottmckinley/stochastics-lab/tree/master/effective-transport. Accessed 10 Oct 2019 Hancock WO (2014) Bidirectional cargo transport: moving beyond tug of war. Nat Rev Mol Cell Biol 15(9):615 Hughes J, Hancock WO, Fricks J (2011) A matrix computational approach to kinesin neck linker extension. J Theor Biol 269(1):181–194 Hughes J, Hancock WO, Fricks J (2012) Kinesins with extended neck linkers: a chemomechanical model for variable-length stepping. Bull Math Biol 74(5):1066–1097 Hunter JJ (2008) Variances of first passage times in a Markov chain with applications to mixing times.
Linear Algebra Appl 429(5–6):1135–1162 Jung P, Brown A (2009) Modeling the slowing of neurofilament transport along the mouse sciatic nerve. Phys Biol 6(4):046002 Klumpp S, Lipowsky R (2005) Cooperative cargo transport by several molecular motors. Proc Natl Acad Sci USA 102(48):17284–17289 Kramer PR, Latorre JC, Khan AA (2010) Two coarse-graining studies of stochastic models in molecular biology. Commun Math Sci 8(2):481–517 Krishnan A, Epureanu BI (2011) Renewal-reward process formulation of motor protein dynamics. Bull Math Biol 73(10):2452–2482 Kubo R (1963) Stochastic Liouville equations. J Math Phys 4(2):174–183 Kunwar A, Mogilner A (2010) Robust transport by multiple motors with nonlinear force-velocity relations and stochastic load sharing. Phys Biol 7(1):016012 Kunwar A, Tripathy SK, Xu J, Mattson MK, Anand P, Sigua R, Vershinin M, McKenney RJ, Clare CY, Mogilner A et al (2011) Mechanical stochastic tug-of-war models cannot explain bidirectional lipid-droplet transport. Proc Natl Acad Sci 108(47):18960–18965 Lawler GF (1995) Introduction to stochastic processes. Chapman & Hall, New York Li Y, Brown A, Jung P (2014) Deciphering the axonal transport kinetics of neurofilaments using the fluorescence photo-activation pulse-escape method. BMC Neurosci 15(Suppl 1):P132 McKinley SA, Athreya A, Fricks J, Kramer PR (2012) Asymptotic analysis of microtubule-based transport by multiple identical molecular motors. J Theor Biol 305:54–69 Miles CE, Lawley SD, Keener JP (2018) Analysis of nonprocessive molecular motor transport using renewal reward theory. SIAM J Appl Math 78(5):2511–2532 Müller MJ, Klumpp S, Lipowsky R (2008) Tug-of-war as a cooperative mechanism for bidirectional cargo transport by molecular motors. Proc Natl Acad Sci 105(12):4609–4614 Müller MJ, Klumpp S, Lipowsky R (2010) Bidirectional transport by molecular motors: enhanced processivity and response to external forces. 
Biophys J 98(11):2610–2618 Neumann S, Chassefeyre R, Campbell GE, Encalada SE (2017) Kymoanalyzer: a software tool for the quantitative analysis of intracellular transport in neurons. Traffic 18(1):71–88 Newby J, Bressloff PC (2010a) Local synaptic signaling enhances the stochastic transport of motor-driven cargo in neurons. Phys Biol 7(3):036004 Newby J, Bressloff PC (2010b) Random intermittent search and the tug-of-war model of motor-driven transport. J Stat Mech Theory Exp 04:P04014 Newby JM, Bressloff PC (2010c) Quasi-steady state reduction of molecular motor-based models of directed intermittent search. Bull Math Biol 72(7):1840–1866 Newby J, Schiller JL, Wessler T, Edelstein J, Forest MG, Lai SK (2017) A blueprint for robust crosslinking of mobile species in biogels with weakly adhesive molecular anchors. Nat Commun 8(1):1–10 Ohashi KG, Han L, Mentley B, Wang J, Fricks J, Hancock WO (2019) Load-dependent detachment kinetics plays a key role in bidirectional cargo transport by kinesin and dynein. Traffic 20(4):284–294 Palacios JL (2009) On the moments of hitting times for random walks on trees. J Probab Stat 2009:1–4 Pavliotis GA (2005) A multiscale approach to Brownian motors. Phys Lett A 344(5):331–345 Pavliotis G, Stuart A (2008) Multiscale methods: averaging and homogenization. Springer, Berlin Popovic L, McKinley SA, Reed MC (2011) A stochastic compartmental model for fast axonal transport. SIAM J Appl Math 71(4):1531–1556 Reed MC, Venakides S, Blum JJ (1990) Approximate traveling waves in linear reaction-hyperbolic equations. SIAM J Appl Math 50(1):167–180 Reimann P, Van den Broeck C, Linke H, Hänggi P, Rubi J, Pérez-Madrid A (2001) Giant acceleration of free diffusion by use of tilted periodic potentials. Phys Rev Lett 87(1):010602 Reimann P, Van den Broeck C, Linke H, Hänggi P, Rubi J, Pérez-Madrid A (2002) Diffusion in tilted periodic potentials: enhancement, universality, and scaling.
Phys Rev E 65(3):031104 Resnick S (1992) Adventures in stochastic processes. Birkhäuser Boston Inc., Boston Serfozo R (2009) Basics of applied stochastic processes. Springer, Berlin Shtylla B, Keener JP (2015) Mathematical modeling of bacterial track-altering motors: track cleaving through burnt-bridge ratchets. Phys Rev E 91(4):042711 Smith JD, McKinley SA (2018) Assessing the impact of electrostatic drag on processive molecular motor transport. Bull Math Biol 80:1–36 Tjioe M, Shukla S, Vaidya R, Troitskaia A, Bookwalter CS, Trybus KM, Chemla YR, Selvin PR (2019) Multiple kinesins induce tension for smooth cargo transport. eLife 8:e50974 Trong PK, Doerflinger H, Dunkel J, St Johnston D, Goldstein RE (2015) Cortical microtubule nucleation can organise the cytoskeleton of Drosophila oocytes to define the anteroposterior axis. eLife 4:e06088 Wang H, Peskin CS, Elston TC (2003) A robust numerical algorithm for studying biomolecular transport processes. J Theor Biol 221(4):491–511 Whitt W (2002) Stochastic-process limits: an introduction to stochastic-process limits and their application to queues. Springer, Berlin Yin G, Zhu C (2010) Hybrid switching diffusions, stochastic modelling and applied math, vol 63. Springer, New York Zimyanin VL, Belaya K, Pecreaux J, Gilchrist MJ, Clark A, Davis I, St Johnston D (2008) In vivo imaging of oskar mRNA transport reveals the mechanism of posterior localization. Cell 134(5):843–853 Zmurchok C, Small T, Ward MJ, Edelstein-Keshet L (2017) Application of quasi-steady-state methods to nonlinear models of intracellular transport by molecular motors. Bull Math Biol 79(9):1923–1978. https://doi.org/10.1007/s11538-017-0314-1 Department of Mathematics and Biology, Duke University, Durham, USA Maria-Veronica Ciocanel School of Mathematical and Statistical Sciences, Arizona State University, Tempe, USA John Fricks Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, USA Peter R. 
Kramer Department of Mathematics, Tulane University, New Orleans, USA Scott A. McKinley Correspondence to Maria-Veronica Ciocanel. MVC was supported by The Ohio State University President's Postdoctoral Scholars Program and by the Mathematical Biosciences Institute at The Ohio State University through NSF DMS-1440386. JF, PRK, and SAM are supported by NIH R01GM122082-01. In this appendix, we provide tables with parameter values from the models considered in Sect. 5. Table 1: Parameters for the plus-end motor (conventional kinesin-1) and minus-end motor (cytoplasmic dynein) in Example 4 (Sect. 2.3) and in Sect. 5.2, from Klumpp and Lipowsky (2005) and Müller et al. (2008). Table 2: Parameters for the DDB complex from Ohashi et al. (2019) in the reattachment model in Sect. 5.3. Table 3: Parameters from Allard et al. (2019) for the microtubule sliding model in Sect. 5.4. Ciocanel, MV., Fricks, J., Kramer, P.R. et al. Renewal Reward Perspective on Linear Switching Diffusion Systems in Models of Intracellular Transport. Bull Math Biol 82, 126 (2020). https://doi.org/10.1007/s11538-020-00797-w Keywords: Renewal reward theory; Intracellular transport; Processive motor transport. Mathematics Subject Classification: 60J20
Erbium dopants in nanophotonic silicon waveguides Lorenz Weiss,† Andreas Gritsch,† Benjamin Merkel, and Andreas Reiserer* Quantum Networks Group, Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Straße 1, D-85748 Garching, Germany and Munich Center for Quantum Science and Technology (MCQST), Ludwig-Maximilians-Universität München, Fakultät für Physik, Schellingstraße 4, D-80799 München, Germany †These authors contributed equally to this work. *Corresponding author: [email protected] Lorenz Weiss, Andreas Gritsch, Benjamin Merkel, and Andreas Reiserer, "Erbium dopants in nanophotonic silicon waveguides," Optica 8, 40-41 (2021) We perform resonant spectroscopy of erbium implanted into nanophotonic silicon waveguides, finding 1 GHz inhomogeneous broadening and homogeneous linewidths below 0.1 GHz. Our study thus introduces a promising materials platform for on-chip quantum information processing. Individual dopants and atom-like defects in solids are promising platforms for quantum technology. Among all optically active dopants studied to date, either in silicon [1,2] or in other crystals [3], erbium stands out because it exhibits a coherent transition within the main wavelength band of optical telecommunication.
Here, high material transparency ensures compatibility with the mature platform of silicon nanophotonics, and the minimal loss of optical fibers might enable quantum networks that span global distances if sufficient coherence is achieved. In silicate crystals, the ground state coherence of erbium can exceed 1 s [4], and that of the optical transition is among the best in any solid. While this has enabled first quantum information experiments with bulk crystals, the integration into nanophotonic structures is highly desirable. This would not only allow for robust and cost-effective fabrication of devices, but can also enhance the interaction strength between the dopants and single photons, thus overcoming the notoriously small dipolar transition strength of erbium. Recently, silicon nanostructures on top of yttrium orthosilicate (YSO) [5] have enabled the control of single dopants [6]. However, this and related materials are incompatible with standard complementary metal oxide semiconductor (CMOS) processing and have abundant isotopes with nuclear spins that deteriorate the coherence. In contrast, silicon is an almost spin-free material, and isotopic purification largely eliminates the nuclear spin bath and facilitates ultra-narrow optical linewidths [2]. This makes the direct integration of erbium into silicon nanophotonic structures, as studied in this work, highly promising. Its low solubility prevents sufficient erbium doping during growth from the melt [7]. Therefore, optical characterization requires epitaxial growth or ion implantation. So far, the resulting two-dimensional samples have precluded resonant spectroscopy. We overcome this limitation by integrating erbium dopants into 0.4 mm long nanophotonic wire waveguides [Fig. 1(a)]. With broadband edge coupling, we detect around 2% of the emitted fluorescence. 
Even in our weakly doped (${\sim}{10^{17}} \;{{\rm{cm}}^{- 3}}$) samples, annealed for 10 min at ${10}^{3} \;{\rm{K}}$, this enables resonant fluorescence spectroscopy using 1 ms long excitation pulses. During each pulse, a ${\sim}100 \;{\rm{MHz}}$ frequency sweep avoids bleaching by persistent spectral hole burning to long-lived spin states. In contrast to previous measurements in silicon [1,7,8], resonant fluorescence spectroscopy is sensitive to all dopants that decay radiatively. We thus observe a completely different spectrum [Fig. 1(b)]. Instead of a terahertz-wide distribution, we find only a few sharp Lorentzian peaks, as expected for crystals with low defect concentration. Their inhomogeneous linewidth ranges from 1 to 4 GHz FWHM [Fig. 1(c), blue], and is further reduced in a magnetic field of 0.2 T, in which the peaks split into up to 12 lines depending on the site. The observed width is similar to that in YSO when coupled to nanostructures [6], and its narrowness indicates dopant integration at well-defined lattice sites while preserving the crystalline structure. This is a key step towards quantum-controlled applications of erbium-doped silicon. Fig. 1. (a) Experimental setup. A small, $220 \times 700 \;{\rm{nm}}$ silicon waveguide (gray) is terminated by a broadband photonic crystal mirror, fabricated on a commercial silicon-on-insulator chip (blue). Efficient broadband coupling is achieved with a tapered fiber touching the waveguide (blue). The bottom inset shows a cross section. Erbium (black arrows) has been implanted into the fundamental guided mode (red). The top inset shows a sketch of the setup. A tunable laser is pulsed via two acousto-optical modulators (AOM) before exciting the dopants. The emitted fluorescence is detected using a single-photon detector (SPD). (b) Fluorescence measurement. We observe nine narrow fluorescence peaks in the telecom C-band, all of which are well fit by Lorentzian curves (inset). (c) Optical properties.
Using pulsed resonant spectroscopy, we measured the fluorescence lifetime (red), homogeneous linewidth (black), and inhomogeneous broadening (blue) with (dots) and without (crosses) magnetic field for all observed sites. The presence of several distinct sites might be caused by erbium forming complexes with other impurities, such as oxygen [7]. By changing the annealing procedure, using purer starting material, or co-implanting other dopants, we expect that the number, position, and relative height of the resonances can be engineered. As in other host crystals, the remaining inhomogeneous broadening of the lines is attributed to random strain fields in the waveguide. We expect that—similar to other emitters in silicon [2]—the broadening can be further reduced in isotopically purified samples, opening exciting prospects for ultra-narrow optical lines in nanophotonic structures. After characterizing the linewidth, we measure the fluorescence lifetime after the excitation laser is switched off. We observe an exponential decay of ${\sim}1 \;{\rm{ms}}$ on every peak [Fig. 1(c), red]. Compared to other erbium host materials [4–6], this constitutes a reduction by an order of magnitude that holds promise for an enhanced light–matter interaction strength. Remarkably, magnetic dipole transitions can be held responsible for the major part of this decay, as their expected lifetime in silicon is around 1.5 ms [9] and thus close to the measured value. While photonic crystal waveguides and cavities may further shorten the lifetime, using erbium-doped silicon in advanced quantum networking protocols will still require small homogeneous and spectral diffusion linewidths, which we investigate next. Our measurement technique is based on transient spectral hole burning. Because of saturation, the fluorescence signal $S$ increases nonlinearly with the laser intensity $I$ at a single frequency, $S \propto \sqrt I$. 
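A quick numerical illustration of this square-root saturation (a sketch of the argument, not the authors' analysis code): three spectrally resolved laser fields of intensity $I$ each contribute an independent saturated signal and so give three times the single-field signal, whereas fields that overlap within the relevant linewidth act like a single field of intensity $3I$ and give only $\sqrt{3}$ times the single-field signal.

```python
import math

def fluorescence(intensity):
    # Saturated response: signal grows as the square root of intensity
    return math.sqrt(intensity)

I = 1.0
single = fluorescence(I)            # one laser field of intensity I
resolved = 3 * fluorescence(I)      # three well-separated fields of intensity I
overlapping = fluorescence(3 * I)   # overlapping fields act as one field of 3I
# resolved / single = 3, while overlapping / single = sqrt(3) ~ 1.73
```

This factor-of-$\sqrt{3}$ contrast between the resolved and overlapping cases is what makes the modulation-frequency scan described next sensitive to the homogeneous and spectral diffusion linewidths.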
We therefore apply three laser fields of about the same intensity $I$, generated at equidistant frequency separation within the inhomogeneous linewidth by electro-optical modulation of the excitation laser. If the separation of the three lines is larger than both the homogeneous and the spectral diffusion linewidth, the signal will increase threefold compared to that of a single field. For small detunings, however, the signal will increase by only ${\sim}\sqrt 3$. Thus, by scanning the modulation frequency, we can upper-bound the homogeneous and spectral diffusion linewidth on the timescale of the radiative lifetime. Again, we preclude detrimental effects of persistent spectral hole burning by slightly changing the laser frequency between repetitions of the experiment. Inverted Lorentzian fits to the data give a spectral diffusion linewidth between 45 and 110 MHz for the individual erbium sites [Fig. 1(c), black]. The value observed in nanophotonic silicon is thus about four times larger than in nanocavity-coupled YSO [6]. We can exclude dipolar interactions with other magnetic moments in the crystal as a broadening mechanism, as both our dopant concentration and the interaction with the nuclear spin bath is too small. Thus, we attribute the measured spectral diffusion to the proximity of interfaces, a common issue for all solid-state quantum emitters [3] including erbium in other hosts [5,6]. At the used intensity, two-photon absorption should generate about ${10^4}$ free carriers in the waveguide during each excitation pulse. This can change the state of charge traps caused by crystalline defects and dangling bonds at the surface, leading to fluctuating electric fields at the position of the dopant and thus a broadening of the line via the Stark effect. In our samples, the maximum distance of the dopants to the closest interface is around 100 nm. 
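The scale of this effect can be checked with a back-of-envelope Coulomb estimate. The sketch below is assumption-laden: it takes a single elementary charge at the quoted ~100 nm dopant-to-interface distance, uses the unscreened vacuum Coulomb field (dielectric screening by silicon, with relative permittivity around 11.7, would reduce the field accordingly), and assumes a Stark coefficient of order 100 Hz·m/V as quoted in the text.

```python
import math

# Order-of-magnitude Stark shift from one fluctuating elementary charge.
# Assumptions: vacuum Coulomb field (no dielectric screening) and a
# 100 Hz*m/V Stark coefficient; both are scale estimates, not fit values.
e = 1.602e-19          # elementary charge (C)
eps0 = 8.854e-12       # vacuum permittivity (F/m)
r = 100e-9             # charge-to-dopant distance (m)
stark_coeff = 100.0    # assumed erbium Stark coefficient (Hz * m / V)

field = e / (4.0 * math.pi * eps0 * r ** 2)  # electric field (V/m)
shift = stark_coeff * field                  # frequency shift (Hz)
# shift comes out at order 1e7 Hz, i.e. tens of megahertz
```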
With typical erbium Stark coefficients of $100 \;{\rm{Hz}} \cdot {\rm{m}}/{\rm{V}}$, the expected frequency shift for a single fluctuating charge trap at this distance is tens of megahertz, matching our observations. In future devices, spectral diffusion may therefore be reduced by using waveguides of larger dimension and surface termination (e.g., by hydrogenation). In addition, stabilizing the state of charge traps by applying strong electric fields seems highly promising. In summary, we have introduced erbium-doped silicon nanophotonics as a novel materials platform for quantum technology. Well-established fabrication techniques should allow for the realization of robust, low-cost, and multiplexed quantum devices. Applications will profit from an increased optical depth in longer waveguides when using an optimized implantation procedure. Alternatively, the dopant interaction with light can be enhanced with resonators. In ultra-high-$Q$ photonic crystal cavities [10], we expect a Purcell enhancement by six orders of magnitude. This would not only bring the system into the strong-coupling regime of cavity quantum electrodynamics, but also shorten the radiative decay to the nanosecond range, such that the lifetime-limited linewidth is of the same order as the spectral diffusion linewidth we observed. Thus, we expect that erbium-doped silicon can be used as an optical interface of single spin qubits operating in the telecom C-band. This offers unique promise for cavity-based quantum networks and distributed quantum information processors based on a scalable platform. European Research Council (757772); Deutsche Forschungsgemeinschaft (EXC-2111-390814868); Daimler und Benz Stiftung. 1. C. Yin, M. Rancic, G. G. de Boo, N. Stavrias, J. C. McCallum, M. J. Sellars, and S. Rogge, Nature 497, 91 (2013). [CrossRef] 2. L. Bergeron, C. Chartrand, A. T. K. Kurkjian, K. J. Morse, H. Riemann, N. V. Abrosimov, P. Becker, H.-J. Pohl, M. L. W. Thewalt, and S. 
Simmons, PRX Quantum 1, 020301 (2020). [CrossRef] 3. D. D. Awschalom, R. Hanson, J. Wrachtrup, and B. B. Zhou, Nat. Photonics 12, 516 (2018). [CrossRef] 4. M. Rančić, M. P. Hedges, R. L. Ahlefeldt, and M. J. Sellars, Nat. Phys. 14, 50 (2018). [CrossRef] 5. E. Miyazono, I. Craiciu, A. Arbabi, T. Zhong, and A. Faraon, Opt. Express 25, 2863 (2017). [CrossRef] 6. M. Raha, S. Chen, C. M. Phenicie, S. Ourari, A. M. Dibos, and J. D. Thompson, Nat. Commun. 11, 1605 (2020). [CrossRef] 7. A. J. Kenyon, Semicond. Sci. Technol. 20, R65 (2005). [CrossRef] 8. H. Przybylinska, W. Jantsch, Y. Suprun-Belevitch, M. Stepikhova, L. Palmetshofer, G. Hendorfer, A. Kozanecki, R. J. Wilson, and B. J. Sealy, Phys. Rev. B 54, 2532 (1996). [CrossRef] 9. C. M. Dodson and R. Zia, Phys. Rev. B 86, 125102 (2012). [CrossRef] 10. T. Asano and S. Noda, Proc. IEEE 106, 2183 (2018). [CrossRef]
Linkage between cross-equatorial potential vorticity flux and surface air temperature over the mid–high latitudes of Eurasia during boreal spring Chen Sheng1,2, Guoxiong Wu1,2, Bian He (ORCID: orcid.org/0000-0002-7290-2201)1,2, Yimin Liu1,2 & Tingting Ma1 Climate Dynamics volume 59, pages 3247–3263 (2022) A Correction to this article was published on 27 September 2022 The source of potential vorticity (PV) for the global domain is located at the Earth's surface. PV in one hemisphere can exchange with the other through cross-equatorial PV flux (CEPVF). This study investigates the features of the climatic mean CEPVF, the connection of interannual CEPVF with surface thermal characteristics, and the associated mechanism. Results indicate that the process of positive (negative) PV carried by a northerly (southerly) wind leads to the climatologically overwhelming negative CEPVF over almost the entire equatorial cross-section, while the change of the zonal circulation over the equator is predominantly responsible for CEPVF variation. By introducing the concept of "PV circulation" (PVC), it is demonstrated that the interannual CEPVF over the equator is closely linked to the notable uniform anomalies of spring cold surface air temperature (SAT) over the mid–high latitudes of Eurasia by virtue of the PVC, the PV-θ mechanism, and the surface positive feedback. Further analysis reveals that equatorial sea surface temperature (SST) forcing, such as the El Niño–Southern Oscillation and tropical South Atlantic uniform SST, can directly drive anomalous CEPVF by changing the zonal circulation over the equator, thereby influencing SAT in the Northern Hemisphere.
All results indicate that the equilibrium linkage between CEPVF and extratropical SAT is mainly a manifestation of the response of extratropical SAT to tropical forcing by virtue of PVC, and that the perspective of PVC can provide a reasonably direct and simple connection of the circulation and climate between the tropics and the mid–high latitudes. Potential vorticity (PV) theory, with its mathematical elegance and completeness, has increasingly attracted the attention of dynamicists. Research on PV has deepened our understanding of various atmospheric phenomena, primarily because of its invertibility and conservation (e.g., Hoskins et al. 1985). Application of PV theory to synoptic meteorology has revealed the dynamic mechanisms of many complex weather events. Earlier related studies mostly focused on the redistribution of atmospheric interior PV (e.g., Hoskins et al. 1985), in which the impacts of PV redistribution on the general circulation (e.g., Hoskins et al. 2003; Hoskins 2015; Liu et al. 2007; Luo et al. 2018a, b; Ortega et al. 2018; Xie et al. 2020) and severe weather (e.g., Wu et al. 1995; Wu and Cai 1997; Ma et al. 2022; Zhang et al. 2021a) are of primary interest. Climate is the cumulative impact of weather over a certain period, e.g., month, season, year, decade or longer. Over long periods, energy generation and dissipation of the atmospheric system are prominent. Because the PV equation explicitly includes the impacts of diabatic heating and frictional dissipation on PV development, it is convenient and instructive to use the PV equation to study climate and its variations. With regard to climate, the features of the source and budget of atmospheric PV have evoked great interest. For example, Haynes and McIntyre (1987, 1990) introduced the concept of PV density or PV substance, and proposed the impermeability theorem of PV, which states that PV is conserved over a closed isentropic surface.
Based on the impermeability theorem, Hoskins (1991) proposed a three-fold division of the atmosphere: the Overworld, Middleworld, and Underworld. The "Overworld" is the region encompassed by isentropic surfaces that are everywhere above the tropopause. The "Middleworld" is the region with isentropic surfaces crossing the tropopause but not striking the Earth's surface. The "Underworld" is the region with isentropic surfaces intercepting the Earth's surface. He further illustrated that global atmospheric PV is changeable in the Underworld, but constant above the Underworld. Bretherton and Schär (1993) envisioned the impermeability theorem in terms of the so-called effective velocity, and proved that a particle of PV density or PV substance that moves with effective velocity always remains on the same isentropic surface. These findings imply that changes in globally integrated atmospheric PV depend solely on the PV flux on the Earth's surface (e.g., Sheng et al. 2021). Recently, a number of relevant data analyses regarding the source of the Earth's surface PV have been published (e.g., Ma et al. 2019, 2022; Sheng et al. 2021, 2022; Zhang et al. 2021b). The PV generated near the Earth's surface can be transported to the interior of the atmosphere, even from one hemisphere to the other. If the global atmospheric domain above the Underworld were covered by a lid formed by a potential temperature surface and divided into two hemispheric domains by a vertical boundary along the equator, then, because the PV flux cannot penetrate the upper lid, the hemispheric Earth's surface and the vertical cross section along the equator would become the two effective boundaries through which the penetrating PV flux determines the changes in hemispheric PV. Over the long term, the integrated surface PV flux should be compensated by the integrated cross-equatorial PV flux (CEPVF).
The CEPVF at the equatorial vertical cross section can therefore be considered as a monitor through which the behavior of the surface PV flux can be detected. Because surface PV flux is related to surface air temperature (SAT), diagnosing the variation of CEPVF might further help improve understanding of SAT variation. However, the features and climatological effects related to the integrated CEPVF have received little attention in the meteorological literature. The major objective of this study is to explore the features of the CEPVF and its connection to climate with focus on the SAT of the Northern Hemisphere (NH). SAT is a vital basic variable in atmospheric science. Variations in SAT have pronounced effects on agriculture, socioeconomic development, and societal activities (Chen and Wu 2018). Spring marks the transition from winter into summer, and it is the season for crops, vegetation recovery, and snowmelt (Chen et al. 2019). Any change in SAT over Eurasia during boreal spring could exert prominent influence on ecosystem recovery (Labat et al. 2004; Wang et al. 2011; Chen et al. 2019). In particular, SAT over Eurasia during boreal spring could connect atmospheric anomalies of the preceding winter and subsequent Asian summer monsoon activity (Ogi et al. 2003; Chen and Wu 2018) by changing the land–sea thermal contrast (Liu and Yanai 2001; D'Arrigo et al. 2006), in which the snow cover provides a memory effect (Ogi et al. 2003; Chen et al. 2016). Therefore, the linkage between CEPVF and SAT over the mid–high latitudes of Eurasia during boreal spring is a specific focus of this study. The remainder of the paper is organized as follows. Section 2 presents the data, method and theory. Section 3 presents the CEPVF climatology. Section 4 analyzes the interannual relationship between the integrated CEPVF and SAT over the mid–high latitudes of Eurasia during boreal spring and its possible mechanism. Section 5 examines the possible forcing factors of the CEPVF. 
Finally, the summary and discussion are provided in Sect. 6. Data, method, and theory This study uses monthly mean SAT data from the MERRA2 reanalysis product (Rienecker et al. 2011; Lucchesi 2012; https://disc.gsfc.nasa.gov/datasets?project=MERRA-2) and sea surface temperature (SST) data from the COBE SST dataset (Ishii et al. 2005; https://psl.noaa.gov/data/gridded/data.cobe.html). The averaged SST anomalies of the NINO34 index (defined over the region 5° S–5° N, 120°–170° W), which are used to represent the El Niño–Southern Oscillation (ENSO) condition, are obtained from the following web address: https://psl.noaa.gov/data/climateindices/list/. PV and PV flux, as defined in Eqs. (1) and (3), respectively, involve products of several variables, and therefore the contribution of transient processes might not be represented properly in monthly mean data. To address this problem, we use 3-hourly instantaneous data on pressure levels obtained from MERRA2 to calculate PV and PV flux. The variables include air temperature, zonal wind, and meridional wind. The research period of this study is 1980–2017. The horizontal resolution of all MERRA2 data is 1.25° × 1.25°. The horizontal resolution of the SST data is 1° × 1°. Climatic mean values calculated over December, January, and February (DJF), March, April, and May (MAM), June, July, and August (JJA), and September, October, and November (SON) are used to represent the conditions of boreal winter, spring, summer, and autumn, respectively. In addition to the common correlation, regression, and composite analyses, partial correlation is also used in this study to investigate the correlation between two variables while excluding the influence of a third variable.
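The partial-correlation computation just described is straightforward to implement from pairwise correlation coefficients. The following is a minimal sketch, assuming detrended NumPy arrays of equal length; the function name and variables are illustrative, not part of the original analysis:

```python
import numpy as np

def partial_corr(x1, x2, x3):
    """First-order partial correlation between x1 and x2 with the
    linear influence of x3 removed (Zar 2010)."""
    r12 = np.corrcoef(x1, x2)[0, 1]
    r13 = np.corrcoef(x1, x3)[0, 1]
    r23 = np.corrcoef(x2, x3)[0, 1]
    return (r12 - r13 * r23) / np.sqrt((1.0 - r13**2) * (1.0 - r23**2))
```

In the context of Sect. 5, for example, x1 could be the CEPVFI series, x2 an SST series, and x3 the NINO34 index, so that the ENSO signal is excluded from the correlation.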
The partial correlation is calculated as follows: $$r_{12,3} = \frac{r_{12} - r_{13} r_{23}}{\sqrt{(1 - r_{13}^{2})(1 - r_{23}^{2})}},$$ where r12 is the correlation coefficient between variables 1 and 2, r13 is the correlation coefficient between variables 1 and 3, and r23 is the correlation coefficient between variables 2 and 3. The coefficient r12,3 is the partial correlation coefficient between variable 1 and variable 2 with the influence of variable 3 removed (Zar 2010). The linear trend and decadal variation (periods longer than 9 years) of the data are removed to highlight interannual variability. PV, PV budget equation, and PV flux Here, W is defined as PV per unit volume, i.e., the PV density or PV substance. The expression of W in its general form (Haynes and McIntyre 1987, 1990; Bretherton and Schär 1993) is as follows: $$W = \vec{\zeta}_{a} \cdot \nabla \theta ,$$ where \(\vec{\zeta}_{a}\) is the 3D absolute vorticity vector, and \(\theta\) is potential temperature. The budget equation for PV can be expressed in conservation form as follows (Haynes and McIntyre 1987, 1990): $$\frac{\partial W}{\partial t} = - \nabla \cdot \vec{J},$$ with the "PV flux" $$\vec{J} = \vec{V}_{e} W = \vec{V} W - \vec{\zeta}_{a} \dot{\theta} - \vec{F}_{\zeta} \theta ,$$ in which \(\vec{V}\) is the 3D wind vector, \(\dot{\theta} = \frac{d\theta}{dt}\) is the diabatic heating rate, \(\vec{F} = (F_{x}, F_{y})\) is the frictional force, \(\vec{F}_{\zeta}\) is the vorticity of the frictional force, and $$\vec{V}_{e} = \vec{J}/W = \vec{V} - (\vec{\zeta}_{a} \dot{\theta} + \vec{F}_{\zeta} \theta)/W$$ is the effective velocity. According to Bretherton and Schär (1993), because $$\frac{\partial \theta}{\partial t} + \vec{V}_{e} \cdot \nabla \theta \equiv 0,$$ the effective velocity \(\vec{V}_{e}\) is parallel to the \(\theta\) surface. Therefore, following Eq.
(3), the PV flux \(\vec{J}\) is directed along isentropic surfaces without penetrating them, which represents an alternative way of envisioning the impermeability theorem. Because the divergence of a curl is zero, substituting Eq. (1) into Eq. (2) and choosing the form of \(\vec{J}\) that satisfies Eq. (2) gives (Bretherton and Schär 1993) $$\frac{\partial (\vec{\zeta}_{a} \theta)}{\partial t} = - \vec{J}.$$ By integrating Eq. (4) over a given period \(\Delta t\) and adopting the following definition: $$\vec{J_s} = - \vec{\zeta}_{a} \theta ,$$ we obtain $$\vec{J_s} = \int_{t_{0}}^{t_{0} + \Delta t} \vec{J}\, dt + \mathbf{C} = \int_{t_{0}}^{t_{0} + \Delta t} (\vec{V}_{e} W)\, dt + \mathbf{C},$$ where \(\mathbf{C}\) is a time-independent constant field. The flux \(\vec{J_s}\) can be considered the temporally accumulated PV flux over the given period (\(\Delta t\)), with a different unit from the PV flux \(\vec{J}\). From Eq. (5), we can also obtain $$W = - \nabla \cdot \vec{J_s}.$$ (6b) On the basis of Eq. (6b), and by analogy with the atmospheric convergence \(C = - \nabla \cdot \vec{V}\), in which \(\vec{V}\) is the atmospheric circulation, the accumulated PV flux \(\vec{J_s}\) can be referred to as the PV circulation (hereafter, PVC). From Eq. (6a), we can see that the PVC represents the cumulative effect of the transient PV flux. From Eqs. (2) and (6b), we can also discern the difference between \(\vec{J_s}\) and \(\vec{J}\). For a specific domain enclosed by boundary A, its gross PV is determined by the sum of the cross-A PVC \(\vec{J_s}\), whereas its gross PV generation is determined by the sum of the cross-A PV flux \(\vec{J}\). In simple terms, the convergence of \(\vec{J_s}\) is PV itself (Eq. 6b), whereas the convergence of \(\vec{J}\) is the local PV generation (Eq. 2). Gross PV budget in the atmosphere We define a theta surface (\(\theta_{T}\)) above the Underworld as the top of the atmospheric domain under investigation.
Because the flux \(\vec{J}\) at \(\theta_{T}\) is parallel to the \(\theta_{T}\) surface, there is no flux \(\vec{J}\) perpendicular to the \(\theta_{T}\) surface (Bretherton and Schär 1993; Schneider et al. 2003). Integrating the PV budget equation (Eq. 2) globally from the Earth's surface to the \(\theta_{T}\) surface and using Gauss's theorem, we obtain the following: $$\frac{\partial}{\partial t}\iiint\limits_{global} W\, dv = - \iiint\limits_{global} \nabla \cdot \vec{J}\, dv = - \iint\limits_{global\,surf} \vec{J} \cdot \overrightarrow{ds}.$$ If the domain is confined to the NH, then we have $$\frac{\partial}{\partial t}\iiint\limits_{NH} W\, dv = - \iiint\limits_{NH} \nabla \cdot \vec{J}\, dv = - \iint\limits_{NH\,surf} \vec{J} \cdot \overrightarrow{ds} + \iint\limits_{EQ} CEPVF\, ds,$$ in which \(ds = dx\,dh\) with h ranging from the surface to \(\theta_{T}\), and $$CEPVF = \vec{J} \cdot \vec{j} = J^{y},$$ (7c) which is the meridional component of the PV flux across the equatorial vertical section and represents cross-equatorial PV transport. The superscript indicates the vector component. Equation (7a) indicates that the budget of globally integrated PV is determined solely by the Earth's surface PV flux, which is consistent with previous studies (Haynes and McIntyre 1987, 1990; Hoskins 1991; Schneider et al. 2003). Equation (7b) indicates that CEPVF also contributes to the PV budget in the NH. Because the PV budget is balanced in the NH in terms of the long-term mean (Eq. 7b), the integrated CEPVF can be regarded as a monitor from which the integrated surface PV flux conditions can be detected. The surface PV flux is related to the surface thermal conditions (Eq. 3), which means that CEPVF is closely linked to the surface thermal conditions. PV flux and cross-equatorial PV flux in a pressure coordinate system Most reanalysis data are archived in a pressure coordinate system.
For hydrostatic large-scale motion, the pressure coordinate analogue of vorticity may be written as follows (Sheng et al. 2021): $$\vec{\zeta}_{a} = \frac{\partial v}{\partial p}\,\vec{i} - \frac{\partial u}{\partial p}\,\vec{j} - \left(f + \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)\vec{k},$$ where \((u,v)\) is the horizontal wind vector, and the unit vectors \((\vec{i}, \vec{j}, \vec{k})\) point eastward, northward, and downward, respectively. Then, the component form of the PVC (\(\vec{J_s}\)) can be written as follows: $$\vec{J_s} = - \vec{\zeta}_{a} \theta = (J_{s}^{x}, J_{s}^{y}, J_{s}^{p}) = - \frac{\partial v}{\partial p}\theta\,\vec{i} + \frac{\partial u}{\partial p}\theta\,\vec{j} + \left(f + \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)\theta\,\vec{k},$$ in which $$J_{s}^{x} = - \frac{\partial v}{\partial p}\theta, \quad J_{s}^{y} = \frac{\partial u}{\partial p}\theta, \quad J_{s}^{p} = (f + \zeta)\theta.$$ From Eqs. (6a), (7c), and (8b), we can obtain the following important relation: $$\Delta \int_{t_{0}}^{t_{0} + \Delta t} CEPVF\, dt = \Delta J_{s}^{y} = \Delta\left(\frac{\partial u}{\partial p}\theta\right),$$ where \(\Delta\) indicates the temporal change or time variation. Equation (9) means that the temporal variation of CEPVF over a given period is determined substantially by the variation of the vertical shear of the zonal wind weighted by potential temperature. The PV flux \(\vec{J}\) (Eq. 3) consists of the advective flux (\(\vec{J_a}\); the first term), the heating flux (the second term), and the friction flux (the third term). The effect of friction always tends to offset a change in PV, but it cannot alter the sign of the PV change. In simple terms, friction makes the eventual observed change more moderate than it would be otherwise.
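On a regular latitude-longitude-pressure grid, the component form of the PVC in Eq. (8b) can be evaluated with centered finite differences. The following is a minimal sketch assuming NumPy; the grid layout, array shapes, and function name are illustrative, not the original analysis code:

```python
import numpy as np

def pvc_components(u, v, theta, x, y, p, f):
    """PV circulation J_s = -zeta_a * theta (Eq. 8b) on a grid.
    u, v, theta: arrays of shape (n_p, n_y, n_x); x, y: coordinates
    in metres; p: pressure levels in Pa; f: Coriolis parameter (n_y,)."""
    du_dp = np.gradient(u, p, axis=0)
    dv_dp = np.gradient(v, p, axis=0)
    zeta = np.gradient(v, x, axis=2) - np.gradient(u, y, axis=1)  # relative vorticity
    js_x = -dv_dp * theta                      # J_s^x = -(dv/dp) * theta
    js_y = du_dp * theta                       # J_s^y =  (du/dp) * theta
    js_p = (f[None, :, None] + zeta) * theta   # J_s^p = (f + zeta) * theta
    return js_x, js_y, js_p
```

The meridional component js_y is the quantity whose temporal change enters Eq. (9).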
The effect of friction is usually confined to the surface layer, as discussed in Sect. 4.2.3, and it is negligible in the free atmosphere. In the following analysis concerning the free atmosphere, we do not further investigate the effect of friction in relation to CEPVF. Along the equator, the heating flux is smaller than the advective flux after integrating vertically over the equator (table not shown), and it tends to slightly offset the advective flux. Moreover, the correlation coefficient between the annual mean integrated advective flux and the sum of the annual mean integrated advective and heating fluxes is as high as 0.92, exceeding the 0.01 significance level. This indicates that CEPVF is dominated by the advective flux. Thus, we first consider only the advective flux in calculating CEPVF for the free atmosphere. The meridional component of the PV flux in Eq. (3) therefore gives $$CEPVF = \vec{J_a} \cdot \vec{j} = J^{y} = vW.$$ From Eq. (10), the vertically and zonally integrated CEPVF along the equator can be expressed as follows: $$\left\{ CEPVF \right\} = \int\limits_{EQ} \int_{pt}^{ps} CEPVF\, dp\, dx = \int\limits_{EQ} \int_{pt}^{ps} vW\, dp\, dx.$$ From Eq. (11), a CEPVF index, namely the CEPVFI, is defined as the normalized time series of {CEPVF}: $$CEPVFI = \left\{ CEPVF \right\}^{\prime}/\sigma,$$ where \(\sigma\) is the standard deviation of {CEPVF} and \(\left\{ CEPVF \right\}^{\prime}\) is the {CEPVF} anomaly. A positive (negative) CEPVFI value corresponds to anomalous net northward (southward) CEPVF over the equatorial section. Since the climatic mean CEPVF is negative (as will be shown in Fig. 1), a positive (negative) anomalous CEPVF also implies less (more) southward CEPVF. Climatic mean distribution of cross-equatorial PV flux (CEPVF) along the equator in a MAM, b JJA, c SON, and d DJF. Unit: 10⁻⁷ kg⁻¹ K m². Black shading indicates land Because the 380 K isentropic surface is near the tropopause over the equator (Wilcox et al.
2012), and because the 100 hPa isobaric surface almost overlaps the 380 K isentropic surface at the equator, the 100 hPa isobaric surface can be considered the tropical tropopause at the equator. Calculation shows that the correlation between the CEPVFI calculated with the upper integral boundary set at 100 hPa and that calculated with the boundary set at 380 K is as strong as 0.91, exceeding the 0.01 significance level (figure not shown). Furthermore, because the raw data are archived on isobaric surfaces, directly employing pressure coordinates avoids extrapolation and interpolation errors. Thus, to focus on the signal in the troposphere in a pressure coordinate system, the upper integral boundary (pt in Eq. 11) is chosen as 100 hPa in this study. Climatology of cross-equatorial PV flux Several important climatological features of CEPVF in different seasons are presented in Fig. 1. These features, which appear in all seasons, include two layers of strong southward CEPVF: one in the upper layer within 200–100 hPa and the other in the lower layer within 1000–850 hPa; strong southward CEPVF occurring preferentially around the land in the lower layer; and negative CEPVF dominating over the entire equatorial section, which is the most interesting phenomenon. The CEPVF is linked with the meridional wind and PV. Zhao and Lu (2020) investigated the vertical structure of cross-equatorial flow (meridional wind). Their results clearly showed two layers in the distribution of cross-equatorial flow, with absolute maxima at 200–100 and 1000–700 hPa. High PV in the atmosphere is also concentrated in both the upper layer (Hoskins et al. 1985) and the lower layer (Sheng et al. 2021; Zhao and Ding 2009). Thus, the two layers of CEPVF (Fig. 1) make sense because CEPVF is the product of the meridional wind and PV; however, it remains unclear why negative CEPVF is dominant over the entire equatorial section.
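With the integral boundaries fixed, the index construction in Eqs. (11) and (12) reduces to an integral of vW over the equatorial section for each year followed by standardization. A minimal sketch assuming NumPy, uniform grid spacing, and series already detrended as described in Sect. 2 (all names illustrative):

```python
import numpy as np

def cepvfi(v, W, dx, dp):
    """CEPVFI (Eq. 12): integrate v*W over the equatorial section
    (Eq. 11) for each year, then standardize the resulting series.
    v, W: arrays of shape (n_years, n_p, n_x); dx in metres, dp in Pa."""
    cepvf = np.sum(v * W * dp * dx, axis=(1, 2))  # {CEPVF}, one value per year
    anom = cepvf - cepvf.mean()                   # {CEPVF}' anomaly
    return anom / anom.std()                      # normalized index
```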
It might be conjectured that PV reverses sign across the equator, with positive values to the north and negative values to the south. At the equator, a northerly wind (v < 0) brings positive PV (PV > 0) into the Southern Hemisphere (SH), while a southerly wind (v > 0) brings negative PV (PV < 0) into the NH. In both cases, CEPVF is always negative. Therefore, negative CEPVF is observed over almost the entire equatorial section. The equatorial cross section of the distribution of PV and the meridional wind is shown in Fig. 2. In all seasons, in the areas with red dots (v > 0), PV is mostly negative (PV < 0), whereas in the areas without dots (v < 0), PV is mostly positive (PV > 0). This configuration verifies the physical process documented above: a northerly wind (v < 0) brings positive PV (PV > 0) into the SH, while a southerly wind (v > 0) brings negative PV (PV < 0) into the NH. This physical process can also be seen in Hoskins et al. (2020) and Hoskins and Yang (2021, see their Fig. 1c). Thus, these results indicate that it is this configuration that results in the overwhelmingly negative CEPVF over the equatorial section. However, certain areas of positive values do exist (Fig. 1) where the climatological meridional wind and PV have opposite signs, e.g., the region with positive values in the upper level shown in Fig. 1b, c. Because CEPVF is a combination of mean and transient eddy processes, these positive values imply that the transient eddy process also contributes to CEPVF, and the relevant physical implication deserves further study. Same as Fig. 1 but for PV per unit volume. Unit: 10⁻⁷ K m s kg⁻¹. Red dots indicate southerly wind (v > 0) and areas without dots indicate northerly wind (v < 0) The climatological annual cycle of {CEPVF} and its standard deviation in each season are shown in Fig. 3a, b, respectively. The positive {CEPVF} component is very small and negative CEPVF dominates {CEPVF} (Fig. 3a), which is consistent with Fig.
1. The negative {CEPVF} indicates that, in terms of the climatological mean, there is net PV transport from the NH to the SH. The {CEPVF} shows a semiannual cycle with absolute minima in April and November and absolute maxima in January and July. This semiannual cycle is consistent with the cycle of cross-equatorial mass exchange (Zhang et al. 2008). The standard deviation of {CEPVF} peaks in MAM (Fig. 3b), reaches a trough in DJF, and is intermediate in JJA and SON. This means the strongest (weakest) interannual variability of {CEPVF} occurs in boreal spring (winter). a Climatological annual cycle of {CEPVF}. Unit: 10⁴ K m Pa s⁻¹. Yellow, gray, and black bars indicate northward, southward, and total CEPVF along the equator, respectively. b Nondimensional {CEPVF} standard deviation in different seasons Cross-equatorial PV flux and SAT over Eurasia during boreal spring As documented in Sect. 2.3, because CEPVF is closely related to the surface PV flux in the climatological mean state (Eq. 7b), and because the surface PV flux is related to SAT (Eq. 3), there should be a close linkage between CEPVF and SAT. This is shown to be true in the data analyses, particularly in relation to boreal spring. In this section, we show this interannual connection between {CEPVF} and SAT over the mid–high latitudes of Eurasia during boreal spring, when the interannual variability of {CEPVF} is the strongest (Fig. 3b), and discuss its possible mechanism. Relationship between CEPVF and SAT The variation of the CEPVFI (Eq. 12) during boreal spring is shown in Fig. 4a. A positive (negative) value of the CEPVFI indicates net anomalous PV transport from the SH (NH) to the NH (SH). The CEPVFI shows strong interannual variability, with the variation exceeding one standard deviation in 10 of the 38 years. A regression map of SAT against the CEPVFI over the mid–high latitudes of the Eurasian continent is presented in Fig.
4b, which shows a broad uniform negative pattern over the mid–high latitudes of Eurasia. Embedded within this broad uniform pattern are three significant negative centers: the Mediterranean region, the area southwest of Lake Baikal, and the far east of Russia. This finding indicates that a positive (negative) phase of the CEPVFI corresponds to cooling (warming) over Eurasia. This pattern bears close resemblance to the first empirical orthogonal function (EOF) mode of SAT over Eurasia during boreal spring that was revealed by Chen et al. (2016, see their Fig. 1a). Examination of the correlation between the CEPVFI and the time series of the first EOF mode of SAT indicates that the correlation coefficient reaches 0.36, exceeding the 0.05 significance level. This indicates that CEPVF is related closely to SAT over the mid–high latitudes of Eurasia during boreal spring. However, how CEPVF variation is related to SAT over the northern Eurasian continent remains unclear and requires further investigation. a Normalized time series of spring CEPVFI. b Regression coefficients of SAT against the CEPVFI during boreal spring (shading, °C). Areas exceeding the 0.05 significance level are highlighted by black dots. Three centers of negative SAT are evident along the black line Possible mechanism CEPVF, general circulation, and meridional PVC at the equator To understand the physical process via which the {CEPVF} is related to SAT, we first investigate the distribution of anomalous CEPVF along the equatorial section. The correlation between the CEPVFI and CEPVF along the equator is shown in Fig. 5a. It is evident that anomalous CEPVF over the equatorial Pacific is predominantly northward (southward) above (below) 700 hPa (Fig. 5a). Conversely, CEPVF over the equatorial Indian Ocean is southward (northward) in the upper (lower) layer. 
Distribution of correlation coefficients between the CEPVFI and a \(J^{y}\) and b \(J_{s}^{y}\) (shading) and the latitudinal circulation (vectors) along the equatorial section during boreal spring. Vectors exceeding the 0.05 significance level are shown. Areas exceeding the 0.05 significance level are highlighted by black dots. c Regression of the variation of \(J_{s}^{y}\): term A (red dashed line), term B (green solid line), and term C (blue solid line) on the CEPVFI along the equator at 300 hPa. Unit: 10⁻¹ m K s⁻¹ Pa⁻¹. The result for term B is multiplied by 100 for clarity. d Same as c except for 850 hPa The correlation map between the CEPVFI and the meridional component of the PVC (\(J_{s}^{y}\); Eq. 8b) is shown in Fig. 5b. Figure 5b bears marked resemblance to the vertical structure of anomalous CEPVF shown in Fig. 5a, which is also in accordance with Eq. (9). The vertical structure, with northward CEPVF in the upper level and compensatory southward CEPVF in the lower level over the equatorial Pacific, is very clear (Fig. 5b). Moreover, the inverse vertical structure over the equatorial Indian Ocean is also prominent. This anomalous CEPVF pattern (Fig. 5b) revealed by the CEPVFI is consistent with the first EOF mode of \(J_{s}^{y}\) over the equatorial section, and the correlation coefficient of their time series reaches 0.68, exceeding the 0.01 significance level (figure not shown), which indicates that the CEPVFI captures well the dominant mode of variation of CEPVF. A more interesting characteristic is that the structure shown in Fig. 5b is less chaotic than that in Fig. 5a. This is mainly because the PVC (\(\vec{J_s}\)) is the time-integrated \(\vec{J}\) (Eq. 6a), in which the high-frequency information is filtered out. It should be noted that the convergence of \(\vec{J}\) represents the generation of W, whereas the convergence of \(\vec{J_s}\) is equal to W itself. The similarity between Fig. 5a and b is thus natural.
It should also be noted that certain differences remain between Fig. 5a and b. This is mainly because \(\vec{J}\) is approximated by the advective PV flux, whereas \(\vec{J_s}\) includes the total effect of advection, diabatic heating, and friction. Despite this difference, the similarity between Fig. 5a and b confirms that the flux \(\vec{J}\) and the flux \(\vec{J_s}\) are consistent in depicting the variation of CEPVF. In essence, \(J_{s}^{y}\) is the vertical shear of the zonal wind weighted by potential temperature. From the definition of \(J_{s}^{y}\) (Eq. 8b), the transitional area with the zero contour in Fig. 5b corresponds to the maximum or minimum zonal wind. As can be seen, the regions of maximum and minimum zonal wind are located in the lower layer near 700 hPa over the equatorial Pacific and Indian Ocean, respectively. Consequently, over the Pacific, \(J_{s}^{y}\) is positive above the maximum zonal wind but negative below, whereas over the Indian Ocean, \(J_{s}^{y}\) is negative above the minimum zonal wind but positive below. The above results indicate that the variation of CEPVF is mainly related to the anomalous zonal circulation along the equator. This result can be verified using the following linear temporal expansion of the meridional component of the PVC (Eq. 8b): $$\underbrace{\Delta J_{s}^{y}}_{A} = \Delta\left(\frac{\partial u}{\partial p}\theta\right) \approx \underbrace{\frac{\partial u}{\partial p}\,\Delta\theta}_{B} + \underbrace{\left(\Delta\frac{\partial u}{\partial p}\right)\theta}_{C},$$ which means that the temporal variation of CEPVF (term A) is induced by the variations of potential temperature (term B) and the zonal circulation (term C). The regressions of terms A (red dashed line), B (green solid line), and C (blue solid line) on the CEPVFI along the equator at 300 and 850 hPa are shown in Fig. 5c, d, respectively.
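The dominance of term C over term B in this decomposition can be illustrated with back-of-the-envelope magnitudes: fractional changes of θ are tiny (a ~1 K change on a ~300 K base), whereas interannual changes in shear are a sizeable fraction of the shear itself. A sketch with purely illustrative numbers, not values from the paper:

```python
# Illustrative magnitudes only (not diagnosed from the reanalysis)
du_dp = -2.0e-4      # vertical shear of zonal wind, s^-1 Pa^-1
theta = 300.0        # potential temperature, K
d_du_dp = -5.0e-5    # interannual change in shear
d_theta = 1.0        # interannual change in theta, K

term_A = (du_dp + d_du_dp) * (theta + d_theta) - du_dp * theta  # exact change of J_s^y
term_B = du_dp * d_theta    # contribution of theta variation
term_C = d_du_dp * theta    # contribution of circulation variation

# term_C (-1.5e-2) dwarfs term_B (-2e-4), and term_B + term_C
# reproduces term_A up to the small cross term d_du_dp * d_theta.
```

The two-orders-of-magnitude gap between the terms is consistent with term B being multiplied by 100 for visibility in Fig. 5c, d.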
Because the regression of term B (green line) on CEPVFI is very small, it is multiplied by 100 for clarity in the figure. It is evident that term A along the equator matches well with term C at both upper (Fig. 5c) and lower (Fig. 5d) levels, further confirming that the temporal variation of CEPVF is determined substantially by changes of the zonal circulation along the equator. Wu and Meng (1998) found strong coupling between the monsoonal zonal circulation over the equatorial Indian Ocean and the Walker circulation over the Pacific. The coupling system operates very much like a pair of gears running on the equatorial Indian Ocean and Pacific (denoted as GIP), and the "gearing point" of the two cells is located near the Maritime Continent. When one cell rotates clockwise, the other rotates anticlockwise. The direction of this gearing with a clockwise (anticlockwise) Walker circulation over the Pacific and anticlockwise (clockwise) monsoonal zonal circulation over the equatorial Indian Ocean is defined as positive (negative). Wu and Meng (1998) demonstrated that the GIP is linked closely to ENSO. The circulation shown in Fig. 5b is in accordance with the negative GIP reported in their study, which suggests that CEPVF might be associated with ENSO. CEPVF, PV, and PVC Because the convergence of PVC (\(\vec {J_s}\)) is directly related to PV itself (Eq. 6b), whereas the convergence of PV flux (\(\vec{ J}\)) represents PV generation (Eq. 2), for clear and intuitive illustration of the variation of PV itself we consider PVC in the following analysis. The distributions of the correlation coefficients between the CEPVFI and PV as well as horizontal PVC at different levels are presented in Fig. 6. Generally, in high-latitude regions, the PV distribution exhibits an equivalent barotropic structure, whereas in the tropics, both PV and horizontal PVC in the upper troposphere are out of phase with their counterparts in the lower troposphere. 
Corresponding to the positive phase of the CEPVFI, northward CEPVF can be seen clearly over the Pacific at 200 hPa (Fig. 6a). This northward CEPVF converges anomalous PV to the north of the equator, compensating for the reduction of PV in situ. Meanwhile, southward CEPVF is also significant over the Maritime Continent and the Indian Ocean. The southward PVC over the Indian Ocean transfers PV toward the equator, corresponding to the formation of a belt of PVC divergence associated with negative PV over the north of the Tibetan Plateau (TP). This belt of divergence exports PV, which is conducive to the formation of positive PV over the mid–high latitudes of Eurasia (Eq. 6b). It is further found that the broad pattern of positive PV bears close resemblance to the SAT pattern shown in Fig. 4b, suggesting a connection between SAT anomalies and PV anomalies. The broad pattern of positive PV, with three nodes over the Eurasian continent, exhibits an equivalent barotropic structure that extends downward to 500 hPa (Fig. 6b). Distribution of correlation coefficients between the CEPVFI and PV (shading) and horizontal \(\vec{J_s}\) (vectors) at a 200, b 500, and c 850 hPa during boreal spring. Areas exceeding the 0.05 significance level are highlighted by black dots. Vectors exceeding the 0.05 significance level are plotted. The blue line denotes the TP topographic boundary of 3000 m The signal of horizontal PVC over the equator at 500 hPa is weaker than that at 200 hPa, especially over the Indian Ocean. However, the three positive PV centers over Eurasia are very clear and the flux vectors are significant (Fig. 6b). The horizontal PVC at 850 hPa (Fig. 6c) is opposite to that in the upper level (Fig. 6a) along the equator, which is in accordance with Fig. 5b. Although the three positive PV centers are not clear over the mid–high latitudes of Eurasia at this lower level, the in situ divergence of the PVC vectors is prominent.
It is worth noting that, near the equator over the Pacific sector, PV is positive to the north and negative to the south at 850 hPa, and that this is reversed at 500 and 200 hPa (Fig. 6). This is because part of the PV (\(- (f - \partial u/\partial y) \cdot \partial \theta /\partial p\)) changes its sign across the equator, particularly if the maximum/minimum zonal wind is located at the equator. The intensified equatorial westerly flow in the lower troposphere and easterly flow above 600 hPa over the Pacific sector (Fig. 5b) thus contribute to the anomalous PV distribution there. To the west of the Maritime Continent, for the same reason, PV is negative to the north and positive to the south at 850 and 500 hPa, and the situation is reversed at 200 hPa (Fig. 6). This is because the equatorial westerly flow dominates below 400 hPa in this sector, whereas an easterly flow prevails aloft (Fig. 5b). To further reveal the characteristics of the broad positive PV pattern in the mid–high latitudes over the Eurasian continent shown in Fig. 6a, the meridional vertical section crossing the central positive PV center (averaged over 70°–105° E) embedded in this broad PV pattern is presented in Fig. 7. The correlation coefficients between the CEPVFI and both PVC and PV are displayed. Because the positive vertical direction is downward in pressure coordinates, mimicking the vertical pressure velocity (\(\omega\)), the vertical component of PVC (\(J_{s}^{p}\)) is multiplied by (− 1) for intuitive plotting. Corresponding to the positive phase of the CEPVFI, in the tropical area below 300 hPa, positive PV exists to the south of the equator and negative PV exists to the north. This is because the prevailing equatorial zonal wind below 300 hPa in the Indian Ocean sector is easterly (Fig. 5b), resulting in cyclonic vorticity to the south of the equator and anticyclonic vorticity to the north.
In both the NH and SH, the PVC is downward in the region of positive PV but upward in the region of negative PV. This is because, according to the definition (Eq. 8b), the vertical component of PVC multiplied by (− 1) (\(- J_{s}^{p} = - (f + \zeta )\theta\)) possesses the opposite sign to PV in a hydrostatic atmosphere. Distribution of correlation coefficients between the CEPVFI and the 70°–105° E mean PV circulation (\(J_{s}^{y}\); − \(J_{s}^{p}\)) (vectors) and PV (shading). Areas exceeding the 0.05 significance level are highlighted by black dots. Vectors exceeding the 0.05 significance level are plotted A remarkable feature evident in Fig. 7 over the TP is the significant area of divergence (convergence) of PVC associated with negative (positive) PV in the upper troposphere above (below) 400 hPa. Diagnosis of the correlation between the CEPVFI and diabatic heating shows that in the area over the TP (28°–40° N, 80°–105° E), the area-averaged diabatic heating increases (decreases) with height below (above) 300 hPa (figure not shown). Positive and negative PV is then generated in the lower layer and upper layer, respectively. This explains at least partly why PV distribution is positive in the lower layer and negative in the upper layer over the TP. The divergence of PVC over the TP connects CEPVF to the SH and transports PV to high latitudes, which is conducive to the formation of a notable positive PV column over the mid–high latitudes of Eurasia (Eq. 6b). Specifically, the divergence over the TP is related to the two gyres of PVC: one to its south and the other to its north. The southern one crosses the equator into the SH in the upper layer and returns to the NH in the lower layer, whereas the northern one moves northward into the higher latitudes, converging and intruding PV downward to the north of the TP, leading to increasing PV over the entire column in the mid–high latitudes of Eurasia. 
Finally, the positive PV column with an equivalent barotropic structure is formed over the mid–high latitudes of Eurasia. PV, SAT, and surface feedback To further reveal how the equivalent barotropic PV column is maintained and how it is related to the in situ cold SAT, the vertical cross section of the correlation between the CEPVFI and both PV and potential temperature (along the black line in Fig. 4b) is shown in Fig. 8. Corresponding to the positive phase of the CEPVFI, three equivalent barotropic positive PV columns (Fig. 8) are located at positions coincident with the three cold SAT centers. Based on the PV-θ mechanism (Hoskins et al. 1985, 2003), isentropes in the troposphere will be vertically "sucked" toward a positive PV anomaly, which means the isentropes bow downward (upward) in the upper (lower) layer. As upward (downward) bowing of isentropes indicates a cold (warm) atmosphere, the three equivalent barotropic positive PV columns therefore lead to the broad uniform pattern of cold SAT shown in Fig. 4b. Distribution of correlation coefficients between the CEPVFI and potential temperature (contours; solid (dashed) lines indicate positive (negative) values) and PV (shading) along the black line displayed in Fig. 4b during boreal spring. The black vertical dashed lines indicate the negative SAT centers displayed in Fig. 4b. Areas exceeding the 0.05 significance level are highlighted by black dots As predicted by the PV-θ mechanism, an outstanding common feature (Fig. 8) is that a warm (cool) anomaly appears in each column above (below) 300 hPa. Recall that the PV source is at the Earth's surface; at this boundary, the vertical PV transport due to the flux \(\vec{J_s}\) is \(\vec{J_s} \cdot \vec{n} \approx \int_{t_{0}}^{t_{0} + \Delta t} (- f\dot{\theta} - F_{\zeta} \theta)\, dt\), which depends on both heating and friction, where \(\vec{n}\) is an upward unit vector.
Because a cold surface favors a local anticyclonic circulation (Thorpe 1985; Hoskins et al. 1985), the atmosphere exerts anticyclonic stress on the Earth, and the Earth reacts on the atmosphere by producing cyclonic torque, generating positive PV. Moreover, surface cooling over a cold surface also contributes to the generation of positive PV, which in turn contributes to the formation of the lower part of the positive PV column over the cold surface, as presented in Fig. 8. According to the thermal wind relation, a region with cold SAT and a cold air column above can generate a cyclonic vertical shear circulation. Conversely, a warm anomaly in the upper positive PV column should correspond to an anticyclonic vertical shear circulation. Consequently, a maximum anomalous cyclonic circulation should exist in the middle of the column at around 300 hPa. Because the column is warm aloft and cold below, the static stability should also be maximized in the middle layer. Therefore, the positive PV can extend from the lower layer to the middle layer, as shown in Fig. 8. In turn, the PV in the positive barotropic PV columns can further maintain the "warm aloft and cold below" thermal structure of the column through the PV-θ mechanism. Consequently, the broad uniform pattern of cold SAT shown in Fig. 4b is retained in an equilibrium state attributable to positive feedback between the atmospheric circulation and surface cooling. The above discussion reveals that the significant connection between CEPVF and SAT over the mid–high latitudes of Eurasia is sustained via the PVC, the PV-θ mechanism, and the surface feedback involving friction, diabatic cooling, and the atmospheric circulation.

Possible forcing factors of the CEPVF

The anomalous CEPVF is closely related to the vertical distribution of the anomalous zonal circulation over the equator (Eq. 9), and therefore any factor inducing an anomalous equatorial zonal circulation could drive the CEPVF.
Because ENSO events have a strong effect on the zonal circulation (Bjerknes 1969), ENSO, as a strong signal of air–sea interaction along the equator, could be a driver of CEPVF. The correlation coefficient between the spring CEPVFI and the preceding winter NINO34 index is as high as 0.62 (figure not shown), and the correlation coefficient between the spring CEPVFI and the current spring NINO34 index is as high as 0.67, as shown in Fig. 9a; both pass the 0.01 significance level, indicating a strong relation between ENSO and CEPVF. The correlation between NINO34 and the meridional component of the PVC (\(J_{s}^{y}\)) is shown in Fig. 9b. In the warm phase of ENSO (Fig. 9b), descending motion is observed over the Maritime Continent, and anomalous westerly and easterly winds are observed at the lower level over the equatorial Pacific and equatorial Indian Ocean, respectively, presenting a negative GIP pattern (Wu and Meng 1998). The CEPVF induced by the warm phase of ENSO is very similar to the CEPVF revealed by the CEPVFI (Fig. 5b). This confirms that ENSO is an important forcing factor of CEPVF.

Fig. 9 a Normalized time series of the CEPVFI, NINO34 index, and TSAI during boreal spring. b Same as Fig. 5b but for the spring NINO34 index. c Same as Fig. 5b but for the TSAI index.

In addition to ENSO, by employing the partial correlation between the CEPVFI and SST with the spring NINO34 index removed, we found another significant forcing signal: the tropical South Atlantic uniform SST mode (TSAM, Fig. 10), which is characterized by SST anomalies of the same sign over the region (30° S–0°, 40° W–10° E). This mode is a major EOF pattern of tropical South Atlantic variability (Huang et al. 2004). The time series of the area-averaged SST in the blue box in Fig. 10 is defined as the TSAM index (TSAI, shown in Fig. 9a) to represent tropical South Atlantic forcing.
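The partial-correlation screening described here (correlating the CEPVFI with SST after removing the NINO34 signal) amounts to regressing the NINO34 series out of both variables and correlating the residuals. The sketch below uses synthetic series; the variable names, coefficients, and sample size are illustrative assumptions, not values from the paper:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after linearly removing z from both
    (equivalent to the usual partial-correlation formula)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residual of x regressed on z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residual of y regressed on z
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
nino = rng.standard_normal(41)                # stand-in for the NINO34 index
sst = 0.5 * nino + rng.standard_normal(41)    # synthetic SST partly driven by ENSO
cepvfi = 0.7 * nino + 0.3 * sst + rng.standard_normal(41)  # synthetic CEPVFI

r_raw = float(np.corrcoef(cepvfi, sst)[0, 1])
r_par = partial_corr(cepvfi, sst, nino)       # ENSO-free part of the relation
```

With the ENSO pathway regressed out, the residual correlation isolates the part of the CEPVFI–SST covariability not explained by NINO34, which is how a map like Fig. 10 separates the TSAM signal from ENSO.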
The correlation coefficient between the TSAI and the leading EOF time series of the SST in the blue box is as high as 0.99 (figure not shown), indicating that the following results are insensitive to the definition of the SST index. The correlation coefficient between the spring TSAI and the spring CEPVFI is 0.35, passing the 0.05 significance level. However, the correlation coefficient between the spring TSAI and the spring NINO34 index is as low as −0.016, failing the significance test (Fig. 9a). This implies that the variation of SST in the tropical South Atlantic is unrelated to ENSO during boreal spring, which is consistent with both Chang et al. (2006) and Kucharski et al. (2007). The TSAM and ENSO can therefore be considered two independent forcing factors that drive CEPVF during boreal spring. As can be seen from Fig. 9c, a significant zonal circulation related to the TSAM is observed around the Maritime Continent, and the induced CEPVF, which is concentrated around the Maritime Continent, is significant.

Fig. 10 Distribution of partial correlation coefficients between the CEPVFI and SST with the spring NINO34 index removed. The area covered by the blue box is (30° S–0°, 40° W–10° E). Areas exceeding the 0.05 significance level are highlighted by black dots. The solid blue line denotes the TP topographic boundary of 3000 m.

To evaluate the relative roles of the two forcing factors, a regression equation is constructed as CEPVFI* = 0.68(spring NINO34) + 0.36(TSAI) through multiple regression analysis, as shown in Fig. 11a. The correlation between the CEPVFI and the CEPVFI* reconstructed from the spring NINO34 index and the TSAI is 0.75, passing the 0.01 significance level. The explained variance contributed by the NINO34 index and the TSAI is approximately 45% and 13% of the total, respectively. Because the NINO34 index and the TSAI are unrelated, the total variance explained by these two forcing factors is approximately 58%.
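The explained-variance bookkeeping in this paragraph (≈ 45% + ≈ 13% ≈ 58%, with the shares adding because the two predictors are uncorrelated) can be reproduced with a least-squares fit on standardized series. The sketch below uses synthetic data with coefficients close to the paper's nominal values; the sample size and series are assumptions, not the actual indices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 41                                 # hypothetical number of spring seasons
nino = rng.standard_normal(n)          # stand-in for the spring NINO34 index
tsai = rng.standard_normal(n)          # stand-in for the TSAI, independent of NINO34
cepvfi = 0.67 * nino + 0.36 * tsai + 0.55 * rng.standard_normal(n)
cepvfi = (cepvfi - cepvfi.mean()) / cepvfi.std()   # normalized index

# Multiple regression: CEPVFI* = b1*(spring NINO34) + b2*(TSAI)
A = np.column_stack([nino, tsai])
b, *_ = np.linalg.lstsq(A, cepvfi, rcond=None)
cepvfi_star = A @ b
r = float(np.corrcoef(cepvfi, cepvfi_star)[0, 1])

# For standardized, (near-)uncorrelated predictors, each factor's share of the
# explained variance is approximately its squared simple correlation with the index.
share_nino = float(np.corrcoef(cepvfi, nino)[0, 1]) ** 2
share_tsai = float(np.corrcoef(cepvfi, tsai)[0, 1]) ** 2
```

With truly orthogonal predictors the total explained variance is the sum of the two squared simple correlations, which is the logic behind 0.67² ≈ 45%, 0.36² ≈ 13%, and ≈ 58% in total.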
These results indicate that the effects of ENSO and the TSAM largely dominate the variation of the CEPVFI.

Fig. 11 a Normalized time series of the CEPVFI (red line) and CEPVFI* (blue line) during boreal spring. b Composite SAT differences between strong positive (greater than 1 standard deviation) and negative (less than −1 standard deviation) ENSO phases during boreal spring. c As in b but for the TSAI. Areas exceeding the 0.1 significance level are highlighted by white dots.

Because the CEPVF-related zonal circulation induced by ENSO differs from that of the TSAM, the contributions of these two factors to the anomalous SAT should be different. To separate the contributions of ENSO and the TSAM to the anomalous SAT associated with the CEPVFI, Fig. 11 also presents the composite SAT differences between the strong positive (greater than 1 standard deviation) and strong negative (less than −1 standard deviation) phases of the NINO34 index (Fig. 11b) and the TSAI (Fig. 11c). ENSO (Fig. 11b) is mainly responsible for the overall cooling over the mid–high latitudes of Eurasia. There are three evident cooling centers, i.e., over the Mediterranean region, northwest of Lake Baikal, and the far east of Russia, similar to the pattern of anomalous SAT induced by the CEPVFI shown in Fig. 4b, except that the center near Lake Baikal is shifted further northward. The TSAM (Fig. 11c) is mainly responsible for the cooling center to the south of Lake Baikal. The two independent forcing factors together largely contribute to the emergence of the wide-ranging cooling pattern related to the CEPVFI over the mid-latitudes of Eurasia.

Summary and discussion

The change in globally integrated atmospheric PV depends solely on the PV flux at the Earth's surface, which is the lower boundary of the global domain. For the NH domain, the equatorial vertical section becomes another boundary, and the CEPVF becomes another boundary condition for the change of gross PV in the NH.
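The compositing used for Fig. 11b, c (strong positive minus strong negative phases, thresholded at ±1 standard deviation) is straightforward to sketch. Everything below is synthetic; the series, coefficients, and sample size are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n = 41
index = rng.standard_normal(n)               # stand-in for a normalized NINO34 index
sat = -0.6 * index + rng.standard_normal(n)  # synthetic SAT, cooler in warm-index years

pos = sat[index > 1.0]    # years in the strong positive phase (> 1 std dev)
neg = sat[index < -1.0]   # years in the strong negative phase (< -1 std dev)
composite_diff = float(pos.mean() - neg.mean())
tstat, pval = ttest_ind(pos, neg, equal_var=False)  # Welch test for the difference
```

The stippling in Fig. 11 corresponds to testing such composite differences at each grid point (the paper applies the 0.1 significance level there).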
Because the integrated surface PV flux associated with surface thermal conditions is balanced climatologically by the CEPVF (see Eqs. 3 and 7b), the CEPVF is related closely to the thermal conditions of the Earth's surface. The results of this study highlight that it is the atmospheric PVC (\(\vec {J_s}\), Eq. 8a) that links the CEPVF to the tropical forcing and the extratropical SAT response at the Earth's surface. In addition to theoretical analysis, this study explored the climatological mean features of the CEPVF and focused on the correlation between CEPVF and SAT anomalies over the mid–high latitudes of Eurasia during boreal spring on the interannual time scale. The major results concerning the relationship between CEPVF and SAT are shown schematically in Fig. 12 and summarized as follows. Climatologically, negative CEPVF overwhelmingly dominates along the entire equator, indicating that net PV is transported from the NH to the SH. We highlight that the physical process of a northerly wind (v < 0) bringing positive PV (PV > 0) and a southerly wind (v > 0) bringing negative PV (PV < 0) dominates the distribution of the CEPVF. A CEPVF index, namely the CEPVFI, is defined as the normalized time series of the zonally and vertically integrated CEPVF over the equator. On the interannual time scale, the positive phase of the CEPVFI is closely related to the variation of SAT, particularly the significant cooling over the mid–high latitudes of Eurasia, through the PVC, the PV-θ mechanism, and the surface feedback, as indicated schematically in Fig. 12. The CEPVF over the equatorial Indian Ocean corresponds to the formation of a belt of divergence of PV over the north of the TP, which transfers PV toward the equator and contributes to the broad positive PV in the upper troposphere over the mid–high latitudes of Eurasia (Fig. 12a). The positive PV intrudes downward into the lower layer and forms three positive PV columns (Fig. 12b).
Finally, owing to the PV-θ mechanism, the isentropes in the lower troposphere bow upward within these equivalent barotropic positive PV columns, leading to overall cold SAT over the mid–high latitudes of Eurasia (Fig. 12b, c). In turn, the cold surface and its cooling feedback to the atmosphere via surface friction and the diabatic process produce positive PV in the lower troposphere within the PV column. In cooperation with the thermal wind relation, therefore, the cold SAT and the positive PV aloft are maintained in an equilibrium state. Because CEPVF is intrinsically proportional to a function of \(- \frac{\partial u}{\partial p}\) (Eq. 9), the variation of CEPVF can result from the change of the zonal circulation over the equator (Fig. 5c, d). Two independent forcing factors, namely ENSO and the TSAM, are identified as the main drivers of the CEPVF associated with the zonal circulation (Fig. 12c). Together, ENSO and the TSAM explain approximately 58% of the CEPVFI variance, of which ENSO accounts for approximately 45% and the TSAM accounts for approximately 13%. Owing to the differences in CEPVF induced by ENSO and the TSAM, the contributions of these two factors to the anomalous SAT differ. A warm ENSO is mainly responsible for the overall cold SAT over the mid–high latitudes of Eurasia, while a warm TSAM is mainly responsible for the cold SAT to the south of Lake Baikal. In combination, ENSO and the TSAM jointly explain a large proportion of the CEPVFI variance, and they influence a major part of the variation of SAT associated with the CEPVF during boreal spring.

Fig. 12 Schematic showing the CEPVF influence on SAT over the mid–high latitudes of Eurasia during boreal spring. a PV (shading) and horizontal PV flux (vectors) at 200 hPa, b cross section of PV (shading) and potential temperature (contours), and c SST over oceans and SAT over land. Yellow vectors indicate the zonal circulation over the equator.
d Logic diagram summarizing the findings of this study. The upper row with red boxes indicates the physical nodes in this study, and the lower row with blue boxes indicates the physical basis linking these nodes.

Simplistically, with reference to Fig. 12d, we argue that once a heating forcing (e.g., ENSO) appears over the tropics (A), the zonal circulation along the equator will change (B). This process is a manifestation of the consensus that anomalous heating can induce an anomalous zonal circulation (e.g., Bjerknes 1969). According to Eq. (9), the anomalous zonal circulation (B) will induce anomalous CEPVF (C). Because PV is equal to the convergence of PVC (Eq. 6b), changes in CEPVF (C) at the equatorial boundary of the NH will stimulate adjustment of the PVC within the NH, leading to a change in PV over the NH (D). Finally, through the PV-θ mechanism (Hoskins et al. 1985, 2003) and owing to the surface feedback associated with surface friction and diabatic cooling, anomalous cold SAT over the mid–high latitudes of Eurasia (E) is retained. On the basis of the results of this study, it may be speculated that the physical nature of the equilibrium linkage between CEPVF and extratropical SAT is mainly a manifestation of the response of extratropical SAT to tropical forcing that induces anomalous CEPVF. In this linkage, the CEPVF is much like a monitor through which the impacts of various tropical signals (including but not limited to ENSO and the TSAM) on extratropical SAT can be comprehensively detected. In this process, the PVC is like a bridge through which tropical forcing communicates with extratropical SAT. There are numerous frameworks through which to view the general circulation; however, the necessity to understand the behavior of the atmosphere demands that we seek an increasing array of diagnostic tools (Hoskins 1991). This study sheds new light on the connection between the tropical and mid–high-latitude circulations.
Owing to the existence of the easterly wind in the tropics, it is difficult to explain how tropical signals could cross the easterly wind belt and affect the extratropical circulation within the framework of Rossby waves. The perspective of PVC provides a relatively direct and simple connection between the tropics and the mid–high latitudes. Although the tropical forcing factors that could drive the variability of zonal circulation associated with CEPVF anomalies are physically defensible (e.g., Bjerknes 1969) and statistically significant, numerical sensitivity experiments should be conducted in the future to further explore the detailed roles of ENSO and the TSAM in driving the zonal circulation associated with CEPVF anomalies. In addition to boreal spring, the climatological effects related to CEPVF in different seasons should be studied carefully to establish the connection between CEPVF and certain well-known extratropical forcings (e.g., the Arctic Oscillation, Arctic Sea ice, and the South Asian high) and further deepen our insight into climate dynamics.

The MERRA2 reanalysis dataset is available at https://disc.gsfc.nasa.gov/datasets?project=MERRA-2. The COBE SST dataset is available at https://psl.noaa.gov/data/gridded/data.cobe.html.

A Correction to this paper has been published: https://doi.org/10.1007/s00382-022-06504-w

Bjerknes J (1969) Atmospheric teleconnections from equatorial Pacific. Mon Weather Rev 97:163–172. https://doi.org/10.1175/1520-0493(1969)097%3c0163:atftep%3e2.3.co;2 Bretherton CS, Schӓr C (1993) Flux of potential vorticity substance—a simple derivation and a uniqueness property. J Atmos Sci 50:1834–1836. https://doi.org/10.1175/1520-0469(1993)050%3c1834:fopvsa%3e2.0.co;2 Chang P, Fang Y, Saravanan R, Ji L, Seidel H (2006) The cause of the fragile relationship between the pacific El Nino and the Atlantic Nino. Nature 443:324–328.
https://doi.org/10.1038/nature05053 Chen S, Wu R (2018) Impacts of early autumn arctic sea ice concentration on subsequent spring Eurasian surface air temperature variations. Clim Dyn 51:2523–2542. https://doi.org/10.1007/s00382-017-4026-x Chen S, Wu R, Liu Y (2016) Dominant modes of interannual variability in Eurasian surface air temperature during boreal spring. J Clim 29:1109–1125. https://doi.org/10.1175/jcli-d-15-0524.1 Chen SF, Wu RG, Chen W (2019) Projections of climate changes over mid–high latitudes of Eurasia during boreal spring: uncertainty due to internal variability. Clim Dyn 53:6309–6327. https://doi.org/10.1007/s00382-019-04929-4 D'Arrigo R, Wilson R, Li J (2006) Increased Eurasian-tropical temperature amplitude difference in recent centuries: implications for the Asian monsoon. Geophys Res Lett. https://doi.org/10.1029/2006gl027507 Haynes PH, McIntyre ME (1987) On the evolution of vorticity and potential vorticity in the presence of diabatic heating and frictional or other forces. J Atmos Sci 44:828–841. https://doi.org/10.1175/1520-0469(1987)044%3c0828:oteova%3e2.0.co;2 Haynes PH, McIntyre ME (1990) On the conservation and impermeability theorems for potential vorticity. J Atmos Sci 47:2021–2031. https://doi.org/10.1175/1520-0469(1990)047%3c2021:otcait%3e2.0.co;2 Hoskins BJ (1991) Towards a PV-θ view of the general-circulation. Tellus Ser A Dyn Meteorol Oceanogr 43:27–35. https://doi.org/10.1034/j.1600-0870.1991.t01-3-00005.x Hoskins B (2015) Potential vorticity and the PV perspective. Adv Atmos Sci 32:2–9. https://doi.org/10.1007/s00376-014-0007-8 Hoskins BJ, Yang GY (2021) The detailed dynamics of the Hadley cell Part II: December–February. J Clim 34:805–823. https://doi.org/10.1175/jcli-d-20-0504.1 Hoskins BJ, McIntyre ME, Robertson AW (1985) On the use and significance of isentropic potential vorticity maps. Q J R Meteorol Soc 111:877–946. 
https://doi.org/10.1256/smsqj.47001 Hoskins B, Pedder M, Jones DW (2003) The omega equation and potential vorticity. Q J R Meteorol Soc 129:3277–3303. https://doi.org/10.1256/qj.02.135 Hoskins BJ, Yang GY, Fonseca RM (2020) The detailed dynamics of the June–August Hadley cell. Q J R Meteorol Soc 146:557–575. https://doi.org/10.1002/qj.3702 Huang BH, Schopf PS, Shukla J (2004) Intrinsic ocean-atmosphere variability of the tropical Atlantic Ocean. J Clim 17:2058–2077. https://doi.org/10.1175/1520-0442(2004)017%3c2058:iovott%3e2.0.co;2 Ishii M, Shouji A, Sugimoto S, Matsumoto T (2005) Objective analyses of sea-surface temperature and marine meteorological variables for the 20th century using ICOADS and the KOBE collection. Int J Climatol 25:865–879. https://doi.org/10.1002/joc.1169 Kucharski F, Bracco A, Yoo JH, Molteni F (2007) Low-frequency variability of the Indian monsoon–ENSO relationship and the tropical Atlantic: the "weakening" of the 1980s and 1990s. J Clim 20:4255–4266. https://doi.org/10.1175/jcli4254.1 Labat D, Godderis Y, Probst JL, Guyot JL (2004) Evidence for global runoff increase related to climate warming. Adv Water Resour 27:631–642. https://doi.org/10.1016/j.advwatres.2004.02.020 Liu XD, Yanai M (2001) Relationship between the Indian monsoon rainfall and the tropospheric temperature over the Eurasian continent. Q J R Meteorol Soc 127:909–937. https://doi.org/10.1002/qj.49712757311 Liu YM, Hoskins B, Blackburn M (2007) Impact of Tibetan orography and heating on the summer flow over Asia. J Meteorol Soc Jpn 85B:1–19. https://doi.org/10.2151/jmsj.85B.1 Lucchesi R (2012) File specification for merra products. GMAO office note no. 1 (version 2.3). https://gmao.Gsfc.Nasa.Gov/pubs/docs/lucchesi528.Pdf. Accessed 18 Apr 2019 Luo D, Chen X, Dai A, Simmonds L (2018a) Changes in atmospheric blocking circulations linked with winter arctic warming: a new perspective. J Clim 31:7661–7678. 
https://doi.org/10.1175/jcli-d-18-0040.1 Luo D, Chen X, Feldstein SB (2018b) Linear and nonlinear dynamics of north Atlantic oscillations: a new thinking of symmetry breaking. J Atmos Sci 75:1955–1977. https://doi.org/10.1175/jas-d-17-0274.1 Ma TT, Wu GX, Liu YM, Jiang ZH, Yu JH (2019) Impact of surface potential vorticity density forcing over the Tibetan Plateau on the south china extreme precipitation in January 2008. Part I: data analysis. J Meteorol Res 33:400–415. https://doi.org/10.1007/s13351-019-8604-1 Ma TT, Wu G, Liu Y, Mao J (2022) Abnormal warm sea-surface temperature in the Indian Ocean, active potential vorticity over the Tibetan Plateau, and severe flooding along the Yangtze River in summer 2020. Q J R Meteorol Soc. https://doi.org/10.1002/qj.4243 Ogi M, Tachibana Y, Yamazaki K (2003) Impact of the wintertime north Atlantic Oscillation (NAO) on the summertime atmospheric circulation. Geophys Res Lett. https://doi.org/10.1029/2003gl017280 Ortega S, Webster PJ, Toma V, Chang HR (2018) The effect of potential vorticity fluxes on the circulation of the tropical upper troposphere. Q J R Meteorol Soc 144:848–860. https://doi.org/10.1002/qj.3261 Rienecker MM et al (2011) MERRA: NASA's modern-era retrospective analysis for research and applications. J Clim 24:3624–3648. https://doi.org/10.1175/jcli-d-11-00015.1 Schneider T, Held IM, Garner ST (2003) Boundary effects in potential vorticity dynamics. J Atmos Sci 60:1024–1040. https://doi.org/10.1175/1520-0469(2003)60%3c1024:beipvd%3e2.0.co;2 Sheng C et al (2021) Characteristics of the potential vorticity and its budget in the surface layer over the Tibetan Plateau. Int J Climatol 41:439–455. https://doi.org/10.1002/joc.6629 Sheng C, He B, Wu GX, Liu YM, Zhang SY (2022) Interannual influences of the surface potential vorticity forcing over the Tibetan Plateau on east Asian summer rainfall. Adv Atmos Sci. 
https://doi.org/10.1007/s00376-021-1218-4 Thorpe AJ (1985) Diagnosis of balanced vortex structure using potential vorticity. J Atmos Sci 42:397–406. https://doi.org/10.1175/1520-0469(1985)042%3c0397:dobvsu%3e2.0.co;2 Wang X, Piao S, Ciais P, Li J, Friedlingstein P, Koven C, Chen A (2011) Spring temperature change and its implication in the change of vegetation growth in North America from 1982 to 2006. Proc Natl Acad Sci USA 108:1240–1245. https://doi.org/10.1073/pnas.1014425108 Wilcox LJ, Hoskins BJ, Shine KP (2012) A global blended tropopause based on era data. Part I: climatology. Q J R Meteorol Soc 138:561–575. https://doi.org/10.1002/qj.951 Wu GX, Cai YP (1997) Vertical wind shear and down-sliding slantwise vorticity development. Chin J Atmos Sci 21:273–282 (in Chinese) Wu GX, Meng W (1998) Gearing between the indo-monsoon circulation and the Pacific-Walker circulation and the ENSO. Part I: data analyses. Sci Meteorol Sin 22:470–480 (in Chinese) Wu GX, Cai YP, Tang XJ (1995) Moist potential vorticity and slantwise vorticity development. Acta Meteorol Sin 53:387–405 (in Chinese) Xie Y, Wu G, Liu Y, Huang J (2020) Eurasian cooling linked with arctic warming: insights from PV dynamics. J Clim 33:2627–2644. https://doi.org/10.1175/jcli-d-19-0073.1 Zar JH (2010) Biostatistical analysis. Q R Biol 18:797–799 Zhang Y, Huang F, Gong XQ (2008) The characteristics of the air mass exchange between the northern and southern hemisphere. J Trop Meteorol 24:74–80 (in Chinese) Zhang G, Mao J, Liu Y, Wu G (2021a) PV perspective of impacts on downstream extreme rainfall event of a Tibetan Plateau vortex collaborating with a southwest china vortex. Adv Atmos Sci 38:1835–1851. https://doi.org/10.1007/s00376-021-1027-9 Zhang G, Mao J, Wu G, Liu Y (2021b) Impact of potential vorticity anomalies around the eastern Tibetan Plateau on quasi-biweekly oscillations of summer rainfall within and south of the Yangtze Basin in 2016. Clim Dyn 56:813–835. 
https://doi.org/10.1007/s00382-020-05505-x Zhao L, Ding YH (2009) Potential vorticity analysis of cold air activities during the east Asian summer monsoon. Chin J Atmos Sci 33:359–374 Zhao X, Lu R (2020) Vertical structure of interannual variability in cross-equatorial flows over the maritime continent and Indian Ocean in boreal summer. Adv Atmos Sci 37:173–186. https://doi.org/10.1007/s00376-019-9103-0

This work was supported financially by the National Natural Science Foundation of China (41730963, 91937302), the Key Research Program of Frontier Sciences of the Chinese Academy of Sciences (QYZDY-SSW-DQC018), and the Guangdong Major Project of Basic and Applied Basic Research (2020B0301030004). We thank the reviewers for their constructive and valuable suggestions and comments, which helped us to substantially improve and strengthen the paper.

State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, 100029, China
Chen Sheng, Guoxiong Wu, Bian He, Yimin Liu & Tingting Ma

College of Earth and Planetary Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
Chen Sheng, Guoxiong Wu, Bian He & Yimin Liu

Correspondence to Guoxiong Wu or Bian He.

The original online version of this article was revised: "Two sentences which are not scientifically rigorous have been removed from the original article."

Sheng, C., Wu, G., He, B. et al. Linkage between cross-equatorial potential vorticity flux and surface air temperature over the mid–high latitudes of Eurasia during boreal spring.
Clim Dyn 59, 3247–3263 (2022). https://doi.org/10.1007/s00382-022-06259-4

Accepted: 10 March 2022
Issue Date: December 2022

Keywords: Potential vorticity (PV); Cross-equatorial PV flux (CEPVF); El Niño–Southern Oscillation (ENSO); Surface air temperature (SAT); Tropical South Atlantic
Izv. Akad. Nauk SSSR Ser. Mat., 1971, Volume 35, Issue 3, Pages 530–572

A Torelli theorem for algebraic surfaces of type $K3$
I. I. Pyatetskii-Shapiro, I. R. Shafarevich

Abstract: In this paper it is proved that an algebraic surface of type $K3$ is uniquely determined by prescribing the integrals of its holomorphic differential forms with respect to a basis of cycles of the two-dimensional homology group, if the homology class of a hyperplane section is distinguished.

English translation: Mathematics of the USSR-Izvestiya, 1971, 5:3, 547–588

UDC: 513.6
MSC: Primary 14C30, 14D20, 14J10; Secondary 10B10, 14G99

Citation: I. I. Pyatetskii-Shapiro, I. R. Shafarevich, "A Torelli theorem for algebraic surfaces of type $K3$", Izv. Akad. Nauk SSSR Ser. Mat., 35:3 (1971), 530–572; Math. USSR-Izv., 5:3 (1971), 547–588
Giving a value (meaning) to mathematical expressions (symbols, formulas, etc.). In mathematics such values are mathematical objects (sets, operations, expressions, etc.). The value itself is called an interpretation of the corresponding expression.

Examples. The value (or interpretation) of the symbol $\cdot$ can be the multiplication operation on real numbers, the addition operation on integers, etc. Suppose that the first of these interpretations is used for $\cdot$. If the symbols $x$ and $y$ denote real numbers (i.e. variables whose possible domain of definition is the entire real axis), then the value of the expression $x\cdot y$ is the mapping transforming each pair of real numbers into their product; if the values of $x$, $y$ are, respectively, $6$ and $2.5$, then the value of the expression $x\cdot y$ is the number $15$. As the value (interpretation) of a statement of planar Lobachevskii geometry in the Poincaré model, the corresponding statement of planar Euclidean geometry may serve.

The most important interpretations are set-theoretical interpretations of expressions of logical languages. If one discusses the simultaneous interpretations of all expressions of a language, then one has an interpretation of the language. A set-theoretical interpretation of a logical language includes a specification of the values of constants — of the object, function and predicate constants and constants of higher degrees (constants for predicates of predicates, etc.), as well as a specification of the domains of applicability of the variables — of the object, function, etc., variables. In multi-sort interpretations, different object variables may have different domains of applicability; the same applies to function variables, etc. The interpretations that are most often used, however, are those for which all object variables, as well as all function variables with identical numbers of arguments, etc., have the same domain of applicability.
If the domain of variation of the object variables (sometimes called the domain, or support, of the interpretation) is a set $D_0$, then the domain of variation of $n$-place function variables is a set $D_n$ of $n$-place operations on $D_0$. Often $D_n$ is taken to be the set of all $n$-place operations on $D_0$; in this case the domain of variation of the function variables is often not mentioned. Values of object constants are elements of $D_0$, those of function constants are elements of $D_1,D_2,\ldots$.

In a set-theoretical interpretation of a logical language, the interpretation of a term (i.e. the value of the term in the given interpretation) is the mapping assigning to each choice of values of variables of the language (or, in a slightly different definition, to each choice of values of the variables participating in the term) an element of the domain of the interpretation, by a definite rule. This mapping is usually given by induction on the structure of terms.

In order to obtain an interpretation of the formulas of a language, it is necessary, apart from the components mentioned above, to specify some non-empty set $A$, called the set of logical values. The interpretations of $n$-place predicate constants are mappings from $D_0^n$ into $A$; in particular, zero-place predicate constants are elements of $A$. If there are zero-place, one-place, etc. predicate variables in the language, then their domains of variation are, respectively, the set $A$, some subset of $A^{D_0}$ containing the interpretations of all one-place predicate constants, etc. An interpretation of a formula is defined, analogously to that of a term, as a mapping assigning to each choice of values of object, function and predicate variables of the language an element of $A$.
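The inductive definition of the interpretation of a term can be made concrete with a small sketch. The Python encoding below (the term representation, `ops`, and `assignment`) is invented for illustration and is not part of the article:

```python
# Toy sketch (names and encoding invented for illustration): a term is
# either an object variable (a string) or a pair (function constant, args).

def interpret_term(term, ops, assignment):
    """Value of `term` under the interpretation `ops` of the function
    constants and the given choice of values for the variables."""
    if isinstance(term, str):          # an object variable
        return assignment[term]
    symbol, args = term                # a function constant applied to subterms
    return ops[symbol](*(interpret_term(a, ops, assignment) for a in args))

# Interpret the symbol '*' as the multiplication operation on real numbers.
ops = {'*': lambda a, b: a * b}
term = ('*', ('x', 'y'))               # the term  x * y
print(interpret_term(term, ops, {'x': 6, 'y': 2.5}))    # 15.0
```

With a different interpretation of `'*'` (say, addition on the integers) the same term takes a different value, which is exactly the point of the article's opening example.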
An important kind of set-theoretical interpretation is the algebraic interpretation, in which operations on $A$ are taken as values (interpretations) of logical connectives, mappings from the set of subsets of $A$ into $A$ (generalized operations on $A$) as values of quantifiers, and where the interpretation of a formula is defined by induction with respect to its structure. Kripke models are the most important among the other set-theoretical interpretations.

The Boolean-valued algebraic interpretations are characterized by the fact that the set $A$ is a complete Boolean algebra, while the values of connectives and quantifiers are: for the conjunction — intersection; for the existential quantifier — taking the least upper bound, etc. Classical interpretations play an especially important role. They are defined as Boolean-valued interpretations with a two-element Boolean algebra $A$.

The concept of truth of formulas in a given interpretation is defined by distinguishing certain elements in $A$. For example, for classical interpretations it is natural to take the unit of the Boolean algebra as the distinguished element (the unit is also called "truth"). A formula is called true in a given interpretation if its interpretation takes only distinguished values. A model (or regular interpretation, or simply an interpretation, cf. Model (in logic)) of a system of formulas of a certain language is an interpretation of this language in which all formulas of the system are true.

The term standard interpretation is used when among all possible values (interpretations) of a certain expression there is a generally accepted one. For example, the standard interpretation of the symbol $=$ in a classical interpretation is the coincidence of elements, and the standard interpretations of $+$ and $\cdot$ in arithmetic are addition and multiplication of natural numbers. Analogously, one introduces the concepts of the standard interpretation of a language and a standard model.
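To make the classical two-valued case concrete, here is a hedged sketch with an invented formula encoding (all names are hypothetical): conjunction is interpreted as Boolean `and`, and the existential quantifier as a least upper bound (`any`) over the domain, in line with the Boolean-valued prescription above.

```python
# Invented toy encoding of formulas:
#   ('pred', name, variables)   predicate constant applied to variables
#   ('and', f, g)               conjunction
#   ('exists', var, f)          existential quantification

def interpret_formula(f, domain, preds, assignment):
    kind = f[0]
    if kind == 'pred':                   # mapping from D_0^n into A = {True, False}
        _, name, variables = f
        return preds[name](*(assignment[v] for v in variables))
    if kind == 'and':                    # conjunction as intersection (Boolean and)
        return (interpret_formula(f[1], domain, preds, assignment)
                and interpret_formula(f[2], domain, preds, assignment))
    if kind == 'exists':                 # quantifier as least upper bound over D_0
        _, var, body = f
        return any(interpret_formula(body, domain, preds, {**assignment, var: d})
                   for d in domain)
    raise ValueError(kind)

# "There exists x with x > y", over the domain {0, 1, 2}, with y = 1.
preds = {'>': lambda a, b: a > b}
phi = ('exists', 'x', ('pred', '>', ('x', 'y')))
print(interpret_formula(phi, {0, 1, 2}, preds, {'y': 1}))   # True
```

Replacing `{True, False}` by a larger complete Boolean algebra, with `any` replaced by the algebra's least upper bound, gives the Boolean-valued case.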
In particular, the classical interpretation of first-order arithmetic with predicate constant $=$ and function constants $+$ and $\cdot$, interpreted as above, is called standard.

Apart from set-theoretical interpretations of logical languages one uses others too. E.g., interpretations in which expressions of one logical language are interpreted as expressions in another logical language (see Immersion operation) are used in proving decidability, undecidability and relative consistency of logical theories. See also Constructive logic.

[1] E. Rasiowa, R. Sikorski, "The mathematics of metamathematics", Polska Akad. Nauk (1963)
[2] A. Church, "Introduction to mathematical logic", 1, Princeton Univ. Press (1956)
[3] E. Mendelson, "Introduction to mathematical logic", Van Nostrand (1964)

Besides Kripke models, Boolean-valued models (cf. Boolean-valued model) are important interpretations. Both can be viewed as special cases of sheaf models, or interpretations in topoi (cf. Topos). See, e.g., [a1].

[a1] J. Lambek, P. Scott, "Higher order categorical logic", Cambridge Univ. Press (1986)

Interpretation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Interpretation&oldid=32931
This article was adapted from an original article by A.L. Semenov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
The effect of projections on fractal sets and measures in Banach spaces

OTT W, HUNT B, Kaloshin V. 2006. The effect of projections on fractal sets and measures in Banach spaces. Ergodic Theory and Dynamical Systems. 26(3), 869–891. doi:10.1017/s0143385705000714

Ott, William; Hunt, Brian; Kaloshin, Vadim (ISTA)

We study the extent to which the Hausdorff dimension of a compact subset of an infinite-dimensional Banach space is affected by a typical mapping into a finite-dimensional space. It is possible that the dimension drops under all such mappings, but the amount by which it typically drops is controlled by the 'thickness exponent' of the set, which was defined by Hunt and Kaloshin (Nonlinearity 12 (1999), 1263–1275). More precisely, let $X$ be a compact subset of a Banach space $B$ with thickness exponent $\tau$ and Hausdorff dimension $d$. Let $M$ be any subspace of the (locally) Lipschitz functions from $B$ to $\mathbb{R}^{m}$ that contains the space of bounded linear functions. We prove that for almost every (in the sense of prevalence) function $f \in M$, the Hausdorff dimension of $f(X)$ is at least $\min\{ m, d / (1 + \tau) \}$. We also prove an analogous result for a certain part of the dimension spectra of Borel probability measures supported on $X$. The factor $1 / (1 + \tau)$ can be improved to $1 / (1 + \tau / 2)$ if $B$ is a Hilbert space. Since dimension cannot increase under a (locally) Lipschitz function, these theorems become dimension preservation results when $\tau = 0$. We conjecture that many of the attractors associated with the evolution equations of mathematical physics have thickness exponent zero. We also discuss the sharpness of our results in the case $\tau > 0$.

Ergodic Theory and Dynamical Systems
How does the vectorization map relate to the Choi and Kraus representations of a channel?

I know that the Choi operator is a useful tool to construct the Kraus representation of a given map, and that the vectorization map plays an important role in such a construction. How exactly does the vectorization map work in this context, and how does it relate the Choi and Kraus representations of a given map?

quantum-operation kraus-representation

Tobias Fritz

One way to understand the relationship between the Choi representation of a channel and its possible Kraus representations is to use the vectorization map. Suppose that we have two finite-dimensional Hilbert spaces $\mathcal{X}$ and $\mathcal{Y}$, and that we have fixed a standard basis $\{|1\rangle,\ldots,|n\rangle\}$ of $\mathcal{X}$ and a standard basis $\{|1\rangle,\ldots,|m\rangle\}$ of $\mathcal{Y}$. The vectorization map is the linear map of the form $$ \text{vec}:\text{L}(\mathcal{X},\mathcal{Y})\rightarrow\mathcal{Y}\otimes\mathcal{X}, $$ where $\text{L}(\mathcal{X},\mathcal{Y})$ is the space of all linear maps from $\mathcal{X}$ to $\mathcal{Y}$, that is defined by the following action on standard basis elements: $$ \text{vec}\bigl( | j\rangle \langle k |\bigr) = |j\rangle |k\rangle. $$ For other elements of $\text{L}(\mathcal{X},\mathcal{Y})$ the action of the mapping is determined by linearity.

If we view elements of $\text{L}(\mathcal{X},\mathcal{Y})$ as $m\times n$ matrices that act by left-multiplication on column vectors of dimension $n$, the action of the vectorization mapping is to take the rows of the matrix, turn them into column vectors, and then stack them on top of one another like this (in the case $n=m=2$): $$ \text{vec}\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha\\ \beta\\ \gamma\\ \delta \end{pmatrix}. 
$$

Now the relationship between all of the possible Kraus representations and the Choi representation of a channel $\Phi$ having the form $\Phi:\text{L}(\mathcal{X}) \rightarrow \text{L}(\mathcal{Y})$ can be easily stated: for any choice of a positive integer $N$ and operators $A_1,\ldots,A_N\in\text{L}(\mathcal{X},\mathcal{Y})$, it is the case that $$ \Phi(X) = \sum_{k=1}^N A_k X A_k^{\dagger} $$ is a Kraus representation of $\Phi$ if and only if the Choi representation of $\Phi$ satisfies the equality $$ J(\Phi) = \sum_{k=1}^N \text{vec}(A_k) \text{vec}(A_k)^{\dagger}. $$

This relationship allows you to convert between the two representations pretty easily. To get the Choi representation from a Kraus representation, it is just a matter of computing. To go from a Choi representation to a Kraus representation, you can use the spectral theorem to write $$ J(\Phi) = \sum_{k=1}^r v_k v_k^{\dagger} $$ for vectors $v_1,\ldots,v_r\in\mathcal{Y}\otimes\mathcal{X}$, where $r=\text{rank}(J(\Phi))$, then take your Kraus operators to be the operators $A_1,\ldots,A_r\in\text{L}(\mathcal{X},\mathcal{Y})$ that satisfy $\text{vec}(A_k) = v_k$ for each $k\in\{1,\ldots,r\}$. This is why $\text{rank}(J(\Phi))$ is always the minimum possible number of Kraus operators you need to specify a given channel.

One warning about this relationship: I am assuming that the Choi representation of a channel $\Phi$ of the form described above is defined as $$ J(\Phi) = \sum_{1\leq j,k \leq n} \Phi\bigl(|j\rangle \langle k|\bigr) \otimes |j\rangle \langle k|. $$ Some people choose to define the Choi representation with the tensor factors in the reverse order, like this: $$ J'(\Phi) = \sum_{1\leq j,k \leq n} |j\rangle \langle k| \otimes \Phi\bigl(|j\rangle \langle k|\bigr). $$ If you do that, you can still get a similar relationship, but you will have to define your vectorization mapping in a different way, as $\text{vec}'(|j\rangle\langle k|) = |k\rangle|j\rangle$. 
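The conversion recipe above can be sketched numerically. This is a hedged NumPy illustration, not from the answer: the row-stacking $\text{vec}$ is implemented as a row-major reshape, and the channel, its two Kraus operators, and the dimensions are invented for the example. It builds $J(\Phi)$ from Kraus operators, checks it against the definition $J(\Phi) = \sum_{jk} \Phi(|j\rangle\langle k|) \otimes |j\rangle\langle k|$, and then recovers Kraus operators from $J(\Phi)$ by spectral decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 3                      # dim of the input space X and output space Y

def vec(A):
    # Row-stacking vectorization: vec(|j><k|) = |j>|k>, i.e. row-major flatten.
    return A.reshape(-1, 1)

# An example channel with two Kraus operators A_1, A_2 satisfying
# sum_k A_k^dagger A_k = I (obtained from a random isometry via QR).
V = rng.standard_normal((2 * m, n)) + 1j * rng.standard_normal((2 * m, n))
V, _ = np.linalg.qr(V)           # V has orthonormal columns: V^dagger V = I_n
kraus = [V[:m], V[m:]]

def apply_channel(kraus_ops, X):
    return sum(A @ X @ A.conj().T for A in kraus_ops)

# Choi representation from the Kraus operators: J = sum_k vec(A_k) vec(A_k)^dagger.
J = sum(vec(A) @ vec(A).conj().T for A in kraus)

# The same matrix from the definition J = sum_{jk} Phi(E_jk) kron E_jk.
E = np.eye(n)
J_def = sum(np.kron(apply_channel(kraus, np.outer(E[j], E[k])),
                    np.outer(E[j], E[k]))
            for j in range(n) for k in range(n))
assert np.allclose(J, J_def)

# Recover Kraus operators from J by spectral decomposition:
# v_k = sqrt(lambda_k) u_k, then A_k is the matrix with vec(A_k) = v_k.
w, U = np.linalg.eigh(J)
recovered = [np.sqrt(lam) * U[:, i].reshape(m, n)
             for i, lam in enumerate(w) if lam > 1e-12]

# Both Kraus sets implement the same channel.
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert np.allclose(apply_channel(kraus, X), apply_channel(recovered, X))
```

For the reversed convention $J'(\Phi)$, the corresponding $\text{vec}'$ would be a column-major flatten (`A.reshape(-1, 1, order='F')`) instead of the row-major one used here.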
(You will find the vectorization map defined in these two different ways by different authors, so you just need to be aware of which definition is being used whenever you see this mapping.)

Concerning references, you can find this relationship in Choi's 1975 paper:

Man-Duen Choi. Completely positive linear maps on complex matrices. Linear Algebra and Its Applications 10: 285-290, 1975.

Choi's proof of this relationship is terse, but certainly comprehensible. You can also find more detail in Section 4.4 of Mark Wilde's book:

Mark Wilde. Quantum Information Theory, second edition. Cambridge University Press, 2017.

Assuming you'll forgive me for a self-citation, I also cover this in Section 2.2.2 of my book, using a similar notation to the description above:

John Watrous. The Theory of Quantum Information. Cambridge University Press, 2018. (Available at https://cs.uwaterloo.ca/~watrous/TQI/ .)

John Watrous